query_id (string, length 32) | query (string, length 5–5.38k) | positive_passages (list, length 1–23) | negative_passages (list, length 9–100) | subset (string, 7 classes) |
---|---|---|---|---|
0e9bebb749f36ccfc7349c86c70ce298
|
Performance Modeling and Evaluation of Distributed Deep Learning Frameworks on GPUs
|
[
{
"docid": "92008a84a80924ec8c0ad1538da2e893",
"text": "Large-scale deep learning requires huge computational resources to train a multi-layer neural network. Recent systems propose using 100s to 1000s of machines to train networks with tens of layers and billions of connections. While the computation involved can be done more efficiently on GPUs than on more traditional CPU cores, training such networks on a single GPU is too slow and training on distributed GPUs can be inefficient, due to data movement overheads, GPU stalls, and limited GPU memory. This paper describes a new parameter server, called GeePS, that supports scalable deep learning across GPUs distributed among multiple machines, overcoming these obstacles. We show that GeePS enables a state-of-the-art single-node GPU implementation to scale well, such as to 13 times the number of training images processed per second on 16 machines (relative to the original optimized single-node code). Moreover, GeePS achieves a higher training throughput with just four GPU machines than that a state-of-the-art CPU-only system achieves with 108 machines.",
"title": ""
}
] |
[
{
"docid": "1498977b6e68df3eeca6e25c550a5edd",
"text": "The Raven's Progressive Matrices (RPM) test is a commonly used test of intelligence. The literature suggests a variety of problem-solving methods for addressing RPM problems. For a graduate-level artificial intelligence class in Fall 2014, we asked students to develop intelligent agents that could address 123 RPM-inspired problems, essentially crowdsourcing RPM problem solving. The students in the class submitted 224 agents that used a wide variety of problem-solving methods. In this paper, we first report on the aggregate results of those 224 agents on the 123 problems, then focus specifically on four of the most creative, novel, and effective agents in the class. We find that the four agents, using four very different problem-solving methods, were all able to achieve significant success. This suggests the RPM test may be amenable to a wider range of problem-solving methods than previously reported. It also suggests that human computation might be an effective strategy for collecting a wide variety of methods for creative tasks.",
"title": ""
},
{
"docid": "fb4d8685bd880f44b489d7d13f5f36ed",
"text": "With the advancement in digitalization vast amount of Image data is uploaded and used via Internet in today’s world. With this revolution in uses of multimedia data, key problem in the area of Image processing, Computer vision and big data analytics is how to analyze, effectively process and extract useful information from such data. Traditional tactics to process such a data are extremely time and resource intensive. Studies recommend that parallel and distributed computing techniques have much more potential to process such data in efficient manner. To process such a complex task in efficient manner advancement in GPU based processing is also a candidate solution. This paper we introduce Hadoop-Mapreduce (Distributed system) and CUDA (Parallel system) based image processing. In our experiment using satellite images of different dimension we had compared performance or execution speed of canny edge detection algorithm. Performance is compared for CPU and GPU based Time Complexity.",
"title": ""
},
{
"docid": "09e9b51bdd42ec5fae7d332ce7543053",
"text": "This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined: unlabeled layouts that contain multiple groups of items but no group headings, labeled layouts in which items are grouped and each group has a useful heading, and a target-only layout that contains just one item. A number of plausible strategies were proposed for each layout. Each strategy was programmed into the EPIC cognitive architecture, producing models that simulate the human visual-perceptual, oculomotor, and cognitive processing required for the task. The models generate search time predictions. For unlabeled layouts, the mean layout search times are predicted by a purely random search strategy, and the more detailed positional search times are predicted by a noisy systematic strategy. The labeled layout search times are predicted by a hierarchical strategy in which first the group labels are systematically searched, and then the contents of the target group. The target-only layout search times are predicted by a strategy in which the eyes move directly to the sudden appearance of the target. The models demonstrate that human visual search performance can be explained largely in terms of the cognitive strategy HUMAN–COMPUTER INTERACTION, 2004, Volume 19, pp. 183–223 Copyright © 2004, Lawrence Erlbaum Associates, Inc. Anthony Hornof is a computer scientist with interests in human–computer interaction, cognitive modeling, visual search, and eye tracking; he is an Assistant Professor in the Department of Computer and Information Science at the University of Oregon. that is used to coordinate the relevant perceptual and motor processes, a clear and useful visual hierarchy triggers a fundamentally different visual search strategy and effectively gives the user greater control over the visual navigation, and cognitive strategies will be an important component of a predictive visual search tool. The models provide insights pertaining to the visual-perceptual and oculomotor processes involved in visual search and contribute to the science base needed for predictive interface analysis. 184 HORNOF",
"title": ""
},
{
"docid": "acd93c6b041a975dcf52c7bafaf05b16",
"text": "Patients with carcinoma of the tongue including the base of the tongue who underwent total glossectomy in a period of just over ten years since January 1979 have been reviewed. Total glossectomy may be indicated as salvage surgery or as a primary procedure. The larynx may be preserved or may have to be sacrificed depending upon the site of the lesion. When the larynx is preserved the use of laryngeal suspension facilitates early rehabilitation and preserves the quality of life to a large extent. Cricopharyngeal myotomy seems unnecessary.",
"title": ""
},
{
"docid": "8bcc223389b7cc2ce2ef4e872a029489",
"text": "Issues concerning agriculture, countryside and farmers have been always hindering China’s development. The only solution to these three problems is agricultural modernization. However, China's agriculture is far from modernized. The introduction of cloud computing and internet of things into agricultural modernization will probably solve the problem. Based on major features of cloud computing and key techniques of internet of things, cloud computing, visualization and SOA technologies can build massive data involved in agricultural production. Internet of things and RFID technologies can help build plant factory and realize automatic control production of agriculture. Cloud computing is closely related to internet of things. A perfect combination of them can promote fast development of agricultural modernization, realize smart agriculture and effectively solve the issues concerning agriculture, countryside and farmers.",
"title": ""
},
{
"docid": "9364e07801fc01e50d0598b61ab642aa",
"text": "Online learning represents a family of machine learning methods, where a learner attempts to tackle some predictive (or any type of decision-making) task by learning from a sequence of data instances one by one at each time. The goal of online learning is to maximize the accuracy/correctness for the sequence of predictions/decisions made by the online learner given the knowledge of correct answers to previous prediction/learning tasks and possibly additional information. This is in contrast to traditional batch or offline machine learning methods that are often designed to learn a model from the entire training data set at once. Online learning has become a promising technique for learning from continuous streams of data in many real-world applications. This survey aims to provide a comprehensive survey of the online machine learning literature through a systematic review of basic ideas and key principles and a proper categorization of different algorithms and techniques. Generally speaking, according to the types of learning tasks and the forms of feedback information, the existing online learning works can be classified into three major categories: (i) online supervised learning where full feedback information is always available, (ii) online learning with limited feedback, and (iii) online unsupervised learning where no feedback is available. Due to space limitation, the survey will be mainly focused on the first category, but also briefly cover some basics of the other two categories. Finally, we also discuss some open issues and attempt to shed light on potential future research directions in this field.",
"title": ""
},
{
"docid": "e442b7944062f6201e779aa1e7d6c247",
"text": "We present pigeo, a Python geolocation prediction tool that predicts a location for a given text input or Twitter user. We discuss the design, implementation and application of pigeo, and empirically evaluate it. pigeo is able to geolocate informal text and is a very useful tool for users who require a free and easy-to-use, yet accurate geolocation service based on pre-trained models. Additionally, users can train their own models easily using pigeo’s API.",
"title": ""
},
{
"docid": "82ca6a400bf287dc287df9fa751ddac2",
"text": "Research on ontology is becoming increasingly widespread in the computer science community, and its importance is being recognized in a multiplicity of research fields and application areas, including knowledge engineering, database design and integration, information retrieval and extraction. We shall use the generic term “information systems”, in its broadest sense, to collectively refer to these application perspectives. We argue in this paper that so-called ontologies present their own methodological and architectural peculiarities: on the methodological side, their main peculiarity is the adoption of a highly interdisciplinary approach, while on the architectural side the most interesting aspect is the centrality of the role they can play in an information system, leading to the perspective of ontology-driven information systems.",
"title": ""
},
{
"docid": "45b1cb6c9393128c9a9dcf9dbeb50778",
"text": "Bitcoin, a distributed, cryptographic, digital currency, gained a lot of media attention for being an anonymous e-cash system. But as all transactions in the network are stored publicly in the blockchain, allowing anyone to inspect and analyze them, the system does not provide real anonymity but pseudonymity. There have already been studies showing the possibility to deanonymize bitcoin users based on the transaction graph and publicly available data. Furthermore, users could be tracked by bitcoin exchanges or shops, where they have to provide personal information that can then be linked to their bitcoin addresses. Special bitcoin mixing services claim to obfuscate the origin of transactions and thereby increase the anonymity of its users. In this paper we evaluate three of these services – Bitcoin Fog, BitLaundry, and the Send Shared functionality of Blockchain.info – by analyzing the transaction graph. While Bitcoin Fog and Blockchain.info successfully mix our transaction, we are able to find a direct relation between the input and output transactions in the graph of BitLaundry.",
"title": ""
},
{
"docid": "30d0453033d3951f5b5faf3213eacb89",
"text": "Semantic mapping is the incremental process of “mapping” relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantic of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization in the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics. Nevertheless, by providing a tool for the construction of a semantic map ground truth, we aim at the contribution of the scientific community in acquiring data for populating the dataset.",
"title": ""
},
{
"docid": "b7959c06c8057418762e12ef2c0ce2ce",
"text": "According to Bayesian theories in psychology and neuroscience, minds and brains are (near) optimal in solving a wide range of tasks. We challenge this view and argue that more traditional, non-Bayesian approaches are more promising. We make 3 main arguments. First, we show that the empirical evidence for Bayesian theories in psychology is weak. This weakness relates to the many arbitrary ways that priors, likelihoods, and utility functions can be altered in order to account for the data that are obtained, making the models unfalsifiable. It further relates to the fact that Bayesian theories are rarely better at predicting data compared with alternative (and simpler) non-Bayesian theories. Second, we show that the empirical evidence for Bayesian theories in neuroscience is weaker still. There are impressive mathematical analyses showing how populations of neurons could compute in a Bayesian manner but little or no evidence that they do. Third, we challenge the general scientific approach that characterizes Bayesian theorizing in cognitive science. A common premise is that theories in psychology should largely be constrained by a rational analysis of what the mind ought to do. We question this claim and argue that many of the important constraints come from biological, evolutionary, and processing (algorithmic) considerations that have no adaptive relevance to the problem per se. In our view, these factors have contributed to the development of many Bayesian \"just so\" stories in psychology and neuroscience; that is, mathematical analyses of cognition that can be used to explain almost any behavior as optimal.",
"title": ""
},
{
"docid": "f012c0d9fe795a738b3cd82cef94ef19",
"text": "Fraud detection is an industry where incremental gains in predictive accuracy can have large benefits for banks and customers. Banks adapt models to the novel ways in which “fraudsters” commit credit card fraud. They collect data and engineer new features in order to increase predictive power. This research compares the algorithmic impact on the predictive power across three supervised classification models: logistic regression, gradient boosted trees, and deep learning. This research also explores the benefits of creating features using domain expertise and feature engineering using an autoencoder—an unsupervised feature engineering method. These two methods of feature engineering combined with the direct mapping of the original variables create six different feature sets. Across these feature sets this research compares the aforementioned models. This research concludes that creating features using domain expertise offers a notable improvement in predictive power. Additionally, the autoencoder offers a way to reduce the dimensionality of the data and slightly boost predictive power.",
"title": ""
},
{
"docid": "a1cd5424dea527e365f038fce60fd821",
"text": "Producing literature reviews of complex evidence for policymaking questions is a challenging methodological area. There are several established and emerging approaches to such reviews, but unanswered questions remain, especially around how to begin to make sense of large data sets drawn from heterogeneous sources. Drawing on Kuhn's notion of scientific paradigms, we developed a new method-meta-narrative review-for sorting and interpreting the 1024 sources identified in our exploratory searches. We took as our initial unit of analysis the unfolding 'storyline' of a research tradition over time. We mapped these storylines by using both electronic and manual tracking to trace the influence of seminal theoretical and empirical work on subsequent research within a tradition. We then drew variously on the different storylines to build up a rich picture of our field of study. We identified 13 key meta-narratives from literatures as disparate as rural sociology, clinical epidemiology, marketing and organisational studies. Researchers in different traditions had conceptualised, explained and investigated diffusion of innovations differently and had used different criteria for judging the quality of empirical work. Moreover, they told very different over-arching stories of the progress of their research. Within each tradition, accounts of research depicted human characters emplotted in a story of (in the early stages) pioneering endeavour and (later) systematic puzzle-solving, variously embellished with scientific dramas, surprises and 'twists in the plot'. By first separating out, and then drawing together, these different meta-narratives, we produced a synthesis that embraced the many complexities and ambiguities of 'diffusion of innovations' in an organisational setting. We were able to make sense of seemingly contradictory data by systematically exposing and exploring tensions between research paradigms as set out in their over-arching storylines. In some traditions, scientific revolutions were identifiable in which breakaway researchers had abandoned the prevailing paradigm and introduced a new set of concepts, theories and empirical methods. We concluded that meta-narrative review adds value to the synthesis of heterogeneous bodies of literature, in which different groups of scientists have conceptualised and investigated the 'same' problem in different ways and produced seemingly contradictory findings. Its contribution to the mixed economy of methods for the systematic review of complex evidence should be explored further.",
"title": ""
},
{
"docid": "007791833b15bd3367c11bb17b7abf82",
"text": "When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.",
"title": ""
},
{
"docid": "f442354c5a99ece9571168648285f763",
"text": "A general closed-form subharmonic stability condition is derived for the buck converter with ripple-based constant on-time control and a feedback filter. The turn-on delay is included in the analysis. Three types of filters are considered: low-pass filter (LPF), phase-boost filter (PBF), and inductor current feedback (ICF) which changes the feedback loop frequency response like a filter. With the LPF, the stability region is reduced. With the PBF or ICF, the stability region is enlarged. Stability conditions are determined both for the case of a single output capacitor and for the case of two parallel-connected output capacitors having widely different time constants. The past research results related to the feedback filters become special cases. All theoretical predictions are verified by experiments.",
"title": ""
},
{
"docid": "3b5b3802d4863a6569071b346b65600d",
"text": "In vector space model (VSM), text representation is the task of transforming the content of a textual document into a vector in the term space so that the document could be recognized and classified by a computer or a classifier. Different terms (i.e. words, phrases, or any other indexing units used to identify the contents of a text) have different importance in a text. The term weighting methods assign appropriate weights to the terms to improve the performance of text categorization. In this study, we investigate several widely-used unsupervised (traditional) and supervised term weighting methods on benchmark data collections in combination with SVM and kNN algorithms. In consideration of the distribution of relevant documents in the collection, we propose a new simple supervised term weighting method, i.e. tf.rf, to improve the terms' discriminating power for text categorization task. From the controlled experimental results, these supervised term weighting methods have mixed performance. Specifically, our proposed supervised term weighting method, tf.rf, has a consistently better performance than other term weighting methods while other supervised term weighting methods based on information theory or statistical metric perform the worst in all experiments. On the other hand, the popularly used tf.idf method has not shown a uniformly good performance in terms of different data sets.",
"title": ""
},
{
"docid": "794bba509b6c609e4f9204d96bf5fe9c",
"text": "Power law distributions are an increasingly common model for computer science applications; for example, they have been used to describe file size distributions and inand out-degree distributions for the Web and Internet graphs. Recently, the similar lognormal distribution has also been suggested as an appropriate alternative model for file size distributions. In this paper, we briefly survey some of the history of these distributions, focusing on work in other fields. We find that several recently proposed models have antecedents in work from decades ago. We also find that lognormal and power law distributions connect quite naturally, and hence it is not surprising that lognormal distributions arise as a possible alternative to power law distributions.",
"title": ""
},
{
"docid": "f74a0c176352b8378d9f27fdf93763c9",
"text": "The future of user interfaces will be dominated by hand gestures. In this paper, we explore an intuitive hand gesture based interaction for smartphones having a limited computational capability. To this end, we present an efficient algorithm for gesture recognition with First Person View (FPV), which focuses on recognizing a four swipe model (Left, Right, Up and Down) for smartphones through single monocular camera vision. This can be used with frugal AR/VR devices such as Google Cardboard1 andWearality2 in building AR/VR based automation systems for large scale deployments, by providing a touch-less interface and real-time performance. We take into account multiple cues including palm color, hand contour segmentation, and motion tracking, which effectively deals with FPV constraints put forward by a wearable. We also provide comparisons of swipe detection with the existing methods under the same limitations. We demonstrate that our method outperforms both in terms of gesture recognition accuracy and computational time.",
"title": ""
},
{
"docid": "6cfdad2bb361713616dd2971026758a7",
"text": "We consider the problem of controlling a system with unknown, stochastic dynamics to achieve a complex, time-sensitive task. An example of this problem is controlling a noisy aerial vehicle with partially known dynamics to visit a pre-specified set of regions in any order while avoiding hazardous areas. In particular, we are interested in tasks which can be described by signal temporal logic (STL) specifications. STL is a rich logic that can be used to describe tasks involving bounds on physical parameters, continuous time bounds, and logical relationships over time and states. STL is equipped with a continuous measure called the robustness degree that measures how strongly a given sample path exhibits an STL property [4, 3]. This measure enables the use of continuous optimization problems to solve learning [7, 6] or formal synthesis problems [9] involving STL.",
"title": ""
},
{
"docid": "3b49747ef98ebcfa515fb10a22f08017",
"text": "This paper reports a qualitative study of thriving older people and illustrates the findings with design fiction. Design research has been criticized as \"solutionist\" i.e. solving problems that don't exist or providing \"quick fixes\" for complex social, political and environmental problems. We respond to this critique by presenting a \"solutionist\" board game used to generate design concepts. Players are given data cards and technology dice, they move around the board by pitching concepts that would support positive aging. We argue that framing concept design as a solutionist game explicitly foregrounds play, irony and the limitations of technological intervention. Three of the game concepts are presented as design fictions in the form of advertisements for products and services that do not exist. The paper argues that design fiction can help create a space for design beyond solutionism.",
"title": ""
}
] |
scidocsrr
|
e09d38267455f5fcd48c41bd948716c1
|
Topic oriented community detection through social objects and link analysis in social networks
|
[
{
"docid": "bb2504b2275a20010c0d5f9050173d40",
"text": "Clustering nodes in a graph is a useful general technique in data mining of large network data sets. In this context, Newman and Girvan [9] recently proposed an objective function for graph clustering called the Q function which allows automatic selection of the number of clusters. Empirically, higher values of the Q function have been shown to correlate well with good graph clusterings. In this paper we show how optimizing the Q function can be reformulated as a spectral relaxation problem and propose two new spectral clustering algorithms that seek to maximize Q. Experimental results indicate that the new algorithms are efficient and effective at finding both good clusterings and the appropriate number of clusters across a variety of real-world graph data sets. In addition, the spectral algorithms are much faster for large sparse graphs, scaling roughly linearly with the number of nodes n in the graph, compared to O(n) for previous clustering algorithms using the Q function.",
"title": ""
}
] |
[
{
"docid": "210395d4f0c4db496546da0be3d2524d",
"text": "Crimes are a social irritation and cost our society deeply in several ways. Any research that can help in solving crimes quickly will pay for itself. About 10% of the criminals commit about 50% of the crimes [9]. The system is trained by feeding previous years record of crimes taken from legitimate online portal of India listing various crimes such as murder, kidnapping and abduction, dacoits, robbery, burglary, rape and other such crimes. As per data of Indian statistics, which gives data of various crime of past 14 years (2001–2014) a regression model is created and the crime rate for the following years in various states can be predicted [8]. We have used supervised, semi-supervised and unsupervised learning technique [4] on the crime records for knowledge discovery and to help in increasing the predictive accuracy of the crime. This work will be helpful to the local police stations in crime suppression.",
"title": ""
},
{
"docid": "7360c92ef44058694135338acad6838c",
"text": "Modern chip multiprocessor (CMP) systems employ multiple memory controllers to control access to main memory. The scheduling algorithm employed by these memory controllers has a significant effect on system throughput, so choosing an efficient scheduling algorithm is important. The scheduling algorithm also needs to be scalable — as the number of cores increases, the number of memory controllers shared by the cores should also increase to provide sufficient bandwidth to feed the cores. Unfortunately, previous memory scheduling algorithms are inefficient with respect to system throughput and/or are designed for a single memory controller and do not scale well to multiple memory controllers, requiring significant finegrained coordination among controllers. This paper proposes ATLAS (Adaptive per-Thread Least-Attained-Service memory scheduling), a fundamentally new memory scheduling technique that improves system throughput without requiring significant coordination among memory controllers. The key idea is to periodically order threads based on the service they have attained from the memory controllers so far, and prioritize those threads that have attained the least service over others in each period. The idea of favoring threads with least-attained-service is borrowed from the queueing theory literature, where, in the context of a single-server queue it is known that least-attained-service optimally schedules jobs, assuming a Pareto (or any decreasing hazard rate) workload distribution. After verifying that our workloads have this characteristic, we show that our implementation of least-attained-service thread prioritization reduces the time the cores spend stalling and significantly improves system throughput. Furthermore, since the periods over which we accumulate the attained service are long, the controllers coordinate very infrequently to form the ordering of threads, thereby making ATLAS scalable to many controllers. We evaluate ATLAS on a wide variety of multiprogrammed SPEC 2006 workloads and systems with 4–32 cores and 1–16 memory controllers, and compare its performance to five previously proposed scheduling algorithms. Averaged over 32 workloads on a 24-core system with 4 controllers, ATLAS improves instruction throughput by 10.8%, and system throughput by 8.4%, compared to PAR-BS, the best previous CMP memory scheduling algorithm. ATLAS's performance benefit increases as the number of cores increases.",
"title": ""
},
{
"docid": "a094fe8de029646a408bbb685824581c",
"text": "Will reading habit influence your life? Many say yes. Reading computational intelligence principles techniques and applications is a good habit; you can develop this habit to be such interesting way. Yeah, reading habit will not only make you have any favourite activity. It will be one of guidance of your life. When reading has become a habit, you will not make it as disturbing activities or as boring activity. You can gain many benefits and importances of reading.",
"title": ""
},
{
"docid": "0b0273a1e2aeb98eb4115113c8957fd2",
"text": "This paper deals with the approach of integrating a bidirectional boost-converter into the drivetrain of a (hybrid) electric vehicle in order to exploit the full potential of the electric drives and the battery. Currently, the automotive norms and standards are defined based on the characteristics of the voltage source. The current technologies of batteries for automotive applications have voltage which depends on the load and the state-of charge. The aim of this paper is to provide better system performance by stabilizing the voltage without the need of redesigning any of the current components in the system. To show the added-value of the proposed electrical topology, loss estimation is developed and proved based on actual components measurements and design. The component and its modelling is then implemented in a global system simulation environment of the electric architecture to show how it contributes enhancing the performance of the system.",
"title": ""
},
{
"docid": "d1c33990b7642ea51a8a568fa348d286",
"text": "Connectionist temporal classification CTC has recently shown improved performance and efficiency in automatic speech recognition. One popular decoding implementation is to use a CTC model to predict the phone posteriors at each frame and then perform Viterbi beam search on a modified WFST network. This is still within the traditional frame synchronous decoding framework. In this paper, the peaky posterior property of CTC is carefully investigated and it is found that ignoring blank frames will not introduce additional search errors. Based on this phenomenon, a novel phone synchronous decoding framework is proposed by removing tremendous search redundancy due to blank frames, which results in significant search speed up. The framework naturally leads to an extremely compact phone-level acoustic space representation: CTC lattice. With CTC lattice, efficient and effective modular speech recognition approaches, second pass rescoring for large vocabulary continuous speech recognition LVCSR, and phone-based keyword spotting KWS, are also proposed in this paper. Experiments showed that phone synchronous decoding can achieve 3-4 times search speed up without performance degradation compared to frame synchronous decoding. Modular LVCSR with CTC lattice can achieve further WER improvement. KWS with CTC lattice not only achieved significant equal error rate improvement, but also greatly reduced the KWS model size and increased the search speed.",
"title": ""
},
{
"docid": "8f6da9a81b4efe5e76356c6c30ddd6a6",
"text": "Recently, independent component analysis (ICA) has been widely used in the analysis of brain imaging data. An important problem with most ICA algorithms is, however, that they are stochastic; that is, their results may be somewhat different in different runs of the algorithm. Thus, the outputs of a single run of an ICA algorithm should be interpreted with some reserve, and further analysis of the algorithmic reliability of the components is needed. Moreover, as with any statistical method, the results are affected by the random sampling of the data, and some analysis of the statistical significance or reliability should be done as well. Here we present a method for assessing both the algorithmic and statistical reliability of estimated independent components. The method is based on running the ICA algorithm many times with slightly different conditions and visualizing the clustering structure of the obtained components in the signal space. In experiments with magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) data, the method was able to show that expected components are reliable; furthermore, it pointed out components whose interpretation was not obvious but whose reliability should incite the experimenter to investigate the underlying technical or physical phenomena. The method is implemented in a software package called Icasso.",
"title": ""
},
{
"docid": "8f930fc4f06f8b17e2826f0975af1fa1",
"text": "Smart parking is a typical IoT application that can benefit from advances in sensor, actuator and RFID technologies to provide many services to its users and parking owners of a smart city. This paper considers a smart parking infrastructure where sensors are laid down on the parking spots to detect car presence and RFID readers are embedded into parking gates to identify cars and help in the billing of the smart parking. Both types of devices are endowed with wired and wireless communication capabilities for reporting to a gateway where the situation recognition is performed. The sensor devices are tasked to play one of the three roles: (1) slave sensor nodes located on the parking spot to detect car presence/absence; (2) master nodes located at one of the edges of a parking lot to detect presence and collect the sensor readings from the slave nodes; and (3) repeater sensor nodes, also called \"anchor\" nodes, located strategically at specific locations in the parking lot to increase the coverage and connectivity of the wireless sensor network. While slave and master nodes are placed based on geographic constraints, the optimal placement of the relay/anchor sensor nodes in smart parking is an important parameter upon which the cost and efficiency of the parking system depends. We formulate the optimal placement of sensors in smart parking as an integer linear programming multi-objective problem optimizing the sensor network engineering efficiency in terms of coverage and lifetime maximization, as well as its economic gain in terms of the number of sensors deployed for a specific coverage and lifetime. We propose an exact solution to the node placement problem using single-step and two-step solutions implemented in the Mosel language based on the Xpress-MPsuite of libraries. Experimental results reveal the relative efficiency of the single-step compared to the two-step model on different performance parameters. These results are consolidated by simulation results, which reveal that our solution outperforms a random placement in terms of both energy consumption, delay and throughput achieved by a smart parking network.",
"title": ""
},
{
"docid": "c7162cc2e65c52d9575fe95e2c4f62f4",
"text": "The enactive approach to cognition is typically proposed as a viable alternative to traditional cognitive science. Enactive cognition displaces the explanatory focus from the internal representations of the agent to the direct sensorimotor interaction with its environment. In this paper, we investigate enactive learning through means of artificial agent simulations. We compare the performances of the enactive agent to an agent operating on classical reinforcement learning in foraging tasks within maze environments. The characteristics of the agents are analysed in terms of the accessibility of the environmental states, goals, and exploration/exploitation tradeoffs. We confirm that the enactive agent can successfully interact with its environment and learn to avoid unfavourable interactions using intrinsically defined goals. The performance of the enactive agent is shown to be limited by the number of affordable actions.",
"title": ""
},
{
"docid": "89c1ab96b509a80ff35103fa35d0a60c",
"text": "The mobile ad-hoc network (MANET) is a new wireless technology, having features like dynamic topology and self-configuring ability of nodes. The self configuring ability of nodes in MANET made it popular among the critical situation such as military use and emergency recovery. But due to open medium and broad distribution of nodes make MANET vulnerable to different attacks. So to protect MANET from various attacks, it is important to develop an efficient and secure system for MANET. Intrusion means any set of actions that attempt to compromise the integrity, confidentiality, or availability of a resource. Intrusion Prevention is the primary defense because it is the first step to make the systems secure from attacks by using passwords, biometrics etc. Even if intrusion prevention methods are used, the system may be subjected to some vulnerability. So we need a second wall of defense known as Intrusion Detection Systems (IDSs), to detect and produce responses whenever necessary. In this article we present a survey of various intrusion detection schemes available for ad hoc networks. We have also described some of the basic attacks present in ad hoc network and discussed their available solution.",
"title": ""
},
{
"docid": "124a50c2e797ffe549e1591d5720acda",
"text": "Temporal information has useful features for recognizing facial expressions. However, to manually design useful features requires a lot of effort. In this paper, to reduce this effort, a deep learning technique, which is regarded as a tool to automatically extract useful features from raw data, is adopted. Our deep network is based on two different models. The first deep network extracts temporal appearance features from image sequences, while the other deep network extracts temporal geometry features from temporal facial landmark points. These two models are combined using a new integration method in order to boost the performance of the facial expression recognition. Through several experiments, we show that the two models cooperate with each other. As a result, we achieve superior performance to other state-of-the-art methods in the CK+ and Oulu-CASIA databases. Furthermore, we show that our new integration method gives more accurate results than traditional methods, such as a weighted summation and a feature concatenation method.",
"title": ""
},
{
"docid": "71c31f41d116a51786a4e8ded2c5fb87",
"text": "Targeting CTLA-4 represents a new type of immunotherapeutic approach, namely immune checkpoint inhibition. Blockade of CTLA-4 by ipilimumab was the first strategy to achieve a significant clinical benefit for late-stage melanoma patients in two phase 3 trials. These results fueled the notion of immunotherapy being the breakthrough strategy for oncology in 2013. Subsequently, many trials have been set up to test various immune checkpoint modulators in malignancies, not only in melanoma. In this review, recent new ideas about the mechanism of action of CTLA-4 blockade, its current and future therapeutic use, and the intensive search for biomarkers for response will be discussed. Immune checkpoint blockade, targeting CTLA-4 and/or PD-1/PD-L1, is currently the most promising systemic therapeutic approach to achieve long-lasting responses or even cure in many types of cancer, not just in patients with melanoma.",
"title": ""
},
{
"docid": "0bc53a10750de315d5a37275dd7ae4a7",
"text": "The term stigma refers to problems of knowledge (ignorance), attitudes (prejudice) and behaviour (discrimination). Most research in this area has been based on attitude surveys, media representations of mental illness and violence, has only focused upon schizophrenia, has excluded direct participation by service users, and has included few intervention studies. However, there is evidence that interventions to improve public knowledge about mental illness can be effective. The main challenge in future is to identify which interventions will produce behaviour change to reduce discrimination against people with mental illness.",
"title": ""
},
{
"docid": "2f632cc12346cb0d6aa9ce8e765acd14",
"text": "\\ Abstract: Earlier personality of person is predicted by spending lot of time with the person. As we know spending time with person is very difficult task. Referring to this problem, in the present study a method has been proposed for the behavioral prediction of a person through automated handwriting analysis. Handwriting analysis is a method to predict personality of a Person .This is done by Image Processing in MATLAB. In order to predict the personality we are going to take the writing sample and from it we are going to extract different features i.e. slant of letters and words, pen pressure, spacing between letter, spacing between word, size of letters, baseline Segmentation method is used to extract the feature of handwriting which are given to the SVM which shows the behavior of the writer sample. This gives optimum accuracy with the use of Redial Kernel function.",
"title": ""
},
{
"docid": "67ca9035e792e2c6164b87330937bb36",
"text": "In conventional full-duplex radio communication systems, the transmitter (Tx) is active at the same time as the receiver (Rx). The isolation between the Tx and the Rx is ensured by duplex filters. However, an increasing number of long-term evolution (LTE) bands crave multiband operation. Therefore, a new front-end architecture, addressing the increasing number of LTE bands, as well as multiple standards, is presented. In such an architecture, the Tx and Rx chains are separated throughout the front-end. Addition of bands is solved by making the antennas and filters tunable. Banks of duplex filters are replaced by tunable filters and antennas, providing a duplexer function over the air between the Tx and the Rx. A hardware system has been designed and fabricated to demonstrate the performance of this front-end architecture. Measurements demonstrate how the architecture addresses inter-modulation and Rx desensitization due to the Tx signal. The filters and antennas demonstrate tunability across multiple bands. System validation is detailed for LTE band I. Frequency response, as well as linearity measurements of the complete Tx and Rx front-end chains, show that the system requirements are fulfilled.",
"title": ""
},
{
"docid": "c51acd24cb864b050432a055fef2de9a",
"text": "Electric motor and power electronics-based inverter are the major components in industrial and automotive electric drives. In this paper, we present a model-based fault diagnostics system developed using a machine learning technology for detecting and locating multiple classes of faults in an electric drive. Power electronics inverter can be considered to be the weakest link in such a system from hardware failure point of view; hence, this work is focused on detecting faults and finding which switches in the inverter cause the faults. A simulation model has been developed based on the theoretical foundations of electric drives to simulate the normal condition, all single-switch and post-short-circuit faults. A machine learning algorithm has been developed to automatically select a set of representative operating points in the (torque, speed) domain, which in turn is sent to the simulated electric drive model to generate signals for the training of a diagnostic neural network, fault diagnostic neural network (FDNN). We validated the capability of the FDNN on data generated by an experimental bench setup. Our research demonstrates that with a robust machine learning approach, a diagnostic system can be trained based on a simulated electric drive model, which can lead to a correct classification of faults over a wide operating domain.",
"title": ""
},
{
"docid": "dfd88750bc1d42e8cc798d2097426910",
"text": "Melanoma is one of the most lethal forms of skin cancer. It occurs on the skin surface and develops from cells known as melanocytes. The same cells are also responsible for benign lesions commonly known as moles, which are visually similar to melanoma in its early stage. If melanoma is treated correctly, it is very often curable. Currently, much research is concentrated on the automated recognition of melanomas. In this paper, we propose an automated melanoma recognition system, which is based on deep learning method combined with so called hand-crafted RSurf features and Local Binary Patterns. The experimental evaluation on a large publicly available dataset demonstrates high classification accuracy, sensitivity, and specificity of our proposed approach when it is compared with other classifiers on the same dataset.",
"title": ""
},
{
"docid": "c2ac1c1f08e7e4ccba14ea203acba661",
"text": "This paper describes an approach to determine a layout for the order picking area in warehouses, such that the average travel distance for the order pickers is minimized. We give analytical formulas by which the average length of an order picking route can be calculated for two different routing policies. The optimal layout can be determined by using such formula as an objective function in a non-linear programming model. The optimal number of aisles in an order picking area appears to depend strongly on the required storage space and the pick list size.",
"title": ""
},
{
"docid": "a506f3f6c401f83eaba830abb20c8fff",
"text": "The mechanisms governing the recruitment of functional glutamate receptors at nascent excitatory postsynapses following initial axon-dendrite contact remain unclear. We examined here the ability of neurexin/neuroligin adhesions to mobilize AMPA-type glutamate receptors (AMPARs) at postsynapses through a diffusion/trap process involving the scaffold molecule PSD-95. Using single nanoparticle tracking in primary rat and mouse hippocampal neurons overexpressing or lacking neuroligin-1 (Nlg1), a striking inverse correlation was found between AMPAR diffusion and Nlg1 expression level. The use of Nlg1 mutants and inhibitory RNAs against PSD-95 demonstrated that this effect depended on intact Nlg1/PSD-95 interactions. Furthermore, functional AMPARs were recruited within 1 h at nascent Nlg1/PSD-95 clusters assembled by neurexin-1β multimers, a process requiring AMPAR membrane diffusion. Triggering novel neurexin/neuroligin adhesions also caused a depletion of PSD-95 from native synapses and a drop in AMPAR miniature EPSCs, indicating a competitive mechanism. Finally, both AMPAR level at synapses and AMPAR-dependent synaptic transmission were diminished in hippocampal slices from newborn Nlg1 knock-out mice, confirming an important role of Nlg1 in driving AMPARs to nascent synapses. Together, these data reveal a mechanism by which membrane-diffusing AMPARs can be rapidly trapped at PSD-95 scaffolds assembled at nascent neurexin/neuroligin adhesions, in competition with existing synapses.",
"title": ""
},
{
"docid": "4e41e762756c32edfb73ce144bf7ba49",
"text": "In this paper, we outline a model of semantics that integrates aspects of discourse-sensitive logics with the compositional mechanisms available from lexically-driven semantic interpretation. Specifically, we concentrate on developing a composition logic required to properly model complex types within the Generative Lexicon (henceforth GL), for which we employ SDRT principles. As we are presently interested in the composition of information to construct logical forms, we will build on one standard way of arriving at such representations, the lambda calculus, in which functional types are exploited. We outline a new type calculus that captures one of the fundamental ideas of GL: providing a set of techniques governing type shifting possibilities for various lexical items so as to allow for the combination of lexical items in cases where there is an apparent type mismatch. These techniques themselves should follow from the structure of the lexicon and its underlying logic.",
"title": ""
},
{
"docid": "78582e3594deb53149422cc41387e330",
"text": "Approximate entropy (ApEn) is a recently developed statistic quantifying regularity and complexity, which appears to have potential application to a wide variety of relatively short (greater than 100 points) and noisy time-series data. The development of ApEn was motivated by data length constraints commonly encountered, e.g., in heart rate, EEG, and endocrine hormone secretion data sets. We describe ApEn implementation and interpretation, indicating its utility to distinguish correlated stochastic processes, and composite deterministic/ stochastic models. We discuss the key technical idea that motivates ApEn, that one need not fully reconstruct an attractor to discriminate in a statistically valid manner-marginal probability distributions often suffice for this purpose. Finally, we discuss why algorithms to compute, e.g., correlation dimension and the Kolmogorov-Sinai (KS) entropy, often work well for true dynamical systems, yet sometimes operationally confound for general models, with the aid of visual representations of reconstructed dynamics for two contrasting processes. (c) 1995 American Institute of Physics.",
"title": ""
}
] |
scidocsrr
|
d902b33a1bb72273c6bbe7750eeac7dd
|
How to Measure Motivation : A Guide for the Experimental Social Psychologist
|
[
{
"docid": "3efb43150881649d020a0c721dc39ae5",
"text": "Six studies explore the role of goal shielding in self-regulation by examining how the activation of focal goals to which the individual is committed inhibits the accessibility of alternative goals. Consistent evidence was found for such goal shielding, and a number of its moderators were identified: Individuals' level of commitment to the focal goal, their degree of anxiety and depression, their need for cognitive closure, and differences in their goal-related tenacity. Moreover, inhibition of alternative goals was found to be more pronounced when they serve the same overarching purpose as the focal goal, but lessened when the alternative goals facilitate focal goal attainment. Finally, goal shielding was shown to have beneficial consequences for goal pursuit and attainment.",
"title": ""
}
] |
[
{
"docid": "6cbdb95791cc214a1b977e92e69904bb",
"text": "We study reinforcement learning of chat-bots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chat-bot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language processing uses onpolicy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset.",
"title": ""
},
{
"docid": "553b72da13c28e56822ccc900ff114fa",
"text": "This paper presents some of the unique verification, validation, and certification challenges that must be addressed during the development of adaptive system software for use in safety-critical aerospace applications. The paper first discusses the challenges imposed by the current regulatory guidelines for aviation software. Next, a number of individual technologies being researched by NASA and others are discussed that focus on various aspects of the software challenges. These technologies include the formal methods of model checking, compositional verification, static analysis, program synthesis, and runtime analysis. Then the paper presents some validation challenges for adaptive control, including proving convergence over long durations, guaranteeing controller stability, using new tools to compute statistical error bounds, identifying problems in fault-tolerant software, and testing in the presence of adaptation. These specific challenges are presented in the context of a software validation effort in testing the Integrated Flight Control System (IFCS) neural control software at the Dryden Flight Research Center. Lastly, the challenges to develop technologies to help prevent aircraft system failures, detect and identify failures that do occur, and provide enhanced guidance and control capability to prevent and recover from vehicle loss of control are briefly cited in connection with ongoing work at the NASA Langley Research Center.",
"title": ""
},
{
"docid": "23ef781d3230124360f24cc6e38fb15f",
"text": "Exploration of ANNs for the economic purposes is described and empirically examined with the foreign exchange market data. For the experiments, panel data of the exchange rates (USD/EUR, JPN/USD, USD/ GBP) are examined and optimized to be used for time-series predictions with neural networks. In this stage the input selection, in which the processing steps to prepare the raw data to a suitable input for the models are investigated. The best neural network is found with the best forecasting abilities, based on a certain performance measure. A visual graphs on the experiments data set is presented after processing steps, to illustrate that particular results. The out-of-sample results are compared with training ones. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5d5c2016ec936969d3d3a07e0b48f51e",
"text": "In an Information technology world, the ability to effectively process massive datasets has become integral to a broad range of scientific and other academic disciplines. We are living in an era of data deluge and as a result, the term “Big Data” is appearing in many contexts. It ranges from meteorology, genomics, complex physics simulations, biological and environmental research, finance and business to healthcare. Big Data refers to data streams of higher velocity and higher variety. The infrastructure required to support the acquisition of Big Data must deliver low, predictable latency in both capturing data and in executing short, simple queries. To be able to handle very high transaction volumes, often in a distributed environment; and support flexible, dynamic data structures. Data processing is considerably more challenging than simply locating, identifying, understanding, and citing data. For effective large-scale analysis all of this has to happen in a completely automated manner. This requires differences in data structure and semantics to be expressed in forms that are computer understandable, and then “robotically” resolvable. There is a strong body of work in data integration, mapping and transformations. However, considerable additional work is required to achieve automated error-free difference resolution. This paper proposes a framework on recent research for the Data Mining using Big Data.",
"title": ""
},
{
"docid": "5525b8ddce9a8a6430da93f48e93dea5",
"text": "One major goal of vision is to infer physical models of objects, surfaces, and their layout from sensors. In this paper, we aim to interpret indoor scenes from one RGBD image. Our representation encodes the layout of walls, which must conform to a Manhattan structure but is otherwise flexible, and the layout and extent of objects, modeled with CAD-like 3D shapes. We represent both the visible and occluded portions of the scene, producing a complete 3D parse. Such a scene interpretation is useful for robotics and visual reasoning, but difficult to produce due to the wellknown challenge of segmentation, the high degree of occlusion, and the diversity of objects in indoor scene. We take a data-driven approach, generating sets of potential object regions, matching to regions in training images, and transferring and aligning associated 3D models while encouraging fit to observations and overall consistency. We demonstrate encouraging results on the NYU v2 dataset and highlight a variety of interesting directions for future work.",
"title": ""
},
{
"docid": "40c93dacc8318bc440d23fedd2acbd47",
"text": "An electrical-balance duplexer uses series connected step-down transformers to enhance linearity and power handling capability by reducing the voltage swing across nonlinear components. Wideband, dual-notch Tx-to-Rx isolation is demonstrated experimentally with a planar inverted-F antenna. The 0.18μm CMOS prototype achieves >50dB isolation for 220MHz aggregated bandwidth or >40dB dual-notch isolation for 160MHz bandwidth, +49dBm Tx-path IIP3 and -48dBc ACLR1 for +27dBm at the antenna.",
"title": ""
},
{
"docid": "0b0723466d6fc726154befea8a1d7398",
"text": "● Volume of pages makes efficient WWW navigation difficult ● Aim: To analyse users' navigation history to generate tools that increase navigational efficiency – ie. Predictive server prefetching ● Provides a mathematical foundation to several concepts",
"title": ""
},
{
"docid": "e4f4fe27fff75bd7ed079f3094deaedb",
"text": "This paper considers the scenario that multiple data owners wish to apply a machine learning method over the combined dataset of all owners to obtain the best possible learning output but do not want to share the local datasets owing to privacy concerns. We design systems for the scenario that the stochastic gradient descent (SGD) algorithm is used as the machine learning method because SGD (or its variants) is at the heart of recent deep learning techniques over neural networks. Our systems differ from existing systems in the following features: (1) any activation function can be used, meaning that no privacy-preserving-friendly approximation is required; (2) gradients computed by SGD are not shared but the weight parameters are shared instead; and (3) robustness against colluding parties even in the extreme case that only one honest party exists. We prove that our systems, while privacy-preserving, achieve the same learning accuracy as SGD and hence retain the merit of deep learning with respect to accuracy. Finally, we conduct several experiments using benchmark datasets, and show that our systems outperform previous system in terms of learning accuracies. keywords: privacy preservation, stochastic gradient descent, distributed trainers, neural networks.",
"title": ""
},
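The passage above describes distributed training in which trainers share weight parameters rather than gradients. The following Python sketch illustrates only that weight-sharing idea under stated assumptions — it is not the authors' actual protocol: the re-encryption layer and collusion resistance are omitted, the stand-in model is a simple logistic regression, and plain averaging of weights is an assumed aggregation rule.

```python
import numpy as np

def local_sgd_step(w, X, y, lr=0.1):
    """One plain SGD step on a trainer's private data (logistic regression stand-in)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # sigmoid; any activation could be used
    grad = X.T @ (p - y) / len(y)           # the gradient stays local and is never shared
    return w - lr * grad

def share_weights(local_weights):
    """Trainers exchange weights only; here they are simply averaged."""
    return np.mean(local_weights, axis=0)

# toy setup: three data owners, each with a private dataset
rng = np.random.default_rng(0)
d = 5
datasets = [(rng.normal(size=(50, d)), rng.integers(0, 2, 50)) for _ in range(3)]
w_global = np.zeros(d)

for _ in range(100):
    local_ws = [local_sgd_step(w_global.copy(), X, y) for X, y in datasets]
    w_global = share_weights(local_ws)      # only weights leave each owner's machine
```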
{
"docid": "9ac7dbae53fe06937780a53dd3432f80",
"text": "Artefact evaluation is regarded as being crucial for Design Science Research (DSR) in order to rigorously proof an artefact’s relevance for practice. The availability of guidelines for structuring DSR processes notwithstanding, the current body of knowledge provides only rudimentary means for a design researcher to select and justify appropriate artefact evaluation strategies in a given situation. This paper proposes patterns that could be used to articulate and justify artefact evaluation strategies within DSR projects. These patterns have been synthesised from priorDSR literature concerned with evaluation strategies. They distinguish both ex ante as well as ex post evaluations and reflect current DSR approaches and evaluation criteria.",
"title": ""
},
{
"docid": "c2a3344c607cf06c24ed8d2664243284",
"text": "It is common for cloud users to require clusters of inter-connected virtual machines (VMs) in a geo-distributed IaaS cloud, to run their services. Compared to isolated VMs, key challenges on dynamic virtual cluster (VC) provisioning (computation + communication resources) lie in two folds: (1) optimal placement of VCs and inter-VM traffic routing involve NP-hard problems, which are non-trivial to solve offline, not to mention if an online efficient algorithm is sought; (2) an efficient pricing mechanism is missing, which charges a market-driven price for each VC as a whole upon request, while maximizing system efficiency or provider revenue over the entire span. This paper proposes efficient online auction mechanisms to address the above challenges. We first design SWMOA, a novel online algorithm for dynamic VC provisioning and pricing, achieving truthfulness, individual rationality, computation efficiency, and <inline-formula><tex-math notation=\"LaTeX\">$(1+2\\log \\mu)$</tex-math><alternatives> <inline-graphic xlink:href=\"wu-ieq1-2601905.gif\"/></alternatives></inline-formula>-competitiveness in social welfare, where <inline-formula><tex-math notation=\"LaTeX\">$\\mu$</tex-math><alternatives> <inline-graphic xlink:href=\"wu-ieq2-2601905.gif\"/></alternatives></inline-formula> is related to the problem size. Next, applying a randomized reduction technique, we convert the social welfare maximizing auction into a revenue maximizing online auction, PRMOA, achieving <inline-formula><tex-math notation=\"LaTeX\">$O(\\log \\mu)$ </tex-math><alternatives><inline-graphic xlink:href=\"wu-ieq3-2601905.gif\"/></alternatives></inline-formula> -competitiveness in provider revenue, as well as truthfulness, individual rationality and computation efficiency. We investigate auction design in different cases of resource cost functions in the system. We validate the efficacy of the mechanisms through solid theoretical analysis and trace-driven simulations.",
"title": ""
},
{
"docid": "dd53308cc19f85e2a7ab2e379e196b6c",
"text": "Due to the increasingly aging population, there is a rising demand for assistive living technologies for the elderly to ensure their health and well-being. The elderly are mostly chronic patients who require frequent check-ups of multiple vital signs, some of which (e.g., blood pressure and blood glucose) vary greatly according to the daily activities that the elderly are involved in. Therefore, the development of novel wearable intelligent systems to effectively monitor the vital signs continuously over a 24 hour period is in some cases crucial for understanding the progression of chronic symptoms in the elderly. In this paper, recent development of Wearable Intelligent Systems for e-Health (WISEs) is reviewed, including breakthrough technologies and technical challenges that remain to be solved. A novel application of wearable technologies for transient cardiovascular monitoring during water drinking is also reported. In particular, our latest results found that heart rate increased by 9 bpm (P < 0.001) and pulse transit time was reduced by 5 ms (P < 0.001), indicating a possible rise in blood pressure, during swallowing. In addition to monitoring physiological conditions during daily activities, it is anticipated that WISEs will have a number of other potentially viable applications, including the real-time risk prediction of sudden cardiovascular events and deaths. Category: Smart and intelligent computing",
"title": ""
},
{
"docid": "1c77e4e01e20b33aca309adabb37868d",
"text": "From the automated text processing point of view, natural language is very redundant in the sense that many different words share a common or similar meaning. For computer this can be hard to understand without some background knowledge. Latent Semantic Indexing (LSI) is a technique that helps in extracting some of this background knowledge from corpus of text documents. This can be also viewed as extraction of hidden semantic concepts from text documents. On the other hand visualization can be very helpful in data analysis, for instance, for finding main topics that appear in larger sets of documents. Extraction of main concepts from documents using techniques such as LSI, can make the results of visualizations more useful. For example, given a set of descriptions of European Research projects (6FP) one can find main areas that these projects cover including semantic web, e-learning, security, etc. In this paper we describe a method for visualization of document corpus based on LSI, the system implementing it and give results of using the system on several datasets.",
"title": ""
},
{
"docid": "98f052bd353437e70b4ccc15d933d961",
"text": "Current cloud providers use fixed-price based mechanisms to allocate Virtual Machine (VM) instances to their users. The fixed-price based mechanisms do not provide an efficient allocation of resources and do not maximize the revenue of the cloud providers. A better alternative would be to use combinatorial auction-based resource allocation mechanisms. In this PhD dissertation we will design, study and implement combinatorial auction-based mechanisms for efficient provisioning and allocation of VM instances in cloud computing environments. We present our preliminary results consisting of three combinatorial auction-based mechanisms for VM provisioning and allocation. We also present an efficient bidding algorithm that can be used by the cloud users to decide on how to bid for their requested bundles of VM instances.",
"title": ""
},
{
"docid": "ef142067a29f8662e36d68ee37c07bce",
"text": "The lack of assessment tools to analyze serious games and insufficient knowledge on their impact on players is a recurring critique in the field of game and media studies, education science and psychology. Although initial empirical studies on serious games usage deliver discussable results, numerous questions remain unacknowledged. In particular, questions regarding the quality of their formal conceptual design in relation to their purpose mostly stay uncharted. In the majority of cases the designers' good intentions justify incoherence and insufficiencies in their design. In addition, serious games are mainly assessed in terms of the quality of their content, not in terms of their intention-based design. This paper argues that analyzing a game's formal conceptual design, its elements, and their relation to each other based on the game's purpose is a constructive first step in assessing serious games. By outlining the background of the Serious Game Design Assessment Framework and exemplifying its use, a constructive structure to examine purpose-based games is introduced. To demonstrate how to assess the formal conceptual design of serious games we applied the SGDA Framework to the online games \"Sweatshop\" (2011) and \"ICED\" (2008).",
"title": ""
},
{
"docid": "566144a980fe85005f7434f7762bfeb9",
"text": "This article describes the rationale, development, and validation of the Scale for Suicide Ideation (SSI), a 19-item clinical research instrument designed to quantify and assess suicidal intention. The scale was found to have high internal consistency and moderately high correlations with clinical ratings of suicidal risk and self-administered measures of self-harm. Furthermore, it was sensitive to changes in levels of depression and hopelessness over time. Its construct validity was supported by two studies by different investigators testing the relationship between hopelessness, depression, and suicidal ideation and by a study demonstrating a significant relationship between high level of suicidal ideation and \"dichotomous\" attitudes about life and related concepts on a semantic differential test. Factor analysis yielded three meaningful factors: active suicidal desire, specific plans for suicide, and passive suicidal desire.",
"title": ""
},
{
"docid": "176c9231f27d22658be5107a74ab2f32",
"text": "The emerging ambient persuasive technology looks very promising for many areas of personal and ubiquitous computing. Persuasive applications aim at changing human attitudes or behavior through the power of software designs. This theory-creating article suggests the concept of a behavior change support system (BCSS), whether web-based, mobile, ubiquitous, or more traditional information system to be treated as the core of research into persuasion, influence, nudge, and coercion. This article provides a foundation for studying BCSSs, in which the key constructs are the O/C matrix and the PSD model. It will (1) introduce the archetypes of behavior change via BCSSs, (2) describe the design process for building persuasive BCSSs, and (3) exemplify research into BCSSs through the domain of health interventions. Recognizing the themes put forward in this article will help leverage the full potential of computing for producing behavioral changes.",
"title": ""
},
{
"docid": "11ae42bedc18dedd0c29004000a4ec00",
"text": "A hand injury can have great impact on a person's daily life. However, the current manual evaluations of hand functions are imprecise and inconvenient. In this research, a data glove embedded with 6-axis inertial sensors is proposed. With the proposed angle calculating algorithm, accurate bending angles are measured to estimate the real-time movements of hands. This proposed system can provide physicians with an efficient tool to evaluate the recovery of patients and improve the quality of hand rehabilitation.",
"title": ""
},
{
"docid": "39be1d73b84872b0ae1d61bbd0fc96f8",
"text": "Annotating data is a common bottleneck in building text classifiers. This is particularly problematic in social media domains, where data drift requires frequent retraining to maintain high accuracy. In this paper, we propose and evaluate a text classification method for Twitter data whose only required human input is a single keyword per class. The algorithm proceeds by identifying exemplar Twitter accounts that are representative of each class by analyzing Twitter Lists (human-curated collections of related Twitter accounts). A classifier is then fit to the exemplar accounts and used to predict labels of new tweets and users. We develop domain adaptation methods to address the noise and selection bias inherent to this approach, which we find to be critical to classification accuracy. Across a diverse set of tasks (topic, gender, and political affiliation classification), we find that the resulting classifier is competitive with a fully supervised baseline, achieving superior accuracy on four of six datasets despite using no manually labeled data.",
"title": ""
},
{
"docid": "f1d4323cbabd294723a2fd68321ad640",
"text": "Mycosis fungoides (MF), a low-grade lymphoproliferative disorder, is the most common type of cutaneous T-cell lymphoma. Typically, neoplastic T cells localize to the skin and produce patches, plaques, tumours or erythroderma. Diagnosis of MF can be difficult due to highly variable presentations and the sometimes nonspecific nature of histological findings. Molecular biology has improved the diagnostic accuracy. Nevertheless, clinical experience is of substantial importance as MF can resemble a wide variety of skin diseases. We performed a literature review and found that MF can mimic >50 different clinical entities. We present a structured framework of clinical variations of classical, unusual and distinct forms of MF. Distinct subforms such as ichthyotic MF, adnexotropic (including syringotropic and folliculotropic) MF, MF with follicular mucinosis, granulomatous MF with granulomatous slack skin and papuloerythroderma of Ofuji are delineated in more detail.",
"title": ""
},
{
"docid": "5c74348ce0028786990b4ca39b1e858d",
"text": "The terminology Internet of Things (IoT) refers to a future where every day physical objects are connected by the Internet in one form or the other, but outside the traditional desktop realm. The successful emergence of the IoT vision, however, will require computing to extend past traditional scenarios involving portables and smart-phones to the connection of everyday physical objects and the integration of intelligence with the environment. Subsequently, this will lead to the development of new computing features and challenges. The main purpose of this paper, therefore, is to investigate the features, challenges, and weaknesses that will come about, as the IoT becomes reality with the connection of more and more physical objects. Specifically, the study seeks to assess emergent challenges due to denial of service attacks, eavesdropping, node capture in the IoT infrastructure, and physical security of the sensors. We conducted a literature review about IoT, their features, challenges, and vulnerabilities. The methodology paradigm used was qualitative in nature with an exploratory research design, while data was collected using the desk research method. We found that, in the distributed form of architecture in IoT, attackers could hijack unsecured network devices converting them into bots to attack third parties. Moreover, attackers could target communication channels and extract data from the information flow. Finally, the perceptual layer in distributed IoT architecture is also found to be vulnerable to node capture attacks, including physical capture, brute force attack, DDoS attacks, and node privacy leaks.",
"title": ""
}
] |
scidocsrr
|
dfbe3ab81b76c649f8e79edf81f8c8df
|
Some Faces are More Equal than Others: Hierarchical Organization for Accurate and Efficient Large-Scale Identity-Based Face Retrieval
|
[
{
"docid": "535c8a15005505fce4b1dfc09d060981",
"text": "The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but handcrafting such good metrics for specific problems is generally difficult. This has led to the emergence of metric learning, which aims at automatically learning a metric from data and has attracted a lot of interest in machine learning and related fields for the past ten years. This survey paper proposes a systematic review of the metric learning literature, highlighting the pros and cons of each approach. We pay particular attention to Mahalanobis distance metric learning, a well-studied and successful framework, but additionally present a wide range of methods that have recently emerged as powerful alternatives, including nonlinear metric learning, similarity learning and local metric learning. Recent trends and extensions, such as semi-supervised metric learning, metric learning for histogram data and the derivation of generalization guarantees, are also covered. Finally, this survey addresses metric learning for structured data, in particular edit distance learning, and attempts to give an overview of the remaining challenges in metric learning for the years to come.",
"title": ""
}
] |
[
{
"docid": "ac910612672c2c46fb2abd039d65e1df",
"text": "In the last few years, there has been a wave of articles related to behavioral addictions; some of them have a focus on online pornography addiction. However, despite all efforts, we are still unable to profile when engaging in this behavior becomes pathological. Common problems include: sample bias, the search for diagnostic instrumentals, opposing approximations to the matter, and the fact that this entity may be encompassed inside a greater pathology (i.e., sex addiction) that may present itself with very diverse symptomatology. Behavioral addictions form a largely unexplored field of study, and usually exhibit a problematic consumption model: loss of control, impairment, and risky use. Hypersexual disorder fits this model and may be composed of several sexual behaviors, like problematic use of online pornography (POPU). Online pornography use is on the rise, with a potential for addiction considering the \"triple A\" influence (accessibility, affordability, anonymity). This problematic use might have adverse effects in sexual development and sexual functioning, especially among the young population. We aim to gather existing knowledge on problematic online pornography use as a pathological entity. Here we try to summarize what we know about this entity and outline some areas worthy of further research.",
"title": ""
},
{
"docid": "54f3c26ab9d9d6afdc9e1bf9e96f02f9",
"text": "Game designers use human playtesting to gather feedback about game design elements when iteratively improving a game. Playtesting, however, is expensive: human testers must be recruited, playtest results must be aggregated and interpreted, and changes to game designs must be extrapolated from these results. Can automated methods reduce this expense? We show how active learning techniques can formalize and automate a subset of playtesting goals. Specifically, we focus on the low-level parameter tuning required to balance a game once the mechanics have been chosen. Through a case study on a shoot-‘em-up game we demonstrate the efficacy of active learning to reduce the amount of playtesting needed to choose the optimal set of game parameters for two classes of (formal) design objectives. This work opens the potential for additional methods to reduce the human burden of performing playtesting for a variety of relevant design concerns.",
"title": ""
},
{
"docid": "b868a1bf3a3a45fbba8ea27527ca47fd",
"text": "Social media and microblog tools are increasingly used by individuals to express their feelings and opinions in the form of short text messages. Detecting emotions in text has a wide range of applications including identifying anxiety or depression of individuals and measuring well-being or public mood of a community. In this paper, we propose a new approach for automatically classifying text messages of individuals to infer their emotional states. To model emotional states, we utilize the well-established Circumplex model that characterizes affective experience along two dimensions: valence and arousal. We select Twitter messages as input data set, as they provide a very large, diverse and freely available ensemble of emotions. Using hash-tags as labels, our methodology trains supervised classifiers to detect multiple classes of emotion on potentially huge data sets with no manual effort. We investigate the utility of several features for emotion detection, including unigrams, emoticons, negations and punctuations. To tackle the problem of sparse and high dimensional feature vectors of messages, we utilize a lexicon of emotions. We have compared the accuracy of several machine learning algorithms, including SVM, KNN, Decision Tree, and Naive Bayes for classifying Twitter messages. Our technique has an accuracy of over 90%, while demonstrating robustness across learning algorithms.",
"title": ""
},
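The abstract above trains supervised classifiers on tweets that are labelled automatically by their hash-tags. A minimal sketch of that setup is given below; the tweets and labels are hypothetical, and the original work's additional features (emoticons, negations, punctuation, and an emotion lexicon) as well as its specific classifiers are omitted here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# hash-tags act as emotion labels, so no manual annotation is needed
tweets = ["stuck in traffic again #angry",
          "just got the job offer #happy",
          "home alone tonight #sad",
          "exam results tomorrow #anxious"]
labels = [t.split("#")[-1] for t in tweets]    # label = trailing hash-tag
texts  = [t.split("#")[0] for t in tweets]     # strip the tag from the features

# unigram features + a linear classifier (SVM, KNN, etc. could be swapped in)
model = make_pipeline(CountVectorizer(ngram_range=(1, 1)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["so excited about the weekend"]))
```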
{
"docid": "518b96236ffa2ce0413a0e01d280937a",
"text": "In this paper, we propose a low-rank representation with symmetric constraint (LRRSC) method for robust subspace clustering. Given a collection of data points approximately drawn from multiple subspaces, the proposed technique can simultaneously recover the dimension and members of each subspace. LRRSC extends the original low-rank representation algorithm by integrating a symmetric constraint into the low-rankness property of high-dimensional data representation. The symmetric low-rank representation, which preserves the subspace structures of high-dimensional data, guarantees weight consistency for each pair of data points so that highly correlated data points of subspaces are represented together. Moreover, it can be efficiently calculated by solving a convex optimization problem. We provide a rigorous proof for minimizing the nuclear-norm regularized least square problem with a symmetric constraint. The affinity matrix for spectral clustering can be obtained by further exploiting the angular information of the principal directions of the symmetric low-rank representation. This is a critical step towards evaluating the memberships between data points. Experimental results on benchmark databases demonstrate the effectiveness and robustness of LRRSC compared with several state-of-the-art subspace clustering algorithms.",
"title": ""
},
{
"docid": "2074ab39d5cec1f9e645ff2ad457f3e3",
"text": "[Context and motivation] The current breakthrough of natural language processing (NLP) techniques can provide the requirements engineering (RE) community with powerful tools that can help addressing specific tasks of natural language (NL) requirements analysis, such as traceability, ambiguity detection and requirements classification, to name a few. [Question/problem] However, modern NLP techniques are mainly statistical, and need large NL requirements datasets, to support appropriate training, test and validation of the techniques. The RE community has experimented with NLP since long time, but datasets were often proprietary, or limited to few software projects for which requirements were publicly available. Hence, replication of the experiments and generalization have always been an issue. [Principal idea/results] Our near future commitment is to provide a publicly available NL requirements dataset. [Contribution] To this end, we are collecting requirements documents from the Web, and we are representing them in a common XML format. In this paper, we present the current version of the dataset, together with our agenda concerning formatting, extension, and annotation of the dataset.",
"title": ""
},
{
"docid": "8fac46b10cc8a439f9aa4eedfd2f413d",
"text": "How does a lack of sleep affect our brains? In contrast to the benefits of sleep, frameworks exploring the impact of sleep loss are relatively lacking. Importantly, the effects of sleep deprivation (SD) do not simply reflect the absence of sleep and the benefits attributed to it; rather, they reflect the consequences of several additional factors, including extended wakefulness. With a focus on neuroimaging studies, we review the consequences of SD on attention and working memory, positive and negative emotion, and hippocampal learning. We explore how this evidence informs our mechanistic understanding of the known changes in cognition and emotion associated with SD, and the insights it provides regarding clinical conditions associated with sleep disruption.",
"title": ""
},
{
"docid": "094dbd57522cb7b9b134b14852bea78b",
"text": "When encountering qualitative research for the first time, one is confronted with both the number of methods and the difficulty of collecting, analysing and presenting large amounts of data. In quantitative research, it is possible to make a clear distinction between gathering and analysing data. However, this distinction is not clear-cut in qualitative research. The objective of this paper is to provide insight for the novice researcher and the experienced researcher coming to grounded theory for the first time. For those who already have experience in the use of the method the paper provides further much needed discussion arising out of デエW マWデエラSげゲ ;Sラヮデキラミ キミ デエW I“ aキWノSく In this paper the authors present a practical application and illustrate how grounded theory method was applied to an interpretive case study research. The paper discusses grounded theory method and provides guidance for the use of the method in interpretive studies.",
"title": ""
},
{
"docid": "89dc7cad01e784f047774ab665fb53d4",
"text": "This paper studies a top-k hierarchical classification problem. In top-k classification, one is allowed to make k predictions and no penalty is incurred if at least one of k predictions is correct. In hierarchical classification, classes form a structured hierarchy, and misclassification costs depend on the relation between the correct class and the incorrect class in the hierarchy. Despite that the fact that both top-k classification and hierarchical classification have gained increasing interests, the two problems have always been studied separately. In this paper, we define a top-k hierarchical loss function using a real world application. We provide the Bayes-optimal solution that minimizes the expected top-k hierarchical misclassification cost. Via numerical experiments, we show that our solution outperforms two baseline methods that address only one of the two issues.",
"title": ""
},
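The abstract above combines top-k prediction with hierarchy-dependent misclassification costs. One plausible way to write such a loss down — using an assumed tree-distance cost and a toy hierarchy, which may differ from the exact cost function in the paper — is to charge the cheapest hierarchical cost achieved by any of the k predictions:

```python
# parent map encodes the class hierarchy (None marks the root)
parents = {"root": None, "animal": "root", "dog": "animal",
           "cat": "animal", "vehicle": "root", "car": "vehicle"}

def path_to_root(node):
    path = []
    while node is not None:
        path.append(node)
        node = parents[node]
    return path

def tree_distance(a, b):
    """Number of hops from a to b through their lowest common ancestor."""
    pa, pb = path_to_root(a), path_to_root(b)
    common = set(pa) & set(pb)
    lca = min(common, key=lambda n: min(pa.index(n), pb.index(n)))
    return pa.index(lca) + pb.index(lca)

def topk_hierarchical_loss(true_class, k_predictions):
    """Zero if any of the k predictions is correct; otherwise the smallest
    hierarchical distance achieved by the k guesses."""
    return min(tree_distance(true_class, p) for p in k_predictions)

print(topk_hierarchical_loss("dog", ["cat", "car"]))   # 2: 'cat' shares the parent 'animal'
```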
{
"docid": "8401deada9010f05e3c9907a421d6760",
"text": "Heuristics evaluation is one of the common techniques being used for usability evaluation. The potential of HE has been explored in games design and development and later playability heuristics evaluation (PHE) is generated. PHE has been used in evaluating games. Issues in games usability covers forms of game usability, game interface, game mechanics, game narrative and game play. This general heuristics has the potential to be further explored in specific domain of games that is educational games. Combination of general heuristics of games (tailored based on specific domain) and education heuristics seems to be an excellent focus in order to evaluate the usability issues in educational games especially educational games produced in Malaysia.",
"title": ""
},
{
"docid": "3a5ac4dc112c079955104bda98f80b58",
"text": "This review examines vestibular compensation and vestibular rehabilitation from a unified translational research perspective. Laboratory studies illustrate neurobiological principles of vestibular compensation at the molecular, cellular and systems levels in animal models that inform vestibular rehabilitation practice. However, basic research has been hampered by an emphasis on 'naturalistic' recovery, with time after insult and drug interventions as primary dependent variables. The vestibular rehabilitation literature, on the other hand, provides information on how the degree of compensation can be shaped by specific activity regimens. The milestones of the early spontaneous static compensation mark the re-establishment of static gaze stability, which provides a common coordinate frame for the brain to interpret residual vestibular information in the context of visual, somatosensory and visceral signals that convey gravitoinertial information. Stabilization of the head orientation and the eye orientation (suppression of spontaneous nystagmus) appear to be necessary by not sufficient conditions for successful rehabilitation, and define a baseline for initiating retraining. The lessons from vestibular rehabilitation in animal models offer the possibility of shaping the recovery trajectory to identify molecular and genetic factors that can improve vestibular compensation.",
"title": ""
},
{
"docid": "cf52d720512c316dc25f8167d5571162",
"text": "BACKGROUND\nHidradenitis suppurativa (HS) is a chronic relapsing skin disease. Recent studies have shown promising results of anti-tumor necrosis factor-alpha treatment.\n\n\nOBJECTIVE\nTo compare the efficacy and safety of infliximab and adalimumab in the treatment of HS.\n\n\nMETHODS\nA retrospective study was performed to compare 2 cohorts of 10 adult patients suffering from severe, recalcitrant HS. In 2005, 10 patients were treated with infliximab intravenous (i.v.) (3 infusions of 5 mg/kg at weeks 0, 2, and 6). In 2009, 10 other patients were treated in the same hospital with adalimumab subcutaneous (s.c.) 40 mg every other week. Both cohorts were followed up for 1 year using identical evaluation methods [Sartorius score, quality of life index, reduction of erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP), patient and doctor global assessment, and duration of efficacy].\n\n\nRESULTS\nNineteen patients completed the study. In both groups, the severity of the HS diminished. Infliximab performed better in all aspects. The average Sartorius score was reduced to 54% of baseline for the infliximab group and 66% of baseline for the adalimumab group.\n\n\nCONCLUSIONS\nAdalimumab s.c. 40 mg every other week is less effective than infliximab i.v. 5 mg/kg at weeks 0, 2, and 6.",
"title": ""
},
{
"docid": "5af009ec32eeda769e309b0979f5fbd3",
"text": "A modified pole-and-and-knife (MPK) method of harvesting oil palms was designed and fabricated. The method was tested along with two existing methods, namely the bamboo pole-and-knife (BPK) and the single rope-and-cutlass (SRC) methods. Test results showed that the MPK method was superior to the other methods in reducing the time spent in searching for and collecting scattered loose fruits (and hence the harvesting time), increasing the recovery of scattered loose fruits, eliminating the waist problem of the fruit collectors and increasing the ease of transportation and use of the harvesting pole.",
"title": ""
},
{
"docid": "047480185afbea439eee2ee803b9d1f9",
"text": "The ability to perceive and analyze terrain is a key problem in mobile robot navigation. Terrain perception problems arise in planetary robotics, agriculture, mining, and, of course, self-driving cars. Here, we introduce the PTA (probabilistic terrain analysis) algorithm for terrain classification with a fastmoving robot platform. The PTA algorithm uses probabilistic techniques to integrate range measurements over time, and relies on efficient statistical tests for distinguishing drivable from nondrivable terrain. By using probabilistic techniques, PTA is able to accommodate severe errors in sensing, and identify obstacles with nearly 100% accuracy at speeds of up to 35mph. The PTA algorithm was an essential component in the DARPA Grand Challenge, where it enabled our robot Stanley to traverse the entire course in record time.",
"title": ""
},
{
"docid": "406d839d15c18ac9c462c5f5af6b10b7",
"text": "The Multiple Meanings of Open Government Data: Understanding Different Stakeholders and Their Perspectives Felipe Gonzalez-Zapata & Richard Heeks Centre for Development Informatics, University of Manchester, Manchester, M13 9PL, UK Corresponding author: Prof. Richard Heeks, Centre for Development Informatics, IDPM, SEED, University of Manchester, Manchester, M13 9PL, UK, +44-161-275-2870 richard.heeks@manchester.ac.uk",
"title": ""
},
{
"docid": "e059d7e04c3dba8ed570ad1d72a647b5",
"text": "An electronic throttle is a low-power dc servo drive which positions the throttle plate. Its application in modern automotive engines leads to improvements in vehicle drivability, fuel economy, and emissions. Transmission friction and the return spring limp-home nonlinearity significantly affect the electronic throttle performance. The influence of these effects is analyzed by means of computer simulations, experiments, and analytical calculations. A dynamic friction model is developed in order to adequately capture the experimentally observed characteristics of the presliding-displacement and breakaway effects. The linear part of electronic throttle process model is also analyzed and experimentally identified. A nonlinear control strategy is proposed, consisting of a proportional-integral-derivative (PID) controller and a feedback compensator for friction and limp-home effects. The PID controller parameters are analytically optimized according to the damping optimum criterion. The proposed control strategy is verified by computer simulations and experiments.",
"title": ""
},
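The throttle-control passage above pairs a PID position controller with a feedback compensator for friction and the limp-home spring nonlinearity. The bare-bones discrete-time sketch below shows only that structure; the gains, sampling time, and the compensation rule are placeholder assumptions, not the damping-optimum-tuned values or the exact compensator from the paper.

```python
def make_throttle_controller(kp, ki, kd, dt, u_friction, limp_home_angle):
    """Returns a discrete PID + simple friction/limp-home compensator."""
    integral, prev_error = 0.0, 0.0

    def control(reference, measured):
        nonlocal integral, prev_error
        error = reference - measured
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error

        u_pid = kp * error + ki * integral + kd * derivative

        # crude compensator: push against friction in the demanded direction of motion,
        # and add extra effort when the plate must cross the limp-home position
        u_comp = u_friction * (1 if error > 0 else -1 if error < 0 else 0)
        if measured < limp_home_angle <= reference:
            u_comp += u_friction
        return u_pid + u_comp

    return control

controller = make_throttle_controller(kp=4.0, ki=2.0, kd=0.05, dt=0.001,
                                      u_friction=0.3, limp_home_angle=8.0)
u = controller(reference=20.0, measured=5.0)   # command sent to the dc servo drive
```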
{
"docid": "6a196d894d94b194627f6e3c127c83fb",
"text": "The advantages provided to memory by the distribution of multiple practice or study opportunities are among the most powerful effects in memory research. In this paper, we critically review the class of theories that presume contextual or encoding variability as the sole basis for the advantages of distributed practice, and recommend an alternative approach based on the idea that some study events remind learners of other study events. Encoding variability theory encounters serious challenges in two important phenomena that we review here: superadditivity and nonmonotonicity. The bottleneck in such theories lies in the assumption that mnemonic benefits arise from the increasing independence, rather than interdependence, of study opportunities. The reminding model accounts for many basic results in the literature on distributed practice, readily handles data that are problematic for encoding variability theories, including superadditivity and nonmonotonicity, and provides a unified theoretical framework for understanding the effects of repetition and the effects of associative relationships on memory.",
"title": ""
},
{
"docid": "b2de2955568a37301828708e15b5ed15",
"text": "ISPRS and CNES announced the HRS (High Resolution Stereo) Scientific Assessment Program during the ISPRS Commission I Symposium in Denver in November 2002. 9 test areas throughout the world have been selected for this program. One of the test sites is located in Bavaria, Germany, for which the PI comes from DLR. For a second region, which is situated in Catalonia – Barcelona and surroundings – DLR has the role of a Co-Investigator. The goal is to derive a DEM from the along-track stereo data of the SPOT HRS sensor and to assess the accuracy by comparison with ground control points and DEM data of superior quality. For the derivation of the DEM, the stereo processing software, developed at DLR for the MOMS-2P three line stereo camera is used. As a first step, the interior and exterior orientation of the camera, delivered as ancillary data (DORIS and ULS) are extracted. According to CNES these data should lead to an absolute orientation accuracy of about 30 m. No bundle block adjustment with ground control is used in the first step of the photogrammetric evaluation. A dense image matching, using very dense positions as kernel centers provides the parallaxes. The quality of the matching is controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection leads to points in object space which are then interpolated to a DEM of the region in a regular grid. Additionally, orthoimages are generated from the images of the two looking directions. The orthoimage and DEM accuracy is determined by using the ground control points and the available DEM data of superior accuracy (DEM derived from laser data and/or classical airborne photogrammetry). DEM filtering methods are applied and a comparison to SRTM-DEMs is performed. It is shown that a fusion of the DEMs derived from optical and radar data leads to higher accuracies. In the second step ground control points are used for bundle adjustment to improve the exterior orientation and the absolute accuracy of the SPOT-DEM.",
"title": ""
},
{
"docid": "4ca5fec568185d3699c711cc86104854",
"text": "Attackers often create systems that automatically rewrite and reorder their malware to avoid detection. Typical machine learning approaches, which learn a classifier based on a handcrafted feature vector, are not sufficiently robust to such reorderings. We propose a different approach, which, similar to natural language modeling, learns the language of malware spoken through the executed instructions and extracts robust, time domain features. Echo state networks (ESNs) and recurrent neural networks (RNNs) are used for the projection stage that extracts the features. These models are trained in an unsupervised fashion. A standard classifier uses these features to detect malicious files. We explore a few variants of ESNs and RNNs for the projection stage, including Max-Pooling and Half-Frame models which we propose. The best performing hybrid model uses an ESN for the recurrent model, Max-Pooling for non-linear sampling, and logistic regression for the final classification. Compared to the standard trigram of events model, it improves the true positive rate by 98.3% at a false positive rate of 0.1%.",
"title": ""
},
{
"docid": "abda48a065aecbe34f86ce3490520402",
"text": "Wireless Sensor Network (WSN) consists of small low-cost, low-power multifunctional nodes interconnected to efficiently aggregate and transmit data to sink. Cluster-based approaches use some nodes as Cluster Heads (CHs) and organize WSNs efficiently for aggregation of data and energy saving. A CH conveys information gathered by cluster nodes and aggregates/compresses data before transmitting it to a sink. However, this additional responsibility of the node results in a higher energy drain leading to uneven network degradation. Low Energy Adaptive Clustering Hierarchy (LEACH) offsets this by probabilistically rotating cluster heads role among nodes with energy above a set threshold. CH selection in WSN is NP-Hard as optimal data aggregation with efficient energy savings cannot be solved in polynomial time. In this work, a modified firefly heuristic, synchronous firefly algorithm, is proposed to improve the network performance. Extensive simulation shows the proposed technique to perform well compared to LEACH and energy-efficient hierarchical clustering. Simulations show the effectiveness of the proposed method in decreasing the packet loss ratio by an average of 9.63% and improving the energy efficiency of the network when compared to LEACH and EEHC.",
"title": ""
}
] |
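The last passage in the list above builds on LEACH-style probabilistic rotation of cluster heads. For reference, a standalone sketch of the commonly cited LEACH election rule — threshold T(n) = p / (1 - p * (r mod 1/p)) applied to nodes that have not served recently — is shown below; the firefly-based improvement from the abstract is not reproduced, and the energy threshold is an assumed parameter.

```python
import random

def leach_elect_cluster_heads(nodes, p, round_no, energy_threshold=0.0):
    """nodes: dict node_id -> {'energy': float, 'last_ch_round': int or None}.
    A node becomes CH if a uniform draw falls below the LEACH threshold T(n)."""
    period = int(1 / p)                       # each node should serve once per 1/p rounds
    threshold = p / (1 - p * (round_no % period))
    heads = []
    for nid, state in nodes.items():
        served_recently = (state['last_ch_round'] is not None and
                           round_no - state['last_ch_round'] < period)
        if state['energy'] > energy_threshold and not served_recently:
            if random.random() < threshold:
                heads.append(nid)
                state['last_ch_round'] = round_no
    return heads

nodes = {i: {'energy': random.uniform(0.5, 1.0), 'last_ch_round': None} for i in range(100)}
print(leach_elect_cluster_heads(nodes, p=0.05, round_no=3))
```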
scidocsrr
|
463cfc839609d32f61e48ffd239310f4
|
Centering Theory in Spanish: Coding Manual
|
[
{
"docid": "c1e39be2fa21a4f47d163c1407490dc8",
"text": "Most existing anaphora resolution algorithms are designed to account only for anaphors with NP-antecedents. This paper describes an algorithm for the resolution of discourse deictic anaphors, which constitute a large percentage of anaphors in spoken dialogues. The success of the resolution is dependent on the classification of all pronouns and demonstratives into individual, discourse deictic and vague anaphora. Finally, the empirical results of the application of the algorithm to a corpus of spoken dialogues are presented.",
"title": ""
}
] |
[
{
"docid": "b35922663b4728c409528675be15d586",
"text": "High-resolution screen printing of pristine graphene is introduced for the rapid fabrication of conductive lines on flexible substrates. Well-defined silicon stencils and viscosity-controlled inks facilitate the preparation of high-quality graphene patterns as narrow as 40 μm. This strategy provides an efficient method to produce highly flexible graphene electrodes for printed electronics.",
"title": ""
},
{
"docid": "6d5b7b5e1738993991a1344a1f584b68",
"text": "Smart route planning gathers increasing interest as cities become crowded and jammed. We present a system for individual trip planning that incorporates future traffic hazards in routing. Future traffic conditions are computed by a Spatio-Temporal Random Field based on a stream of sensor readings. In addition, our approach estimates traffic flow in areas with low sensor coverage using a Gaussian Process Regression. The conditioning of spatial regression on intermediate predictions of a discrete probabilistic graphical model allows to incorporate historical data, streamed online data and a rich dependency structure at the same time. We demonstrate the system and test model assumptions with a real-world use-case from Dublin city, Ireland.",
"title": ""
},
{
"docid": "14fb6228827657ba6f8d35d169ad3c63",
"text": "In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.",
"title": ""
},
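The AMP passage above describes deriving the algorithm from sum-product belief propagation for sparse recovery. As a reminder of what the resulting iteration looks like, the sketch below uses a generic textbook form with a fixed soft-threshold value — a simplification of the tuned threshold schedules used in the AMP literature, so treat the parameter choices as assumptions.

```python
import numpy as np

def soft_threshold(u, theta):
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def amp(A, y, theta=0.1, iters=30):
    """Approximate message passing for y = A x with sparse x.
    The Onsager correction term is what distinguishes AMP from plain iterative thresholding."""
    m, n = A.shape
    delta = m / n
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        pseudo = x + A.T @ z                          # effective observation
        x_new = soft_threshold(pseudo, theta)
        onsager = (np.mean(np.abs(pseudo) > theta) / delta) * z
        z = y - A @ x_new + onsager                   # residual with Onsager correction
        x = x_new
    return x

rng = np.random.default_rng(1)
m, n = 80, 200
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.normal(size=10)
x_hat = amp(A, A @ x_true)
```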
{
"docid": "c8768e560af11068890cc097f1255474",
"text": "Abstract This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has been thus far downloaded by more than 500 organizations. MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection.",
"title": ""
},
{
"docid": "052ae69b1fe396f66cb4788372dc3c79",
"text": "Model transformation by example is a novel approach in model-driven software engineering to derive model transformation rules from an initial prototypical set of interrelated source and target models, which describe critical cases of the model transformation problem in a purely declarative way. In the current paper, we automate this approach using inductive logic programming (Muggleton and Raedt in J Logic Program 19-20:629–679, 1994) which aims at the inductive construction of first-order clausal theories from examples and background knowledge.",
"title": ""
},
{
"docid": "b3962fd4000fced796f3764d009c929e",
"text": "Low-field extremity magnetic resonance imaging (lfMRI) is currently commercially available and has been used clinically to evaluate rheumatoid arthritis (RA). However, one disadvantage of this new modality is that the field of view (FOV) is too small to assess hand and wrist joints simultaneously. Thus, we have developed a new lfMRI system, compacTscan, with a FOV that is large enough to simultaneously assess the entire wrist to proximal interphalangeal joint area. In this work, we examined its clinical value compared to conventional 1.5 tesla (T) MRI. The comparison involved evaluating three RA patients by both 0.3 T compacTscan and 1.5 T MRI on the same day. Bone erosion, bone edema, and synovitis were estimated by our new compact MRI scoring system (cMRIS) and the kappa coefficient was calculated on a joint-by-joint basis. We evaluated a total of 69 regions. Bone erosion was detected in 49 regions by compacTscan and in 48 regions by 1.5 T MRI, while the total erosion score was 77 for compacTscan and 76.5 for 1.5 T MRI. These findings point to excellent agreement between the two techniques (kappa = 0.833). Bone edema was detected in 14 regions by compacTscan and in 19 by 1.5 T MRI, and the total edema score was 36.25 by compacTscan and 47.5 by 1.5 T MRI. Pseudo-negative findings were noted in 5 regions. However, there was still good agreement between the techniques (kappa = 0.640). Total number of evaluated joints was 33. Synovitis was detected in 13 joints by compacTscan and 14 joints by 1.5 T MRI, while the total synovitis score was 30 by compacTscan and 32 by 1.5 T MRI. Thus, although 1 pseudo-positive and 2 pseudo-negative findings resulted from the joint evaluations, there was again excellent agreement between the techniques (kappa = 0.827). Overall, the data obtained by our compacTscan system showed high agreement with those obtained by conventional 1.5 T MRI with regard to diagnosis and the scoring of bone erosion, edema, and synovitis. We conclude that compacTscan is useful for diagnosis and estimation of disease activity in patients with RA.",
"title": ""
},
{
"docid": "50570741405703e6b47d285237b6eeed",
"text": "The knowledge base is a machine-readable set of knowledge. More and more multi-domain and large-scale knowledge bases have emerged in recent years, and they play an essential role in many information systems and semantic annotation tasks. However we do not have a perfect knowledge base yet and maybe we will never have a perfect one, because all the knowledge bases have limited coverage while new knowledge continues to emerge. Therefore populating and enriching the existing knowledge base become important tasks. Traditional knowledge base population task usually leverages the information embedded in the unstructured free text. Recently researchers found that massive structured tables on the Web are high-quality relational data and easier to be utilized than the unstructured text. Our goal of this paper is to enrich the knowledge base using Wikipedia tables. Here, knowledge means binary relations between entities and we focus on the relations in some specific domains. There are two basic types of information can be used in this task: the existing relation instances and the connection between types and relations. We firstly propose two basic probabilistic models based on two types of information respectively. Then we propose a light-weight aggregated model to combine the advantages of basic models. The experimental results show that our method is an effective approach to enriching the knowledge base with both high precision and recall.",
"title": ""
},
{
"docid": "5bc1c336b8e495e44649365f11af4ab8",
"text": "Convolutional neural networks (CNN) are limited by the lack of capability to handle geometric information due to the fixed grid kernel structure. The availability of depth data enables progress in RGB-D semantic segmentation with CNNs. State-of-the-art methods either use depth as additional images or process spatial information in 3D volumes or point clouds. These methods suffer from high computation and memory cost. To address these issues, we present Depth-aware CNN by introducing two intuitive, flexible and effective operations: depth-aware convolution and depth-aware average pooling. By leveraging depth similarity between pixels in the process of information propagation, geometry is seamlessly incorporated into CNN. Without introducing any additional parameters, both operators can be easily integrated into existing CNNs. Extensive experiments and ablation studies on challenging RGB-D semantic segmentation benchmarks validate the effectiveness and flexibility of our approach.",
"title": ""
},
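The depth-aware CNN passage above weights each neighbour's contribution by its depth similarity to the centre pixel so that geometry enters the convolution without extra parameters. A naive, loop-based, single-channel sketch of that operator is given below; the exponential similarity term and the alpha parameter are assumptions about the exact form used in the paper.

```python
import numpy as np

def depth_aware_conv2d(image, depth, kernel, alpha=1.0):
    """Single-channel depth-aware convolution: pixels whose depth differs from
    the centre pixel's depth contribute less to the output."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    H, W = image.shape
    out = np.zeros_like(image, dtype=float)
    for i in range(ph, H - ph):
        for j in range(pw, W - pw):
            acc = 0.0
            for di in range(-ph, ph + 1):
                for dj in range(-pw, pw + 1):
                    sim = np.exp(-alpha * abs(depth[i + di, j + dj] - depth[i, j]))
                    acc += sim * kernel[di + ph, dj + pw] * image[i + di, j + dj]
            out[i, j] = acc
    return out

img   = np.random.rand(16, 16)
depth = np.random.rand(16, 16)
k     = np.ones((3, 3)) / 9.0
feat  = depth_aware_conv2d(img, depth, k)
```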
{
"docid": "c9e11acaa2fbee77d079ecafbb9ae93a",
"text": "Alcohol consumption is highly prevalent in university students. Early detection in future health professionals is important: their consumption might not only influence their own health but may determine how they deal with the implementation of preventive strategies in the future. The aim of this paper is to detect the prevalence of risky alcohol consumption in first- and last-degree year students and to compare their drinking patterns.Risky drinking in pharmacy students (n=434) was assessed and measured with the AUDIT questionnaire (Alcohol Use Disorders Identification Test). A comparative analysis between college students from the first and fifth years of the degree in pharmacy, and that of a group of professors was carried to see differences in their alcohol intake patterns.Risky drinking was detected in 31.3% of students. The highest prevalence of risky drinkers, and the total score of the AUDIT test was found in students in their first academic year. Students in the first academic level taking morning classes had a two-fold risk of risky drinking (OR=1.9 (IC 95%1.1-3.1)) compared with students in the fifth level. The frequency of alcohol consumption increases with the academic level, whereas the number of alcohol beverages per drinking occasion falls.Risky drinking is high during the first year of university. As alcohol consumption might decrease with age, it is important to design preventive strategies that will strengthen this tendency.",
"title": ""
},
{
"docid": "b753eb752d4f87dbff82d77e8417f389",
"text": "Our research team has spent the last few years studying the cognitive processes involved in simultaneous interpreting. The results of this research have shown that professional interpreters develop specific ways of using their working memory, due to their work in simultaneous interpreting; this allows them to perform the processes of linguistic input, lexical and semantic access, reformulation and production of the segment translated both simultaneously and under temporal pressure (Bajo, Padilla & Padilla, 1998). This research led to our interest in the processes involved in the tasks of mediation in general. We understand that linguistic and cultural mediation involves not only translation but also the different forms of interpreting: consecutive and simultaneous. Our general objective in this project is to outline a cognitive theory of translation and interpreting and find empirical support for it. From the field of translation and interpreting there have been some attempts to create global and partial theories of the processes of mediation (Gerver, 1976; Moser-Mercer, 1997; Gile, 1997), but most of these attempts lack empirical support. On the other hand, from the field of psycholinguistics there have been some attempts to make an empirical study of the tasks of translation (De Groot, 1993; Sánchez-Casas Davis and GarcíaAlbea, 1992) and interpreting (McDonald and Carpenter, 1981), but these have always been partial, concentrating on very specific aspects of translation and interpreting. The specific objectives of this project are:",
"title": ""
},
{
"docid": "fa0f02cde08a3cee4b691788815cb757",
"text": "Control strategies for these contaminants will require a better understanding of how they move around the globe.",
"title": ""
},
{
"docid": "39803815c3edfaa2327327efaef80804",
"text": "Spatial pyramid matching (SPM) based pooling has been the dominant choice for state-of-art image classification systems. In contrast, we propose a novel object-centric spatial pooling (OCP) approach, following the intuition that knowing the location of the object of interest can be useful for image classification. OCP consists of two steps: (1) inferring the location of the objects, and (2) using the location information to pool foreground and background features separately to form the image-level representation. Step (1) is particularly challenging in a typical classification setting where precise object location annotations are not available during training. To address this challenge, we propose a framework that learns object detectors using only image-level class labels, or so-called weak labels. We validate our approach on the challenging PASCAL07 dataset. Our learned detectors are comparable in accuracy with stateof-the-art weakly supervised detection methods. More importantly, the resulting OCP approach significantly outperforms SPM-based pooling in image classification.",
"title": ""
},
{
"docid": "0f1a36a4551dc9c6b4ae127c34ff7330",
"text": "Internet of Things (IoT) is reshaping our daily lives by bridging the gaps between physical and digital world. To enable ubiquitous sensing, seamless connection and real-time processing for IoT applications, fog computing is considered as a key component in a heterogeneous IoT architecture, which deploys storage and computing resources to network edges. However, the fog-based IoT architecture can lead to various security and privacy risks, such as compromised fog nodes that may impede developments of IoT by attacking the data collection and gathering period. In this paper, we propose a novel privacy-preserving and reliable scheme for the fog-based IoT to address the data privacy and reliability challenges of the selective data aggregation service. Specifically, homomorphic proxy re-encryption and proxy re-authenticator techniques are respectively utilized to deal with the data privacy and reliability issues of the service, which supports data aggregation over selective data types for any type-driven applications. We define a new threat model to formalize the non-collusive and collusive attacks of compromised fog nodes, and it is demonstrated that the proposed scheme can prevent both non-collusive and collusive attacks in our model. In addition, performance evaluations show the efficiency of the scheme in terms of computational costs and communication overheads.",
"title": ""
},
{
"docid": "1a9e75efcc710b3bc8c5d450d29eea7c",
"text": "This paper presents the tuning of the structure and parameters of a neural network using an improved genetic algorithm (GA). It is also shown that the improved GA performs better than the standard GA based on some benchmark test functions. A neural network with switches introduced to its links is proposed. By doing this, the proposed neural network can learn both the input-output relationships of an application and the network structure using the improved GA. The number of hidden nodes is chosen manually by increasing it from a small number until the learning performance in terms of fitness value is good enough. Application examples on sunspot forecasting and associative memory are given to show the merits of the improved GA and the proposed neural network.",
"title": ""
},
{
"docid": "912c92dd4755cfb280f948bd4264ded7",
"text": "A decision is a commitment to a proposition or plan of action based on information and values associated with the possible outcomes. The process operates in a flexible timeframe that is free from the immediacy of evidence acquisition and the real time demands of action itself. Thus, it involves deliberation, planning, and strategizing. This Perspective focuses on perceptual decision making in nonhuman primates and the discovery of neural mechanisms that support accuracy, speed, and confidence in a decision. We suggest that these mechanisms expose principles of cognitive function in general, and we speculate about the challenges and directions before the field.",
"title": ""
},
{
"docid": "a671c6eff981b5e3a0466e53f22c4521",
"text": "This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate adversarial risk as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as obscurity to an adversary, and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.",
"title": ""
},
{
"docid": "f8d50c7fe96fdf8fbe06332ab7e1a2a6",
"text": "There is a strong need for advanced control methods in battery management systems, especially in the plug-in hybrid and electric vehicles sector, due to cost and safety issues of new high-power battery packs and high-energy cell design. Limitations in computational speed and available memory require the use of very simple battery models and basic control algorithms, which in turn result in suboptimal utilization of the battery. This work investigates the possible use of optimal control strategies for charging. We focus on the minimum time charging problem, where different constraints on internal battery states are considered. Based on features of the open-loop optimal charging solution, we propose a simple one-step predictive controller, which is shown to recover the time-optimal solution, while being feasible for real-time computations. We present simulation results suggesting a decrease in charging time by 50% compared to the conventional constant-current / constant-voltage method for lithium-ion batteries.",
"title": ""
},
{
"docid": "e05270c1d2abeda1cee99f1097c1c5d5",
"text": "E-transactions have become promising and very much convenient due to worldwide and usage of the internet. The consumer reviews are increasing rapidly in number on various products. These large numbers of reviews are beneficial to manufacturers and consumers alike. It is a big task for a potential consumer to read all reviews to make a good decision of purchasing. It is beneficial to mine available consumer reviews for popular products from various product review sites of consumer. The first step is performing sentiment analysis to decide the polarity of a review. On the basis of polarity, we can then classify the review. Comparison is made among the different WEKA classifiers in the form of charts and graphs.",
"title": ""
},
{
"docid": "dbf683e908ea9e5962d0830e6b8d24fd",
"text": "This paper studies physical layer security in a wireless ad hoc network with numerous legitimate transmitter–receiver pairs and eavesdroppers. A hybrid full-duplex (FD)/half-duplex receiver deployment strategy is proposed to secure legitimate transmissions, by letting a fraction of legitimate receivers work in the FD mode sending jamming signals to confuse eavesdroppers upon their information receptions, and letting the other receivers work in the half-duplex mode just receiving their desired signals. The objective of this paper is to choose properly the fraction of FD receivers for achieving the optimal network security performance. Both accurate expressions and tractable approximations for the connection outage probability and the secrecy outage probability of an arbitrary legitimate link are derived, based on which the area secure link number, network-wide secrecy throughput, and network-wide secrecy energy efficiency are optimized, respectively. Various insights into the optimal fraction are further developed, and its closed-form expressions are also derived under perfect self-interference cancellation or in a dense network. It is concluded that the fraction of FD receivers triggers a non-trivial tradeoff between reliability and secrecy, and the proposed strategy can significantly enhance the network security performance.",
"title": ""
}
] |
scidocsrr
|
08d3c2023248fc0fa4a853f3fd55733b
|
Measurement Issues in Galvanic Intrabody Communication: Influence of Experimental Setup
|
[
{
"docid": "ff0b13d3841913de36104e37cc893b26",
"text": "Modeling of intrabody communication (IBC) entails the understanding of the interaction between electromagnetic fields and living tissues. At the same time, an accurate model can provide practical hints toward the deployment of an efficient and secure communication channel for body sensor networks. In the literature, two main IBC coupling techniques have been proposed: galvanic and capacitive coupling. Nevertheless, models that are able to emulate both coupling approaches have not been reported so far. In this paper, a simple model based on a distributed parameter structure with the flexibility to adapt to both galvanic and capacitive coupling has been proposed. In addition, experimental results for both coupling methods were acquired by means of two harmonized measurement setups. The model simulations have been subsequently compared with the experimental data, not only to show their validity but also to revise the practical frequency operation range for both techniques. Finally, the model, along with the experimental results, has also allowed us to provide some practical rules to optimally tackle IBC design.",
"title": ""
},
{
"docid": "df1124c8b5b3295f09da347d19f152f6",
"text": "The signal transmission mechanism on the surface of the human body is studied for the application to body channel communication (BCC). From Maxwell's equations, the complete equation of electrical field on the human body is developed to obtain a general BCC model. The mechanism of BCC consists of three parts according to the operating frequencies and channel distances: the quasi-static near-field coupling part, the reactive induction-field radiation part, and the surface wave far-field propagation part. The general BCC model by means of the near-field and far-field approximation is developed to be valid in the frequency range from 100 kHz to 100 MHz and distance up to 1.3 m based on the measurements of the body channel characteristics. Finally, path loss characteristics of BCC are formulated for the design of BCC systems and many potential applications.",
"title": ""
},
{
"docid": "704611db1aea020103b093a2156cd94d",
"text": "With the growing number of wearable devices and applications, there is an increasing need for a flexible body channel communication (BCC) system that supports both scalable data rate and low power operation. In this paper, a highly flexible frequency-selective digital transmission (FSDT) transmitter that supports both data scalability and low power operation with the aid of two novel implementation methods is presented. In an FSDT system, data rate is limited by the number of Walsh spreading codes available for use in the optimal body channel band of 40-80 MHz. The first method overcomes this limitation by applying multi-level baseband coding scheme to a carrierless FSDT system to enhance the bandwidth efficiency and to support a data rate of 60 Mb/s within a 40-MHz bandwidth. The proposed multi-level coded FSDT system achieves six times higher data rate as compared to other BCC systems. The second novel implementation method lies in the use of harmonic frequencies of a Walsh encoded FSDT system that allows the BCC system to operate in the optimal channel bandwidth between 40-80 MHz with half the clock frequency. Halving the clock frequency results in a power consumption reduction of 32%. The transmitter was fabricated in a 65-nm CMOS process. It occupies a core area of 0.24 × 0.3 mm 2. When operating under a 60-Mb/s data-rate mode, the transmitter consumes 1.85 mW and it consumes only 1.26 mW when operating under a 5-Mb/s data-rate mode.",
"title": ""
}
] |
[
{
"docid": "0da9197d2f6839d01560b46cbb1fbc8d",
"text": "Estimating the traversability of rough terrain is a critical task for an outdoor mobile robot. While classifying structured environment can be learned from large number of training data, it is an extremely difficult task to learn and estimate traversability of unstructured rough terrain. Moreover, in many cases information from a single sensor may not be sufficient for estimating traversability reliably in the absence of artificial landmarks such as lane markings or curbs. Our approach estimates traversability of the terrain and build a 2D probabilistic grid map online using 3D-LIDAR and camera. The combination of LIDAR and camera is favoured in many robotic application because they provide complementary information. Our approach assumes the data captured by these two sensors are independent and build separate traversability maps, each with information captured from one sensor. Traversability estimation with vision sensor autonomously collects training data and update classifier without human intervention as the vehicle traverse the terrain. Traversability estimation with 3D-LIDAR measures the slopes of the ground to predict the traversability. Two independently built probabilistic maps are fused using Bayes' rule to improve the detection performance. This is in contrast with other methods in which each sensor performs different tasks. We have implemented the algorithm on a UGV(Unmanned Ground Vehicle) and tested our approach on a rough terrain to evaluate the detection performance.",
"title": ""
},
{
"docid": "cb95831a960ae9ec2d1ea4279cfa6ac2",
"text": "In vivo fluorescence imaging suffers from suboptimal signal-to-noise ratio and shallow detection depth, which is caused by the strong tissue autofluorescence under constant external excitation and the scattering and absorption of short-wavelength light in tissues. Here we address these limitations by using a novel type of optical nanoprobes, photostimulable LiGa5O8:Cr(3+) near-infrared (NIR) persistent luminescence nanoparticles, which, with very-long-lasting NIR persistent luminescence and unique photo-stimulated persistent luminescence (PSPL) capability, allow optical imaging to be performed in an excitation-free and hence, autofluorescence-free manner. LiGa5O8:Cr(3+) nanoparticles pre-charged by ultraviolet light can be repeatedly (>20 times) stimulated in vivo, even in deep tissues, by short-illumination (~15 seconds) with a white light-emitting-diode flashlight, giving rise to multiple NIR PSPL that expands the tracking window from several hours to more than 10 days. Our studies reveal promising potential of these nanoprobes in cell tracking and tumor targeting, exhibiting exceptional sensitivity and penetration that far exceed those afforded by conventional fluorescence imaging.",
"title": ""
},
{
"docid": "faa8bb95a4b05bed78dbdfaec1cd147c",
"text": "This paper describes the SimBow system submitted at SemEval2017-Task3, for the question-question similarity subtask B. The proposed approach is a supervised combination of different unsupervised textual similarities. These textual similarities rely on the introduction of a relation matrix in the classical cosine similarity between bag-of-words, so as to get a softcosine that takes into account relations between words. According to the type of relation matrix embedded in the soft-cosine, semantic or lexical relations can be considered. Our system ranked first among the official submissions of subtask B.",
"title": ""
},
{
"docid": "e2c2d56a92aa66453804c552ad0892b9",
"text": "By analyzing the relationship of S-parameter between two-port differential and four-port single-ended networks, a method is found for measuring the S-parameter of a differential amplifier on wafer by using a normal two-port vector network analyzer. With this method, it should not especially purchase a four-port vector network analyzer. Furthermore, the method was also suitable for measuring S-parameter of any multi-port circuit by using two-ports measurement set.",
"title": ""
},
{
"docid": "75ed78f9a59ec978432f16fd4407df60",
"text": "The transition from user requirements to UML diagrams is a difficult task for the designer espec ially when he handles large texts expressing these needs. Modelin g class Diagram must be performed frequently, even during t he development of a simple application. This paper prop oses an approach to facilitate class diagram extraction from textual requirements using NLP techniques and domain ontolog y. Keywords-component; Class Diagram, Natural Language Processing, GATE, Domain ontology, requirements.",
"title": ""
},
{
"docid": "147b8f02031ba9bc8788600dc48301c9",
"text": "This paper gives an overview on different research activities on electronically steerable antennas at Ka-band within the framework of the SANTANA project. In addition, it gives an outlook on future objectives, namely the perspective of testing SANTANA technologies with the projected German research satellite “Heinrich Hertz”.",
"title": ""
},
{
"docid": "af983aa7ac103dd41dfd914af452758f",
"text": "The fast-growing nature of instant messaging applications usage on Android mobile devices brought about a proportional increase on the number of cyber-attack vectors that could be perpetrated on them. Android mobile phones store significant amount of information in the various memory partitions when Instant Messaging (IM) applications (WhatsApp, Skype, and Facebook) are executed on them. As a result of the enormous crimes committed using instant messaging applications, and the amount of electronic based traces of evidence that can be retrieved from the suspect’s device where an investigation could convict or refute a person in the court of law and as such, mobile phones have become a vulnerable ground for digital evidence mining. This paper aims at using forensic tools to extract and analyse left artefacts digital evidence from IM applications on Android phones using android studio as the virtual machine. Digital forensic investigation methodology by Bill Nelson was applied during this research. Some of the key results obtained showed how digital forensic evidence such as call logs, contacts numbers, sent/retrieved messages, and images can be mined from simulated android phones when running these applications. These artefacts can be used in the court of law as evidence during cybercrime investigation.",
"title": ""
},
{
"docid": "1ebb333d5a72c649cd7d7986f5bf6975",
"text": "\"Of what a strange nature is knowledge! It clings to the mind, when it has once seized on it, like a lichen on the rock,\" Abstract We describe a theoretical system intended to facilitate the use of knowledge In an understand ing system. The notion of script is introduced to account for knowledge about mundane situations. A program, SAM, is capable of using scripts to under stand. The notion of plans is introduced to ac count for general knowledge about novel situa tions. I. Preface In an attempt to provide theory where there have been mostly unrelated systems, Minsky (1974) recently described the as fitting into the notion of \"frames.\" Minsky at tempted to relate this work, in what is essentially language processing, to areas of vision research that conform to the same notion. Mlnsky's frames paper has created quite a stir in AI and some immediate spinoff research along the lines of developing frames manipulators (e.g. Bobrow, 1975; Winograd, 1975). We find that we agree with much of what Minsky said about frames and with his characterization of our own work. The frames idea is so general, however, that It does not lend itself to applications without further specialization. This paper is an attempt to devel op further the lines of thought set out in Schank (1975a) and Abelson (1973; 1975a). The ideas pre sented here can be viewed as a specialization of the frame idea. We shall refer to our central constructs as \"scripts.\" II. The Problem Researchers in natural language understanding have felt for some time that the eventual limit on the solution of our problem will be our ability to characterize world knowledge. Various researchers have approached world knowledge in various ways. Winograd (1972) dealt with the problem by severely restricting the world. This approach had the po sitive effect of producing a working system and the negative effect of producing one that was only minimally extendable. Charniak (1972) approached the problem from the other end entirely and has made some interesting first steps, but because his work is not grounded in any representational sys tem or any working computational system the res triction of world knowledge need not critically concern him. Our feeling is that an effective characteri zation of knowledge can result in a real under standing system in the not too distant future. We expect that programs based on the theory we out …",
"title": ""
},
{
"docid": "a8da8a2d902c38c6656ea5db841a4eb1",
"text": "The uses of the World Wide Web on the Internet for commerce and information access continue to expand. The e-commerce business has proven to be a promising channel of choice for consumers as it is gradually transforming into a mainstream business activity. However, lack of trust has been identified as a major obstacle to the adoption of online shopping. Empirical study of online trust is constrained by the shortage of high-quality measures of general trust in the e-commence contexts. Based on theoretical or empirical studies in the literature of marketing or information system, nine factors have sound theoretical sense and support from the literature. A survey method was used for data collection in this study. A total of 172 usable questionnaires were collected from respondents. This study presents a new set of instruments for use in studying online trust of an individual. The items in the instrument were analyzed using a factors analysis. The results demonstrated reliable reliability and validity in the instrument.This study identified seven factors has a significant impact on online trust. The seven dominant factors are reputation, third-party assurance, customer service, propensity to trust, website quality, system assurance and brand. As consumers consider that doing business with online vendors involves risk and uncertainty, online business organizations need to overcome these barriers. Further, implication of the finding also provides e-commerce practitioners with guideline for effectively engender online customer trust.",
"title": ""
},
{
"docid": "5289fc231c716e2ce9e051fb0652ce94",
"text": "Noninvasive body contouring has become one of the fastest-growing areas of esthetic medicine. Many patients appear to prefer nonsurgical less-invasive procedures owing to the benefits of fewer side effects and shorter recovery times. Increasingly, 635-nm low-level laser therapy (LLLT) has been used in the treatment of a variety of medical conditions and has been shown to improve wound healing, reduce edema, and relieve acute pain. Within the past decade, LLLT has also emerged as a new modality for noninvasive body contouring. Research has shown that LLLT is effective in reducing overall body circumference measurements of specifically treated regions, including the hips, waist, thighs, and upper arms, with recent studies demonstrating the long-term effectiveness of results. The treatment is painless, and there appears to be no adverse events associated with LLLT. The mechanism of action of LLLT in body contouring is believed to stem from photoactivation of cytochrome c oxidase within hypertrophic adipocytes, which, in turn, affects intracellular secondary cascades, resulting in the formation of transitory pores within the adipocytes' membrane. The secondary cascades involved may include, but are not limited to, activation of cytosolic lipase and nitric oxide. Newly formed pores release intracellular lipids, which are further metabolized. Future studies need to fully outline the cellular and systemic effects of LLLT as well as determine optimal treatment protocols.",
"title": ""
},
{
"docid": "bf257fae514c28dc3b4c31ff656a00e9",
"text": "The objective of the present study is to evaluate the acute effects of low-level laser therapy (LLLT) on functional capacity, perceived exertion, and blood lactate in hospitalized patients with heart failure (HF). Patients diagnosed with systolic HF (left ventricular ejection fraction <45 %) were randomized and allocated prospectively into two groups: placebo LLLT group (n = 10)—subjects who were submitted to placebo laser and active LLLT group (n = 10)—subjects who were submitted to active laser. The 6-min walk test (6MWT) was performed, and blood lactate was determined at rest (before LLLT application and 6MWT), immediately after the exercise test (time 0) and recovery (3, 6, and 30 min). A multi-diode LLLT cluster probe (DMC, São Carlos, Brazil) was used. Both groups increased 6MWT distance after active or placebo LLLT application compared to baseline values (p = 0.03 and p = 0.01, respectively); however, no difference was observed during intergroup comparison. The active LLLT group showed a significant reduction in the perceived exertion Borg (PEB) scale compared to the placebo LLLT group (p = 0.006). In addition, the group that received active LLLT showed no statistically significant difference for the blood lactate level through the times analyzed. The placebo LLLT group demonstrated a significant increase in blood lactate between the rest and recovery phase (p < 0.05). Acute effects of LLLT irradiation on skeletal musculature were not able to improve the functional capacity of hospitalized patients with HF, although it may favorably modulate blood lactate metabolism and reduce perceived muscle fatigue.",
"title": ""
},
{
"docid": "1dd8fdb5f047e58f60c228e076aa8b66",
"text": "Recurrent Neural Network Language Models (RNN-LMs) have recently shown exceptional performance across a variety of applications. In this paper, we modify the architecture to perform Language Understanding, and advance the state-of-the-art for the widely used ATIS dataset. The core of our approach is to take words as input as in a standard RNN-LM, and then to predict slot labels rather than words on the output side. We present several variations that differ in the amount of word context that is used on the input side, and in the use of non-lexical features. Remarkably, our simplest model produces state-of-the-art results, and we advance state-of-the-art through the use of bagof-words, word embedding, named-entity, syntactic, and wordclass features. Analysis indicates that the superior performance is attributable to the task-specific word representations learned by the RNN.",
"title": ""
},
{
"docid": "ba1b3fb5f147b5af173e5f643a2794e0",
"text": "The objective of this study is to examine how personal factors such as lifestyle, personality, and economic situations affect the consumer behavior of Malaysian university students. A quantitative approach was adopted and a self-administered questionnaire was distributed to collect data from university students. Findings illustrate that ‘personality’ influences the consumer behavior among Malaysian university student. This study also noted that the economic situation had a negative relationship with consumer behavior. Findings of this study improve our understanding of consumer behavior of Malaysian University Students. The findings of this study provide valuable insights in identifying and taking steps to improve on the services, ambience, and needs of the student segment of the Malaysian market.",
"title": ""
},
{
"docid": "71fe8a71c2855499834b2f6a60b2a759",
"text": "The pomegranate, Punica granatum L., is an ancient, mystical, unique fruit borne on a small, long-living tree cultivated throughout the Mediterranean region, as far north as the Himalayas, in Southeast Asia, and in California and Arizona in the United States. In addition to its ancient historical uses, pomegranate is used in several systems of medicine for a variety of ailments. The synergistic action of the pomegranate constituents appears to be superior to that of single constituents. In the past decade, numerous studies on the antioxidant, anticarcinogenic, and anti-inflammatory properties of pomegranate constituents have been published, focusing on treatment and prevention of cancer, cardiovascular disease, diabetes, dental conditions, erectile dysfunction, bacterial infections and antibiotic resistance, and ultraviolet radiation-induced skin damage. Other potential applications include infant brain ischemia, male infertility, Alzheimer's disease, arthritis, and obesity.",
"title": ""
},
{
"docid": "5718c733a80805c5dbb4333c2d298980",
"text": "{Portions reprinted, with permission from Keim et al. #2001 IEEE Abstract Simple presentation graphics are intuitive and easy-to-use, but show only highly aggregated data presenting only a very small number of data values (as in the case of bar charts) and may have a high degree of overlap occluding a significant portion of the data values (as in the case of the x-y plots). In this article, the authors therefore propose a generalization of traditional bar charts and x-y plots, which allows the visualization of large amounts of data. The basic idea is to use the pixels within the bars to present detailed information of the data records. The so-called pixel bar charts retain the intuitiveness of traditional bar charts while allowing very large data sets to be visualized in an effective way. It is shown that, for an effective pixel placement, a complex optimization problem has to be solved. The authors then present an algorithm which efficiently solves the problem. The application to a number of real-world ecommerce data sets shows the wide applicability and usefulness of this new idea, and a comparison to other well-known visualization techniques (parallel coordinates and spiral techniques) shows a number of clear advantages. Information Visualization (2002) 1, 20 – 34. DOI: 10.1057/palgrave/ivs/9500003",
"title": ""
},
{
"docid": "b290b3b9db5e620e8a049ad9cd68346b",
"text": "THE USE OF OBSERVATIONAL RESEARCH METHODS in the field of palliative care is vital to building the evidence base, identifying best practices, and understanding disparities in access to and delivery of palliative care services. As discussed in the introduction to this series, research in palliative care encompasses numerous areas in which the gold standard research design, the randomized controlled trial (RCT), is not appropriate, adequate, or even possible.1,2 The difficulties in conducting RCTs in palliative care include patient and family recruitment, gate-keeping by physicians, crossover contamination, high attrition rates, small sample sizes, and limited survival times. Furthermore, a number of important issues including variation in access to palliative care and disparities in the use and provision of palliative care simply cannot be answered without observational research methods. As research in palliative care broadens to encompass study designs other than the RCT, the collective understanding of the use, strengths, and limitations of observational research methods is critical. The goals of this first paper are to introduce the major types of observational study designs, discuss the issues of precision and validity, and provide practical insights into how to critically evaluate this literature in our field.",
"title": ""
},
{
"docid": "a8fe62e387610682f90018ca1a56ba04",
"text": "Aarskog-Scott syndrome (AAS), also known as faciogenital dysplasia (FGD, OMIM # 305400), is an X-linked disorder of recessive inheritance, characterized by short stature and facial, skeletal, and urogenital abnormalities. AAS is caused by mutations in the FGD1 gene (Xp11.22), with over 56 different mutations identified to date. We present the clinical and molecular analysis of four unrelated families of Mexican origin with an AAS phenotype, in whom FGD1 sequencing was performed. This analysis identified two stop mutations not previously reported in the literature: p.Gln664* and p.Glu380*. Phenotypically, every male patient met the clinical criteria of the syndrome, whereas discrepancies were found between phenotypes in female patients. Our results identify two novel mutations in FGD1, broadening the spectrum of reported mutations; and provide further delineation of the phenotypic variability previously described in AAS.",
"title": ""
},
{
"docid": "f9aa9bdad364b7c4b6a4b67120686d9a",
"text": "In this paper, we describe an SDN-based plastic architecture for 5G networks, designed to fulfill functional and performance requirements of new generation services and devices. The 5G logical architecture is presented in detail, and key procedures for dynamic control plane instantiation, device attachment, and service request and mobility management are specified. Key feature of the proposed architecture is flexibility, needed to support efficiently a heterogeneous set of services, including Machine Type Communication, Vehicle to X and Internet of Things traffic. These applications are imposing challenging targets, in terms of end-to-end latency, dependability, reliability and scalability. Additionally, backward compatibility with legacy systems is guaranteed by the proposed solution, and Control Plane and Data Plane are fully decoupled. The three levels of unified signaling unify Access, Non-access and Management strata, and a clean-slate forwarding layer, designed according to the software defined networking principle, replaces tunneling protocols for carrier grade mobility. Copyright © 2014 John Wiley & Sons, Ltd. *Correspondence R. Trivisonno, Huawei European Research Institute, Munich, Germany. E-mail: riccardo.trivisonno@huawei.com Received 13 October 2014; Revised 5 November 2014; Accepted 8 November 2014",
"title": ""
},
{
"docid": "b93022efa40379ca7cc410d8b10ba48e",
"text": "The shared nature of the network in today's multi-tenant datacenters implies that network performance for tenants can vary significantly. This applies to both production datacenters and cloud environments. Network performance variability hurts application performance which makes tenant costs unpredictable and causes provider revenue loss. Motivated by these factors, this paper makes the case for extending the tenant-provider interface to explicitly account for the network. We argue this can be achieved by providing tenants with a virtual network connecting their compute instances. To this effect, the key contribution of this paper is the design of virtual network abstractions that capture the trade-off between the performance guarantees offered to tenants, their costs and the provider revenue.\n To illustrate the feasibility of virtual networks, we develop Oktopus, a system that implements the proposed abstractions. Using realistic, large-scale simulations and an Oktopus deployment on a 25-node two-tier testbed, we demonstrate that the use of virtual networks yields significantly better and more predictable tenant performance. Further, using a simple pricing model, we find that the our abstractions can reduce tenant costs by up to 74% while maintaining provider revenue neutrality.",
"title": ""
},
{
"docid": "8f54f2c6e9736a63ea4a99f89090e0a2",
"text": "This article demonstrates how documents prepared in hypertext or word processor format can be saved in portable document format (PDF). These files are self-contained documents that that have the same appearance on screen and in print, regardless of what kind of computer or printer are used, and regardless of what software package was originally used to for their creation. PDF files are compressed documents, invariably smaller than the original files, hence allowing rapid dissemination and download.",
"title": ""
}
] |
scidocsrr
|
641cb2cdc570ee6410bc86e68ecb1800
|
PGX.D: a fast distributed graph processing engine
|
[
{
"docid": "e92ab865f33c7548c21ba99785912d03",
"text": "Given a query graph q and a data graph g, the subgraph isomorphism search finds all occurrences of q in g and is considered one of the most fundamental query types for many real applications. While this problem belongs to NP-hard, many algorithms have been proposed to solve it in a reasonable time for real datasets. However, a recent study has shown, through an extensive benchmark with various real datasets, that all existing algorithms have serious problems in their matching order selection. Furthermore, all algorithms blindly permutate all possible mappings for query vertices, often leading to useless computations. In this paper, we present an efficient and robust subgraph search solution, called TurboISO, which is turbo-charged with two novel concepts, candidate region exploration and the combine and permute strategy (in short, Comb/Perm). The candidate region exploration identifies on-the-fly candidate subgraphs (i.e, candidate regions), which contain embeddings, and computes a robust matching order for each candidate region explored. The Comb/Perm strategy exploits the novel concept of the neighborhood equivalence class (NEC). Each query vertex in the same NEC has identically matching data vertices. During subgraph isomorphism search, Comb/Perm generates only combinations for each NEC instead of permutating all possible enumerations. Thus, if a chosen combination is determined to not contribute to a complete solution, all possible permutations for that combination will be safely pruned. Extensive experiments with many real datasets show that TurboISO consistently and significantly outperforms all competitors by up to several orders of magnitude.",
"title": ""
},
{
"docid": "88862d86e43d491ec4368410a61c13fb",
"text": "With the proliferation of large, irregular, and sparse relational datasets, new storage and analysis platforms have arisen to fill gaps in performance and capability left by conventional approaches built on traditional database technologies and query languages. Many of these platforms apply graph structures and analysis techniques to enable users to ingest, update, query, and compute on the topological structure of the network represented as sets of edges relating sets of vertices. To store and process Facebook-scale datasets, software and algorithms must be able to support data sources with billions of edges, update rates of millions of updates per second, and complex analysis kernels. These platforms must provide intuitive interfaces that enable graph experts and novice programmers to write implementations of common graph algorithms. In this paper, we conduct a qualitative study and a performance comparison of 12 open source graph databases using four fundamental graph algorithms on networks containing up to 256 million edges.",
"title": ""
}
] |
[
{
"docid": "5f30867cb3071efa8fb0d34447b8a8f6",
"text": "Money laundering is a global problem that affects all countries to various degrees. Although, many countries take benefits from money laundering, by accepting the money from laundering but keeping the crime abroad, at the long run, “money laundering attracts crime”. Criminals come to know a country, create networks and eventually also locate their criminal activities there. Most financial institutions have been implementing antimoney laundering solutions (AML) to fight investment fraud. The key pillar of a strong Anti-Money Laundering system for any financial institution depends mainly on a well-designed and effective monitoring system. The main purpose of the Anti-Money Laundering transactions monitoring system is to identify potential suspicious behaviors embedded in legitimate transactions. This paper presents a monitor framework that uses various techniques to enhance the monitoring capabilities. This framework is depending on rule base monitoring, behavior detection monitoring, cluster monitoring and link analysis based monitoring. The monitor detection processes are based on a money laundering deterministic finite automaton that has been obtained from their corresponding regular expressions. Index Terms – Anti Money Laundering system, Money laundering monitoring and detecting, Cycle detection monitoring, Suspected Link monitoring.",
"title": ""
},
{
"docid": "9a7e491e4d4490f630b55a94703a6f00",
"text": "Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform stateof-the-art methods on multiple datasets by large margins.",
"title": ""
},
{
"docid": "2e8251644f82f3a965cf6360416eaaaa",
"text": "The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. We discuss the problem at two major levels of complexity: 1) \"actions\" and 2) \"activities.\" \"Actions\" are characterized by simple motion patterns typically executed by a single human. \"Activities\" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.",
"title": ""
},
{
"docid": "c3aaa53892e636f34d6923831a3b66bc",
"text": "OBJECTIVES\nTo evaluate whether 7-mm-long implants could be an alternative to longer implants placed in vertically augmented posterior mandibles.\n\n\nMATERIALS AND METHODS\nSixty patients with posterior mandibular edentulism with 7-8 mm bone height above the mandibular canal were randomized to either vertical augmentation with anorganic bovine bone blocks and delayed 5-month placement of ≥10 mm implants or to receive 7-mm-long implants. Four months after implant placement, provisional prostheses were delivered, replaced after 4 months, by definitive prostheses. The outcome measures were prosthesis and implant failures, any complications and peri-implant marginal bone levels. All patients were followed to 1 year after loading.\n\n\nRESULTS\nOne patient dropped out from the short implant group. In two augmented mandibles, there was not sufficient bone to place 10-mm-long implants possibly because the blocks had broken apart during insertion. One prosthesis could not be placed when planned in the 7 mm group vs. three prostheses in the augmented group, because of early failure of one implant in each patient. Four complications (wound dehiscence) occurred during graft healing in the augmented group vs. none in the 7 mm group. No complications occurred after implant placement. These differences were not statistically significant. One year after loading, patients of both groups lost an average of 1 mm of peri-implant bone. There no statistically significant differences in bone loss between groups.\n\n\nCONCLUSIONS\nWhen residual bone height over the mandibular canal is between 7 and 8 mm, 7 mm short implants might be a preferable choice than vertical augmentation, reducing the chair time, expenses and morbidity. These 1-year preliminary results need to be confirmed by follow-up of at least 5 years.",
"title": ""
},
{
"docid": "b8fa649e8b5a60a05aad257a0a364b51",
"text": "This work intends to build a Game Mechanics Ontology based on the mechanics category presented in BoardGameGeek.com vis à vis the formal concepts from the MDA framework. The 51 concepts presented in BoardGameGeek (BGG) as game mechanics are analyzed and arranged in a systemic way in order to build a domain sub-ontology in which the root concept is the mechanics as defined in MDA. The relations between the terms were built from its available descriptions as well as from the authors’ previous experiences. Our purpose is to show that a set of terms commonly accepted by players can lead us to better understand how players perceive the games components that are closer to the designer. The ontology proposed in this paper is not exhaustive. The intent of this work is to supply a tool to game designers, scholars, and others that see game artifacts as study objects or are interested in creating games. However, although it can be used as a starting point for games construction or study, the proposed Game Mechanics Ontology should be seen as the seed of a domain ontology encompassing game mechanics in general.",
"title": ""
},
{
"docid": "7fd1ac60f18827dbe10bc2c10f715ae9",
"text": "Sentiment analysis in Twitter is a field that has recently attracted research interest. Twitter is one of the most popular microblog platforms on which users can publish their thoughts and opinions. Sentiment analysis in Twitter tackles the problem of analyzing the tweets in terms of the opinion they express. This survey provides an overview of the topic by investigating and briefly describing the algorithms that have been proposed for sentiment analysis in Twitter. The presented studies are categorized according to the approach they follow. In addition, we discuss fields related to sentiment analysis in Twitter including Twitter opinion retrieval, tracking sentiments over time, irony detection, emotion detection, and tweet sentiment quantification, tasks that have recently attracted increasing attention. Resources that have been used in the Twitter sentiment analysis literature are also briefly presented. The main contributions of this survey include the presentation of the proposed approaches for sentiment analysis in Twitter, their categorization according to the technique they use, and the discussion of recent research trends of the topic and its related fields.",
"title": ""
},
{
"docid": "658fbe3164e93515d4222e634b413751",
"text": "A prediction market is a place where individuals can wager on the outcomes of future events. Those who forecast the outcome correctly win money, and if they forecast incorrectly, they lose money. People value money, so they are incentivized to forecast such outcomes as accurately as they can. Thus, the price of a prediction market can serve as an excellent indicator of how likely an event is to occur [1, 2]. Augur is a decentralized platform for prediction markets. Our goal here is to provide a blueprint of a decentralized prediction market using Bitcoin’s input/output-style transactions. Many theoretical details of this project, such as its game-theoretic underpinning, are touched on lightly or not at all. This work builds on (and is intended to be read as a companion to) the theoretical foundation established in [3].",
"title": ""
},
{
"docid": "2f7e5807415398cb95f8f1ab36a0438f",
"text": "We present a Convolutional Neural Network (CNN) regression based framework for 2-D/3-D medical image registration, which directly estimates the transformation parameters from image features extracted from the DRR and the X-ray images using learned hierarchical regressors. Our framework consists of learning and application stages. In the learning stage, CNN regressors are trained using supervised machine learning to reveal the correlation between the transformation parameters and the image features. In the application stage, CNN regressors are applied on extracted image features in a hierarchical manner to estimate the transformation parameters. Our experiment results demonstrate that the proposed method can achieve real-time 2-D/3-D registration with very high (i.e., sub-milliliter) accuracy.",
"title": ""
},
{
"docid": "083cb6546aecdc12c2a1e36a9b8d9b67",
"text": "Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model. Both versions leverage a careful initialization of the parameters, the denoising effect of language models and automatic generation of parallel data by iterative back-translation. These models are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters. On the widely used WMT’14 English-French and WMT’16 German-English benchmarks, our models respectively obtain 28.1 and 25.2 BLEU points without using a single parallel sentence, outperforming the state of the art by more than 11 BLEU points. On low-resource languages like English-Urdu and English-Romanian, our methods achieve even better results than semisupervised and supervised approaches leveraging the paucity of available bitexts. Our code for NMT and PBSMT is publicly available.1",
"title": ""
},
{
"docid": "12f5447d9e83890c3e953e03a2e92c8f",
"text": "BACKGROUND\nLong-term continuous systolic blood pressure (SBP) and heart rate (HR) monitors are of tremendous value to medical (cardiovascular, circulatory and cerebrovascular management), wellness (emotional and stress tracking) and fitness (performance monitoring) applications, but face several major impediments, such as poor wearability, lack of widely accepted robust SBP models and insufficient proofing of the generalization ability of calibrated models.\n\n\nMETHODS\nThis paper proposes a wearable cuff-less electrocardiography (ECG) and photoplethysmogram (PPG)-based SBP and HR monitoring system and many efforts are made focusing on above challenges. Firstly, both ECG/PPG sensors are integrated into a single-arm band to provide a super wearability. A highly convenient but challenging single-lead configuration is proposed for weak single-arm-ECG acquisition, instead of placing the electrodes on the chest, or two wrists. Secondly, to identify heartbeats and estimate HR from the motion artifacts-sensitive weak arm-ECG, a machine learning-enabled framework is applied. Then ECG-PPG heartbeat pairs are determined for pulse transit time (PTT) measurement. Thirdly, a PTT&HR-SBP model is applied for SBP estimation, which is also compared with many PTT-SBP models to demonstrate the necessity to introduce HR information in model establishment. Fourthly, the fitted SBP models are further evaluated on the unseen data to illustrate the generalization ability. A customized hardware prototype was established and a dataset collected from ten volunteers was acquired to evaluate the proof-of-concept system.\n\n\nRESULTS\nThe semi-customized prototype successfully acquired from the left upper arm the PPG signal, and the weak ECG signal, the amplitude of which is only around 10% of that of the chest-ECG. The HR estimation has a mean absolute error (MAE) and a root mean square error (RMSE) of only 0.21 and 1.20 beats per min, respectively. Through the comparative analysis, the PTT&HR-SBP models significantly outperform the PTT-SBP models. The testing performance is 1.63 ± 4.44, 3.68, 4.71 mmHg in terms of mean error ± standard deviation, MAE and RMSE, respectively, indicating a good generalization ability on the unseen fresh data.\n\n\nCONCLUSIONS\nThe proposed proof-of-concept system is highly wearable, and its robustness is thoroughly evaluated on different modeling strategies and also the unseen data, which are expected to contribute to long-term pervasive hypertension, heart health and fitness management.",
"title": ""
},
{
"docid": "7e74cc21787c1e21fd64a38f1376c6a9",
"text": "The Bidirectional Reflectance Distribution Function (BRDF) describes the appearance of a material by its interaction with light at a surface point. A variety of analytical models have been proposed to represent BRDFs. However, analysis of these models has been scarce due to the lack of high-resolution measured data. In this work we evaluate several well-known analytical models in terms of their ability to fit measured BRDFs. We use an existing high-resolution data set of a hundred isotropic materials and compute the best approximation for each analytical model. Furthermore, we have built a new setup for efficient acquisition of anisotropic BRDFs, which allows us to acquire anisotropic materials at high resolution. We have measured four samples of anisotropic materials (brushed aluminum, velvet, and two satins). Based on the numerical errors, function plots, and rendered images we provide insights into the performance of the various models. We conclude that for most isotropic materials physically-based analytic reflectance models can represent their appearance quite well. We illustrate the important difference between the two common ways of defining the specular lobe: around the mirror direction and with respect to the half-vector. Our evaluation shows that the latter gives a more accurate shape for the reflection lobe. Our analysis of anisotropic materials indicates current parametric reflectance models cannot represent their appearances faithfully in many cases. We show that using a sampled microfacet distribution computed from measurements improves the fit and qualitatively reproduces the measurements.",
"title": ""
},
{
"docid": "6bd7a3d4b330972328257d958ec2730e",
"text": "Structured sparse coding and the related structured dictionary learning problems are novel research areas in machine learning. In this paper we present a new application of structured dictionary learning for collaborative filtering based recommender systems. Our extensive numerical experiments demonstrate that the presented method outperforms its state-of-the-art competitors and has several advantages over approaches that do not put structured constraints on the dictionary elements.",
"title": ""
},
{
"docid": "67b5bd59689c325365ac765a17886169",
"text": "L-Systems have traditionally been used as a popular method for the modelling of spacefilling curves, biological systems and morphogenesis. In this paper, we adapt string rewriting grammars based on L-Systems into a system for music composition. Representation of pitch, duration and timbre are encoded as grammar symbols, upon which a series of re-writing rules are applied. Parametric extensions to the grammar allow the specification of continuous data for the purposes of modulation and control. Such continuous data is also under control of the grammar. Using non-deterministic grammars with context sensitivity allows the simulation of Nth-order Markov models with a more economical representation than transition matrices and greater flexibility than previous composition models based on finite state automata or Petri nets. Using symbols in the grammar to represent relationships between notes, (rather than absolute notes) in combination with a hierarchical grammar representation, permits the emergence of complex music compositions from a relatively simple grammars.",
"title": ""
},
{
"docid": "ee37a743edd1b87d600dcf2d0050ca18",
"text": "Recommender systems play a crucial role in mitigating the problem of information overload by suggesting users' personalized items or services. The vast majority of traditional recommender systems consider the recommendation procedure as a static process and make recommendations following a fixed strategy. In this paper, we propose a novel recommender system with the capability of continuously improving its strategies during the interactions with users. We model the sequential interactions between users and a recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategies via recommending trial-and-error items and receiving reinforcements of these items from users' feedback. Users' feedback can be positive and negative and both types of feedback have great potentials to boost recommendations. However, the number of negative feedback is much larger than that of positive one; thus incorporating them simultaneously is challenging since positive feedback could be buried by negative one. In this paper, we develop a novel approach to incorporate them into the proposed deep recommender system (DEERS) framework. The experimental results based on real-world e-commerce data demonstrate the effectiveness of the proposed framework. Further experiments have been conducted to understand the importance of both positive and negative feedback in recommendations.",
"title": ""
},
{
"docid": "4737fe7f718f79c74595de40f8778da2",
"text": "In this paper we describe a method of procedurally generating maps using Markov chains. This method learns statistical patterns from human-authored maps, which are assumed to be of high quality. Our method then uses those learned patterns to generate new maps. We present a collection of strategies both for training the Markov chains, and for generating maps from such Markov chains. We then validate our approach using the game Super Mario Bros., by evaluating the quality of the produced maps based on different configurations for training and generation.",
"title": ""
},
{
"docid": "e8f46d6e58c070965f83ca244e15c3d6",
"text": "OBJECTIVES\nUrinalysis is one of the most commonly performed tests in the clinical laboratory. However, manual microscopic sediment examination is labor-intensive, time-consuming, and lacks standardization in high-volume laboratories. In this study, the concordance of analyses between manual microscopic examination and two different automatic urine sediment analyzers has been evaluated.\n\n\nDESIGN AND METHODS\n209 urine samples were analyzed by the Iris iQ200 ELITE (İris Diagnostics, USA), Dirui FUS-200 (DIRUI Industrial Co., China) automatic urine sediment analyzers and by manual microscopic examination. The degree of concordance (Kappa coefficient) and the rates within the same grading were evaluated.\n\n\nRESULTS\nFor erythrocytes, leukocytes, epithelial cells, bacteria, crystals and yeasts, the degree of concordance between the two instruments was better than the degree of concordance between the manual microscopic method and the individual devices. There was no concordance between all methods for casts.\n\n\nCONCLUSION\nThe results from the automated analyzers for erythrocytes, leukocytes and epithelial cells were similar to the result of microscopic examination. However, in order to avoid any error or uncertainty, some images (particularly: dysmorphic cells, bacteria, yeasts, casts and crystals) have to be analyzed by manual microscopic examination by trained staff. Therefore, the software programs which are used in automatic urine sediment analysers need further development to recognize urinary shaped elements more accurately. Automated systems are important in terms of time saving and standardization.",
"title": ""
},
{
"docid": "bcbba4f99e33ac0daea893e280068304",
"text": "Arterial plasma glucose values throughout a 24-h period average approximately 90 mg/dl, with a maximal concentration usually not exceeding 165 mg/dl such as after meal ingestion1 and remaining above 55 mg/dl such as after exercise2 or a moderate fast (60 h).3 This relative stability contrasts with the situation for other substrates such as glycerol, lactate, free fatty acids, and ketone bodies whose fluctuations are much wider (Table 2.1).4 This narrow range defining normoglycemia is maintained through an intricate regulatory and counterregulatory neuro-hormonal system: A decrement in plasma glucose as little as 20 mg/dl (from 90 to 70 mg/dl) will suppress the release of insulin and will decrease glucose uptake in certain areas in the brain (e.g., hypothalamus where glucose sensors are located); this will activate the sympathetic nervous system and trigger the release of counterregulatory hormones (glucagon, catecholamines, cortisol, and growth hormone).5 All these changes will increase glucose release into plasma and decrease its removal so as to restore normoglycemia. On the other hand, a 10 mg/dl increment in plasma glucose will stimulate insulin release and suppress glucagon secretion to prevent further increments and restore normoglycemia. Glucose in plasma either comes from dietary sources or is either the result of the breakdown of glycogen in liver (glycogenolysis) or the formation of glucose in liver and kidney from other carbons compounds (precursors) such as lactate, pyruvate, amino acids, and glycerol (gluconeogenesis). In humans, glucose removed from plasma may have different fates in different tissues and under different conditions (e.g., postabsorptive vs. postprandial), but the pathways for its disposal are relatively limited. It (1) may be immediately stored as glycogen or (2) may undergo glycolysis, which can be non-oxidative producing pyruvate (which can be reduced to lactate or transaminated to form alanine) or oxidative through conversion to acetyl CoA which is further oxidized through the tricarboxylic acid cycle to form carbon dioxide and water. Non-oxidative glycolysis carbons undergo gluconeogenesis and the newly formed glucose is either stored as glycogen or released back into plasma (Fig. 2.1).",
"title": ""
},
{
"docid": "e17a1429f4ca9de808caaa842ee5a441",
"text": "Large scale visual understanding is challenging, as it requires a model to handle the widely-spread and imbalanced distribution of 〈subject, relation, object〉 triples. In real-world scenarios with large numbers of objects and relations, some are seen very commonly while others are barely seen. We develop a new relationship detection model that embeds objects and relations into two vector spaces where both discriminative capability and semantic affinity are preserved. We learn a visual and a semantic module that map features from the two modalities into a shared space, where matched pairs of features have to discriminate against those unmatched, but also maintain close distances to semantically similar ones. Benefiting from that, our model can achieve superior performance even when the visual entity categories scale up to more than 80, 000, with extremely skewed class distribution. We demonstrate the efficacy of our model on a large and imbalanced benchmark based of Visual Genome that comprises 53, 000+ objects and 29, 000+ relations, a scale at which no previous work has been evaluated at. We show superiority of our model over competitive baselines on the original Visual Genome dataset with 80, 000+ categories. We also show state-of-the-art performance on the VRD dataset and the scene graph dataset which is a subset of Visual Genome with 200 categories.",
"title": ""
},
{
"docid": "486e3f5614f69f60d8703d8641c73416",
"text": "The Great East Japan Earthquake and Tsunami drastically changed Japanese society, and the requirements for ICT was completely redefined. After the disaster, it was impossible for disaster victims to utilize their communication devices, such as cellular phones, tablet computers, or laptop computers, to notify their families and friends of their safety and confirm the safety of their loved ones since the communication infrastructures were physically damaged or lacked the energy necessary to operate. Due to this drastic event, we have come to realize the importance of device-to-device communications. With the recent increase in popularity of D2D communications, many research works are focusing their attention on a centralized network operated by network operators and neglect the importance of decentralized infrastructureless multihop communication, which is essential for disaster relief applications. In this article, we propose the concept of multihop D2D communication network systems that are applicable to many different wireless technologies, and clarify requirements along with introducing open issues in such systems. The first generation prototype of relay by smartphone can deliver messages using only users' mobile devices, allowing us to send out emergency messages from disconnected areas as well as information sharing among people gathered in evacuation centers. The success of field experiments demonstrates steady advancement toward realizing user-driven networking powered by communication devices independent of operator networks.",
"title": ""
},
{
"docid": "754163e498679e1d3c1449424c03a71f",
"text": "J. K. Strosnider P. Nandi S. Kumaran S. Ghosh A. Arsanjani The current approach to the design, maintenance, and governance of service-oriented architecture (SOA) solutions has focused primarily on flow-driven assembly and orchestration of reusable service components. The practical application of this approach in creating industry solutions has been limited, because flow-driven assembly and orchestration models are too rigid and static to accommodate complex, real-world business processes. Furthermore, the approach assumes a rich, easily configured library of reusable service components when in fact the development, maintenance, and governance of these libraries is difficult. An alternative approach pioneered by the IBM Research Division, model-driven business transformation (MDBT), uses a model-driven software synthesis technology to automatically generate production-quality business service components from high-level business process models. In this paper, we present the business entity life cycle analysis (BELA) technique for MDBT-based SOA solution realization and its integration into serviceoriented modeling and architecture (SOMA), the end-to-end method from IBM for SOA application and solution development. BELA shifts the process-modeling paradigm from one that is centered on activities to one that is centered on entities. BELA teams process subject-matter experts with IT and data architects to identify and specify business entities and decompose business processes. Supporting synthesis tools then automatically generate the interacting business entity service components and their associated data stores and service interface definitions. We use a large-scale project as an example demonstrating the benefits of this innovation, which include an estimated 40 percent project cost reduction and an estimated 20 percent reduction in cycle time when compared with conventional SOA approaches.",
"title": ""
}
] |
scidocsrr
|
1b7342cc547f410c6e149ec7a5d69b16
|
Towards Personality-driven Persuasive Health Games and Gamified Systems
|
[
{
"docid": "372ab07026a861acd50e7dd7c605881d",
"text": "This paper reviews peer-reviewed empirical studies on gamification. We create a framework for examining the effects of gamification by drawing from the definitions of gamification and the discussion on motivational affordances. The literature review covers results, independent variables (examined motivational affordances), dependent variables (examined psychological/behavioral outcomes from gamification), the contexts of gamification, and types of studies performed on the gamified systems. The paper examines the state of current research on the topic and points out gaps in existing literature. The review indicates that gamification provides positive effects, however, the effects are greatly dependent on the context in which the gamification is being implemented, as well as on the users using it. The findings of the review provide insight for further studies as well as for the design of gamified systems.",
"title": ""
},
{
"docid": "8777063bfba463c05e46704f0ad2c672",
"text": "Amazon's Mechanical Turk is an online labor market where requesters post jobs and workers choose which jobs to do for pay. The central purpose of this article is to demonstrate how to use this Web site for conducting behavioral research and to lower the barrier to entry for researchers who could benefit from this platform. We describe general techniques that apply to a variety of types of research and experiments across disciplines. We begin by discussing some of the advantages of doing experiments on Mechanical Turk, such as easy access to a large, stable, and diverse subject pool, the low cost of doing experiments, and faster iteration between developing theory and executing experiments. While other methods of conducting behavioral research may be comparable to or even better than Mechanical Turk on one or more of the axes outlined above, we will show that when taken as a whole Mechanical Turk can be a useful tool for many researchers. We will discuss how the behavior of workers compares with that of experts and laboratory subjects. Then we will illustrate the mechanics of putting a task on Mechanical Turk, including recruiting subjects, executing the task, and reviewing the work that was submitted. We also provide solutions to common problems that a researcher might face when executing their research on this platform, including techniques for conducting synchronous experiments, methods for ensuring high-quality work, how to keep data private, and how to maintain code security.",
"title": ""
}
] |
[
{
"docid": "71da47c6837022a80dccabb0a1f5c00e",
"text": "The treatment of obesity and cardiovascular diseases is one of the most difficult and important challenges nowadays. Weight loss is frequently offered as a therapy and is aimed at improving some of the components of the metabolic syndrome. Among various diets, ketogenic diets, which are very low in carbohydrates and usually high in fats and/or proteins, have gained in popularity. Results regarding the impact of such diets on cardiovascular risk factors are controversial, both in animals and humans, but some improvements notably in obesity and type 2 diabetes have been described. Unfortunately, these effects seem to be limited in time. Moreover, these diets are not totally safe and can be associated with some adverse events. Notably, in rodents, development of nonalcoholic fatty liver disease (NAFLD) and insulin resistance have been described. The aim of this review is to discuss the role of ketogenic diets on different cardiovascular risk factors in both animals and humans based on available evidence.",
"title": ""
},
{
"docid": "16a5313b414be4ae740677597291d580",
"text": "We contribute a large scale database for 3D object recognition, named ObjectNet3D, that consists of 100 categories, 90,127 images, 201,888 objects in these images and 44,147 3D shapes. Objects in the 2D images in our database are aligned with the 3D shapes, and the alignment provides both accurate 3D pose annotation and the closest 3D shape annotation for each 2D object. Consequently, our database is useful for recognizing the 3D pose and 3D shape of objects from 2D images. We also provide baseline experiments on four tasks: region proposal generation, 2D object detection, joint 2D detection and 3D object pose estimation, and image-based 3D shape retrieval, which can serve as baselines for future research using our database. Our database is available online at http://cvgl.stanford.edu/projects/objectnet3d.",
"title": ""
},
{
"docid": "81387b0f93b68e8bd6a56a4fd81477e9",
"text": "We analyze microblog posts generated during two recent, concurrent emergency events in North America via Twitter, a popular microblogging service. We focus on communications broadcast by people who were \"on the ground\" during the Oklahoma Grassfires of April 2009 and the Red River Floods that occurred in March and April 2009, and identify information that may contribute to enhancing situational awareness (SA). This work aims to inform next steps for extracting useful, relevant information during emergencies using information extraction (IE) techniques.",
"title": ""
},
{
"docid": "47b9d5585a0ca7d10cb0fd9da673dd0f",
"text": "A novel deep architecture, the tensor deep stacking network (T-DSN), is presented. The T-DSN consists of multiple, stacked blocks, where each block contains a bilinear mapping from two hidden layers to the output layer, using a weight tensor to incorporate higher order statistics of the hidden binary (([0,1])) features. A learning algorithm for the T-DSN's weight matrices and tensors is developed and described in which the main parameter estimation burden is shifted to a convex subproblem with a closed-form solution. Using an efficient and scalable parallel implementation for CPU clusters, we train sets of T-DSNs in three popular tasks in increasing order of the data size: handwritten digit recognition using MNIST (60k), isolated state/phone classification and continuous phone recognition using TIMIT (1.1 m), and isolated phone classification using WSJ0 (5.2 m). Experimental results in all three tasks demonstrate the effectiveness of the T-DSN and the associated learning methods in a consistent manner. In particular, a sufficient depth of the T-DSN, a symmetry in the two hidden layers structure in each T-DSN block, our model parameter learning algorithm, and a softmax layer on top of T-DSN are shown to have all contributed to the low error rates observed in the experiments for all three tasks.",
"title": ""
},
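The passage above describes each T-DSN block as a bilinear map from two hidden layers to the output through a third-order weight tensor. The sketch below is a minimal NumPy illustration of that forward pass for a single block; the layer sizes, random weights, and sigmoid hidden nonlinearity are illustrative assumptions, not the authors' trained model or learning algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tdsn_block(x, W1, W2, U):
    """One T-DSN block: two hidden layers combined bilinearly via a weight tensor U."""
    h1 = sigmoid(W1 @ x)            # first hidden layer, shape (n1,)
    h2 = sigmoid(W2 @ x)            # second hidden layer, shape (n2,)
    # Bilinear output: y_k = sum_ij U[k, i, j] * h1[i] * h2[j]
    return np.einsum('kij,i,j->k', U, h1, h2)

# Toy dimensions (assumed): 20 inputs, two 8-unit hidden layers, 5 outputs
rng = np.random.default_rng(0)
x = rng.normal(size=20)
W1 = rng.normal(scale=0.1, size=(8, 20))
W2 = rng.normal(scale=0.1, size=(8, 20))
U = rng.normal(scale=0.1, size=(5, 8, 8))
print(tdsn_block(x, W1, W2, U))
```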
{
"docid": "1a26a00f0915e2eac01edf8cad0152c9",
"text": "This paper describes the application of Rao-Blackwellised Gibbs sampling (RBGS) to speech recognition using switching linear dynamical systems (SLDSs) as the acoustic model. The SLDS is a hybrid of standard hidden Markov models (HMMs) and linear dynamical systems. It is an extension of the stochastic segment model (SSM) where segments are assumed independent. SLDSs explicitly take into account the strong co-articulation present in speech using a Gauss-Markov process in a low dimensional, latent, state space. Unfortunately , inference in SLDS is intractable unless the discrete state sequence is known. RBGS is one approach that may be applied for both improved training and decoding for this form of intractable model. The theory of SLDS and RBGS is described, along with an efficient proposal distribution. The performance of the SLDS and SSM using RBGS for training and inference is evaluated on the ARPA Resource Management task.",
"title": ""
},
{
"docid": "da4ec6dcf7f47b8ec0261195db7af5ca",
"text": "Smart factories are on the verge of becoming the new industrial paradigm, wherein optimization permeates all aspects of production, from concept generation to sales. To fully pursue this paradigm, flexibility in the production means as well as in their timely organization is of paramount importance. AI is planning a major role in this transition, but the scenarios encountered in practice might be challenging for current tools. Task planning is one example where AI enables more efficient and flexible operation through an online automated adaptation and rescheduling of the activities to cope with new operational constraints and demands. In this paper we present SMarTplan, a task planner specifically conceived to deal with real-world scenarios in the emerging smart factory paradigm. Including both special-purpose and general-purpose algorithms, SMarTplan is based on current automated reasoning technology and it is designed to tackle complex application domains. In particular, we show its effectiveness on a logistic scenario, by comparing its specialized version with the general purpose one, and extending the comparison to other state-of-the-art task planners.",
"title": ""
},
{
"docid": "1ca4fbc998c41cec99abe68c5ebe944e",
"text": "Wheeled mobile robots are increasingly being utilized in unknown and dangerous situations such as planetary surface exploration. Based on force analysis of the differential joints and force analysis between the wheels and the ground, this paper established the quasi-static mathematical model of the 6-wheel mobile system of planetary exploration rover with rocker-bogie structure. Considering the constraint conditions, with the method of finding the wheels’friction force solution space feasible region, obstacle-climbing capability of the mobile mechanism was analyzed. Given the same obstacle heights and contact angles of wheel-ground, the single side forward obstacle-climbing of the wheels was simulated respectively, and the results show that the rear wheel has the best obstacle-climbing capability, the middle wheel is the worst, and the front wheel is moderate.",
"title": ""
},
{
"docid": "9b254da42083948029120552ede69652",
"text": "Smart contracts are computer programs that can be consistently executed by a network of mutually distrusting nodes, without the arbitration of a trusted authority. Because of their resilience to tampering , smart contracts are appealing in many scenarios, especially in those which require transfers of money to respect certain agreed rules (like in financial services and in games). Over the last few years many platforms for smart contracts have been proposed, and some of them have been actually implemented and used. We study how the notion of smart contract is interpreted in some of these platforms. Focussing on the two most widespread ones, Bitcoin and Ethereum, we quantify the usage of smart contracts in relation to their application domain. We also analyse the most common programming patterns in Ethereum, where the source code of smart contracts is available.",
"title": ""
},
{
"docid": "a41444799f295e5fc325626fd663d77d",
"text": "Lexicon-based approaches to Twitter sentiment analysis are gaining much popularity due to their simplicity, domain independence, and relatively good performance. These approaches rely on sentiment lexicons, where a collection of words are marked with fixed sentiment polarities. However, words’ sentiment orientation (positive, neural, negative) and/or sentiment strengths could change depending on context and targeted entities. In this paper we present SentiCircle; a novel lexicon-based approach that takes into account the contextual and conceptual semantics of words when calculating their sentiment orientation and strength in Twitter. We evaluate our approach on three Twitter datasets using three different sentiment lexicons. Results show that our approach significantly outperforms two lexicon baselines. Results are competitive but inconclusive when comparing to state-of-art SentiStrength, and vary from one dataset to another. SentiCircle outperforms SentiStrength in accuracy on average, but falls marginally behind in F-measure.",
"title": ""
},
{
"docid": "b4e56855d6f41c5829b441a7d2765276",
"text": "College student attendance management of class plays an important position in the work of management of college student, this can help to urge student to class on time, improve learning efficiency, increase learning grade, and thus entirely improve the education level of the school. Therefore, colleges need an information system platform of check attendance management of class strongly to enhance check attendance management of class using the information technology which gathers the basic information of student automatically. According to current reality and specific needs of check attendance and management system of college students and the exist device of the system. Combined with the study of college attendance system, this paper gave the node design of check attendance system of class which based on RFID on the basic of characteristics of embedded ARM and RFID technology.",
"title": ""
},
{
"docid": "88ffb30f1506bedaf7c1a3f43aca439e",
"text": "The multiprotein mTORC1 protein kinase complex is the central component of a pathway that promotes growth in response to insulin, energy levels, and amino acids and is deregulated in common cancers. We find that the Rag proteins--a family of four related small guanosine triphosphatases (GTPases)--interact with mTORC1 in an amino acid-sensitive manner and are necessary for the activation of the mTORC1 pathway by amino acids. A Rag mutant that is constitutively bound to guanosine triphosphate interacted strongly with mTORC1, and its expression within cells made the mTORC1 pathway resistant to amino acid deprivation. Conversely, expression of a guanosine diphosphate-bound Rag mutant prevented stimulation of mTORC1 by amino acids. The Rag proteins do not directly stimulate the kinase activity of mTORC1, but, like amino acids, promote the intracellular localization of mTOR to a compartment that also contains its activator Rheb.",
"title": ""
},
{
"docid": "ade88f8a9aa8a47dd2dc5153b3584695",
"text": "A software environment is described which provides facilities at a variety of levels for “animating” algorithms: exposing properties of programs by displaying multiple dynamic views of the program and associated data structures. The system is operational on a network of graphics-based, personal workstations and has been used successfully in several applications for teaching and research in computer science and mathematics. In this paper, we outline the conceptual framework that we have developed for animating algorithms, describe the system that we have implemented, and give several examples drawn from the host of algorithms that we have animated.",
"title": ""
},
{
"docid": "426a7c1572e9d68f4ed2429f143387d5",
"text": "Face tracking is an active area of computer vision research and an important building block for many applications. However, opposed to face detection, there is no common benchmark data set to evaluate a tracker’s performance, making it hard to compare results between different approaches. In this challenge we propose a data set, annotation guidelines and a well defined evaluation protocol in order to facilitate the evaluation of face tracking systems in the future.",
"title": ""
},
{
"docid": "5e9dce428a2bcb6f7bc0074d9fe5162c",
"text": "This paper describes a real-time motion planning algorithm, based on the rapidly-exploring random tree (RRT) approach, applicable to autonomous vehicles operating in an urban environment. Extensions to the standard RRT are predominantly motivated by: 1) the need to generate dynamically feasible plans in real-time; 2) safety requirements; 3) the constraints dictated by the uncertain operating (urban) environment. The primary novelty is in the use of closed-loop prediction in the framework of RRT. The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete a 60 mile simulated military supply mission, while safely interacting with other autonomous and human driven vehicles.",
"title": ""
},
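The passage above builds on the standard RRT. As a point of reference only, the following is a minimal 2-D RRT sketch (random sampling, nearest-node lookup, fixed-step steering). It deliberately omits the paper's key contribution, closed-loop prediction with a vehicle controller, as well as all obstacle and dynamics checks, so treat it as a generic baseline rather than the Team MIT planner.

```python
import math
import random

def rrt(start, goal, step=0.5, iters=2000, goal_tol=0.5, bounds=(0.0, 10.0)):
    """Bare-bones RRT in free 2-D space; returns a path from start toward goal."""
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Bias a small fraction of samples toward the goal
        sample = goal if random.random() < 0.05 else (
            random.uniform(*bounds), random.uniform(*bounds))
        # Nearest existing node
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        # Steer a fixed step toward the sample
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        nodes.append(new)
        parent[len(nodes) - 1] = i_near
        if math.dist(new, goal) < goal_tol:
            # Reconstruct the path by walking parents back to the root
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

print(rrt((1.0, 1.0), (9.0, 9.0)))
```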
{
"docid": "1a02d963590683c724a814f341f94f92",
"text": "The concept of the quality attribute scenario was introduced in 2003 to support the development of software architectures. This concept is useful because it provides an operational means to represent the quality requirements of a system. It also provides a more concrete basis with which to teach software architecture. Teaching this concept however has some unexpected issues. In this paper, I present my experiences of teaching quality attribute scenarios and outline Bus Tracker, a case study I have developed to support my teaching.",
"title": ""
},
{
"docid": "5935224c53222d0234adffddae23eb04",
"text": "The multipath-rich wireless environment associated with typical wireless usage scenarios is characterized by a fading channel response that is time-varying, location-sensitive, and uniquely shared by a given transmitter-receiver pair. The complexity associated with a richly scattering environment implies that the short-term fading process is inherently hard to predict and best modeled stochastically, with rapid decorrelation properties in space, time, and frequency. In this paper, we demonstrate how the channel state between a wireless transmitter and receiver can be used as the basis for building practical secret key generation protocols between two entities. We begin by presenting a scheme based on level crossings of the fading process, which is well-suited for the Rayleigh and Rician fading models associated with a richly scattering environment. Our level crossing algorithm is simple, and incorporates a self-authenticating mechanism to prevent adversarial manipulation of message exchanges during the protocol. Since the level crossing algorithm is best suited for fading processes that exhibit symmetry in their underlying distribution, we present a second and more powerful approach that is suited for more general channel state distributions. This second approach is motivated by observations from quantizing jointly Gaussian processes, but exploits empirical measurements to set quantization boundaries and a heuristic log likelihood ratio estimate to achieve an improved secret key generation rate. We validate both proposed protocols through experimentations using a customized 802.11a platform, and show for the typical WiFi channel that reliable secret key establishment can be accomplished at rates on the order of 10 b/s.",
"title": ""
},
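The level-crossing idea in the passage above can be illustrated with a small sketch: quantize the channel measurements with upper and lower thresholds and keep only excursions that stay above or below a threshold for at least m consecutive samples, emitting one bit per qualifying excursion. The thresholds, the excursion length m, and the simulated Rayleigh-like samples are illustrative assumptions; the actual protocol also includes index reconciliation and the self-authentication steps, which are not shown.

```python
import numpy as np

def level_crossing_bits(samples, m=3, alpha=0.5):
    """Extract candidate key bits from excursions above q+ or below q-."""
    mean, std = samples.mean(), samples.std()
    q_plus, q_minus = mean + alpha * std, mean - alpha * std
    # Map each sample to +1 (above q+), -1 (below q-), or 0 (in between)
    levels = np.where(samples > q_plus, 1, np.where(samples < q_minus, -1, 0))
    bits, run_val, run_len = [], 0, 0
    for v in levels:
        if v != 0 and v == run_val:
            run_len += 1
        else:
            if run_val != 0 and run_len >= m:
                bits.append(1 if run_val > 0 else 0)  # one bit per long excursion
            run_val, run_len = v, (1 if v != 0 else 0)
    if run_val != 0 and run_len >= m:
        bits.append(1 if run_val > 0 else 0)
    return bits

rng = np.random.default_rng(1)
rssi = rng.rayleigh(scale=1.0, size=500)   # stand-in for channel magnitude samples
print(level_crossing_bits(rssi))
```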
{
"docid": "6646b66370ed02eb84661c8505eb7563",
"text": "Re-identification is generally carried out by encoding the appearance of a subject in terms of outfit, suggesting scenarios where people do not change their attire. In this paper we overcome this restriction, by proposing a framework based on a deep convolutional neural network, SOMAnet, that additionally models other discriminative aspects, namely, structural attributes of the human figure (e.g. height, obesity, gender). Our method is unique in many respects. First, SOMAnet is based on the Inception architecture, departing from the usual siamese framework. This spares expensive data preparation (pairing images across cameras) and allows the understanding of what the network learned. Second, and most notably, the training data consists of a synthetic 100K instance dataset, SOMAset, created by photorealistic human body generation software. Synthetic data represents a good compromise between realistic imagery, usually not required in re-identification since surveillance cameras capture low-resolution silhouettes, and complete control of the samples, which is useful in order to customize the data w.r.t. the surveillance scenario at-hand, e.g. ethnicity. SOMAnet, trained on SOMAset and fine-tuned on recent re-identification benchmarks, outperforms all competitors, matching subjects even with different apparel. The combination of synthetic data with Inception architectures opens up new research avenues in re-identification.",
"title": ""
},
{
"docid": "fc1009e9515d83166e97e4e01ae9ca69",
"text": "In this paper, we present two large video multi-modal datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD) that has a total of more than 50000 gestures for the \"one-shot-learning\" competition. To increase the potential of the old dataset, we designed new well curated datasets composed of 249 gesture labels, and including 47933 gestures manually labeled the begin and end frames in sequences. Using these datasets we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for \"user independent\" gesture recognition. The first challenge is designed for gesture spotting and recognition in continuous sequences of gestures while the second one is designed for gesture classification from segmented data. The baseline method based on the bag of visual words model is also presented.",
"title": ""
},
{
"docid": "906659aa61bbdb5e904a1749552c4741",
"text": "The Rete–Match algorithm is a matching algorithm used to develop production systems. Although this algorithm is the fastest known algorithm, for many patterns and many objects matching, it still suffers from considerable amount of time needed due to the recursive nature of the problem. In this paper, a parallel version of the Rete–Match algorithm for distributed memory architecture is presented. Also, a theoretical analysis to its correctness and performance is discussed. q 1998 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "0ea07af19fc199f6a9909bd7df0576a1",
"text": "Detection of overlapping communities in complex networks has motivated recent research in the relevant fields. Aiming this problem, we propose a Markov dynamics based algorithm, called UEOC, which means, “unfold and extract overlapping communities”. In UEOC, when identifying each natural community that overlaps, a Markov random walk method combined with a constraint strategy, which is based on the corresponding annealed network (degree conserving random network), is performed to unfold the community. Then, a cutoff criterion with the aid of a local community function, called conductance, which can be thought of as the ratio between the number of edges inside the community and those leaving it, is presented to extract this emerged community from the entire network. The UEOC algorithm depends on only one parameter whose value can be easily set, and it requires no prior knowledge on the hidden community structures. The proposed UEOC has been evaluated both on synthetic benchmarks and on some real-world networks, and was compared with a set of competing algorithms. Experimental result has shown that UEOC is highly effective and efficient for discovering overlapping communities.",
"title": ""
}
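The passage above uses conductance as its cutoff criterion, describing it informally as a ratio between the edges inside a community and those leaving it. The snippet below counts both quantities for a candidate node set on a plain adjacency-list graph; note that the usual formal definition normalizes the cut by the set's volume, so this is an illustrative sketch of the quantity, not the exact UEOC criterion.

```python
def community_edge_counts(adj, community):
    """Count edges inside a node set and edges crossing its boundary."""
    community = set(community)
    inside = 0
    leaving = 0
    for u in community:
        for v in adj[u]:
            if v in community:
                inside += 1          # counted once from each endpoint
            else:
                leaving += 1
    inside //= 2                     # each internal edge was counted twice
    return inside, leaving

# Toy undirected graph: a 4-clique weakly attached to two outside nodes
adj = {
    0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
    4: [3, 5], 5: [4],
}
inside, leaving = community_edge_counts(adj, [0, 1, 2, 3])
print(inside, leaving)                       # 6 internal edges, 1 boundary edge
print(leaving / (2 * inside + leaving))      # standard conductance of the set
```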
] |
scidocsrr
|
71901f57a6acfafe99eb5e4efad3f2f5
|
Vision-Based Autonomous Navigation System Using ANN and FSM Control
|
[
{
"docid": "c4feca5e27cfecdd2913e18cc7b7a21a",
"text": "one component of intelligent transportation systems, IV systems use sensing and intelligent algorithms to understand the vehicle’s immediate environment, either assisting the driver or fully controlling the vehicle. Following the success of information-oriented systems, IV systems will likely be the “next wave” for ITS, functioning at the control layer to enable the driver–vehicle “subsystem” to operate more effectively. This column provides a broad overview of applications and selected activities in this field. IV application areas",
"title": ""
}
] |
[
{
"docid": "6f34ef57fcf0a2429e7dc2a3e56a99fd",
"text": "Service-Oriented Architecture (SOA) provides a flexible framework for service composition. Using standard-based protocols (such as SOAP and WSDL), composite services can be constructed by integrating atomic services developed independently. Algorithms are needed to select service components with various QoS levels according to some application-dependent performance requirements. We design a broker-based architecture to facilitate the selection of QoS-based services. The objective of service selection is to maximize an application-specific utility function under the end-to-end QoS constraints. The problem is modeled in two ways: the combinatorial model and the graph model. The combinatorial model defines the problem as a multidimension multichoice 0-1 knapsack problem (MMKP). The graph model defines the problem as a multiconstraint optimal path (MCOP) problem. Efficient heuristic algorithms for service processes of different composition structures are presented in this article and their performances are studied by simulations. We also compare the pros and cons between the two models.",
"title": ""
},
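The passage above models QoS-aware composition as an MMKP or multi-constraint optimal path problem. As a much simpler point of reference, the sketch below greedily picks one candidate per abstract service in a sequential flow, maximizing a utility score under a single end-to-end latency budget. The candidate names, utilities, and latency figures are made up, and a greedy pass is only a heuristic, not the broker algorithms evaluated in the paper.

```python
def greedy_compose(candidates, latency_budget):
    """Pick one candidate per service class; greedy on utility per unit latency."""
    plan, remaining = [], latency_budget
    for service, options in candidates.items():
        # Keep only options that still fit the remaining budget
        feasible = [o for o in options if o["latency"] <= remaining]
        if not feasible:
            return None  # no feasible composition under this budget
        best = max(feasible, key=lambda o: o["utility"] / max(o["latency"], 1e-9))
        plan.append((service, best["name"]))
        remaining -= best["latency"]
    return plan, latency_budget - remaining

# Hypothetical candidates for a three-step sequential process
candidates = {
    "payment":  [{"name": "P1", "utility": 0.9, "latency": 120},
                 {"name": "P2", "utility": 0.6, "latency": 40}],
    "shipping": [{"name": "S1", "utility": 0.8, "latency": 200},
                 {"name": "S2", "utility": 0.5, "latency": 60}],
    "invoice":  [{"name": "I1", "utility": 0.7, "latency": 80}],
}
print(greedy_compose(candidates, latency_budget=300))
```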
{
"docid": "caae0254ea28dad0abf2f65fcadc7971",
"text": "Deregulation within the financial service industries and the widespread acceptance of new technologies is increasing competition in the finance marketplace. Central to the business strategy of every financial service company is the ability to retain existing customers and reach new prospective customers. Data mining is adopted to play an important role in these efforts. In this paper, we present a data mining approach for analyzing retailing bank customer attrition. We discuss the challenging issues such as highly skewed data, time series data unrolling, leaker field detection etc, and the procedure of a data mining project for the attrition analysis for retailing bank customers. We use lift as a proper measure for attrition analysis and compare the lift of data mining models of decision tree, boosted naïve Bayesian network, selective Bayesian network, neural network and the ensemble of classifiers of the above methods. Some interesting findings are reported. Our research work demonstrates the effectiveness and efficiency of data mining in attrition analysis for retailing bank.",
"title": ""
},
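Lift, the measure the passage above argues for, is simply how much more concentrated the churners are in the top-scored fraction of customers than in the population overall. A minimal computation under that common definition is sketched below; the scores and labels are synthetic and stand in for a trained model's output.

```python
import numpy as np

def lift_at(scores, labels, fraction=0.1):
    """Lift = attrition rate in the top-scored fraction / overall attrition rate."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    k = max(1, int(len(scores) * fraction))
    top_idx = np.argsort(scores)[::-1][:k]      # highest-risk customers first
    return labels[top_idx].mean() / labels.mean()

rng = np.random.default_rng(7)
labels = rng.binomial(1, 0.05, size=10_000)      # ~5% churners
# A mildly informative score: churners tend to receive higher values
scores = rng.normal(size=10_000) + 1.5 * labels
print(round(lift_at(scores, labels, fraction=0.1), 2))
```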
{
"docid": "6fd3f4ab064535d38c01f03c0135826f",
"text": "BACKGROUND\nThere is evidence of under-detection and poor management of pain in patients with dementia, in both long-term and acute care. Accurate assessment of pain in people with dementia is challenging and pain assessment tools have received considerable attention over the years, with an increasing number of tools made available. Systematic reviews on the evidence of their validity and utility mostly compare different sets of tools. This review of systematic reviews analyses and summarises evidence concerning the psychometric properties and clinical utility of pain assessment tools in adults with dementia or cognitive impairment.\n\n\nMETHODS\nWe searched for systematic reviews of pain assessment tools providing evidence of reliability, validity and clinical utility. Two reviewers independently assessed each review and extracted data from them, with a third reviewer mediating when consensus was not reached. Analysis of the data was carried out collaboratively. The reviews were synthesised using a narrative synthesis approach.\n\n\nRESULTS\nWe retrieved 441 potentially eligible reviews, 23 met the criteria for inclusion and 8 provided data for extraction. Each review evaluated between 8 and 13 tools, in aggregate providing evidence on a total of 28 tools. The quality of the reviews varied and the reporting often lacked sufficient methodological detail for quality assessment. The 28 tools appear to have been studied in a variety of settings and with varied types of patients. The reviews identified several methodological limitations across the original studies. The lack of a 'gold standard' significantly hinders the evaluation of tools' validity. Most importantly, the samples were small providing limited evidence for use of any of the tools across settings or populations.\n\n\nCONCLUSIONS\nThere are a considerable number of pain assessment tools available for use with the elderly cognitive impaired population. However there is limited evidence about their reliability, validity and clinical utility. On the basis of this review no one tool can be recommended given the existing evidence.",
"title": ""
},
{
"docid": "1de5bb16d9304cbfc7c2854ea02f4e5c",
"text": "Language acquisition is one of the most fundamental human traits, and it is obviously the brain that undergoes the developmental changes. During the years of language acquisition, the brain not only stores linguistic information but also adapts to the grammatical regularities of language. Recent advances in functional neuroimaging have substantially contributed to systems-level analyses of brain development. In this Viewpoint, I review the current understanding of how the \"final state\" of language acquisition is represented in the mature brain and summarize new findings on cortical plasticity for second language acquisition, focusing particularly on the function of the grammar center.",
"title": ""
},
{
"docid": "25793a93fec7a1ccea0869252a8a0141",
"text": "Condition monitoring of induction motors is a fast emerging technology for online detection of incipient faults. It avoids unexpected failure of a critical system. Approximately 30-40% of faults of induction motors are stator faults. This work presents a comprehensive review of various stator faults, their causes, detection parameters/techniques, and latest trends in the condition monitoring technology. It is aimed at providing a broad perspective on the status of stator fault monitoring to researchers and application engineers using induction motors. A list of 183 research publications on the subject is appended for quick reference.",
"title": ""
},
{
"docid": "857132b27d87727454ec3019e52039ba",
"text": "In this paper we will introduce an ensemble of codes called irregular repeat-accumulate (IRA) codes. IRA codes are a generalization of the repeat-accumluate codes introduced in [1], and as such have a natural linear-time encoding algorithm. We shall prove that on the binary erasure channel, IRA codes can be decoded reliably in linear time, using iterative sum-product decoding, at rates arbitrarily close to channel capacity. A similar result appears to be true on the AWGN channel, although we have no proof of this. We illustrate our results with numerical and experimental examples.",
"title": ""
},
{
"docid": "643599f9b0dcfd270f9f3c55567ed985",
"text": "OBJECTIVES\nTo describe a new first-trimester sonographic landmark, the retronasal triangle, which may be useful in the early screening for cleft palate.\n\n\nMETHODS\nThe retronasal triangle, i.e. the three echogenic lines formed by the two frontal processes of the maxilla and the palate visualized in the coronal view of the fetal face posterior to the nose, was evaluated prospectively in 100 consecutive normal fetuses at the time of routine first-trimester sonographic screening at 11 + 0 to 13 + 6 weeks' gestation. In a separate study of five fetuses confirmed postnatally as having a cleft palate, ultrasound images, including multiplanar three-dimensional views, were analyzed retrospectively to review the retronasal triangle.\n\n\nRESULTS\nNone of the fetuses evaluated prospectively was affected by cleft lip and palate. During their first-trimester scan, the retronasal triangle could not be identified in only two fetuses. Reasons for suboptimal visualization of this area included early gestational age at scanning (11 weeks) and persistent posterior position of the fetal face. Of the five cases with postnatal diagnosis of cleft palate, an abnormal configuration of the retronasal triangle was documented in all cases on analysis of digitally stored three-dimensional volumes.\n\n\nCONCLUSIONS\nThis study demonstrates the feasibility of incorporating evaluation of the retronasal triangle into the routine evaluation of the fetal anatomy at 11 + 0 to 13 + 6 weeks' gestation. Because fetuses with cleft palate have an abnormal configuration of the retronasal triangle, focused examination of the midface, looking for this area at the time of the nuchal translucency scan, may facilitate the early detection of cleft palate in the first trimester.",
"title": ""
},
{
"docid": "d83d672642531e1744afe77ed8379b90",
"text": "Customer churn prediction in Telecom Industry is a core research topic in recent years. A huge amount of data is generated in Telecom Industry every minute. On the other hand, there is lots of development in data mining techniques. Customer churn has emerged as one of the major issues in Telecom Industry. Telecom research indicates that it is more expensive to gain a new customer than to retain an existing one. In order to retain existing customers, Telecom providers need to know the reasons of churn, which can be realized through the knowledge extracted from Telecom data. This paper surveys the commonly used data mining techniques to identify customer churn patterns. The recent literature in the area of predictive data mining techniques in customer churn behavior is reviewed and a discussion on the future research directions is offered.",
"title": ""
},
{
"docid": "2a7dce77aaff56b810f4a80c32dc80ea",
"text": "Automatically segmenting and classifying clinical free text into sections is an important first step to automatic information retrieval, information extraction and data mining tasks, as it helps to ground the significance of the text within. In this work we describe our approach to automatic section segmentation of clinical records such as hospital discharge summaries and radiology reports, along with section classification into pre-defined section categories. We apply machine learning to the problems of section segmentation and section classification, comparing a joint (one-step) and a pipeline (two-step) approach. We demonstrate that our systems perform well when tested on three data sets, two for hospital discharge summaries and one for radiology reports. We then show the usefulness of section information by incorporating it in the task of extracting comorbidities from discharge summaries.",
"title": ""
},
{
"docid": "277cec4e1df1bfe15376cba3cd23fa85",
"text": "In this paper, we report the development, evaluation, and application of ultra-small low-power wireless sensor nodes for advancing animal husbandry, as well as for innovation of medical technologies. A radio frequency identification (RFID) chip with hybrid interface and neglectable power consumption was introduced to enable switching of ON/OFF and measurement mode after implantation. A wireless power transmission system with a maximum efficiency of 70% and an access distance of up to 5 cm was developed to allow the sensor node to survive for a duration of several weeks from a few minutes' remote charge. The results of field tests using laboratory mice and a cow indicated the high accuracy of the collected biological data and bio-compatibility of the package. As a result of extensive application of the above technologies, a fully solid wireless pH sensor and a surgical navigation system using artificial magnetic field and a 3D MEMS magnetic sensor are introduced in this paper, and the preliminary experimental results are presented and discussed.",
"title": ""
},
{
"docid": "127692e52e1dfb3d71be11e67b1013e6",
"text": "Internet social networks may be an abundant source of opportunities giving space to the “parallel world” which can and, in many ways, does surpass the realty. People share data about almost every aspect of their lives, starting with giving opinions and comments on global problems and events, friends tagging at locations up to the point of multimedia personalized content. Therefore, decentralized mini-campaigns about educational, cultural, political and sports novelties could be conducted. In this paper we have applied clustering algorithm to social network profiles with the aim of obtaining separate groups of people with different opinions about political views and parties. For network case, where some centroids are interconnected, we have implemented edge constraints into classical k-means algorithm. This approach enables fast and effective information analysis about the present state of affairs, but also discovers new tendencies in observed political sphere. All profile data, friendships, fanpage likes and statuses with interactions are collected by already developed software for neurolinguistics social network analysis “Symbols”.",
"title": ""
},
{
"docid": "a40e91ecac0f70e04cc1241797786e77",
"text": "In much of his writings on poverty, famines, and malnutrition, Amartya Sen argues that Democracy is the best way to avoid famines partly because of its ability to use a free press, and that the Indian experience since independence confirms this. His argument is partly empirical, but also relies on some a priori assumptions about human motivation. In his “Democracy as a Universal Value” he claims: Famines are easy to prevent if there is a serious effort to do so, and a democratic government, facing elections and criticisms from opposition parties and independent newspapers, cannot help but make such an effort. Not surprisingly, while India continued to have famines under British rule right up to independence ...they disappeared suddenly with the establishment of a multiparty democracy and a free press.",
"title": ""
},
{
"docid": "da04a904a236c9b4c3c335eb7c65246e",
"text": "BACKGROUND\nIdentifying the emotional state is helpful in applications involving patients with autism and other intellectual disabilities; computer-based training, human computer interaction etc. Electrocardiogram (ECG) signals, being an activity of the autonomous nervous system (ANS), reflect the underlying true emotional state of a person. However, the performance of various methods developed so far lacks accuracy, and more robust methods need to be developed to identify the emotional pattern associated with ECG signals.\n\n\nMETHODS\nEmotional ECG data was obtained from sixty participants by inducing the six basic emotional states (happiness, sadness, fear, disgust, surprise and neutral) using audio-visual stimuli. The non-linear feature 'Hurst' was computed using Rescaled Range Statistics (RRS) and Finite Variance Scaling (FVS) methods. New Hurst features were proposed by combining the existing RRS and FVS methods with Higher Order Statistics (HOS). The features were then classified using four classifiers - Bayesian Classifier, Regression Tree, K- nearest neighbor and Fuzzy K-nearest neighbor. Seventy percent of the features were used for training and thirty percent for testing the algorithm.\n\n\nRESULTS\nAnalysis of Variance (ANOVA) conveyed that Hurst and the proposed features were statistically significant (p < 0.001). Hurst computed using RRS and FVS methods showed similar classification accuracy. The features obtained by combining FVS and HOS performed better with a maximum accuracy of 92.87% and 76.45% for classifying the six emotional states using random and subject independent validation respectively.\n\n\nCONCLUSIONS\nThe results indicate that the combination of non-linear analysis and HOS tend to capture the finer emotional changes that can be seen in healthy ECG data. This work can be further fine tuned to develop a real time system.",
"title": ""
},
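The Hurst feature used in the passage above can be estimated with classical rescaled range statistics: split the series into windows, compute the range of the mean-adjusted cumulative sum divided by the window standard deviation, and fit the slope of log(R/S) against log(window length). A compact sketch of that estimator is given below; it is the generic RRS procedure, not the authors' FVS or HOS-combined variants.

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Estimate the Hurst exponent of a 1-D series via rescaled range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = np.unique(np.floor(np.logspace(np.log10(min_window),
                                           np.log10(n // 2), 10)).astype(int))
    log_sizes, log_rs = [], []
    for w in sizes:
        rs_vals = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())        # mean-adjusted cumulative sum
            r = dev.max() - dev.min()                # range of the deviate series
            s = seg.std()
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_sizes.append(np.log(w))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_sizes, log_rs, 1)      # slope ~ Hurst exponent
    return slope

rng = np.random.default_rng(3)
print(round(hurst_rs(rng.normal(size=4000)), 2))     # white noise: roughly 0.5
```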
{
"docid": "10da9f0fd1be99878e280d261ea81ba3",
"text": "The fuzzy vault scheme is a cryptographic primitive being considered for storing fingerprint minutiae protected. A well-known problem of the fuzzy vault scheme is its vulnerability against correlation attack -based cross-matching thereby conflicting with the unlinkability requirement and irreversibility requirement of effective biometric information protection. Yet, it has been demonstrated that in principle a minutiae-based fuzzy vault can be secured against the correlation attack by passing the to-beprotected minutiae through a quantization scheme. Unfortunately, single fingerprints seem not to be capable of providing an acceptable security level against offline attacks. To overcome the aforementioned security issues, this paper shows how an implementation for multiple fingerprints can be derived on base of the implementation for single finger thereby making use of a Guruswami-Sudan algorithm-based decoder for verification. The implementation, of which public C++ source code can be downloaded, is evaluated for single and various multi-finger settings using the MCYTFingerprint-100 database and provides security enhancing features such as the possibility of combination with password and a slow-down mechanism.",
"title": ""
},
{
"docid": "c695f74a41412606e31c771ec9d2b6d3",
"text": "Osteochondrosis dissecans (OCD) is a form of osteochondrosis limited to the articular epiphysis. The most commonly affected areas include, in decreasing order of frequency, the femoral condyles, talar dome and capitellum of the humerus. OCD rarely occurs in the shoulder joint, where it involves either the humeral head or the glenoid. The purpose of this report is to present a case with glenoid cavity osteochondritis dissecans and clinical and radiological outcome after arthroscopic debridement. The patient underwent arthroscopy to remove the loose body and to microfracture the cavity. The patient was followed-up for 4 years and she is pain-free with full range of motion and a stable shoulder joint.",
"title": ""
},
{
"docid": "12350d889ee7e66eeda886e1e3b03ff5",
"text": "With the rapid development of cloud storage, more and more data owners store their data on the remote cloud, that can reduce data owners’ overhead because the cloud server maintaining the data for them, e.g., storing, updating and deletion. However, that leads to data deletion becomes a security challenge because the cloud server may not delete the data honestly for financial incentives. Recently, plenty of research works have been done on secure data deletion. However, most of the existing methods can be summarized with the same protocol essentially, which called “one-bit-return” protocol: the storage server deletes the data and returns a one-bit result. The data owner has to believe the returned result because he cannot verify it. In this paper, we propose a novel blockchain-based data deletion scheme, which can make the deletion operation more transparent. In our scheme, the data owner can verify the deletion result no matter how malevolently the cloud server behaves. Besides, with the application of blockchain, the proposed scheme can achieve public verification without any trusted third party.",
"title": ""
},
{
"docid": "6d813684a21e3ccc7fb2e09c866be1f1",
"text": "Cross-site scripting (XSS) is a code injection attack that allows an attacker to execute malicious script in another user’s browser. Once the attacker gains control over the Website vulnerable to XSS attack, it can perform actions like cookie-stealing, malware-spreading, session-hijacking and malicious redirection. Malicious JavaScripts are the most conventional ways of performing XSS attacks. Although several approaches have been proposed, XSS is still a live problem since it is very easy to implement, but di cult to detect. In this paper, we propose an e↵ective approach for XSS attack detection. Our method focuses on balancing the load between client and the server. Our method performs an initial checking in the client side for vulnerability using divergence measure. If the suspicion level exceeds beyond a threshold value, then the request is discarded. Otherwise, it is forwarded to the proxy for further processing. In our approach we introduce an attribute clustering method supported by rank aggregation technique to detect confounded JavaScripts. The approach is validated using real life data.",
"title": ""
},
{
"docid": "c68633905f8bbb759c71388819e9bfa9",
"text": "An additional mechanical mechanism for a passive parallelogram-based exoskeleton arm-support is presented. It consists of several levers and joints and an attached extension coil spring. The additional mechanism has two favourable features. On the one hand it exhibits an almost iso-elastic behaviour whereby the lifting force of the mechanism is constant for a wide working range. Secondly, the value of the supporting force can be varied by a simple linear movement of a supporting joint. Furthermore a standard tension spring can be used to gain the desired behavior. The additional mechanism is a 4-link mechanism affixed to one end of the spring within the parallelogram arm-support. It has several geometrical parameters which influence the overall behaviour. A standard optimisation routine with constraints on the parameters is used to find an optimal set of geometrical parameters. Based on the optimized geometrical parameters a prototype was constructed and tested. It is a lightweight wearable system, with a weight of 1.9 kg. Detailed experiments reveal a difference between measured and calculated forces. These variations can be explained by a 60 % higher pre load force of the tension spring and a geometrical offset in the construction.",
"title": ""
},
{
"docid": "a964f8aeb9d48c739716445adc58e98c",
"text": "A passive aeration composting study was undertaken to investigate the effects of aeration pipe orientation (PO) and perforation size (PS) on some physico-chemical properties of chicken litter (chicken manure + sawdust) during composting. The experimental set up was a two-factor completely randomised block design with two pipe orientations: horizontal (Ho) and vertical (Ve), and three perforation sizes: 15, 25 and 35 mm diameter. The properties monitored during composting were pile temperature, moisture content (MC), pH, electrical conductivity (EC), total carbon (C(T)), total nitrogen (N(T)) and total phosphorus (P(T)). Moisture level in the piles was periodically replenished to 60% for efficient microbial activities. The results of the study showed that optimum composting conditions (thermophilic temperatures and sanitation requirements) were attained in all the piles. During composting, both PO and PS significantly affected pile temperature, moisture level, pH, C(T) loss and P(T) gain. EC was only affected by PO while N(T) was affected by PS. Neither PO nor PS had a significant effect on the C:N ratio. A vertical pipe was effective for uniform air distribution, hence, uniform composting rate within the composting pile. The final values showed that PO of Ve and PS of 35 mm diameter resulted in the least loss in N(T). The PO of Ho was as effective as Ve in the conservation of C(T) and P(T). Similarly, the three PSs were equally effective in the conservation of C(T) and P(T). In conclusion, the combined effects of PO and PS showed that treatments Ve35 and Ve15 were the most effective in minimizing N(T) loss.",
"title": ""
},
{
"docid": "f2ad701c00cf7cff75ddb8eba073a408",
"text": "One of the high efficiency motors that were introduced to the industry in recent times is Line Start Permanent Magnet Synchronous Motor (LS-PMSM). Fault detection of LS-PMSM is one of interesting issues. This article presents a new technique for broken rotor bar detection based on the values of Mean and RMS features obtained from captured start-up current in the time domain. The extracted features were analyzed using analysis of variance method to predict the motor condition. Starting load condition and its interaction on detection of broken rotor bar were also investigated. The statistical evaluation of means for each feature at different conditions was performed using Tukey's method as post-hoc procedure. The result showed that the applied features could able to detect the broken rotor bar fault in LS-PMSMs.",
"title": ""
}
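Feature extraction of the kind described in the passage above reduces to two one-line statistics per captured start-up current, followed by an ANOVA across motor conditions. The sketch below uses synthetic current records for a healthy and a faulty group; `scipy.stats.f_oneway` provides the one-way ANOVA, while the Tukey post-hoc step mentioned in the passage is omitted.

```python
import numpy as np
from scipy.stats import f_oneway

def mean_rms_features(current):
    """Mean and RMS of one start-up current record (time-domain features)."""
    current = np.asarray(current, dtype=float)
    return current.mean(), np.sqrt(np.mean(current ** 2))

rng = np.random.default_rng(11)
t = np.linspace(0.0, 1.0, 2000)
# Synthetic start-up currents: the 'faulty' group gets a slightly larger envelope
healthy = [np.sin(2 * np.pi * 50 * t) * np.exp(-2 * t) + rng.normal(0, 0.05, t.size)
           for _ in range(10)]
faulty = [1.1 * np.sin(2 * np.pi * 50 * t) * np.exp(-2 * t) + rng.normal(0, 0.05, t.size)
          for _ in range(10)]

rms_healthy = [mean_rms_features(c)[1] for c in healthy]
rms_faulty = [mean_rms_features(c)[1] for c in faulty]
stat, p = f_oneway(rms_healthy, rms_faulty)   # does RMS differ across conditions?
print(round(stat, 2), round(p, 4))
```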
] |
scidocsrr
|
fb6e29a915d2343b5b0810ff1c8b2bb1
|
Gaussian Process Regression for Fingerprinting based Localization
|
[
{
"docid": "aa3da820fe9e98cb4f817f6a196c18e7",
"text": "Location awareness is an important capability for mobile computing. Yet inexpensive, pervasive positioning—a requirement for wide-scale adoption of location-aware computing—has been elusive. We demonstrate a radio beacon-based approach to location, called Place Lab, that can overcome the lack of ubiquity and high-cost found in existing location sensing approaches. Using Place Lab, commodity laptops, PDAs and cell phones estimate their position by listening for the cell IDs of fixed radio beacons, such as wireless access points, and referencing the beacons’ positions in a cached database. We present experimental results showing that 802.11 and GSM beacons are sufficiently pervasive in the greater Seattle area to achieve 20-40 meter median accuracy with nearly 100% coverage measured by availability in people’s daily",
"title": ""
}
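The core Place Lab idea in the passage above, hear beacon IDs and look their positions up in a cached database, can be sketched in a few lines: the position estimate is simply the centroid of the observed beacons' known locations, optionally weighted by signal strength. The beacon coordinates and RSSI values below are made up, and real deployments, like the Gaussian-process fingerprinting in the query this passage answers, use far richer models.

```python
def estimate_position(observed, beacon_db):
    """Weighted centroid of the known positions of the beacons currently heard."""
    lat_sum = lon_sum = weight_sum = 0.0
    for beacon_id, rssi in observed.items():
        if beacon_id not in beacon_db:
            continue                       # unknown beacon: skip it
        lat, lon = beacon_db[beacon_id]
        w = 10 ** (rssi / 20.0)            # crude weight: stronger signal, more weight
        lat_sum += w * lat
        lon_sum += w * lon
        weight_sum += w
    if weight_sum == 0.0:
        return None
    return lat_sum / weight_sum, lon_sum / weight_sum

# Hypothetical cached database of beacon positions (e.g., Wi-Fi APs)
beacon_db = {
    "ap:01": (47.6601, -122.3120),
    "ap:02": (47.6605, -122.3131),
    "ap:03": (47.6610, -122.3115),
}
observed = {"ap:01": -55.0, "ap:02": -70.0, "ap:03": -80.0}
print(estimate_position(observed, beacon_db))
```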
] |
[
{
"docid": "9eca9a069f8d1e7bf7c0f0b74e3129f0",
"text": "With increasing use of GPS devices more and more location-based information is accessible. Thus not only more movements of people are tracked but also POI (point of interest) information becomes available in increasing geo-spatial density. To enable analysis of movement behavior, we present an approach to enrich trajectory data with semantic POI information and show how additional insights can be gained. Using a density-based clustering technique we extract 1.215 frequent destinations of ~150.000 user movements from a large e-mobility database. We query available context information from Foursquare, a popular location-based social network, to enrich the destinations with semantic background. As GPS measurements can be noisy, often more then one possible destination is found and movement patterns vary over time. Therefore we present highly interactive visualizations that enable an analyst to cope with the inherent geospatial and behavioral uncertainties.",
"title": ""
},
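The destination-extraction step in the passage above relies on density-based clustering of trip end points; a minimal version with scikit-learn's DBSCAN is sketched below. The synthetic coordinates and the eps/min_samples values are illustrative, and the subsequent Foursquare enrichment step is not reproduced here because it depends on an external API.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
# Synthetic trip destinations (projected x/y in meters): two hot spots plus noise
hotspot_a = rng.normal(loc=(0.0, 0.0), scale=30.0, size=(60, 2))
hotspot_b = rng.normal(loc=(500.0, 400.0), scale=30.0, size=(40, 2))
noise = rng.uniform(low=-1000.0, high=1000.0, size=(20, 2))
destinations = np.vstack([hotspot_a, hotspot_b, noise])

# Points within 100 m of each other, with at least 10 neighbours, form a destination
labels = DBSCAN(eps=100.0, min_samples=10).fit_predict(destinations)

for label in sorted(set(labels)):
    members = destinations[labels == label]
    tag = "noise" if label == -1 else f"frequent destination {label}"
    print(tag, "->", len(members), "trips, centroid", members.mean(axis=0).round(1))
```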
{
"docid": "fbb6c8566fbe79bf8f78af0dc2dedc7b",
"text": "Automatic essay evaluation (AEE) systems are designed to assist a teacher in the task of classroom assessment in order to alleviate the demands of manual subject evaluation. However, although numerous AEE systems are available, most of these systems do not use elaborate domain knowledge for evaluation, which limits their ability to give informative feedback to students and also their ability to constructively grade a student based on a particular domain of study. This paper is aimed at improving on the achievements of previous studies by providing a subject-focussed evaluation system that considers the domain knowledge while scoring and provides informative feedback to its user. The study employs a combination of techniques such as system design and modelling using Unified Modelling Language (UML), information extraction, ontology development, data management, and semantic matching in order to develop a prototype subject-focussed AEE system. The developed system was evaluated to determine its level of performance and usability. The result of the usability evaluation showed that the system has an overall mean rating of 4.17 out of maximum of 5, which indicates ‘good usability’. In terms of performance, the assessment done by the system was also found to have sufficiently high correlation with those done by domain experts, in addition to providing appropriate feedback to the user.",
"title": ""
},
{
"docid": "1f333e1dbeec98d3733dd78dfd669933",
"text": "Background and objectives: Food poisoning has been always a major concern in health system of every community and cream-filled products are one of the most widespread food poisoning causes in humans. In present study, we examined the preservative effect of the cinnamon oil in cream-filled cakes. Methods: Antimicrobial activity of Cinnamomum verum J. Presl (Cinnamon) bark essential oil was examined against five food-borne pathogens (Staphylococcus aureus, Escherichia coli, Candida albicans, Bacillus cereus and Salmonella typhimurium) to investigate its potential for use as a natural preservative in cream-filled baked goods. Chemical constituents of the oil were determined by gas chromatography/mass spectrometry. For evaluation of preservative sufficiency of the oil, pathogens were added to cream-filled cakes manually and 1 μL/mL of the essential oil was added to all samples except the blank. Results: Chemical constituents of the oil were determined by gas chromatography/mass spectrometry and twenty five components were identified where cinnamaldehyde (79.73%), linalool (4.08%), cinnamaldehyde para-methoxy (2.66%), eugenol (2.37%) and trans-caryophyllene (2.05%) were the major constituents. Cinnamon essential oil showed strong antimicrobial activity against selected pathogens in vitro and the minimum inhibitory concentration values against all tested microorganisms were determined as 0.5 μL/disc except for S. aureus for which, the oil was not effective in tested concentrations. After baking, no observable microorganism was observed in all susceptible microorganisms count in 72h stored samples. Conclusion: It was concluded that by analysing the sensory quality of the preserved food, cinnamon oil may be considered as a natural preservative in food industry, especially for cream-filled cakes and",
"title": ""
},
{
"docid": "accda4f9cb11d92639cf2737c5e8fe78",
"text": "Automatic segmentation in MR brain images is important for quantitative analysis in large-scale studies with images acquired at all ages. This paper presents a method for the automatic segmentation of MR brain images into a number of tissue classes using a convolutional neural network. To ensure that the method obtains accurate segmentation details as well as spatial consistency, the network uses multiple patch sizes and multiple convolution kernel sizes to acquire multi-scale information about each voxel. The method is not dependent on explicit features, but learns to recognise the information that is important for the classification based on training data. The method requires a single anatomical MR image only. The segmentation method is applied to five different data sets: coronal T2-weighted images of preterm infants acquired at 30 weeks postmenstrual age (PMA) and 40 weeks PMA, axial T2-weighted images of preterm infants acquired at 40 weeks PMA, axial T1-weighted images of ageing adults acquired at an average age of 70 years, and T1-weighted images of young adults acquired at an average age of 23 years. The method obtained the following average Dice coefficients over all segmented tissue classes for each data set, respectively: 0.87, 0.82, 0.84, 0.86, and 0.91. The results demonstrate that the method obtains accurate segmentations in all five sets, and hence demonstrates its robustness to differences in age and acquisition protocol.",
"title": ""
},
{
"docid": "ab4fce4bd35bd8dd749bf0357c4b14b6",
"text": "In this paper, we describe and analyze the performance of two iris-encoding techniques. The first technique is based on Principle Component Analysis (PCA) encoding method while the second technique is a combination of Principal Component Analysis with Independent Component Analysis (ICA) following it. Both techniques are applied globally. PCA and ICA are two well known methods used to process a variety of data. Though PCA has been used as a preprocessing step that reduces dimensions for obtaining ICA components for iris, it has never been analyzed in depth as an individual encoding method. In practice PCA and ICA are known as methods that extract global and fine features, respectively. It is shown here that when PCA and ICA methods are used to encode iris images, one of the critical steps required to achieve a good performance is compensation for rotation effect. We further study the effect of varying the image resolution level on the performance of the two encoding methods. The major motivation for this study is the cases in practice where images of the same or different irises taken at different distances have to be compared. The performance of encoding techniques is analyzed using the CASIA dataset. The original images are non-ideal and thus require a sequence of preprocessing steps prior to application of encoding methods. We plot a series of Receiver Operating Characteristics (ROCs) to demonstrate various effects on the performance of the iris-based recognition system implementing PCA and ICA encoding techniques.",
"title": ""
},
{
"docid": "48c157638090b3168b6fd3cb50780184",
"text": "Adverse reactions to drugs are among the most common causes of death in industrialized nations. Expensive clinical trials are not sufficient to uncover all of the adverse reactions a drug may cause, necessitating systems for post-marketing surveillance, or pharmacovigilance. These systems have typically relied on voluntary reporting by health care professionals. However, self-reported patient data has become an increasingly important resource, with efforts such as MedWatch from the FDA allowing reports directly from the consumer. In this paper, we propose mining the relationships between drugs and adverse reactions as reported by the patients themselves in user comments to health-related websites. We evaluate our system on a manually annotated set of user comments, with promising performance. We also report encouraging correlations between the frequency of adverse drug reactions found by our system in unlabeled data and the frequency of documented adverse drug reactions. We conclude that user comments pose a significant natural language processing challenge, but do contain useful extractable information which merits further exploration.",
"title": ""
},
{
"docid": "d6fe99533c66075ffb85faf7c70475f0",
"text": "Outlier detection has received significant attention in many applications, such as detecting credit card fraud or network intrusions. Most existing research focuses on numerical datasets, and cannot directly apply to categorical sets where there is little sense in calculating distances among data points. Furthermore, a number of outlier detection methods require quadratic time with respect to the dataset size and usually multiple dataset scans. These characteristics are undesirable for large datasets, potentially scattered over multiple distributed sites. In this paper, we introduce Attribute Value Frequency (A VF), a fast and scalable outlier detection strategy for categorical data. A VF scales linearly with the number of data points and attributes, and relies on a single data scan. AVF is compared with a list of representative outlier detection approaches that have not been contrasted against each other. Our proposed solution is experimentally shown to be significantly faster, and as effective in discovering outliers.",
"title": ""
},
{
"docid": "341e0b7d04b333376674dac3c0888f50",
"text": "Software archives contain historical information about the development process of a software system. Using data mining techniques rules can be extracted from these archives. In this paper we discuss how standard visualization techniques can be applied to interactively explore these rules. To this end we extended the standard visualization techniques for association rules and sequence rules to also show the hierarchical order of items. Clusters and outliers in the resulting visualizations provide interesting insights into the relation between the temporal development of a system and its static structure. As an example we look at the large software archive of the MOZILLA open source project. Finally we discuss what kind of regularities and anomalies we found and how these can then be leveraged to support software engineers.",
"title": ""
},
{
"docid": "2ea626f0e1c4dfa3d5a23c80d8fbf70c",
"text": "Although research studies in education show that use of technology can help student learning, its use is generally affected by certain barriers. In this paper, we first identify the general barriers typically faced by K-12 schools, both in the United States as well as other countries, when integrating technology into the curriculum for instructional purposes, namely: (a) resources, (b) institution, (c) subject culture, (d) attitudes and beliefs, (e) knowledge and skills, and (f) assessment. We then describe the strategies to overcome such barriers: (a) having a shared vision and technology integration plan, (b) overcoming the scarcity of resources, (c) changing attitudes and beliefs, (d) conducting professional development, and (e) reconsidering assessments. Finally, we identify several current knowledge gaps pertaining to the barriers and strategies of technology integration, and offer pertinent recommendations for future research.",
"title": ""
},
{
"docid": "455a6fe5862e3271ac00057d1b569b11",
"text": "Personalization technologies and recommender systems help online consumers avoid information overload by making suggestions regarding which information is most relevant to them. Most online shopping sites and many other applications now use recommender systems. Two new recommendation techniques leverage multicriteria ratings and improve recommendation accuracy as compared with single-rating recommendation approaches. Taking full advantage of multicriteria ratings in personalization applications requires new recommendation techniques. In this article, we propose several new techniques for extending recommendation technologies to incorporate and leverage multicriteria rating information.",
"title": ""
},
{
"docid": "160e3c3fc9e3a13c4ee961e453532fd1",
"text": "An encephalitis outbreak was investigated in Faridpur District, Bangladesh, in April-May 2004 to determine the cause of the outbreak and risk factors for disease. Biologic specimens were tested for Nipah virus. Surfaces were evaluated for Nipah virus contamination by using reverse transcription-PCR (RT-PCR). Thirty-six cases of Nipah virus illness were identified; 75% of case-patients died. Multiple peaks of illness occurred, and 33 case-patients had close contact with another Nipah virus patient before their illness. Results from a case-control study showed that contact with 1 patient carried the highest risk for infection (odds ratio 6.7, 95% confidence interval 2.9-16.8, p < 0.001). RT-PCR testing of environmental samples confirmed Nipah virus contamination of hospital surfaces. This investigation provides evidence for person-to-person transmission of Nipah virus. Capacity for person-to-person transmission increases the potential for wider spread of this highly lethal pathogen and highlights the need for infection control strategies for resource-poor settings.",
"title": ""
},
{
"docid": "3a0275d7834a6fb1359bb7d3bef14e97",
"text": "With the Internet of Things (IoT) becoming a major component of our daily life, understanding how to improve quality of service (QoS) in IoT networks is becoming a challenging problem. Currently most interaction between the IoT devices and the supporting back-end servers is done through large scale cloud data centers. However, with the exponential growth of IoT devices and the amount of data they produce, communication between \"things\" and cloud will be costly, inefficient, and in some cases infeasible. Fog computing serves as solution for this as it provides computation, storage, and networking resource for IoT, closer to things and users. One of the promising advantages of fog is reducing service delay for end user applications, whereas cloud provides extensive computation and storage capacity with a higher latency. Thus it is necessary to understand the interplay between fog computing and cloud, and to evaluate the effect of fog computing on the IoT service delay and QoS. In this paper we will introduce a general framework for IoT-fog-cloud applications, and propose a delay-minimizing policy for fog-capable devices that aims to reduce the service delay for IoT applications. We then develop an analytical model to evaluate our policy and show how the proposed framework helps to reduce IoT service delay.",
"title": ""
},
{
"docid": "31cf550d44266e967716560faeb30f2b",
"text": "The explosion in workload complexity and the recent slow-down in Moore’s law scaling call for new approaches towards efficient computing. Researchers are now beginning to use recent advances in machine learning in software optimizations, augmenting or replacing traditional heuristics and data structures. However, the space of machine learning for computer hardware architecture is only lightly explored. In this paper, we demonstrate the potential of deep learning to address the von Neumann bottleneck of memory performance. We focus on the critical problem of learning memory access patterns, with the goal of constructing accurate and efficient memory prefetchers. We relate contemporary prefetching strategies to n-gram models in natural language processing, and show how recurrent neural networks can serve as a drop-in replacement. On a suite of challenging benchmark datasets, we find that neural networks consistently demonstrate superior performance in terms of precision and recall. This work represents the first step towards practical neural-network based prefetching, and opens a wide range of exciting directions for machine learning in computer architecture research.",
"title": ""
},
{
"docid": "8c3fb435c46c8ff3c509a2bfeb6625d7",
"text": "The objective of this study was to quantify the electrode-tissue interface impedance of electrodes used for deep brain stimulation (DBS). We measured the impedance of DBS electrodes using electrochemical impedance spectroscopy in vitro in a carbonate- and phosphate-buffered saline solution and in vivo following acute implantation in the brain. The components of the impedance, including the series resistance (R(s)), the Faradaic resistance (R(f)) and the double layer capacitance (C(dl)), were estimated using an equivalent electrical circuit. Both R(f) and C(dl) decreased as the sinusoidal frequency was increased, but the ratio of the capacitive charge transfer to the Faradaic charge transfer was relatively insensitive to the change of frequency. R(f) decreased and C(dl) increased as the current density was increased, and above a critical current density the interface impedance became nonlinear. Thus, the magnitude of the interface impedance was strongly dependent on the intensity (pulse amplitude and duration) of stimulation. The temporal dependence and spatial non-uniformity of R(f) and C(dl) suggested that a distributed network, with each element of the network having dynamics tailored to a specific stimulus waveform, is required to describe adequately the impedance of the DBS electrode-tissue interface. Voltage transients to biphasic square current pulses were measured and suggested that the electrode-tissue interface did not operate in a linear range at clinically relevant current amplitudes, and that the assumption of the DBS electrode being ideally polarizable was not valid under clinical stimulating conditions.",
"title": ""
},
{
"docid": "89a1e91c2ab1393f28a6381ba94de12d",
"text": "In this paper, a simulation environment encompassing realistic propagation conditions and system parameters is employed in order to analyze the performance of future multigigabit indoor communication systems at tetrahertz frequencies. The influence of high-gain antennas on transmission aspects is investigated. Transmitter position for optimal signal coverage is also analyzed. Furthermore, signal coverage maps and achievable data rates are calculated for generic indoor scenarios with and without furniture for a variety of possible propagation conditions.",
"title": ""
},
{
"docid": "0251f38f48c470e2e04fb14fc7ba34b2",
"text": "The fast development of Internet of Things (IoT) and cyber-physical systems (CPS) has triggered a large demand of smart devices which are loaded with sensors collecting information from their surroundings, processing it and relaying it to remote locations for further analysis. The wide deployment of IoT devices and the pressure of time to market of device development have raised security and privacy concerns. In order to help better understand the security vulnerabilities of existing IoT devices and promote the development of low-cost IoT security methods, in this paper, we use both commercial and industrial IoT devices as examples from which the security of hardware, software, and networks are analyzed and backdoors are identified. A detailed security analysis procedure will be elaborated on a home automation system and a smart meter proving that security vulnerabilities are a common problem for most devices. Security solutions and mitigation methods will also be discussed to help IoT manufacturers secure their products.",
"title": ""
},
{
"docid": "cb702c48a242c463dfe1ac1f208acaa2",
"text": "In 2011, Lake Erie experienced the largest harmful algal bloom in its recorded history, with a peak intensity over three times greater than any previously observed bloom. Here we show that long-term trends in agricultural practices are consistent with increasing phosphorus loading to the western basin of the lake, and that these trends, coupled with meteorological conditions in spring 2011, produced record-breaking nutrient loads. An extended period of weak lake circulation then led to abnormally long residence times that incubated the bloom, and warm and quiescent conditions after bloom onset allowed algae to remain near the top of the water column and prevented flushing of nutrients from the system. We further find that all of these factors are consistent with expected future conditions. If a scientifically guided management plan to mitigate these impacts is not implemented, we can therefore expect this bloom to be a harbinger of future blooms in Lake Erie.",
"title": ""
},
{
"docid": "a506f3f6c401f83eaba830abb20c8fff",
"text": "The mechanisms governing the recruitment of functional glutamate receptors at nascent excitatory postsynapses following initial axon-dendrite contact remain unclear. We examined here the ability of neurexin/neuroligin adhesions to mobilize AMPA-type glutamate receptors (AMPARs) at postsynapses through a diffusion/trap process involving the scaffold molecule PSD-95. Using single nanoparticle tracking in primary rat and mouse hippocampal neurons overexpressing or lacking neuroligin-1 (Nlg1), a striking inverse correlation was found between AMPAR diffusion and Nlg1 expression level. The use of Nlg1 mutants and inhibitory RNAs against PSD-95 demonstrated that this effect depended on intact Nlg1/PSD-95 interactions. Furthermore, functional AMPARs were recruited within 1 h at nascent Nlg1/PSD-95 clusters assembled by neurexin-1β multimers, a process requiring AMPAR membrane diffusion. Triggering novel neurexin/neuroligin adhesions also caused a depletion of PSD-95 from native synapses and a drop in AMPAR miniature EPSCs, indicating a competitive mechanism. Finally, both AMPAR level at synapses and AMPAR-dependent synaptic transmission were diminished in hippocampal slices from newborn Nlg1 knock-out mice, confirming an important role of Nlg1 in driving AMPARs to nascent synapses. Together, these data reveal a mechanism by which membrane-diffusing AMPARs can be rapidly trapped at PSD-95 scaffolds assembled at nascent neurexin/neuroligin adhesions, in competition with existing synapses.",
"title": ""
},
{
"docid": "149073f577d0e1fb380ae395ff1ca0c5",
"text": "A complete kinematic model of the 5 DOF-Mitsubishi RV-M1 manipulator is presented in this paper. The forward kinematic model is based on the Modified Denavit-Hartenberg notation, and the inverse one is derived in closed form by fixing the orientation of the tool. A graphical interface is developed using MATHEMATICA software to illustrate the forward and inverse kinematics, allowing student or researcher to have hands-on of virtual graphical model that fully describe both the robot's geometry and the robot's motion in its workspace before to tackle any real task.",
"title": ""
},
{
"docid": "4d6e9bc0a8c55e65d070d1776e781173",
"text": "As electronic device feature sizes scale-down, the power consumed due to onchip communications as compared to computations will increase dramatically; likewise, the available bandwidth per computational operation will continue to decrease. Integrated photonics can offer savings in power and potential increase in bandwidth for onchip networks. Classical diffraction-limited photonics currently utilized in photonic integrated circuits (PIC) is characterized by bulky and inefficient devices compared to their electronic counterparts due to weak light matter interactions (LMI). Performance critical for the PIC is electro-optic modulators (EOM), whose performances depend inherently on enhancing LMIs. Current EOMs based on diffraction-limited optical modes often deploy ring resonators and are consequently bulky, photon-lifetime modulation limited, and power inefficient due to large electrical...",
"title": ""
}
] |
scidocsrr
|
43c305b4a8dc2035524c87b78485678c
|
Document dissimilarity within and across languages: A benchmarking study
|
[
{
"docid": "5bc9b4a952465bed83b5e84d6ab2bba8",
"text": "We present a new algorithm for duplicate document detection thatuses collection statistics. We compare our approach with thestate-of-the-art approach using multiple collections. Thesecollections include a 30 MB 18,577 web document collectiondeveloped by Excite@Home and three NIST collections. The first NISTcollection consists of 100 MB 18,232 LA-Times documents, which isroughly similar in the number of documents to theExcite&at;Home collection. The other two collections are both 2GB and are the 247,491-web document collection and the TREC disks 4and 5---528,023 document collection. We show that our approachcalled I-Match, scales in terms of the number of documents andworks well for documents of all sizes. We compared our solution tothe state of the art and found that in addition to improvedaccuracy of detection, our approach executed in roughly one-fifththe time.",
"title": ""
}
] |
[
{
"docid": "41b7b8638fa1d3042873ca70f9c338f1",
"text": "The LC50 (78, 85 ppm) and LC90 (88, 135 ppm) of Anagalis arvensis and Calendula micrantha respectively against Biomphalaria alexandrina were higher than those of the non-target snails, Physa acuta, Planorbis planorbis, Helisoma duryi and Melanoides tuberculata. In contrast, the LC50 of Niclosamide (0.11 ppm) and Copper sulphate (CuSO4) (0.42 ppm) against B. alexandrina were lower than those of the non-target snails. The mortalities percentage among non-target snails ranged between 0.0 & 20% when sublethal concentrations of CuSO4 against B. alexandrina mixed with those of C. micrantha and between 0.0 & 40% when mixed with A. arvensis. Mortalities ranged between 0.0 & 50% when Niclosamide was mixed with each of A. arvensis and C. micrantha. A. arvensis induced 100% mortality on Oreochromis niloticus after 48 hrs exposure and after 24 hrs for Gambusia affinis. C. micrantha was non-toxic to the fish. The survival rate of O. niloticus and G. affinis after 48 hrs exposure to 0.11 ppm of Niclosamide were 83.3% & 100% respectively. These rates were 91.7% & 93.3% respectively when each of the two fish species was exposed to 0.42 ppm of CuSO4. Mixture of sub-lethal concentrations of A. arvensis against B. alexandrina and those of Niclosamide or CuSO4 at ratios 10:40 & 25:25 induced 66.6% mortalities on O. niloticus and 83.3% at 40:10. These mixtures caused 100% mortalities on G. affinis at all ratios. A. arvensis CuSO4 mixtures at 10:40 induced 83.3% & 40% mortalities on O. niloticus and G. affinis respectively and 100% mortalities on both fish species at ratios 25:25 & 40:10. A mixture of sub-lethal concentrations of C. micrantha against B. alexandrina and of Niclosamide or CuSO4 caused mortalities of O. niloticus between 0.0 & 33.3% and between 5% & 35% of G. affinis. The residue of Cu in O. niloticus were 4.69, 19.06 & 25.37 mg/1kgm fish after 24, 48 & 72 hrs exposure to LC0 of CuSO4 against B. alexandrina respectively.",
"title": ""
},
{
"docid": "673fea40e5cb12b54cc296b1a2c98ddb",
"text": "Matrix completion is a rank minimization problem to recover a low-rank data matrix from a small subset of its entries. Since the matrix rank is nonconvex and discrete, many existing approaches approximate the matrix rank as the nuclear norm. However, the truncated nuclear norm is known to be a better approximation to the matrix rank than the nuclear norm, exploiting a priori target rank information about the problem in rank minimization. In this paper, we propose a computationally efficient truncated nuclear norm minimization algorithm for matrix completion, which we call TNNM-ALM. We reformulate the original optimization problem by introducing slack variables and considering noise in the observation. The central contribution of this paper is to solve it efficiently via the augmented Lagrange multiplier (ALM) method, where the optimization variables are updated by closed-form solutions. We apply the proposed TNNM-ALM algorithm to ghost-free high dynamic range imaging by exploiting the low-rank structure of irradiance maps from low dynamic range images. Experimental results on both synthetic and real visual data show that the proposed algorithm achieves significantly lower reconstruction errors and superior robustness against noise than the conventional approaches, while providing substantial improvement in speed, thereby applicable to a wide range of imaging applications.",
"title": ""
},
{
"docid": "7aa9a5f9bde62b5aafb30cbd9c79f9e9",
"text": "Congestion in traffic is a serious issue. In existing system signal timings are fixed and they are independent of traffic density. Large red light delays leads to traffic congestion. In this paper, IoT based traffic control system is implemented in which signal timings are updated based on the vehicle counting. This system consists of WI-FI transceiver module it transmits the vehicle count of the current system to the next traffic signal. Based on traffic density of previous signal it controls the signals of the next signal. The system is based on raspberry-pi and Arduino. Image processing of traffic video is done in MATLAB with simulink support. The whole vehicle counting is performed by raspberry pi.",
"title": ""
},
{
"docid": "7d38b4b2d07c24fdfb2306116017cd5e",
"text": "Science Technology Engineering, Art, Mathematics (STEAM) is an integration of art into Science Technology Engineering, Mathematics (STEM). Connecting art to science makes learning more effective and innovative. This study aims to determine the increase in mastery of the concept of high school students after the application of STEAM education in learning with the theme of Water and Us. The research method used is one group Pretestposttest design with students of class VII (n = 37) junior high school. The instrument used in the form of question of mastery of concepts in the form of multiple choices amounted to 20 questions and observation sheet of learning implementation. The results of the study show that there is an increase in conceptualization on the theme of Water and Us which is categorized as medium (<g>=0, 46) after the application of the STEAM approach. The conclusion obtained that by applying STEAM approach in learning can improve the mastery of concept",
"title": ""
},
{
"docid": "d29eba4f796cb642d64e73b76767e59d",
"text": "In this paper, a novel segmentation and recognition approach to automatically extract street lighting poles from mobile LiDAR data is proposed. First, points on or around the ground are extracted and removed through a piecewise elevation histogram segmentation method. Then, a new graph-cut-based segmentation method is introduced to extract the street lighting poles from each cluster obtained through a Euclidean distance clustering algorithm. In addition to the spatial information, the street lighting pole's shape and the point's intensity information are also considered to formulate the energy function. Finally, a Gaussian-mixture-model-based method is introduced to recognize the street lighting poles from the candidate clusters. The proposed approach is tested on several point clouds collected by different mobile LiDAR systems. Experimental results show that the proposed method is robust to noises and achieves an overall performance of 90% in terms of true positive rate.",
"title": ""
},
{
"docid": "1c03c9e9fb2697cbff3ee3063593d33c",
"text": "Hand pose estimation from a monocular RGB image is an important but challenging task. A main factor affecting its performance is the lack of a sufficiently large training dataset with accurate hand-keypoint annotations. In this work, we circumvent this problem by proposing an effective method for generating realistic hand poses, and show that state-of-the-art algorithms for hand pose estimation can be greatly improved by utilizing the generated hand poses as training data. Specifically, we first adopt an augmented reality (AR) simulator to synthesize hand poses with accurate hand-keypoint labels. Although the synthetic hand poses come with precise joint labels, eliminating the need of manual annotations, they look unnatural and are not the ideal training data. To produce more realistic hand poses, we propose to blend a synthetic hand pose with a real background, such as arms and sleeves. To this end, we develop tonality-alignment generative adversarial networks (TAGANs), which align the tonality and color distributions between synthetic hand poses and real backgrounds, and can generate high quality hand poses. We evaluate TAGAN on three benchmarks, including the RHP, STB, and CMUPS hand pose datasets. With the aid of the synthesized poses, our method performs favorably against the state-ofthe-arts in both 2D and 3D hand pose estimations.",
"title": ""
},
{
"docid": "eec15a5d14082d625824452bd070ec38",
"text": "Food waste is a major environmental issue. Expired products are thrown away, implying that too much food is ordered compared to what is sold and that a more accurate prediction model is required within grocery stores. In this study the two prediction models Long Short-Term Memory (LSTM) and Autoregressive Integrated Moving Average (ARIMA) were compared on their prediction accuracy in two scenarios, given sales data for different products, to observe if LSTM is a model that can compete against the ARIMA model in the field of sales forecasting in retail. In the first scenario the models predict sales for one day ahead using given data, while they in the second scenario predict each day for a week ahead. Using the evaluation measures RMSE and MAE together with a t-test the results show that the difference between the LSTM and ARIMA model is not of statistical significance in the scenario of predicting one day ahead. However when predicting seven days ahead, the results show that there is a statistical significance in the difference indicating that the LSTM model has higher accuracy. This study therefore concludes that the LSTM model is promising in the field of sales forecasting in retail and able to compete against the ARIMA model.",
"title": ""
},
{
"docid": "d3b6fcc353382c947cfb0b4a73eda0ef",
"text": "Robust object tracking is a challenging task in computer vision. To better solve the partial occlusion issue, part-based methods are widely used in visual object trackers. However, due to the complicated online training and updating process, most of these part-based trackers cannot run in real-time. Correlation filters have been used in tracking tasks recently because of the high efficiency. However, the conventional correlation filter based trackers cannot deal with occlusion. Furthermore, most correlation filter based trackers fix the scale and rotation of the target which makes the trackers unreliable in long-term tracking tasks. In this paper, we propose a novel tracking method which track objects based on parts with multiple correlation filters. Our method can run in real-time. Additionally, the Bayesian inference framework and a structural constraint mask are adopted to enable our tracker to be robust to various appearance changes. Extensive experiments have been done to prove the effectiveness of our method.",
"title": ""
},
{
"docid": "274373d46b748d92e6913496507353b1",
"text": "This paper introduces a blind watermarking based on a convolutional neural network (CNN). We propose an iterative learning framework to secure robustness of watermarking. One loop of learning process consists of the following three stages: Watermark embedding, attack simulation, and weight update. We have learned a network that can detect a 1-bit message from a image sub-block. Experimental results show that this learned network is an extension of the frequency domain that is widely used in existing watermarking scheme. The proposed scheme achieved robustness against geometric and signal processing attacks with a learning time of one day.",
"title": ""
},
{
"docid": "bca27f6e44d64824a0be41d5f2beea4d",
"text": "In Infrastructure-as-a-Service (IaaS) clouds, intrusion detection systems (IDSes) increase their importance. To securely detect attacks against virtual machines (VMs), IDS offloading with VM introspection (VMI) has been proposed. In semi-trusted clouds, however, it is difficult to securely offload IDSes because there may exist insiders such as malicious system administrators. First, secure VM execution cannot coexist with IDS offloading although it has to be enabled to prevent information leakage to insiders. Second, offloaded IDSes can be easily disabled by insiders. To solve these problems, this paper proposes IDS remote offloading with remote VMI. Since IDSes can run at trusted remote hosts outside semi-trusted clouds, they cannot be disabled by insiders in clouds. Remote VMI enables IDSes at remote hosts to introspect VMs via the trusted hypervisor inside semi-trusted clouds. Secure VM execution can be bypassed by performing VMI in the hypervisor. Remote VMI preserves the integrity and confidentiality of introspected data between the hypervisor and remote hosts. The integrity of the hypervisor can be guaranteed by various existing techniques. We have developed RemoteTrans for remotely offloading legacy IDSes and confirmed that RemoteTrans could achieve surprisingly efficient execution of legacy IDSes at remote hosts.",
"title": ""
},
{
"docid": "b31f5af2510461479d653be1ddadaa22",
"text": "Integrating smart temperature sensors into digital platforms facilitates information to be processed and transmitted, and open up new applications. Furthermore, temperature sensors are crucial components in computing platforms to manage power-efficiency trade-offs reliably under a thermal budget. This paper presents a holistic perspective about smart temperature sensor design from system- to device-level including manufacturing concerns. Through smart sensor design evolutions, we identify some scaling paths and circuit techniques to surmount analog/mixed-signal design challenges in 32-nm and beyond. We close with opportunities to design smarter temperature sensors.",
"title": ""
},
{
"docid": "0b19bd9604fae55455799c39595c8016",
"text": "Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in the recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network, to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) top-k nodes problem and 2) λ -coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The λ-coverage problem is concerned with finding a set of key nodes having minimal size that can influence a given percentage λ of the nodes in the entire network. We propose a new way of solving these problems using the concept of Shapley value which is a well known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPINs) algorithms for solving the top-k nodes problem and the λ -coverage problem. We compare the performance of the proposed SPIN algorithms with well known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient.",
"title": ""
},
{
"docid": "78a2bf1c2edec7ec9eb48f8b07dc9e04",
"text": "The performance of the most commonly used metal antennas close to the human body is one of the limiting factors of the performance of bio-sensors and wireless body area networks (WBAN). Due to the high dielectric and conductivity contrast with respect to most parts of the human body (blood, skin, …), the range of most of the wireless sensors operating in RF and microwave frequencies is limited to 1–2 cm when attached to the body. In this paper, we introduce the very novel idea of liquid antennas, that is based on engineering the properties of liquids. This approach allows for the improvement of the range by a factor of 5–10 in a very easy-to-realize way, just modifying the salinity of the aqueous solution of the antenna. A similar methodology can be extended to the development of liquid RF electronics for implantable devices and wearable real-time bio-signal monitoring, since it can potentially lead to very flexible antenna and electronic configurations.",
"title": ""
},
{
"docid": "eec819447de1d6482f9ff4a04fb73782",
"text": "Estimating the travel time of any path (denoted by a sequence of connected road segments) in a city is of great importance to traffic monitoring, route planning, ridesharing, taxi/Uber dispatching, etc. However, it is a very challenging problem, affected by diverse complex factors, including spatial correlations, temporal dependencies, external conditions (e.g. weather, traffic lights). Prior work usually focuses on estimating the travel times of individual road segments or sub-paths and then summing up these times, which leads to an inaccurate estimation because such approaches do not consider road intersections/traffic lights, and local errors may accumulate. To address these issues, we propose an end-to-end Deep learning framework for Travel Time Estimation (called DeepTTE) that estimates the travel time of the whole path directly. More specifically, we present a geo-convolution operation by integrating the geographic information into the classical convolution, capable of capturing spatial correlations. By stacking recurrent unit on the geo-convoluton layer, our DeepTTE can capture the temporal dependencies as well. A multi-task learning component is given on the top of DeepTTE, that learns to estimate the travel time of both the entire path and each local path simultaneously during the training phase. Extensive experiments on two trajectory datasets show our DeepTTE significantly outperforms the state-of-the-art methods.",
"title": ""
},
{
"docid": "35830166ddf17086a61ab07ec41be6b0",
"text": "As the need for Human Computer Interaction (HCI) designers increases so does the need for courses that best prepare students for their future work life. Multidisciplinary teamwork is what very frequently meets the graduates in their new work situations. Preparing students for such multidisciplinary work through education is not easy to achieve. In this paper, we investigate ways to engage computer science students, majoring in design, use, and interaction (with technology), in design practices through an advanced graduate course in interaction design. Here, we take a closer look at how prior embodied and explicit knowledge of HCI that all of the students have, combined with understanding of design practice through the course, shape them as human-computer interaction designers. We evaluate the results of the effort in terms of increase in creativity, novelty of ideas, body language when engaged in design activities, and in terms of perceptions of how well this course prepared the students for the work practice outside of the university. Keywords—HCI education; interaction design; studio; design education; multidisciplinary teamwork.",
"title": ""
},
{
"docid": "230d3cdc0bd444bfe5c910f32bd1a109",
"text": "Programming is taught as foundation module at the beginning of undergraduate studies and/or during foundation year. Learning introductory programming languages such as Pascal, Basic / C (procedural) and C++ / Java (object oriented) requires learners to understand the underlying programming paradigm, syntax, logic and the structure. Learning to program is considered hard for novice learners and it is important to understand what makes learning program so difficult and how students learn.\n The prevailing focus on multimedia learning objects provides promising approach to create better knowledge transfer. This project aims to investigate: (a) students' perception in learning to program and the difficulties. (b) effectiveness of multimedia learning objects in learning introductory programming language in a face-to-face learning environment.",
"title": ""
},
{
"docid": "06518637c2b44779da3479854fdbb84d",
"text": "OBJECTIVE\nThe relative short-term efficacy and long-term benefits of pharmacologic versus psychotherapeutic interventions have not been studied for posttraumatic stress disorder (PTSD). This study compared the efficacy of a selective serotonin reup-take inhibitor (SSRI), fluoxetine, with a psychotherapeutic treatment, eye movement desensitization and reprocessing (EMDR), and pill placebo and measured maintenance of treatment gains at 6-month follow-up.\n\n\nMETHOD\nEighty-eight PTSD subjects diagnosed according to DSM-IV criteria were randomly assigned to EMDR, fluoxetine, or pill placebo. They received 8 weeks of treatment and were assessed by blind raters posttreatment and at 6-month follow-up. The primary outcome measure was the Clinician-Administered PTSD Scale, DSM-IV version, and the secondary outcome measure was the Beck Depression Inventory-II. The study ran from July 2000 through July 2003.\n\n\nRESULTS\nThe psychotherapy intervention was more successful than pharmacotherapy in achieving sustained reductions in PTSD and depression symptoms, but this benefit accrued primarily for adult-onset trauma survivors. At 6-month follow-up, 75.0% of adult-onset versus 33.3% of child-onset trauma subjects receiving EMDR achieved asymptomatic end-state functioning compared with none in the fluoxetine group. For most childhood-onset trauma patients, neither treatment produced complete symptom remission.\n\n\nCONCLUSIONS\nThis study supports the efficacy of brief EMDR treatment to produce substantial and sustained reduction of PTSD and depression in most victims of adult-onset trauma. It suggests a role for SSRIs as a reliable first-line intervention to achieve moderate symptom relief for adult victims of childhood-onset trauma. Future research should assess the impact of lengthier intervention, combination treatments, and treatment sequencing on the resolution of PTSD in adults with childhood-onset trauma.",
"title": ""
},
{
"docid": "0a2d2a018348f1740a086977cf19ceb4",
"text": "This paper describes the design of UART (universal asynchronous receiver transmitter) based on VHDL. As UART is consider as a low speed, low cost data exchange between computer and peripherals.[1].To overcome the problem of low speed data transmission , a 16 bit UART is proposed in this paper. It works on 9600bps baud rate. This will result in increased the speed of UART.Whole design is simulated with Xilinx ISE8.2i software and results are completely consistent with UART protocol. Keywords— Baud rate generator, HDL, ISE8.2i, Receiver, Serial communication, Transmitter, Xilinx.",
"title": ""
},
{
"docid": "b84f84961c655ea98920513bf3074241",
"text": "This study took place in Sakarya Anatolian High School, Profession High School and Vocational High School for Industry (SAPHPHVHfI) where a flexible and nonroutine organising style was tried to be realized. The management style was initiated on a study group at first, but then it helped the group to come out as natural team spontaneously. The main purpose of the study is to make an evaluation on five teams within the school where team (based) management has been experienced in accordance with Belbin (1981)’s team roles theory [9]. The study group of the research consists of 28 people. The data was obtained from observations, interviews and the answers given to the questions in Belbin Team Roles Self Perception Inventory (BTRSPI). Some of the findings of the study are; (1) There was no paralellism between the team and functional roles of the members of the mentioned five team, (2) The team roles were distributed equaly balanced but it was also found that most of the roles were played by the members who were less inclined to play it, (3) The there were very few members who played plant role within the teams and there were nearly no one who were inclined to play leader role.",
"title": ""
},
{
"docid": "19fbd4a685e7fc8c299447644f496d5f",
"text": "The creation of the e-learning web services are increasingly growing. Therefore, their discovery is a very important challenge. The choice of the e-learning web services depend, generally, on the pedagogic, the financial and the technological constraints. The Learning Quality ontology extends existing ontology such as OWL-S to provide a semantically rich description of these constraints. However, due to the diversity of web services customers, other parameters must be considered during the discovery process, such as their preferences. For this purpose, the user profile takes into account to increase the degree of relevance of discovery results. We also present a modeling scenario to illustrate how our ontology can be used.",
"title": ""
}
] |
scidocsrr
|
15a2ffc1ca94feb12059ba5d4285a66c
|
Learning Decision Trees Using the Area Under the ROC Curve
|
[
{
"docid": "e9017607252973b36f9d4c3c659fe858",
"text": "In this paper, we address the problem of retrospectively pruning decision trees induced from data, according to a topdown approach. This problem has received considerable attention in the areas of pattern recognition and machine learning, and many distinct methods have been proposed in literature. We make a comparative study of six well-known pruning methods with the aim of understanding their theoretical foundations, their computational complexity, and the strengths and weaknesses of their formulation. Comments on the characteristics of each method are empirically supported. In particular, a wide experimentation performed on several data sets leads us to opposite conclusions on the predictive accuracy of simplified trees from some drawn in the literature. We attribute this divergence to differences in experimental designs. Finally, we prove and make use of a property of the reduced error pruning method to obtain an objective evaluation of the tendency to overprune/underprune observed in each method. Index Terms —Decision trees, top-down induction of decision trees, simplification of decision trees, pruning and grafting operators, optimal pruning, comparative studies. —————————— ✦ ——————————",
"title": ""
},
{
"docid": "a9bc9d9098fe852d13c3355ab6f81edb",
"text": "The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.",
"title": ""
}
] |
[
{
"docid": "9a48e31b5911e68b11c846d543f897be",
"text": "Today’s smartphone users face a security dilemma: many apps they install operate on privacy-sensitive data, although they might originate from developers whose trustworthiness is hard to judge. Researchers have addressed the problem with more and more sophisticated static and dynamic analysis tools as an aid to assess how apps use private user data. Those tools, however, rely on the manual configuration of lists of sources of sensitive data as well as sinks which might leak data to untrusted observers. Such lists are hard to come by. We thus propose SUSI, a novel machine-learning guided approach for identifying sources and sinks directly from the code of any Android API. Given a training set of hand-annotated sources and sinks, SUSI identifies other sources and sinks in the entire API. To provide more fine-grained information, SUSI further categorizes the sources (e.g., unique identifier, location information, etc.) and sinks (e.g., network, file, etc.). For Android 4.2, SUSI identifies hundreds of sources and sinks with over 92% accuracy, many of which are missed by current information-flow tracking tools. An evaluation of about 11,000 malware samples confirms that many of these sources and sinks are indeed used. We furthermore show that SUSI can reliably classify sources and sinks even in new, previously unseen Android versions and components like Google Glass or",
"title": ""
},
{
"docid": "b15dcda2b395d02a2df18f6d8bfa3b19",
"text": "We present a method for human pose tracking that learns explicitly about the dynamic effects of human motion on joint appearance. In contrast to previous techniques which employ generic tools such as dense optical flow or spatiotemporal smoothness constraints to pass pose inference cues between frames, our system instead learns to predict joint displacements from the previous frame to the current frame based on the possibly changing appearance of relevant pixels surrounding the corresponding joints in the previous frame. This explicit learning of pose deformations is formulated by incorporating concepts from human pose estimation into an optical flow-like framework. With this approach, state-of-the-art performance is achieved on standard benchmarks for various pose tracking tasks including 3D body pose tracking in RGB video, 3D hand pose tracking in depth sequences, and 3D hand gesture tracking in RGB video.",
"title": ""
},
{
"docid": "8025825afec9258d9a0a3da1f609f4ef",
"text": "The task of measuring sentence similarity is defined as determining how similar the meanings of two sentences are. Computing sentence similarity is not a trivial task, due to the variability of natural language expressions. Measuring semantic similarity of sentences is closely related to semantic similarity between words. It makes a relationship between a word and the sentence through their meanings. The intention is to enhance the concepts of semantics over the syntactic measures that are able to categorize the pair of sentences effectively. Semantic similarity plays a vital role in Natural language processing, Informational Retrieval, Text Mining, Q & A systems, text-related research and application area. Traditional similarity measures are based on the syntactic features and other path based measures. In this project, we evaluated and tested three different semantic similarity approaches like cosine similarity, path based approach (wu – palmer and shortest path based), and feature based approach. Our proposed approaches exploits preprocessing of pair of sentences which identifies the bag of words and then applying the similarity measures like cosine similarity, path based similarity measures. In our approach the main contributions are comparison of existing similarity measures and feature based measure based on Wordnet. In feature based approach we perform the tagging and lemmatization and generates the similarity score based on the nouns and verbs. We evaluate our project output by comparing the existing measures based on different thresholds and comparison between three approaches. Finally we conclude that feature based measure generates better semantic score.",
"title": ""
},
{
"docid": "cf1720877ddc4400bdce2a149b5ec8b4",
"text": "How do we find patterns in author-keyword associations, evolving over time? Or in data cubes (tensors), with product-branchcustomer sales information? And more generally, how to summarize high-order data cubes (tensors)? How to incrementally update these patterns over time? Matrix decompositions, like principal component analysis (PCA) and variants, are invaluable tools for mining, dimensionality reduction, feature selection, rule identification in numerous settings like streaming data, text, graphs, social networks, and many more settings. However, they have only two orders (i.e., matrices, like author and keyword in the previous example).\n We propose to envision such higher-order data as tensors, and tap the vast literature on the topic. However, these methods do not necessarily scale up, let alone operate on semi-infinite streams. Thus, we introduce a general framework, incremental tensor analysis (ITA), which efficiently computes a compact summary for high-order and high-dimensional data, and also reveals the hidden correlations. Three variants of ITA are presented: (1) dynamic tensor analysis (DTA); (2) streaming tensor analysis (STA); and (3) window-based tensor analysis (WTA). In paricular, we explore several fundamental design trade-offs such as space efficiency, computational cost, approximation accuracy, time dependency, and model complexity.\n We implement all our methods and apply them in several real settings, such as network anomaly detection, multiway latent semantic indexing on citation networks, and correlation study on sensor measurements. Our empirical studies show that the proposed methods are fast and accurate and that they find interesting patterns and outliers on the real datasets.",
"title": ""
},
{
"docid": "fe7668dd82775cf02116faacd1dd945f",
"text": "In the last years, the advent of unmanned aerial vehicles (UAVs) for civilian remote sensing purposes has generated a lot of interest because of the various new applications they can offer. One of them is represented by the automatic detection and counting of cars. In this paper, we propose a novel car detection method. It starts with a feature extraction process based on scalar invariant feature transform (SIFT) thanks to which a set of keypoints is identified in the considered image and opportunely described. Successively, the process discriminates between keypoints assigned to cars and those associated with all remaining objects by means of a support vector machine (SVM) classifier. Experimental results have been conducted on a real UAV scene. They show how the proposed method allows providing interesting detection performances.",
"title": ""
},
{
"docid": "5a063c2373aa849b59e20e6115a4df54",
"text": "A GUI skeleton is the starting point for implementing a UI design image. To obtain a GUI skeleton from a UI design image, developers have to visually understand UI elements and their spatial layout in the image, and then translate this understanding into proper GUI components and their compositions. Automating this visual understanding and translation would be beneficial for bootstraping mobile GUI implementation, but it is a challenging task due to the diversity of UI designs and the complexity of GUI skeletons to generate. Existing tools are rigid as they depend on heuristically-designed visual understanding and GUI generation rules. In this paper, we present a neural machine translator that combines recent advances in computer vision and machine translation for translating a UI design image into a GUI skeleton. Our translator learns to extract visual features in UI images, encode these features' spatial layouts, and generate GUI skeletons in a unified neural network framework, without requiring manual rule development. For training our translator, we develop an automated GUI exploration method to automatically collect large-scale UI data from real-world applications. We carry out extensive experiments to evaluate the accuracy, generality and usefulness of our approach.",
"title": ""
},
{
"docid": "3d862e488798629d633f78260a569468",
"text": "Training workshops and professional meetings are important tools for capacity building and professional development. These social events provide professionals and educators a platform where they can discuss and exchange constructive ideas, and receive feedback. In particular, competition-based training workshops where participants compete on solving similar and common challenging problems are effective tools for stimulating students’ learning and aspirations. This paper reports the results of a two-day training workshop where memory and disk forensics were taught using a competition-based security educational tool. The workshop included training sessions for professionals, educators, and students to learn features of Tracer FIRE, a competition-based digital forensics and assessment tool, developed by Sandia National Laboratories. The results indicate that competitionbased training can be very effective in stimulating students’ motivation to learn. However, extra caution should be taken into account when delivering these types of training workshops. Keywords-component; cyber security, digital forenciscs, partcipatory training workshop, competition-based learning,",
"title": ""
},
{
"docid": "95d1a35068e7de3293f8029e8b8694f9",
"text": "Botnet is one of the major threats on the Internet for committing cybercrimes, such as DDoS attacks, stealing sensitive information, spreading spams, etc. It is a challenging issue to detect modern botnets that are continuously improving for evading detection. In this paper, we propose a machine learning based botnet detection system that is shown to be effective in identifying P2P botnets. Our approach extracts convolutional version of effective flow-based features, and trains a classification model by using a feed-forward artificial neural network. The experimental results show that the accuracy of detection using the convolutional features is better than the ones using the traditional features. It can achieve 94.7% of detection accuracy and 2.2% of false positive rate on the known P2P botnet datasets. Furthermore, our system provides an additional confidence testing for enhancing performance of botnet detection. It further classifies the network traffic of insufficient confidence in the neural network. The experiment shows that this stage can increase the detection accuracy up to 98.6% and decrease the false positive rate up to 0.5%.",
"title": ""
},
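The botnet-detection passage above combines flow-based features, a feed-forward network, and a confidence test. A hedged sketch of that shape, on synthetic data and with an assumed 0.9 confidence threshold (not a value taken from the paper), could look like this:

```python
# Feed-forward classifier over per-flow features, plus a confidence threshold
# that routes low-confidence flows to a second-stage check. Synthetic data only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# toy stand-in for flow features: duration, bytes up/down, packet counts, etc.
X = rng.normal(size=(2000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
model.fit(X[:1500], y[:1500])

proba = model.predict_proba(X[1500:])
confident = np.max(proba, axis=1) >= 0.9          # assumed confidence threshold
print("accuracy on confident flows:",
      np.mean(proba[confident].argmax(axis=1) == y[1500:][confident]))
print("flows deferred to the second-stage check:", int((~confident).sum()))
```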
{
"docid": "802b1bf3a263d9641dc7dc689f7eab10",
"text": "Type I membrane oscillators such as the Connor model (Connor et al. 1977) and the Morris-Lecar model (Morris and Lecar 1981) admit very low frequency oscillations near the critical applied current. Hansel et al. (1995) have numerically shown that synchrony is difficult to achieve with these models and that the phase resetting curve is strictly positive. We use singular perturbation methods and averaging to show that this is a general property of Type I membrane models. We show in a limited sense that so called Type II resetting occurs with models that obtain rhythmicity via a Hopf bifurcation. We also show the differences between synapses that act rapidly and those that act slowly and derive a canonical form for the phase interactions.",
"title": ""
},
{
"docid": "2ceedf1be1770938c94892c80ae956e4",
"text": "Although there is interest in the educational potential of online multiplayer games and virtual worlds, there is still little evidence to explain specifically what and how people learn from these environments. This paper addresses this issue by exploring the experiences of couples that play World of Warcraft together. Learning outcomes were identified (involving the management of ludic, social and material resources) along with learning processes, which followed Wenger’s model of participation in Communities of Practice. Comparing this with existing literature suggests that productive comparisons can be drawn with the experiences of distance education students and the social pressures that affect their participation. Introduction Although there is great interest in the potential that computer games have in educational settings (eg, McFarlane, Sparrowhawk & Heald, 2002), and their relevance to learning more generally (eg, Gee, 2003), there has been relatively little in the way of detailed accounts of what is actually learnt when people play (Squire, 2002), and still less that relates such learning to formal education. In this paper, we describe a study that explores how people learn when they play the massively multiplayer online role-playing game (MMORPG), World of Warcraft. Detailed, qualitative research was undertaken with couples to explore their play, adopting a social perspective on learning. The paper concludes with a discussion that relates this to formal curricula and considers the implications for distance learning. British Journal of Educational Technology Vol 40 No 3 2009 444–457 doi:10.1111/j.1467-8535.2009.00948.x © 2009 The Authors. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Background Researchers have long been interested in games and learning. There is, for example, a tradition of work within psychology exploring what makes games motivating, and relating this to learning (eg, Malone & Lepper, 1987). Games have been recently featured in mainstream educational policy (eg, DfES, 2005), and it has been suggested (eg, Gee, 2003) that they provide a model that should inform educational practice more generally. However, research exploring how games can be used in formal education suggests that the potential value of games to support learning is not so easy to realise. McFarlane et al (2002, p. 16), for example, argued that ‘the greatest obstacle to integrating games into the curriculum is the mismatch between the skills and knowledge developed in games, and those recognised explicitly within the school system’. Mitchell and Savill-Smith (2004) noted that although games have been used to support various kinds of learning (eg, recall of content, computer literacy, strategic skills), such uses were often problematic, being complicated by the need to integrate games into existing educational contexts. Furthermore, games specifically designed to be educational were ‘typically disliked’ (p. 44) as well as being expensive to produce. Until recently, research on the use of games in education tended to focus on ‘stand alone’ or single player games. Such games can, to some extent, be assessed in terms of their content coverage or instructional design processes, and evaluated for their ‘fit’ with a given curriculum (eg, Kirriemuir, 2002). 
Gaming, however, is generally a social activity, and this is even more apparent when we move from a consideration of single player games to a focus on multiplayer, online games. Viewing games from a social perspective opens the possibility of understanding learning as a social achievement, not just a process of content acquisition or skills development (Squire, 2002). In this study, we focus on a particular genre of online, multiplayer game: an MMORPG. MMORPGs incorporate structural elements drawn from table-top role-playing games (Dungeons & Dragons being the classic example). Play takes place in an expansive and persistent graphically rendered world. Players form teams and guilds, undertake group missions, meet in banks and auction houses, chat, congregate in virtual cities and engage in different modes of play, which involve various forms of collaboration and competition. As Squire noted (2002), socially situated accounts of actual learning in games (as opposed to what they might, potentially, help people to learn) have been lacking, partly because the topic is so complex. How, indeed, should the ‘game’ be understood—is it limited to the rules, or the player’s interactions with these rules? Does it include other players, and all possible interactions, and extend to out-of-game related activities and associated materials such as fan forums? Such questions have methodological implications, and hint at the ambiguities that educators working with virtual worlds might face (Carr, Oliver & Burn, 2008). Learning in virtual worlds 445 © 2009 The Authors. Journal compilation © 2009 Becta. Work in this area is beginning to emerge, particularly in relation to the learning and mentoring that takes place within player ‘guilds’ and online clans (see Galarneau, 2005; Steinkuehler, 2005). However, it is interesting to note that the research emerging from a digital game studies perspective, including much of the work cited thus far, is rarely utilised by educators researching the pedagogic potentials of virtual worlds such as Second Life. This study is informed by and attempts to speak to both of these communities. Methodology The purpose of this study was to explore how people learn in such virtual worlds in general. It was decided that focusing on a MMORPG such as World of Warcraft would be practical and offer a rich opportunity to study learning. MMORPGs are games; they have rules and goals, and particular forms of progression. Expertise in a virtual world such as Second Life is more dispersed, because the range of activities is that much greater (encompassing building, playing, scripting, creating machinima or socialising, for instance). Each of these activities would involve particular forms of expertise. The ‘curriculum’ proposed by World of Warcraft is more specified. It was important to approach learning practices in this game without divorcing such phenomena from the real-world contexts in which play takes place. In order to study players’ accounts of learning and the links between their play and other aspects of their social lives, we sought participants who would interact with each other both in the context of the game and outside of it. To this end, we recruited couples that play together in the virtual environment of World of Warcraft, while sharing real space. 
This decision was taken to manage the potential complexity of studying social settings: couples were the simplest stable social formation that we could identify who would interact both in the context of the game and outside of this too. Interviews were conducted with five couples. These were theoretically sampled, to maximise diversity in players’ accounts (as with any theoretically sampled study, this means that no claims can be made about prevalence or typicality). Players were recruited through online guilds and real-world social networks. The first two sets of participants were sampled for convenience (two heterosexual couples); the rest were invited to participate in order to broaden this sample (one couple was chosen because they shared a single account, one where a partner had chosen to stop playing and one mother–son pairing). All participants were adults, and conventional ethical procedures to ensure informed consent were followed, as specified in the British Educational Research Association guidelines. The couples were interviewed in the game world at a location of their choosing. The interviews, which were semi-structured, were chat-logged and each lasted 60–90 minutes. The resulting transcripts were split into self-contained units (typically a single statement, or a question and answer, or a short exchange) and each was categorised 446 British Journal of Educational Technology Vol 40 No 3 2009 © 2009 The Authors. Journal compilation © 2009 Becta. thematically. The initial categories were then jointly reviewed in order to consolidate and refine them, cross-checking them with the source transcripts to ensure their relevance and coherence. At this stage, the categories included references to topics such as who started first, self-assessments of competence, forms of help, guilds, affect, domestic space and assets, ‘alts’ (multiple characters) and so on. These were then reviewed to develop a single category that might provide an overview or explanation of the process. It should be noted that although this approach was informed by ‘grounded theory’ processes as described in Glaser and Strauss (1967), it does not share their positivistic stance on the status of the model that has been developed. Instead, it accords more closely with the position taken by Charmaz (2000), who recognises the central role of the researcher in shaping the data collected and making sense of it. What is produced therefore is seen as a socially constructed model, based on personal narratives, rather than an objective account of an independent reality. Reviewing the categories that emerged in this case led to ‘management of resources’ being selected as a general marker of learning. As players moved towards greater competence, they identified and leveraged an increasingly complex array of in-game resources, while also negotiating real-world resources and demands. To consider this framework in greater detail, ‘management of resources’ was subdivided into three categories: ludic (concerning the skills, knowledge and practices of game play), social and material (concerning physical resources such as the embodied setting for play) (see Carr & Oliver, 2008). Using this explanation of learning, the transcripts were re-reviewed in order to ",
"title": ""
},
{
"docid": "39673b789ee8d8c898c93b7627b31f0a",
"text": "In this position paper, we initiate a systematic treatment of reaching consensus in a permissionless network. We prove several simple but hopefully insightful lower bounds that demonstrate exactly why reaching consensus in a permission-less setting is fundamentally more difficult than the classical, permissioned setting. We then present a simplified proof of Nakamoto's blockchain which we recommend for pedagogical purposes. Finally, we survey recent results including how to avoid well-known painpoints in permissionless consensus, and how to apply core ideas behind blockchains to solve consensus in the classical, permissioned setting and meanwhile achieve new properties that are not attained by classical approaches.",
"title": ""
},
{
"docid": "590ad5ce089e824d5e9ec43c54fa3098",
"text": "The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by definingcausal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that arecausally related. Because causal memory isweakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory.",
"title": ""
},
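The causal memory passage above can be illustrated with the standard vector-clock delivery rule: apply a remote write only when it is the next write from its sender and all of its other dependencies are already applied locally. The sketch below is a generic illustration of that rule, not the paper's message-passing implementation.

```python
# Causal delivery for a replicated key-value store: remote writes wait in a
# pending set until every write they causally depend on has been applied.
class CausalReplica:
    def __init__(self, replica_id, n_replicas):
        self.id = replica_id
        self.clock = [0] * n_replicas      # vector clock of writes applied so far
        self.store = {}
        self.pending = []                  # messages waiting for their dependencies

    def local_write(self, key, value):
        self.clock[self.id] += 1
        self.store[key] = value
        return (self.id, list(self.clock), key, value)   # message to broadcast

    def _ready(self, sender, ts):
        next_from_sender = ts[sender] == self.clock[sender] + 1
        deps_satisfied = all(ts[i] <= self.clock[i]
                             for i in range(len(ts)) if i != sender)
        return next_from_sender and deps_satisfied

    def receive(self, message):
        self.pending.append(message)
        applied = True
        while applied:                     # keep applying whatever became deliverable
            applied = False
            for sender, ts, key, value in list(self.pending):
                if self._ready(sender, ts):
                    self.store[key] = value
                    self.clock[sender] = ts[sender]
                    self.pending.remove((sender, ts, key, value))
                    applied = True

# the write of y on replica b causally follows the write of x it saw from a
a, b = CausalReplica(0, 2), CausalReplica(1, 2)
m1 = a.local_write("x", 1)
b.receive(m1)
m2 = b.local_write("y", 2)
a.receive(m2)
print(a.store, b.store)                    # {'x': 1, 'y': 2} {'x': 1, 'y': 2}
```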
{
"docid": "01800567648367a34aa80a3161a21871",
"text": "Single-image haze-removal is challenging due to limited information contained in one single image. Previous solutions largely rely on handcrafted priors to compensate for this deficiency. Recent convolutional neural network (CNN) models have been used to learn haze-related priors but they ultimately work as advanced image filters. In this paper we propose a novel semantic approach towards single image haze removal. Unlike existing methods, we infer color priors based on extracted semantic features. We argue that semantic context can be exploited to give informative cues for (a) learning color prior on clean image and (b) estimating ambient illumination. This design allowed our model to recover clean images from challenging cases with strong ambiguity, e.g. saturated illumination color and sky regions in image. In experiments, we validate our approach upon synthetic and real hazy images, where our method showed superior performance over state-of-the-art approaches, suggesting semantic information facilitates the haze removal task.",
"title": ""
},
{
"docid": "3afa5356d956e2a525836b873442aa6b",
"text": "The problem of secure data processing by means of a neural network (NN) is addressed. Secure processing refers to the possibility that the NN owner does not get any knowledge about the processed data since they are provided to him in encrypted format. At the same time, the NN itself is protected, given that its owner may not be willing to disclose the knowledge embedded within it. The considered level of protection ensures that the data provided to the network and the network weights and activation functions are kept secret. Particular attention is given to prevent any disclosure of information that could bring a malevolent user to get access to the NN secrets by properly inputting fake data to any point of the proposed protocol. With respect to previous works in this field, the interaction between the user and the NN owner is kept to a minimum with no resort to multiparty computation protocols.",
"title": ""
},
{
"docid": "8fb37cad9ad964598ed718f0c32eaff1",
"text": "A planar W-band monopulse antenna array is designed based on the substrate integrated waveguide (SIW) technology. The sum-difference comparator, 16-way divider and 32 × 32 slot array antenna are all integrated on a single dielectric substrate in the compact layout through the low-cost PCB process. Such a substrate integrated monopulse array is able to operate over 93 ~ 96 GHz with narrow-beam and high-gain. The maximal gain is measured to be 25.8 dBi, while the maximal null-depth is measured to be - 43.7 dB. This SIW monopulse antenna not only has advantages of low-cost, light, easy-fabrication, etc., but also has good performance validated by measurements. It presents an excellent candidate for W-band directional-finding systems.",
"title": ""
},
{
"docid": "7f27e9b29e6ed2800ef850e6022d29ba",
"text": "In 2004, the US Center for Disease Control (CDC) published a paper showing that there is no link between the age at which a child is vaccinated with MMR and the vaccinated children's risk of a subsequent diagnosis of autism. One of the authors, William Thompson, has now revealed that statistically significant information was deliberately omitted from the paper. Thompson first told Dr S Hooker, a researcher on autism, about the manipulation of the data. Hooker analysed the raw data from the CDC study afresh. He confirmed that the risk of autism among African American children vaccinated before the age of 2 years was 340% that of those vaccinated later.",
"title": ""
},
{
"docid": "7e325afeaaf3cc548bca023e35fbd203",
"text": "The short length of the estrous cycle of rats makes them ideal for investigation of changes occurring during the reproductive cycle. The estrous cycle lasts four days and is characterized as: proestrus, estrus, metestrus and diestrus, which may be determined according to the cell types observed in the vaginal smear. Since the collection of vaginal secretion and the use of stained material generally takes some time, the aim of the present work was to provide researchers with some helpful considerations about the determination of the rat estrous cycle phases in a fast and practical way. Vaginal secretion of thirty female rats was collected every morning during a month and unstained native material was observed using the microscope without the aid of the condenser lens. Using the 10 x objective lens, it was easier to analyze the proportion among the three cellular types, which are present in the vaginal smear. Using the 40 x objective lens, it is easier to recognize each one of these cellular types. The collection of vaginal lavage from the animals, the observation of the material, in the microscope, and the determination of the estrous cycle phase of all the thirty female rats took 15-20 minutes.",
"title": ""
},
{
"docid": "5df22a15a1bd768782214647b1b87ebe",
"text": "Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.",
"title": ""
},
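To make the spatial pooler passage above concrete, here is a toy, numpy-only step of the same flavour: overlap through connected synapses, global k-winners-take-all, and a Hebbian permanence update. Boosting and topology are omitted, and every parameter value is an arbitrary assumption rather than a reference HTM setting.

```python
# Toy spatial-pooler-style step: overlap, k-winners, Hebbian permanence update.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_columns, k_winners = 100, 50, 5
permanences = rng.uniform(0.0, 1.0, size=(n_columns, n_inputs))
CONNECTED, P_INC, P_DEC = 0.5, 0.05, 0.02       # arbitrary illustrative values

def spatial_pool_step(input_bits):
    connected = (permanences >= CONNECTED).astype(int)
    overlap = connected @ input_bits                       # per-column overlap score
    winners = np.argsort(overlap)[::-1][:k_winners]        # global k-winners-take-all
    sdr = np.zeros(n_columns, dtype=int)
    sdr[winners] = 1
    # Hebbian update: winners reinforce synapses on active bits, punish the rest
    for c in winners:
        permanences[c, input_bits == 1] += P_INC
        permanences[c, input_bits == 0] -= P_DEC
    np.clip(permanences, 0.0, 1.0, out=permanences)
    return sdr

for _ in range(20):
    x = (rng.random(n_inputs) < 0.1).astype(int)           # sparse binary input
    sdr = spatial_pool_step(x)
print("active columns:", np.flatnonzero(sdr))
```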
{
"docid": "acdd0043b764fe8bb9904ea6ca71e5cf",
"text": "We investigate the task of 2D articulated human pose estimation in unconstrained still images. This is extremely challenging because of variation in pose, anatomy, clothing, and imaging conditions. Current methods use simple models of body part appearance and plausible configurations due to limitations of available training data and constraints on computational expense. We show that such models severely limit accuracy. Building on the successful pictorial structure model (PSM) we propose richer models of both appearance and pose, using state-of-the-art discriminative classifiers without introducing unacceptable computational expense. We introduce a new annotated database of challenging consumer images, an order of magnitude larger than currently available datasets, and demonstrate over 50% relative improvement in pose estimation accuracy over a stateof-the-art method.",
"title": ""
}
] |
scidocsrr
|
ab640c04dd25df53ae412ac5ce28c102
|
Neural Stance Detectors for Fake News Challenge
|
[
{
"docid": "0201a5f0da2430ec392284938d4c8833",
"text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.",
"title": ""
},
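One way to picture the multi-perspective matching described in the BiMPM passage above is the "full matching" operation: each timestep of P is compared against the final state of Q under several learned perspective weightings. The PyTorch sketch below shows only that single operation with made-up sizes; the full model adds the other matching strategies, both directions, and the aggregation BiLSTM.

```python
# One multi-perspective "full matching" operation: cosine similarity between
# perspective-weighted P timesteps and the perspective-weighted last state of Q.
import torch
import torch.nn.functional as F

def multi_perspective_full_match(p_states, q_last, perspectives):
    """
    p_states:     (batch, len_p, hidden)  per-timestep encodings of P
    q_last:       (batch, hidden)         final encoding of Q
    perspectives: (l, hidden)             learnable perspective weights
    returns       (batch, len_p, l)       cosine match under each perspective
    """
    p = p_states.unsqueeze(2) * perspectives                    # (B, Lp, l, H)
    q = (q_last.unsqueeze(1) * perspectives).unsqueeze(1)       # (B, 1,  l, H)
    q = q.expand(-1, p_states.size(1), -1, -1)                  # (B, Lp, l, H)
    return F.cosine_similarity(p, q, dim=-1)                    # (B, Lp, l)

batch, len_p, hidden, l = 4, 7, 32, 10
p_states = torch.randn(batch, len_p, hidden)
q_last = torch.randn(batch, hidden)
perspectives = torch.nn.Parameter(torch.randn(l, hidden))
match = multi_perspective_full_match(p_states, q_last, perspectives)
print(match.shape)   # torch.Size([4, 7, 10])
```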
{
"docid": "a0e4080652269445c6e36b76d5c8cd09",
"text": "Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP. A key factor impeding its solution by machine learned systems is the limited availability of human-annotated data. Hermann et al. (2015) seek to solve this problem by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and show that a neural network can then be trained to give good performance on this task. In this paper, we conduct a thorough examination of this new reading comprehension task. Our primary aim is to understand what depth of language understanding is required to do well on this task. We approach this from one side by doing a careful hand-analysis of a small subset of the problems and from the other by showing that simple, carefully designed systems can obtain accuracies of 72.4% and 75.8% on these two datasets, exceeding current state-of-the-art results by over 5% and approaching what we believe is the ceiling for performance on this task.1",
"title": ""
}
] |
[
{
"docid": "c4f0e371ea3950e601f76f8d34b736e3",
"text": "Discretization is an essential preprocessing technique used in many knowledge discovery and data mining tasks. Its main goal is to transform a set of continuous attributes into discrete ones, by associating categorical values to intervals and thus transforming quantitative data into qualitative data. In this manner, symbolic data mining algorithms can be applied over continuous data and the representation of information is simplified, making it more concise and specific. The literature provides numerous proposals of discretization and some attempts to categorize them into a taxonomy can be found. However, in previous papers, there is a lack of consensus in the definition of the properties and no formal categorization has been established yet, which may be confusing for practitioners. Furthermore, only a small set of discretizers have been widely considered, while many other methods have gone unnoticed. With the intention of alleviating these problems, this paper provides a survey of discretization methods proposed in the literature from a theoretical and empirical perspective. From the theoretical perspective, we develop a taxonomy based on the main properties pointed out in previous research, unifying the notation and including all the known methods up to date. Empirically, we conduct an experimental study in supervised classification involving the most representative and newest discretizers, different types of classifiers, and a large number of data sets. The results of their performances measured in terms of accuracy, number of intervals, and inconsistency have been verified by means of nonparametric statistical tests. Additionally, a set of discretizers are highlighted as the best performing ones.",
"title": ""
},
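For readers unfamiliar with the discretizers surveyed above, the two simplest unsupervised baselines — equal-width and equal-frequency binning — can be written in a few lines. This is a generic illustration, not code from the survey; supervised methods (e.g., entropy-based cut points) differ.

```python
# Equal-width and equal-frequency discretization of a continuous attribute.
import numpy as np

def equal_width_bins(values, n_bins):
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    return np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1), edges

def equal_frequency_bins(values, n_bins):
    quantiles = np.quantile(values, np.linspace(0, 1, n_bins + 1))
    return np.clip(np.digitize(values, quantiles[1:-1]), 0, n_bins - 1), quantiles

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1000)      # skewed continuous attribute

ew_labels, ew_edges = equal_width_bins(x, 5)
ef_labels, ef_edges = equal_frequency_bins(x, 5)
print("equal width counts:    ", np.bincount(ew_labels, minlength=5))
print("equal frequency counts:", np.bincount(ef_labels, minlength=5))
```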
{
"docid": "c5122000c9d8736cecb4d24e6f56aab8",
"text": "New credit cards containing Europay, MasterCard and Visa (EMV) chips for enhanced security used in-store purchases rather than online purchases have been adopted considerably. EMV supposedly protects the payment cards in such a way that the computer chip in a card referred to as chip-and-pin cards generate a unique one time code each time the card is used. The one time code is designed such that if it is copied or stolen from the merchant system or from the system terminal cannot be used to create a counterfeit copy of that card or counterfeit chip of the transaction. However, in spite of this design, EMV technology is not entirely foolproof from failure. In this paper we discuss the issues, failures and fraudulent cases associated with EMV Chip-And-Card technology.",
"title": ""
},
{
"docid": "0d8c38444954a0003117e7334195cb00",
"text": "Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.",
"title": ""
},
{
"docid": "bccb8e4cf7639dbcd3896e356aceec8d",
"text": "Over 50 million people worldwide suffer from epilepsy. Traditional diagnosis of epilepsy relies on tedious visual screening by highly trained clinicians from lengthy EEG recording that contains the presence of seizure (ictal) activities. Nowadays, there are many automatic systems that can recognize seizure-related EEG signals to help the diagnosis. However, it is very costly and inconvenient to obtain long-term EEG data with seizure activities, especially in areas short of medical resources. We demonstrate in this paper that we can use the interictal scalp EEG data, which is much easier to collect than the ictal data, to automatically diagnose whether a person is epileptic. In our automated EEG recognition system, we extract three classes of features from the EEG data and build Probabilistic Neural Networks (PNNs) fed with these features. We optimize the feature extraction parameters and combine these PNNs through a voting mechanism. As a result, our system achieves an impressive 94.07% accuracy.",
"title": ""
},
{
"docid": "129c1b9a723b062a52b821988d124486",
"text": "Modern applications employ text files widely for providing data storage in a readable format for applications ranging from database systems to mobile phones. Traditional text processing tools are built around a byte-at-a-time sequential processing model that introduces significant branch and cache miss penalties. Recent work has explored an alternative, transposed representation of text, Parabix (Parallel Bit Streams), to accelerate scanning and parsing using SIMD facilities. This paper advocates and develops Parabix as a general framework and toolkit, describing the software toolchain and run-time support that allows applications to exploit modern SIMD instructions for high performance text processing. The goal is to generalize the techniques to ensure that they apply across a wide variety of applications and architectures. The toolchain enables the application developer to write constructs assuming unbounded character streams and Parabix's code translator generates code based on machine specifics (e.g., SIMD register widths). The general argument in support of Parabix technology is made by a detailed performance and energy study of XML parsing across a range of processor architectures. Parabix exploits intra-core SIMD hardware and demonstrates 2×-7× speedup and 4× improvement in energy efficiency when compared with two widely used conventional software parsers, Expat and Apache-Xerces. SIMD implementations across three generations of x86 processors are studied including the new SandyBridge. The 256-bit AVX technology in Intel SandyBridge is compared with the well established 128-bit SSE technology to analyze the benefits and challenges of 3-operand instruction formats and wider SIMD hardware. Finally, the XML program is partitioned into pipeline stages to demonstrate that thread-level parallelism enables the application to exploit SIMD units scattered across the different cores, achieving improved performance (2× on 4 cores) while maintaining single-threaded energy levels.",
"title": ""
},
{
"docid": "7fcd8eee5f2dccffd3431114e2b0ed5a",
"text": "Crowdsourcing is becoming more and more important for commercial purposes. With the growth of crowdsourcing platforms like Amazon Mechanical Turk or Microworkers, a huge work force and a large knowledge base can be easily accessed and utilized. But due to the anonymity of the workers, they are encouraged to cheat the employers in order to maximize their income. Thus, this paper we analyze two widely used crowd-based approaches to validate the submitted work. Both approaches are evaluated with regard to their detection quality, their costs and their applicability to different types of typical crowdsourcing tasks.",
"title": ""
},
{
"docid": "ba4121003eb56d3ab6aebe128c219ab7",
"text": "Mediation is said to occur when a causal effect of some variable X on an outcome Y is explained by some intervening variable M. The authors recommend that with small to moderate samples, bootstrap methods (B. Efron & R. Tibshirani, 1993) be used to assess mediation. Bootstrap tests are powerful because they detect that the sampling distribution of the mediated effect is skewed away from 0. They argue that R. M. Baron and D. A. Kenny's (1986) recommendation of first testing the X --> Y association for statistical significance should not be a requirement when there is a priori belief that the effect size is small or suppression is a possibility. Empirical examples and computer setups for bootstrap analyses are provided.",
"title": ""
},
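The bootstrap procedure recommended in the mediation passage above is straightforward to sketch: resample cases, re-fit the a (X to M) and b (M to Y controlling for X) paths, and form a percentile interval for a*b. The example below uses synthetic data and a plain percentile interval; bias-corrected variants differ slightly.

```python
# Percentile-bootstrap confidence interval for the indirect (mediated) effect a*b.
import numpy as np

rng = np.random.default_rng(0)
n = 120
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)          # mediator partly driven by X
y = 0.5 * m + 0.1 * x + rng.normal(size=n)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                                   # slope of M on X
    X2 = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][1]                 # slope of Y on M given X
    return a * b

boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)                             # resample cases
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ~ {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
print("mediation supported" if lo > 0 or hi < 0 else "CI includes zero")
```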
{
"docid": "43e630794f1bce27688d2cedbb19f17d",
"text": "The systematic maintenance of mining machinery and equipment is the crucial factor for the proper functioning of a mine without production process interruption. For high-quality maintenance of the technical systems in mining, it is necessary to conduct a thorough analysis of machinery and accompanying elements in order to determine the critical elements in the system which are prone to failures. The risk assessment of the failures of system parts leads to obtaining precise indicators of failures which are also excellent guidelines for maintenance services. This paper presents a model of the risk assessment of technical systems failure based on the fuzzy sets theory, fuzzy logic and min–max composition. The risk indicators, severity, occurrence and detectability are analyzed. The risk indicators are given as linguistic variables. The model presented was applied for assessing the risk level of belt conveyor elements failure which works in severe conditions in a coal mine. Moreover, this paper shows the advantages of this model when compared to a standard procedure of RPN calculating – in the FMEA method of risk",
"title": ""
},
{
"docid": "7533347e8c5daf17eb09e64db0fa4394",
"text": "Android has become the most popular smartphone operating system. This rapidly increasing adoption of Android has resulted in significant increase in the number of malwares when compared with previous years. There exist lots of antimalware programs which are designed to effectively protect the users’ sensitive data in mobile systems from such attacks. In this paper, our contribution is twofold. Firstly, we have analyzed the Android malwares and their penetration techniques used for attacking the systems and antivirus programs that act against malwares to protect Android systems. We categorize many of the most recent antimalware techniques on the basis of their detection methods. We aim to provide an easy and concise view of the malware detection and protection mechanisms and deduce their benefits and limitations. Secondly, we have forecast Android market trends for the year up to 2018 and provide a unique hybrid security solution and take into account both the static and dynamic analysis an android application. Keywords—Android; Permissions; Signature",
"title": ""
},
{
"docid": "4ac26e974e2d3861659323ae2aa7323c",
"text": "Episacral lipoma is a small, tender subcutaneous nodule primarily occurring over the posterior iliac crest. Episacral lipoma is a significant and treatable cause of acute and chronic low back pain. Episacral lipoma occurs as a result of tears in the thoracodorsal fascia and subsequent herniation of a portion of the underlying dorsal fat pad through the tear. This clinical entity is common, and recognition is simple. The presence of a painful nodule with disappearance of pain after injection with anaesthetic, is diagnostic. Medication and physical therapy may not be effective. Local injection of the nodule with a solution of anaesthetic and steroid is effective in treating the episacral lipoma. Here we describe 2 patients with painful nodules over the posterior iliac crest. One patient complained of severe lower back pain radiating to the left lower extremity and this patient subsequently underwent disc operation. The other patient had been treated for greater trochanteric pain syndrome. In both patients, symptoms appeared to be relieved by local injection of anaesthetic and steroid. Episacral lipoma should be considered during diagnostic workup and in differential diagnosis of acute and chronic low back pain.",
"title": ""
},
{
"docid": "4244af4f70e49c3e08e3943a88c79645",
"text": "From a dynamic system point of view, bat locomotion stands out among other forms of flight. During a large part of bat wingbeat cycle the moving body is not in a static equilibrium. This is in sharp contrast to what we observe in other simpler forms of flight such as insects, which stay at their static equilibrium. Encouraged by biological examinations that have revealed bats exhibit periodic and stable limit cycles, this work demonstrates that one effective approach to stabilize articulated flying robots with bat morphology is locating feasible limit cycles for these robots; then, designing controllers that retain the closed-loop system trajectories within a bounded neighborhood of the designed periodic orbits. This control design paradigm has been evaluated in practice on a recently developed bio-inspired robot called Bat Bot (B2).",
"title": ""
},
{
"docid": "79833f074b2e06d5c56898ca3f008c00",
"text": "Regular expressions have served as the dominant workhorse of practical information extraction for several years. However, there has been little work on reducing the manual effort involved in building high-quality, complex regular expressions for information extraction tasks. In this paper, we propose ReLIE, a novel transformation-based algorithm for learning such complex regular expressions. We evaluate the performance of our algorithm on multiple datasets and compare it against the CRF algorithm. We show that ReLIE, in addition to being an order of magnitude faster, outperforms CRF under conditions of limited training data and cross-domain data. Finally, we show how the accuracy of CRF can be improved by using features extracted by ReLIE.",
"title": ""
},
{
"docid": "13ae9c0f1c802de86b80906558b27713",
"text": "Anaerobic saccharolytic bacteria thriving at high pH values were studied in a cellulose-degrading enrichment culture originating from the alkaline lake, Verkhneye Beloye (Central Asia). In situ hybridization of the enrichment culture with 16S rRNA-targeted probes revealed that abundant, long, thin, rod-shaped cells were related to Cytophaga. Bacteria of this type were isolated with cellobiose and five isolates were characterized. Isolates were thin, flexible, gliding rods. They formed a spherical cyst-like structure at one cell end during the late growth phase. The pH range for growth was 7.5–10.2, with an optimum around pH 8.5. Cultures produced a pinkish pigment tentatively identified as a carotenoid. Isolates did not degrade cellulose, indicating that they utilized soluble products formed by so far uncultured hydrolytic cellulose degraders. Besides cellobiose, the isolates utilized other carbohydrates, including xylose, maltose, xylan, starch, and pectin. The main organic fermentation products were propionate, acetate, and succinate. Oxygen, which was not used as electron acceptor, impaired growth. A representative isolate, strain Z-7010, with Marinilabilia salmonicolor as the closest relative, is described as a new genus and species, Alkaliflexus imshenetskii. This is the first cultivated alkaliphilic anaerobic member of the Cytophaga/Flavobacterium/Bacteroides phylum.",
"title": ""
},
{
"docid": "804322502b82ad321a0f97d6f83858ee",
"text": "Cheating is a real problem in the Internet of Things. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. The problem, however, isnt inherent in whether or not to embrace the idea of an open platform and open-source software, but to establish a methodology to verify the trustworthiness and control any access. This paper focuses on building an access control model and system based on trust computing. This is a new field of access control techniques which includes Access Control, Trust Computing, Internet of Things, network attacks, and cheating technologies. Nevertheless, the target access control systems can be very complex to manage. This paper presents an overview of the existing work on trust computing, access control models and systems in IoT. It not only summarizes the latest research progress, but also provides an understanding of the limitations and open issues of the existing work. It is expected to provide useful guidelines for future research. Access Control, Trust Management, Internet of Things Today, our world is characterized by increasing connectivity. Things in this world are increasingly being connected. Smart phones have started an era of global proliferation and rapid consumerization of smart devices. It is predicted that the next disruptive transformation will be the concept of ‘Internet of Things’ [2]. From networked computers to smart devices, and to connected people, we are now moving towards connected ‘things’. Items of daily use are being turned into smart devices as various sensors are embedded in consumer and enterprise equipment, industrial and household appliances and personal devices. Pervasive connectivity mechanisms build bridges between our clothing and vehicles. Interaction among these things/devices can happen with little or no human intervention, thereby conjuring an enormous network, namely the Internet of Things (IoT). One of the primary goals behind IoT is to sense and send data over remote locations to enable detection of significant events, and take relevant actions sooner rather than later [25]. This technological trend is being pursued actively in all areas including the medical and health care fields. IoT provides opportunities to dramatically improve many medical applications, such as glucose level sensing, remote health monitoring (e.g. electrocardiogram, blood pressure, body temperature, and oxygen saturation monitoring, etc), rehabilitation systems, medication management, and ambient assisted living systems. The connectivity offered by IoT extends from humanto-machine to machine-to-machine communications. The interconnected devices collect all kinds of data about patients. Intelligent and ubiquitous services can then be built upon the useful information extracted from the data. During the data aggregation, fusion, and analysis processes, user ar X iv :1 61 0. 01 06 5v 1 [ cs .C R ] 4 O ct 2 01 6 2 Z. Yunpeng and X. Wu privacy and information security become major concerns for IoT services and applications. Security breaches will seriously compromise user acceptance and consumption on IoT applications in the medical and health care areas. The large scale of integration of heterogeneous devices in IoT poses a great challenge for the provision of standard security services. 
Many IoT devices are vulnerable to attacks since no high-level intelligence can be enabled on these passive devices [10], and security vulnerabilities in products uncovered by researchers have spread from cars [13] to garage doors [9] and to skateboards [35]. Technological utopianism surrounding IoT was very real until the emergence of the Volkswagen emissions scandal [4]. The German conglomerate admitted installing software in its diesel cars that recognizes and identifies patterns when vehicles are being tested for nitrogen oxide emissions and cuts them so that they fall within the limits prescribed by US regulators (004 g/km). Once the test is over, the car returns to its normal state: emitting nitrogen oxides (nitric oxide and nitrogen dioxide) at up to 35 times the US legal limit. The focus of IoT is not the thing itself, but the data generated by the devices and the value therein. What Volkswagen has brought to light goes far beyond protecting data and privacy, preventing intrusion, and keeping the integrity of the data. It casts doubts on the credibility of the IoT industry and its ability to secure data, reach agreement on standards, or indeed guarantee that consumer privacy rights are upheld. All in all, IoT holds tremendous potential to improve our health, make our environment safer, boost productivity and efficiency, and conserve both water and energy. IoT needs to improve its trustworthiness, however, before it can be used to solve challenging economic and environmental problems tied to our social lives. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. If a node of IoT cheats, how does a system identify the cheating node and prevent a malicious attack from misbehaving nodes? This paper focuses on an access control mechanism that will only grant network access permission to trustworthy nodes. Embedding trust management into access control will improve the systems ability to discover untrustworthy participating nodes and prevent discriminatory attacks. There has been substantial research in this domain, most of which has been related to attacks like self-promotion and ballot stuffing where a node falsely promotes its importance and boosts the reputation of a malicious node (by providing good recommendations) to engage in a collusion-style attack. The traditional trust computation model is inefficient in differentiating a participant object in IoT, which is designed to win trust by cheating. In particular, the trust computation model will fail when a malicious node intelligently adjusts its behavior to hide its defect and obtain a higher trust value for its own gain. 1 Access Control Model and System IoT comprises the following three Access Control types Access Control in Internet of Things: A Survey 3 – Role-based access control (RBAC) – Credential-based access control (CBAC) — in order to access some resources and data, users require certain certificate information that falls into the following two types: 1. Attribute-Based access control (ABAC) : If a user has some special attributes, it is possible to access a particular resource or piece of data. 2. Capability-Based access control (Cap-BAC): A capability is a communicable, unforgeable rights markup, which corresponds to a value that uniquely specifies certain access rights to objects owned by subjects. – Trust-based access control (TBAC) In addition, there are also combinations of the aforementioned three methods. 
In order to improve the security of the system, some of the access control methods include encryption and key management mechanisms.",
"title": ""
},
{
"docid": "3d81867b694a7fa56383583d9ee2637f",
"text": "Elasticity is undoubtedly one of the most striking characteristics of cloud computing. Especially in the area of high performance computing (HPC), elasticity can be used to execute irregular and CPU-intensive applications. However, the on- the-fly increase/decrease in resources is more widespread in Web systems, which have their own IaaS-level load balancer. Considering the HPC area, current approaches usually focus on batch jobs or assumptions such as previous knowledge of application phases, source code rewriting or the stop-reconfigure-and-go approach for elasticity. In this context, this article presents AutoElastic, a PaaS-level elasticity model for HPC in the cloud. Its differential approach consists of providing elasticity for high performance applications without user intervention or source code modification. The scientific contributions of AutoElastic are twofold: (i) an Aging-based approach to resource allocation and deallocation actions to avoid unnecessary virtual machine (VM) reconfigurations (thrashing) and (ii) asynchronism in creating and terminating VMs in such a way that the application does not need to wait for completing these procedures. The prototype evaluation using OpenNebula middleware showed performance gains of up to 26 percent in the execution time of an application with the AutoElastic manager. Moreover, we obtained low intrusiveness for AutoElastic when reconfigurations do not occur.",
"title": ""
},
{
"docid": "f3f70e5ba87399e9d44bda293a231399",
"text": "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. To resolve the problem, in this paper we conduct a case study of 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages with even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.",
"title": ""
},
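The flat-versus-hierarchical comparison in the passage above can be mocked up with two scikit-learn setups: one 3-way classifier versus a legitimate/malicious stage followed by a spam/fake stage. Everything below — features, labels, and the logistic models — is synthetic stand-in material, not the paper's Twitter features.

```python
# Flat 3-way classification vs. a two-stage hierarchical classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 20))
y = rng.integers(0, 3, size=3000)          # 0 = legitimate, 1 = spam, 2 = fake
X[y == 1, :5] += 1.5                       # give each class a weak signature
X[y == 2, 5:10] += 1.5
X_tr, X_te, y_tr, y_te = X[:2400], X[2400:], y[:2400], y[2400:]

# flat: one 3-way classifier
flat = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# hierarchical: legitimate-vs-malicious, then spam-vs-fake on the malicious subset
stage1 = LogisticRegression(max_iter=1000).fit(X_tr, (y_tr > 0).astype(int))
stage2 = LogisticRegression(max_iter=1000).fit(X_tr[y_tr > 0], y_tr[y_tr > 0])

def hierarchical_predict(X):
    pred = np.zeros(len(X), dtype=int)
    malicious = stage1.predict(X) == 1
    if malicious.any():
        pred[malicious] = stage2.predict(X[malicious])
    return pred

print("flat accuracy:        ", np.mean(flat.predict(X_te) == y_te))
print("hierarchical accuracy:", np.mean(hierarchical_predict(X_te) == y_te))
```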
{
"docid": "477769b83e70f1d46062518b1d692664",
"text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.",
"title": ""
},
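A minimal example of the kind of attack evaluated above is the fast gradient sign method (FGSM). The sketch below applies one FGSM step to an untrained toy classifier purely to show the mechanics; the paper's setting instead attacks trained segmentation networks with a per-pixel loss.

```python
# One FGSM step: perturb the input in the direction of the loss gradient's sign.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
model.eval()
loss_fn = nn.CrossEntropyLoss()

def fgsm(image, label, epsilon):
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    # one signed-gradient step, clipped back to the valid pixel range
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

x = torch.rand(1, 3, 32, 32)               # stand-in input image
y = torch.tensor([3])                      # stand-in target label
x_adv = fgsm(x, y, epsilon=8 / 255)
print("max per-pixel change:", (x_adv - x).abs().max().item())
print("prediction before/after:", model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```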
{
"docid": "1107a5b766a3d471ae00c9e02d8592da",
"text": "In this paper, a wideband dual polarized self-complementary connected array antenna with low radar cross section (RCS) under normal and oblique incidence is presented. First, an analytical model of the multilayer structure is proposed in order to obtain a fast and reliable predimensioning tool providing an optimized design of the infinite array. The accuracy of this model is demonstrated thanks to comparative simulations with a full wave analysis software. RCS reduction compared to a perfectly conducting flat plate of at least 10 dB has been obtained over an ultrawide bandwidth of nearly 7:1 at normal incidence and 5:1 (3.8 to 19 GHz) at 60° in both polarizations. These performances are confirmed by finite element tearing and interconnecting computations of finite arrays of different sizes. Finally, the realization of a $28 \\times 28$ cell prototype and measurement results are detailed.",
"title": ""
},
{
"docid": "1e4a86dcc05ff3d593a4bf7b88f8b23a",
"text": "Fog/edge computing has been proposed to be integrated with Internet of Things (IoT) to enable computing services devices deployed at network edge, aiming to improve the user’s experience and resilience of the services in case of failures. With the advantage of distributed architecture and close to end-users, fog/edge computing can provide faster response and greater quality of service for IoT applications. Thus, fog/edge computing-based IoT becomes future infrastructure on IoT development. To develop fog/edge computing-based IoT infrastructure, the architecture, enabling techniques, and issues related to IoT should be investigated first, and then the integration of fog/edge computing and IoT should be explored. To this end, this paper conducts a comprehensive overview of IoT with respect to system architecture, enabling technologies, security and privacy issues, and present the integration of fog/edge computing and IoT, and applications. Particularly, this paper first explores the relationship between cyber-physical systems and IoT, both of which play important roles in realizing an intelligent cyber-physical world. Then, existing architectures, enabling technologies, and security and privacy issues in IoT are presented to enhance the understanding of the state of the art IoT development. To investigate the fog/edge computing-based IoT, this paper also investigate the relationship between IoT and fog/edge computing, and discuss issues in fog/edge computing-based IoT. Finally, several applications, including the smart grid, smart transportation, and smart cities, are presented to demonstrate how fog/edge computing-based IoT to be implemented in real-world applications.",
"title": ""
},
{
"docid": "889747dbf541583475cbce74c42dc616",
"text": "This paper presents an analysis of FastSLAM - a Rao-Blackwellised particle filter formulation of simultaneous localisation and mapping. It shows that the algorithm degenerates with time, regardless of the number of particles used or the density of landmarks within the environment, and would always produce optimistic estimates of uncertainty in the long-term. In essence, FastSLAM behaves like a non-optimal local search algorithm; in the short-term it may produce consistent uncertainty estimates but, in the long-term, it is unable to adequately explore the state-space to be a reasonable Bayesian estimator. However, the number of particles and landmarks does affect the accuracy of the estimated mean and, given sufficient particles, FastSLAM can produce good non-stochastic estimates in practice. FastSLAM also has several practical advantages, particularly with regard to data association, and would probably work well in combination with other versions of stochastic SLAM, such as EKF-based SLAM",
"title": ""
}
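The degeneracy argument above is easiest to see in a generic sequential importance-resampling filter: repeated resampling temporarily reduces the effective sample size and, more importantly, collapses the diversity of particle histories. The toy 1-D filter below is not FastSLAM itself; it simply tracks how many distinct initial particles survive after many resampling steps.

```python
# Generic 1-D particle filter illustrating history degeneracy under resampling.
import numpy as np

rng = np.random.default_rng(0)
n_particles, steps = 500, 50
x_true = 0.0
particles = rng.normal(0.0, 1.0, n_particles)
ancestors = np.arange(n_particles)            # track which initial particle survives

for _ in range(steps):
    x_true += 1.0 + rng.normal(scale=0.1)
    particles += 1.0 + rng.normal(scale=0.1, size=n_particles)     # motion model
    z = x_true + rng.normal(scale=0.5)                             # noisy observation
    weights = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
    weights /= weights.sum()
    ess = 1.0 / np.sum(weights ** 2)                               # effective sample size
    idx = rng.choice(n_particles, size=n_particles, p=weights)     # resample
    particles, ancestors = particles[idx], ancestors[idx]

print(f"final ESS before resampling: {ess:.1f} of {n_particles}")
print("distinct initial particles still represented:", np.unique(ancestors).size)
```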
] |
scidocsrr
|
3d29e2996f9e625152fa1ec7e456a8e4
|
A literature survey on Facial Expression Recognition using Global Features
|
[
{
"docid": "d537214f407128585d6a4e6bab55a45b",
"text": "It is well known that how to extract dynamical features is a key issue for video based face analysis. In this paper, we present a novel approach of facial action units (AU) and expression recognition based on coded dynamical features. In order to capture the dynamical characteristics of facial events, we design the dynamical haar-like features to represent the temporal variations of facial events. Inspired by the binary pattern coding, we further encode the dynamic haar-like features into binary pattern features, which are useful to construct weak classifiers for boosting learning. Finally the Adaboost is performed to learn a set of discriminating coded dynamic features for facial active units and expression recognition. Experiments on the CMU expression database and our own facial AU database show its encouraging performance.",
"title": ""
}
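The final boosting stage in the passage above can be approximated with an off-the-shelf AdaBoost over binary features; the snippet below uses synthetic stand-ins for the coded dynamic Haar-like responses and scikit-learn's default depth-1 stumps as weak learners.

```python
# AdaBoost over binary-coded (synthetic) temporal features for a toy AU label.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
n_clips, n_features = 600, 64
X = rng.integers(0, 2, size=(n_clips, n_features))        # binary-pattern features
y = (X[:, :8].sum(axis=1) > 4).astype(int)                # toy "AU present" label

# default weak learner is a depth-1 decision stump, matching the boosting setup
booster = AdaBoostClassifier(n_estimators=100, random_state=0)
booster.fit(X[:450], y[:450])
print("held-out accuracy:", booster.score(X[450:], y[450:]))
```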
] |
[
{
"docid": "add26519d60ec2a972ad550cd79129d6",
"text": "The hybrid runtime (HRT) model offers a plausible path towards high performance and efficiency. By integrating the OS kernel, parallel runtime, and application, an HRT allows the runtime developer to leverage the full privileged feature set of the hardware and specialize OS services to the runtime's needs. However, conforming to the HRT model currently requires a complete port of the runtime and application to the kernel level, for example to our Nautilus kernel framework, and this requires knowledge of kernel internals. In response, we developed Multiverse, a system that bridges the gap between a built-from-scratch HRT and a legacy runtime system. Multiverse allows existing, unmodified applications and runtimes to be brought into the HRT model without any porting effort whatsoever. Developers simply recompile their package with our compiler toolchain, and Multiverse automatically splits the execution of the application between the domains of a legacy OS and an HRT environment. To the user, the package appears to run as usual on Linux, but the bulk of it now runs as a kernel. The developer can then incrementally extend the runtime and application to take advantage of the HRT model. We describe the design and implementation of Multiverse, and illustrate its capabilities using the Racket runtime system.",
"title": ""
},
{
"docid": "441633276271b94dc1bd3e5e28a1014d",
"text": "While a large number of consumers in the US and Europe frequently shop on the Internet, research on what drives consumers to shop online has typically been fragmented. This paper therefore proposes a framework to increase researchers’ understanding of consumers’ attitudes toward online shopping and their intention to shop on the Internet. The framework uses the constructs of the Technology Acceptance Model (TAM) as a basis, extended by exogenous factors and applies it to the online shopping context. The review shows that attitudes toward online shopping and intention to shop online are not only affected by ease of use, usefulness, and enjoyment, but also by exogenous factors like consumer traits, situational factors, product characteristics, previous online shopping experiences, and trust in online shopping.",
"title": ""
},
{
"docid": "e6a60fab31af5985520cc64b93b5deb0",
"text": "BACKGROUND\nGenital warts may mimic a variety of conditions, thus complicating their diagnosis and treatment. The recognition of early flat lesions presents a diagnostic challenge.\n\n\nOBJECTIVE\nWe sought to describe the dermatoscopic features of genital warts, unveiling the possibility of their diagnosis by dermatoscopy.\n\n\nMETHODS\nDermatoscopic patterns of 61 genital warts from 48 consecutively enrolled male patients were identified with their frequencies being used as main outcome measures.\n\n\nRESULTS\nThe lesions were examined dermatoscopically and further classified according to their dermatoscopic pattern. The most frequent finding was an unspecific pattern, which was found in 15/61 (24.6%) lesions; a fingerlike pattern was observed in 7 (11.5%), a mosaic pattern in 6 (9.8%), and a knoblike pattern in 3 (4.9%) cases. In almost half of the lesions, pattern combinations were seen, of which a fingerlike/knoblike pattern was the most common, observed in 11/61 (18.0%) cases. Among the vascular features, glomerular, hairpin/dotted, and glomerular/dotted vessels were the most frequent finding seen in 22 (36.0%), 15 (24.6%), and 10 (16.4%) of the 61 cases, respectively. In 10 (16.4%) lesions no vessels were detected. Hairpin vessels were more often seen in fingerlike (χ(2) = 39.31, P = .000) and glomerular/dotted vessels in knoblike/mosaic (χ(2) = 9.97, P = .008) pattern zones; vessels were frequently missing in unspecified (χ(2) = 8.54, P = .014) areas.\n\n\nLIMITATIONS\nOnly male patients were examined.\n\n\nCONCLUSIONS\nThere is a correlation between dermatoscopic patterns and vascular features reflecting the life stages of genital warts; dermatoscopy may be useful in the diagnosis of early-stage lesions.",
"title": ""
},
{
"docid": "f193816262da8f4edb523e172a83f953",
"text": "The European FF POIROT project (IST-2001-38248) aims at developing applications for tackling financial fraud, using formal ontological repositories as well as multilingual terminological resources. In this article, we want to focus on the development cycle towards an application recognizing several types of e-mail fraud, such as phishing, Nigerian advance fee fraud and lottery scam. The development cycle covers four tracks of development - language engineering, terminology engineering, knowledge engineering and system engineering. These development tracks are preceded by a problem determination phase and followed by a deployment phase. Each development track is supported by a methodology. All methodologies and phases in the development cycle will be discussed in detail",
"title": ""
},
{
"docid": "f5f70dca677752bcaa39db59988c088e",
"text": "To examine how inclusive our schools are after 25 years of educational reform, students with disabilities and their parents were asked to identify current barriers and provide suggestions for removing those barriers. Based on a series of focus group meetings, 15 students with mobility limitations (9-15 years) and 12 parents identified four categories of barriers at their schools: (a) the physical environment (e.g., narrow doorways, ramps); (b) intentional attitudinal barriers (e.g., isolation, bullying); (c) unintentional attitudinal barriers (e.g., lack of knowledge, understanding, or awareness); and (d) physical limitations (e.g., difficulty with manual dexterity). Recommendations for promoting accessibility and full participation are provided and discussed in relation to inclusive education efforts. Exceptional Children",
"title": ""
},
{
"docid": "eaeccd0d398e0985e293d680d2265528",
"text": "Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper we present that, even without learning, simple convolutional networks can be powerful enough to develop a robust representation for visual tracking. In the first frame, we randomly extract a set of normalized patches from the target region as filters, which define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and the useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps form together a global representation, which maintains the relative geometric positions of the local intensity patterns, and hence the inner geometric layout of the target is also well preserved. A simple and effective online strategy is adopted to update the representation, allowing it to robustly adapt to target appearance variations. Our convolution networks have surprisingly lightweight structure, yet perform favorably against several state-of-the-art methods on a large benchmark dataset with 50 challenging videos.",
"title": ""
},
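The tracking passage above builds a target representation by sampling normalized patches from the first frame and correlating them, as filters, against later frames. Below is a minimal Python/NumPy sketch of that idea; the patch size, number of filters, and the plain valid-mode cross-correlation are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def sample_filters(target, num_filters=10, patch=6, seed=0):
    """Randomly sample zero-mean, unit-norm patches from the target region to act as filters."""
    rng = np.random.default_rng(seed)
    H, W = target.shape
    filters = []
    for _ in range(num_filters):
        y = rng.integers(0, H - patch + 1)
        x = rng.integers(0, W - patch + 1)
        p = target[y:y + patch, x:x + patch].astype(float)
        p -= p.mean()
        p /= (np.linalg.norm(p) + 1e-8)
        filters.append(p)
    return filters

def feature_maps(region, filters):
    """Valid-mode cross-correlation of each filter with a candidate region."""
    k = filters[0].shape[0]
    H, W = region.shape
    maps = np.zeros((len(filters), H - k + 1, W - k + 1))
    for i, f in enumerate(filters):
        for y in range(H - k + 1):
            for x in range(W - k + 1):
                win = region[y:y + k, x:x + k].astype(float)
                maps[i, y, x] = np.sum(win * f)   # similarity to the local intensity pattern
    return maps

# Usage sketch: compare a candidate window against the first-frame representation.
first_target = np.random.rand(32, 32)            # stand-in for the frame-1 target region
filters = sample_filters(first_target)
template = feature_maps(first_target, filters)   # global representation of the target
candidate = np.random.rand(32, 32)               # stand-in for a frame-t candidate window
score = -np.linalg.norm(feature_maps(candidate, filters) - template)
```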
{
"docid": "2a45f4ed21d9534a937129532cb32020",
"text": "BACKGROUND\nCore stability training has grown in popularity over 25 years, initially for back pain prevention or therapy. Subsequently, it developed as a mode of exercise training for health, fitness and sport. The scientific basis for traditional core stability exercise has recently been questioned and challenged, especially in relation to dynamic athletic performance. Reviews have called for clarity on what constitutes anatomy and function of the core, especially in healthy and uninjured people. Clinical research suggests that traditional core stability training is inappropriate for development of fitness for heath and sports performance. However, commonly used methods of measuring core stability in research do not reflect functional nature of core stability in uninjured, healthy and athletic populations. Recent reviews have proposed a more dynamic, whole body approach to training core stabilization, and research has begun to measure and report efficacy of these modes training. The purpose of this study was to assess extent to which these developments have informed people currently working and participating in sport.\n\n\nMETHODS\nAn online survey questionnaire was developed around common themes on core stability training as defined in the current scientific literature and circulated to a sample population of people working and participating in sport. Survey results were assessed against key elements of the current scientific debate.\n\n\nRESULTS\nPerceptions on anatomy and function of the core were gathered from a representative cohort of athletes, coaches, sports science and sports medicine practitioners (n = 241), along with their views on effectiveness of various current and traditional exercise training modes. Most popular method of testing and measuring core function was subjective assessment through observation (43%), while a quarter (22%) believed there was no effective method of measurement. Perceptions of people in sport reflect the scientific debate, and practitioners have adopted a more functional approach to core stability training. There was strong support for loaded, compound exercises performed upright, compared to moderate support for traditional core stability exercises. Half of the participants (50%) in the survey, however, still support a traditional isolation core stability training.\n\n\nCONCLUSION\nPerceptions in applied practice on core stability training for dynamic athletic performance are aligned to a large extent to the scientific literature.",
"title": ""
},
{
"docid": "2e864dcde57ea1716847f47977af0140",
"text": "I focus on the role of case studies in developing causal explanations. I distinguish between the theoretical purposes of case studies and the case selection strategies or research designs used to advance those objectives. I construct a typology of case studies based on their purposes: idiographic (inductive and theory-guided), hypothesis-generating, hypothesis-testing, and plausibility probe case studies. I then examine different case study research designs, including comparable cases, most and least likely cases, deviant cases, and process tracing, with attention to their different purposes and logics of inference. I address the issue of selection bias and the “single logic” debate, and I emphasize the utility of multi-method research.",
"title": ""
},
{
"docid": "885b3a5b386e642dc567c9b7944112d5",
"text": "Derived from the field of art curation, digital provenance is an unforgeable record of a digital object’s chain of successive custody and sequence of operations performed on it. Digital provenance forms an immutable directed acyclic graph (DAG) structure. Recent works in digital provenance have focused on provenance generation, storage and management frameworks in different fields. In this paper, we address two important aspects of digital provenance that have not been investigated thoroughly in existing works: 1) capturing the DAG structure of provenance and 2) supporting dynamic information sharing. We propose a scheme that uses signature-based mutual agreements between successive users to clearly delineate the transition of responsibility of the document as it is passed along the chain of users. In addition to preserving the properties of confidentiality, immutability and availability for a digital provenance chain, it supports the representation of DAG structures of provenance. Our scheme supports dynamic information sharing scenarios where the sequence of users who have custody of the document is not predetermined. Security analysis and empirical results indicate that our scheme improves the security of the existing Onion and PKLC provenance schemes with comparable performance. Keywords—Provenance, cryptography, signatures, integrity, confidentiality, availability",
"title": ""
},
{
"docid": "9bbc3e426c7602afaa857db85e754229",
"text": "Knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform link prediction, i.e., predict whether a relationship not in the knowledge base is likely to be true. This paper combines insights from several previous link prediction models into a new embedding model STransE that represents each entity as a lowdimensional vector, and each relation by two matrices and a translation vector. STransE is a simple combination of the SE and TransE models, but it obtains better link prediction performance on two benchmark datasets than previous embedding models. Thus, STransE can serve as a new baseline for the more complex models in the link prediction task.",
"title": ""
},
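As the STransE passage above describes, each entity gets a low-dimensional vector and each relation gets two matrices plus a translation vector. A minimal NumPy sketch of a scoring function in that spirit follows; the toy dimensionality, the random parameters, and the choice of norm are illustrative assumptions rather than the paper's reported settings.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Toy parameter store: entity vectors, and per-relation (W1, W2, r) as in the passage.
entities = {e: rng.normal(size=dim) for e in ["paris", "france"]}
relations = {"capital_of": (rng.normal(size=(dim, dim)),   # head-side relation matrix
                            rng.normal(size=(dim, dim)),   # tail-side relation matrix
                            rng.normal(size=dim))}         # translation vector

def score(head, rel, tail, l1=True):
    """Lower score = more plausible triple (head, rel, tail)."""
    w1, w2, r = relations[rel]
    h, t = entities[head], entities[tail]
    diff = w1 @ h + r - w2 @ t
    return np.abs(diff).sum() if l1 else np.linalg.norm(diff)

print(score("paris", "capital_of", "france"))
```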
{
"docid": "c3ca913fa81b2e79a2fff6d7a5e2fea7",
"text": "We present Query-Regression Network (QRN), a variant of Recurrent Neural Network (RNN) that is suitable for end-to-end machine comprehension. While previous work [18, 22] largely relied on external memory and global softmax attention mechanism, QRN is a single recurrent unit with internal memory and local sigmoid attention. Unlike most RNN-based models, QRN is able to effectively handle long-term dependencies and is highly parallelizable. In our experiments we show that QRN obtains the state-of-the-art result in end-to-end bAbI QA tasks [21].",
"title": ""
},
{
"docid": "3c27b3e11ba9924e9c102fc9ba7907b6",
"text": "The Visagraph IITM Eye Movement Recording System is an instrument that assesses reading eye movement efficiency and related parameters objectively. It also incorporates automated data analysis. In the standard protocol, the patient reads selections only at the level of their current school grade, or at the level that has been determined by a standardized reading test. In either case, deficient reading eye movements may be the consequence of a language-based reading disability, an oculomotor-based reading inefficiency, or both. We propose an addition to the standard protocol: the patient’s eye movements are recorded a second time with text that is significantly below the grade level of the initial reading. The goal is to determine which factor is primarily contributing to the patient’s reading problem, oculomotor or language. This concept is discussed in the context of two representative cases.",
"title": ""
},
{
"docid": "272d83db41293889d9ca790717983193",
"text": "The ability to measure the level of customer satisfaction with online shopping is essential in gauging the success and failure of e-commerce. To do so, Internet businesses must be able to determine and understand the values of their existing and potential customers. Hence, it is important for IS researchers to develop and validate a diverse array of metrics to comprehensively capture the attitudes and feelings of online customers. What factors make online shopping appealing to customers? What customer values take priority over others? This study’s purpose is to answer these questions, examining the role of several technology, shopping, and product factors on online customer satisfaction. This is done using a conjoint analysis of consumer preferences based on data collected from 188 young consumers. Results indicate that the three most important attributes to consumers for online satisfaction are privacy (technology factor), merchandising (product factor), and convenience (shopping factor). These are followed by trust, delivery, usability, product customization, product quality, and security. Implications of these findings are discussed and suggestions for future research are provided.",
"title": ""
},
{
"docid": "6de2b5fa5c8d3db9f9d599b6ebb56782",
"text": "Extreme sensitivity of soil organic carbon (SOC) to climate and land use change warrants further research in different terrestrial ecosystems. The aim of this study was to investigate the link between aggregate and SOC dynamics in a chronosequence of three different land uses of a south Chilean Andisol: a second growth Nothofagus obliqua forest (SGFOR), a grassland (GRASS) and a Pinus radiataplantation (PINUS). Total carbon content of the 0–10 cm soil layer was higher for GRASS (6.7 kg C m −2) than for PINUS (4.3 kg C m−2), while TC content of SGFOR (5.8 kg C m−2) was not significantly different from either one. High extractable oxalate and pyrophosphate Al concentrations (varying from 20.3–24.4 g kg −1, and 3.9– 11.1 g kg−1, respectively) were found in all sites. In this study, SOC and aggregate dynamics were studied using size and density fractionation experiments of the SOC, δ13C and total carbon analysis of the different SOC fractions, and C mineralization experiments. The results showed that electrostatic sorption between and among amorphous Al components and clay minerals is mainly responsible for the formation of metal-humus-clay complexes and the stabilization of soil aggregates. The process of ligand exchange between SOC and Al would be of minor importance resulting in the absence of aggregate hierarchy in this soil type. Whole soil C mineralization rate constants were highest for SGFOR and PINUS, followed by GRASS (respectively 0.495, 0.266 and 0.196 g CO 2-C m−2 d−1 for the top soil layer). In contrast, incubation experiments of isolated macro organic matter fractions gave opposite results, showing that the recalcitrance of the SOC decreased in another order: PINUS>SGFOR>GRASS. We deduced that electrostatic sorption processes and physical protection of SOC in soil aggregates were the main processes determining SOC stabilization. As a result, high aggregate carbon concentraCorrespondence to: D. Huygens (dries.huygens@ugent.be) tions, varying from 148 till 48 g kg −1, were encountered for all land use sites. Al availability and electrostatic charges are dependent on pH, resulting in an important influence of soil pH on aggregate stability. Recalcitrance of the SOC did not appear to largely affect SOC stabilization. Statistical correlations between extractable amorphous Al contents, aggregate stability and C mineralization rate constants were encountered, supporting this hypothesis. Land use changes affected SOC dynamics and aggregate stability by modifying soil pH (and thus electrostatic charges and available Al content), root SOC input and management practices (such as ploughing and accompanying drying of the soil).",
"title": ""
},
{
"docid": "f26df52af74f9c2f51ff0e56daeb4c38",
"text": "Browsing is part of the information seeking process, used when information needs are ill-defined or unspecific. Browsing and searching are often interleaved during information seeking to accommodate changing awareness of information needs. Digital Libraries often support full-text search, but are not so helpful in supporting browsing. Described here is a novel browsing system created for the Greenstone software used by the New Zealand Digital Library that supports users in a more natural approach to the information seeking process.",
"title": ""
},
{
"docid": "ad14a9f120aedc84abc99f1715e6769b",
"text": "We introduce an interactive tool which enables a user to quickly assemble an architectural model directly over a 3D point cloud acquired from large-scale scanning of an urban scene. The user loosely defines and manipulates simple building blocks, which we call SmartBoxes, over the point samples. These boxes quickly snap to their proper locations to conform to common architectural structures. The key idea is that the building blocks are smart in the sense that their locations and sizes are automatically adjusted on-the-fly to fit well to the point data, while at the same time respecting contextual relations with nearby similar blocks. SmartBoxes are assembled through a discrete optimization to balance between two snapping forces defined respectively by a data-fitting term and a contextual term, which together assist the user in reconstructing the architectural model from a sparse and noisy point cloud. We show that a combination of the user's interactive guidance and high-level knowledge about the semantics of the underlying model, together with the snapping forces, allows the reconstruction of structures which are partially or even completely missing from the input.",
"title": ""
},
{
"docid": "3ca7c89e12c81ac90d5d12d6f9a2b7f2",
"text": "Texture classification is one of the problems which has been paid much attention on by computer scientists since late 90s. If texture classification is done correctly and accurately, it can be used in many cases such as Pattern recognition, object tracking, and shape recognition. So far, there have been so many methods offered to solve this problem. Near all these methods have tried to extract and define features to separate different labels of textures really well. This article has offered an approach which has an overall process on the images of textures based on Local binary pattern and Gray Level Co-occurrence matrix and then by edge detection, and finally, extracting the statistical features from the images would classify them. Although, this approach is a general one and is could be used in different applications, the method has been tested on the stone texture and the results have been compared with some of the previous approaches to prove the quality of proposed approach. Keywords-Texture Classification, Gray level Co occurrence, Local Binary Pattern, Statistical Features",
"title": ""
},
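The texture-classification passage above combines Local Binary Patterns, a Gray Level Co-occurrence Matrix and statistical features. A compact NumPy sketch of those two descriptors follows; the 8-neighbour LBP, the single-offset GLCM and the particular statistics are standard textbook versions chosen for illustration, not necessarily the article's exact configuration.

```python
import numpy as np

def lbp(img):
    """Basic 8-neighbour Local Binary Pattern codes for the interior pixels."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                  img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c).astype(np.uint8) << bit)   # one bit per neighbour comparison
    return codes

def glcm(img, levels=8, dx=1, dy=0):
    """Gray Level Co-occurrence Matrix for one pixel offset, normalized to probabilities."""
    q = (img.astype(float) / 256 * levels).astype(int)   # quantize to `levels` gray levels
    m = np.zeros((levels, levels))
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    for i, j in zip(a.ravel(), b.ravel()):
        m[i, j] += 1
    return m / m.sum()

def texture_features(img):
    """Statistical features: LBP histogram plus GLCM contrast, energy and homogeneity."""
    codes = lbp(img)
    hist = np.bincount(codes.ravel(), minlength=256) / codes.size
    p = glcm(img)
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.concatenate([hist, [contrast, energy, homogeneity]])

features = texture_features(np.random.randint(0, 256, (64, 64)))
```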
{
"docid": "eb7990a677cd3f96a439af6620331400",
"text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.",
"title": ""
},
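The video-description passage above pairs convolutional features with a recurrent sentence generator. The PyTorch sketch below shows one simplified way to wire that up, mean-pooling per-frame CNN features and decoding with an LSTM; the dimensions, the vocabulary size and the mean-pooling step are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class VideoCaptioner(nn.Module):
    """Mean-pool per-frame CNN features, then decode a sentence with an LSTM."""
    def __init__(self, feat_dim=2048, hidden=512, vocab_size=10000):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden)      # video feature -> first LSTM input
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, frame_feats, captions):
        # frame_feats: (batch, n_frames, feat_dim); captions: (batch, seq_len) token ids
        video = self.proj(frame_feats.mean(dim=1)).unsqueeze(1)   # (batch, 1, hidden)
        words = self.embed(captions)                              # (batch, seq_len, hidden)
        inputs = torch.cat([video, words], dim=1)                 # video vector starts the sequence
        hidden_states, _ = self.lstm(inputs)
        return self.out(hidden_states)                            # logits over the vocabulary

# Usage sketch with random stand-ins for CNN features and caption tokens.
model = VideoCaptioner()
feats = torch.randn(2, 30, 2048)           # 2 clips, 30 frames of CNN features each
caps = torch.randint(0, 10000, (2, 12))    # 2 reference captions of 12 tokens
logits = model(feats, caps)                # (2, 13, 10000)
```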
{
"docid": "e20d26ce3dea369ae6817139ff243355",
"text": "This article explores the roots of white support for capital punishment in the United States. Our analysis addresses individual-level and contextual factors, paying particular attention to how racial attitudes and racial composition influence white support for capital punishment. Our findings suggest that white support hinges on a range of attitudes wider than prior research has indicated, including social and governmental trust and individualist and authoritarian values. Extending individual-level analyses, we also find that white responses to capital punishment are sensitive to local context. Perhaps most important, our results clarify the impact of race in two ways. First, racial prejudice emerges here as a comparatively strong predictor of white support for the death penalty. Second, black residential proximity functions to polarize white opinion along lines of racial attitude. As the black percentage of county residents rises, so too does the impact of racial prejudice on white support for capital punishment.",
"title": ""
},
{
"docid": "3196c06c66b49c052d07ced0de683d02",
"text": "Programming by Examples (PBE) involves synthesizing intended programs in an underlying domain-specific language from examplebased specifications. PBE systems are already revolutionizing the application domain of data wrangling and are set to significantly impact several other domains including code refactoring. There are three key components in a PBE system. (i) A search algorithm that can efficiently search for programs that are consistent with the examples provided by the user. We leverage a divide-and-conquerbased deductive search paradigm that inductively reduces the problem of synthesizing a program expression of a certain kind that satisfies a given specification into sub-problems that refer to sub-expressions or sub-specifications. (ii) Program ranking techniques to pick an intended program from among the many that satisfy the examples provided by the user. We leverage features of the program structure as well of the outputs generated by the program on test inputs. (iii) User interaction models to facilitate usability and debuggability. We leverage active-learning techniques based on clustering inputs and synthesizing multiple programs. Each of these PBE components leverage both symbolic reasoning and heuristics. We make the case for synthesizing these heuristics from training data using appropriate machine learning methods. This can not only lead to better heuristics, but can also enable easier development, maintenance, and even personalization of a PBE system.",
"title": ""
}
] |
scidocsrr
|
551c90304020aee22c4aff6a9ae6cf02
|
Interpretable Representation Learning for Healthcare via Capturing Disease Progression through Time
|
[
{
"docid": "e7659e2c20e85f99996e4394fdc37a5c",
"text": "Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. There are lots of challenges on both steps in a scenario of complicated data and lacking of sufficient domain knowledge. The latest advances in deep learning technologies provide new effective paradigms to obtain end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and needs for improved methods development and applications, especially in terms of ease-of-understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability.",
"title": ""
}
] |
[
{
"docid": "76049ed267e9327412d709014e8e9ed4",
"text": "A wireless massive MIMO system entails a large number (tens or hundreds) of base station antennas serving a much smaller number of users, with large gains in spectralefficiency and energy-efficiency compared with conventional MIMO technology. Until recently it was believed that in multicellular massive MIMO system, even in the asymptotic regime, as the number of service antennas tends to infinity, the performance is limited by directed inter-cellular interference. This interference results from unavoidable re-use of reverse-link training sequences (pilot contamination) by users in different cells. We devise a new concept that leads to the effective elimination of inter-cell interference in massive MIMO systems. This is achieved by outer multi-cellular precoding, which we call LargeScale Fading Precoding (LSFP). The main idea of LSFP is that each base station linearly combines messages aimed to users from different cells that re-use the same training sequence. Crucially, the combining coefficients depend only on the slowfading coefficients between the users and the base stations. Each base station independently transmits its LSFP-combined symbols using conventional linear precoding that is based on estimated fast-fading coefficients. Further, we derive estimates for downlink and uplink SINRs and capacity lower bounds for the case of massive MIMO systems with LSFP and a finite number of base station antennas.",
"title": ""
},
{
"docid": "4ab644ac13d8753aa6e747c4070e95e9",
"text": "This paper presents a framework for modeling the phase noise in complementary metal–oxide–semiconductor (CMOS) ring oscillators. The analysis considers both linear and nonlinear operations, and it includes both device noise and digital switching noise coupled through the power supply and substrate. In this paper, we show that fast rail-to-rail switching is required in order to achieve low phase noise. Further, flicker noise from the bias circuit can potentially dominate the phase noise at low offset frequencies. We define the effective factor for ring oscillators with large and nonlinear voltage swings and predict its increase for CMOS processes with smaller feature sizes. Our phase-noise analysis is validated via simulation and measurement results for ring oscillators fabricated in a number of CMOS processes.",
"title": ""
},
{
"docid": "b5d7c6a4d9551bf9b47b4e3754fb5911",
"text": "Discovering significant types of relations from the web is challenging because of its open nature. Unsupervised algorithms are developed to extract relations from a corpus without knowing the relations in advance, but most of them rely on tagging arguments of predefined types. Recently, a new algorithm was proposed to jointly extract relations and their argument semantic classes, taking a set of relation instances extracted by an open IE algorithm as input. However, it cannot handle polysemy of relation phrases and fails to group many similar (“synonymous”) relation instances because of the sparseness of features. In this paper, we present a novel unsupervised algorithm that provides a more general treatment of the polysemy and synonymy problems. The algorithm incorporates various knowledge sources which we will show to be very effective for unsupervised extraction. Moreover, it explicitly disambiguates polysemous relation phrases and groups synonymous ones. While maintaining approximately the same precision, the algorithm achieves significant improvement on recall compared to the previous method. It is also very efficient. Experiments on a realworld dataset show that it can handle 14.7 million relation instances and extract a very large set of relations from the web.",
"title": ""
},
{
"docid": "27745116e5c05802bda2bc6dc548cce6",
"text": "Recently, many researchers have attempted to classify Facial Attributes (FAs) by representing characteristics of FAs such as attractiveness, age, smiling and so on. In this context, recent studies have demonstrated that visual FAs are a strong background for many applications such as face verification, face search and so on. However, Facial Attribute Classification (FAC) in a wide range of attributes based on the regression representation -predicting of FAs as real-valued labelsis still a significant challenge in computer vision and psychology. In this paper, a regression model formulation is proposed for FAC in a wide range of FAs (e.g. 73 FAs). The proposed method accommodates real-valued scores to the probability of what percentage of the given FAs is present in the input image. To this end, two simultaneous dictionary learning methods are proposed to learn the regression and identity feature dictionaries simultaneously. Accordingly, a multi-level feature extraction is proposed for FAC. Then, four regression classification methods are proposed using a regression model formulated based on dictionary learning, SRC and CRC. Convincing results are",
"title": ""
},
{
"docid": "35bc2da7f6a3e18f831b4560fba7f94d",
"text": "findings All countries—developing and developed alike—find it difficult to stay competitive without inflows of foreign direct investment (FDI). FDI brings to host countries not only capital, productive facilities, and technology transfers, but also employment, new job skills and management expertise. These ingredients are particularly important in the case of Russia today, where the pressure for firms to compete with each other remains low. With blunted incentives to become efficient, due to interregional barriers to trade, weak exercise of creditor rights and administrative barriers to new entrants—including foreign invested firms—Russian enterprises are still in the early stages of restructuring. This paper argues that the policy regime governing FDI in the Russian Federation is still characterized by the old paradigm of FDI, established before the Second World War and seen all over the world during the 1950s and 1960s. In this paradigm there are essentially only two motivations for foreign direct investment: access to inputs for production, and access to markets for outputs. These kinds of FDI are useful, but often based either on exports that exploit cheap labor or natural resources, or else aimed at protected local markets and not necessarily at world standards for price and quality. The fact is that Russia is getting relatively small amounts of these types of FDI, and almost none of the newer, more efficient kind—characterized by state-of-the-art technology and world-class competitive production linked to dynamic global (or regional) markets. The paper notes that Russia should phase out the three core pillars of the current FDI policy regime-(i) all existing high tariffs and non-tariff protection for the domestic market; (ii) tax preferences for foreign investors (including those offered in Special Economic Zones), which bring few benefits (in terms of increased FDI) but engender costs (in terms of foregone fiscal revenue); and (iii) the substantial number of existing restrictions on FDI (make them applicable only to a limited number of sectors and activities). This set of reforms would allow Russia to switch to a modern approach towards FDI. The paper suggests the following specific policy recommendations: (i) amend the newly enacted FDI law so as to give \" national treatment \" for both right of establishment and for post-establishment operations; abolish conditions that are inconsistent with the agreement on trade-related investment measures (TRIMs) of the WTO (such as local content restrictions); and make investor-State dispute resolution mechanisms more efficient, including giving foreign investors the opportunity to …",
"title": ""
},
{
"docid": "00547f45936c7cea4b7de95ec1e0fbcd",
"text": "With the emergence of the Internet of Things (IoT) and Big Data era, many applications are expected to assimilate a large amount of data collected from environment to extract useful information. However, how heterogeneous computing devices of IoT ecosystems can execute the data processing procedures has not been clearly explored. In this paper, we propose a framework which characterizes energy and performance requirements of the data processing applications across heterogeneous devices, from a server in the cloud and a resource-constrained gateway at edge. We focus on diverse machine learning algorithms which are key procedures for handling the large amount of IoT data. We build analytic models which automatically identify the relationship between requirements and data in a statistical way. The proposed framework also considers network communication cost and increasing processing demand. We evaluate the proposed framework on two heterogenous devices, a Raspberry Pi and a commercial Intel server. We show that the identified models can accurately estimate performance and energy requirements with less than error of 4.8% for both platforms. Based on the models, we also evaluate whether the resource-constrained gateway can process the data more efficiently than the server in the cloud. The results present that the less-powerful device can achieve better energy and performance efficiency for more than 50% of machine learning algorithms.",
"title": ""
},
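The IoT passage above builds statistical models that link data characteristics to performance and energy requirements. A minimal sketch of one such model is below: an ordinary least-squares fit of measured runtime against data size, which could then be queried for new workload sizes. The measurements, the linear form and the variable names are all illustrative assumptions, not the paper's actual models.

```python
import numpy as np

# Hypothetical profiling measurements on one device: (number of samples, runtime in seconds).
data_size = np.array([1_000, 5_000, 10_000, 50_000, 100_000], dtype=float)
runtime_s = np.array([0.12, 0.55, 1.10, 5.60, 11.30])

# Fit runtime ~= a * data_size + b by ordinary least squares.
A = np.vstack([data_size, np.ones_like(data_size)]).T
(a, b), *_ = np.linalg.lstsq(A, runtime_s, rcond=None)

def predict_runtime(n_samples):
    """Estimate runtime for a new workload size from the fitted linear model."""
    return a * n_samples + b

# Example: estimated time to process 250k samples, plus a crude relative-error check on the fit.
print(predict_runtime(250_000))
fit_error = np.abs(A @ np.array([a, b]) - runtime_s) / runtime_s
print(fit_error.max())
```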
{
"docid": "620642c5437dc26cac546080c4465707",
"text": "One of the most distinctive linguistic characteristics of modern academic writing is its reliance on nominalized structures. These include nouns that have been morphologically derived from verbs (e.g., development, progression) as well as verbs that have been ‘converted’ to nouns (e.g., increase, use). Almost any sentence taken from an academic research article will illustrate the use of such structures. For example, consider the opening sentences from three education research articles; derived nominalizations are underlined and converted nouns given in italics: 1",
"title": ""
},
{
"docid": "e85e66b6ad6324a07ca299bf4f3cd447",
"text": "To date, the majority of ad hoc routing protocol research has been done using simulation only. One of the most motivating reasons to use simulation is the difficulty of creating a real implementation. In a simulator, the code is contained within a single logical component, which is clearly defined and accessible. On the other hand, creating an implementation requires use of a system with many components, including many that have little or no documentation. The implementation developer must understand not only the routing protocol, but all the system components and their complex interactions. Further, since ad hoc routing protocols are significantly different from traditional routing protocols, a new set of features must be introduced to support the routing protocol. In this paper we describe the event triggers required for AODV operation, the design possibilities and the decisions for our ad hoc on-demand distance vector (AODV) routing protocol implementation, AODV-UCSB. This paper is meant to aid researchers in developing their own on-demand ad hoc routing protocols and assist users in determining the implementation design that best fits their needs.",
"title": ""
},
{
"docid": "65849cfb115918dd264445e91698e868",
"text": "Handwritten character recognition is always a frontier area of research in the field of pattern recognition. There is a large demand for OCR on hand written documents in Image processing. Even though, sufficient studies have performed in foreign scripts like Arabic, Chinese and Japanese, only a very few work can be traced for handwritten character recognition mainly for the south Indian scripts. OCR system development for Indian script has many application areas like preserving manuscripts and ancient literatures written in different Indian scripts and making digital libraries for the documents. Feature extraction and classification are essential steps of character recognition process affecting the overall accuracy of the recognition system. This paper presents a brief overview of digital image processing techniques such as Feature Extraction, Image Restoration and Image Enhancement. A brief history of OCR and various approaches to character recognition is also discussed in this paper.",
"title": ""
},
{
"docid": "0b1a8b80b4414fa34d6cbb5ad1342ad7",
"text": "OBJECTIVE\nThe aim of the study was to evaluate the efficacy of topical 2% lidocaine gel in reducing pain and discomfort associated with nasogastric tube insertion (NGTI) and compare lidocaine to ordinary lubricant gel in the ease in carrying out the procedure.\n\n\nMETHODS\nThis prospective, randomized, double-blind, placebo-controlled, convenience sample trial was conducted in the emergency department of our tertiary care university-affiliated hospital. Five milliliters of 2% lidocaine gel or placebo lubricant gel were administered nasally to alert hemodynamically stable adult patients 5 minutes before undergoing a required NGTI. The main outcome measures were overall pain, nasal pain, discomfort (eg, choking, gagging, nausea, vomiting), and difficulty in performing the procedure. Standard comparative statistical analyses were used.\n\n\nRESULTS\nThe study cohort included 62 patients (65% males). Thirty-one patients were randomized to either lidocaine or placebo groups. Patients who received lidocaine reported significantly less intense overall pain associated with NGTI compared to those who received placebo (37 ± 28 mm vs 51 ± 26 mm on 100-mm visual analog scale; P < .05). The patients receiving lidocaine also had significantly reduced nasal pain (33 ± 29 mm vs 48 ± 27 mm; P < .05) and significantly reduced sensation of gagging (25 ± 30 mm vs 39 ± 24 mm; P < .05). However, conducting the procedure was significantly more difficult in the lidocaine group (2.1 ± 0.9 vs 1.4 ± 0.7 on 5-point Likert scale; P < .05).\n\n\nCONCLUSION\nLidocaine gel administered nasally 5 minutes before NGTI significantly reduces pain and gagging sensations associated with the procedure but is associated with more difficult tube insertion compared to the use of lubricant gel.",
"title": ""
},
{
"docid": "696320f53bb91db9a59a803ec5356727",
"text": "Ransomware is a type of malware that encrypts data or locks a device to extort a ransom. Recently, a variety of high-profile ransomware attacks have been reported, and many ransomware defense systems have been proposed. However, none specializes in resisting untargeted attacks such as those by remote desktop protocol (RDP) attack ransomware. To resolve this problem, this paper proposes a way to combat RDP ransomware attacks by trapping and tracing. It discovers and ensnares the attacker through a network deception environment and uses an auxiliary tracing technology to find the attacker, finally achieving the goal of deterring the ransomware attacker and countering the RDP attack ransomware. Based on cyber deception, an auxiliary ransomware traceable system called RansomTracer is introduced in this paper. RansomTracer collects clues about the attacker by deploying monitors in the deception environment. Then, it automatically extracts and analyzes the traceable clues. Experiments and evaluations show that RansomTracer ensnares the adversary in the deception environment and improves the efficiency of clue analysis significantly. In addition, it is able to recognize the clues that identify the attacker and the screening rate reaches 98.34%.",
"title": ""
},
{
"docid": "7c6708511e8a19c7a984ccc4b5c5926e",
"text": "INTRODUCTION\nOtoplasty or correction of prominent ears, is one of most commonly performed surgeries in plastic surgery both in children and adults. Until nowadays, there have been more than 150 techniques described, but all with certain percentage of recurrence which varies from just a few up to 24.4%.\n\n\nOBJECTIVE\nThe authors present an otoplasty technique, a combination of Mustardé's original procedure with other techniques, which they have been using successfully in their everyday surgical practice for the last 9 years. The technique is based on posterior antihelical and conchal approach.\n\n\nMETHODS\nThe study included 102 patients (60 males and 42 females) operated on between 1999 and 2008. The age varied between 6 and 49 years. Each procedure was tailored to the aberrant anatomy which was analysed after examination. Indications and the operative procedure are described in step-by-step detail accompanied by drawings and photos taken during the surgery.\n\n\nRESULTS\nAll patients had bilateral ear deformity. In all cases was performed a posterior antihelical approach. The conchal reduction was done only when necessary and also through the same incision. The follow-up was from 1 to 5 years. There were no recurrent cases. A few minor complications were presented. Postoperative care, complications and advantages compared to other techniques are discussed extensively.\n\n\nCONCLUSION\nAll patients showed a high satisfaction rate with the final result and there was no necessity for further surgeries. The technique described in this paper is easy to reproduce even for young surgeons.",
"title": ""
},
{
"docid": "c9ea36d15ec23b678c23ad1ae8d976a9",
"text": "Privacy-preserving distributed machine learning has become more important than ever due to the high demand of large-scale data processing. This paper focuses on a class of machine learning problems that can be formulated as regularized empirical risk minimization, and develops a privacy-preserving learning approach to such problems. We use Alternating Direction Method of Multipliers (ADMM) to decentralize the learning algorithm, and apply Gaussian mechanisms to provide differential privacy guarantee. However, simply combining ADMM and local randomization mechanisms would result in a nonconvergent algorithm with poor performance even under moderate privacy guarantees. Besides, this intuitive approach requires a strong assumption that the objective functions of the learning problems should be differentiable and strongly convex. To address these concerns, we propose an improved ADMMbased Differentially Private distributed learning algorithm, DPADMM, where an approximate augmented Lagrangian function and Gaussian mechanisms with time-varying variance are utilized. We also apply the moments accountant method to bound the total privacy loss. Our theoretical analysis shows that DPADMM can be applied to a general class of convex learning problems, provides differential privacy guarantee, and achieves a convergence rate of O(1/ √ t), where t is the number of iterations. Our evaluations demonstrate that our approach can achieve good convergence and accuracy with moderate privacy guarantee.",
"title": ""
},
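The DP-ADMM passage above relies on Gaussian mechanisms to privatize the messages each party shares during the distributed updates. The sketch below shows the standard (epsilon, delta) Gaussian-mechanism calibration applied to a local model update; the norm clipping to bound sensitivity and the constants are the usual textbook choices, not the paper's time-varying variance schedule or its moments-accountant analysis.

```python
import numpy as np

def gaussian_mechanism(update, clip_norm, epsilon, delta, rng=None):
    """Clip an update to bound its L2 sensitivity, then add calibrated Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))       # sensitivity <= clip_norm
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(scale=sigma, size=update.shape)

# Usage sketch: each party perturbs its local update before sharing it in the consensus step.
local_update = np.random.randn(100)
shared = gaussian_mechanism(local_update, clip_norm=1.0, epsilon=0.5, delta=1e-5)
```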
{
"docid": "2819e5fd171e76a6ed90b5f576259f39",
"text": "Moving obstacle avoidance is a fundamental requirement for any robot operating in real environments, where pedestrians, bicycles and cars are present. In this work, we design and validate a new approach that takes explicitly into account obstacle velocities, to achieve safe visual navigation in outdoor scenarios. A wheeled vehicle, equipped with an actuated pinhole camera and with a lidar, must follow a path represented by key images, without colliding with the obstacles. To estimate the obstacle velocities, we design a Kalman-based observer. Then, we adapt the tentacles designed in [1], to take into account the predicted obstacle positions. Finally, we validate our approach in a series of simulated and real experiments, showing that when the obstacle velocities are considered, the robot behaviour is safer, smoother, and faster than when it is not.",
"title": ""
},
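The obstacle-avoidance passage above estimates obstacle velocities with a Kalman-based observer. Below is a minimal constant-velocity Kalman filter over 2-D position measurements, as one plausible form of such an observer; the state layout, noise levels and time step are assumptions for illustration, not the authors' tuned design.

```python
import numpy as np

class ConstantVelocityKF:
    """Kalman filter with state [x, y, vx, vy] and position-only measurements."""
    def __init__(self, dt=0.1, q=0.5, r=0.05):
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt                 # position integrates velocity
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)                           # process noise
        self.R = r * np.eye(2)                           # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the position measurement z = [x, y] (e.g. from the lidar).
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2], self.x[2:]                    # estimated position and velocity

# Usage sketch: an obstacle moving at roughly 1 m/s along x, observed every 0.1 s.
kf = ConstantVelocityKF()
for t in range(50):
    measurement = [0.1 * t + np.random.normal(0, 0.05), 0.0]
    pos, vel = kf.step(measurement)
print(vel)   # should approach [1.0, 0.0]
```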
{
"docid": "9f25bc7a2dadb2b8c0d54ac6e70e92e5",
"text": "Our research suggests that ML technologies will indeed grow more pervasive, but within job categories, what we define as the “suitability for machine learning” (SML) of work tasks varies greatly. We further propose that our SML rubric, illustrating the variability in task-level SML, can serve as an indicator for the potential reorganization of a job or an occupation because the set of tasks that form a job can be separated and re-bundled to redefine the job. Evaluating worker activities using our rubric, in fact, has the benefit of focusing on what ML can do instead of grouping all forms of automation together.",
"title": ""
},
{
"docid": "75a1832a5fdd9c48f565eb17e8477b4b",
"text": "We introduce a new interactive system: a game that is fun and can be used to create valuable output. When people play the game they help determine the contents of images by providing meaningful labels for them. If the game is played as much as popular online games, we estimate that most images on the Web can be labeled in a few months. Having proper labels associated with each image on the Web would allow for more accurate image search, improve the accessibility of sites (by providing descriptions of images to visually impaired individuals), and help users block inappropriate images. Our system makes a significant contribution because of its valuable output and because of the way it addresses the image-labeling problem. Rather than using computer vision techniques, which don't work well enough, we encourage people to do the work by taking advantage of their desire to be entertained.",
"title": ""
},
{
"docid": "9b9cff2b6d1313844b88bad5a2724c52",
"text": "A robot is usually an electro-mechanical machine that is guided by computer and electronic programming. Many robots have been built for manufacturing purpose and can be found in factories around the world. Designing of the latest inverted ROBOT which can be controlling using an APP for android mobile. We are developing the remote buttons in the android app by which we can control the robot motion with them. And in which we use Bluetooth communication to interface controller and android. Controller can be interfaced to the Bluetooth module though UART protocol. According to commands received from android the robot motion can be controlled. The consistent output of a robotic system along with quality and repeatability are unmatched. Pick and Place robots can be reprogrammable and tooling can be interchanged to provide for multiple applications.",
"title": ""
},
{
"docid": "bd9f584e7dbc715327b791e20cd20aa9",
"text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.",
"title": ""
},
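The recommendation passage above lists several information sources (item content, the user's own ratings, other users' ratings) and an approach to combining them. A small sketch of one such combination is given below: a content-based score and a user-based collaborative score blended with a fixed weight. The toy data, the cosine similarity and the blending weight are illustrative assumptions, not the authors' combination method.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Toy data: item content features (e.g. cuisine/price attributes) and a user-item rating
# matrix (0 = unrated). Rows are users, columns are items.
item_features = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]], dtype=float)
ratings = np.array([[5, 0, 1],
                    [4, 5, 0],
                    [0, 4, 5]], dtype=float)

def content_score(user, item):
    """Similarity of the candidate item to items the user already rated, weighted by rating."""
    rated = np.nonzero(ratings[user])[0]
    sims = [cosine(item_features[item], item_features[j]) * ratings[user, j] for j in rated]
    return np.mean(sims) if sims else 0.0

def collaborative_score(user, item):
    """Other users' ratings of this item, weighted by how similar their rating rows are."""
    scores, weights = [], []
    for other in range(ratings.shape[0]):
        if other != user and ratings[other, item] > 0:
            w = cosine(ratings[user], ratings[other])
            scores.append(w * ratings[other, item])
            weights.append(abs(w))
    return sum(scores) / sum(weights) if weights else 0.0

def recommend_score(user, item, alpha=0.5):
    """Blend the two sources; alpha is an assumed fixed mixing weight."""
    return alpha * content_score(user, item) + (1 - alpha) * collaborative_score(user, item)

print(recommend_score(user=0, item=1))
```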
{
"docid": "07179377e99a40beffcb50ac039ca503",
"text": "RF-powered computers are small devices that compute and communicate using only the power that they harvest from RF signals. While existing technologies have harvested power from ambient RF sources (e.g., TV broadcasts), they require a dedicated gateway (like an RFID reader) for Internet connectivity. We present Wi-Fi Backscatter, a novel communication system that bridges RF-powered devices with the Internet. Specifically, we show that it is possible to reuse existing Wi-Fi infrastructure to provide Internet connectivity to RF-powered devices. To show Wi-Fi Backscatter's feasibility, we build a hardware prototype and demonstrate the first communication link between an RF-powered device and commodity Wi-Fi devices. We use off-the-shelf Wi-Fi devices including Intel Wi-Fi cards, Linksys Routers, and our organization's Wi-Fi infrastructure, and achieve communication rates of up to 1 kbps and ranges of up to 2.1 meters. We believe that this new capability can pave the way for the rapid deployment and adoption of RF-powered devices and achieve ubiquitous connectivity via nearby mobile devices that are Wi-Fi enabled.",
"title": ""
}
] |
scidocsrr
|
f8965c62a7b6fbba3e11d13a94a648c5
|
Establishing moderators and biosignatures of antidepressant response in clinical care (EMBARC): Rationale and design.
|
[
{
"docid": "469d83dd9996ca27217907362f44304c",
"text": "Although cells in many brain regions respond to reward, the cortical-basal ganglia circuit is at the heart of the reward system. The key structures in this network are the anterior cingulate cortex, the orbital prefrontal cortex, the ventral striatum, the ventral pallidum, and the midbrain dopamine neurons. In addition, other structures, including the dorsal prefrontal cortex, amygdala, hippocampus, thalamus, and lateral habenular nucleus, and specific brainstem structures such as the pedunculopontine nucleus, and the raphe nucleus, are key components in regulating the reward circuit. Connectivity between these areas forms a complex neural network that mediates different aspects of reward processing. Advances in neuroimaging techniques allow better spatial and temporal resolution. These studies now demonstrate that human functional and structural imaging results map increasingly close to primate anatomy.",
"title": ""
}
] |
[
{
"docid": "39e550b269a66f31d467269c6389cde0",
"text": "The artificial intelligence community has seen a recent resurgence in the area of neural network study. Inspired by the workings of the brain and nervous system, neural networks have solved some persistent problems in vision and speech processing. However, the new systems may offer an alternative approach to decision-making via high level pattern recognition. This paper will describe the distinguishing features of neurally inspired systems, and present popular systems in a discrete-time, algorithmic framework. Examples of applications to decision problems will appear, and guidelines for their use in operations research will be established.",
"title": ""
},
{
"docid": "7f1eb105b7a435993767e4a4b40f7ed9",
"text": "In the last two decades, organizations have recognized, indeed fixated upon, the impOrtance of quality and quality management One manifestation of this is the emergence of the total quality management (TQM) movement, which has been proclaimed as the latest and optimal way of managing organizations. Likewise, in the domain of human resource management, the concept of quality of work life (QWL) has also received much attention of late from theoreticians, researchers, and practitioners. However, little has been done to build a bridge between these two increasingly important concepts, QWL and TQM. The purpose of this research is to empirically examine the relationship between quality of work life (the internalized attitudes employees' have about their jobs) and an indicatorofTQM, customer service attitudes, CSA (the externalized signals employees' send to customers about their jobs). In addition, this study examines how job involvement and organizational commitment mediate the relationship between QWL and CSA. OWL and <:sA HlU.3 doc JJ a9t94 page 3 INTRODUCTION Quality and quality management have become increasingly important topics for both practitioners and researchers (Anderson, Rungtusanatham, & Schroeder, 1994). Among the many quality related activities that have arisen, the principle of total quality mana~ement (TQM) has been advanced as the optimal approach for managing people and processes. Indeed, it is considered by some to be the key to ensuring the long-term viability of organizations (Feigenbaum, 1982). Ofcourse, niany companies have invested heavily in total quality efforts in the form of capital expenditures on plant and equipment, and through various human resource management programs designed to spread the quality gospel. However, many still argue that there is insufficient theoretical development and empirical eviden~e for the determinants and consequences of quality management initiatives (Dean & Bowen, 1994). Mter reviewing the relevant research literatures, we find that three problems persist in the research on TQM. First, a definition of quality has not been agreed upon. Even more problematic is the fact that many of the definitions that do exist are continuously evolving. Not smprisingly, these variable definitions often lead to inconsistent and even conflicting conclusions, Second, very few studies have systematically examined these factors that influence: the quality of goods and services, the implementation of quality activities, or the performance of organizations subsequent to undertaking quality initiatives (Spencer, 1994). Certainly this has been true for quality-related human resource management interventions. Last, TQM has suffered from an \"implementation problem\" (Reger, Gustafson, Demarie, & Mullane, 1994, p. 565) which has prevented it from transitioning from the theoretical to the applied. In the domain of human resource management, quality of working life (QWL) has also received a fair amount of attention of late from theorists, researchers, and practitioners. The underlying, and mostimportant, principles of QWL capture an employee's satisfaction with and feelings about their: work, work environment, and organization. Most who study QWL, and TQM for that matter, tend to focus on the importance of employee systems and organizational performance, whereas researchers in the field ofHRM OWLmdCSA HlU.3doc 1J1l2f}4 pBgc4 usually emphasize individual attitudes and individual performance (Walden, 1994). 
Fmthennore, as Walden (1994) alludes to, there are significantly different managerial prescriptions and applied levels for routine human resource management processes, such as selection, performance appraisal, and compensation, than there are for TQM-driven processes, like teamwork, participative management, and shared decision-making (Deming, 1986, 1993; Juran, 1989; M. Walton, 1986; Dean & Bowen, 1994). To reiterate, these variations are attributable to the difference between a mico focus on employees as opposed to a more macrofocus on employee systems. These specific differences are but a few of the instances where the views of TQM and the views of traditional HRM are not aligned (Cardy & Dobbins, 1993). In summary, although TQM is a ubiquitous organizational phenomenon; it has been given little research attention, especially in the form ofempirical studies. Therefore, the goal of this study is to provide an empirical assessment of how one, internalized, indicator ofHRM effectiveness, QWL, is associated with one, externalized, indicator of TQM, customer service attitudes, CSA. In doing so, it bridges the gap between \"employee-focused\" H.RM outcoines and \"customer-focused\" TQM consequences. In addition, it examines the mediating effects of organizational commitment and job involvement on this relationship. QUALITY OF WORK LIFE AND CUSTOMER SERVICE AITITUDES In this section, we introduce and review the main principles of customer service attitudes, CSA, and discuss its measurement Thereafter, our extended conceptualization and measurement of QWL will be presented. Fmally, two variables hypothesized to function as mediators of the relationship between CSA and QWL, organization commitment and job involvement, will be· explored. Customer Service Attitudes (CSA) Despite all the ruminations about it in the business and trade press, TQM still remains an ambiguous notion, one that often gives rise to as many different definitions as there are observers. Some focus on the presence of organizational systems. Others, the importance of leadership. ., Many stress the need to reduce variation in organizational processes (Deming, 1986). A number · OWL and CSA mn.3 doc 11 fl9tlJ4 page 5 emphasize reducing costs through q~ty improvement (p.B. Crosby, 1979). Still others focus on quality planing, control, and improvement (Juran, 1989). Regardless of these differences, however, the most important, generally agreed upon principle is to be \"customer focused\" (Feigenbaum, 1982). The cornerstone for this principle is the belief that customer satisfaction and customer judgments about the organization and itsproducts are the most important determinants of long-term organizational viability (Oliva, Oliver & MacMillan, 1992). Not surprisingly, this belief is a prominent tenet in both the manufacturing and service sectors alike. Conventional wisdom holds that quality can best be evaluated from the customers' perspective. Certainly, customers can easily articulate how well a product or service meets their expectations. Therefore, managers and researchers must take into account subjective and cognitive factors that influence customers' judgments when trying to identify influential customer cues, rather than just relying on organizational presumptions. Recently, for example, Hannon & Sano (1994) described how customer-driven HR strategies and practices are pervasive in Japan. 
An example they cited was the practice of making the tOp graduates from the best schools work in low level, customer service jobs for their first 1-2 years so that they might better underst3nd customers and their needs. To be sure, defining quality in terms of whether a product or service meets the expectations ofcustomers is all-encompassing. As a result of the breadth of this issue, and the limited research on this topic, many importantquestions about the service relationship, particularly those penaining to exchanges between employees and customers, linger. Some include, \"What are the key dimensions of service quality?\" and \"What are the actions service employees might direct their efforts to in order to foster good relationships with customers?\" Arguably, the most readily obvious manifestations of quality for any customer are the service attitudes ofemployees. In fact, dming the employee-customer interaction, conventional wisdom holds that employees' customer service attitudes influence customer satisfaction, customer evaluations, and decisions to buy. . OWL and <:SA HJU.3,doc J J129m page 6 According to Rosander (1980), there are five dimensions of service quality: quality of employee performance, facility, data, decision, and outcome. Undoubtedly, the performance of the employee influences customer satisfaction. This phenomenon has been referred to as interactive quality (Lehtinen & Lehtinen, 1982). Parasuraman, Zeithaml, & Berry (1985) go so far as to suggest that service quality is ultimately a function of the relationship between the employee and the customer, not the product or the price. Sasser, Olsen, & Wyckoff (1987) echo the assertion that personnel performance is a critical factor in the satisfaction of customers. If all of them are right, the relationship between satisfaction with quality of work life and customer service attitudes cannot be understated. Measuring Customer Service Attitudes The challenge of measuring service quality has increasingly captured the attention of researchers (Teas, 1994; Cronin & Taylor, 1992). While the substance and determinants of quality may remain undefined, its importance to organizations is unquestionable. Nevertheless, numerous problems inherent in the measurement of customer service attitudes still exist (Reeves & Bednar, 1994). Perhaps the complexities involved in measuring this construct have deterred many researchers from attempting to define and model service quality. Maybe this is also the reason why many of the efforts to define and measure service quality have emanated primarily from manufacturing, rather than service, settings. When it has been measured, quality has sometimes been defined as a \"zero defect\" policy, a perspective the Japanese have embraced. Alternatively, P.B. Crosby (1979) quantifies quality as \"conformance to requirements.\" Garvin (1983; 1988), on the other hand, measures quality in terms ofcounting the incidence of \"internal failures\" and \"external failures.\" Other definitions include \"value\" (Abbot, 1955; Feigenbaum, 1982), \"concordance to specification'\" (Gilmo",
"title": ""
},
{
"docid": "e8792ced13f1be61d031e2b150cc5cf6",
"text": "Scientific literature cites a wide range of values for caffeine content in food products. The authors suggest the following standard values for the United States: coffee (5 oz) 85 mg for ground roasted coffee, 60 mg for instant and 3 mg for decaffeinated; tea (5 oz): 30 mg for leaf/bag and 20 mg for instant; colas: 18 mg/6 oz serving; cocoa/hot chocolate: 4 mg/5 oz; chocolate milk: 4 mg/6 oz; chocolate candy: 1.5-6.0 mg/oz. Some products from the United Kingdom and Denmark have higher caffeine content. Caffeine consumption survey data are limited. Based on product usage and available consumption data, the authors suggest a mean daily caffeine intake for US consumers of 4 mg/kg. Among children younger than 18 years of age who are consumers of caffeine-containing foods, the mean daily caffeine intake is about 1 mg/kg. Both adults and children in Denmark and UK have higher levels of caffeine intake.",
"title": ""
},
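As a rough worked example of how a per-kilogram figure like the one quoted in the preceding passage can be computed, the sketch below converts a day's consumption into mg/kg using the passage's per-serving values; the serving counts and body weight are hypothetical, not taken from the passage.

```python
# Hypothetical daily intake for a 70 kg adult, using the per-serving
# caffeine values quoted in the passage (mg per serving).
servings = {
    "ground_roasted_coffee_5oz": (2, 85),  # (count, mg per serving)
    "tea_leaf_bag_5oz": (1, 30),
    "cola_6oz": (2, 18),
}

body_weight_kg = 70.0
total_mg = sum(count * mg for count, mg in servings.values())
intake_mg_per_kg = total_mg / body_weight_kg

print(f"Total caffeine: {total_mg} mg")             # 236 mg
print(f"Intake: {intake_mg_per_kg:.2f} mg/kg/day")  # ~3.37 mg/kg, near the 4 mg/kg mean cited
```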
{
"docid": "9d1dc15130b9810f6232b4a3c77e8038",
"text": "This paper argues that we should seek the golden middle way between dynamically and statically typed languages.",
"title": ""
},
{
"docid": "8a21ff7f3e4d73233208d5faa70eb7ce",
"text": "Achieving robustness and energy efficiency in nanoscale CMOS process technologies is made challenging due to the presence of process, temperature, and voltage variations. Traditional fault-tolerance techniques such as N-modular redundancy (NMR) employ deterministic error detection and correction, e.g., majority voter, and tend to be power hungry. This paper proposes soft NMR that nontrivially extends NMR by consciously exploiting error statistics caused by nanoscale artifacts in order to design robust and energy-efficient systems. In contrast to conventional NMR, soft NMR employs Bayesian detection techniques in the voter. Soft voter algorithms are obtained through optimization of appropriate application aware cost functions. Analysis indicates that, on average, soft NMR outperforms conventional NMR. Furthermore, unlike NMR, in many cases, soft NMR is able to generate a correct output even when all N replicas are in error. This increase in robustness is then traded-off through voltage scaling to achieve energy efficiency. The design of a discrete cosine transform (DCT) image coder is employed to demonstrate the benefits of the proposed technique. Simulations in a commercial 45 nm, 1.2 V, CMOS process show that soft NMR provides up to 10× improvement in robustness, and 35 percent power savings over conventional NMR.",
"title": ""
},
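The preceding passage contrasts hard majority voting with a "soft", statistics-aware voter. The sketch below is only a minimal illustration of that contrast under an assumed additive-Gaussian error model; it is not the paper's application-aware Bayesian voter, and the replica outputs and noise variances are made up.

```python
import numpy as np

def majority_vote(outputs):
    """Hard NMR: return the value produced by most replicas (ties -> first)."""
    values, counts = np.unique(outputs, return_counts=True)
    return values[np.argmax(counts)]

def soft_vote(outputs, noise_vars):
    """A 'soft' voter under an assumed additive-Gaussian error model: the
    maximum-likelihood estimate is the inverse-variance-weighted mean, so
    every replica contributes even when no two replicas agree exactly."""
    outputs = np.asarray(outputs, dtype=float)
    weights = 1.0 / np.asarray(noise_vars, dtype=float)
    return np.sum(weights * outputs) / np.sum(weights)

# Three replicas of a DCT coefficient whose true value is 10.0;
# all three are slightly wrong, so majority voting finds no majority.
replica_outputs = [10.4, 9.8, 12.5]
replica_noise_vars = [0.5, 0.5, 4.0]   # the third replica is the least reliable

print(majority_vote(replica_outputs))                   # 9.8 (an arbitrary pick)
print(soft_vote(replica_outputs, replica_noise_vars))   # ~10.24, close to the truth
```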
{
"docid": "373c89beb40ce164999892be2ccb8f46",
"text": "Recent advances in mobile technologies (esp., smart phones and tablets with built-in cameras, GPS and Internet access) made augmented reality (AR ) applications available for the broad public. While many researchers have examined the af fordances and constraints of AR for teaching and learning, quantitative evidence for it s effectiveness is still scarce. To contribute to filling this research gap, we designed and condu cted a pretest-posttest crossover field experiment with 101 participants at a mathematics exh ibition to measure the effect of AR on acquiring and retaining mathematical knowledge in a n informal learning environment. We hypothesized that visitors acquire more knowledge f rom augmented exhibits than from exhibits without AR. The theoretical rationale for our h ypothesis is that AR allows for the efficient and effective implementation of a subset of the des ign principles defined in the cognitive theory of multimedia. The empirical results we obtaine d show that museum visitors performed better on knowledge acquisition and retention tests related to augmented exhibits than to nonaugmented exhibits and that they perceived AR as a valuable and desirable add-on for museum exhibitions.",
"title": ""
},
{
"docid": "591e4719cadd8b9e6dfda932856fffce",
"text": "Over the last two decades, multiple classifier system (MCS) or classifier ensemble has shown great potential to improve the accuracy and reliability of remote sensing image classification. Although there are lots of literatures covering the MCS approaches, there is a lack of a comprehensive literature review which presents an overall architecture of the basic principles and trends behind the design of remote sensing classifier ensemble. Therefore, in order to give a reference point for MCS approaches, this paper attempts to explicitly review the remote sensing implementations of MCS and proposes some modified approaches. The effectiveness of existing and improved algorithms are analyzed and evaluated by multi-source remotely sensed images, including high spatial resolution image (QuickBird), hyperspectral image (OMISII) and multi-spectral image (Landsat ETM+). Experimental results demonstrate that MCS can effectively improve the accuracy and stability of remote sensing image classification, and diversity measures play an active role for the combination of multiple classifiers. Furthermore, this survey provides a roadmap to guide future research, algorithm enhancement and facilitate knowledge accumulation of MCS in remote sensing community.",
"title": ""
},
{
"docid": "fb97b11eba38f84f38b473a09119162a",
"text": "We show how to encrypt a relational database in such a way that it can efficiently support a large class of SQL queries. Our construction is based solely on structured encryption and does not make use of any property-preserving encryption (PPE) schemes such as deterministic and order-preserving encryption. As such, our approach leaks considerably less than PPE-based solutions which have recently been shown to reveal a lot of information in certain settings (Naveed et al., CCS ’15 ). Our construction achieves asymptotically optimal query complexity under very natural conditions on the database and queries.",
"title": ""
},
{
"docid": "5a583fe6fae9f0624bcde5043c56c566",
"text": "In this paper, a microstrip dipole antenna on a flexible organic substrate is proposed. The antenna arms are tilted to make different variations of the dipole with more compact size and almost same performance. The antennas are fed using a coplanar stripline (CPS) geometry (Simons, 2001). The antennas are then conformed over cylindrical surfaces and their performances are compared to their flat counterparts. Good performance is achieved for both the flat and conformal antennas.",
"title": ""
},
{
"docid": "09e164aa239be608e8c2ba250d168ebc",
"text": "The alarming growth rate of malicious apps has become a serious issue that sets back the prosperous mobile ecosystem. A recent report indicates that a new malicious app for Android is introduced every 10 s. To combat this serious malware campaign, we need a scalable malware detection approach that can effectively and efficiently identify malware apps. Numerous malware detection tools have been developed, including system-level and network-level approaches. However, scaling the detection for a large bundle of apps remains a challenging task. In this paper, we introduce Significant Permission IDentification (SigPID), a malware detection system based on permission usage analysis to cope with the rapid increase in the number of Android malware. Instead of extracting and analyzing all Android permissions, we develop three levels of pruning by mining the permission data to identify the most significant permissions that can be effective in distinguishing between benign and malicious apps. SigPID then utilizes machine-learning-based classification methods to classify different families of malware and benign apps. Our evaluation finds that only 22 permissions are significant. We then compare the performance of our approach, using only 22 permissions, against a baseline approach that analyzes all permissions. The results indicate that when a support vector machine is used as the classifier, we can achieve over 90% of precision, recall, accuracy, and F-measure, which are about the same as those produced by the baseline approach while incurring the analysis times that are 4–32 times less than those of using all permissions. Compared against other state-of-the-art approaches, SigPID is more effective by detecting 93.62% of malware in the dataset and 91.4% unknown/new malware samples.",
"title": ""
},
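The final stage described in the preceding passage, training a classifier on permission-usage features, can be sketched as follows. This is an illustration only: the random permission vectors, the toy labelling rule, and the linear SVM are assumptions, and the paper's three-level pruning that selects the 22 significant permissions is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_apps, n_significant_permissions = 200, 22

# Each row records whether an app requests each retained permission (0/1).
X = rng.integers(0, 2, size=(n_apps, n_significant_permissions))
# Toy labelling rule: apps requesting many of the retained permissions are
# marked malicious (1), others benign (0). Real labels come from the dataset.
y = (X.sum(axis=1) > 11).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```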
{
"docid": "c7857bde224ef6252602798c349beb44",
"text": "Context Several studies show that people with low health literacy skills have poorer health-related knowledge and comprehension. Contribution This updated systematic review of 96 studies found that low health literacy is associated with poorer ability to understand and follow medical advice, poorer health outcomes, and differential use of some health care services. Caution No studies examined the relationship between oral literacy (speaking and listening skills) and outcomes. Implication Although it is challenging, we need to find feasible ways to improve patients' health literacy skills and reduce the negative effects of low health literacy on outcomes. The Editors The term health literacy refers to a set of skills that people need to function effectively in the health care environment (1). These skills include the ability to read and understand text and to locate and interpret information in documents (print literacy); use quantitative information for tasks, such as interpreting food labels, measuring blood glucose levels, and adhering to medication regimens (numeracy); and speak and listen effectively (oral literacy) (2, 3). Approximately 80 million U.S. adults are thought to have limited health literacy, which puts them at risk for poorer health outcomes. Rates of limited health literacy are higher among elderly, minority, and poor persons and those with less than a high school education (4). Numerous policy and advocacy organizations have expressed concern about barriers caused by low health literacy, notably the Institute of Medicine's report Health Literacy: A Prescription to End Confusion in 2004 (5) and the U.S. Department of Health and Human Services' report National Action Plan to Improve Health Literacy in 2010 (6). To understand the relationship between health literacy level and use of health care services, health outcomes, costs, and disparities in health outcomes, we conducted a systematic evidence review for the Agency for Healthcare Research and Quality (AHRQ) (published in 2004), which was limited to the relationship between print literacy and health outcomes (7). We found a consistent association between low health literacy (measured by reading skills) and more limited health-related knowledge and comprehension. The relationship between health literacy level and other outcomes was less clear, primarily because of a lack of studies and relatively unsophisticated methods in the available studies. In this review, we update and expand the earlier review (7). Since 2004, researchers have conducted new and more sophisticated studies. Thus, in synthesizing the literature, we can now consider the relationship between outcomes and health literacy (print literacy alone or combined with numeracy) and between outcomes and the numeracy component of health literacy alone. Methods We developed and followed a protocol that used standard AHRQ Evidence-based Practice Center methods. The full report describes study methods in detail and presents evidence tables for each included study (1). Literature Search We searched MEDLINE, CINAHL, the Cochrane Library, PsycINFO, and ERIC databases. For health literacy, our search dates were from 2003 to May 2010. For numeracy, they were from 1966 to May 2010; we began at an earlier date because numeracy was not addressed in our 2004 review. For this review, we updated our searches beyond what was included in the full report from May 2010 through 22 February 2011 to be current with the most recent literature. 
No Medical Subject Heading terms specifically identify health literacyrelated articles, so we conducted keyword searches, including health literacy, literacy, numeracy, and terms or phrases used to identify related measurement instruments. We also hand-searched reference lists of pertinent review articles and editorials. Appendix Table 1 shows the full search strategy. Appendix Table 1. Search Strategy Study Selection We included English-language studies on persons of all ages whose health literacy or that of their caregivers (including numeracy or oral health literacy) had been measured directly and had not been self-reported. Studies had to compare participants in relation to an outcome, including health care access and service use, health outcomes, and costs of care. For numeracy studies, outcomes also included knowledge, because our earlier review had established the relationship between only health literacy and knowledge. We did not examine outcomes concerning attitudes, social norms, or patientprovider relationships. Data Abstraction and Quality Assessment After determining article inclusion, 1 reviewer entered study data into evidence tables; a second, senior reviewer checked the information for accuracy and completeness. Two reviewers independently rated the quality of studies as good, fair, or poor by using criteria designed to detect potential risk of bias in an observational study (including selection bias, measurement bias, and control for potential confounding) and precision of measurement. Data Synthesis and Strength of Evidence We assessed the overall strength of the evidence for each outcome separately for studies measuring health literacy and those measuring numeracy on the basis of information only from good- and fair-quality studies. Using AHRQ guidance (8), we graded the strength of evidence as high, moderate, low, or insufficient on the basis of the potential risk of bias of included studies, consistency of effect across studies, directness of the evidence, and precision of the estimate (Table 1). We determined the grade on the basis of the literature from the update searches. We then considered whether the findings from the 2004 review would alter our conclusions. We graded the body of evidence for an outcome as low if the evidence was limited to 1 study that controlled for potential confounding variables or to several small studies in which all, or only some, controlled for potential confounding variables or as insufficient if findings across studies were inconsistent or were limited to 1 unadjusted study. Because of heterogeneity across studies in their approaches to measuring health literacy, numeracy, and outcomes, we summarized the evidence through consensus discussions and did not conduct any meta-analyses. Table 1. Strength of Evidence Grades and Definitions Role of the Funding Source AHRQ reviewed a draft report and provided copyright release for this manuscript. The funding source did not participate in conducting literature searches, determining study eligibility, evaluating individual studies, grading evidence, or interpreting results. Results First, we present the results from our literature search and a summary of characteristics across studies, followed by findings specific to health literacy then numeracy. We generally highlight evidence of moderate or high strength and mention only outcomes with low or insufficient evidence. Where relevant, we comment on the evidence provided through the 2004 review. 
Tables 2 and 3 summarize our findings and strength-of-evidence grade for each included health literacy and numeracy outcome, respectively. Table 2. Health Literacy Outcome Results: Strength of Evidence and Summary of Findings, 2004 and 2011 Table 3. Numeracy Outcome Results: Strength of Evidence and Summary of Findings, 2011 Characteristics of Reviewed Studies We identified 3823 citations and evaluated 1012 full-text articles (Appendix Figure). Ultimately, we included 96 studies rated as good or fair quality. These studies were reported in 111 articles because some investigators reported study results in multiple publications (98 articles on health literacy, 22 on numeracy, and 9 on both). We found no studies that examined outcomes by the oral (verbal) component of health literacy. Of the 111 articles, 100 were rated as fair quality. All studies were observational, primarily cross-sectional designs (91 of 111 articles). The Supplement (health literacy) and Appendix Table 2 (numeracy) present summary information for each included article. Supplement. Overview of Health Literacy Studies Appendix Figure. Summary of evidence search and selection. KQ = key question. Appendix Table 2. Overview of Numeracy Studies Studies varied in their measurement of health literacy and numeracy. Commonly used instruments to measure health literacy are the Rapid Estimate of Adult Literacy in Medicine (REALM) (9), the Test of Functional Health Literacy in Adults (TOFHLA) (10), and short TOFHLA (S-TOFHLA). Instruments frequently used to measure numeracy are the SchwartzWoloshin Numeracy Test (11) and the Wide Range Achievement Test (WRAT) math subtest (12). Studies also differed in how investigators distinguished between levels or thresholds of health literacyeither as a continuous measure or as categorical groups. Some studies identified 3 groups, often called inadequate, marginal, and adequate, whereas others combined 2 of the 3 groups. Because evidence was sparse for evaluating differences between marginal and adequate health literacy, our results focus on the differences between the lowest and highest groups. Studies in this update generally included multivariate analyses rather than simpler unadjusted analyses. They varied considerably, however, in regard to which potential confounding variables are controlled (Supplement and Appendix Table 2). All results reported here are from adjusted analyses that controlled for potential confounding variables, unless otherwise noted. Relationship Between Health Literacy and Outcomes Use of Health Care Services and Access to Care Emergency Care and Hospitalizations. Nine studies examining the risk for emergency care use (1321) and 6 examining the risk for hospitalizations (1419) provided moderate evidence showing increased use of both services among people with lower health literacy, including elderly persons, clinic and inner-city hospital patients, patients with asthma, and patients with congestive heart failure.",
"title": ""
},
{
"docid": "f6fa1c4ce34f627d9d7d1ca702272e26",
"text": "One of the most difficult aspects in rhinoplasty is resolving and preventing functional compromise of the nasal valve area reliably. The nasal valves are crucial for the individual breathing competence of the nose. Structural and functional elements contribute to this complex system: the nasolabial angle, the configuration and stability of the alae, the function of the internal nasal valve, the anterior septum symmetrically separating the bilateral airways and giving structural and functional support to the alar cartilage complex and to their junction with the upper lateral cartilages, the scroll area. Subsequently, the open angle between septum and sidewalls is important for sufficient airflow as well as the position and function of the head of the turbinates. The clinical examination of these elements is described. Surgical techniques are more or less well known and demonstrated with patient examples and drawings: anterior septoplasty, reconstruction of tip and dorsum support by septal extension grafts and septal replacement, tip suspension and lateral crural sliding technique, spreader grafts and suture techniques, splay grafts, alar batten grafts, lateral crural extension grafts, and lateral alar suspension. The numerous literature is reviewed.",
"title": ""
},
{
"docid": "f3a044835e9cbd0c13218ab0f9c06dd1",
"text": "Among the various human factors impinging upon making a decision in an uncertain environment, risk and trust are surely crucial ones. Several models for trust have been proposed in the literature but few explicitly take risk into account. This paper analyses the relationship between the two concepts by first looking at how a decision is made to enter into a transaction based on the risk information. We then draw a model of the invested fraction of the capital function of a decision surface. We finally define a model of trust composed of a reliability trust as the probability of transaction success and a decision trust derived from the decision surface.",
"title": ""
},
{
"docid": "03e7070b1eb755d792564077f65ea012",
"text": "The widespread use of online social networks (OSNs) to disseminate information and exchange opinions, by the general public, news media, and political actors alike, has enabled new avenues of research in computational political science. In this paper, we study the problem of quantifying and inferring the political leaning of Twitter users. We formulate political leaning inference as a convex optimization problem that incorporates two ideas: (a) users are consistent in their actions of tweeting and retweeting about political issues, and (b) similar users tend to be retweeted by similar audience. We then apply our inference technique to 119 million election-related tweets collected in seven months during the 2012 U.S. presidential election campaign. On a set of frequently retweeted sources, our technique achieves 94 percent accuracy and high rank correlation as compared with manually created labels. By studying the political leaning of 1,000 frequently retweeted sources, 232,000 ordinary users who retweeted them, and the hashtags used by these sources, our quantitative study sheds light on the political demographics of the Twitter population, and the temporal dynamics of political polarization as events unfold.",
"title": ""
},
{
"docid": "b294a3541182e3195254e83b092f537d",
"text": "This paper describes a new project intended to provide a firmer theoretical and empirical foundation for such tasks as enterprise modeling, enterprise integration, and process re-engineering. The project includes ( 1 ) collecting examples of how different organizations perform sim'lar processes, and ( 2 ) representing these examples in an on-line \"process handbook\" which includes the relative advantages of the alternatives. The handbook is intended to help (a) redesign existing Organizational processes, ( b ) invent new organizational processes that take advantage of information technology, and perhaps (e ) automatically generate sofivare to support organizational processes. A key element of the work is a novel approach to representing processes at various levels of abstraction. This approach uses ideas from computer science about inheritance and from coordinalion theory about managing dependencies. Its primary advantage is that it allows users to explicitly represent the similarities (and differences) among related processes and to easily find or generate sensible alternatives for how a given process could be",
"title": ""
},
{
"docid": "157f5ef02675b789df0f893311a5db72",
"text": "We present a novel spectral shading model for human skin. Our model accounts for both subsurface and surface scattering, and uses only four parameters to simulate the interaction of light with human skin. The four parameters control the amount of oil, melanin and hemoglobin in the skin, which makes it possible to match specific skin types. Using these parameters we generate custom wavelength dependent diffusion profiles for a two-layer skin model that account for subsurface scattering within the skin. These diffusion profiles are computed using convolved diffusion multipoles, enabling an accurate and rapid simulation of the subsurface scattering of light within skin. We combine the subsurface scattering simulation with a Torrance-Sparrow BRDF model to simulate the interaction of light with an oily layer at the surface of the skin. Our results demonstrate that this four parameter model makes it possible to simulate the range of natural appearance of human skin including African, Asian, and Caucasian skin types.",
"title": ""
},
{
"docid": "57502ae793808fded7d446a3bb82ca74",
"text": "Over the last decade, the “digitization” of the electron enterprise has grown at exponential rates. Utility, industrial, commercial, and even residential consumers are transforming all aspects of their lives into the digital domain. Moving forward, it is expected that every piece of equipment, every receptacle, every switch, and even every light bulb will possess some type of setting, monitoring and/or control. In order to be able to manage the large number of devices and to enable the various devices to communicate with one another, a new communication model was needed. That model has been developed and standardized as IEC61850 – Communication Networks and Systems in Substations. This paper looks at the needs of next generation communication systems and provides an overview of the IEC61850 protocol and how it meets these needs. I. Communication System Needs Communication has always played a critical role in the real-time operation of the power system. In the beginning, the telephone was used to communicate line loadings back to the control center as well as to dispatch operators to perform switching operations at substations. Telephoneswitching based remote control units were available as early as the 1930’s and were able to provide status and control for a few points. As digital communications became a viable option in the 1960’s, data acquisition systems (DAS) were installed to automatically collect measurement data from the substations. Since bandwidth was limited, DAS communication protocols were optimized to operate over low-bandwidth communication channels. The “cost” of this optimization was the time it took to configure, map, and document the location of the various data bits received by the protocol. As we move into the digital age, literally thousands of analog and digital data points are available in a single Intelligent Electronic Device (IED) and communication bandwidth is no longer a limiting factor. Substation to master communication data paths operating at 64,000 bits per second are becoming commonplace with an obvious migration path to much high rates. With this migration in technology, the “cost” component of a data acquisition system has now become the configuration and documentation component. Consequently, a key component of a communication system is the ability to describe themselves from both a data and services (communication functions that an IED performs) perspective. Other “key” requirements include: • High-speed IED to IED communication",
"title": ""
},
{
"docid": "950d7d10b09f5d13e09692b2a4576c00",
"text": "Prebiotics, as currently conceived of, are all carbohydrates of relatively short chain length. To be effective they must reach the cecum. Present evidence concerning the 2 most studied prebiotics, fructooligosaccharides and inulin, is consistent with their resisting digestion by gastric acid and pancreatic enzymes in vivo. However, the wide variety of new candidate prebiotics becoming available for human use requires that a manageable set of in vitro tests be agreed on so that their nondigestibility and fermentability can be established without recourse to human studies in every case. In the large intestine, prebiotics, in addition to their selective effects on bifidobacteria and lactobacilli, influence many aspects of bowel function through fermentation. Short-chain fatty acids are a major product of prebiotic breakdown, but as yet, no characteristic pattern of fermentation acids has been identified. Through stimulation of bacterial growth and fermentation, prebiotics affect bowel habit and are mildly laxative. Perhaps more importantly, some are a potent source of hydrogen in the gut. Mild flatulence is frequently observed by subjects being fed prebiotics; in a significant number of subjects it is severe enough to be unacceptable and to discourage consumption. Prebiotics are like other carbohydrates that reach the cecum, such as nonstarch polysaccharides, sugar alcohols, and resistant starch, in being substrates for fermentation. They are, however, distinctive in their selective effect on the microflora and their propensity to produce flatulence.",
"title": ""
},
{
"docid": "e541be7c81576fdef564fd7eba5d67dd",
"text": "As the cost of massively broadband® semiconductors continue to be driven down at millimeter wave (mm-wave) frequencies, there is great potential to use LMDS spectrum (in the 28-38 GHz bands) and the 60 GHz band for cellular/mobile and peer-to-peer wireless networks. This work presents urban cellular and peer-to-peer RF wideband channel measurements using a broadband sliding correlator channel sounder and steerable antennas at carrier frequencies of 38 GHz and 60 GHz, and presents measurements showing the propagation time delay spread and path loss as a function of separation distance and antenna pointing angles for many types of real-world environments. The data presented here show that at 38 GHz, unobstructed Line of Site (LOS) channels obey free space propagation path loss while non-LOS (NLOS) channels have large multipath delay spreads and can exploit many different pointing angles to provide propagation links. At 60 GHz, there is notably more path loss, smaller delay spreads, and fewer unique antenna angles for creating a link. For both 38 GHz and 60 GHz, we demonstrate empirical relationships between the RMS delay spread and antenna pointing angles, and observe that excess path loss (above free space) has an inverse relationship with transmitter-to-receiver separation distance.",
"title": ""
},
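The LOS observation in the preceding passage (free-space behaviour at 38 GHz) can be illustrated with the standard Friis free-space path-loss formula. The sketch below is not the paper's measurement model; the 100 m distance is a hypothetical example and the measured NLOS behaviour is not modelled.

```python
# Free-space path loss in dB: FSPL = 20*log10(4*pi*d*f/c).
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    c = 3.0e8  # speed of light (approximate), m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

for f_ghz in (38, 60):
    print(f"{f_ghz} GHz, 100 m LOS: {fspl_db(100.0, f_ghz * 1e9):.1f} dB")
# 38 GHz, 100 m LOS: 104.0 dB
# 60 GHz, 100 m LOS: 108.0 dB  (about 4 dB more loss than at 38 GHz)
```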
{
"docid": "5392e45840929b05b549a64a250774e5",
"text": "Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation–maximization-like procedure. During the E-step, we update the segmentation and in the M-step the face model parameters are updated. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis–Hastings algorithm. For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling. Our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database we obtain a first empirical distribution of real-world illumination conditions. The obtained empirical distribution is made publicly available and can be used as prior in probabilistic frameworks, for regularization or to synthesize data for deep learning methods.",
"title": ""
}
] |
scidocsrr
|
842674cf8a39f07f2abf32dd670a7ec9
|
Anomalous lattice vibrations of single- and few-layer MoS2.
|
[
{
"docid": "9068ae05b4064a98977f6a19bae6ccf0",
"text": "We present Raman spectroscopy measurements on single- and few-layer graphene flakes. By using a scanning confocal approach, we collect spectral data with spatial resolution, which allows us to directly compare Raman images with scanning force micrographs. Single-layer graphene can be distinguished from double- and few-layer by the width of the D' line: the single peak for single-layer graphene splits into different peaks for the double-layer. These findings are explained using the double-resonant Raman model based on ab initio calculations of the electronic structure and of the phonon dispersion. We investigate the D line intensity and find no defects within the flake. A finite D line response originating from the edges can be attributed either to defects or to the breakdown of translational symmetry.",
"title": ""
}
] |
[
{
"docid": "23f9be150ae62c583d34b53b509818a4",
"text": "Online social networks (OSNs) have experienced tremendous growth in recent years and become a de facto portal for hundreds of millions of Internet users. These OSNs offer attractive means for digital social interactions and information sharing, but also raise a number of security and privacy issues. While OSNs allow users to restrict access to shared data, they currently do not provide any mechanism to enforce privacy concerns over data associated with multiple users. To this end, we propose an approach to enable the protection of shared data associated with multiple users in OSNs. We formulate an access control model to capture the essence of multiparty authorization requirements, along with a multiparty policy specification scheme and a policy enforcement mechanism. Besides, we present a logical representation of our access control model that allows us to leverage the features of existing logic solvers to perform various analysis tasks on our model. We also discuss a proof-of-concept prototype of our approach as part of an application in Facebook and provide usability study and system evaluation of our method.",
"title": ""
},
{
"docid": "e409a2a23fb0dbeb0aa57c89a10d61b1",
"text": "Text is still the most prevalent Internet media type. Examples of this include popular social networking applications such as Twitter, Craigslist, Facebook, etc. Other web applications such as e-mail, blog, chat rooms, etc. are also mostly text based. A question we address in this paper that deals with text based Internet forensics is the following: given a short text document, can we identify if the author is a man or a woman? This question is motivated by recent events where people faked their gender on the Internet. Note that this is different from the authorship attribution problem. In this paper we investigate author gender identification for short length, multi-genre, content-free text, such as the ones found in many Internet applications. Fundamental questions we ask are: do men and women inherently use different classes of language styles? If this is true, what are good linguistic features that indicate gender? Based on research in human psychology, we propose 545 psycho-linguistic and gender-preferential cues along with stylometric features to build the feature space for this identification problem. Note that identifying the correct set of features that indicate gender is an open research problem. Three machine learning algorithms (support vector machine, Bayesian logistic regression and AdaBoost decision tree) are then designed for gender identification based on the proposed features. Extensive experiments on large text corpora (Reuters Corpus Volume 1 newsgroup data and Enron e-mail data) indicate an accuracy up to 85.1% in identifying the gender. Experiments also indicate that function words, word-based features and structural features are significant gender discriminators. a 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6fe77035a5101f60968a189d648e2feb",
"text": "In the past few years, Reddit -- a community-driven platform for submitting, commenting and rating links and text posts -- has grown exponentially, from a small community of users into one of the largest online communities on the Web. To the best of our knowledge, this work represents the most comprehensive longitudinal study of Reddit's evolution to date, studying both (i) how user submissions have evolved over time and (ii) how the community's allocation of attention and its perception of submissions have changed over 5 years based on an analysis of almost 60 million submissions. Our work reveals an ever-increasing diversification of topics accompanied by a simultaneous concentration towards a few selected domains both in terms of posted submissions as well as perception and attention. By and large, our investigations suggest that Reddit has transformed itself from a dedicated gateway to the Web to an increasingly self-referential community that focuses on and reinforces its own user-generated image- and textual content over external sources.",
"title": ""
},
{
"docid": "1c0e441afd88f00b690900c42b40841a",
"text": "Convergence problems occur abundantly in all branches of mathematics or in the mathematical treatment of the sciences. Sequence transformations are principal tools to overcome convergence problems of the kind. They accomplish this by converting a slowly converging or diverging input sequence {sn} ∞ n=0 into another sequence {s ′ n }∞ n=0 with hopefully better numerical properties. Padé approximants, which convert the partial sums of a power series to a doubly indexed sequence of rational functions, are the best known sequence transformations, but the emphasis of the review will be on alternative sequence transformations which for some problems provide better results than Padé approximants.",
"title": ""
},
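As a concrete illustration of what a sequence transformation does, the sketch below applies Aitken's delta-squared process (a special case of the Shanks transformation) to the slowly converging partial sums of the alternating harmonic series. It is an illustration only, not the review's own machinery, and the choice of 10 terms is arbitrary.

```python
import math

def partial_sums(n_terms):
    """Partial sums of the alternating harmonic series, converging slowly to ln(2)."""
    s, total = [], 0.0
    for k in range(1, n_terms + 1):
        total += (-1) ** (k + 1) / k
        s.append(total)
    return s

def aitken(s):
    """Apply Aitken's delta-squared transformation to a sequence."""
    return [
        s[i + 2] - (s[i + 2] - s[i + 1]) ** 2 / (s[i + 2] - 2 * s[i + 1] + s[i])
        for i in range(len(s) - 2)
    ]

s = partial_sums(10)
print(abs(s[-1] - math.log(2)))           # ~4.8e-02: error of the raw partial sums
print(abs(aitken(s)[-1] - math.log(2)))   # ~1.4e-04: error after one transformation
```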
{
"docid": "76e75c4549cbaf89796355b299bedfdc",
"text": "Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixellevel brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, stateof-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the latest event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of processing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20.",
"title": ""
},
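The core data structure described in the preceding passage, the Surface of Active Events (a per-pixel map of the most recent event timestamp, kept per polarity), is easy to sketch. The toy "cornerness" test below is an assumption for illustration only; the actual detector compares timestamp arcs on two circles around each incoming event.

```python
import numpy as np

H, W = 180, 240
sae = np.full((2, H, W), -np.inf)  # one timestamp surface per event polarity

def process_event(x, y, t, polarity, window=0.01):
    """Update the SAE with an incoming event and run a toy local test."""
    sae[polarity, y, x] = t
    patch = sae[polarity, max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
    recent_neighbours = int(np.count_nonzero(patch > t - window)) - 1  # exclude self
    return recent_neighbours >= 3  # crude stand-in for the real corner decision

# Feed a few synthetic events (x, y, timestamp in seconds, polarity).
events = [(50, 60, 0.001, 1), (51, 60, 0.002, 1), (50, 61, 0.003, 1), (51, 61, 0.004, 1)]
for e in events:
    print(process_event(*e))  # False, False, False, True
```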
{
"docid": "095c796491edf050dc372799ae82b3d3",
"text": "Networks evolve continuously over time with the addition, deletion, and changing of links and nodes. Although many networks contain this type of temporal information, the majority of research in network representation learning has focused on static snapshots of the graph and has largely ignored the temporal dynamics of the network. In this work, we describe a general framework for incorporating temporal information into network embedding methods. The framework gives rise to methods for learning time-respecting embeddings from continuous-time dynamic networks. Overall, the experiments demonstrate the effectiveness of the proposed framework and dynamic network embedding approach as it achieves an average gain of 11.9% across all methods and graphs. The results indicate that modeling temporal dependencies in graphs is important for learning appropriate and meaningful network representations.",
"title": ""
},
{
"docid": "b139bad3a500fad18c203316fb6fbb55",
"text": "The current environment of web applications demands performance and scalability. Several previous approaches have implemented threading, events, or both, but increasing traffic requires new solutions for improved concurrent service. Node.js is a new web framework that achieves both through server-side JavaScript and eventdriven I/O. Tests will be performed against two comparable frameworks that compare service request times over a number of cores. The results will demonstrate the performance of JavaScript as a server-side language and the efficiency of the non-blocking asynchronous model.",
"title": ""
},
{
"docid": "a027c9dd3b4522cdf09a2238bfa4c37e",
"text": "Distributed word representations, or word vectors, have recently been applied to many tasks in natural language processing, leading to state-of-the-art performance. A key ingredient to the successful application of these representations is to train them on very large corpora, and use these pre-trained models in downstream tasks. In this paper, we describe how we trained such high quality word representations for 157 languages. We used two sources of data to train these models: the free online encyclopedia Wikipedia and data from the common crawl project. We also introduce three new word analogy datasets to evaluate these word vectors, for French, Hindi and Polish. Finally, we evaluate our pre-trained word vectors on 10 languages for which evaluation datasets exists, showing very strong performance compared to previous models.",
"title": ""
},
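The word-analogy evaluation mentioned in the preceding passage is typically scored as in the sketch below: answer "a is to b as c is to ?" by finding the vocabulary word whose vector is closest to vec(b) - vec(a) + vec(c). The tiny 3-dimensional vectors are made up purely for illustration and stand in for the 300-dimensional vectors described in the passage.

```python
import numpy as np

vectors = {
    "paris":   np.array([0.9, 0.1, 0.0]),
    "france":  np.array([0.8, 0.1, 0.5]),
    "berlin":  np.array([0.1, 0.9, 0.0]),
    "germany": np.array([0.0, 0.9, 0.5]),
    "europe":  np.array([0.4, 0.5, 0.6]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c):
    """Return the word whose vector is most similar to vec(b) - vec(a) + vec(c)."""
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("paris", "france", "berlin"))  # expected: "germany"
```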
{
"docid": "c78a4446be38b8fff2a949cba30a8b65",
"text": "This paper will derive the Black-Scholes pricing model of a European option by calculating the expected value of the option. We will assume that the stock price is log-normally distributed and that the universe is riskneutral. Then, using Ito’s Lemma, we will justify the use of the risk-neutral rate in these initial calculations. Finally, we will prove put-call parity in order to price European put options, and extend the concepts of the Black-Scholes formula to value an option with pricing barriers.",
"title": ""
},
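A compact implementation of the European call price that the preceding passage derives, plus the put price obtained via put-call parity as the passage describes, might look like the following sketch. Parameter names follow the usual convention (S spot, K strike, r risk-free rate, sigma volatility, T years to expiry), and the numeric check is a standard textbook case.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call option."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

def black_scholes_put(S, K, r, sigma, T):
    """Put price via put-call parity: P = C - S + K*exp(-rT)."""
    return black_scholes_call(S, K, r, sigma, T) - S + K * exp(-r * T)

print(round(black_scholes_call(100, 100, 0.05, 0.2, 1.0), 2))  # ~10.45
print(round(black_scholes_put(100, 100, 0.05, 0.2, 1.0), 2))   # ~5.57
```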
{
"docid": "d76e46eec2aa0abcbbd47b8270673efa",
"text": "OBJECTIVE\nTo explore the clinical efficacy and the mechanism of acupoint autohemotherapy in the treatment of allergic rhinitis.\n\n\nMETHODS\nForty-five cases were randomized into an autohemotherapy group (24 cases) and a western medication group (21 cases). In the autohemotherapy group, the acupoint autohemotherapy was applied to the bilateral Dingchuan (EX-B 1), Fengmen (BL 12), Feishu (BL 13), Quchi (LI 11), Zusanli (ST 36) and the others. In the western medication group, loratadine tablets were prescribed. The patients were treated continuously for 3 months in both groups. The clinical symptom score was taken for the assessment of clinical efficacy. The enzyme-linked immunoadsordent assay (ELISA) was adopted to determine the contents of serum interferon-gamma (IFN-gamma) and interleukin-12 (IL-12).\n\n\nRESULTS\nThe total effective rate was 83.3% (20/24) in the autohemotherapy group, which was obviously superior to 66.7% (14/21) in the western medication group (P < 0.05). After treatment, the clinical symptom scores of patients in the two groups were all reduced. The improvements in the scores of sneezing and clear nasal discharge in the autohemotherapy group were much more significant than those in the western medication group (both P < 0.05). After treatment, the serum IL-12 content of patients in the two groups was all increased to different extents as compared with that before treatment (both P < 0.05). In the autohemotherapy group, the serum IFN-gamma was increased after treatment (P < 0.05). In the western medication group, the serum IFN-gamma was not increased obviously after treatment (P > 0.05). The increase of the above index contents in the autohemotherapy group were more apparent than those in the western medication group (both P < 0.05).\n\n\nCONCLUSION\nThe acupoint autohemotherapy relieves significantly the clinical symptoms of allergic rhinitis and the therapeutic effect is better than that with oral administration of loratadine tablets, which is probably relevant with the increase of serum IL-12 content and the promotion of IFN-gamma synthesis.",
"title": ""
},
{
"docid": "0739c95aca9678b3c001c4d2eb92ec57",
"text": "The Image segmentation is referred to as one of the most important processes of image processing. Image segmentation is the technique of dividing or partitioning an image into parts, called segments. It is mostly useful for applications like image compression or object recognition, because for these types of applications, it is inefficient to process the whole image. So, image segmentation is used to segment the parts from image for further processing. There exist several image segmentation techniques, which partition the image into several parts based on certain image features like pixel intensity value, color, texture, etc. These all techniques are categorized based on the segmentation method used. In this paper the various image segmentation techniques are reviewed, discussed and finally a comparison of their advantages and disadvantages is listed.",
"title": ""
},
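One of the technique families surveyed in the preceding passage, intensity-threshold segmentation, can be sketched in a few lines. The image and threshold below are synthetic assumptions; real use would load an image and often pick the threshold automatically (e.g., with Otsu's method).

```python
import numpy as np

def threshold_segment(image, threshold):
    """Partition an image into two segments by pixel intensity."""
    return (image >= threshold).astype(np.uint8)  # 1 = foreground, 0 = background

# Synthetic 6x6 grayscale image with a bright square on a dark background.
img = np.zeros((6, 6), dtype=np.uint8)
img[2:5, 2:5] = 200

mask = threshold_segment(img, threshold=128)
print(mask)
print("foreground pixels:", int(mask.sum()))  # 9
```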
{
"docid": "92e50fc2351b4a05d573590f3ed05e81",
"text": "OBJECTIVE\nWe examined the effects of sensory-enhanced hatha yoga on symptoms of combat stress in deployed military personnel, compared their anxiety and sensory processing with that of stateside civilians, and identified any correlations between the State-Trait Anxiety Inventory scales and the Adolescent/Adult Sensory Profile quadrants.\n\n\nMETHOD\nSeventy military personnel who were deployed to Iraq participated in a randomized controlled trial. Thirty-five received 3 wk (≥9 sessions) of sensory-enhanced hatha yoga, and 35 did not receive any form of yoga.\n\n\nRESULTS\nSensory-enhanced hatha yoga was effective in reducing state and trait anxiety, despite normal pretest scores. Treatment participants showed significantly greater improvement than control participants on 16 of 18 mental health and quality-of-life factors. We found positive correlations between all test measures except sensory seeking. Sensory seeking was negatively correlated with all measures except low registration, which was insignificant.\n\n\nCONCLUSION\nThe results support using sensory-enhanced hatha yoga for proactive combat stress management.",
"title": ""
},
{
"docid": "0c842ef34f1924e899e408309f306640",
"text": "A single-tube 5' nuclease multiplex PCR assay was developed on the ABI 7700 Sequence Detection System (TaqMan) for the detection of Neisseria meningitidis, Haemophilus influenzae, and Streptococcus pneumoniae from clinical samples of cerebrospinal fluid (CSF), plasma, serum, and whole blood. Capsular transport (ctrA), capsulation (bexA), and pneumolysin (ply) gene targets specific for N. meningitidis, H. influenzae, and S. pneumoniae, respectively, were selected. Using sequence-specific fluorescent-dye-labeled probes and continuous real-time monitoring, accumulation of amplified product was measured. Sensitivity was assessed using clinical samples (CSF, serum, plasma, and whole blood) from culture-confirmed cases for the three organisms. The respective sensitivities (as percentages) for N. meningitidis, H. influenzae, and S. pneumoniae were 88.4, 100, and 91.8. The primer sets were 100% specific for the selected culture isolates. The ctrA primers amplified meningococcal serogroups A, B, C, 29E, W135, X, Y, and Z; the ply primers amplified pneumococcal serotypes 1, 2, 3, 4, 5, 6, 7, 8, 9, 10A, 11A, 12, 14, 15B, 17F, 18C, 19, 20, 22, 23, 24, 31, and 33; and the bexA primers amplified H. influenzae types b and c. Coamplification of two target genes without a loss of sensitivity was demonstrated. The multiplex assay was then used to test a large number (n = 4,113) of culture-negative samples for the three pathogens. Cases of meningococcal, H. influenzae, and pneumococcal disease that had not previously been confirmed by culture were identified with this assay. The ctrA primer set used in the multiplex PCR was found to be more sensitive (P < 0.0001) than the ctrA primers that had been used for meningococcal PCR testing at that time.",
"title": ""
},
{
"docid": "b8702cb8d18ae53664f3dfff95152764",
"text": "Word Sense Disambiguation is a longstanding task in Natural Language Processing, lying at the core of human language understanding. However, the evaluation of automatic systems has been problematic, mainly due to the lack of a reliable evaluation framework. In this paper we develop a unified evaluation framework and analyze the performance of various Word Sense Disambiguation systems in a fair setup. The results show that supervised systems clearly outperform knowledge-based models. Among the supervised systems, a linear classifier trained on conventional local features still proves to be a hard baseline to beat. Nonetheless, recent approaches exploiting neural networks on unlabeled corpora achieve promising results, surpassing this hard baseline in most test sets.",
"title": ""
},
{
"docid": "ae956d5e1182986505ff8b4de8b23777",
"text": "Device classification is important for many applications such as industrial quality controls, through-wall imaging, and network security. A novel approach to detection is proposed using a random noise radar (RNR), coupled with Radio Frequency “Distinct Native Attribute (RF-DNA)” fingerprinting processing algorithms to non-destructively interrogate microwave devices. RF-DNA has previously demonstrated “serial number” discrimination of passive Radio Frequency (RF) emissions such as Orthogonal Frequency Division Multiplexed (OFDM) signals, Worldwide Interoperability for Microwave Access (WiMAX) signals and others with classification accuracies above 80% using a Multiple Discriminant Analysis/Maximum Likelihood (MDAML) classifier. This approach proposes to couple the classification successes of the RF-DNA fingerprint processing with a non-destructive active interrogation waveform. An Ultra Wideband (UWB) noise waveform is uniquely suitable as an active interrogation method since it will not cause damage to sensitive microwave components and multiple RNRs can operate simultaneously in close proximity, allowing for significant parallelization of detection systems.",
"title": ""
},
{
"docid": "6b1c17b9c4462aebbe7f908f4c88381b",
"text": "This study examined neural activity associated with establishing causal relationships across sentences during on-line comprehension. ERPs were measured while participants read and judged the relatedness of three-sentence scenarios in which the final sentence was highly causally related, intermediately related, and causally unrelated to its context. Lexico-semantic co-occurrence was matched across the three conditions using a Latent Semantic Analysis. Critical words in causally unrelated scenarios evoked a larger N400 than words in both highly causally related and intermediately related scenarios, regardless of whether they appeared before or at the sentence-final position. At midline sites, the N400 to intermediately related sentence-final words was attenuated to the same degree as to highly causally related words, but otherwise the N400 to intermediately related words fell in between that evoked by highly causally related and intermediately related words. No modulation of the late positivity/P600 component was observed across conditions. These results indicate that both simple and complex causal inferences can influence the earliest stages of semantically processing an incoming word. Further, they suggest that causal coherence, at the situation level, can influence incremental word-by-word discourse comprehension, even when semantic relationships between individual words are matched.",
"title": ""
},
{
"docid": "7b13637b634b11b3061f7ebe0c64b3a6",
"text": "Analytical calculation methods for all the major components of the synchronous inductance of tooth-coil permanent-magnet synchronous machines are reevaluated in this paper. The inductance estimation is different in the tooth-coil machine compared with the one in the traditional rotating field winding machine. The accuracy of the analytical torque calculation highly depends on the estimated synchronous inductance. Despite powerful finite element method (FEM) tools, an accurate and fast analytical method is required at an early design stage to find an initial machine design structure with the desired performance. The results of the analytical inductance calculation are verified and assessed in terms of accuracy with the FEM simulation results and with the prototype measurement results.",
"title": ""
},
{
"docid": "84ba070a14da00c37de479e62e78f126",
"text": "The EEG (Electroencephalogram) signal indicates the electrical activity of the brain. They are highly random in nature and may contain useful information about the brain state. However, it is very difficult to get useful information from these signals directly in the time domain just by observing them. They are basically non-linear and nonstationary in nature. Hence, important features can be extracted for the diagnosis of different diseases using advanced signal processing techniques. In this paper the effect of different events on the EEG signal, and different signal processing methods used to extract the hidden information from the signal are discussed in detail. Linear, Frequency domain, time - frequency and non-linear techniques like correlation dimension (CD), largest Lyapunov exponent (LLE), Hurst exponent (H), different entropies, fractal dimension(FD), Higher Order Spectra (HOS), phase space plots and recurrence plots are discussed in detail using a typical normal EEG signal.",
"title": ""
},
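One of the non-linear features listed in the preceding passage, the Hurst exponent, can be estimated with a simple rescaled-range (R/S) procedure. The sketch below is a simplified, unvalidated illustration on synthetic signals rather than real EEG, and the window sizes are arbitrary choices.

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent via rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(x) - n + 1, n):
            chunk = x[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            r = dev.max() - dev.min()               # range
            s = chunk.std()                         # standard deviation
            if s > 0:
                rs_values.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_n, log_rs, 1)         # H = slope of log(R/S) vs log(n)
    return slope

rng = np.random.default_rng(0)
white_noise = rng.standard_normal(4096)          # uncorrelated signal
trendy = np.cumsum(rng.standard_normal(4096))    # strongly persistent signal
print(round(hurst_rs(white_noise), 2))  # about 0.5-0.6 (uncorrected R/S is biased slightly upward)
print(round(hurst_rs(trendy), 2))       # close to 1
```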
{
"docid": "b7da2182bbdf69c46ffba20b272fab02",
"text": "Social Media is playing a key role in today's society. Many of the events that are taking place in diverse human activities could be explained by the study of these data. Big Data is a relatively new parading in Computer Science that is gaining increasing interest by the scientific community. Big Data Predictive Analytics is a Big Data discipline that is mostly used to analyze what is in the huge amounts of data and then perform predictions based on such analysis using advanced mathematics and computing techniques. The study of Social Media Data involves disciplines like Natural Language Processing, by the integration of this area to academic studies, useful findings have been achieved. Social Network Rating Systems are online platforms that allow users to know about goods and services, the way in how users review and rate their experience is a field of evolving research. This paper presents a deep investigation in the state of the art of these areas to discover and analyze the current status of the research that has been developed so far by academics of diverse background.",
"title": ""
},
{
"docid": "b467763514576e3f37755fe0e18394c7",
"text": "T study of lactic acid (HLa) and muscular contraction has a long history, beginning perhaps as early as 1807 when Berzelius found HLa in muscular fluid and thought that ‘‘the amount of free lactic acid in a muscle [was] proportional to the extent to which the muscle had previously been exercised’’ (cited in ref. 1). Several subsequent studies in the 19th century established the view that HLa was a byproduct of metabolism under conditions of O2 limitation. For example, in 1891, Araki (cited in ref. 2) reported elevated HLa levels in the blood and urine of a variety of animals subjected to hypoxia. In the early part of the 20th century, Fletcher and Hopkins (3) found an accumulation of HLa in anoxia as well as after prolonged stimulation to fatigue in amphibian muscle in vitro. Subsequently, based on the work of Fletcher and Hopkins (3) as well as his own studies, Hill (and colleagues; ref. 4) postulated that HLa increased during muscular exercise because of a lack of O2 for the energy requirements of the contracting muscles. These studies laid the groundwork for the anaerobic threshold concept, which was introduced and detailed by Wasserman and colleagues in the 1960s and early 1970s (5–7). The basic anaerobic threshold paradigm is that elevated HLa production and concentration during muscular contractions or exercise are the result of cellular hypoxia. Table 1 summarizes the essential components of the anaerobic threshold concept. However, several studies during the past '30 years have presented evidence questioning the idea that O2 limitation is a prerequisite for HLa production and accumulation in muscle and blood. Jöbsis and Stainsby (8) stimulated the canine gastrocnemius in situ at a rate known to elicit peak twitch oxygen uptake (V̇O2) and high net HLa output. They (8) reasoned that if the HLA output was caused by O2-limited oxidative phosphorylation, then there should be an accompanying reduction of members of the respiratory chain, including the NADHyNAD1 pair. Instead, muscle surface fluorometry indicated NADHyNAD1 oxidation in comparison to the resting condition. Later, Connett and colleagues (9–11), by using myoglobin cryomicrospectroscopy in small volumes of dog gracilis muscle, were unable to find loci with a PO2 less than the critical PO2 for maximal mitochondrial ox idative phosphorylation (0.1– 0.5 mmHg) during muscle contractions resulting in HLa output and an increase in muscle HLa concentration. More recently, Richardson and colleagues (12) used proton magnetic resonance spectroscopy to determine myoglobin saturation (and thereby an estimate of intramuscular PO2) during progressive exercise in humans. They found that HLa efflux was unrelated to muscle cytoplasmic PO2 during normoxia. Although there are legitimate criticisms of these studies, they and many others of a related nature have led to alternative explanations for HLa production that do not involve O2 limitation. In the present issue of PNAS, two papers (13, 14) illustrate the dichotomous relationship between lactic acid and oxygen. First, Kemper and colleagues (13) add further evidence against O2 as the key regulator of HLa production. They (13) used a unique model, the rattlesnake tailshaker muscle complex, to study intracellular glycolysis during ischemia in comparison to HLa efflux during free flow conditions; in both protocols, the muscle complex was active and producing rattling. 
In their first experiment, rattling was induced for 29 s during ischemia resulting from blood pressure cuff inflation between the cloaca and tailshaker muscle complex. In a second experiment, measures were taken during 108 s of rattling with normal, spontaneous blood flow. In both experiments, 31P magnetic resonance spectroscopy permitted measurement of changes in muscle levels of PCr, ATP, Pi, and pH before, during, and after rattling. Based on previous methods established in their laboratory, Kemper and colleagues (13) estimated glycolytic f lux during the ischemic and aerobic rattling protocols. The result was that total glycolytic f lux was the same under both conditions! Kemper and colleagues (13) conclude that HLa generation does not necessarily ref lect O2 limitation. To be fair, there are potential limitations to the excellent paper by Kemper and colleagues (13). First, and most importantly, they studied muscle metabolism in the transition from rest to rattling (29 s during ischemia and 108 s during free flow). Some investigators argue that oxidative phosphorylation is limited by O2 delivery to the exercising muscles during this nonsteady-state transition even with spontaneous blood flow (for review, see ref. 15). This remains a matter of debate, and the role of O2 in the transition from rest to contractions may depend on the intensity of contractions (16, 17). Of course, it is possible that the role of O2 in the transition to rattling may be tempered by the high volume density of mitochondria and the high blood supply to this unique muscle complex (13, 18). Second, there could be significant early lactate production within the first seconds of the transition (19). Third, it would have been helpful to have measurements of intramuscular lactate and glycogen concentra-",
"title": ""
}
] |
scidocsrr
|
480c294d9d88c3aace8b12a9c0a1d89b
|
A typology of crowdfunding sponsors: Birds of a feather flock together?
|
[
{
"docid": "e267fe4d2d7aa74ded8988fcdbfb3474",
"text": "Consumers have recently begun to play a new role in some markets: that of providing capital and investment support to the offering. This phenomenon, called crowdfunding, is a collective effort by people who network and pool their money together, usually via the Internet, in order to invest in and support efforts initiated by other people or organizations. Successful service businesses that organize crowdfunding and act as intermediaries are emerging, attesting to the viability of this means of attracting investment. Employing a “Grounded Theory” approach, this paper performs an in-depth qualitative analysis of three cases involving crowdfunding initiatives: SellaBand in the music business, Trampoline in financial services, and Kapipal in non-profit services. These cases were selected to represent a diverse set of crowdfunding operations that vary in terms of risk/return for the investorconsumer and the type of consumer involvement. The analysis offers important insights about investor behaviour in crowdfunding service models, the potential determinants of such behaviour, and variations in behaviour and determinants across different service models. The findings have implications for service managers interested in launching and/or managing crowdfunding initiatives, and for service theory in terms of extending the consumer’s role from co-production and co-creation to investment.",
"title": ""
},
{
"docid": "8654b5134dadc076a6298526e60f66fb",
"text": "Ideas competitions appear to be a promising tool for crowdsourcing and open innovation processes, especially for business-to-business software companies. active participation of potential lead users is the key to success. Yet a look at existing ideas competitions in the software field leads to the conclusion that many information technology (It)–based ideas competitions fail to meet requirements upon which active participation is established. the paper describes how activation-enabling functionalities can be systematically designed and implemented in an It-based ideas competition for enterprise resource planning software. We proceeded to evaluate the outcomes of these design measures and found that participation can be supported using a two-step model. the components of the model support incentives and motives of users. Incentives and motives of the users then support the process of activation and consequently participation throughout the ideas competition. this contributes to the successful implementation and maintenance of the ideas competition, thereby providing support for the development of promising innovative ideas. the paper concludes with a discussion of further activation-supporting components yet to be implemented and points to rich possibilities for future research in these areas.",
"title": ""
}
] |
[
{
"docid": "41a0b9797c556368f84e2a05b80645f3",
"text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.",
"title": ""
},
{
"docid": "d2f2137602149b5062f60e7325d3610f",
"text": "Recently a revision of the cell theory has been proposed, which has several implications both for physiology and pathology. This revision is founded on adapting the old Julius von Sach’s proposal (1892) of the Energide as the fundamental universal unit of eukaryotic life. This view maintains that, in most instances, the living unit is the symbiotic assemblage of the cell periphery complex organized around the plasma membrane, some peripheral semi-autonomous cytosol organelles (as mitochondria and plastids, which may be or not be present), and of the Energide (formed by the nucleus, microtubules, and other satellite structures). A fundamental aspect is the proposal that the Energide plays a pivotal and organizing role of the entire symbiotic assemblage (see Appendix 1). The present paper discusses how the Energide paradigm implies a revision of the concept of the internal milieu. As a matter of fact, the Energide interacts with the cytoplasm that, in turn, interacts with the interstitial fluid, and hence with the medium that has been, classically, known as the internal milieu. Some implications of this aspect have been also presented with the help of a computational model in a mathematical Appendix 2 to the paper. Finally, relevances of the Energide concept for the information handling in the central nervous system are discussed especially in relation to the inter-Energide exchange of information.",
"title": ""
},
{
"docid": "d80580490ac7d968ff08c2a9ee159028",
"text": "Statistical relational AI (StarAI) aims at reasoning and learning in noisy domains described in terms of objects and relationships by combining probability with first-order logic. With huge advances in deep learning in the current years, combining deep networks with first-order logic has been the focus of several recent studies. Many of the existing attempts, however, only focus on relations and ignore object properties. The attempts that do consider object properties are limited in terms of modelling power or scalability. In this paper, we develop relational neural networks (RelNNs) by adding hidden layers to relational logistic regression (the relational counterpart of logistic regression). We learn latent properties for objects both directly and through general rules. Back-propagation is used for training these models. A modular, layer-wise architecture facilitates utilizing the techniques developed within deep learning community to our architecture. Initial experiments on eight tasks over three real-world datasets show that RelNNs are promising models for relational learning.",
"title": ""
},
{
"docid": "c9431b5a214dba08ca50706a27b2af7c",
"text": "For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forwards and backwards passes of the backpropogation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function. We demonstrate successful transfer learning; fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B, allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A. Positive transfer was demonstrated for binary MNIST, CIFAR, and SVHN supervised learning classification tasks, and a set of Atari and Labyrinth reinforcement learning tasks, suggesting PathNets have general applicability for neural network training. Finally, PathNet also significantly improves the robustness to hyperparameter choices of a parallel asynchronous reinforcement learning algorithm (A3C).",
"title": ""
},
{
"docid": "f83481aef8fc3f61a6ecbe3548c9bde2",
"text": "Establishing unique identities for both humans and end systems has been an active research problem in the security community, giving rise to innovative machine learning-based authentication techniques. Although such techniques offer an automated method to establish identity, they have not been vetted against sophisticated attacks that target their core machine learning technique. This paper demonstrates that mimicking the unique signatures generated by host fingerprinting and biometric authentication systems is possible. We expose the ineffectiveness of underlying machine learning classification models by constructing a blind attack based around the query synthesis framework and utilizing Explainable–AI (XAI) techniques. We launch an attack in under 130 queries on a state-of-the-art face authentication system, and under 100 queries on a host authentication system. We examine how these attacks can be defended against and explore their limitations. XAI provides an effective means for adversaries to infer decision boundaries and provides a new way forward in constructing attacks against systems using machine learning models for authentication.",
"title": ""
},
{
"docid": "e99343a0ab1eb9007df4610ae35dec97",
"text": "Who did what to whom is a major focus in natural language understanding, which is right the aim of semantic role labeling (SRL). Although SRL is naturally essential to text comprehension tasks, it is surprisingly ignored in previous work. This paper thus makes the first attempt to let SRL enhance text comprehension and inference through specifying verbal arguments and their corresponding semantic roles. In terms of deep learning models, our embeddings are enhanced by semantic role labels for more fine-grained semantics. We show that the salient labels can be conveniently added to existing models and significantly improve deep learning models in challenging text comprehension tasks. Extensive experiments on benchmark machine reading comprehension and inference datasets verify that the proposed semantic learning helps our system reach new state-of-the-art.",
"title": ""
},
{
"docid": "8856fa1c0650970da31fae67cd8dcd86",
"text": "In this paper, a new topology for rectangular waveguide bandpass and low-pass filters is presented. A simple, accurate, and robust design technique for these novel meandered waveguide filters is provided. The proposed filters employ a concatenation of ±90° $E$ -plane mitered bends (±90° EMBs) with different heights and lengths, whose dimensions are consecutively and independently calculated. Each ±90° EMB satisfies a local target reflection coefficient along the device so that they can be calculated separately. The novel structures allow drastically reduce the total length of the filters and embed bends if desired, or even to provide routing capabilities. Furthermore, the new meandered topology allows the introduction of transmission zeros above the passband of the low-pass filter, which can be controlled by the free parameters of the ±90° EMBs. A bandpass and a low-pass filter with meandered topology have been designed following the proposed novel technique. Measurements of the manufactured prototypes are also included to validate the novel topology and design technique, achieving excellent agreement with the simulation results.",
"title": ""
},
{
"docid": "f327ed315be7d47b9f63dd9498999ae4",
"text": "In this paper we propose a deep architecture for detecting people attributes (e.g. gender, race, clothing …) in surveillance contexts. Our proposal explicitly deal with poor resolution and occlusion issues that often occur in surveillance footages by enhancing the images by means of Deep Convolutional Generative Adversarial Networks (DCGAN). Experiments show that by combining both our Generative Reconstruction and Deep Attribute Classification Network we can effectively extract attributes even when resolution is poor and in presence of strong occlusions up to 80% of the whole person figure.",
"title": ""
},
{
"docid": "19a1a5d69037f0072f67c785031b0881",
"text": "In recent years, advances in the design of convolutional neural networks have resulted in signicant improvements on the image classication and object detection problems. One of the advances is networks built by stacking complex cells, seen in such networks as InceptionNet and NasNet. ese cells are either constructed by hand, generated by generative networks or discovered by search. Unlike conventional networks (where layers consist of a convolution block, sampling and non linear unit), the new cells feature more complex designs consisting of several lters and other operators connected in series and parallel. Recently, several cells have been proposed or generated that are supersets of previously proposed custom or generated cells. Inuenced by this, we introduce a network construction method based on EnvelopeNets. An EnvelopeNet is a deep convolutional neural network of stacked EnvelopeCells. EnvelopeCells are supersets (or envelopes) of previously proposed handcraed and generated cells. We propose a method to construct improved network architectures by restructuring EnvelopeNets. e algorithm restructures an EnvelopeNet by rearranging blocks in the network. It identies blocks to be restructured using metrics derived from the featuremaps collected during a partial training run of the EnvelopeNet. e method requires less computation resources to generate an architecture than an optimized architecture search over the entire search space of blocks. e restructured networks have higher accuracy on the image classication problem on a representative dataset than both the generating EnvelopeNet and an equivalent arbitrary network.",
"title": ""
},
{
"docid": "c8f3b235811dd64b9b1d35d596ff22f5",
"text": "Open domain response generation has achieved remarkable progress in recent years, but sometimes yields short and uninformative responses. We propose a new paradigm, prototypethen-edit for response generation, that first retrieves a prototype response from a pre-defined index and then edits the prototype response according to the differences between the prototype context and current context. Our motivation is that the retrieved prototype provides a good start-point for generation because it is grammatical and informative, and the post-editing process further improves the relevance and coherence of the prototype. In practice, we design a contextaware editing model that is built upon an encoder-decoder framework augmented with an editing vector. We first generate an edit vector by considering lexical differences between a prototype context and current context. After that, the edit vector and the prototype response representation are fed to a decoder to generate a new response. Experiment results on a large scale dataset demonstrate that our new paradigm significantly increases the relevance, diversity and originality of generation results, compared to traditional generative models. Furthermore, our model outperforms retrieval-based methods in terms of relevance and originality.",
"title": ""
},
{
"docid": "7709df997c72026406d257c85dacb271",
"text": "This paper addresses the task of document retrieval based on the degree of document relatedness to the meanings of a query by presenting a semantic-enabled language model. Our model relies on the use of semantic linking systems for forming a graph representation of documents and queries, where nodes represent concepts extracted from documents and edges represent semantic relatedness between concepts. Based on this graph, our model adopts a probabilistic reasoning model for calculating the conditional probability of a query concept given values assigned to document concepts. We present an integration framework for interpolating other retrieval systems with the presented model in this paper. Our empirical experiments on a number of TREC collections show that the semantic retrieval has a synergetic impact on the results obtained through state of the art keyword-based approaches, and the consideration of semantic information obtained from entity linking on queries and documents can complement and enhance the performance of other retrieval models.",
"title": ""
},
{
"docid": "2a443df82f61b198ceca472a7a080361",
"text": "Despite rapid technological advances in computer hardware and software, insecure behavior by individual computer users continues to be a significant source of direct cost and productivity loss. Why do individuals, many of whom are aware of the possible grave consequences of low-level insecure behaviors such as failure to backup work and disclosing passwords, continue to engage in unsafe computing practices? In this article we propose a conceptual model of this behavior as the outcome of a boundedly-rational choice process. We explore this model in a survey of undergraduate students (N = 167) at two large public universities. We asked about the frequency with which they engaged in five commonplace but unsafe computing practices, and probed their decision processes with regard to these practices. Although our respondents saw themselves as knowledgeable, competent users, and were broadly aware that serious consequences were quite likely to result, they reported frequent unsafe computing behaviors. We discuss the implications of these findings both for further research on risky computing practices and for training and enforcement policies that will be needed in the organizations these students will shortly be entering.",
"title": ""
},
{
"docid": "265b352775956004436b438574ee2d91",
"text": "In the fashion industry, demand forecasting is particularly complex: companies operate with a large variety of short lifecycle products, deeply influenced by seasonal sales, promotional events, weather conditions, advertising and marketing campaigns, on top of festivities and socio-economic factors. At the same time, shelf-out-of-stock phenomena must be avoided at all costs. Given the strong seasonal nature of the products that characterize the fashion sector, this paper aims to highlight how the Fourier method can represent an easy and more effective forecasting method compared to other widespread heuristics normally used. For this purpose, a comparison between the fast Fourier transform algorithm and another two techniques based on moving average and exponential smoothing was carried out on a set of 4year historical sales data of a €60+ million turnover mediumto large-sized Italian fashion company, which operates in the women’s textiles apparel and clothing sectors. The entire analysis was performed on a common spreadsheet, in order to demonstrate that accurate results exploiting advanced numerical computation techniques can be carried out without necessarily using expensive software.",
"title": ""
},
{
"docid": "903dc946b338c178634fcf9f14e1b1eb",
"text": "Detecting system anomalies is an important problem in many fields such as security, fault management, and industrial optimization. Recently, invariant network has shown to be powerful in characterizing complex system behaviours. In the invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. Structures and evolutions of the invariance network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detect causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible casual components, which have several limitations: (1) fault propagation in the network is ignored, (2) the root casual anomalies may not always be the nodes with a high percentage of vanishing correlations, (3) temporal patterns of vanishing correlations are not exploited for robust detection, and (4) prior knowledge on anomalous nodes are not exploited for (semi-)supervised detection. To address these limitations, in this article we propose a network diffusion based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network and can perform joint inference on both the structural and the time-evolving broken invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations and can compensate for unstructured measurement noise in the system. Moreover, when the prior knowledge on the anomalous status of some nodes are available at certain time points, our approach is able to leverage them to further enhance the anomaly inference accuracy. When the prior knowledge is noisy, our approach also automatically learns reliable information and reduces impacts from noises. By performing extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets, we demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "d59e64c1865193db3aaecc202f688690",
"text": "Event-related desynchronization/synchronization patterns during right/left motor imagery (MI) are effective features for an electroencephalogram-based brain-computer interface (BCI). As MI tasks are subject-specific, selection of subject-specific discriminative frequency components play a vital role in distinguishing these patterns. This paper proposes a new discriminative filter bank (FB) common spatial pattern algorithm to extract subject-specific FB for MI classification. The proposed method enhances the classification accuracy in BCI competition III dataset IVa and competition IV dataset IIb. Compared to the performance offered by the existing FB-based method, the proposed algorithm offers error rate reductions of 17.42% and 8.9% for BCI competition datasets III and IV, respectively.",
"title": ""
},
{
"docid": "70574bc8ad9fece3328ca77f17eec90f",
"text": "Five different proposed measures of similarity or semantic distance in WordNet were experimentally compared by examining their performance in a real-word spelling correction system. It was found that Jiang and Conrath’s measure gave the best results overall. That of Hirst and St-Onge seriously over-related, that of Resnik seriously under-related, and those of Lin and of Leacock and Chodorow fell in between.",
"title": ""
},
{
"docid": "7b13637b634b11b3061f7ebe0c64b3a6",
"text": "Analytical calculation methods for all the major components of the synchronous inductance of tooth-coil permanent-magnet synchronous machines are reevaluated in this paper. The inductance estimation is different in the tooth-coil machine compared with the one in the traditional rotating field winding machine. The accuracy of the analytical torque calculation highly depends on the estimated synchronous inductance. Despite powerful finite element method (FEM) tools, an accurate and fast analytical method is required at an early design stage to find an initial machine design structure with the desired performance. The results of the analytical inductance calculation are verified and assessed in terms of accuracy with the FEM simulation results and with the prototype measurement results.",
"title": ""
},
{
"docid": "f1681e1c8eef93f15adb5a4d7313c94c",
"text": "The paper investigates techniques for extracting data from HTML sites through the use of automatically generated wrappers. To automate the wrapper generation and the data extraction process, the paper develops a novel technique to compare HTML pages and generate a wrapper based on their similarities and differences. Experimental results on real-life data-intensive Web sites confirm the feasibility of the approach.",
"title": ""
},
{
"docid": "529ee26c337908488a5912835cc966c3",
"text": "Nucleic acids have emerged as powerful biological and nanotechnological tools. In biological and nanotechnological experiments, methods of extracting and purifying nucleic acids from various types of cells and their storage are critical for obtaining reproducible experimental results. In nanotechnological experiments, methods for regulating the conformational polymorphism of nucleic acids and increasing sequence selectivity for base pairing of nucleic acids are important for developing nucleic acid-based nanomaterials. However, dearth of media that foster favourable behaviour of nucleic acids has been a bottleneck for promoting the biology and nanotechnology using the nucleic acids. Ionic liquids (ILs) are solvents that may be potentially used for controlling the properties of the nucleic acids. Here, we review researches regarding the behaviour of nucleic acids in ILs. The efficiency of extraction and purification of nucleic acids from biological samples is increased by IL addition. Moreover, nucleic acids in ILs show long-term stability, which maintains their structures and enhances nuclease resistance. Nucleic acids in ILs can be used directly in polymerase chain reaction and gene expression analysis with high efficiency. Moreover, the stabilities of the nucleic acids for duplex, triplex, and quadruplex (G-quadruplex and i-motif) structures change drastically with IL cation-nucleic acid interactions. Highly sensitive DNA sensors have been developed based on the unique changes in the stability of nucleic acids in ILs. The behaviours of nucleic acids in ILs detailed here should be useful in the design of nucleic acids to use as biological and nanotechnological tools.",
"title": ""
}
] |
scidocsrr
|
872e08afa64afcdb8a0268c4fe1bc9ac
|
Byzantine Chain Replication
|
[
{
"docid": "e2b74db574db8001dace37cbecb8c4eb",
"text": "Distributed key-value stores are now a standard component of high-performance web services and cloud computing applications. While key-value stores offer significant performance and scalability advantages compared to traditional databases, they achieve these properties through a restricted API that limits object retrieval---an object can only be retrieved by the (primary and only) key under which it was inserted. This paper presents HyperDex, a novel distributed key-value store that provides a unique search primitive that enables queries on secondary attributes. The key insight behind HyperDex is the concept of hyperspace hashing in which objects with multiple attributes are mapped into a multidimensional hyperspace. This mapping leads to efficient implementations not only for retrieval by primary key, but also for partially-specified secondary attribute searches and range queries. A novel chaining protocol enables the system to achieve strong consistency, maintain availability and guarantee fault tolerance. An evaluation of the full system shows that HyperDex is 12-13x faster than Cassandra and MongoDB for finding partially specified objects. Additionally, HyperDex achieves 2-4x higher throughput for get/put operations.",
"title": ""
}
] |
[
{
"docid": "434ec0510dc38ea2e7effabe8090d4ce",
"text": "Purpose: Big data analytics (BDA) increasingly provide value to firms for robust decision making and solving business problems. The purpose of this paper is to explore information quality dynamics in big data environment linking business value, user satisfaction and firm performance. Design/methodology/approach: Drawing on the appraisal-emotional response-coping framework, the authors propose a theory on information quality dynamics that helps in achieving business value, user satisfaction and firm performance with big data strategy and implementation. Information quality from BDA is conceptualized as the antecedent to the emotional response (e.g. value and satisfaction) and coping (performance). Proposed information quality dynamics are tested using data collected from 302 business analysts across various organizations in France and the USA. Findings: The findings suggest that information quality in BDA reflects four significant dimensions: completeness, currency, format and accuracy. The overall information quality has significant, positive impact on firm performance which is mediated by business value (e.g. transactional, strategic and transformational) and user satisfaction. Research limitations/implications: On the one hand, this paper shows how to operationalize information quality, business value, satisfaction and firm performance in BDA using PLS-SEM. On the other hand, it proposes an REBUS-PLS algorithm to automatically detect three groups of users sharing the same behaviors when determining the information quality perceptions of BDA. Practical implications: The study offers a set of determinants for information quality and business value in BDA projects, in order to support managers in their decision to enhance user satisfaction and firm performance. Originality/value: The paper extends big data literature by offering an appraisal-emotional response-coping framework that is well fitted for information quality modeling on firm performance. The methodological novelty lies in embracing REBUS-PLS to handle unobserved heterogeneity in the sample. Disciplines Business Publication Details Fosso Wamba, S., Akter, S., Trinchera, L. & De Bourmont, M. (2018). Turning information quality into firm performance in the big data economy. Management Decision, Online First 1-28. This journal article is available at Research Online: https://ro.uow.edu.au/gsbpapers/536",
"title": ""
},
{
"docid": "fb099587aea7f8090a4b8fd8fc2d72df",
"text": "This paper provides a review of explanations, visualizations and interactive elements of user interfaces (UI) in music recommendation systems. We call these UI features “recommendation aids”. Explanations are elements of the interface that inform the user why a certain recommendation was made. We highlight six possible goals for explanations, resulting in overall satisfaction towards the system. We found that the most of the existing music recommenders of popular systems provide no explanations, or very limited ones. Since explanations are not independent of other UI elements in recommendation process, we consider how the other elements can be used to achieve the same goals. To this end, we evaluated several existing music recommenders. We wanted to discover which of the six goals (transparency, scrutability, effectiveness, persuasiveness, efficiency and trust) the different UI elements promote in the existing music recommenders, and how they could be measured in order to create a simple framework for evaluating recommender UIs. By using this framework designers of recommendation systems could promote users’ trust and overall satisfaction towards a recommender system thereby improving the user experience with the system.",
"title": ""
},
{
"docid": "914daf0fd51e135d6d964ecbe89a5b29",
"text": "Large-scale parallel programming environments and algorithms require efficient group-communication on computing systems with failing nodes. Existing reliable broadcast algorithms either cannot guarantee that all nodes are reached or are very expensive in terms of the number of messages and latency. This paper proposes Corrected-Gossip, a method that combines Monte Carlo style gossiping with a deterministic correction phase, to construct a Las Vegas style reliable broadcast that guarantees reaching all the nodes at low cost. We analyze the performance of this method both analytically and by simulations and show how it reduces the latency and network load compared to existing algorithms. Our method improves the latency by 20% and the network load by 53% compared to the fastest known algorithm on 4,096 nodes. We believe that the principle of corrected-gossip opens an avenue for many other reliable group communication operations.",
"title": ""
},
{
"docid": "5ca1c503cba0db452d0e5969e678db97",
"text": "Deep neural network models have recently achieved state-of-the-art performance gains in a variety of natural language processing (NLP) tasks (Young, Hazarika, Poria, & Cambria, 2017). However, these gains rely on the availability of large amounts of annotated examples, without which state-of-the-art performance is rarely achievable. This is especially inconvenient for the many NLP fields where annotated examples are scarce, such as medical text. To improve NLP models in this situation, we evaluate five improvements on named entity recognition (NER) tasks when only ten annotated examples are available: (1) layer-wise initialization with pre-trained weights, (2) hyperparameter tuning, (3) combining pre-training data, (4) custom word embeddings, and (5) optimizing out-of-vocabulary (OOV) words. Experimental results show that the F1 score of 69.3% achievable by state-of-the-art models can be improved to 78.87%.",
"title": ""
},
{
"docid": "289005e2f4d666a606f7dfd9c8f7a1f4",
"text": "In this paper we present the design of a fin-like dielectric elastomer actuator (DEA) that drives a miniature autonomous underwater vehicle (AUV). The fin-like actuator is modular and independent of the body of the AUV. All electronics required to run the actuator are inside the 100 mm long 3D-printed body, allowing for autonomous mobility of the AUV. The DEA is easy to manufacture, requires no pre-stretch of the elastomers, and is completely sealed for underwater operation. The output thrust force can be tuned by stacking multiple actuation layers and modifying the Young's modulus of the elastomers. The AUV is reconfigurable by a shift of its center of mass, such that both planar and vertical swimming can be demonstrated on a single vehicle. For the DEA we measured thrust force and swimming speed for various actuator designs ran at frequencies from 1 Hz to 5 Hz. For the AUV we demonstrated autonomous planar swimming and closed-loop vertical diving. The actuators capable of outputting the highest thrust forces can power the AUV to swim at speeds of up to 0.55 body lengths per second. The speed falls in the upper range of untethered swimming robots powered by soft actuators. Our tunable DEAs also demonstrate the potential to mimic the undulatory motions of fish fins.",
"title": ""
},
{
"docid": "313a902049654e951860b9225dc5f4e8",
"text": "Financial portfolio management is the process of constant redistribution of a fund into different financial products. This paper presents a financial-model-free Reinforcement Learning framework to provide a deep machine learning solution to the portfolio management problem. The framework consists of the Ensemble of Identical Independent Evaluators (EIIE) topology, a Portfolio-Vector Memory (PVM), an Online Stochastic Batch Learning (OSBL) scheme, and a fully exploiting and explicit reward function. This framework is realized in three instants in this work with a Convolutional Neural Network (CNN), a basic Recurrent Neural Network (RNN), and a Long Short-Term Memory (LSTM). They are, along with a number of recently reviewed or published portfolio-selection strategies, examined in three back-test experiments with a trading period of 30 minutes in a cryptocurrency market. Cryptocurrencies are electronic and decentralized alternatives to government-issued money, with Bitcoin as the best-known example of a cryptocurrency. All three instances of the framework monopolize the top three positions in all experiments, outdistancing other compared trading algorithms. Although with a high commission rate of 0.25% in the backtests, the framework is able to achieve at least 4-fold returns in 50 days.",
"title": ""
},
{
"docid": "066b4130dbc9c36d244e5da88936dfc4",
"text": "Real-time strategy (RTS) games have drawn great attention in the AI research community, for they offer a challenging and rich testbed for both machine learning and AI techniques. Due to their enormous state spaces and possible map configurations, learning good and generalizable representations for machine learning is crucial to build agents that can perform well in complex RTS games. In this paper we present a convolutional neural network approach to learn an evaluation function that focuses on learning general features that are independent of the map configuration or size. We first train and evaluate the network on a winner prediction task on a dataset collected with a small set of maps with a fixed size. Then we evaluate the network’s generalizability to three set of larger maps. by using it as an evaluation function in the context of Monte Carlo Tree Search. Our results show that the presented architecture can successfully capture general and map-independent features applicable to more complex RTS situations.",
"title": ""
},
{
"docid": "6fdd045448a1425ec1b9ac5d9bca9fa0",
"text": "Fluorescence has been observed directly across the band gap of semiconducting carbon nanotubes. We obtained individual nanotubes, each encased in a cylindrical micelle, by ultrasonically agitating an aqueous dispersion of raw single-walled carbon nanotubes in sodium dodecyl sulfate and then centrifuging to remove tube bundles, ropes, and residual catalyst. Aggregation of nanotubes into bundles otherwise quenches the fluorescence through interactions with metallic tubes and substantially broadens the absorption spectra. At pH less than 5, the absorption and emission spectra of individual nanotubes show evidence of band gap-selective protonation of the side walls of the tube. This protonation is readily reversed by treatment with base or ultraviolet light.",
"title": ""
},
{
"docid": "794ad922f93b85e2195b3c85665a8202",
"text": "The paper shows how to create a probabilistic graph for WordNet. A node is created for every word and phrase in WordNet. An edge between two nodes is labeled with the probability that a user that is interested in the source concept will also be interested in the destination concept. For example, an edge with weight 0.3 between \"canine\" and \"dog\" indicates that there is a 30% probability that a user who searches for \"canine\" will be interested in results that contain the word \"dog\". We refer to the graph as probabilistic because we enforce the constraint that the sum of the weights of all the edges that go out of a node add up to one. Structural (e.g., the word \"canine\" is a hypernym (i.e., kind of) of the word \"dog\") and textual (e.g., the word \"canine\" appears in the textual definition of the word \"dog\") data from WordNet is used to create a Markov logic network, that is, a set of first order formulas with probabilities. The Markov logic network is then used to compute the weights of the edges in the probabilistic graph. We experimentally validate the quality of the data in the probabilistic graph on two independent benchmarks: Miller and Charles and WordSimilarity-353.",
"title": ""
},
{
"docid": "c4f9b3c863323efd6eca0074c296addf",
"text": "Lip reading, the ability to recognize text information from the movement of a speaker’s mouth, is a difficult and challenging task. Recently, the end-to-end model that maps a variable-length sequence of video frames to text performs poorly in real life situation where people unintentionally move the lips instead of speaking. The goal of this work is to improve the performance of lip reading task in real life. The model proposed in this article consists of two networks that are visual to audio feature network and audio feature to text network. Our experiments showed that the model proposed in this article can achieve 92.76% accuracy in lip reading task on the dataset that the unintentional lips movement was added.",
"title": ""
},
{
"docid": "9b08be9d250822850fda92819774248e",
"text": "In recent years, recommendation systems have been widely used in various commercial platforms to provide recommendations for users. Collaborative filtering algorithms are one of the main algorithms used in recommendation systems. Such algorithms are simple and efficient; however, the sparsity of the data and the scalability of the method limit the performance of these algorithms, and it is difficult to further improve the quality of the recommendation results. Therefore, a model combining a collaborative filtering recommendation algorithm with deep learning technology is proposed, therein consisting of two parts. First, the model uses a feature representation method based on a quadric polynomial regression model, which obtains the latent features more accurately by improving upon the traditional matrix factorization algorithm. Then, these latent features are regarded as the input data of the deep neural network model, which is the second part of the proposed model and is used to predict the rating scores. Finally, by comparing with other recommendation algorithms on three public datasets, it is verified that the recommendation performance can be effectively improved by our model.",
"title": ""
},
{
"docid": "5bf3c1f19c368c1948db91bbd65da84b",
"text": "As robot reasoning becomes more complex, debugging becomes increasingly hard based solely on observable behaviour, even for robot designers and technical specialists. Similarly, nonspecialist users find it hard to create useful mental models of robot reasoning solely from observed behaviour. The EPSRC Principles of Robotics mandate that our artefacts should be transparent, but what does this mean in practice, and how does transparency affect both trust and utility? We investigate this relationship in the literature and find it to be complex, particularly in non industrial environments where transparency may have a wider range of effects on trust and utility depending on the application and purpose of the robot. We outline our programme of research to support our assertion that it is nevertheless possible to create transparent agents that are emotionally engaging despite having a transparent machine nature.",
"title": ""
},
{
"docid": "2089349f4f1dae4d07dfec8481ba748e",
"text": "A signiicant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, Trepan, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the concept represented by a given network. Our experiments demonstrate that Trepan is able to produce decision trees that maintain a high level of delity to their respective networks while being com-prehensible and accurate. Unlike previous work in this area, our algorithm is general in its applicability and scales well to large networks and problems with high-dimensional input spaces.",
"title": ""
},
{
"docid": "647ede4f066516a0343acef725e51d01",
"text": "This work proposes a dual-polarized planar antenna; two post-wall slotted waveguide arrays with orthogonal 45/spl deg/ linearly-polarized waves interdigitally share the aperture on a single layer substrate. Uniform excitation of the two-dimensional slot array is confirmed by experiment in the 25 GHz band. The isolation between two slot arrays is also investigated in terms of the relative displacement along the radiation waveguide axis in the interdigital structure. The isolation is 33.0 dB when the relative shift of slot position between the two arrays is -0.5/spl lambda//sub g/, while it is only 12.8 dB when there is no shift. The cross-polarization level in the far field is -25.2 dB for a -0.5/spl lambda//sub g/ shift, which is almost equal to that of the isolated single polarization array. It is degraded down to -9.6 dB when there is no shift.",
"title": ""
},
{
"docid": "038637eebbf8474bf15dab1c9a81ed6d",
"text": "As the surplus market of failure analysis equipment continues to grow, the cost of performing invasive IC analysis continues to diminish. Hardware vendors in high-security applications utilize security by obscurity to implement layers of protection on their devices. High-security applications must assume that the attacker is skillful, well-equipped and well-funded. Modern security ICs are designed to make readout of decrypted data and changes to security configuration of the device impossible. Countermeasures such as meshes and attack sensors thwart many state of the art attacks. Because of the perceived difficulty and lack of publicly known attacks, the IC backside has largely been ignored by the security community. However, the backside is currently the weakest link in modern ICs because no devices currently on the market are protected against fully-invasive attacks through the IC backside. Fully-invasive backside attacks circumvent all known countermeasures utilized by modern implementations. In this work, we demonstrate the first two practical fully-invasive attacks against the IC backside. Our first attack is fully-invasive backside microprobing. Using this attack we were able to capture decrypted data directly from the data bus of the target IC's CPU core. We also present a fully invasive backside circuit edit. With this attack we were able to set security and configuration fuses of the device to arbitrary values.",
"title": ""
},
{
"docid": "0cce6366df945f079dbb0b90d79b790e",
"text": "Fourier ptychographic microscopy (FPM) is a recently developed imaging modality that uses angularly varying illumination to extend a system's performance beyond the limit defined by its optical components. The FPM technique applies a novel phase-retrieval procedure to achieve resolution enhancement and complex image recovery. In this Letter, we compare FPM data to theoretical prediction and phase-shifting digital holography measurement to show that its acquired phase maps are quantitative and artifact-free. We additionally explore the relationship between the achievable spatial and optical thickness resolution offered by a reconstructed FPM phase image. We conclude by demonstrating enhanced visualization and the collection of otherwise unobservable sample information using FPM's quantitative phase.",
"title": ""
},
{
"docid": "d4869ee3fbc997f865cc16e9e1200d0b",
"text": "The potential of mathematical models is widely acknowledged for examining components and interactions of natural systems, estimating the changes and uncertainties on outcomes, and fostering communication between scientists with different backgrounds and between scientists, managers and the community. For favourable reception of models, a systematic accrual of a good knowledge base is crucial for both science and decision-making. As the roles of models grow in importance, there is an increase in the need for appropriate methods with which to test their quality and performance. For biophysical models, the heterogeneity of data and the range of factors influencing usefulness of their outputs often make it difficult for full analysis and assessment. As a result, modelling studies in the domain of natural sciences often lack elements of good modelling practice related to model validation, that is correspondence of models to its intended purpose. Here we review validation issues and methods currently available for assessing the quality of biophysical models. The review covers issues of validation purpose, the robustness of model results, data quality, model prediction and model complexity. The importance of assessing input data quality and interpretation of phenomena is also addressed. Details are then provided on the range of measures commonly used for validation. Requirements for a methodology for assessment during the entire model-cycle are synthesised. Examples are used from a variety of modelling studies which mainly include agronomic modelling, e.g. crop growth and development, climatic modelling, e.g. climate scenarios, and hydrological modelling, e.g. soil hydrology, but the principles are essentially applicable to any area. It is shown that conducting detailed validation requires multi-faceted knowledge, and poses substantial scientific and technical challenges. Special emphasis is placed on using combined multiple statistics to expand our horizons in validation whilst also tailoring the validation requirements to the specific objectives of the application.",
"title": ""
},
{
"docid": "22160219ffa40e4e42f1519fe25ecb6a",
"text": "We propose a new prior distribution for classical (non-hierarchical) logistic regression models, constructed by first scaling all nonbinary variables to have mean 0 and standard deviation 0.5, and then placing independent Student-t prior distributions on the coefficients. As a default choice, we recommend the Cauchy distribution with center 0 and scale 2.5, which in the simplest setting is a longer-tailed version of the distribution attained by assuming one-half additional success and one-half additional failure in a logistic regression. Cross-validation on a corpus of datasets shows the Cauchy class of prior distributions to outperform existing implementations of Gaussian and Laplace priors. We recommend this prior distribution as a default choice for routine applied use. It has the advantage of always giving answers, even when there is complete separation in logistic regression (a common problem, even when the sample size is large and the number of predictors is small) and also automatically applying more shrinkage to higherorder interactions. This can be useful in routine data analysis as well as in automated procedures such as chained equations for missing-data imputation. We implement a procedure to fit generalized linear models in R with the Student-t prior distribution by incorporating an approximate EM algorithm into the usual iteratively weighted least squares. We illustrate with several applications, including a series of logistic regressions predicting voting preferences, a small bioassay experiment, and an imputation model for a public health data set.",
"title": ""
},
{
"docid": "0824992bb506ac7c8a631664bf608086",
"text": "There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic image and low-resolution multispectral images. Starting from the physical principle of image formation, this paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing image fusion methods. Using the GIF method, it is shown that the pixel values of the high-resolution multispectral images are determined by the corresponding pixel values of the low-resolution panchromatic image, the approximation of the high-resolution panchromatic image at the low-resolution level. Many of the existing image fusion methods, including, but not limited to, intensity-hue-saturation, Brovey transform, principal component analysis, high-pass filtering, high-pass modulation, the a/spl grave/ trous algorithm-based wavelet transform, and multiresolution analysis-based intensity modulation (MRAIM), are evaluated and found to be particular cases of the GIF method. The performance of each image fusion method is theoretically analyzed based on how the corresponding low-resolution panchromatic image is computed and how the modulation coefficients are set. An experiment based on IKONOS images shows that there is consistency between the theoretical analysis and the experimental results and that the MRAIM method synthesizes the images closest to those the corresponding multisensors would observe at the high-resolution level.",
"title": ""
},
{
"docid": "5995a2775a6a10cf4f2bd74a2959935d",
"text": "Artemisinin-based combination therapy is recommended to treat Plasmodium falciparum worldwide, but observations of longer artemisinin (ART) parasite clearance times (PCTs) in Southeast Asia are widely interpreted as a sign of potential ART resistance. In search of an in vitro correlate of in vivo PCT after ART treatment, a ring-stage survival assay (RSA) of 0–3 h parasites was developed and linked to polymorphisms in the Kelch propeller protein (K13). However, RSA remains a laborious process, involving heparin, Percoll gradient, and sorbitol treatments to obtain rings in the 0–3 h window. Here two alternative RSA protocols are presented and compared to the standard Percoll-based method, one highly stage-specific and one streamlined for laboratory application. For all protocols, P. falciparum cultures were synchronized with 5 % sorbitol treatment twice over two intra-erythrocytic cycles. For a filtration-based RSA, late-stage schizonts were passed through a 1.2 μm filter to isolate merozoites, which were incubated with uninfected erythrocytes for 45 min. The erythrocytes were then washed to remove lysis products and further incubated until 3 h post-filtration. Parasites were pulsed with either 0.1 % dimethyl sulfoxide (DMSO) or 700 nM dihydroartemisinin in 0.1 % DMSO for 6 h, washed twice in drug-free media, and incubated for 66–90 h, when survival was assessed by microscopy. For a sorbitol-only RSA, synchronized young (0–3 h) rings were treated with 5 % sorbitol once more prior to the assay and adjusted to 1 % parasitaemia. The drug pulse, incubation, and survival assessment were as described above. Ring-stage survival of P. falciparum parasites containing either the K13 C580 or C580Y polymorphism (associated with low and high RSA survival, respectively) were assessed by the described filtration and sorbitol-only methods and produced comparable results to the reported Percoll gradient RSA. Advantages of both new methods include: fewer reagents, decreased time investment, and fewer procedural steps, with enhanced stage-specificity conferred by the filtration method. Assessing P. falciparum ART sensitivity in vitro via RSA can be streamlined and accurately evaluated in the laboratory by filtration or sorbitol synchronization methods, thus increasing the accessibility of the assay to research groups.",
"title": ""
}
] |
scidocsrr
|
64330d665d11d79b3ab1fa880ebde586
|
Liveness Detection Using Gaze Collinearity
|
[
{
"docid": "2e3f05ee44b276b51c1b449e4a62af94",
"text": "We make some simple extensions to the Active Shape Model of Cootes et al. [4], and use it to locate features in frontal views of upright faces. We show on independent test data that with the extensions the Active Shape Model compares favorably with more sophisticated methods. The extensions are (i) fitting more landmarks than are actually needed (ii) selectively using twoinstead of one-dimensional landmark templates (iii) adding noise to the training set (iv) relaxing the shape model where advantageous (v) trimming covariance matrices by setting most entries to zero, and (vi) stacking two Active Shape Models in series.",
"title": ""
},
{
"docid": "fe33ff51ca55bf745bdcdf8ee02e2d36",
"text": "A robust face detection technique along with mouth localization, processing every frame in real time (video rate), is presented. Moreover, it is exploited for motion analysis onsite to verify \"liveness\" as well as to achieve lip reading of digits. A methodological novelty is the suggested quantized angle features (\"quangles\") being designed for illumination invariance without the need for preprocessing (e.g., histogram equalization). This is achieved by using both the gradient direction and the double angle direction (the structure tensor angle), and by ignoring the magnitude of the gradient. Boosting techniques are applied in a quantized feature space. A major benefit is reduced processing time (i.e., that the training of effective cascaded classifiers is feasible in very short time, less than 1 h for data sets of order 104). Scale invariance is implemented through the use of an image scale pyramid. We propose \"liveness\" verification barriers as applications for which a significant amount of computation is avoided when estimating motion. Novel strategies to avert advanced spoofing attempts (e.g., replayed videos which include person utterances) are demonstrated. We present favorable results on face detection for the YALE face test set and competitive results for the CMU-MIT frontal face test set as well as on \"liveness\" verification barriers.",
"title": ""
},
{
"docid": "b40129a15767189a7a595db89c066cf8",
"text": "To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. In this paper, we propose a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. Keywords—Liveness Detection, Eye detection, SQI.",
"title": ""
}
] |
[
{
"docid": "0a08814f1f5f5f489f756df9ad5be051",
"text": "Jump height is a critical aspect of volleyball players' blocking and attacking performance. Although previous studies demonstrated that creatine monohydrate supplementation (CrMS) improves jumping performance, none have yet evaluated its effect among volleyball players with proficient jumping skills. We examined the effect of 4 wk of CrMS on 1 RM spike jump (SJ) and repeated block jump (BJ) performance among 12 elite males of the Sherbrooke University volleyball team. Using a parallel, randomized, double-blind protocol, participants were supplemented with a placebo or creatine solution for 28 d, at a dose of 20 g/d in days 1-4, 10 g/d on days 5-6, and 5 g/d on days 7-28. Pre- and postsupplementation, subjects performed the 1 RM SJ test, followed by the repeated BJ test (10 series of 10 BJs; 3 s interval between jumps; 2 min recovery between series). Due to injuries (N = 2) and outlier data (N = 2), results are reported for eight subjects. Following supplementation, both groups improved SJ and repeated BJ performance. The change in performance during the 1 RM SJ test and over the first two repeated BJ series was unclear between groups. For series 3-6 and 7-10, respectively, CrMS further improved repeated BJ performance by 2.8% (likely beneficial change) and 1.9% (possibly beneficial change), compared with the placebo. Percent repeated BJ decline in performance across the 10 series did not differ between groups pre- and postsupplementation. In conclusion, CrMS likely improved repeated BJ height capability without influencing the magnitude of muscular fatigue in these elite, university-level volleyball players.",
"title": ""
},
{
"docid": "162ad6b8d48f5d6c76067d25b320a947",
"text": "Image Understanding is fundamental to systems that need to extract contents and infer concepts from images. In this paper, we develop an architecture for understanding images, through which a system can recognize the content and the underlying concepts of an image and, reason and answer questions about both using a visual module, a reasoning module, and a commonsense knowledge base. In this architecture, visual data combines with background knowledge and; iterates through visual and reasoning modules to answer questions about an image or to generate a textual description of an image. We first provide motivations of such a Deep Image Understanding architecture and then, we describe the necessary components it should include. We also introduce our own preliminary implementation of this architecture and empirically show how this more generic implementation compares with a recent end-to-end Neural approach on specific applications. We address the knowledge-representation challenge in such an architecture by representing an image using a directed labeled graph (called Scene Description Graph). Our implementation uses generic visual recognition techniques and commonsense reasoning1 to extract such graphs from images. Our experiments show that the extracted graphs capture the syntactic and semantic content of an image with reasonable accuracy.",
"title": ""
},
{
"docid": "9c98685d50238cebb1e23e00201f8c09",
"text": "A frequently asked questions (FAQ) retrieval system improves the access to information by allowing users to pose natural language queries over an FAQ collection. From an information retrieval perspective, FAQ retrieval is a challenging task, mainly because of the lexical gap that exists between a query and an FAQ pair, both of which are typically very short. In this work, we explore the use of supervised learning to rank to improve the performance of domain-specific FAQ retrieval. While supervised learning-to-rank models have been shown to yield effective retrieval performance, they require costly human-labeled training data in the form of document relevance judgments or question paraphrases. We investigate how this labeling effort can be reduced using a labeling strategy geared toward the manual creation of query paraphrases rather than the more time-consuming relevance judgments. In particular, we investigate two such strategies, and test them by applying supervised ranking models to two domain-specific FAQ retrieval data sets, showcasing typical FAQ retrieval scenarios. Our experiments show that supervised ranking models can yield significant improvements in the precision-at-rank-5 measure compared to unsupervised baselines. Furthermore, we show that a supervised model trained using data labeled via a low-effort paraphrase-focused strategy has the same performance as that of the same model trained using fully labeled data, indicating that the strategy is effective at reducing the labeling effort while retaining the performance gains of the supervised approach. To encourage further research on FAQ retrieval we make our FAQ retrieval data set publicly available.",
"title": ""
},
{
"docid": "3196b8017cfb9a8cbfef0e892c508d05",
"text": "The nuclear envelope is a physical barrier that isolates the cellular DNA from the rest of the cell, thereby limiting pathogen invasion. The Human Immunodeficiency Virus (HIV) has a remarkable ability to enter the nucleus of non-dividing target cells such as lymphocytes, macrophages and dendritic cells. While this step is critical for replication of the virus, it remains one of the less understood aspects of HIV infection. Here, we review the viral and host factors that favor or inhibit HIV entry into the nucleus, including the viral capsid, integrase, the central viral DNA flap, and the host proteins CPSF6, TNPO3, Nucleoporins, SUN1, SUN2, Cyclophilin A and MX2. We review recent perspectives on the mechanism of action of these factors, and formulate fundamental questions that remain. Overall, these findings deepen our understanding of HIV nuclear import and strengthen the favorable position of nuclear HIV entry for antiviral targeting.",
"title": ""
},
{
"docid": "faea285dfac31a520e23c0a3ee06cea6",
"text": "Since 2006, Alberts and Dorofee have led MSCE with a focus on returning risk management to its original intent—supporting effective management decisions that lead to program success. They began rethinking the traditional approaches to risk management, which led to the development of SEI Mosaic, a suite of methodologies that approach managing risk from a systemic view across the life cycle and supply chain. Using a systemic risk management approach enables program managers to develop and implement strategic, high-leverage mitigation solutions that align with mission and objectives.",
"title": ""
},
{
"docid": "d470122d50dbb118ae9f3068998f8e14",
"text": "Tumor heterogeneity presents a challenge for inferring clonal evolution and driver gene identification. Here, we describe a method for analyzing the cancer genome at a single-cell nucleotide level. To perform our analyses, we first devised and validated a high-throughput whole-genome single-cell sequencing method using two lymphoblastoid cell line single cells. We then carried out whole-exome single-cell sequencing of 90 cells from a JAK2-negative myeloproliferative neoplasm patient. The sequencing data from 58 cells passed our quality control criteria, and these data indicated that this neoplasm represented a monoclonal evolution. We further identified essential thrombocythemia (ET)-related candidate mutations such as SESN2 and NTRK1, which may be involved in neoplasm progression. This pilot study allowed the initial characterization of the disease-related genetic architecture at the single-cell nucleotide level. Further, we established a single-cell sequencing method that opens the way for detailed analyses of a variety of tumor types, including those with high genetic complex between patients.",
"title": ""
},
{
"docid": "b09cacfb35cd02f6a5345c206347c6ae",
"text": "Facebook, as one of the most popular social networking sites among college students, provides a platform for people to manage others' impressions of them. People tend to present themselves in a favorable way on their Facebook profile. This research examines the impact of using Facebook on people's perceptions of others' lives. It is argued that those with deeper involvement with Facebook will have different perceptions of others than those less involved due to two reasons. First, Facebook users tend to base judgment on examples easily recalled (the availability heuristic). Second, Facebook users tend to attribute the positive content presented on Facebook to others' personality, rather than situational factors (correspondence bias), especially for those they do not know personally. Questionnaires, including items measuring years of using Facebook, time spent on Facebook each week, number of people listed as their Facebook \"friends,\" and perceptions about others' lives, were completed by 425 undergraduate students taking classes across various academic disciplines at a state university in Utah. Surveys were collected during regular class period, except for two online classes where surveys were submitted online. The multivariate analysis indicated that those who have used Facebook longer agreed more that others were happier, and agreed less that life is fair, and those spending more time on Facebook each week agreed more that others were happier and had better lives. Furthermore, those that included more people whom they did not personally know as their Facebook \"friends\" agreed more that others had better lives.",
"title": ""
},
{
"docid": "4a18861ce15cfae3eaa2519d2fdc98c8",
"text": "This paper presents deadlock prevention are use to solve the deadlock problem of flexible manufacturing systems (FMS). Petri nets have been successfully as one of the most powerful tools for modeling of FMS. Their modeling power and a mathematical arsenal supporting the analysis of the modeled systems stimulate the increasing interest in Petri nets. With the structural object of Petri nets, siphons are important in the analysis and control of deadlocks in Petri nets (PNs) excellent properties. The deadlock prevention method are caused by the unmarked siphons, during the Petri nets are an effective way to model, analyze, simulation and control deadlocks in FMS is presented in this work. The characterization of special structural elements in Petri net so-called siphons has been a major approach for the investigation of deadlock-freeness in the center of FMS. The siphons are structures which allow for some implications on the net's can be well controlled by adding a control place (called monitor) for each uncontrolled siphon in the net in order to become deadlock-free situation in the system. Finally, We proposed method of modeling, simulation, control of FMS by using Petri nets, where deadlock analysis have Production line in parallel processing is demonstrate by a practical example used Petri Net-tool in MATLAB, is effective, and explicitly although its off-line computation.",
"title": ""
},
{
"docid": "2a384fe57f79687cba8482cabfb4243b",
"text": "The Semantic Web graph is growing at an incredible pace, enabling opportunities to discover new knowledge by interlinking and analyzing previously unconnected data sets. This confronts researchers with a conundrum: Whilst the data is available the programming models that facilitate scalability and the infrastructure to run various algorithms on the graph are missing. Some use MapReduce – a good solution for many problems. However, even some simple iterative graph algorithms do not map nicely to that programming model requiring programmers to shoehorn their problem to the MapReduce model. This paper presents the Signal/Collect programming model for synchronous and asynchronous graph algorithms. We demonstrate that this abstraction can capture the essence of many algorithms on graphs in a concise and elegant way by giving Signal/Collect adaptations of various relevant algorithms. Furthermore, we built and evaluated a prototype Signal/Collect framework that executes algorithms in our programming model. We empirically show that this prototype transparently scales and that guiding computations by scoring as well as asynchronicity can greatly improve the convergence of some example algorithms. We released the framework under the Apache License 2.0 (at http://www.ifi.uzh.ch/ddis/research/sc).",
"title": ""
},
{
"docid": "6241cb482e386435be2e33caf8d94216",
"text": "A fog radio access network (F-RAN) is studied, in which $K_T$ edge nodes (ENs) connected to a cloud server via orthogonal fronthaul links, serve $K_R$ users through a wireless Gaussian interference channel. Both the ENs and the users have finite-capacity cache memories, which are filled before the user demands are revealed. While a centralized placement phase is used for the ENs, which model static base stations, a decentralized placement is leveraged for the mobile users. An achievable transmission scheme is presented, which employs a combination of interference alignment, zero-forcing and interference cancellation techniques in the delivery phase, and the \\textit{normalized delivery time} (NDT), which captures the worst-case latency, is analyzed.",
"title": ""
},
{
"docid": "d994b23ea551f23215232c0771e7d6b3",
"text": "It is said that there’s nothing so practical as good theory. It may also be said that there’s nothing so theoretically interesting as good practice1. This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. We will begin with a basic characterization of constructivism identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment -Problem Based Learning as described by Barrows (1985, 1986, 1992).",
"title": ""
},
{
"docid": "7b999aaaa1374499b910c3f7d0918484",
"text": "Research in face recognition has largely been divided between those projects concerned with front-end image processing and those projects concerned with memory for familiar people. These perceptual and cognitive programmes of research have proceeded in parallel, with only limited mutual influence. In this paper we present a model of human face recognition which combines both a perceptual and a cognitive component. The perceptual front-end is based on principal components analysis of images, and the cognitive back-end is based on a simple interactive activation and competition architecture. We demonstrate that this model has a much wider predictive range than either perceptual or cognitive models alone, and we show that this type of combination is necessary in order to analyse some important effects in human face recognition. In sum, the model takes varying images of \"known\" faces and delivers information about these people.",
"title": ""
},
{
"docid": "1503d2a235b2ce75516d18cdea42bbb5",
"text": "Phosphatidylinositol-3,4,5-trisphosphate (PtdIns(3,4,5)P3 or PIP3) mediates signalling pathways as a second messenger in response to extracellular signals. Although primordial functions of phospholipids and RNAs have been hypothesized in the ‘RNA world’, physiological RNA–phospholipid interactions and their involvement in essential cellular processes have remained a mystery. We explicate the contribution of lipid-binding long non-coding RNAs (lncRNAs) in cancer cells. Among them, long intergenic non-coding RNA for kinase activation (LINK-A) directly interacts with the AKT pleckstrin homology domain and PIP3 at the single-nucleotide level, facilitating AKT–PIP3 interaction and consequent enzymatic activation. LINK-A-dependent AKT hyperactivation leads to tumorigenesis and resistance to AKT inhibitors. Genomic deletions of the LINK-A PIP3-binding motif dramatically sensitized breast cancer cells to AKT inhibitors. Furthermore, meta-analysis showed the correlation between LINK-A expression and incidence of a single nucleotide polymorphism (rs12095274: A > G), AKT phosphorylation status, and poor outcomes for breast and lung cancer patients. PIP3-binding lncRNA modulates AKT activation with broad clinical implications.",
"title": ""
},
{
"docid": "e51fe12eecec4116a9a3b7f4c2281938",
"text": "The use of wireless technologies in automation systems offers attractive benefits, but introduces a number of new technological challenges. The paper discusses these aspects for home and building automation applications. Relevant standards are surveyed. A wireless extension to KNX/EIB based on tunnelling over IEEE 802.15.4 is presented. The design emulates the properties of the KNX/EIB wired medium via wireless communication, allowing a seamless extension. Furthermore, it is geared towards zero-configuration and supports the easy integration of protocol security.",
"title": ""
},
{
"docid": "e6b4097ead39f9b5144e2bd8551762ed",
"text": "Thanks to advances in medical imaging technologies and numerical methods, patient-specific modelling is more and more used to improve diagnosis and to estimate the outcome of surgical interventions. It requires the extraction of the domain of interest from the medical scans of the patient, as well as the discretisation of this geometry. However, extracting smooth multi-material meshes that conform to the tissue boundaries described in the segmented image is still an active field of research. We propose to solve this issue by combining an implicit surface reconstruction method with a multi-region mesh extraction scheme. The surface reconstruction algorithm is based on multi-level partition of unity implicit surfaces, which we extended to the multi-material case. The mesh generation algorithm consists in a novel multi-domain version of the marching tetrahedra. It generates multi-region meshes as a set of triangular surface patches consistently joining each other at material junctions. This paper presents this original meshing strategy, starting from boundary points extraction from the segmented data to heterogeneous implicit surface definition, multi-region surface triangulation and mesh adaptation. Results indicate that the proposed approach produces smooth and high-quality triangular meshes with a reasonable geometric accuracy. Hence, the proposed method is well suited for subsequent volume mesh generation and finite element simulations.",
"title": ""
},
{
"docid": "99880fca88bef760741f48166a51ca6f",
"text": "This paper describes first results using the Unified Medical Language System (UMLS) for distantly supervised relation extraction. UMLS is a large knowledge base which contains information about millions of medical concepts and relations between them. Our approach is evaluated using existing relation extraction data sets that contain relations that are similar to some of those in UMLS.",
"title": ""
},
{
"docid": "6893ce06d616d08cf0a9053dc9ea493d",
"text": "Hope is the sum of goal thoughts as tapped by pathways and agency. Pathways reflect the perceived capability to produce goal routes; agency reflects the perception that one can initiate action along these pathways. Using trait and state hope scales, studies explored hope in college student athletes. In Study 1, male and female athletes were higher in trait hope than nonathletes; moreover, hope significantly predicted semester grade averages beyond cumulative grade point average and overall self-worth. In Study 2, with female cross-country athletes, trait hope predicted athletic outcomes; further, weekly state hope tended to predict athletic outcomes beyond dispositional hope, training, and self-esteem, confidence, and mood. In Study 3, with female track athletes, dispositional hope significantly predicted athletic outcomes beyond variance related to athletic abilities and affectivity; moreover, athletes had higher hope than nonathletes.",
"title": ""
},
{
"docid": "103b784d7cc23663584486fa3ca396bb",
"text": "A single, stationary topic model such as latent Dirichlet allocation is inappropriate for modeling corpora that span long time periods, as the popularity of topics is likely to change over time. A number of models that incorporate time have been proposed, but in general they either exhibit limited forms of temporal variation, or require computationally expensive inference methods. In this paper we propose non-parametric Topics over Time (npTOT), a model for time-varying topics that allows an unbounded number of topics and flexible distribution over the temporal variations in those topics’ popularity. We develop a collapsed Gibbs sampler for the proposed model and compare against existing models on synthetic and real document sets.",
"title": ""
},
{
"docid": "dd3efa1bea58934793c7c6a6064e1330",
"text": "This paper gives a broad overview of a complete framework for assessing the predictive uncertainty of scientific computing applications. The framework is complete in the sense that it treats both types of uncertainty (aleatory and epistemic) and incorporates uncertainty due to the form of the model and any numerical approximations used. Aleatory (or random) uncertainties in model inputs are treated using cumulative distribution functions, while epistemic (lack of knowledge) uncertainties are treated as intervals. Approaches for propagating both types of uncertainties through the model to the system response quantities of interest are discussed. Numerical approximation errors (due to discretization, iteration, and round off) are estimated using verification techniques, and the conversion of these errors into epistemic uncertainties is discussed. Model form uncertainties are quantified using model validation procedures, which include a comparison of model predictions to experimental data and then extrapolation of this uncertainty structure to points in the application domain where experimental data do not exist. Finally, methods for conveying the total predictive uncertainty to decision makers are presented.",
"title": ""
}
] |
scidocsrr
|
43fda67994521863cf18d5b59f1c239d
|
Re-ranking Person Re-identification with k-Reciprocal Encoding
|
[
{
"docid": "2bc30693be1c5855a9410fb453128054",
"text": "Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13, 164 images of 1, 360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.",
"title": ""
}
] |
[
{
"docid": "141c28bfbeb5e71dc68d20b6220794c3",
"text": "The development of topical cosmetic anti-aging products is becoming increasingly sophisticated. This is demonstrated by the benefit agents selected and the scientific approaches used to identify them, treatment protocols that increasingly incorporate multi-product regimens, and the level of rigor in the clinical testing used to demonstrate efficacy. Consistent with these principles, a new cosmetic anti-aging regimen was recently developed. The key product ingredients were identified based on an understanding of the key mechanistic themes associated with aging at the genomic level coupled with appropriate in vitro testing. The products were designed to provide optimum benefits when used in combination in a regimen format. This cosmetic regimen was then tested for efficacy against the appearance of facial wrinkles in a 24-week clinical trial compared with 0.02% tretinoin, a recognized benchmark prescription treatment for facial wrinkling. The cosmetic regimen significantly improved wrinkle appearance after 8 weeks relative to tretinoin and was better tolerated. Wrinkle appearance benefits from the two treatments in cohorts of subjects who continued treatment through 24 weeks were also comparable.",
"title": ""
},
{
"docid": "14ba02b92184c21cbbe2344313e09c23",
"text": "Smart meters are at high risk to be an attack target or to be used as an attacking means of malicious users because they are placed at the closest location to users in the smart gridbased infrastructure. At present, Korea is proceeding with 'Smart Grid Advanced Metering Infrastructure (AMI) Construction Project', and has selected Device Language Message Specification/ COmpanion Specification for Energy Metering (DLMS/COSEM) protocol for the smart meter communication. However, the current situation is that the vulnerability analysis technique is still insufficient to be applied to DLMS/COSEM-based smart meters. Therefore, we propose a new fuzzing architecture for analyzing vulnerabilities which is applicable to actual DLMS/COSEM-based smart meter devices. In addition, this paper presents significant case studies for verifying proposed fuzzing architecture through conducting the vulnerability analysis of the experimental results from real DLMS/COSEM-based smart meter devices used in Korea SmartGrid Testbed.",
"title": ""
},
{
"docid": "dc8d9a7da61aab907ee9def56dfbd795",
"text": "The ability to detect change-points in a dynamic network or a time series of graphs is an increasingly important task in many applications of the emerging discipline of graph signal processing. This paper formulates change-point detection as a hypothesis testing problem in terms of a generative latent position model, focusing on the special case of the Stochastic Block Model time series. We analyze two classes of scan statistics, based on distinct underlying locality statistics presented in the literature. Our main contribution is the derivation of the limiting properties and power characteristics of the competing scan statistics. Performance is compared theoretically, on synthetic data, and empirically, on the Enron email corpus.",
"title": ""
},
{
"docid": "446af0ad077943a77ac4a38fd84df900",
"text": "We investigate the manufacturability of 20-nm double-gate and FinFET devices in integrated circuits by projecting process tolerances. Two important factors affecting the sensitivity of device electrical parameters to physical variations were quantitatively considered. The quantum effect was computed using the density gradient method and the sensitivity of threshold voltage to random dopant fluctuation was studied by Monte Carlo simulation. Our results show the 3 value ofVT variation caused by discrete impurity fluctuation can be greater than 100%. Thus, engineering the work function of gate materials and maintaining a nearly intrinsic channel is more desirable. Based on a design with an intrinsic channel and ideal gate work function, we analyzed the sensitivity of device electrical parameters to several important physical fluctuations such as the variations in gate length, body thickness, and gate dielectric thickness. We found that quantum effects have great impact on the performance of devices. As a result, the device electrical behavior is sensitive to small variations of body thickness. The effect dominates over the effects produced by other physical fluctuations. To achieve a relative variation of electrical parameters comparable to present practice in industry, we face a challenge of fin width control (less than 1 nm 3 value of variation) for the 20-nm FinFET devices. The constraint of the gate length variation is about 10 15%. We estimate a tolerance of 1 2 A 3 value of oxide thickness variation and up to 30% front-back oxide thickness mismatch.",
"title": ""
},
{
"docid": "41aa05455471ecd660599f4ec285ff29",
"text": "The recent progress of human parsing techniques has been largely driven by the availability of rich data resources. In this work, we demonstrate some critical discrepancies between the current benchmark datasets and the real world human parsing scenarios. For instance, all the human parsing datasets only contain one person per image, while usually multiple persons appear simultaneously in a realistic scene. It is more practically demanded to simultaneously parse multiple persons, which presents a greater challenge to modern human parsing methods. Unfortunately, absence of relevant data resources severely impedes the development of multiple-human parsing methods. To facilitate future human parsing research, we introduce the Multiple-Human Parsing (MHP) dataset, which contains multiple persons in a real world scene per single image. The MHP dataset contains various numbers of persons (from 2 to 16) per image with 18 semantic classes for each parsing annotation. Persons appearing in the MHP images present sufficient variations in pose, occlusion and interaction. To tackle the multiple-human parsing problem, we also propose a novel Multiple-Human Parser (MH-Parser), which considers both the global context and local cues for each person in the parsing process. The model is demonstrated to outperform the naive “detect-and-parse” approach by a large margin, which will serve as a solid baseline and help drive the future research in real world human parsing.",
"title": ""
},
{
"docid": "c215a497d39f4f95a9fc720debb14b05",
"text": "Adding frequency reconfigurability to a compact metamaterial-inspired antenna is investigated. The antenna is a printed monopole with an incorporated slot and is fed by a coplanar waveguide (CPW) line. This antenna was originally inspired from the concept of negative-refractive-index metamaterial transmission lines and exhibits a dual-band behavior. By using a varactor diode, the lower band (narrowband) of the antenna, which is due to radiation from the incorporated slot, can be tuned over a broad frequency range, while the higher band (broadband) remains effectively constant. A detailed equivalent circuit model is developed that predicts the frequency-tuning behavior for the lower band of the antenna. The circuit model shows the involvement of both CPW even and odd modes in the operation of the antenna. Experimental results show that, for a varactor diode capacitance approximately ranging from 0.1-0.7 pF, a tuning range of 1.6-2.23 GHz is achieved. The size of the antenna at the maximum frequency is 0.056 λ0 × 0.047 λ0 and the antenna is placed over a 0.237 λ0 × 0.111 λ0 CPW ground plane (λ0 being the wavelength in vacuum).",
"title": ""
},
{
"docid": "d8d102c3d6ac7d937bb864c69b4d3cd9",
"text": "Question Answering (QA) systems are becoming the inspiring model for the future of search engines. While recently, underlying datasets for QA systems have been promoted from unstructured datasets to structured datasets with highly semantic-enriched metadata, but still question answering systems involve serious challenges which cause to be far beyond desired expectations. In this paper, we raise the challenges for building a Question Answering (QA) system especially with the focus of employing structured data (i.e. knowledge graph). This paper provide an exhaustive insight of the known challenges, so far. Thus, it helps researchers to easily spot open rooms for the future research agenda.",
"title": ""
},
{
"docid": "c3a6a72c9d738656f356d67cd5ce6c47",
"text": "Most doors are controlled by persons with the use of keys, security cards, password or pattern to open the door. Theaim of this paper is to help users forimprovement of the door security of sensitive locations by using face detection and recognition. Face is a complex multidimensional structure and needs good computing techniques for detection and recognition. This paper is comprised mainly of three subsystems: namely face detection, face recognition and automatic door access control. Face detection is the process of detecting the region of face in an image. The face is detected by using the viola jones method and face recognition is implemented by using the Principal Component Analysis (PCA). Face Recognition based on PCA is generally referred to as the use of Eigenfaces.If a face is recognized, it is known, else it is unknown. The door will open automatically for the known person due to the command of the microcontroller. On the other hand, alarm will ring for the unknown person. Since PCA reduces the dimensions of face images without losing important features, facial images for many persons can be stored in the database. Although many training images are used, computational efficiency cannot be decreased significantly. Therefore, face recognition using PCA can be more useful for door security system than other face recognition schemes.",
"title": ""
},
{
"docid": "78ce06926ea3b2012277755f0916fbb7",
"text": "We present a review of the historical evolution of software engineering, intertwining it with the history of knowledge engineering because \"those who cannot remember the past are condemned to repeat it.\" This retrospective represents a further step forward to understanding the current state of both types of engineerings; history has also positive experiences; some of them we would like to remember and to repeat. Two types of engineerings had parallel and divergent evolutions but following a similar pattern. We also define a set of milestones that represent a convergence or divergence of the software development methodologies. These milestones do not appear at the same time in software engineering and knowledge engineering, so lessons learned in one discipline can help in the evolution of the other one.",
"title": ""
},
{
"docid": "d8e60dc8378fe39f698eede2b6687a0f",
"text": "Today's complex software systems are neither secure nor reliable. The rudimentary software protection primitives provided by current hardware forces systems to run many distrusting software components (e.g., procedures, libraries, plugins, modules) in the same protection domain, or otherwise suffer degraded performance from address space switches.\n We present CODOMs (COde-centric memory DOMains), a novel architecture that can provide finer-grained isolation between software components with effectively zero run-time overhead, all at a fraction of the complexity of other approaches. An implementation of CODOMs in a cycle-accurate full-system x86 simulator demonstrates that with the right hardware support, finer-grained protection and run-time performance can peacefully coexist.",
"title": ""
},
{
"docid": "dd211105651b376b40205eb16efe1c25",
"text": "WBAN based medical-health technologies have great potential for continuous monitoring in ambulatory settings, early detection of abnormal conditions, and supervised rehabilitation. They can provide patients with increased confidence and a better quality of life, and promote healthy behavior and health awareness. Continuous monitoring with early detection likely has the potential to provide patients with an increased level of confidence, which in turn may improve quality of life. In addition, ambulatory monitoring will allow patients to engage in normal activities of daily life, rather than staying at home or close to specialized medical services. Last but not least, inclusion of continuous monitoring data into medical databases will allow integrated analysis of all data to optimize individualized care and provide knowledge discovery through integrated data mining. Indeed, with the current technological trend toward integration of processors and wireless interfaces, we will soon have coin-sized intelligent sensors. They will be applied as skin patches, seamlessly integrated into a personal monitoring system, and worn for extended periods of time.",
"title": ""
},
{
"docid": "8b7cb051224008ba3e1bf91bac5e9d21",
"text": "The Internet of things aspires to connect anyone with anything at any point of time at any place. Internet of Thing is generally made up of three-layer architecture. Namely Perception, Network and Application layers. A lot of security principles should be enabled at each layer for proper and efficient working of these applications. This paper represents the overview of Security principles, Security Threats and Security challenges at the application layer and its countermeasures to overcome those challenges. The Application layer plays an important role in all of the Internet of Thing applications. The most widely used application layer protocol is MQTT. The security threats for Application Layer Protocol MQTT is particularly selected and evaluated. Comparison is done between different Application layer protocols and security measures for those protocols. Due to the lack of common standards for IoT protocols, a lot of issues are considered while choosing the particular protocol.",
"title": ""
},
{
"docid": "79465d290ab299b9d75e9fa617d30513",
"text": "In this paper we describe computational experience in solving unconstrained quadratic zero-one problems using a branch and bound algorithm. The algorithm incorporates dynamic preprocessing techniques for forcing variables and heuristics to obtain good starting points. Computational results and comparisons with previous studies on several hundred test problems with dimensions up to 200 demonstrate the efficiency of our algorithm. In dieser Arbeit beschreiben wir rechnerische Erfahrungen bei der Lösung von unbeschränkten quadratischen Null-Eins-Problemen mit einem “Branch and Bound”-Algorithmus. Der Algorithmus erlaubt dynamische Vorbereitungs-Techniken zur Erzwingung ausgewählter Variablen und Heuristiken zur Wahl von guten Startpunkten. Resultate von Berechnungen und Vergleiche mit früheren Arbeiten mit mehreren hundert Testproblemen mit Dimensionen bis 200 zeigen die Effizienz unseres Algorithmus.",
"title": ""
},
{
"docid": "b27b164a7ff43b8f360167e5f886f18a",
"text": "Segmentation and grouping of image elements is required to proceed with image recognition. Due to the fact that the images are two dimensional (2D) representations of the real three dimensional (3D) scenes, the information of the third dimension, like geometrical relations between the objects that are important for reasonable segmentation and grouping, are lost in 2D image representations. Computer stereo vision implies on understanding information stored in 3D-scene. Techniques for stereo computation are observed in this paper. The methods for solving the correspondence problem in stereo image matching are presented. The process of 3D-scene reconstruction from stereo image pairs and extraction of parameters important for image understanding are described. Occluded and surrounding areas in stereo image pairs are stressed out as important for image understanding.",
"title": ""
},
{
"docid": "4cc71db87682a96ddee09e49a861142f",
"text": "BACKGROUND\nReadiness is an integral and preliminary step in the successful implementation of telehealth services into existing health systems within rural communities.\n\n\nMETHODS AND MATERIALS\nThis paper details and critiques published international peer-reviewed studies that have focused on assessing telehealth readiness for rural and remote health. Background specific to readiness and change theories is provided, followed by a critique of identified telehealth readiness models, including a commentary on their readiness assessment tools.\n\n\nRESULTS\nFour current readiness models resulted from the search process. The four models varied across settings, such as rural outpatient practices, hospice programs, rural communities, as well as government agencies, national associations, and organizations. All models provided frameworks for readiness tools. Two specifically provided a mechanism by which communities could be categorized by their level of telehealth readiness.\n\n\nDISCUSSION\nCommon themes across models included: an appreciation of practice context, strong leadership, and a perceived need to improve practice. Broad dissemination of these telehealth readiness models and tools is necessary to promote awareness and assessment of readiness. This will significantly aid organizations to facilitate the implementation of telehealth.",
"title": ""
},
{
"docid": "44fee78f33e4d5c6d9c8b0126b1d5830",
"text": "This paper discusses an industrial case study in which data mining has been applied to solve a quality engineering problem in electronics assembly. During the assembly process, solder balls occur underneath some components of printed circuit boards. The goal is to identify the cause of solder defects in a circuit board using a data mining approach. Statistical process control and design of experiment approaches did not provide conclusive results. The paper discusses features considered in the study, data collected, and the data mining solution approach to identify causes of quality faults in an industrial application.",
"title": ""
},
{
"docid": "9ba6a2042e99c3ace91f0fc017fa3fdd",
"text": "This paper proposes a two-element multi-input multi-output (MIMO) open-slot antenna implemented on the display ground plane of a laptop computer for eight-band long-term evolution/wireless wide-area network operations. The metal surroundings of the antennas have been well integrated as a part of the radiation structure. In the single-element open-slot antenna, the nearby hinge slot (which is bounded by two ground planes and two hinges) is relatively large as compared with the open slot itself and acts as a good radiator. In the MIMO antenna consisting of two open-slot elements, a T slot is embedded in the display ground plane and is connected to the hinge slot. The T and hinge slots when connected behave as a radiator; whereas, the T slot itself functions as an isolation element. With the isolation element, simulated isolations between the two elements of the MIMO antenna are raised from 8.3–11.2 to 15–17.1 dB in 698–960 MHz and from 12.1–21 to 15.9–26.7 dB in 1710–2690 MHz. Measured isolations with the isolation element in the desired low- and high-frequency ranges are 17.6–18.8 and 15.2–23.5 dB, respectively. Measured and simulated efficiencies for the two-element MIMO antenna with either element excited are both larger than 50% in the desired operating frequency bands.",
"title": ""
},
{
"docid": "ad3add7522b3a58359d36e624e9e65f7",
"text": "In this paper, global and local prosodic features extracted from sentence, word and syllables are proposed for speech emotion or affect recognition. In this work, duration, pitch, and energy values are used to represent the prosodic information, for recognizing the emotions from speech. Global prosodic features represent the gross statistics such as mean, minimum, maximum, standard deviation, and slope of the prosodic contours. Local prosodic features represent the temporal dynamics in the prosody. In this work, global and local prosodic features are analyzed separately and in combination at different levels for the recognition of emotions. In this study, we have also explored the words and syllables at different positions (initial, middle, and final) separately, to analyze their contribution towards the recognition of emotions. In this paper, all the studies are carried out using simulated Telugu emotion speech corpus (IITKGP-SESC). These results are compared with the results of internationally known Berlin emotion speech corpus (Emo-DB). Support vector machines are used to develop the emotion recognition models. The results indicate that, the recognition performance using local prosodic features is better compared to the performance of global prosodic features. Words in the final position of the sentences, syllables in the final position of the words exhibit more emotion discriminative information compared to the words and syllables present in the other positions. K.S. Rao ( ) · S.G. Koolagudi · R.R. Vempada School of Information Technology, Indian Institute of Technology Kharagpur, Kharagpur 721302, West Bengal, India e-mail: ksrao@iitkgp.ac.in S.G. Koolagudi e-mail: koolagudi@yahoo.com R.R. Vempada e-mail: ramu.csc@gmail.com",
"title": ""
},
{
"docid": "33ed6ab1eef74e6ba6649ff5a85ded6b",
"text": "With the rapid increasing of smart phones and their embedded sensing technologies, mobile crowd sensing (MCS) becomes an emerging sensing paradigm for performing large-scale sensing tasks. One of the key challenges of large-scale mobile crowd sensing systems is how to effectively select the minimum set of participants from the huge user pool to perform the tasks and achieve certain level of coverage. In this paper, we introduce a new MCS architecture which leverages the cached sensing data to fulfill partial sensing tasks in order to reduce the size of selected participant set. We present a newly designed participant selection algorithm with caching and evaluate it via extensive simulations with a real-world mobile dataset.",
"title": ""
},
{
"docid": "f13ffbb31eedcf46df1aaecfbdf61be9",
"text": "Finding one's way in a large-scale environment may engage different cognitive processes than following a familiar route. The neural bases of these processes were investigated using functional MRI (fMRI). Subjects found their way in one virtual-reality town and followed a well-learned route in another. In a control condition, subjects followed a visible trail. Within subjects, accurate wayfinding activated the right posterior hippocampus. Between-subjects correlations with performance showed that good navigators (i.e., accurate wayfinders) activated the anterior hippocampus during wayfinding and head of caudate during route following. These results coincide with neurophysiological evidence for distinct response (caudate) and place (hippocampal) representations supporting navigation. We argue that the type of representation used influences both performance and concomitant fMRI activation patterns.",
"title": ""
}
] |
scidocsrr
|
10debb17e51145a4ff0adf56e6609281
|
A new sentence similarity measure and sentence based extractive technique for automatic text summarization
|
[
{
"docid": "639bbe7b640c514ab405601c7c3cfa01",
"text": "Measuring the semantic similarity between words is an important component in various tasks on the web such as relation extraction, community mining, document clustering, and automatic metadata extraction. Despite the usefulness of semantic similarity measures in these applications, accurately measuring semantic similarity between two words (or entities) remains a challenging task. We propose an empirical method to estimate semantic similarity using page counts and text snippets retrieved from a web search engine for two words. Specifically, we define various word co-occurrence measures using page counts and integrate those with lexical patterns extracted from text snippets. To identify the numerous semantic relations that exist between two given words, we propose a novel pattern extraction algorithm and a pattern clustering algorithm. The optimal combination of page counts-based co-occurrence measures and lexical pattern clusters is learned using support vector machines. The proposed method outperforms various baselines and previously proposed web-based semantic similarity measures on three benchmark data sets showing a high correlation with human ratings. Moreover, the proposed method significantly improves the accuracy in a community mining task.",
"title": ""
},
{
"docid": "91c024a832bfc07bc00b7086bcf77add",
"text": "Topic-focused multi-document summarization aims to produce a summary biased to a given topic or user profile. This paper presents a novel extractive approach based on manifold-ranking of sentences to this summarization task. The manifold-ranking process can naturally make full use of both the relationships among all the sentences in the documents and the relationships between the given topic and the sentences. The ranking score is obtained for each sentence in the manifold-ranking process to denote the biased information richness of the sentence. Then the greedy algorithm is employed to impose diversity penalty on each sentence. The summary is produced by choosing the sentences with both high biased information richness and high information novelty. Experiments on DUC2003 and DUC2005 are performed and the ROUGE evaluation results show that the proposed approach can significantly outperform existing approaches of the top performing systems in DUC tasks and baseline approaches.",
"title": ""
}
] |
[
{
"docid": "c4183c8b08da8d502d84a650d804cac8",
"text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>",
"title": ""
},
{
"docid": "b9aaab241bab9c11ac38d6e9188b7680",
"text": "Find loads of the research methods in the social sciences book catalogues in this site as the choice of you visiting this page. You can also join to the website book library that will show you numerous books from any types. Literature, science, politics, and many more catalogues are presented to offer you the best book to find. The book that really makes you feels satisfied. Or that's the book that will save you from your job deadline.",
"title": ""
},
{
"docid": "ea4a1405e1c6444726d1854c7c56a30d",
"text": "This paper presents a novel integrated approach for efficient optimization based online trajectory planning of topologically distinctive mobile robot trajectories. Online trajectory optimization deforms an initial coarse path generated by a global planner by minimizing objectives such as path length, transition time or control effort. Kinodynamic motion properties of mobile robots and clearance from obstacles impose additional equality and inequality constraints on the trajectory optimization. Local planners account for efficiency by restricting the search space to locally optimal solutions only. However, the objective function is usually non-convex as the presence of obstacles generates multiple distinctive local optima. The proposed method maintains and simultaneously optimizes a subset of admissible candidate trajectories of distinctive topologies and thus seeking the overall best candidate among the set of alternative local solutions. Time-optimal trajectories for differential-drive and carlike robots are obtained efficiently by adopting the Timed-Elastic-Band approach for the underlying trajectory optimization problem. The investigation of various example scenarios and a comparative analysis with conventional local planners confirm the advantages of integrated exploration, maintenance and optimization of topologically distinctive trajectories. ∗Corresponding author Email address: christoph.roesmann@tu-dortmund.de (Christoph Rösmann) Preprint submitted to Robotics and Autonomous Systems November 12, 2016",
"title": ""
},
{
"docid": "e87a799822f1012f032cb66cd2925604",
"text": "Curcumin, the yellow color pigment of turmeric, is produced industrially from turmeric oleoresin. The mother liquor after isolation of curcumin from oleoresin contains approximately 40% oil. The oil was extracted from the mother liquor using hexane at 60 degrees C, and the hexane extract was separated into three fractions using silica gel column chromatography. These fractions were tested for antibacterial activity by pour plate method against Bacillus cereus, Bacillus coagulans, Bacillus subtilis, Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa. Fraction II eluted with 5% ethyl acetate in hexane was found to be most active fraction. The turmeric oil, fraction I, and fraction II were analyzed by GC and GC-MS. ar-Turmerone, turmerone, and curlone were found to be the major compounds present in these fractions along with other oxygenated compounds.",
"title": ""
},
{
"docid": "4654a1926d0caa787ade6aaf58e00474",
"text": "GitHub is the most widely used social, distributed version control system. It has around 10 million registered users and hosts over 16 million public repositories. Its user base is also very active as GitHub ranks in the top 100 Alexa most popular websites. In this study, we collect GitHub’s state in its entirety. Doing so, allows us to study new aspects of the ecosystem. Although GitHub is the home to millions of users and repositories, the analysis of users’ activity time-series reveals that only around 10% of them can be considered active. The collected dataset allows us to investigate the popularity of programming languages and existence of pattens in the relations between users, repositories, and programming languages. By, applying a k-means clustering method to the usersrepositories commits matrix, we find that two clear clusters of programming languages separate from the remaining. One cluster forms for “web programming” languages (Java Script, Ruby, PHP, CSS), and a second for “system oriented programming” languages (C, C++, Python). Further classification, allow us to build a phylogenetic tree of the use of programming languages in GitHub. Additionally, we study the main and the auxiliary programming languages of the top 1000 repositories in more detail. We provide a ranking of these auxiliary programming languages using various metrics, such as percentage of lines of code, and PageRank.",
"title": ""
},
{
"docid": "41481b2f081831d28ead1b685465d535",
"text": "Triticum aestivum (Wheat grass juice) has high concentrations of chlorophyll, amino acids, minerals, vitamins, and enzymes. Fresh juice has been shown to possess anti-cancer activity, anti-ulcer activity, anti-inflammatory, antioxidant activity, anti-arthritic activity, and blood building activity in Thalassemia. It has been argued that wheat grass helps blood flow, digestion, and general detoxification of the body due to the presence of biologically active compounds and minerals in it and due to its antioxidant potential which is derived from its high content of bioflavonoids such as apigenin, quercitin, luteoline. Furthermore, indole compounds, amely choline, which known for antioxidants and also possess chelating property for iron overload disorders. The presence of 70% chlorophyll, which is almost chemically identical to haemoglobin. The only difference is that the central element in chlorophyll is magnesium and in hemoglobin it is iron. In wheat grass makes it more useful in various clinical conditions involving hemoglobin deficiency and other chronic disorders ultimately considered as green blood.",
"title": ""
},
{
"docid": "071ba3d1cec138011f398cae8589b77b",
"text": "The term ‘vulnerability’ is used in many different ways by various scholarly communities. The resulting disagreement about the appropriate definition of vulnerability is a frequent cause for misunderstanding in interdisciplinary research on climate change and a challenge for attempts to develop formal models of vulnerability. Earlier attempts at reconciling the various conceptualizations of vulnerability were, at best, partly successful. This paper presents a generally applicable conceptual framework of vulnerability that combines a nomenclature of vulnerable situations and a terminology of vulnerability concepts based on the distinction of four fundamental groups of vulnerability factors. This conceptual framework is applied to characterize the vulnerability concepts employed by the main schools of vulnerability research and to review earlier attempts at classifying vulnerability concepts. None of these onedimensional classification schemes reflects the diversity of vulnerability concepts identified in this review. The wide range of policy responses available to address the risks from global climate change suggests that climate impact, vulnerability, and adaptation assessments will continue to apply a variety of vulnerability concepts. The framework presented here provides the much-needed conceptual clarity and facilitates bridging the various approaches to researching vulnerability to climate change. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "48889a388562e195eff17488f57ca1e0",
"text": "To clarify the effects of changing shift schedules from a full-day to a half-day before a night shift, 12 single nurses and 18 married nurses with children that engaged in night shift work in a Japanese hospital were investigated. Subjects worked 2 different shift patterns consisting of a night shift after a half-day shift (HF-N) and a night shift after a day shift (D-N). Physical activity levels were recorded with a physical activity volume meter to measure sleep/wake time more precisely without restricting subjects' activities. The duration of sleep before a night shift of married nurses was significantly shorter than that of single nurses for both shift schedules. Changing shift from the D-N to the HF-N increased the duration of sleep before a night shift for both groups, and made wake-up time earlier for single nurses only. Repeated ANCOVA of the series of physical activities showed significant differences with shift (p < 0.01) and marriage (p < 0.01) for variances, and age (p < 0.05) for a covariance. The paired t-test to compare the effects of changing shift patterns in each subject group and ANCOVA for examining the hourly activity differences between single and married nurses showed that the effects of a change in shift schedules seemed to have less effect on married nurses than single nurses. These differences might due to the differences of their family/home responsibilities.",
"title": ""
},
{
"docid": "da8be182ac315342bead9df4b87c6bab",
"text": "A new computational imaging technique, termed Fourier ptychographic microscopy (FPM), uses a sequence of low-resolution images captured under varied illumination to iteratively converge upon a high-resolution complex sample estimate. Here, we propose a mathematical model of FPM that explicitly connects its operation to conventional ptychography, a common procedure applied to electron and X-ray diffractive imaging. Our mathematical framework demonstrates that under ideal illumination conditions, conventional ptychography and FPM both produce datasets that are mathematically linked by a linear transformation. We hope this finding encourages the future cross-pollination of ideas between two otherwise unconnected experimental imaging procedures. In addition, the coherence state of the illumination source used by each imaging platform is critical to successful operation, yet currently not well understood. We apply our mathematical framework to demonstrate that partial coherence uniquely alters both conventional ptychography's and FPM's captured data, but up to a certain threshold can still lead to accurate resolution-enhanced imaging through appropriate computational post-processing. We verify this theoretical finding through simulation and experiment.",
"title": ""
},
{
"docid": "073486fe6bcd756af5f5325b27c57912",
"text": "This paper describes the case of a unilateral agraphic patient (GG) who makes letter substitutions only when writing letters and words with his dominant left hand. Accuracy is significantly greater when he is writing with his right hand and when he is asked to spell words orally. GG also makes case errors when writing letters, and will sometimes write words in mixed case. However, these allograph errors occur regardless of which hand he is using to write. In terms of cognitive models of peripheral dysgraphia (e.g., Ellis, 1988), it appears that he has an allograph level impairment that affects writing with both hands, and a separate problem in accessing graphic motor patterns that disrupts writing with the left hand only. In previous studies of left-handed patients with unilateral agraphia (Zesiger & Mayer, 1992; Zesiger, Pegna, & Rilliet, 1994), it has been suggested that allographic knowledge used for writing with both hands is stored exclusively in the left hemisphere, but that graphic motor patterns are represented separately in each hemisphere. The pattern of performance demonstrated by GG strongly supports such a conclusion.",
"title": ""
},
{
"docid": "a318e8755d2f2ba3c84543ba853c34fc",
"text": "Multi-view learning can provide self-supervision when different views are avail1 able of the same data. Distributional hypothesis provides another form of useful 2 self-supervision from adjacent sentences which are plentiful in large unlabelled 3 corpora. Motivated by the asymmetry in the two hemispheres of the human brain 4 as well as the observation that different learning architectures tend to emphasise 5 different aspects of sentence meaning, we present two multi-view frameworks for 6 learning sentence representations in an unsupervised fashion. One framework uses 7 a generative objective and the other a discriminative one. In both frameworks, 8 the final representation is an ensemble of two views, in which, one view encodes 9 the input sentence with a Recurrent Neural Network (RNN), and the other view 10 encodes it with a simple linear model. We show that, after learning, the vectors 11 produced by our multi-view frameworks provide improved representations over 12 their single-view learned counterparts, and the combination of different views gives 13 representational improvement over each view and demonstrates solid transferability 14 on standard downstream tasks. 15",
"title": ""
},
{
"docid": "eb5c7c9fbe64cbfd4b6c7dd5490c17c1",
"text": "Android packing services provide significant benefits in code protection by hiding original executable code, which help app developers to protect their code against reverse engineering. However, adversaries take the advantage of packers to hide their malicious code. A number of unpacking approaches have been proposed to defend against malicious packed apps. Unfortunately, most of the unpacking approaches work only for a limited time or for a particular type of packers. The analysis for different packers often requires specific domain knowledge and a significant amount of manual effort. In this paper, we conducted analyses of known Android packers appeared in recent years and propose to design an automatic detection and classification framework. The framework is capable of identifying packed apps, extracting the execution behavioral pattern of packers, and categorizing packed apps into groups. The variants of packer families share typical behavioral patterns reflecting their activities and packing techniques. The behavioral patterns obtained dynamically can be exploited to detect and classify unknown packers, which shed light on new directions for security researchers.",
"title": ""
},
{
"docid": "71573bc8f5be1025837d5c72393b4fa6",
"text": "This paper describes our initial work in developing a real-time audio-visual Chinese speech synthesizer with a 3D expressive avatar. The avatar model is parameterized according to the MPEG-4 facial animation standard [1]. This standard offers a compact set of facial animation parameters (FAPs) and feature points (FPs) to enable realization of 20 Chinese visemes and 7 facial expressions (i.e. 27 target facial configurations). The Xface [2] open source toolkit enables us to define the influence zone for each FP and the deformation function that relates them. Hence we can easily animate a large number of coordinates in the 3D model by specifying values for a small set of FAPs and their FPs. FAP values for 27 target facial configurations were estimated from available corpora. We extended the dominance blending approach to effect animations for coarticulated visemes superposed with expression changes. We selected six sentiment-carrying text messages and synthesized expressive visual speech (for all expressions, in randomized order) with neutral audio speech. A perceptual experiment involving 11 subjects shows that they can identify the facial expression that matches the text message’s sentiment 85% of the time.",
"title": ""
},
{
"docid": "e92fd3ce5f90600f2fca84682c35c4e3",
"text": "A software-defined radar is a versatile radar system, where most of the processing, like signal generation, filtering, up-and down conversion etc. is performed by a software. This paper presents a state of the art of software-defined radar technology. It describes the design concept of software-defined radars and the two possible implementations. A global assessment is presented, and the link with the Cognitive Radar is explained.",
"title": ""
},
{
"docid": "ca1c193e5e5af821772a5d123e84b72a",
"text": "Over the last few years, the phenomenon of adversarial examples — maliciously constructed inputs that fool trained machine learning models — has captured the attention of the research community, especially when the adversary is restricted to small modifications of a correctly handled input. Less surprisingly, image classifiers also lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this paper we provide both empirical and theoretical evidence that these are two manifestations of the same underlying phenomenon, establishing close connections between the adversarial robustness and corruption robustness research programs. This suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. Based on our results we recommend that future adversarial defenses consider evaluating the robustness of their methods to distributional shift with benchmarks such as Imagenet-C.",
"title": ""
},
{
"docid": "c6485365e8ce550ea8c507aa963a00c2",
"text": "Consensus molecular subtypes and the evolution of precision medicine in colorectal cancer Rodrigo Dienstmann, Louis Vermeulen, Justin Guinney, Scott Kopetz, Sabine Tejpar and Josep Tabernero Nature Reviews Cancer 17, 79–92 (2017) In this article a source of grant funding for one of the authors was omitted from the Acknowledgements section. The online version of the article has been corrected to include: “The work of R.D. was supported by the Grant for Oncology Innovation under the project ‘Next generation of clinical trials with matched targeted therapies in colorectal cancer’”. C O R R E C T I O N",
"title": ""
},
{
"docid": "1b923168160fcd643692d5473b828ce3",
"text": "Interactive Evolutionary Computation (IEC) creates the intriguing possibility that a large variety of useful content can be produced quickly and easily for practical computer graphics and gaming applications. To show that IEC can produce such content, this paper applies IEC to particle system effects, which are the de facto method in computer graphics for generating fire, smoke, explosions, electricity, water, and many other special effects. While particle systems are capable of producing a broad array of effects, they require substantial mathematical and programming knowledge to produce. Therefore, efficient particle system generation tools are required for content developers to produce special effects in a timely manner. This paper details the design, representation, and animation of particle systems via two IEC tools called NEAT Particles and NEAT Projectiles. Both tools evolve artificial neural networks (ANN) with the NeuroEvolution of Augmenting Topologies (NEAT) method to control the behavior of particles. NEAT Particles evolves general-purpose particle effects, whereas NEAT Projectiles specializes in evolving particle weapon effects for video games. The primary advantage of this NEAT-based IEC approach is to decouple the creation of new effects from mathematics and programming, enabling content developers without programming knowledge to produce complex effects. Furthermore, it allows content designers to produce a broader range of effects than typical development tools. Finally, it acts as a concept generator, allowing content creators to interactively and efficiently explore the space of possible effects. Both NEAT Particles and NEAT Projectiles demonstrate how IEC can evolve useful content for graphical media and games, and are together a step toward the larger goal of automated content generation.",
"title": ""
},
{
"docid": "d2ec8831779e7af4e82a10c617a2e9a1",
"text": "In the new designs of military aircraft and unmanned aircraft there is a clear trend towards increasing demand of electrical power. This fact is mainly due to the replacement of mechanical, pneumatic and hydraulic equipments by partially or completely electrical systems. Generally, use of electrical power onboard is continuously increasing within the areas of communications, surveillance and general systems, such as: radar, cooling, landing gear or actuators systems. To cope with this growing demand for electric power, new levels of voltage (270 VDC), architectures and power electronics devices are being applied to the onboard electrical power distribution systems. The purpose of this paper is to present and describe the technological project HV270DC. In this project, one Electrical Power Distribution System (EPDS), applicable to the more electric aircrafts, has been developed. This system has been integrated by EADS in order to study the benefits and possible problems or risks that affect this kind of power distribution systems, in comparison with conventional distribution systems.",
"title": ""
},
{
"docid": "a05b4878404f9127d576d90d6b241588",
"text": "This paper presents an air-filled substrate integrated waveguide (AFSIW) filter post-process tuning technique. The emerging high-performance AFSIW technology is of high interest for the design of microwave and millimeter-wave substrate integrated systems based on low-cost multilayer printed circuit board (PCB) process. However, to comply with stringent specifications, especially for space, aeronautical and safety applications, a filter post-process tuning technique is desired. AFSIW single pole filter post-process tuning using a capacitive post is theoretically analyzed. It is demonstrated that a tuning of more than 3% of the resonant frequency is achieved at 21 GHz using a 0.3 mm radius post with a 40% insertion ratio. For experimental demonstration, a fourth-order AFSIW band pass filter operating in the 20.88 to 21.11 GHz band is designed and fabricated. Due to fabrication tolerances, it is shown that its performances are not in line with expected results. Using capacitive post tuning, characteristics are improved and agree with optimized results. This post-process tuning can be used for other types of substrate integrated devices.",
"title": ""
},
{
"docid": "bf294a4c3af59162b2f401e2cdcb060b",
"text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.",
"title": ""
}
] |
scidocsrr
|
bef6b341dc12a62d9166bd111e7344e0
|
HOT BUTTONS AND TIME SINKS: THE EFFECTS OF ELECTRONIC COMMUNICATION DURING NONWORK TIME ON EMOTIONS AND WORK-NONWORK CONFLICT
|
[
{
"docid": "eb4c25caba8c3e6f06d3cabe6c004cd5",
"text": "The greater power of bad events over good ones is found in everyday events, major life events (e.g., trauma), close relationship outcomes, social network patterns, interpersonal interactions, and learning processes. Bad emotions, bad parents, and bad feedback have more impact than good ones, and bad information is processed more thoroughly than good. The self is more motivated to avoid bad self-definitions than to pursue good ones. Bad impressions and bad stereotypes are quicker to form and more resistant to disconfirmation than good ones. Various explanations such as diagnosticity and salience help explain some findings, but the greater power of bad events is still found when such variables are controlled. Hardly any exceptions (indicating greater power of good) can be found. Taken together, these findings suggest that bad is stronger than good, as a general principle across a broad range of psychological phenomena.",
"title": ""
}
] |
[
{
"docid": "59eaa9f4967abdc1c863f8fb256ae966",
"text": "CONTEXT\nThe projected expansion in the next several decades of the elderly population at highest risk for Parkinson disease (PD) makes identification of factors that promote or prevent the disease an important goal.\n\n\nOBJECTIVE\nTo explore the association of coffee and dietary caffeine intake with risk of PD.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nData were analyzed from 30 years of follow-up of 8004 Japanese-American men (aged 45-68 years) enrolled in the prospective longitudinal Honolulu Heart Program between 1965 and 1968.\n\n\nMAIN OUTCOME MEASURE\nIncident PD, by amount of coffee intake (measured at study enrollment and 6-year follow-up) and by total dietary caffeine intake (measured at enrollment).\n\n\nRESULTS\nDuring follow-up, 102 men were identified as having PD. Age-adjusted incidence of PD declined consistently with increased amounts of coffee intake, from 10.4 per 10,000 person-years in men who drank no coffee to 1.9 per 10,000 person-years in men who drank at least 28 oz/d (P<.001 for trend). Similar relationships were observed with total caffeine intake (P<.001 for trend) and caffeine from non-coffee sources (P=.03 for trend). Consumption of increasing amounts of coffee was also associated with lower risk of PD in men who were never, past, and current smokers at baseline (P=.049, P=.22, and P=.02, respectively, for trend). Other nutrients in coffee, including niacin, were unrelated to PD incidence. The relationship between caffeine and PD was unaltered by intake of milk and sugar.\n\n\nCONCLUSIONS\nOur findings indicate that higher coffee and caffeine intake is associated with a significantly lower incidence of PD. This effect appears to be independent of smoking. The data suggest that the mechanism is related to caffeine intake and not to other nutrients contained in coffee. JAMA. 2000;283:2674-2679.",
"title": ""
},
{
"docid": "152ef51d5264a2e681acefcc536da7cf",
"text": "BACKGROUND AND PURPOSE\nHeart rate variability (HRV) as a measure of autonomic function might provide prognostic information in ischemic stroke. However, numerous difficulties are associated with HRV parameters assessment and interpretation, especially in short-term ECG recordings. For better understanding of derived HRV data and to avoid methodological bias we simultaneously recorded and analyzed heart rate, blood pressure and respiratory rate.\n\n\nMETHODS\nSeventy-five ischemic stroke patients underwent short-term ECG recordings. Linear and nonlinear parameters of HRV as well as beat-to-beat blood pressure and respiratory rate were assessed and compared in patients with different functional neurological outcomes at 7th and 90th days.\n\n\nRESULTS\nValues of Approximate, Sample and Fuzzy Entropy were significantly lower in patients with poor early neurological outcome. Patients with poor 90-day outcome had higher percentage of high frequency spectrum and normalized high frequency power, lower normalized low frequency power and lower low frequency/high frequency ratio. Low frequency/high frequency ratio correlated negatively with scores in the National Institutes of Health Stroke Scale and modified Rankin Scale (mRS) at the 7th and mRS at the 90th days. Mean RR interval, values of blood pressure as well as blood pressure variability did not differ between groups with good and poor outcomes. Respiratory frequency was significantly correlated with the functional neurological outcome at 7th and 90th days.\n\n\nCONCLUSION\nWhile HRV assessed by linear methods seems to have long-term prognostic value, complexity measures of HRV reflect the impact of the neurological state on distinct, temporary properties of heart rate dynamic. Respiratory rate during the first days of the stroke is associated with early and long-term neurological outcome and should be further investigated as a potential risk factor.",
"title": ""
},
{
"docid": "bf9e56e0e125e922de95381fb5520569",
"text": "Today, many private households as well as broadcasting or film companies own large collections of digital music plays. These are time series that differ from, e.g., weather reports or stocks market data. The task is normally that of classification, not prediction of the next value or recognizing a shape or motif. New methods for extracting features that allow to classify audio data have been developed. However, the development of appropriate feature extraction methods is a tedious effort, particularly because every new classification task requires tailoring the feature set anew. This paper presents a unifying framework for feature extraction from value series. Operators of this framework can be combined to feature extraction methods automatically, using a genetic programming approach. The construction of features is guided by the performance of the learning classifier which uses the features. Our approach to automatic feature extraction requires a balance between the completeness of the methods on one side and the tractability of searching for appropriate methods on the other side. In this paper, some theoretical considerations illustrate the trade-off. After the feature extraction, a second process learns a classifier from the transformed data. The practical use of the methods is shown by two types of experiments: classification of genres and classification according to user preferences.",
"title": ""
},
{
"docid": "a500afda393ad60ddd1bb39778655172",
"text": "The success and the failure of a data warehouse (DW) project are mainly related to the design phase according to most researchers in this domain. When analyzing the decision-making system requirements, many recurring problems appear and requirements modeling difficulties are detected. Also, we encounter the problem associated with the requirements expression by non-IT professionals and non-experts makers on design models. The ambiguity of the term of decision-making requirements leads to a misinterpretation of the requirements resulting from data warehouse design failure and incorrect OLAP analysis. Therefore, many studies have focused on the inclusion of vague data in information systems in general, but few studies have examined this case in data warehouses. This article describes one of the shortcomings of current approaches to data warehouse design which is the study of in the requirements inaccuracy expression and how ontologies can help us to overcome it. We present a survey on this topic showing that few works that take into account the imprecision in the study of this crucial phase in the decision-making process for the presentation of challenges and problems that arise and requires more attention by researchers to improve DW design. According to our knowledge, no rigorous study of vagueness in this area were made. Keywords— Data warehouses Design, requirements analysis, imprecision, ontology",
"title": ""
},
{
"docid": "a7b6a491d85ae94285808a21dbc65ce9",
"text": "In imbalanced learning, most standard classification algorithms usually fail to properly represent data distribution and provide unfavorable classification performance. More specifically, the decision rule of minority class is usually weaker than majority class, leading to many misclassification of expensive minority class data. Motivated by our previous work ADASYN [1], this paper presents a novel kernel based adaptive synthetic over-sampling approach, named KernelADASYN, for imbalanced data classification problems. The idea is to construct an adaptive over-sampling distribution to generate synthetic minority class data. The adaptive over-sampling distribution is first estimated with kernel density estimation methods and is further weighted by the difficulty level for different minority class data. The classification performance of our proposed adaptive over-sampling approach is evaluated on several real-life benchmarks, specifically on medical and healthcare applications. The experimental results show the competitive classification performance for many real-life imbalanced data classification problems.",
"title": ""
},
{
"docid": "fce58bfa94acf2b26a50f816353e6bf2",
"text": "The perspective directions in evaluating network security are simulating possible malefactor’s actions, building the representation of these actions as attack graphs (trees, nets), the subsequent checking of various properties of these graphs, and determining security metrics which can explain possible ways to increase security level. The paper suggests a new approach to security evaluation based on comprehensive simulation of malefactor’s actions, construction of attack graphs and computation of different security metrics. The approach is intended for using both at design and exploitation stages of computer networks. The implemented software system is described, and the examples of experiments for analysis of network security level are considered.",
"title": ""
},
{
"docid": "fe20c0bee35db1db85968b4d2793b83b",
"text": "The Smule Ocarina is a wind instrument designed for the iPhone, fully leveraging its wide array of technologies: microphone input (for breath input), multitouch (for fingering), accelerometer, real-time sound synthesis, highperformance graphics, GPS/location, and persistent data connection. In this mobile musical artifact, the interactions of the ancient flute-like instrument are both preserved and transformed via breath-control and multitouch finger-holes, while the onboard global positioning and persistent data connection provide the opportunity to create a new social experience, allowing the users of Ocarina to listen to one another. In this way, Ocarina is also a type of social instrument that enables a different, perhaps even magical, sense of global connectivity.",
"title": ""
},
{
"docid": "d95ae6900ae353fa0ed32167e0c23f16",
"text": "As well known, fully convolutional network (FCN) becomes the state of the art for semantic segmentation in deep learning. Currently, new hardware designs for deep learning have focused on improving the speed and parallelism of processing units. This motivates memristive solutions, in which the memory units (i.e., memristors) have computing capabilities. However, designing a memristive deep learning network is challenging, since memristors work very differently from the traditional CMOS hardware. This paper proposes a complete solution to implement memristive FCN (MFCN). Voltage selectors are firstly utilized to realize max-pooling layers with the detailed MFCN deconvolution hardware circuit by the massively parallel structure, which is effective since the deconvolution kernel and the input feature are similar in size. Then, deconvolution calculation is realized by converting the image into a column matrix and converting the deconvolution kernel into a sparse matrix. Meanwhile, the convolution realization in MFCN is also studied with the traditional sliding window method rather than the large matrix theory to overcome the shortcoming of low efficiency. Moreover, the conductance values of memristors are predetermined in Tensorflow with ex-situ training method. In other words, we train MFCN in software, then download the trained parameters to the simulink system by writing memristor. The effectiveness of the designed MFCN scheme is verified with improved accuracy over some existing machine learning methods. The proposed scheme is also adapt to LFW dataset with three-classification tasks. However, the MFCN training is time consuming as the computational burden is heavy with thousands of weight parameters with just six layers. In future, it is necessary to sparsify the weight parameters and layers of the MFCN network to speed up computing.",
"title": ""
},
{
"docid": "3df95e4b2b1bb3dc80785b25c289da92",
"text": "The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem. However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns “motifs”, because of their close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time series databases. In addition it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we carefully motivate, then introduce, a nontrivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach on several real world datasets.",
"title": ""
},
{
"docid": "a8287a99def9fec3a9a2fda06a95e36e",
"text": "The abstraction of a process enables certain primitive forms of communication during process creation and destruction such as wait(). However, the operating system provides more general mechanisms for flexible inter-process communication. In this paper, we have studied and evaluated three commonly-used inter-process communication devices pipes, sockets and shared memory. We have identified the various factors that could affect their performance such as message size, hardware caches and process scheduling, and constructed experiments to reliably measure the latency and transfer rate of each device. We identified the most reliable timer APIs available for our measurements. Our experiments reveal that shared memory provides the lowest latency and highest throughput, followed by kernel pipes and lastly, TCP/IP sockets. However, the latency trends provide interesting insights into the construction of each mechanism. We also make certain observations on the pros and cons of each mechanism, highlighting its usefulness for different kinds of applications.",
"title": ""
},
{
"docid": "2e35483beb568ab514601ba21d70c2d3",
"text": "Determining the intended sense of words in text – word sense disambiguation (WSD) – is a long-standing problem in natural language processing. In this paper, we present WSD algorithms which use neural network language models to achieve state-of-the-art precision. Each of these methods learns to disambiguate word senses using only a set of word senses, a few example sentences for each sense taken from a licensed lexicon, and a large unlabeled text corpus. We classify based on cosine similarity of vectors derived from the contexts in unlabeled query and labeled example sentences. We demonstrate state-of-the-art results when using the WordNet sense inventory, and significantly better than baseline performance using the New Oxford American Dictionary inventory. The best performance was achieved by combining an LSTM language model with graph label propagation.",
"title": ""
},
{
"docid": "c180a56ae8ab74cd6a77f9f47ee76544",
"text": "Existing graph-based ranking methods for keyphrase extraction compute a single importance score for each word via a single random walk. Motivated by the fact that both documents and words can be represented by a mixture of semantic topics, we propose to decompose traditional random walk into multiple random walks specific to various topics. We thus build a Topical PageRank (TPR) on word graph to measure word importance with respect to different topics. After that, given the topic distribution of the document, we further calculate the ranking scores of words and extract the top ranked ones as keyphrases. Experimental results show that TPR outperforms state-of-the-art keyphrase extraction methods on two datasets under various evaluation metrics.",
"title": ""
},
{
"docid": "af0dfe672a8828587e3b27ef473ea98e",
"text": "Machine comprehension of text is the overarching goal of a great deal of research in natural language processing. The Machine Comprehension Test (Richardson et al., 2013) was recently proposed to assess methods on an open-domain, extensible, and easy-to-evaluate task consisting of two datasets. In this paper we develop a lexical matching method that takes into account multiple context windows, question types and coreference resolution. We show that the proposed method outperforms the baseline of Richardson et al. (2013), and despite its relative simplicity, is comparable to recent work using machine learning. We hope that our approach will inform future work on this task. Furthermore, we argue that MC500 is harder than MC160 due to the way question answer pairs were created.",
"title": ""
},
{
"docid": "2c25a1333dc94bf98c74b693997e2793",
"text": "In recent years, HCI has shown a rising interest in the creative practices associated with massive online communities, including crafters, hackers, DIY, and other expert amateurs. One strategy for researching creativity at this scale is through an analysis of a community's outputs, including its creative works, custom created tools, and emergent practices. In this paper, we offer one such case study, a historical account of World of Warcraft (WoW) machinima (i.e., videos produced inside of video games), which shows how the aesthetic needs and requirements of video making community coevolved with the community-made creativity support tools in use at the time. We view this process as inhabiting different layers and practices of appropriation, and through an analysis of them, we trace the ways that support for emerging stylistic conventions become built into creativity support tools over time.",
"title": ""
},
{
"docid": "e0d553cc4ca27ce67116c62c49c53d23",
"text": "We estimate a vehicle's speed, its wheelbase length, and tire track length by jointly estimating its acoustic wave pattern with a single passive acoustic sensor that records the vehicle's drive-by noise. The acoustic wave pattern is determined using the vehicle's speed, the Doppler shift factor, the sensor's distance to the vehicle's closest-point-of-approach, and three envelope shape (ES) components, which approximate the shape variations of the received signal's power envelope. We incorporate the parameters of the ES components along with estimates of the vehicle engine RPM, the number of cylinders, and the vehicle's initial bearing, loudness and speed to form a vehicle profile vector. This vector provides a fingerprint that can be used for vehicle identification and classification. We also provide possible reasons why some of the existing methods are unable to provide unbiased vehicle speed estimates using the same framework. The approach is illustrated using vehicle speed estimation and classification results obtained with field data.",
"title": ""
},
{
"docid": "ea2d97e8bde8e21b8291c370ce5815bf",
"text": "Can the cell's perception of time be expressed through the length of the shortest telomere? To address this question, we analyze an asymmetric random walk that models telomere length for each division that can decrease by a fixed length a or, if recognized by a polymerase, it increases by a fixed length b ≫ a. Our analysis of the model reveals two phases, first, a determinist drift of the length toward a quasi-equilibrium state, and second, persistence of the length near an attracting state for the majority of divisions. The measure of stability of the latter phase is the expected number of divisions at the attractor (\"lifetime\") prior to crossing a threshold T that model senescence. Using numerical simulations, we further study the distribution of times for the shortest telomere to reach the threshold T. We conclude that the telomerase regulates telomere stability by creating an effective potential barrier that separates statistically the arrival time of the shortest from the next shortest to T. The present model explains how random telomere dynamics underlies the extension of cell survival time.",
"title": ""
},
{
"docid": "46209913057e33c17d38a565e50097a3",
"text": "Power-on reset circuits are available as discrete devices as well as on-chip solutions and are indispensable to initialize some critical nodes of analog and digital designs during power-on. In this paper, we present a power-on reset circuit specifically designed for on-chip applications. The mentioned POR circuit should meet certain design requirements necessary to be integrated on-chip, some of them being area-efficiency, power-efficiency, supply rise-time insensitivity and ambient temperature insensitivity. The circuit is implemented within a small area (60mum times 35mum) using the 2.5V tolerant MOSFETs of a 0.28mum CMOS technology. It has a maximum quiescent current consumption of 40muA and works over infinite range of supply rise-times and ambient temperature range of -40degC to 150degC",
"title": ""
},
{
"docid": "91446020934f6892a3a4807f5a7b3829",
"text": "Collaborative filtering recommends items to a user based on the interests of other users having similar preferences. However, high dimensional, sparse data result in poor performance in collaborative filtering. This paper introduces an approach called multiple metadata-based collaborative filtering (MMCF), which utilizes meta-level information to alleviate this problem, e.g., metadata such as genre, director, and actor in the case of movie recommendation. MMCF builds a k-partite graph of users, movies and multiple metadata, and extracts implicit relationships among the metadata and between users and the metadata. Then the implicit relationships are propagated further by applying random walk process in order to alleviate the problem of sparseness in the original data set. The experimental results show substantial improvement over previous approaches on the real Netflix movie dataset.",
"title": ""
},
{
"docid": "ee9cb495280dc6e252db80c23f2f8c2b",
"text": "Due to the dramatical increase in popularity of mobile devices in the last decade, more sensitive user information is stored and accessed on these devices everyday. However, most existing technologies for user authentication only cover the login stage or only work in restricted controlled environments or GUIs in the post login stage. In this work, we present TIPS, a Touch based Identity Protection Service that implicitly and unobtrusively authenticates users in the background by continuously analyzing touch screen gestures in the context of a running application. To the best of our knowledge, this is the first work to incorporate contextual app information to improve user authentication. We evaluate TIPS over data collected from 23 phone owners and deployed it to 13 of them with 100 guest users. TIPS can achieve over 90% accuracy in real-life naturalistic conditions within a small amount of computational overhead and 6% of battery usage.",
"title": ""
},
{
"docid": "d78acb79ccd229af7529dae1408dea6a",
"text": "Making recommendations by learning to rank is becoming an increasingly studied area. Approaches that use stochastic gradient descent scale well to large collaborative filtering datasets, and it has been shown how to approximately optimize the mean rank, or more recently the top of the ranked list. In this work we present a family of loss functions, the k-order statistic loss, that includes these previous approaches as special cases, and also derives new ones that we show to be useful. In particular, we present (i) a new variant that more accurately optimizes precision at k, and (ii) a novel procedure of optimizing the mean maximum rank, which we hypothesize is useful to more accurately cover all of the user's tastes. The general approach works by sampling N positive items, ordering them by the score assigned by the model, and then weighting the example as a function of this ordered set. Our approach is studied in two real-world systems, Google Music and YouTube video recommendations, where we obtain improvements for computable metrics, and in the YouTube case, increased user click through and watch duration when deployed live on www.youtube.com.",
"title": ""
}
] |
scidocsrr
|
49cfcea811b0d8d1823a5281c2317fb0
|
Untrimmed Video Classification for Activity Detection: submission to ActivityNet Challenge
|
[
{
"docid": "848aae58854681e75fae293e2f8d2fc5",
"text": "Over last several decades, computer vision researchers have been devoted to find good feature to solve different tasks, such as object recognition, object detection, object segmentation, activity recognition and so forth. Ideal features transform raw pixel intensity values to a representation in which these computer vision problems are easier to solve. Recently, deep features from covolutional neural network(CNN) have attracted many researchers in computer vision. In the supervised setting, these hierarchies are trained to solve specific problems by minimizing an objective function. More recently, the feature learned from large scale image dataset have been proved to be very effective and generic for many computer vision task. The feature learned from recognition task can be used in the object detection task. This work uncover the principles that lead to these generic feature representations in the transfer learning, which does not need to train the dataset again but transfer the rich feature from CNN learned from ImageNet dataset. We begin by summarize some related prior works, particularly the paper in object recognition, object detection and segmentation. We introduce the deep feature to computer vision task in intelligent transportation system. We apply deep feature in object detection task, especially in vehicle detection task. To make fully use of objectness proposals, we apply proposal generator on road marking detection and recognition task. Third, to fully understand the transportation situation, we introduce the deep feature into scene understanding. We experiment each task for different public datasets, and prove our framework is robust.",
"title": ""
}
] |
[
{
"docid": "97c3860dfb00517f744fd9504c4e7f9f",
"text": "The plastic film surface treatment load is considered as a nonlinear capacitive load, which is rather difficult for designing of an inverter. The series resonant inverter (SRI) connected to the load via transformer has been found effective for it's driving. In this paper, a surface treatment based on a pulse density modulation (PDM) and pulse frequency modulation (PFM) hybrid control scheme is described. The PDM scheme is used to regulate the output power of the inverter and the PFM scheme is used to compensate for temperature and other environmental influences on the discharge. Experimental results show that the PDM and PFM hybrid control series-resonant inverter (SRI) makes the corona discharge treatment simple and compact, thus leading to higher efficiency.",
"title": ""
},
{
"docid": "78321a0af7f5ab76809c6f7d08f2c15a",
"text": "The mass media are ranked with respect to their perceived helpfulness in satisfying clusters of needs arising from social roles and individual dispositions. For example, integration into the sociopolitical order is best served by newspaper; while \"knowing oneself \" is best served by books. Cinema and books are more helpful as means of \"escape\" than is television. Primary relations, holidays and other cultural activities are often more important than the mass media in satisfying needs. Television is the least specialized medium, serving many different personal and political needs. The \"interchangeability\" of the media over a variety of functions orders televisions, radio, newspapers, books, and cinema in a circumplex. We speculate about which attributes of the media explain the social and psychological needs they serve best. The data, drawn from an Israeli survey, are presented as a basis for cross-cultural comparison. Disciplines Communication | Social and Behavioral Sciences This journal article is available at ScholarlyCommons: http://repository.upenn.edu/asc_papers/267 ON THE USE OF THE MASS MEDIA FOR IMPORTANT THINGS * ELIHU KATZ MICHAEL GUREVITCH",
"title": ""
},
{
"docid": "c75388c19397bf1e743970cb32649b17",
"text": "In recent years, there has been a substantial amount of work on large-scale data analytics using Hadoop-based platforms running on large clusters of commodity machines. A lessexplored topic is how those data, dominated by application logs, are collected and structured to begin with. In this paper, we present Twitter’s production logging infrastructure and its evolution from application-specific logging to a unified “client events” log format, where messages are captured in common, well-formatted, flexible Thrift messages. Since most analytics tasks consider the user session as the basic unit of analysis, we pre-materialize “session sequences”, which are compact summaries that can answer a large class of common queries quickly. The development of this infrastructure has streamlined log collection and data analysis, thereby improving our ability to rapidly experiment and iterate on various aspects of the service.",
"title": ""
},
{
"docid": "086f5e6dd7889d8dcdaddec5852afbdb",
"text": "Fast advances in the wireless technology and the intensive penetration of cell phones have motivated banks to spend large budget on building mobile banking systems, but the adoption rate of mobile banking is still underused than expected. Therefore, research to enrich current knowledge about what affects individuals to use mobile banking is required. Consequently, this study employs the Unified Theory of Acceptance and Use of Technology (UTAUT) to investigate what impacts people to adopt mobile banking. Through sampling 441 respondents, this study empirically concluded that individual intention to adopt mobile banking was significantly influenced by social influence, perceived financial cost, performance expectancy, and perceived credibility, in their order of influencing strength. The behavior was considerably affected by individual intention and facilitating conditions. As for moderating effects of gender and age, this study discovered that gender significantly moderated the effects of performance expectancy and perceived financial cost on behavioral intention, and the age considerably moderated the effects of facilitating conditions and perceived self-efficacy on actual adoption behavior.",
"title": ""
},
{
"docid": "8f6d9ed651c783cf88bd6b3ab5b3012c",
"text": "To the Editor: Gianotti-Crosti syndrome (GCS) classically presents in children as a self-limited, symmetric erythematous papular eruption affecting the cheeks, extremities, and buttocks. While initial reports implicated hepatitis B virus as the etiologic agent, many other bacterial, viral, and vaccine triggers have since been described. A previously healthy 2-year-old boy presented with a 3-week history of a cutaneous eruption that initially appeared on his legs and subsequently progressed to affect his arms and face. Two weeks after onset of the eruption, he was immunized with intramuscular Vaxigrip influenza vaccination (Sanofi Pasteur), and new lesions appeared at the immunization site on his right upper arm. Physical examination demonstrated an afebrile child with erythematous papules on the cheeks, arms, and legs (Fig 1). He had a localized papular eruption on his right upper arm (Fig 2). There was no lymphadenopathy or hepatosplenomegaly. Laboratory investigations revealed leukocytosis (white cell count, 14,600/mm) with a normal differential, reactive thrombocytosis ( platelet count, 1,032,000/mm), a positive urine culture for cytomegalovirus, and positive IgM serology for Epstein-Barr virus (EBV). Histopathologic examination of a skin biopsy specimen from the right buttock revealed a perivascular and somewhat interstitial lymphocytic infiltrate in the superficial and mid-dermis with intraepidermal exocytosis of lymphocytes, mild spongiosis and papillary dermal edema. He was treated with 2.5% hydrocortisone cream, and the eruption resolved. Twelve months later, he presented with a similar papular eruption localized to the left upper arm at the site of a recent intramuscular influenza vaccination (Vaxigrip). Although an infection represents the most important etiologic agent, a second event involving immunomodulation might lead to further disease accentuation, thus explaining the association of GCS with vaccinations. In our case, there was evidence of both cytomegalovirus (CMV) and EBV infection as well as a recent history of immunization. Localized accentuation of papules at the immunization site was unusual, as previous cases of GCS following immunizations have had a widespread and typically symmetric eruption. It is possible that trauma from the injection or a component of the vaccine elicited a Koebner response, causing local accentuation. There are no previous reports of recurrence of vaccine-associated GCS. One report documented recurrence with two different infectious triggers. As GCS is a mild and selflimiting disease, further vaccinations are not contraindicated. Andrei I. Metelitsa, MD, FRCPC, and Loretta Fiorillo, MD, FRCPC",
"title": ""
},
{
"docid": "6e63abd83cc2822f011c831234c6d2e7",
"text": "The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-theart in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.",
"title": ""
},
{
"docid": "de04d3598687b34b877d744956ca4bcd",
"text": "We investigate the reputational impact of financial fraud for outside directors based on a sample of firms facing shareholder class action lawsuits. Following a financial fraud lawsuit, outside directors do not face abnormal turnover on the board of the sued firm but experience a significant decline in other board seats held. The decline in other directorships is greater for more severe cases of fraud and when the outside director bears greater responsibility for monitoring fraud. Interlocked firms that share directors with the sued firm exhibit valuation declines at the lawsuit filing. When fraud-affiliated directors depart from boards of interlocked firms, these firms experience a significant increase in valuation.",
"title": ""
},
{
"docid": "b60a4efcdd52d6209069415540016849",
"text": "Vulnerabilities need to be detected and removed from software. Although previous studies demonstrated the usefulness of employing prediction techniques in deciding about vulnerabilities of software components, the accuracy and improvement of effectiveness of these prediction techniques is still a grand challenging research question. This paper proposes a hybrid technique based on combining N-gram analysis and feature selection algorithms for predicting vulnerable software components where features are defined as continuous sequences of token in source code files, i.e., Java class file. Machine learning-based feature selection algorithms are then employed to reduce the feature and search space. We evaluated the proposed technique based on some Java Android applications, and the results demonstrated that the proposed technique could predict vulnerable classes, i.e., software components, with high precision, accuracy and recall.",
"title": ""
},
{
"docid": "b1b511c0e014861dac12c2254f6f1790",
"text": "This paper describes automatic speech recognition (ASR) systems developed jointly by RWTH, UPB and FORTH for the 1ch, 2ch and 6ch track of the 4th CHiME Challenge. In the 2ch and 6ch tracks the final system output is obtained by a Confusion Network Combination (CNC) of multiple systems. The Acoustic Model (AM) is a deep neural network based on Bidirectional Long Short-Term Memory (BLSTM) units. The systems differ by front ends and training sets used for the acoustic training. The model for the 1ch track is trained without any preprocessing. For each front end we trained and evaluated individual acoustic models. We compare the ASR performance of different beamforming approaches: a conventional superdirective beamformer [1] and an MVDR beamformer as in [2], where the steering vector is estimated based on [3]. Furthermore we evaluated a BLSTM supported Generalized Eigenvalue beamformer using NN-GEV [4]. The back end is implemented using RWTH’s open-source toolkits RASR [5], RETURNN [6] and rwthlm [7]. We rescore lattices with a Long Short-Term Memory (LSTM) based language model. The overall best results are obtained by a system combination that includes the lattices from the system of UPB’s submission [8]. Our final submission scored second in each of the three tracks of the 4th CHiME Challenge.",
"title": ""
},
{
"docid": "b73526f1fb0abb4373421994dbd07822",
"text": "in our country around 2.78% of peoples are not able to speak (dumb). Their communications with others are only using the motion of their hands and expressions. We proposed a new technique called artificial speaking mouth for dumb people. It will be very helpful to them for conveying their thoughts to others. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity the artificial mouth is introduced for the dumb peoples. This system is based on the motion sensor. According to dumb people, for every motion they have a meaning. That message is kept in a database. Likewise all templates are kept in the database. In the real time the template database is fed into a microcontroller and the motion sensor is fixed in their hand. For every action the motion sensors get accelerated and give the signal to the microcontroller. The microcontroller matches the motion with the database and produces the speech signal. The output of the system is using the speaker. By properly updating the database the dumb will speak like a normal person using the artificial mouth. The system also includes a text to speech conversion (TTS) block that interprets the matched gestures.",
"title": ""
},
{
"docid": "874973c7a28652d5d9859088b965e76c",
"text": "Recommender systems are commonly defined as applications that e-commerce sites exploit to suggest products and provide consumers with information to facilitate their decision-making processes.1 They implicitly assume that we can map user needs and constraints, through appropriate recommendation algorithms, and convert them into product selections using knowledge compiled into the intelligent recommender. Knowledge is extracted from either domain experts (contentor knowledge-based approaches) or extensive logs of previous purchases (collaborative-based approaches). Furthermore, the interaction process, which turns needs into products, is presented to the user with a rationale that depends on the underlying recommendation technology and algorithms. For example, if the system funnels the behavior of other users in the recommendation, it explicitly shows reviews of the selected products or quotes from a similar user. Recommender systems are now a popular research area2 and are increasingly used by e-commerce sites.1 For travel and tourism,3 the two most successful recommender system technologies (see Figure 1) are Triplehop’s TripMatcher (used by www. ski-europe.com, among others) and VacationCoach’s expert advice platform, MePrint (used by travelocity.com). Both of these recommender systems try to mimic the interactivity observed in traditional counselling sessions with travel agents when users search for advice on a possible holiday destination. From a technical viewpoint, they primarily use a content-based approach, in which the user expresses needs, benefits, and constraints using the offered language (attributes). The system then matches the user preferences with items in a catalog of destinations (described with the same language). VacationCoach exploits user profiling by explicitly asking the user to classify himself or herself in one profile (for example, as a “culture creature,” “beach bum,” or “trail trekker”), which induces implicit needs that the user doesn’t provide. The user can even input precise profile information by completing the appropriate form. TripleHop’s matching engine uses a more sophisticated approach to reduce user input. It guesses importance of attributes that the user does not explicitly mention. It then combines statistics on past user queries with a prediction computed as a weighted average of importance assigned by similar users.4",
"title": ""
},
{
"docid": "d106a47637195845ed3d218dbb766c2c",
"text": "The efficiency of three forward-pruning techniques, i.e., futility pruning, null-move pruning, and LMR, is analyzed in shogi, a Japanese chess variant. It is shown that the techniques with the a–b pruning reduce the effective branching factor of shogi endgames to 2.8 without sacrificing much accuracy of the search results. Because the average number of the raw branching factor in shogi is around 80, the pruning techniques reduce the search space more effectively than in chess. 2011 International Federation for Information Processing Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "8f1a5420deb75a2b664ceeaae8fc03f9",
"text": "A stretchable and multiple-force-sensitive electronic fabric based on stretchable coaxial sensor electrodes is fabricated for artificial-skin application. This electronic fabric, with only one kind of sensor unit, can simultaneously map and quantify the mechanical stresses induced by normal pressure, lateral strain, and flexion.",
"title": ""
},
{
"docid": "b27038accdabab12d8e0869aba20a083",
"text": "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.",
"title": ""
},
{
"docid": "b413cd956623afce3d50780ff90b0efe",
"text": "Parkinson's disease (PD) is the second most common neurodegenerative disorder. The majority of cases do not arise from purely genetic factors, implicating an important role of environmental factors in disease pathogenesis. Well-established environmental toxins important in PD include pesticides, herbicides, and heavy metals. However, many toxicants linked to PD and used in animal models are rarely encountered. In this context, other factors such as dietary components may represent daily exposures and have gained attention as disease modifiers. Several in vitro, in vivo, and human epidemiological studies have found a variety of dietary factors that modify PD risk. Here, we critically review findings on association between dietary factors, including vitamins, flavonoids, calorie intake, caffeine, alcohol, and metals consumed via food and fatty acids and PD. We have also discussed key data on heterocyclic amines that are produced in high-temperature cooked meat, which is a new emerging field in the assessment of dietary factors in neurological diseases. While more research is clearly needed, significant evidence exists that specific dietary factors can modify PD risk.",
"title": ""
},
{
"docid": "b07b369dc622fad777fd09b23c284e12",
"text": "Stroke is the number one cause of severe physical disability in the UK. Recent studies have shown that technologies such as virtual reality and imaging can provide an engaging and motivating tool for physical rehabilitation. In this paper we summarize previous work in our group using virtual reality technology and webcam-based games. We then present early work we are conducting in experimenting with desktop augmented reality (AR) for rehabilitation. AR allows the user to use real objects to interact with computer-generated environments. Markers attached to the real objects enable the system (via a webcam) to track the position and orientation of each object as it is moved. The system can then augment the captured image of the real environment with computer-generated graphics to present a variety of game or task-driven scenarios to the user. We discuss the development of rehabilitation prototypes using available AR libraries and express our thoughts on the potential of AR technology.",
"title": ""
},
{
"docid": "ba6709c1413a1c28c99e686e065ce564",
"text": "Essential oils are complex mixtures of hydrocarbons and their oxygenated derivatives arising from two different isoprenoid pathways. Essential oils are produced by glandular trichomes and other secretory structures, specialized secretory tissues mainly diffused onto the surface of plant organs, particularly flowers and leaves, thus exerting a pivotal ecological role in plant. In addition, essential oils have been used, since ancient times, in many different traditional healing systems all over the world, because of their biological activities. Many preclinical studies have documented antimicrobial, antioxidant, anti-inflammatory and anticancer activities of essential oils in a number of cell and animal models, also elucidating their mechanism of action and pharmacological targets, though the paucity of in human studies limits the potential of essential oils as effective and safe phytotherapeutic agents. More well-designed clinical trials are needed in order to ascertain the real efficacy and safety of these plant products.",
"title": ""
},
{
"docid": "56a4a9b20391f13e7ced38586af9743b",
"text": "The most common type of nasopharyngeal tumor is nasopharyngeal carcinoma. The etiology is multifactorial with race, genetics, environment and Epstein-Barr virus (EBV) all playing a role. While rare in Caucasian populations, it is one of the most frequent nasopharyngeal cancers in Chinese, and has endemic clusters in Alaskan Eskimos, Indians, and Aleuts. Interestingly, as native-born Chinese migrate, the incidence diminishes in successive generations, although still higher than the native population. EBV is nearly always present in NPC, indicating an oncogenic role. There are raised antibodies, higher titers of IgA in patients with bulky (large) tumors, EBERs (EBV encoded early RNAs) in nearly all tumor cells, and episomal clonal expansion (meaning the virus entered the tumor cell before clonal expansion). Consequently, the viral titer can be used to monitor therapy or possibly as a diagnostic tool in the evaluation of patients who present with a metastasis from an unknown primary. The effect of environmental carcinogens, especially those which contain a high levels of volatile nitrosamines are also important in the etiology of NPC. Chinese eat salted fish, specifically Cantonese-style salted fish, and especially during early life. Perhaps early life (weaning period) exposure is important in the ‘‘two-hit’’ hypothesis of cancer development. Smoking, cooking, and working under poor ventilation, the use of nasal oils and balms for nose and throat problems, and the use of herbal medicines have also been implicated but are in need of further verification. Likewise, chemical fumes, dusts, formaldehyde exposure, and radiation have all been implicated in this complicated disorder. Various human leukocyte antigens (HLA) are also important etiologic or prognostic indicators in NPC. While histocompatibility profiles of HLA-A2, HLA-B17 and HLA-Bw46 show increased risk for developing NPC, there is variable expression depending on whether they occur alone or jointly, further conferring a variable prognosis (B17 is associated with a poor and A2B13 with a good prognosis, respectively).",
"title": ""
},
{
"docid": "5cfc4911a59193061ab55c2ce5013272",
"text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.",
"title": ""
},
{
"docid": "afae66e9ff49274bbb546cd68490e5e4",
"text": "Question-Answering Bulletin Boards (QABB), such as Yahoo! Answers and Windows Live QnA, are gaining popularity recently. Communications on QABB connect users, and the overall connections can be regarded as a social network. If the evolution of social networks can be predicted, it is quite useful for encouraging communications among users. This paper describes an improved method for predicting links based on weighted proximity measures of social networks. The method is based on an assumption that proximities between nodes can be estimated better by using both graph proximity measures and the weights of existing links in a social network. In order to show the effectiveness of our method, the data of Yahoo! Chiebukuro (Japanese Yahoo! Answers) are used for our experiments. The results show that our method outperforms previous approaches, especially when target social networks are sufficiently dense.",
"title": ""
}
] |
scidocsrr
|
cfa5df626c7295941eb72c22ff6b61cf
|
Fast and robust face recognition via coding residual map learning based adaptive masking
|
[
{
"docid": "da416ce58897f6f86d9cd7b0de422508",
"text": "In linear representation based face recognition (FR), it is expected that a discriminative dictionary can be learned from the training samples so that the query sample can be better represented for classification. On the other hand, dimensionality reduction is also an important issue for FR. It can not only reduce significantly the storage space of face images, but also enhance the discrimination of face feature. Existing methods mostly perform dimensionality reduction and dictionary learning separately, which may not fully exploit the discriminative information in the training samples. In this paper, we propose to learn jointly the projection matrix for dimensionality reduction and the discriminative dictionary for face representation. The joint learning makes the learned projection and dictionary better fit with each other so that a more effective face classification can be obtained. The proposed algorithm is evaluated on benchmark face databases in comparison with existing linear representation based methods, and the results show that the joint learning improves the FR rate, particularly when the number of training samples per class is small.",
"title": ""
},
{
"docid": "7fdb4e14a038b11bb0e92917d1e7ce70",
"text": "Recently the sparse representation (or coding) based classification (SRC) has been successfully used in face recognition. In SRC, the testing image is represented as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of coding residual. Such a sparse coding model actually assumes that the coding residual follows Gaussian or Laplacian distribution, which may not be accurate enough to describe the coding errors in practice. In this paper, we propose a new scheme, namely the robust sparse coding (RSC), by modeling the sparse coding as a sparsity-constrained robust regression problem. The RSC seeks for the MLE (maximum likelihood estimation) solution of the sparse coding problem, and it is much more robust to outliers (e.g., occlusions, corruptions, etc.) than SRC. An efficient iteratively reweighted sparse coding algorithm is proposed to solve the RSC model. Extensive experiments on representative face databases demonstrate that the RSC scheme is much more effective than state-of-the-art methods in dealing with face occlusion, corruption, lighting and expression changes, etc.",
"title": ""
},
{
"docid": "7655df3f32e6cf7a5545ae2231f71e7c",
"text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.",
"title": ""
}
] |
[
{
"docid": "9cb682049f4a4d1291189b7cfccafb1e",
"text": "The sequencing by hybridization (SBH) of determining the order in which nucleotides should occur on a DNA string is still under discussion for enhancements on computational intelligence although the next generation of DNA sequencing has come into existence. In the last decade, many works related to graph theory-based DNA sequencing have been carried out in the literature. This paper proposes a method for SBH by integrating hypergraph with genetic algorithm (HGGA) for designing a novel analytic technique to obtain DNA sequence from its spectrum. The paper represents elements of the spectrum and its relation as hypergraph and applies the unimodular property to ensure the compatibility of relations between l-mers. The hypergraph representation and unimodular property are bound with the genetic algorithm that has been customized with a novel selection and crossover operator reducing the computational complexity with accelerated convergence. Subsequently, upon determining the primary strand, an anti-homomorphism is invoked to find the reverse complement of the sequence. The proposed algorithm is implemented in the GenBank BioServer datasets, and the results are found to prove the efficiency of the algorithm. The HGGA is a non-classical algorithm with significant advantages and computationally attractive complexity reductions ranging to $$O(n^{2} )$$ O ( n 2 ) with improved accuracy that makes it prominent for applications other than DNA sequencing like image processing, task scheduling and big data processing.",
"title": ""
},
{
"docid": "b3962fd4000fced796f3764d009c929e",
"text": "Low-field extremity magnetic resonance imaging (lfMRI) is currently commercially available and has been used clinically to evaluate rheumatoid arthritis (RA). However, one disadvantage of this new modality is that the field of view (FOV) is too small to assess hand and wrist joints simultaneously. Thus, we have developed a new lfMRI system, compacTscan, with a FOV that is large enough to simultaneously assess the entire wrist to proximal interphalangeal joint area. In this work, we examined its clinical value compared to conventional 1.5 tesla (T) MRI. The comparison involved evaluating three RA patients by both 0.3 T compacTscan and 1.5 T MRI on the same day. Bone erosion, bone edema, and synovitis were estimated by our new compact MRI scoring system (cMRIS) and the kappa coefficient was calculated on a joint-by-joint basis. We evaluated a total of 69 regions. Bone erosion was detected in 49 regions by compacTscan and in 48 regions by 1.5 T MRI, while the total erosion score was 77 for compacTscan and 76.5 for 1.5 T MRI. These findings point to excellent agreement between the two techniques (kappa = 0.833). Bone edema was detected in 14 regions by compacTscan and in 19 by 1.5 T MRI, and the total edema score was 36.25 by compacTscan and 47.5 by 1.5 T MRI. Pseudo-negative findings were noted in 5 regions. However, there was still good agreement between the techniques (kappa = 0.640). Total number of evaluated joints was 33. Synovitis was detected in 13 joints by compacTscan and 14 joints by 1.5 T MRI, while the total synovitis score was 30 by compacTscan and 32 by 1.5 T MRI. Thus, although 1 pseudo-positive and 2 pseudo-negative findings resulted from the joint evaluations, there was again excellent agreement between the techniques (kappa = 0.827). Overall, the data obtained by our compacTscan system showed high agreement with those obtained by conventional 1.5 T MRI with regard to diagnosis and the scoring of bone erosion, edema, and synovitis. We conclude that compacTscan is useful for diagnosis and estimation of disease activity in patients with RA.",
"title": ""
},
{
"docid": "3a52576a2fdaa7f6f9632dc8c4bf0971",
"text": "As known, fractional CO2 resurfacing treatments are more effective than non-ablative ones against aging signs, but post-operative redness and swelling prolong the overall downtime requiring up to steroid administration in order to reduce these local systems. In the last years, an increasing interest has been focused on the possible use of probiotics for treating inflammatory and allergic conditions suggesting that they can exert profound beneficial effects on skin homeostasis. In this work, the Authors report their experience on fractional CO2 laser resurfacing and provide the results of a new post-operative topical treatment with an experimental cream containing probiotic-derived active principles potentially able to modulate the inflammatory reaction associated to laser-treatment. The cream containing DermaACB (CERABEST™) was administered post-operatively to 42 consecutive patients who were treated with fractional CO2 laser. All patients adopted the cream twice a day for 2 weeks. Grades were given according to outcome scale. The efficacy of the cream containing DermaACB was evaluated comparing the rate of post-operative signs vanishing with a control group of 20 patients topically treated with an antibiotic cream and a hyaluronic acid based cream. Results registered with the experimental treatment were good in 22 patients, moderate in 17, and poor in 3 cases. Patients using the study cream took an average time of 14.3 days for erythema resolution and 9.3 days for swelling vanishing. The post-operative administration of the cream containing DermaACB induces a quicker reduction of post-operative erythema and swelling when compared to a standard treatment.",
"title": ""
},
{
"docid": "b90b7b44971cf93ba343b5dcdd060875",
"text": "This paper discusses a general approach to qualitative modeling based on fuzzy logic. The method of qualitative modeling is divided into two parts: fuzzy modeling and linguistic approximation. It proposes to use a fuzzy clustering method (fuzzy c-means method) to identify the structure of a fuzzy model. To clarify the advantages of the proposed method, it also shows some examples of modeling, among them a model of a dynamical process and a model of a human operator’s control action.",
"title": ""
},
{
"docid": "a8a802b8130d2b6a1b2dae84d53fb7c9",
"text": "This paper addresses an open challenge in educational data mining, i.e., the problem of using observed prerequisite relations among courses to learn a directed universal concept graph, and using the induced graph to predict unobserved prerequisite relations among a broader range of courses. This is particularly useful to induce prerequisite relations among courses from different providers (universities, MOOCs, etc.). We propose a new framework for inference within and across two graphs---at the course level and at the induced concept level---which we call Concept Graph Learning (CGL). In the training phase, our system projects the course-level links onto the concept space to induce directed concept links; in the testing phase, the concept links are used to predict (unobserved) prerequisite links for test-set courses within the same institution or across institutions. The dual mappings enable our system to perform an interlingua-style transfer learning, e.g. treating the concept graph as the interlingua, and inducing prerequisite links in a transferable manner across different universities. Experiments on our newly collected data sets of courses from MIT, Caltech, Princeton and CMU show promising results, including the viability of CGL for transfer learning.",
"title": ""
},
{
"docid": "6fb1f05713db4e771d9c610fa9c9925d",
"text": "Objectives: Straddle injury represents a rare and complex injury to the female genito urinary tract (GUT). Overall prevention would be the ultimate goal, but due to persistent inhomogenity and inconsistency in definitions and guidelines, or suboptimal coding, the optimal study design for a prevention programme is still missing. Thus, medical records data were tested for their potential use for an injury surveillance registry and their impact on future prevention programmes. Design: Retrospective record analysis out of a 3 year period. Setting: All patients were treated exclusively by the first author. Patients: Six girls, median age 7 years, range 3.5 to 12 years with classical straddle injury. Interventions: Medical treatment and recording according to National and International Standards. Main Outcome Measures: All records were analyzed for accuracy in diagnosis and coding, surgical procedure, time and location of incident and examination findings. Results: All registration data sets were complete. A specific code for “straddle injury” in International Classification of Diseases (ICD) did not exist. Coding followed mainly reimbursement issues and specific information about the injury was usually expressed in an individual style. Conclusions: As demonstrated in this pilot, population based medical record data collection can play a substantial part in local injury surveillance registry and prevention initiatives planning.",
"title": ""
},
{
"docid": "7968e0f2960a7dce6017699fd1222e36",
"text": "This work investigates the role of contrasting discourse relations signaled by cue phrases, together with phrase positional information, in predicting sentiment at the phrase level. Two domains of online reviews were chosen. The first domain is of nutritional supplement reviews, which are often poorly structured yet also allow certain simplifying assumptions to be made. The second domain is of hotel reviews, which have somewhat different characteristics. A corpus is built from these reviews, and manually tagged for polarity. We propose and evaluate a few new features that are realized through a lightweight method of discourse analysis, and use these features in a hybrid lexicon and machine learning based classifier. Our results show that these features may be used to obtain an improvement in classification accuracy compared to other traditional machine learning approaches.",
"title": ""
},
{
"docid": "bf164afc6315bf29a07e6026a3db4a26",
"text": "iBeacons are a new way to interact with hardware. An iBeacon is a Bluetooth Low Energy device that only sends a signal in a specific format. They are like a lighthouse that sends light signals to boats. This paper explains what an iBeacon is, how it works and how it can simplify your daily life, what restriction comes with iBeacon and how to improve this restriction., as well as, how to use Location-based Services to track items. E.g., every time you touchdown at an airport and wait for your suitcase at the luggage reclaim, you have no information when your luggage will arrive at the conveyor belt. With an iBeacon inside your suitcase, it is possible to track the luggage and to receive a push notification about it even before you can see it. This is just one possible solution to use them. iBeacon can create a completely new shopping experience or make your home smarter. This paper demonstrates the luggage tracking use case and evaluates its possibilities and restrictions.",
"title": ""
},
{
"docid": "3bff3136e5e2823d0cca2f864fe9e512",
"text": "Cloud computing provides variety of services with the growth of their offerings. Due to efficient services, it faces numerous challenges. It is based on virtualization, which provides users a plethora computing resources by internet without managing any infrastructure of Virtual Machine (VM). With network virtualization, Virtual Machine Manager (VMM) gives isolation among different VMs. But, sometimes the levels of abstraction involved in virtualization have been reducing the workload performance which is also a concern when implementing virtualization to the Cloud computing domain. In this paper, it has been explored how the vendors in cloud environment are using Containers for hosting their applications and also the performance of VM deployments. It also compares VM and Linux Containers with respect to the quality of service, network performance and security evaluation.",
"title": ""
},
{
"docid": "8a4b1c87b85418ce934f16003a481f27",
"text": "Current parking space vacancy detection systems use simple trip sensors at the entry and exit points of parking lots. Unfortunately, this type of system fails when a vehicle takes up more than one spot or when a parking lot has different types of parking spaces. Therefore, I propose a camera-based system that would use computer vision algorithms for detecting vacant parking spaces. My algorithm uses a combination of car feature point detection and color histogram classification to detect vacant parking spaces in static overhead images.",
"title": ""
},
{
"docid": "d2f4159b73f6baf188d49c43e6215262",
"text": "In this paper, we compare the performance of descriptors computed for local interest regions, as, for example, extracted by the Harris-Affine detector [Mikolajczyk, K and Schmid, C, 2004]. Many different descriptors have been proposed in the literature. It is unclear which descriptors are more appropriate and how their performance depends on the interest region detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the detector. Our evaluation uses as criterion recall with respect to precision and is carried out for different image transformations. We compare shape context [Belongie, S, et al., April 2002], steerable filters [Freeman, W and Adelson, E, Setp. 1991], PCA-SIFT [Ke, Y and Sukthankar, R, 2004], differential invariants [Koenderink, J and van Doorn, A, 1987], spin images [Lazebnik, S, et al., 2003], SIFT [Lowe, D. G., 1999], complex filters [Schaffalitzky, F and Zisserman, A, 2002], moment invariants [Van Gool, L, et al., 1996], and cross-correlation for different types of interest regions. We also propose an extension of the SIFT descriptor and show that it outperforms the original method. Furthermore, we observe that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best. Moments and steerable filters show the best performance among the low dimensional descriptors.",
"title": ""
},
{
"docid": "4033a48235fc21987549bdc0ca1a893c",
"text": "A novel algorithm for vehicle safety distance between driving cars for vehicle safety warning system is presented in this paper. The presented system concept includes a distance obstacle detection and safety distance calculation. The system detects the distance between the car and the in front of vehicles (obstacles) and uses the vehicle speed and other parameters to calculate the braking safety distance of the moving car. The system compares the obstacle distance and braking safety distance which are used to determine the moving vehicle's safety distance is enough or not. This paper focuses on the solution algorithm presentation.",
"title": ""
},
{
"docid": "44dbbc80c05cbbd95bacdf2f0a724db2",
"text": "Most of the existing methods for the recognition of faces and expressions consider either the expression-invariant face recognition problem or the identity-independent facial expression recognition problem. In this paper, we propose joint face and facial expression recognition using a dictionary-based component separation algorithm (DCS). In this approach, the given expressive face is viewed as a superposition of a neutral face component with a facial expression component which is sparse with respect to the whole image. This assumption leads to a dictionary-based component separation algorithm which benefits from the idea of sparsity and morphological diversity. This entails building data-driven dictionaries for neutral and expressive components. The DCS algorithm then uses these dictionaries to decompose an expressive test face into its constituent components. The sparse codes we obtain as a result of this decomposition are then used for joint face and expression recognition. Experiments on publicly available expression and face data sets show the effectiveness of our method.",
"title": ""
},
{
"docid": "4d403184b8f482449130bbb0ee1fb2cf",
"text": "Finite element analysis A 2D finite element analysis for the numerical prediction of capacity curve of unreinforced masonry (URM) walls is conducted. The studied model is based on the fiber finite element approach. The emphasis of this paper will be on the errors obtained from fiber finite element analysis of URM structures under pushover analysis. The masonry material is modeled by different constitutive stress-strain model in compression and tension. OpenSees software is employed to analysis the URM walls. Comparison of numerical predictions with experimental data, it is shown that the fiber model employed in OpenSees cannot properly predict the behavior of URM walls with balance between accuracy and low computational efforts. Additionally, the finite element analyses results show appropriate predictions of some experimental data when the real tensile strength of masonry material is changed. Hence, from the viewpoint of this result, it is concluded that obtained results from fiber finite element analyses employed in OpenSees are unreliable because the exact behavior of masonry material is different from the adopted masonry material models used in modeling process.",
"title": ""
},
{
"docid": "3f2081f9c1cf10e9ec27b2541f828320",
"text": "As the heart of an aircraft, the aircraft engine's condition directly affects the safety, reliability, and operation of the aircraft. Prognostics and health management for aircraft engines can provide advance warning of failure and estimate the remaining useful life. However, aircraft engine systems are complex with both intangible and uncertain factors, it is difficult to model the complex degradation process, and no single prognostic approach can effectively solve this critical and complicated problem. Thus, fusion prognostics is conducted to obtain more accurate prognostics results. In this paper, a prognostics and health management-oriented integrated fusion prognostic framework is developed to improve the system state forecasting accuracy. This framework strategically fuses the monitoring sensor data and integrates the strengths of the data-driven prognostics approach and the experience-based approach while reducing their respective limitations. As an application example, this developed fusion prognostics framework is employed to predict the remaining useful life of an aircraft gas turbine engine based on sensor data. The results demonstrate that the proposed fusion prognostics framework is an effective prognostics tool, which can provide a more accurate and robust remaining useful life estimation than any single prognostics method.",
"title": ""
},
{
"docid": "572ae23dd73dfb0a7cbc04d05772528f",
"text": "Machine learning models with very low test error have been shown to be consistently vulnerable to small, adversarially chosen perturbations of the input. We hypothesize that this counterintuitive behavior is a result of the high-dimensional geometry of the data manifold, and explore this hypothesis on a simple highdimensional dataset. For this dataset we show a fundamental bound relating the classification error rate to the average distance to the nearest misclassification, which is independent of the model. We train different neural network architectures on this dataset and show their error sets approach this theoretical bound. As a result of the theory, the vulnerability of machine learning models to small adversarial perturbations is a logical consequence of the amount of test error observed. We hope that our theoretical analysis of this foundational synthetic case will point a way forward to explore how the geometry of complex real-world data sets leads to adversarial examples.",
"title": ""
},
{
"docid": "f9d4b66f395ec6660da8cb22b96c436c",
"text": "The purpose of the study was to measure objectively the home use of the reciprocating gait orthosis (RGO) and the electrically augmented (hybrid) RGO. It was hypothesised that RGO use would increase following provision of functional electrical stimulation (FES). Five adult subjects participated in the study with spinal cord lesions ranging from C2 (incomplete) to T6. Selection criteria included active RGO use and suitability for electrical stimulation. Home RGO use was measured for up to 18 months by determining the mean number of steps taken per week. During this time patients were supplied with the hybrid system. Three alternatives for the measurement of steps taken were investigated: a commercial digital pedometer, a magnetically actuated counter and a heel contact switch linked to an electronic counter. The latter was found to be the most reliable system and was used for all measurements. Additional information on RGO use was acquired using three patient diaries administered throughout the study and before and after the provision of the hybrid system. Testing of the original hypothesis was complicated by problems in finding a reliable measurement tool and difficulties with data collection. However, the results showed that overall use of the RGO, whether with or without stimulation, is low. Statistical analysis of the step counter results was not realistic. No statistically significant change in RGO use was found between the patient diaries. The study suggests that the addition of electrical stimulation does not increase RGO use. The study highlights the problem of objectively measuring orthotic use in the home.",
"title": ""
},
{
"docid": "ec5ebfbe28daebaaac23fbf031b75ab3",
"text": "Theoretical models predict that overcondent investors trade excessively. We test this prediction by partitioning investors on gender. Psychological research demonstrates that, in areas such as nance, men are more overcondent than women. Thus, theory predicts that men will trade more excessively than women. Using account data for over 35,000 households from a large discount brokerage, we analyze the common stock investments of men and women from February 1991 through January 1997. We document that men trade 45 percent more than women. Trading reduces men’s net returns by 2.65 percentage points a year as opposed to 1.72 percentage points for women.",
"title": ""
},
{
"docid": "699c6a7b4f938d6a45d65878f08335e4",
"text": "Fuzzing is a popular dynamic program analysis technique used to find vulnerabilities in complex software. Fuzzing involves presenting a target program with crafted malicious input designed to cause crashes, buffer overflows, memory errors, and exceptions. Crafting malicious inputs in an efficient manner is a difficult open problem and often the best approach to generating such inputs is through applying uniform random mutations to pre-existing valid inputs (seed files). We present a learning technique that uses neural networks to learn patterns in the input files from past fuzzing explorations to guide future fuzzing explorations. In particular, the neural models learn a function to predict good (and bad) locations in input files to perform fuzzing mutations based on the past mutations and corresponding code coverage information. We implement several neural models including LSTMs and sequence-to-sequence models that can encode variable length input files. We incorporate our models in the state-of-the-art AFL (American Fuzzy Lop) fuzzer and show significant improvements in terms of code coverage, unique code paths, and crashes for various input formats including ELF, PNG, PDF, and XML.",
"title": ""
},
{
"docid": "7c2cb105e5fad90c90aea0e59aae5082",
"text": "Life often presents us with situations in which it is important to assess the “true” qualities of a person or object, but in which some factor(s) might have affected (or might yet affect) our initial perceptions in an undesired way. For example, in the Reginald Denny case following the 1993 Los Angeles riots, jurors were asked to determine the guilt or innocence of two African-American defendants who were charged with violently assaulting a Caucasion truck driver. Some of the jurors in this case might have been likely to realize that in their culture many of the popular media portrayals of African-Americans are violent in nature. Yet, these jurors ideally would not want those portrayals to influence their perceptions of the particular defendants in the case. In fact, the justice system is based on the assumption that such portrayals will not influence jury verdicts. In our work on bias correction, we have been struck by the variety of potentially biasing factors that can be identified-including situational influences such as media, social norms, and general culture, and personal influences such as transient mood states, motives (e.g., to manage impressions or agree with liked others), and salient beliefs-and we have been impressed by the apparent ubiquity of correction phenomena (which appear to span many areas of psychological inquiry). Yet, systematic investigations of bias correction are in their early stages. Although various researchers have discussed the notion of effortful cognitive processes overcoming initial (sometimes “automatic”) biases in a variety of settings (e.g., Brewer, 1988; Chaiken, Liberman, & Eagly, 1989; Devine, 1989; Kruglanski & Freund, 1983; Neuberg & Fiske, 1987; Petty & Cacioppo, 1986), little attention has been given, until recently, to the specific processes by which biases are overcome when effort is targeted toward “correction of bias.” That is, when",
"title": ""
}
] |
scidocsrr
|
1bb30aafa0064f1e7701cab0e6b4d216
|
A new approach to wafer sawing: stealth laser dicing technology
|
[
{
"docid": "ef706ea7a6dcd5b71602ea4c28eb9bd3",
"text": "\"Stealth Dicing (SD) \" was developed to solve such inherent problems of dicing process as debris contaminants and unnecessary thermal damage on work wafer. In SD, laser beam power of transmissible wavelength is absorbed only around focal point in the wafer by utilizing temperature dependence of absorption coefficient of the wafer. And these absorbed power forms modified layer in the wafer, which functions as the origin of separation in followed separation process. Since only the limited interior region of a wafer is processed by laser beam irradiation, damages and debris contaminants can be avoided in SD. Besides characteristics of devices will not be affected. Completely dry process of SD is another big advantage over other dicing methods.",
"title": ""
},
{
"docid": "b7617b5dd2a6f392f282f6a34f5b6751",
"text": "In the semiconductor market, the trend of packaging for die stacking technology moves to high density with thinner chips and higher capacity of memory devices. Moreover, the wafer sawing process is becoming more important for thin wafer, because its process speed tends to affect sawn quality and yield. ULK (Ultra low-k) device could require laser grooving application to reduce the stress during wafer sawing. Furthermore under 75um-thick thin low-k wafer is not easy to use the laser grooving application. So, UV laser dicing technology that is very useful tool for Si wafer was selected as full cut application, which has been being used on low-k wafer as laser grooving method.",
"title": ""
}
] |
[
{
"docid": "c23a86bc6d8011dab71ac5e1e2051c3b",
"text": "The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher’s flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average memory usage of AlexNet by 61% and OverFeat by 83%, a significant reduction in memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and memory hungry DNNs to date, demonstrate the memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA K40 GPU card containing 12 GB of memory, with 22% performance loss compared to a hypothetical GPU with enough memory to hold the entire DNN.",
"title": ""
},
{
"docid": "a0ca7d86ae79c263644c8cd5ae4c0aed",
"text": "Research in texture recognition often concentrates on the problem of material recognition in uncluttered conditions, an assumption rarely met by applications. In this work we conduct a first study of material and describable texture attributes recognition in clutter, using a new dataset derived from the OpenSurface texture repository. Motivated by the challenge posed by this problem, we propose a new texture descriptor, FV-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank. FV-CNN substantially improves the state-of-the-art in texture, material and scene recognition. Our approach achieves 79.8% accuracy on Flickr material dataset and 81% accuracy on MIT indoor scenes, providing absolute gains of more than 10% over existing approaches. FV-CNN easily transfers across domains without requiring feature adaptation as for methods that build on the fully-connected layers of CNNs. Furthermore, FV-CNN can seamlessly incorporate multi-scale information and describe regions of arbitrary shapes and sizes. Our approach is particularly suited at localizing “stuff” categories and obtains state-of-the-art results on MSRC segmentation dataset, as well as promising results on recognizing materials and surface attributes in clutter on the OpenSurfaces dataset.",
"title": ""
},
{
"docid": "cb1bfa58eb89539663be0f2b4ea8e64d",
"text": "Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a ‘good’ hierarchical clustering is one that minimizes a particular cost function [21]. He showed that this cost function has certain desirable properties: in order to achieve optimal cost, disconnected components (namely, dissimilar elements) must be separated at higher levels of the hierarchy and when the similarity between data elements is identical, all clusterings achieve the same cost. We take an axiomatic approach to defining ‘good’ objective functions for both similarity and dissimilarity-based hierarchical clustering. We characterize a set of admissible objective functions having the property that when the input admits a ‘natural’ ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. We show that this set includes the objective function introduced by Dasgupta. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond worst-case analysis of the complexity of the problem, and design algorithms for this scenario.",
"title": ""
},
{
"docid": "662ec285031306816814378e6e192782",
"text": "One task of heterogeneous face recognition is to match a near infrared (NIR) face image to a visible light (VIS) image. In practice, there are often a few pairwise NIR-VIS face images but it is easy to collect lots of VIS face images. Therefore, how to use these unpaired VIS images to improve the NIR-VIS recognition accuracy is an ongoing issue. This paper presents a deep TransfeR NIR-VIS heterogeneous facE recognition neTwork (TRIVET) for NIR-VIS face recognition. First, to utilize large numbers of unpaired VIS face images, we employ the deep convolutional neural network (CNN) with ordinal measures to learn discriminative models. The ordinal activation function (Max-Feature-Map) is used to select discriminative features and make the models robust and lighten. Second, we transfer these models to NIR-VIS domain by fine-tuning with two types of NIR-VIS triplet loss. The triplet loss not only reduces intra-class NIR-VIS variations but also augments the number of positive training sample pairs. It makes fine-tuning deep models on a small dataset possible. The proposed method achieves state-of-the-art recognition performance on the most challenging CASIA NIR-VIS 2.0 Face Database. It achieves a new record on rank-1 accuracy of 95.74% and verification rate of 91.03% at FAR=0.001. It cuts the error rate in comparison with the best accuracy [27] by 69%.",
"title": ""
},
{
"docid": "4bc1a78a3c9749460da218fd9d314e56",
"text": "Fast and accurate side-chain conformation prediction is important for homology modeling, ab initio protein structure prediction, and protein design applications. Many methods have been presented, although only a few computer programs are publicly available. The SCWRL program is one such method and is widely used because of its speed, accuracy, and ease of use. A new algorithm for SCWRL is presented that uses results from graph theory to solve the combinatorial problem encountered in the side-chain prediction problem. In this method, side chains are represented as vertices in an undirected graph. Any two residues that have rotamers with nonzero interaction energies are considered to have an edge in the graph. The resulting graph can be partitioned into connected subgraphs with no edges between them. These subgraphs can in turn be broken into biconnected components, which are graphs that cannot be disconnected by removal of a single vertex. The combinatorial problem is reduced to finding the minimum energy of these small biconnected components and combining the results to identify the global minimum energy conformation. This algorithm is able to complete predictions on a set of 180 proteins with 34342 side chains in <7 min of computer time. The total chi(1) and chi(1 + 2) dihedral angle accuracies are 82.6% and 73.7% using a simple energy function based on the backbone-dependent rotamer library and a linear repulsive steric energy. The new algorithm will allow for use of SCWRL in more demanding applications such as sequence design and ab initio structure prediction, as well addition of a more complex energy function and conformational flexibility, leading to increased accuracy.",
"title": ""
},
{
"docid": "98788b45932c8564d29615f49407d179",
"text": "BACKGROUND\nAbnormal forms of grief, currently referred to as complicated grief or prolonged grief disorder, have been discussed extensively in recent years. While the diagnostic criteria are still debated, there is no doubt that prolonged grief is disabling and may require treatment. To date, few interventions have demonstrated efficacy.\n\n\nMETHODS\nWe investigated whether outpatients suffering from prolonged grief disorder (PGD) benefit from a newly developed integrative cognitive behavioural therapy for prolonged grief (PG-CBT). A total of 51 patients were randomized into two groups, stratified by the type of death and their relationship to the deceased; 24 patients composed the treatment group and 27 patients composed the wait list control group (WG). Treatment consisted of 20-25 sessions. Main outcome was change in grief severity; secondary outcomes were reductions in general psychological distress and in comorbidity.\n\n\nRESULTS\nPatients on average had 2.5 comorbid diagnoses in addition to PGD. Between group effect sizes were large for the improvement of grief symptoms in treatment completers (Cohen׳s d=1.61) and in the intent-to-treat analysis (d=1.32). Comorbid depressive symptoms also improved in PG-CBT compared to WG. The completion rate was 79% in PG-CBT and 89% in WG.\n\n\nLIMITATIONS\nThe major limitations of this study were a small sample size and that PG-CBT took longer than the waiting time.\n\n\nCONCLUSIONS\nPG-CBT was found to be effective with an acceptable dropout rate. Given the number of bereaved people who suffer from PGD, the results are of high clinical relevance.",
"title": ""
},
{
"docid": "58d8e3bd39fa470d1dfa321aeba53106",
"text": "There are over 1.2 million Australians registered as having vision impairment. In most cases, vision impairment severely affects the mobility and orientation of the person, resulting in loss of independence and feelings of isolation. GPS technology and its applications have now become omnipresent and are used daily to improve and facilitate the lives of many. Although a number of products specifically designed for the Blind and Vision Impaired (BVI) and relying on GPS technology have been launched, this domain is still a niche and ongoing R&D is needed to bring all the benefits of GPS in terms of information and mobility to the BVI. The limitations of GPS indoors and in urban canyons have led to the development of new systems and signals that bridge the gap and provide positioning in those environments. Although still in their infancy, there is no doubt indoor positioning technologies will one day become as pervasive as GPS. It is therefore important to design those technologies with the BVI in mind, to make them accessible from scratch. This paper will present an indoor positioning system that has been designed in that way, examining the requirements of the BVI in terms of accuracy, reliability and interface design. The system runs locally on a mid-range smartphone and relies at its core on a Kalman filter that fuses the information of all the sensors available on the phone (Wi-Fi chipset, accelerometers and magnetic field sensor). Each part of the system is tested separately as well as the final solution quality.",
"title": ""
},
{
"docid": "9eaf39d4b612c3bd272498eb8a91effc",
"text": "The relationship between the different approaches to quality in ISO standards is reviewed, contrasting the manufacturing approach to quality in ISO 9000 (quality is conformance to requirements) with the product orientation of ISO 8402 (quality is the presence of specified features) and the goal orientation of quality in use in ISO 14598-1 (quality is meeting user needs). It is shown how ISO 9241-11 enables quality in use to be measured, and ISO 13407 defines the activities necessary in the development lifecycle for achieving quality in use. APPROACHES TO QUALITY Although the term quality seems self-explanatory in everyday usage, in practice there are many different views of what it means and how it should be achieved as part of a software production process. ISO DEFINITIONS OF QUALITY ISO 9000 is concerned with quality assurance to provide confidence that a product will satisfy given requirements. Interpreted literally, this puts quality in the hands of the person producing the requirements specification a product may be deemed to have quality even if the requirements specification is inappropriate. This is one of the interpretations of quality reviewed by Garvin (1984). He describes it as Manufacturing quality: a product which conforms to specified requirements. A different emphasis is given in ISO 8402 which defines quality as the totality of characteristics of an entity that bear on its ability to satisfy stated and implied needs. This is an example of what Garvin calls Product quality: an inherent characteristic of the product determined by the presence or absence of measurable product attributes. Many organisations would like to be able to identify those attributes which can be designed into a product or evaluated to ensure quality. ISO 9126 (1992) takes this approach, and categorises the attributes of software quality as: functionality, efficiency, usability, reliability, maintainability and portability. To the extent that user needs are well-defined and common to the intended users this implies that quality is an inherent attribute of the product. However, if different groups of users have different needs, then they may require different characteristics for a product to have quality for their purposes. Assessment of quality thus becomes dependent on the perception of the user. USER PERCEIVED QUALITY AND QUALITY IN USE Garvin defines User perceived quality as the combination of product attributes which provide the greatest satisfaction to a specified user. Most approaches to quality do not deal explicitly with userperceived quality. User-perceived quality is regarded as an intrinsically inaccurate judgement of product quality. For instance Garvin, 1984, observes that \"Perceptions of quality can be as subjective as assessments of aesthetics\". However, there is a more fundamental reason for being concerned with user-perceived quality. Products can only have quality in relation to their intended purpose. For instance, the quality attributes required of an office carpet may be very different from those required of a bedroom carpet. For conventional products this is assumed to be selfevident. For general-purpose products it creates a problem. A text editor could be used by programmers for producing code, or by secretaries for producing letters. Some of the quality attributes required will be the same, but others will be different. 
Even for a word processor, the functionality, usability and efficiency attributes required by a trained user may be very different from those required by an occasional user. Reconciling work on usability with traditional approaches to software quality has led to another broader and potentially important view of quality which has been outside the scope of most existing quality systems. This embraces user-perceived quality by relating quality to the needs of the user of an interactive product. ISO 14598-1 defines External quality as the extent to which a product satisfies stated and implied needs when used under specified conditions. This moves the focus of quality from the product in isolation to the satisfaction of the needs of particular users in particular situations. The purpose of a product is to help users achieve particular goals, which leads to the definition of Quality in use in ISO DIS 14598-1 as the effectiveness, efficiency and satisfaction with which specified users can achieve specified goals in specified environments. A product meets the requirements of the user if it is effective (accurate and complete), efficient in use of time and resources, and satisfying, regardless of the specific attributes it possesses. Specifying requirements in terms of performance has many benefits. This is recognised in the rules for drafting ISO standards (ISO, 1992) which suggest that to provide design flexibility, standards should specify the performance required of a product rather than the technical attributes needed to achieve the performance. Quality in use is a means of applying this principle to the performance which a product enables a human to achieve. An example is the ISO standard for VDT display screens (ISO 9241-3). The purpose of the standard is to ensure that the screen has the technical attributes required to achieve quality in use. The current version of the standard is specified in terms of the technical attributes of a traditional CRT. It is intended to extend the standard to permit alternative new technology screens to conform if it can be demonstrated that users are as effective, efficient and satisfied with the new screen as with an existing screen which meets the technical specifications. SOFTWARE QUALITY IN USE: ISO 14598-1 The purpose of designing an interactive system is to meet the needs of users: to provide quality in use (see Figure 1, from ISO/IEC 14598-1). The internal software attributes will determine the quality of a software product in use in a particular context. Software quality attributes are the cause, quality in use the effect. Quality in use is (or at least should be) the objective, software product quality is the means of achieving it.",
"title": ""
},
{
"docid": "b76f10452e4a4b0d7408e6350b263022",
"text": "In this paper, a Y-Δ hybrid connection for a high-voltage induction motor is described. Low winding harmonic content is achieved by careful consideration of the interaction between the Y- and Δ-connected three-phase winding sets so that the magnetomotive force (MMF) in the air gap is close to sinusoid. Essentially, the two winding sets operate in a six-phase mode. This paper goes on to verify that the fundamental distribution coefficient for the stator MMF is enhanced compared to a standard three-phase winding set. The design method for converting a conventional double-layer lap winding in a high-voltage induction motor into a Y-Δ hybrid lap winding is described using standard winding theory as often applied to small- and medium-sized motors. The main parameters addressed when designing the winding are the conductor wire gauge, coil turns, and parallel winding branches in the Y and Δ connections. A winding design scheme for a 1250-kW 6-kV induction motor is put forward and experimentally validated; the results show that the efficiency can be raised effectively without increasing the cost.",
"title": ""
},
{
"docid": "8387c06436e850b4fb00c6b5e0dcf19f",
"text": "Since the beginning of the epidemic, human immunodeficiency virus (HIV) has infected around 70 million people worldwide, most of whom reside is sub-Saharan Africa. There have been very promising developments in the treatment of HIV with anti-retroviral drug cocktails. However, drug resistance to anti-HIV drugs is emerging, and many people infected with HIV have adverse reactions or do not have ready access to currently available HIV chemotherapies. Thus, there is a need to discover new anti-HIV agents to supplement our current arsenal of anti-HIV drugs and to provide therapeutic options for populations with limited resources or access to currently efficacious chemotherapies. Plant-derived natural products continue to serve as a reservoir for the discovery of new medicines, including anti-HIV agents. This review presents a survey of plants that have shown anti-HIV activity, both in vitro and in vivo.",
"title": ""
},
{
"docid": "f8d01364ff29ad18480dfe5d164bbebf",
"text": "With companies such as Netflix and YouTube accounting for more than 50% of the peak download traffic on North American fixed networks in 2015, video streaming represents a significant source of Internet traffic. Multimedia delivery over the Internet has evolved rapidly over the past few years. The last decade has seen video streaming transitioning from User Datagram Protocol to Transmission Control Protocol-based technologies. Dynamic adaptive streaming over HTTP (DASH) has recently emerged as a standard for Internet video streaming. A range of rate adaptation mechanisms are proposed for DASH systems in order to deliver video quality that matches the throughput of dynamic network conditions for a richer user experience. This survey paper looks at emerging research into the application of client-side, server-side, and in-network rate adaptation techniques to support DASH-based content delivery. We provide context and motivation for the application of these techniques and review significant works in the literature from the past decade. These works are categorized according to the feedback signals used and the end-node that performs or assists with the adaptation. We also provide a review of several notable video traffic measurement and characterization studies and outline open research questions in the field.",
"title": ""
},
{
"docid": "85bc241c03d417099aa155766e6a1421",
"text": "Passwords continue to prevail on the web as the primary method for user authentication despite their well-known security and usability drawbacks. Password managers offer some improvement without requiring server-side changes. In this paper, we evaluate the security of dual-possession authentication, an authentication approach offering encrypted storage of passwords and theft-resistance without the use of a master password. We further introduce Tapas, a concrete implementation of dual-possession authentication leveraging a desktop computer and a smartphone. Tapas requires no server-side changes to websites, no master password, and protects all the stored passwords in the event either the primary or secondary device (e.g., computer or phone) is stolen. To evaluate the viability of Tapas as an alternative to traditional password managers, we perform a 30 participant user study comparing Tapas to two configurations of Firefox's built-in password manager. We found users significantly preferred Tapas. We then improve Tapas by incorporating feedback from this study, and reevaluate it with an additional 10 participants.",
"title": ""
},
{
"docid": "c7f0856c282d1039e44ba6ef50948d32",
"text": "This paper presents the analysis and operation of a three-phase pulsewidth modulation rectifier system formed by the star-connection of three single-phase boost rectifier modules (Y-rectifier) without a mains neutral point connection. The current forming operation of the Y-rectifier is analyzed and it is shown that the phase current has the same high quality and low ripple as the Vienna rectifier. The isolated star point of Y-rectifier results in a mutual coupling of the individual phase module outputs and has to be considered for control of the module dc link voltages. An analytical expression for the coupling coefficients of the Y-rectifier phase modules is derived. Based on this expression, a control concept with reduced calculation effort is designed and it provides symmetric loading of the phase modules and solves the balancing problem of the dc link voltages. The analysis also provides insight that enables the derivation of a control concept for two phase operation, such as in the case of a mains phase failure. The theoretical and simulated results are proved by experimental analysis on a fully digitally controlled, 5.4-kW prototype.",
"title": ""
},
{
"docid": "1053653b3584180dd6f97866c13ce40a",
"text": "• • The order of authorship on this paper is random and contributions were equal. We would like to thank Ron Burt, Jim March and Mike Tushman for many helpful suggestions. Olav Sorenson provided particularly extensive comments on this paper. We would like to acknowledge the financial support of the University of Chicago, Graduate School of Business and a grant from the Kauffman Center for Entrepreneurial Leadership. Clarifying the relationship between organizational aging and innovation processes is an important step in understanding the dynamics of high-technology industries, as well as for resolving debates in organizational theory about the effects of aging on organizational functioning. We argue that aging has two seemingly contradictory consequences for organizational innovation. First, we believe that aging is associated with increases in firms' rates of innovation. Simultaneously, however, we argue that the difficulties of keeping pace with incessant external developments causes firms' innovative outputs to become obsolete relative to the most current environmental demands. These seemingly contradictory outcomes are intimately related and reflect inherent trade-offs in organizational learning and innovation processes. Multiple longitudinal analyses of the relationship between firm age and patenting behavior in the semiconductor and biotechnology industries lend support to these arguments. Introduction In an increasingly knowledge-based economy, pinpointing the factors that shape the ability of organizations to produce influential ideas and innovations is a central issue for organizational studies. Among all organizational outputs, innovation is fundamental not only because of its direct impact on the viability of firms, but also because of its profound effects on the paths of social and economic change. In this paper, we focus on an ubiquitous organizational process-aging-and examine its multifaceted influence on organizational innovation. In so doing, we address an important unresolved issue in organizational theory, namely the nature of the relationship between aging and organizational behavior (Hannan 1998). Evidence clarifying the relationship between organizational aging and innovation promises to improve our understanding of the organizational dynamics of high-technology markets, and in particular the dynamics of technological leadership. For instance, consider the possibility that aging has uniformly positive consequences for innovative activity: on the foundation of accumulated experience, older firms innovate more frequently, and their innovations have greater significance than those of younger enterprises. In this scenario, technological change paradoxically may be associated with organizational stability, as incumbent organizations come to dominate the technological frontier and their preeminence only increases with their tenure. 1 Now consider the …",
"title": ""
},
{
"docid": "e786d22cd1c30014d1a1dcdc655a56fb",
"text": "Chemical fingerprints are used to represent chemical molecules by recording the presence or absence, or by counting the number of occurrences, of particular features or substructures, such as labeled paths in the 2D graph of bonds, of the corresponding molecule. These fingerprint vectors are used to search large databases of small molecules, currently containing millions of entries, using various similarity measures, such as the Tanimoto or Tversky's measures and their variants. Here, we derive simple bounds on these similarity measures and show how these bounds can be used to considerably reduce the subset of molecules that need to be searched. We consider both the case of single-molecule and multiple-molecule queries, as well as queries based on fixed similarity thresholds or aimed at retrieving the top K hits. We study the speedup as a function of query size and distribution, fingerprint length, similarity threshold, and database size |D| and derive analytical formulas that are in excellent agreement with empirical values. The theoretical considerations and experiments show that this approach can provide linear speedups of one or more orders of magnitude in the case of searches with a fixed threshold, and achieve sublinear speedups in the range of O(|D|0.6) for the top K hits in current large databases. This pruning approach yields subsecond search times across the 5 million compounds in the ChemDB database, without any loss of accuracy.",
"title": ""
},
{
"docid": "dd271275654da4bae73ee41d76fe165c",
"text": "BACKGROUND\nThe recovery period for patients who have been in an intensive care unitis often prolonged and suboptimal. Anxiety, depression and post-traumatic stress disorder are common psychological problems. Intensive care staff offer various types of intensive aftercare. Intensive care follow-up aftercare services are not standard clinical practice in Norway.\n\n\nOBJECTIVE\nThe overall aim of this study is to investigate how adult patients experience theirintensive care stay their recovery period, and the usefulness of an information pamphlet.\n\n\nMETHOD\nA qualitative, exploratory research with semi-structured interviews of 29 survivors after discharge from intensive care and three months after discharge from the hospital.\n\n\nRESULTS\nTwo main themes emerged: \"Being on an unreal, strange journey\" and \"Balancing between who I was and who I am\" Patients' recollection of their intensive care stay differed greatly. Continuity of care and the nurse's ability to see and value individual differences was highlighted. The information pamphlet helped intensive care survivors understand that what they went through was normal.\n\n\nCONCLUSIONS\nContinuity of care and an individual approach is crucial to meet patients' uniqueness and different coping mechanisms. Intensive care survivors and their families must be included when information material and rehabilitation programs are designed and evaluated.",
"title": ""
},
{
"docid": "ec0733962301d6024da773ad9d0f636d",
"text": "This paper focuses on the design, fabrication and characterization of unimorph actuators for a microaerial flapping mechanism. PZT-5H and PZN-PT are investigated as piezoelectric layers in the unimorph actuators. Design issues for microaerial flapping actuators are discussed, and criteria for the optimal dimensions of actuators are determined. For low power consumption actuation, a square wave based electronic driving circuit is proposed. Fabricated piezoelectric unimorphs are characterized by an optical measurement system in quasi-static and dynamic mode. Experimental performance of PZT5H and PZN-PT based unimorphs is compared with desired design specifications. A 1 d.o.f. flapping mechanism with a PZT-5H unimorph is constructed, and 180◦ stroke motion at 95 Hz is achieved. Thus, it is shown that unimorphs could be promising flapping mechanism actuators.",
"title": ""
},
{
"docid": "7239b0f0a1b894c6383c538450c90e8a",
"text": "To address the problem of underexposure, underrepresentation, and underproduction of diverse professionals in the field of computing, we target middle school education using an idea that combines computational thinking with dance and movement choreography. This lightning talk delves into a virtual reality education and entertainment application named Virtual Environment Interactions (VEnvI). Our in vivo study examines how VEnvI can be used to teach fundamental computer science concepts such as sequences, loops, variables, conditionals, functions, and parallel programming. We aim to reach younger students through a fun and intuitive interface for choreographing dance movements with a virtual character. Our study contrasts the highly immersive and embodied virtual reality metaphor of using VEnvI with a non-immersive desktop metaphor. Additionally, we examine the effects of user attachment by comparing the learning results gained with customizable virtual characters in contrast with character presets. By analyzing qualitative and quantitative user responses measuring cognition, presence, usability, and satisfaction, we hope to find how virtual reality can enhance interest in the field of computer science among middle school students.",
"title": ""
},
{
"docid": "e4761bfc7c9b41881441928883660156",
"text": "This paper presents a digital low-dropout regulator (D-LDO) with a proposed transient-response boost technique, which enables the reduction of transient response time, as well as overshoot/undershoot, when the load current is abruptly drawn. The proposed D-LDO detects the deviation of the output voltage by overshoot/undershoot, and increases its loop gain, for the time that the deviation is beyond a limit. Once the output voltage is settled again, the loop gain is returned. With the D-LDO fabricated on an 110-nm CMOS technology, we measured its settling time and peak of undershoot, which were reduced by 60% and 72%, respectively, compared with and without the transient-response boost mode. Using the digital logic gates, the chip occupies a small area of 0.04 mm2, and it achieves a maximum current efficiency of 99.98%, by consuming the quiescent current of 15 μA at 0.7-V input voltage.",
"title": ""
}
] |
scidocsrr
|
a2fb0018d07bcf972886b10cc66ce964
|
Recurrent Neural Networks for Customer Purchase Prediction on Twitter
|
[
{
"docid": "e2c6437d257559211d182b5707aca1a4",
"text": "In present times, social forums such as Quora and Yahoo! Answers constitute powerful media through which people discuss on a variety of topics and express their intentions and thoughts. Here they often reveal their potential intent to purchase ‘Purchase Intent’ (PI). A purchase intent is defined as a text expression showing a desire to purchase a product or a service in future. Extracting posts having PI from a user’s social posts gives huge opportunities towards web personalization, targeted marketing and improving community observing systems. In this paper, we explore the novel problem of detecting PIs from social posts and classifying them. We find that using linguistic features along with statistical features of PI expressions achieves a significant improvement in PI classification over ‘bag-ofwords’ based features used in many present day socialmedia classification tasks. Our approach takes into consideration the specifics of social posts like limited contextual information, incorrect grammar, language ambiguities, etc. by extracting features at two different levels of text granularity word and phrase based features and grammatical dependency based features. Apart from these, the patterns observed in PI posts help us to identify some specific features.",
"title": ""
},
{
"docid": "cf2fc7338a0a81e4c56440ec7c3c868e",
"text": "We describe a new dependency parser for English tweets, TWEEBOPARSER. The parser builds on several contributions: new syntactic annotations for a corpus of tweets (TWEEBANK), with conventions informed by the domain; adaptations to a statistical parsing algorithm; and a new approach to exploiting out-of-domain Penn Treebank data. Our experiments show that the parser achieves over 80% unlabeled attachment accuracy on our new, high-quality test set and measure the benefit of our contributions. Our dataset and parser can be found at http://www.ark.cs.cmu.edu/TweetNLP.",
"title": ""
},
{
"docid": "64330f538b3d8914cbfe37565ab0d648",
"text": "The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.",
"title": ""
}
] |
[
{
"docid": "6d9735b19ab2cb1251bd294045145367",
"text": "Waveguide twists are often necessary to provide polarization rotation between waveguide-based components. At terahertz frequencies, it is desirable to use a twist design that is compact in order to reduce loss; however, these designs are difficult if not impossible to realize using standard machining. This paper presents a micromachined compact waveguide twist for terahertz frequencies. The Rud-Kirilenko twist geometry is ideally suited to the micromachining processes developed at the University of Virginia. Measurements of a WR-1.5 micromachined twist exhibit a return loss near 20 dB and a median insertion loss of 0.5 dB from 600 to 750 GHz.",
"title": ""
},
{
"docid": "b3801b9d9548c49c79eacef4c71e84ad",
"text": "Identifying that a given binary program implements a specific cryptographic algorithm and finding out more information about the cryptographic code is an important problem. Proprietary programs and especially malicious software (so called malware) often use cryptography and we want to learn more about the context, e.g., which algorithms and keys are used by the program. This helps an analyst to quickly understand what a given binary program does and eases analysis. In this paper, we present several methods to identify cryptographic primitives (e.g., entire algorithms or only keys) within a given binary program in an automated way. We perform fine-grained dynamic binary analysis and use the collected information as input for several heuristics that characterize specific, unique aspects of cryptographic code. Our evaluation shows that these methods improve the state-of-the-art approaches in this area and that we can successfully extract cryptographic keys from a given malware binary.",
"title": ""
},
{
"docid": "a7d3d2f52a45cdb378863d4e8d96bc27",
"text": "This paper presents a three-phase single-stage bidirectional isolated matrix based AC-DC converter for energy storage. The matrix (3 × 1) topology directly converts the three-phase line voltages into high-frequency AC voltage which is subsequently, processed using a high-frequency transformer followed by a controlled rectifier. A modified Space Vector Modulation (SVM) based switching scheme is proposed to achieve high input power quality with high power conversion efficiency. Compared to the conventional two stage converter, the proposed converter provides single-stage conversion resulting in higher power conversion efficiency and higher power density. The operating principles of the proposed converter in both AC-DC and DC-AC mode are explained followed by steady state analysis. Simulation results are presented for 230 V, 50 Hz to 48 V isolated bidirectional converter at 2 kW output power to validate the theoretical claims.",
"title": ""
},
{
"docid": "9847936462257d8f0d03473c9a78f27d",
"text": "In this paper, a vision-guided autonomous quadrotor in an air-ground multi-robot system has been proposed. This quadrotor is equipped with a monocular camera, IMUs and a flight computer, which enables autonomous flights. Two complementary pose/motion estimation methods, respectively marker-based and optical-flow-based, are developed by considering different altitudes in a flight. To achieve smooth take-off, stable tracking and safe landing with respect to a moving ground robot and desired trajectories, appropriate controllers are designed. Additionally, data synchronization and time delay compensation are applied to improve the system performance. Real-time experiments are conducted in both indoor and outdoor environments.",
"title": ""
},
{
"docid": "eb3fad94acaf1f36783fdb22f3932ec7",
"text": "This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.",
"title": ""
},
{
"docid": "bb19e122737f08997585999575d2a394",
"text": "In this paper, shadow detection and compensation are treated as image enhancement tasks. The principal components analysis (PCA) and luminance based multi-scale Retinex (LMSR) algorithm are explored to detect and compensate shadow in high resolution satellite image. PCA provides orthogonally channels, thus allow the color to remain stable despite the modification of luminance. Firstly, the PCA transform is used to obtain the luminance channel, which enables us to detect shadow regions using histogram threshold technique. After detection, the LMSR technique is used to enhance the image only in luminance channel to compensate for shadows. Then the enhanced image is obtained by inverse transform of PCA. The final shadow compensation image is obtained by comparison of the original image, the enhanced image and the shadow detection image. Experiment results show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "46938d041228481cf3363f2c6dfcc524",
"text": "This paper investigates conditions under which modi cations to the reward function of a Markov decision process preserve the op timal policy It is shown that besides the positive linear transformation familiar from utility theory one can add a reward for tran sitions between states that is expressible as the di erence in value of an arbitrary poten tial function applied to those states Further more this is shown to be a necessary con dition for invariance in the sense that any other transformation may yield suboptimal policies unless further assumptions are made about the underlying MDP These results shed light on the practice of reward shap ing a method used in reinforcement learn ing whereby additional training rewards are used to guide the learning agent In par ticular some well known bugs in reward shaping procedures are shown to arise from non potential based rewards and methods are given for constructing shaping potentials corresponding to distance based and subgoal based heuristics We show that such po tentials can lead to substantial reductions in learning time",
"title": ""
},
{
"docid": "12a214f172562d92c89183379a0c06a3",
"text": "Robots that work with people foster social relationships between people and systems. The home is an interesting place to study the adoption and use of these systems. The home provides challenges from both technical and interaction perspectives. In addition, the home is a seat for many specialized human behaviors and needs, and has a long history of what is collected and used to functionally, aesthetically, and symbolically fit the home. To understand the social impact of robotic technologies, this paper presents an ethnographic study of consumer robots in the home. Six families' experience of floor cleaning after receiving a new vacuum (a Roomba robotic vacuum or the Flair, a handheld upright) was studied. While the Flair had little impact, the Roomba changed people, cleaning activities, and other product use. In addition, people described the Roomba in aesthetic and social terms. The results of this study, while initial, generate implications for how robots should be designed for the home.",
"title": ""
},
{
"docid": "f1977e5f8fbc0df4df0ac6bf1715c254",
"text": "Instabilities in MOS-based devices with various substrates ranging from Si, SiGe, IIIV to 2D channel materials, can be explained by defect levels in the dielectrics and non-radiative multi-phonon (NMP) barriers. However, recent results obtained on single defects have demonstrated that they can show a highly complex behaviour since they can transform between various states. As a consequence, detailed physical models are complicated and computationally expensive. As will be shown here, as long as only lifetime predictions for an ensemble of defects is needed, considerable simplifications are possible. We present and validate an oxide defect model that captures the essence of full physical models while reducing the complexity substantially. We apply this model to investigate the improvement in positive bias temperature instabilities due to a reliability anneal. Furthermore, we corroborate the simulated defect bands with prior defect-centric studies and perform lifetime projections.",
"title": ""
},
{
"docid": "a6a364819f397a8e28ac0b19480253cc",
"text": "News agencies and other news providers or consumers are confronted with the task of extracting events from news articles. This is done i) either to monitor and, hence, to be informed about events of specific kinds over time and/or ii) to react to events immediately. In the past, several promising approaches to extracting events from text have been proposed. Besides purely statistically-based approaches there are methods to represent events in a semantically-structured form, such as graphs containing actions (predicates), participants (entities), etc. However, it turns out to be very difficult to automatically determine whether an event is real or not. In this paper, we give an overview of approaches which proposed solutions for this research problem. We show that there is no gold standard dataset where real events are annotated in text documents in a fine-grained, semantically-enriched way. We present a methodology of creating such a dataset with the help of crowdsourcing and present preliminary results.",
"title": ""
},
{
"docid": "ee141b7fd5c372fb65d355fe75ad47af",
"text": "As 100-Gb/s coherent systems based on polarization- division multiplexed quadrature phase shift keying (PDM-QPSK), with aggregate wavelength-division multiplexed (WDM) capacities close to 10 Tb/s, are getting widely deployed, the use of high-spectral-efficiency quadrature amplitude modulation (QAM) to increase both per-channel interface rates and aggregate WDM capacities is the next evolutionary step. In this paper we review high-spectral-efficiency optical modulation formats for use in digital coherent systems. We look at fundamental as well as at technological scaling trends and highlight important trade-offs pertaining to the design and performance of coherent higher-order QAM transponders.",
"title": ""
},
{
"docid": "7cdc858ad5837132c80ac278f3760e24",
"text": "Gallium Nitride (GaN) based power devices have the potential to achieve higher efficiency and higher switching frequency than those possible with Silicon (Si) power devices. In literature, GaN based converters are claimed to offer higher power density. However, a detailed comparative analysis on the power density of GaN and Si based low power dc-dc flyback converter is not reported. In this paper, comparison of a 100 W, dc-dc flyback converter based on GaN and Si is presented. Both the converters are designed to ensure an efficiency of 80%. Based on this, the switching frequency for both the converters are determined. The analysis shows that the GaN based converter can be operated at approximately ten times the switching frequency of Si-based converter. This leads to a reduction in the area product of the flyback transformer required in GaN based converter. It is found that the volume of the flyback transformer can be reduced by a factor of six for a GaN based converter as compared to a Si based converter. Further, it is observed that the value of output capacitance used in the GaN based converter reduces by a factor of ten as compared to the Si based converter, implying a reduction in the size of the output capacitors. Therefore, a significant improvement in the power density of the GaN based converter as compared to the Si based converter is seen.",
"title": ""
},
{
"docid": "145bbea9b4eb7c484c190aed77e2a8b2",
"text": "The Rey–Osterrieth Complex Figure Test (ROCF), which was developed by Rey in 1941 and standardized by Osterrieth in 1944, is a widely used neuropsychological test for the evaluation of visuospatial constructional ability and visual memory. Recently, the ROCF has been a useful tool for measuring executive function that is mediated by the prefrontal lobe. The ROCF consists of three test conditions: Copy, Immediate Recall and Delayed Recall. At the first step, subjects are given the ROCF stimulus card, and then asked to draw the same figure. Subsequently, they are instructed to draw what they remembered. Then, after a delay of 30 min, they are required to draw the same figure once again. The anticipated results vary according to the scoring system used, but commonly include scores related to location, accuracy and organization. Each condition of the ROCF takes 10 min to complete and the overall time of completion is about 30 min.",
"title": ""
},
{
"docid": "a2a7b5c0b4e95e0c7bcb42e29fa8db57",
"text": "0747-5632/$ see front matter 2012 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.chb.2012.11.017 ⇑ Corresponding author. Address: School of Psychology, Australian Catholic University, 1100 Nudgee Rd., Banyo, QLD 4014, Australia. Tel.: +61 7 3623 7346; fax: +61 7 3623 7277. E-mail address: rachel.grieve@acu.edu.au (R. Grieve). Rachel Grieve ⇑, Michaelle Indian, Kate Witteveen, G. Anne Tolan, Jessica Marrington",
"title": ""
},
{
"docid": "38aa324964214620c55eb4edfecf1bd2",
"text": "This paper presents ROC curve, lift chart and calibration plot, three well known graphical techniques that are useful for evaluating the quality of classification models used in data mining and machine learning. Each technique, normally used and studied separately, defines its own measure of classification quality and its visualization. Here, we give a brief survey of the methods and establish a common mathematical framework which adds some new aspects, explanations and interrelations between these techniques. We conclude with an empirical evaluation and a few examples on how to use the presented techniques to boost classification accuracy.",
"title": ""
},
{
"docid": "eeb31177629a38882fa3664ad0ddfb48",
"text": "Autonomous cars will likely hit the market soon, but trust into such a technology is one of the big discussion points in the public debate. Drivers who have always been in complete control of their car are expected to willingly hand over control and blindly trust a technology that could kill them. We argue that trust in autonomous driving can be increased by means of a driver interface that visualizes the car’s interpretation of the current situation and its corresponding actions. To verify this, we compared different visualizations in a user study, overlaid to a driving scene: (1) a chauffeur avatar, (2) a world in miniature, and (3) a display of the car’s indicators as the baseline. The world in miniature visualization increased trust the most. The human-like chauffeur avatar can also increase trust, however, we did not find a significant difference between the chauffeur and the baseline. ACM Classification",
"title": ""
},
{
"docid": "14024a813302548d0bd695077185de1c",
"text": "In this paper, we propose an innovative touch-less palm print recognition system. This project is motivated by the public’s demand for non-invasive and hygienic biometric technology. For various reasons, users are concerned about touching the biometric scanners. Therefore, we propose to use a low-resolution web camera to capture the user’s hand at a distance for recognition. The users do not need to touch any device for their palm print to be acquired. A novel hand tracking and palm print region of interest (ROI) extraction technique are used to track and capture the user’s palm in real-time video stream. The discriminative palm print features are extracted based on a new method that applies local binary pattern (LBP) texture descriptor on the palm print directional gradient responses. Experiments show promising result using the proposed method. Performance can be further improved when a modified probabilistic neural network (PNN) is used for feature matching. Verification can be performed in less than one second in the proposed system. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1c2285aef1bcd54fb2203ebb7c992647",
"text": "OBJECTIVES\nExtracting data from publication reports is a standard process in systematic review (SR) development. However, the data extraction process still relies too much on manual effort which is slow, costly, and subject to human error. In this study, we developed a text summarization system aimed at enhancing productivity and reducing errors in the traditional data extraction process.\n\n\nMETHODS\nWe developed a computer system that used machine learning and natural language processing approaches to automatically generate summaries of full-text scientific publications. The summaries at the sentence and fragment levels were evaluated in finding common clinical SR data elements such as sample size, group size, and PICO values. We compared the computer-generated summaries with human written summaries (title and abstract) in terms of the presence of necessary information for the data extraction as presented in the Cochrane review's study characteristics tables.\n\n\nRESULTS\nAt the sentence level, the computer-generated summaries covered more information than humans do for systematic reviews (recall 91.2% vs. 83.8%, p<0.001). They also had a better density of relevant sentences (precision 59% vs. 39%, p<0.001). At the fragment level, the ensemble approach combining rule-based, concept mapping, and dictionary-based methods performed better than individual methods alone, achieving an 84.7% F-measure.\n\n\nCONCLUSION\nComputer-generated summaries are potential alternative information sources for data extraction in systematic review development. Machine learning and natural language processing are promising approaches to the development of such an extractive summarization system.",
"title": ""
},
{
"docid": "7f897e5994685f0b158da91cef99c855",
"text": "Cloud computing and its pay-as-you-go model continue to provide significant cost benefits and a seamless service delivery model for cloud consumers. The evolution of small-scale and large-scale geo-distributed datacenters operated and managed by individual cloud service providers raises new challenges in terms of effective global resource sharing and management of autonomously-controlled individual datacenter resources. Earlier solutions for geo-distributed clouds have focused primarily on achieving global efficiency in resource sharing that results in significant inefficiencies in local resource allocation for individual datacenters leading to unfairness in revenue and profit earned. In this paper, we propose a new contracts-based resource sharing model for federated geo-distributed clouds that allows cloud service providers to establish resource sharing contracts with individual datacenters apriori for defined time intervals during a 24 hour time period. Based on the established contracts, individual cloud service providers employ a cost-aware job scheduling and provisioning algorithm that enables tasks to complete and meet their response time requirements. The proposed techniques are evaluated through extensive experiments using realistic workloads and the results demonstrate the effectiveness, scalability and resource sharing efficiency of the proposed model.",
"title": ""
}
] |
scidocsrr
|
2b3ef3368782f4c4de17ddecc03f3a18
|
Habits in everyday life: thought, emotion, and action.
|
[
{
"docid": "a25041f4b95b68d2b8b9356d2f383b69",
"text": "The authors review evidence that self-control may consume a limited resource. Exerting self-control may consume self-control strength, reducing the amount of strength available for subsequent self-control efforts. Coping with stress, regulating negative affect, and resisting temptations require self-control, and after such self-control efforts, subsequent attempts at self-control are more likely to fail. Continuous self-control efforts, such as vigilance, also degrade over time. These decrements in self-control are probably not due to negative moods or learned helplessness produced by the initial self-control attempt. These decrements appear to be specific to behaviors that involve self-control; behaviors that do not require self-control neither consume nor require self-control strength. It is concluded that the executive component of the self--in particular, inhibition--relies on a limited, consumable resource.",
"title": ""
}
] |
[
{
"docid": "2e6c44dd18f44512528752101f2161be",
"text": "This paper presents a LVDS (low voltage differential signal) driver, which works at 2Gbps, with a pre-emphasis circuit compensating the attenuation of limited bandwidth of channel. To make the output common-mode (CM) voltage stable over process, temperature, and supply voltage variations, a closed-loop negative feedback circuit is added in this work. The LVDS driver is designed in 0.13um CMOS technology using both thick (3.3V) and thin (1.2V) gate oxide device, simulated with transmission line model and package parasitic model. The simulated results show that this driver can operate up to 2Gbps with random data patterns.",
"title": ""
},
{
"docid": "832e1a93428911406759f696eb9cb101",
"text": "Reinforcement learning provides both qualitative and quantitative frameworks for understanding and modeling adaptive decision-making in the face of rewards and punishments. Here we review the latest dispatches from the forefront of this field, and map out some of the territories where lie monsters.",
"title": ""
},
{
"docid": "8994337878d2ac35464cb4af5e32fccc",
"text": "We describe an algorithm for approximate inference in graphical models based on Hölder’s inequality that provides upper and lower bounds on common summation problems such as computing the partition function or probability of evidence in a graphical model. Our algorithm unifies and extends several existing approaches, including variable elimination techniques such as minibucket elimination and variational methods such as tree reweighted belief propagation and conditional entropy decomposition. We show that our method inherits benefits from each approach to provide significantly better bounds on sum-product tasks.",
"title": ""
},
{
"docid": "65500c886a91a58ac95365c1e8539902",
"text": "This introductory overview tutorial on social network analysis (SNA) demonstrates through theory and practical case studies applications to research, particularly on social media, digital interaction and behavior records. NodeXL provides an entry point for non-programmers to access the concepts and core methods of SNA and allows anyone who can make a pie chart to now build, analyze and visualize complex networks.",
"title": ""
},
{
"docid": "9c2debf407dce58d77910ccdfc55a633",
"text": "In cybersecurity competitions, participants either create new or protect preconfigured information systems and then defend these systems against attack in a real-world setting. Institutions should consider important structural and resource-related issues before establishing such a competition. Critical infrastructures increasingly rely on information systems and on the Internet to provide connectivity between systems. Maintaining and protecting these systems requires an education in information warfare that doesn't merely theorize and describe such concepts. A hands-on, active learning experience lets students apply theoretical concepts in a physical environment. Craig Kaucher and John Saunders found that even for management-oriented graduate courses in information assurance, such an experience enhances the students' understanding of theoretical concepts. Cybersecurity exercises aim to provide this experience in a challenging and competitive environment. Many educational institutions use and implement these exercises as part of their computer science curriculum, and some are organizing competitions with commercial partners as capstone exercises, ad hoc hack-a-thons, and scenario-driven, multiday, defense-only competitions. Participants have exhibited much enthusiasm for these exercises, from the DEFCON capture-the-flag exercise to the US Military Academy's Cyber Defense Exercise (CDX). In February 2004, the US National Science Foundation sponsored the Cyber Security Exercise Workshop aimed at harnessing this enthusiasm and interest. The educators, students, and government and industry representatives attending the workshop discussed the feasibility and desirability of establishing regular cybersecurity exercises for postsecondary-level students. This article summarizes the workshop report.",
"title": ""
},
{
"docid": "eb18d3bab3346ede781d11433f1267b4",
"text": "INTRODUCTION\nIn the developing countries, diabetes mellitus as a chronic diseases, have replaced infectious diseases as the main causes of morbidity and mortality. International Diabetes Federation (IDF) recently estimates 382 million people have diabetes globally and more than 34.6 million people in the Middle East Region and this number will increase to 67.9 million by 2035. The aim of this study was to analyze Iran's research performance on diabetes in national and international context.\n\n\nMETHODS\nThis Scientometric analysis is based on the Iranian publication data in diabetes research retrieved from the Scopus citation database till the end of 2014. The string used to retrieve the data was developed using \"diabetes\" keyword in title, abstract and keywords, and finally Iran in the affiliation field was our main string.\n\n\nRESULTS\nIran's cumulative publication output in diabetes research consisted of 4425 papers from 1968 to 2014, with an average number of 96.2 papers per year and an annual average growth rate of 25.5%. Iran ranked 25th place with 4425 papers among top 25 countries with a global share of 0.72 %. Average of Iran's publication output was 6.19 citations per paper. The average citation per paper for Iranian publications in diabetes research increased from 1.63 during 1968-1999 to 10.42 for 2014.\n\n\nCONCLUSIONS\nAlthough diabetic population of Iran is increasing, number of diabetes research is not remarkable. International Diabetes Federation suggested increased funding for research in diabetes in Iran for cost-effective diabetes prevention and treatment. In addition to universal and comprehensive services for diabetes care and treatment provided by Iranian health care system, Iranian policy makers should invest more on diabetes research.",
"title": ""
},
{
"docid": "102bec350390b46415ae07128cb4e77f",
"text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.",
"title": ""
},
{
"docid": "8e18fa3850177d016a85249555621723",
"text": "Obstacle fusion algorithms usually perform obstacle association and gating in order to improve the obstacle position if it was detected by multiple sensors. However, this strategy is not common in multi sensor occupancy grid fusion. Thus, the quality of the fused grid, in terms of obstacle position accuracy, largely depends on the sensor with the lowest accuracy. In this paper an efficient method to associate obstacles across sensor grids is proposed. Imprecise sensors are discounted locally in cells where a more accurate sensor, that detected the same obstacle, derived free space. Furthermore, fixed discount factors to optimize false negative and false positive rates are used. Because of its generic formulation with the covariance of each sensor grid, the method is scalable to any sensor setup. The quantitative evaluation with a highly precise navigation map shows an increased obstacle position accuracy compared to standard evidential occupancy grid fusion.",
"title": ""
},
{
"docid": "f68f82e0d7f165557433580ad1e3e066",
"text": "Four experiments demonstrate effects of prosodic structure on speech production latencies. Experiments 1 to 3 exploit a modified version of the Sternberg et al. (1978, 1980) prepared speech production paradigm to look for evidence of the generation of prosodic structure during the final stages of sentence production. Experiment 1 provides evidence that prepared sentence production latency is a function of the number of phonological words that a sentence comprises when syntactic structure, number of lexical items, and number of syllables are held constant. Experiment 2 demonstrated that production latencies in Experiment 1 were indeed determined by prosodic structure rather than the number of content words that a sentence comprised. The phonological word effect was replicated in Experiment 3 using utterances with a different intonation pattern and phrasal structure. Finally, in Experiment 4, an on-line version of the sentence production task provides evidence for the phonological word as the preferred unit of articulation during the on-line production of continuous speech. Our findings are consistent with the hypothesis that the phonological word is a unit of processing during the phonological encoding of connected speech. q 1997 Academic Press",
"title": ""
},
{
"docid": "82180726cc1aaaada69f3b6cb0e89acc",
"text": "The wheelchair is the major means of transport for physically disabled people. However, it cannot overcome architectural barriers such as curbs and stairs. In this paper, the authors proposed a method to avoid falling down of a wheeled inverted pendulum type robotic wheelchair for climbing stairs. The problem of this system is that the feedback gain of the wheels cannot be set high due to modeling errors and gear backlash, which results in the movement of wheels. Therefore, the wheels slide down the stairs or collide with the side of the stairs, and finally the wheelchair falls down. To avoid falling down, the authors proposed a slider control strategy based on skyhook model in order to decrease the movement of wheels, and a rotary link control strategy based on the staircase dimensions in order to avoid collision or slide down. The effectiveness of the proposed fall avoidance control strategy was validated by ODE simulations and the prototype wheelchair. Keywords—EPW, fall avoidance control, skyhook, wheeled inverted pendulum.",
"title": ""
},
{
"docid": "f84f7ad81967a6704490243b2b1fbbe4",
"text": "A fundamental question in frontal lobe function is how motivational and emotional parameters of behavior apply to executive processes. Recent advances in mood and personality research and the technology and methodology of brain research provide opportunities to address this question empirically. Using event-related-potentials to track error monitoring in real time, the authors demonstrated that variability in the amplitude of the error-related negativity (ERN) is dependent on mood and personality variables. College students who are high on negative affect (NA) and negative emotionality (NEM) displayed larger ERN amplitudes early in the experiment than participants who are low on these dimensions. As the high-NA and -NEM participants disengaged from the task, the amplitude of the ERN decreased. These results reveal that affective distress and associated behavioral patterns are closely related with frontal lobe executive functions.",
"title": ""
},
{
"docid": "b31aaa6805524495f57a2f54d0dd86f1",
"text": "CLINICAL HISTORY A 54-year-old white female was seen with a 10-year history of episodes of a burning sensation of the left ear. The episodes are preceded by nausea and a hot feeling for about 15 seconds and then the left ear becomes visibly red for an average of about 1 hour, with a range from about 30 minutes to 2 hours. About once every 2 years, she would have a flurry of episodes occurring over about a 1-month period during which she would average about five episodes with a range of 1 to 6. There was also an 18-year history of migraine without aura occurring about once a year. At the age of 36 years, she developed left-sided pulsatile tinnitus. A cerebral arteriogram revealed a proximal left internal carotid artery occlusion of uncertain etiology after extensive testing. An MRI scan at the age of 45 years was normal. Neurological examination was normal. A carotid ultrasound study demonstrated complete occlusion of the left internal carotid artery and a normal right. Question.—What is the diagnosis?",
"title": ""
},
{
"docid": "1d29d30089ffd9748c925a20f8a1216e",
"text": "• Users may freely distribute the URL that is used to identify this publication. • Users may download and/or print one copy of the publication from the University of Birmingham research portal for the purpose of private study or non-commercial research. • User may use extracts from the document in line with the concept of ‘fair dealing’ under the Copyright, Designs and Patents Act 1988 (?) • Users may not further distribute the material nor use it for the purposes of commercial gain.",
"title": ""
},
{
"docid": "af0dfe672a8828587e3b27ef473ea98e",
"text": "Machine comprehension of text is the overarching goal of a great deal of research in natural language processing. The Machine Comprehension Test (Richardson et al., 2013) was recently proposed to assess methods on an open-domain, extensible, and easy-to-evaluate task consisting of two datasets. In this paper we develop a lexical matching method that takes into account multiple context windows, question types and coreference resolution. We show that the proposed method outperforms the baseline of Richardson et al. (2013), and despite its relative simplicity, is comparable to recent work using machine learning. We hope that our approach will inform future work on this task. Furthermore, we argue that MC500 is harder than MC160 due to the way question answer pairs were created.",
"title": ""
},
{
"docid": "565f815ef0c1dd5107f053ad39dade20",
"text": "Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.",
"title": ""
},
{
"docid": "f6e8bda7c3915fa023f1b0f88f101f46",
"text": "This paper presents a formulation to the obstacle avoidance problem for semi-autonomous ground vehicles. The planning and tracking problems have been divided into a two-level hierarchical controller. The high level solves a nonlinear model predictive control problem to generate a feasible and obstacle free path. It uses a nonlinear vehicle model and utilizes a coordinate transformation which uses vehicle position along a path as the independent variable. The low level uses a higher fidelity model and solves the MPC problem with a sequential quadratic programming approach to track the planned path. Simulations show the method’s ability to safely avoid multiple obstacles while tracking the lane centerline. Experimental tests on a semi-autonomous passenger vehicle driving at high speed on ice show the effectiveness of the approach.",
"title": ""
},
{
"docid": "3f5a6580d3c8d13a8cefaea9fd6f68b2",
"text": "Most theorizing on the relationship between corporate social/environmental performance (CSP) and corporate financial performance (CFP) assumes that the current evidence is too fractured or too variable to draw any generalizable conclusions. With this integrative, quantitative study, we intend to show that the mainstream claim that we have little generalizable knowledge about CSP and CFP is built on shaky grounds. Providing a methodologically more rigorous review than previous efforts, we conduct a meta-analysis of 52 studies (which represent the population of prior quantitative inquiry) yielding a total sample size of 33,878 observations. The metaanalytic findings suggest that corporate virtue in the form of social responsibility and, to a lesser extent, environmental responsibility is likely to pay off, although the operationalizations of CSP and CFP also moderate the positive association. For example, CSP appears to be more highly correlated with accounting-based measures of CFP than with market-based indicators, and CSP reputation indices are more highly correlated with CFP than are other indicators of CSP. This meta-analysis establishes a greater degree of certainty with respect to the CSP–CFP relationship than is currently assumed to exist by many business scholars.",
"title": ""
},
{
"docid": "ccc3cf21c4c97f9c56915b4d1e804966",
"text": "In this paper we present a prototype of a Microwave Imaging (MI) system for breast cancer detection. Our system is based on low-cost off-the-shelf microwave components, custom-made antennas, and a small form-factor processing system with an embedded Field-Programmable Gate Array (FPGA) for accelerating the execution of the imaging algorithm. We show that our system can compete with a vector network analyzer in terms of accuracy, and it is more than 20x faster than a high-performance server at image reconstruction.",
"title": ""
},
{
"docid": "6f72afeb0a2c904e17dca27f53be249e",
"text": "With its three-term functionality offering treatment of both transient and steady-state responses, proportional-integral-derivative (PID) control provides a generic and efficient solution to real-world control problems. The wide application of PID control has stimulated and sustained research and development to \"get the best out of PID\", and \"the search is on to find the next key technology or methodology for PID tuning\". This article presents remedies for problems involving the integral and derivative terms. PID design objectives, methods, and future directions are discussed. Subsequently, a computerized simulation-based approach is presented, together with illustrative design results for first-order, higher order, and nonlinear plants. Finally, we discuss differences between academic research and industrial practice, so as to motivate new research directions in PID control.",
"title": ""
},
{
"docid": "072d187f56635ebc574f2eedb8a91d14",
"text": "With the development of location-based social networks, an increasing amount of individual mobility data accumulate over time. The more mobility data are collected, the better we can understand the mobility patterns of users. At the same time, we know a great deal about online social relationships between users, providing new opportunities for mobility prediction. This paper introduces a noveltyseeking driven predictive framework for mining location-based social networks that embraces not only a bunch of Markov-based predictors but also a series of location recommendation algorithms. The core of this predictive framework is the cooperation mechanism between these two distinct models, determining the propensity of seeking novel and interesting locations.",
"title": ""
}
] |
scidocsrr
|
a6490c57d5ff74f170b49165fa9ec1de
|
Cooperative Co-evolution for large scale optimization through more frequent random grouping
|
[
{
"docid": "d099cf0b4a74ddb018775b524ec92788",
"text": "This report proposes 15 large-scale benchmark problems as an extension to the existing CEC’2010 large-scale global optimization benchmark suite. The aim is to better represent a wider range of realworld large-scale optimization problems and provide convenience and flexibility for comparing various evolutionary algorithms specifically designed for large-scale global optimization. Introducing imbalance between the contribution of various subcomponents, subcomponents with nonuniform sizes, and conforming and conflicting overlapping functions are among the major new features proposed in this report.",
"title": ""
},
{
"docid": "07bbe54e3d0c9ef27ef5f9f1f1a2150c",
"text": "Evolutionary algorithms (EAs) have been applied with success to many numerical and combinatorial optimization problems in recent years. However, they often lose their effectiveness and advantages when applied to large and complex problems, e.g., those with high dimensions. Although cooperative coevolution has been proposed as a promising framework for tackling high-dimensional optimization problems, only limited studies were reported by decomposing a high-dimensional problem into single variables (dimensions). Such methods of decomposition often failed to solve nonseparable problems, for which tight interactions exist among different decision variables. In this paper, we propose a new cooperative coevolution framework that is capable of optimizing large scale nonseparable problems. A random grouping scheme and adaptive weighting are introduced in problem decomposition and coevolution. Instead of conventional evolutionary algorithms, a novel differential evolution algorithm is adopted. Theoretical analysis is presented in this paper to show why and how the new framework can be effective for optimizing large nonseparable problems. Extensive computational studies are also carried out to evaluate the performance of newly proposed algorithm on a large number of benchmark functions with up to 1000 dimensions. The results show clearly that our framework and algorithm are effective as well as efficient for large scale evolutionary optimisation problems. We are unaware of any other evolutionary algorithms that can optimize 1000-dimension nonseparable problems as effectively and efficiently as we have done.",
"title": ""
}
] |
[
{
"docid": "f35dc45e28f2483d5ac66271590b365d",
"text": "We present a vector space–based model for selectional preferences that predicts plausibility scores for argument headwords. It does not require any lexical resources (such as WordNet). It can be trained either on one corpus with syntactic annotation, or on a combination of a small semantically annotated primary corpus and a large, syntactically analyzed generalization corpus. Our model is able to predict inverse selectional preferences, that is, plausibility scores for predicates given argument heads. We evaluate our model on one NLP task (pseudo-disambiguation) and one cognitive task (prediction of human plausibility judgments), gauging the influence of different parameters and comparing our model against other model classes. We obtain consistent benefits from using the disambiguation and semantic role information provided by a semantically tagged primary corpus. As for parameters, we identify settings that yield good performance across a range of experimental conditions. However, frequency remains a major influence of prediction quality, and we also identify more robust parameter settings suitable for applications with many infrequent items.",
"title": ""
},
{
"docid": "2793e8eb1410b2379a8a416f0560df0a",
"text": "Alzheimer’s disease (AD) transgenic mice have been used as a standard AD model for basic mechanistic studies and drug discovery. These mouse models showed symbolic AD pathologies including β-amyloid (Aβ) plaques, gliosis and memory deficits but failed to fully recapitulate AD pathogenic cascades including robust phospho tau (p-tau) accumulation, clear neurofibrillary tangles (NFTs) and neurodegeneration, solely driven by familial AD (FAD) mutation(s). Recent advances in human stem cell and three-dimensional (3D) culture technologies made it possible to generate novel 3D neural cell culture models that recapitulate AD pathologies including robust Aβ deposition and Aβ-driven NFT-like tau pathology. These new 3D human cell culture models of AD hold a promise for a novel platform that can be used for mechanism studies in human brain-like environment and high-throughput drug screening (HTS). In this review, we will summarize the current progress in recapitulating AD pathogenic cascades in human neural cell culture models using AD patient-derived induced pluripotent stem cells (iPSCs) or genetically modified human stem cell lines. We will also explain how new 3D culture technologies were applied to accelerate Aβ and p-tau pathologies in human neural cell cultures, as compared the standard two-dimensional (2D) culture conditions. Finally, we will discuss a potential impact of the human 3D human neural cell culture models on the AD drug-development process. These revolutionary 3D culture models of AD will contribute to accelerate the discovery of novel AD drugs.",
"title": ""
},
{
"docid": "bcb9886f4ba3651793581e021030cde2",
"text": "This study looked at the individual difference correlates of self-rated character strengths and virtues. In all, 280 adults completed a short 24-item measure of strengths, a short personality measure of the Big Five traits and a fluid intelligence test. The Cronbach alphas for the six higher order virtues were satisfactory but factor analysis did not confirm the a priori classification yielding five interpretable factors. These factors correlated significantly with personality and intelligence. Intelligence and neuroticism were correlated negatively with all the virtues, while extraversion and conscientiousness were positively correlated with all virtues. Structural equation modeling showed personality and religiousness moderated the effect of intelligence on the virtues. Extraversion and openness were the largest correlates of the virtues. The use of shortened measured in research is discussed.",
"title": ""
},
{
"docid": "d89d80791ac8157d054652e5f1292ebb",
"text": "The Great Gatsby Curve, the observation that for OECD countries, greater crosssectional income inequality is associated with lower mobility, has become a prominent part of scholarly and policy discussions because of its implications for the relationship between inequality of outcomes and inequality of opportunities. We explore this relationship by focusing on evidence and interpretation of an intertemporal Gatsby Curve for the United States. We consider inequality/mobility relationships that are derived from nonlinearities in the transmission process of income from parents to children and the relationship that is derived from the effects of inequality of socioeconomic segregation, which then affects children. Empirical evidence for the mechanisms we identify is strong. We find modest reduced form evidence and structural evidence of an intertemporal Gatsby Curve for the US as mediated by social influences. Steven N. Durlauf Ananth Seshadri Department of Economics Department of Economics University of Wisconsin University of Wisconsin 1180 Observatory Drive 1180 Observatory Drive Madison WI, 53706 Madison WI, 53706 durlauf@gmail.com aseshadr@ssc.wisc.edu",
"title": ""
},
{
"docid": "6e02cdb0ade3479e0df03c30d9d69fa3",
"text": "Reinforcement learning is considered as a promising direction for driving policy learning. However, training autonomous driving vehicle with reinforcement learning in real environment involves non-affordable trial-and-error. It is more desirable to first train in a virtual environment and then transfer to the real environment. In this paper, we propose a novel realistic translation network to make model trained in virtual environment be workable in real world. The proposed network can convert non-realistic virtual image input into a realistic one with similar scene structure. Given realistic frames as input, driving policy trained by reinforcement learning can nicely adapt to real world driving. Experiments show that our proposed virtual to real (VR) reinforcement learning (RL) works pretty well. To our knowledge, this is the first successful case of driving policy trained by reinforcement learning that can adapt to real world driving data.",
"title": ""
},
{
"docid": "cbe1dc1b56716f57fca0977383e35482",
"text": "This project explores a novel experimental setup towards building spoken, multi-modally rich, and human-like multiparty tutoring agent. A setup is developed and a corpus is collected that targets the development of a dialogue system platform to explore verbal and nonverbal tutoring strategies in multiparty spoken interactions with embodied agents. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. With the participants sits a tutor that helps the participants perform the task and organizes and balances their interaction. Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies were coupled with manual annotations to build a situated model of the interaction based on the participants personalities, their temporally-changing state of attention, their conversational engagement and verbal dominance, and the way these are correlated with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. At the end of this chapter we discuss the potential areas of research and developments this work opens and some of the challenges that lie in the road ahead.",
"title": ""
},
{
"docid": "5c112eb4be8321d79b63790e84de278f",
"text": "Service-dominant logic continues its evolution, facilitated by an active community of scholars throughout the world. Along its evolutionary path, there has been increased recognition of the need for a crisper and more precise delineation of the foundational premises and specification of the axioms of S-D logic. It also has become apparent that a limitation of the current foundational premises/axioms is the absence of a clearly articulated specification of the mechanisms of (often massive-scale) coordination and cooperation involved in the cocreation of value through markets and, more broadly, in society. This is especially important because markets are even more about cooperation than about the competition that is more frequently discussed. To alleviate this limitation and facilitate a better understanding of cooperation (and coordination), an eleventh foundational premise (fifth axiom) is introduced, focusing on the role of institutions and institutional arrangements in systems of value cocreation: service ecosystems. Literature on institutions across multiple social disciplines, including marketing, is briefly reviewed and offered as further support for this fifth axiom.",
"title": ""
},
{
"docid": "3ee79d711d6f8d1bbaef7e348a1c8dbc",
"text": "As a commentary to Juhani Iivari’s insightful essay, I briefly analyze design science research as an embodiment of three closely related cycles of activities. The Relevance Cycle inputs requirements from the contextual environment into the research and introduces the research artifacts into environmental field testing. The Rigor Cycle provides grounding theories and methods along with domain experience and expertise from the foundations knowledge base into the research and adds the new knowledge generated by the research to the growing knowledge base. The central Design Cycle supports a tighter loop of research activity for the construction and evaluation of design artifacts and processes. The recognition of these three cycles in a research project clearly positions and differentiates design science from other research paradigms. The commentary concludes with a claim to the pragmatic nature of design science.",
"title": ""
},
{
"docid": "3d238cc92a56e64f32f08e0833d117b3",
"text": "The efficiency of two biomass pretreatment technologies, dilute acid hydrolysis and dissolution in an ionic liquid, are compared in terms of delignification, saccharification efficiency and saccharide yields with switchgrass serving as a model bioenergy crop. When subject to ionic liquid pretreatment (dissolution and precipitation of cellulose by anti-solvent) switchgrass exhibited reduced cellulose crystallinity, increased surface area, and decreased lignin content compared to dilute acid pretreatment. Pretreated material was characterized by powder X-ray diffraction, scanning electron microscopy, Fourier transform infrared spectroscopy, Raman spectroscopy and chemistry methods. Ionic liquid pretreatment enabled a significant enhancement in the rate of enzyme hydrolysis of the cellulose component of switchgrass, with a rate increase of 16.7-fold, and a glucan yield of 96.0% obtained in 24h. These results indicate that ionic liquid pretreatment may offer unique advantages when compared to the dilute acid pretreatment process for switchgrass. However, the cost of the ionic liquid process must also be taken into consideration.",
"title": ""
},
{
"docid": "709a6b1a5c49bf0e41a24ed5a6b392c9",
"text": "Th e paper presents a literature review of the main concepts of hotel revenue management (RM) and current state-of-the-art of its theoretical research. Th e article emphasises on the diff erent directions of hotel RM research and is structured around the elements of the hotel RM system and the stages of RM process. Th e elements of the hotel RM system discussed in the paper include hotel RM centres (room division, F&B, function rooms, spa & fi tness facilities, golf courses, casino and gambling facilities, and other additional services), data and information, the pricing (price discrimination, dynamic pricing, lowest price guarantee) and non-pricing (overbookings, length of stay control, room availability guarantee) RM tools, the RM software, and the RM team. Th e stages of RM process have been identifi ed as goal setting, collection of data and information, data analysis, forecasting, decision making, implementation and monitoring. Additionally, special attention is paid to ethical considerations in RM practice, the connections between RM and customer relationship management, and the legal aspect of RM. Finally, the article outlines future research perspectives and discloses potential evolution of RM in future.",
"title": ""
},
{
"docid": "464b66e2e643096bd344bea8026f4780",
"text": "In this paper we describe an application of our approach to temporal text mining in Competitive Intelligence for the biotechnology and pharmaceutical industry. The main objective is to identify changes and trends of associations among entities of interest that appear in text over time. Text Mining (TM) exploits information contained in textual data in various ways, including the type of analyses that are typically performed in Data Mining [17]. Information Extraction (IE) facilitates the semi-automatic creation of metadata repositories from text. Temporal Text mining combines Information Extraction and Data Mining techniques upon textual repositories and incorporates time and ontologies‟ issues. It consists of three main phases; the Information Extraction phase, the ontology driven generalisation of templates and the discovery of associations over time. Treatment of the temporal dimension is essential to our approach since it influences both the annotation part (IE) of the system as well as the mining part.",
"title": ""
},
{
"docid": "7232ba57ae29c9ec395fe2b4501b6fd3",
"text": "We propose a novel approach for using unsupervised boosting to create an ensemble of generative models, where models are trained in sequence to correct earlier mistakes. Our metaalgorithmic framework can leverage any existing base learner that permits likelihood evaluation, including recent deep expressive models. Further, our approach allows the ensemble to include discriminative models trained to distinguish real data from model-generated data. We show theoretical conditions under which incorporating a new model in the ensemble will improve the fit and empirically demonstrate the effectiveness of our black-box boosting algorithms on density estimation, classification, and sample generation on benchmark datasets for a wide range of generative models.",
"title": ""
},
{
"docid": "02e1e622c64b67c1a170ce36a3873082",
"text": "As retrieval systems become more complex, learning to rank approaches are being developed to automatically tune their parameters. Using online learning to rank, retrieval systems can learn directly from implicit feedback inferred from user interactions. In such an online setting, algorithms must obtain feedback for effective learning while simultaneously utilizing what has already been learned to produce high quality results. We formulate this challenge as an exploration–exploitation dilemma and propose two methods for addressing it. By adding mechanisms for balancing exploration and exploitation during learning, each method extends a state-of-the-art learning to rank method, one based on listwise learning and the other on pairwise learning. Using a recently developed simulation framework that allows assessment of online performance, we empirically evaluate both methods. Our results show that balancing exploration and exploitation can substantially and significantly improve the online retrieval performance of both listwise and pairwise approaches. In addition, the results demonstrate that such a balance affects the two approaches in different ways, especially when user feedback is noisy, yielding new insights relevant to making online learning to rank effective in practice.",
"title": ""
},
{
"docid": "84750fa3f3176d268ae85830a87f7a24",
"text": "Context: The pull-based model, widely used in distributed software development, offers an extremely low barrier to entry for potential contributors (anyone can submit of contributions to any project, through pull-requests). Meanwhile, the project’s core team must act as guardians of code quality, ensuring that pull-requests are carefully inspected before being merged into the main development line. However, with pull-requests becoming increasingly popular, the need for qualified reviewers also increases. GitHub facilitates this, by enabling the crowd-sourcing of pull-request reviews to a larger community of coders than just the project’s core team, as a part of their social coding philosophy. However, having access to more potential reviewers does not necessarily mean that it’s easier to find the right ones (the “needle in a haystack” problem). If left unsupervised, this process may result in communication overhead and delayed pull-request processing. Objective: This study aims to investigate whether and how previous approaches used in bug triaging and code review can be adapted to recommending reviewers for pull-requests, and how to improve the recommendation performance. Method: First, we extend three typical approaches used in bug triaging and code review for the new challenge of assigning reviewers to pull-requests. Second, we analyze social relations between contributors and reviewers, and propose a novel approach by mining each project’s comment networks (CNs). Finally, we combine the CNs with traditional approaches, and evaluate the effectiveness of all these methods on 84 GitHub projects through both quantitative and qualitative analysis. Results: We find that CN-based recommendation can achieve, by itself, similar performance as the traditional approaches. However, the mixed approaches can achieve significant improvements compared to using either of them independently. Conclusion: Our study confirms that traditional approaches to bug triaging and code review are feasible for pull-request reviewer recommendations on GitHub. Furthermore, their performance can be improved significantly by combining them with information extracted from prior social interactions between developers on GitHub. These results prompt for novel tools to support process automation in social coding platforms, that combine social (e.g., common interests among developers) and technical factors (e.g., developers’ expertise). © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4eec0ef04e80280c07bc1e9fd41e942a",
"text": "One of the challenges with research on student engagement is the large variation in the measurement of this construct, which has made it challenging to compare fi ndings across studies. This chapter contributes to our understanding of the measurement of student in engagement in three ways. First, we describe strengths and limitations of different methods for assessing student engagement (i.e., self-report measures, experience sampling techniques, teacher ratings, interviews, and observations). Second, we compare and contrast 11 self-report survey measures of student engagement that have been used in prior research. Across these 11 measures, we describe what is measured (scale name and items), use of measure, samples, and the extent of reliability and validity information available on each measure. Finally, we outline limitations with current approaches to measurement and promising future directions. Researchers, educators, and policymakers are increasingly focused on student engagement as the key to addressing problems of low achievement, high levels of student boredom, alienation, and high dropout rates (Fredricks, Blumenfeld, & Paris, 2004 ) . Students become more disengaged as they progress from elementary to middle school, with some estimates that 25–40% of youth are showing signs of disengagement (i.e., uninvolved, apathetic, not trying very hard, and not paying attention) (Steinberg, Brown, & Dornbush, 1996 ; Yazzie-Mintz, 2007 ) . The consequences of disengagement for middle and high school youth from disadvantaged backgrounds are especially severe; these youth are less likely to graduate from high school and face limited employment prospects, increasing their risk for poverty, poorer health, and involvement in the criminal justice system (National Research Council and the Institute of Medicine, 2004 ) . Although there is growing interest in student engagement, there has been considerable variation in how this construct has been conceptualized over time (Appleton, Christenson, & Furlong, 2008 ; Fredricks et al., 2004 ; Jimerson, Campos, & Grief, 2003 ) . Scholars have used a broad range J. A. Fredricks , Ph.D. (*) Human Development , Connecticut College , New London , CT , USA e-mail: jfred@conncoll.edu W. McColskey , Ph.D. SERVE Center , University of North Carolina , Greensboro , NC , USA e-mail: wmccolsk@serve.org The Measurement of Student Engagement: A Comparative Analysis of Various Methods and Student Self-report Instruments Jennifer A. Fredricks and Wendy McColskey 764 J.A. Fredricks and W. McColskey of terms including student engagement, school engagement, student engagement in school, academic engagement, engagement in class, and engagement in schoolwork. In addition, there has been variation in the number of subcomponents of engagement including different conceptualizations. Some scholars have proposed a two-dimensional model of engagement which includes behavior (e.g., participation, effort, and positive conduct) and emotion (e.g., interest, belonging, value, and positive emotions) (Finn, 1989 ; Marks, 2000 ; Skinner, Kindermann, & Furrer, 2009b ) . More recently, others have outlined a three-component model of engagement that includes behavior, emotion, and a cognitive dimension (i.e., self-regulation, investment in learning, and strategy use) (e.g., Archaumbault, 2009 ; Fredricks et al., 2004 ; Jimerson et al., 2003 ; Wigfi eld et al., 2008 ) . 
Finally, Christenson and her colleagues (Appleton, Christenson, Kim, & Reschly, 2006 ; Reschly & Christenson, 2006 ) conceptualized engagement as having four dimensions: academic, behavioral, cognitive, and psychological (subsequently referred to as affective) engagement. In this model, aspects of behavior are separated into two different components: academics, which includes time on task, credits earned, and homework completion, and behavior, which includes attendance, class participation, and extracurricular participation. One commonality across the myriad of conceptualizations is that engagement is multidimensional. However, further theoretical and empirical work is needed to determine the extent to which these different dimensions are unique constructs and whether a three or four component model more accurately describes the construct of student engagement. Even when scholars have similar conceptualizations of engagement, there has been considerable variability in the content of items used in instruments. This has made it challenging to compare fi ndings from different studies. This chapter expands on our understanding of the measurement of student engagement in three ways. First, the strengths and limitations of different methods for assessing student engagement are described. Second, 11 self-report survey measures of student engagement that have been used in prior research are compared and contrasted on several dimensions (i.e., what is measured, purposes and uses, samples, and psychometric properties). Finally, we discuss limitations with current approaches to measurement. What is Student Engagement We defi ne student engagement as a meta-construct that includes behavioral, emotional, and cognitive engagement (Fredricks et al., 2004 ) . Although there are large individual bodies of literature on behavioral (i.e., time on task), emotional (i.e., interest and value), and cognitive engagement (i.e., self-regulation and learning strategies), what makes engagement unique is its potential as a multidimensional or “meta”-construct that includes these three dimensions. Behavioral engagement draws on the idea of participation and includes involvement in academic, social, or extracurricular activities and is considered crucial for achieving positive academic outcomes and preventing dropping out (Connell & Wellborn, 1991 ; Finn, 1989 ) . Other scholars defi ne behavioral engagement in terms of positive conduct, such as following the rules, adhering to classroom norms, and the absence of disruptive behavior such as skipping school or getting into trouble (Finn, Pannozzo, & Voelkl, 1995 ; Finn & Rock, 1997 ) . Emotional engagement focuses on the extent of positive (and negative) reactions to teachers, classmates, academics, or school. Others conceptualize emotional engagement as identifi cation with the school, which includes belonging, or a feeling of being important to the school, and valuing, or an appreciation of success in school-related outcomes (Finn, 1989 ; Voelkl, 1997 ) . Positive emotional engagement is presumed to create student ties to the institution and infl uence their willingness to do the work (Connell & Wellborn, 1991 ; Finn, 1989 ) . Finally, cognitive engagement is defi ned as student’s level of investment in learning. It includes being thoughtful, strategic, and willing to exert the necessary effort for comprehension of complex ideas or mastery of diffi cult skills (Corno & Mandinach, 1983 ; Fredricks et al., 2004 ; Meece, Blumenfeld, & Hoyle, 1988 ) . 
765 37 The Measurement of Student Engagement... An important question is how engagement differs from motivation. Although the terms are used interchangeably by some, they are different and the distinctions between them are important. Motivation refers to the underlying reasons for a given behavior and can be conceptualized in terms of the direction, intensity, quality, and persistence of one’s energies (Maehr & Meyer, 1997 ) . A proliferation of motivational constructs (e.g., intrinsic motivation, goal theory, and expectancy-value models) have been developed to answer two broad questions “Can I do this task” and “Do I want to do this task and why?” ( Eccles, Wigfi eld, & Schiefele, 1998 ) . One commonality across these different motivational constructs is an emphasis on individual differences and underlying psychological processes. In contrast, engagement tends to be thought of in terms of action, or the behavioral, emotional, and cognitive manifestations of motivation (Skinner, Kindermann, Connell, & Wellborn, 2009a ) . An additional difference is that engagement refl ects an individual’s interaction with context (Fredricks et al., 2004 ; Russell, Ainsley, & Frydenberg, 2005 ) . In other words, an individual is engaged in something (i.e., task, activity, and relationship), and their engagement cannot be separated from their environment. This means that engagement is malleable and is responsive to variations in the context that schools can target in interventions (Fredricks et al., 2004 ; Newmann, Wehlage, & Lamborn, 1992 ). The self-system model of motivational development (Connell, 1990 ; Connell & Wellborn, 1991 ; Deci & Ryan, 1985 ) provides one theoretical model for studying motivation and engagement. This model is based on the assumption that individuals have three fundamental motivational needs: autonomy, competence, and relatedness. If schools provide children with opportunities to meet these three needs, students will be more engaged. Students’ need for relatedness is more likely to occur in classrooms where teachers and peers create a caring and supportive environment; their need for autonomy is met when they feel like they have a choice and when they are motivated by internal rather than external factors; and their need for competence is met when they experience the classroom as optimal in structure and feel like they can achieve desired ends (Fredricks et al., 2004 ) . In contrast, if students experience schools as uncaring, coercive, and unfair, they will become disengaged or disaffected (Skinner et al., 2009a, 2009b ) . This model assumes that motivation is a necessary but not suffi cient precursor to engagement (Appleton et al., 2008 ; Connell & Wellborn, 1991 ) . Methods for Studying Engagement",
"title": ""
},
{
"docid": "678d9eab7d1e711f97bf8ef5aeaebcc4",
"text": "This work presents a study of current and future bus systems with respect to their security against various malicious attacks. After a brief description of the most well-known and established vehicular communication systems, we present feasible attacks and potential exposures for these automotive networks. We also provide an approach for secured automotive communication based on modern cryptographic mechanisms that provide secrecy, manipulation prevention and authentication to solve most of the vehicular bus security issues.",
"title": ""
},
{
"docid": "225b834e820b616e0ccfed7259499fd6",
"text": "Introduction: Actinic cheilitis (AC) is a lesion potentially malignant that affects the lips after prolonged exposure to solar ultraviolet (UV) radiation. The present study aimed to assess and describe the proliferative cell activity, using silver-stained nucleolar organizer region (AgNOR) quantification proteins, and to investigate the potential associations between AgNORs and the clinical aspects of AC lesions. Materials and methods: Cases diagnosed with AC were selected and reviewed from Center of Histopathological Diagnosis of the Institute of Biological Sciences, Passo Fundo University, Brazil. Clinical data including clinical presentation of the patients affected with AC were collected. The AgNOR techniques were performed in all recovered cases. The different microscopic areas of interest were printed with magnification of *1000, and in each case, 200 epithelial cell nuclei were randomly selected. The mean quantity in each nucleus for NORs was recorded. One-way analysis of variance was used for statistical analysis. Results: A total of 22 cases of AC were diagnosed. The patients were aged between 46 and 75 years (mean age: 55 years). Most of the patients affected were males presenting asymptomatic white plaque lesions in the lower lip. The mean value quantified for AgNORs was 2.4 ± 0.63, ranging between 1.49 and 3.82. No statistically significant difference was observed associating the quantity of AgNORs with the clinical aspects collected from the patients (p > 0.05). Conclusion: The present study reports the lack of association between the proliferative cell activity and the clinical aspects observed in patients affected by AC through the quantification of AgNORs. Clinical significance: Knowing the potential relation between the clinical aspects of AC and the proliferative cell activity quantified by AgNORs could play a significant role toward the early diagnosis of malignant lesions in the clinical practice. Keywords: Actinic cheilitis, Proliferative cell activity, Silver-stained nucleolar organizer regions.",
"title": ""
},
{
"docid": "8a32bdadcaa2c94f83e95c19e400835b",
"text": "Create a short summary of your paper (200 words), double-spaced. Your summary will say something like: In this action research study of my classroom of 7 grade mathematics, I investigated ______. I discovered that ____________. As a result of this research, I plan to ___________. You now begin your paper. Pages should be numbered, with the first page of text following the abstract as page one. (In Microsoft Word: after your abstract, rather than inserting a “page break” insert a “section break” to start on the next page; this will allow you to start the 3 page being numbered as page 1). You should divide this report of your research into sections. We should be able to identity the following sections and you may use these headings (headings should be bold, centered, and capitalized). Consider the page length to be a minimum.",
"title": ""
},
{
"docid": "c0484f3055d7e7db8dfea9d4483e1e06",
"text": "Metastasis the spread of cancer cells to distant organs, is the main cause of death for cancer patients. Metastasis is often mediated by lymphatic vessels that invade the primary tumor, and an early sign of metastasis is the presence of cancer cells in the regional lymph node (the first lymph node colonized by metastasizing cancer cells from a primary tumor). Understanding the interplay between tumorigenesis and lymphangiogenesis (the formation of lymphatic vessels associated with tumor growth) will provide us with new insights into mechanisms that modulate metastatic spread. In the long term, these insights will help to define new molecular targets that could be used to block lymphatic vessel-mediated metastasis and increase patient survival. Here, we review the molecular mechanisms of embryonic lymphangiogenesis and those that are recapitulated in tumor lymphangiogenesis, with a view to identifying potential targets for therapies designed to suppress tumor lymphangiogenesis and hence metastasis.",
"title": ""
},
{
"docid": "88ac730e4e54ecc527bcd188b7cc5bf5",
"text": "In this paper we outline the nature of Neuro-linguistic Programming and explore its potential for learning and teaching. The paper draws on current research by Mathison (2003) to illustrate the role of language and internal imagery in teacherlearner interactions, and the way language influences beliefs about learning. Neuro-linguistic Programming (NLP) developed in the USA in the 1970's. It has achieved widespread popularity as a method for communication and personal development. The title, coined by the founders, Bandler and Grinder (1975a), refers to purported systematic, cybernetic links between a person's internal experience (neuro), their language (linguistic) and their patterns of behaviour (programming). In essence NLP is a form of modelling that offers potential for systematic and detailed understanding of people's subjective experience. NLP is eclectic, drawing on models and strategies from a wide range of sources. We outline NLP's approach to teaching and learning, and explore applications through illustrative data from Mathison's study. A particular implication for the training of educators is that of attention to communication skills. Finally we summarise criticisms of NLP that may represent obstacles to its acceptance by academe.",
"title": ""
}
] |
scidocsrr
|
3cff79c9c9419de7a4a231917714c1e5
|
Design of Secure and Lightweight Authentication Protocol for Wearable Devices Environment
|
[
{
"docid": "a85d07ae3f19a0752f724b39df5eca2b",
"text": "Despite two decades of intensive research, it remains a challenge to design a practical anonymous two-factor authentication scheme, for the designers are confronted with an impressive list of security requirements (e.g., resistance to smart card loss attack) and desirable attributes (e.g., local password update). Numerous solutions have been proposed, yet most of them are shortly found either unable to satisfy some critical security requirements or short of a few important features. To overcome this unsatisfactory situation, researchers often work around it in hopes of a new proposal (but no one has succeeded so far), while paying little attention to the fundamental question: whether or not there are inherent limitations that prevent us from designing an “ideal” scheme that satisfies all the desirable goals? In this work, we aim to provide a definite answer to this question. We first revisit two foremost proposals, i.e. Tsai et al.'s scheme and Li's scheme, revealing some subtleties and challenges in designing such schemes. Then, we systematically explore the inherent conflicts and unavoidable trade-offs among the design criteria. Our results indicate that, under the current widely accepted adversarial model, certain goals are beyond attainment. This also suggests a negative answer to the open problem left by Huang et al. in 2014. To the best of knowledge, the present study makes the first step towards understanding the underlying evaluation metric for anonymous two-factor authentication, which we believe will facilitate better design of anonymous two-factor protocols that offer acceptable trade-offs among usability, security and privacy.",
"title": ""
}
] |
[
{
"docid": "f478bbf48161da50017d3ec9f8e677b4",
"text": "Between November 1998 and December 1999, trained medical record abstractors visited the Micronesian jurisdictions of Chuuk, Kosrae, Pohnpei, and Yap (the four states of the Federated States of Micronesia), as well as the Republic of Palau (Belau), the Republic of Kiribati, the Republic of the Marshall Islands (RMI), and the Republic of Nauru to review all available medical records in order to describe the epidemiology of cancer in Micronesia. Annualized age-adjusted, site-specific cancer period prevalence rates for individual jurisdictions were calculated. Site-specific cancer occurrence in Micronesia follows a pattern characteristic of developing nations. At the same time, cancers associated with developed countries are also impacting these populations. Recommended are jurisdiction-specific plans that outline the steps and resources needed to establish or improve local cancer registries; expand cancer awareness and screening activities; and improve diagnostic and treatment capacity.",
"title": ""
},
{
"docid": "62a51c43d4972d41d3b6cdfa23f07bb9",
"text": "To meet the development of Internet of Things (IoT), IETF has proposed IPv6 standards working under stringent low-power and low-cost constraints. However, the behavior and performance of the proposed standards have not been fully understood, especially the RPL routing protocol lying at the heart the protocol stack. In this work, we make an in-depth study on a popular implementation of the RPL (routing protocol for low power and lossy network) to provide insights and guidelines for the adoption of these standards. Specifically, we use the Contiki operating system and COOJA simulator to evaluate the behavior of the ContikiRPL implementation. We analyze the performance for different networking settings. Different from previous studies, our work is the first effort spanning across the whole life cycle of wireless sensor networks, including both the network construction process and the functioning stage. The metrics evaluated include signaling overhead, latency, energy consumption and so on, which are vital to the overall performance of a wireless sensor network. Furthermore, based on our observations, we provide a few suggestions for RPL implemented WSN. This study can also serve as a basis for future enhancement on the proposed standards.",
"title": ""
},
{
"docid": "6d97cbe726eca4b883cf7c8c2d939f8b",
"text": "In this paper, a new ensemble forecasting model for short-term load forecasting (STLF) is proposed based on extreme learning machine (ELM). Four important improvements are used to support the ELM for increased forecasting performance. First, a novel wavelet-based ensemble scheme is carried out to generate the individual ELM-based forecasters. Second, a hybrid learning algorithm blending ELM and the Levenberg-Marquardt method is proposed to improve the learning accuracy of neural networks. Third, a feature selection method based on the conditional mutual information is developed to select a compact set of input variables for the forecasting model. Fourth, to realize an accurate ensemble forecast, partial least squares regression is utilized as a combining approach to aggregate the individual forecasts. Numerical testing shows that proposed method can obtain better forecasting results in comparison with other standard and state-of-the-art methods.",
"title": ""
},
{
"docid": "cbb6bac245862ed0265f6d32e182df92",
"text": "With the explosion of online communication and publication, texts become obtainable via forums, chat messages, blogs, book reviews and movie reviews. Usually, these texts are much short and noisy without sufficient statistical signals and enough information for a good semantic analysis. Traditional natural language processing methods such as Bow-of-Word (BOW) based probabilistic latent semantic models fail to achieve high performance due to the short text environment. Recent researches have focused on the correlations between words, i.e., term dependencies, which could be helpful for mining latent semantics hidden in short texts and help people to understand them. Long short-term memory (LSTM) network can capture term dependencies and is able to remember the information for long periods of time. LSTM has been widely used and has obtained promising results in variants of problems of understanding latent semantics of texts. At the same time, by analyzing the texts, we find that a number of keywords contribute greatly to the semantics of the texts. In this paper, we establish a keyword vocabulary and propose an LSTM-based model that is sensitive to the words in the vocabulary; hence, the keywords leverage the semantics of the full document. The proposed model is evaluated in a short-text sentiment analysis task on two datasets: IMDB and SemEval-2016, respectively. Experimental results demonstrate that our model outperforms the baseline LSTM by 1%~2% in terms of accuracy and is effective with significant performance enhancement over several non-recurrent neural network latent semantic models (especially in dealing with short texts). We also incorporate the idea into a variant of LSTM named the gated recurrent unit (GRU) model and achieve good performance, which proves that our method is general enough to improve different deep learning models.",
"title": ""
},
{
"docid": "bf4776d6d01d63d3eb6dbeba693bf3de",
"text": "As the development of microprocessors, power electronic converters and electric motor drives, electric power steering (EPS) system which uses an electric motor came to use a few year ago. Electric power steering systems have many advantages over traditional hydraulic power steering systems in engine efficiency, space efficiency, and environmental compatibility. This paper deals with design and optimization of an interior permanent magnet (IPM) motor for power steering application. Simulated Annealing method is used for optimization. After optimization and finding motor parameters, An IPM motor and drive with mechanical parts of EPS system is simulated and performance evaluation of system is done.",
"title": ""
},
{
"docid": "71b0dbd905c2a9f4111dfc097bfa6c67",
"text": "In this paper, the authors undertake a study of cyber warfare reviewing theories, law, policies, actual incidents and the dilemma of anonymity. Starting with the United Kingdom perspective on cyber warfare, the authors then consider United States' views including the perspective of its military on the law of war and its general inapplicability to cyber conflict. Consideration is then given to the work of the United Nations' group of cyber security specialists and diplomats who as of July 2010 have agreed upon a set of recommendations to the United Nations Secretary General for negotiations on an international computer security treaty. An examination of the use of a nation's cybercrime law to prosecute violations that occur over the Internet indicates the inherent limits caused by the jurisdictional limits of domestic law to address cross-border cybercrime scenarios. Actual incidents from Estonia (2007), Georgia (2008), Republic of Korea (2009), Japan (2010), ongoing attacks on the United States as well as other incidents and reports on ongoing attacks are considered as well. Despite the increasing sophistication of such cyber attacks, it is evident that these attacks were met with a limited use of law and policy to combat them that can be only be characterised as a response posture defined by restraint. Recommendations are then examined for overcoming the attribution problem. The paper then considers when do cyber attacks rise to the level of an act of war by reference to the work of scholars such as Schmitt and Wingfield. Further evaluation of the special impact that non-state actors may have and some theories on how to deal with the problem of asymmetric players are considered. Discussion and possible solutions are offered. A conclusion is offered drawing some guidance from the writings of the Chinese philosopher Sun Tzu. Finally, an appendix providing a technical overview of the problem of attribution and the dilemma of anonymity in cyberspace is provided. 1. The United Kingdom Perspective \"If I went and bombed a power station in France, that would be an act of war. If I went on to the net and took out a power station, is that an act of war? One",
"title": ""
},
{
"docid": "a5d100fd83620d9cc868a33ab6367be2",
"text": "Identifying the lineage path of neural cells is critical for understanding the development of brain. Accurate neural cell detection is a crucial step to obtain reliable delineation of cell lineage. To solve this task, in this paper we present an efficient neural cell detection method based on SSD (single shot multibox detector) neural network model. Our method adapts the original SSD architecture and removes the unnecessary blocks, leading to a light-weight model. Moreover, we formulate the cell detection as a binary regression problem, which makes our model much simpler. Experimental results demonstrate that, with only a small training set, our method is able to accurately capture the neural cells under severe shape deformation in a fast way.",
"title": ""
},
{
"docid": "2a8f464e709dcae4e34f73654aefe31f",
"text": "LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks to chalk a path towards fifth generation mobile networks. The lack of open cellular ecosystem has limited applied research in this field within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of radio network, radio network programability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems.\n In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.",
"title": ""
},
{
"docid": "6513c4ca4197e9ff7028e527a621df0a",
"text": "The development of complex distributed systems demands for the creation of suitable architectural styles (or paradigms) and related run-time infrastructures. An emerging style that is receiving increasing attention is based on the notion of event. In an event-based architecture, distributed software components interact by generating and consuming events. An event is the occurrence of some state change in a component of a software system, made visible to the external world. The occurrence of an event in a component is asynchronously notified to any other component that has declared some interest in it. This paradigm (usually called “publish/subscribe” from the names of the two basic operations that regulate the communication) holds the promise of supporting a flexible and effective interaction among highly reconfigurable, distributed software components. In the past two years, we have developed an object-oriented infrastructure called JEDI (Java Event-based Distributed Infrastructure). JEDI supports the development and operation of event-based systems and has been used to implement a significant example of distributed system, namely, the OPSS workflow management system (WFMS). The paper illustrates JEDI main features and how we have used them to implement OPSS. Moreover, the paper provides an initial evaluation of our experiences in using the event-based architectural style and a classification of some of the event-based infrastructures presented in the literature.",
"title": ""
},
{
"docid": "4243f0bafe669ab862aaad2b184c6a0e",
"text": "Generating adversarial examples is an intriguing problem and an important way of understanding the working mechanism of deep neural networks. Most existing approaches generated perturbations in the image space, i.e., each pixel can be modified independently. However, in this paper we pay special attention to the subset of adversarial examples that are physically authentic – those corresponding to actual changes in 3D physical properties (like surface normals, illumination condition, etc.). These adversaries arguably pose a more serious concern, as they demonstrate the possibility of causing neural network failure by small perturbations of real-world 3D objects and scenes. In the contexts of object classification and visual question answering, we augment state-of-the-art deep neural networks that receive 2D input images with a rendering module (either differentiable or not) in front, so that a 3D scene (in the physical space) is rendered into a 2D image (in the image space), and then mapped to a prediction (in the output space). The adversarial perturbations can now go beyond the image space, and have clear meanings in the 3D physical world. Through extensive experiments, we found that a vast majority of image-space adversaries cannot be explained by adjusting parameters in the physical space, i.e., they are usually physically inauthentic. But it is still possible to successfully attack beyond the image space on the physical space (such that authenticity is enforced), though this is more difficult than image-space attacks, reflected in lower success rates and heavier perturbations required.",
"title": ""
},
{
"docid": "6737955fd1876a40fc0e662a4cac0711",
"text": "Cloud computing is a novel perspective for large scale distributed computing and parallel processing. It provides computing as a utility service on a pay per use basis. The performance and efficiency of cloud computing services always depends upon the performance of the user tasks submitted to the cloud system. Scheduling of the user tasks plays significant role in improving performance of the cloud services. Task scheduling is one of the main types of scheduling performed. This paper presents a detailed study of various task scheduling methods existing for the cloud environment. A brief analysis of various scheduling parameters considered in these methods is also discussed in this paper.",
"title": ""
},
{
"docid": "289942ca889ccea58d5b01dab5c82719",
"text": "Concepts of basal ganglia organization have changed markedly over the past decade, due to significant advances in our understanding of the anatomy, physiology and pharmacology of these structures. Independent evidence from each of these fields has reinforced a growing perception that the functional architecture of the basal ganglia is essentially parallel in nature, regardless of the perspective from which these structures are viewed. This represents a significant departure from earlier concepts of basal ganglia organization, which generally emphasized the serial aspects of their connectivity. Current evidence suggests that the basal ganglia are organized into several structurally and functionally distinct 'circuits' that link cortex, basal ganglia and thalamus, with each circuit focused on a different portion of the frontal lobe. In this review, Garrett Alexander and Michael Crutcher, using the basal ganglia 'motor' circuit as the principal example, discuss recent evidence indicating that a parallel functional architecture may also be characteristic of the organization within each individual circuit.",
"title": ""
},
{
"docid": "45009303764570cbfa3532a9d98f5393",
"text": "The Wasserstein distance and its variations, e.g., the sliced-Wasserstein (SW) distance, have recently drawn attention from the machine learning community. The SW distance, specifically, was shown to have similar properties to the Wasserstein distance, while being much simpler to compute, and is therefore used in various applications including generative modeling and general supervised/unsupervised learning. In this paper, we first clarify the mathematical connection between the SW distance and the Radon transform. We then utilize the generalized Radon transform to define a new family of distances for probability measures, which we call generalized slicedWasserstein (GSW) distances. We also show that, similar to the SW distance, the GSW distance can be extended to a maximum GSW (max-GSW) distance. We then provide the conditions under which GSW and max-GSW distances are indeed distances. Finally, we compare the numerical performance of the proposed distances on several generative modeling tasks, including SW flows and SW auto-encoders.",
"title": ""
},
{
"docid": "0e7da1ef24306eea2e8f1193301458fe",
"text": "We consider the problem of object figure-ground segmentation when the object categories are not available during training (i.e. zero-shot). During training, we learn standard segmentation models for a handful of object categories (called “source objects”) using existing semantic segmentation datasets. During testing, we are given images of objects (called “target objects”) that are unseen during training. Our goal is to segment the target objects from the background. Our method learns to transfer the knowledge from the source objects to the target objects. Our experimental results demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "e830098f9c045d376177e6d2644d4a06",
"text": "OBJECTIVE\nTo determine whether acetyl-L-carnitine (ALC), a metabolite necessary for energy metabolism and essential fatty acid anabolism, might help attention-deficit/hyperactivity disorder (ADHD). Trials in Down's syndrome, migraine, and Alzheimer's disease showed benefit for attention. A preliminary trial in ADHD using L-carnitine reported significant benefit.\n\n\nMETHOD\nA multi-site 16-week pilot study randomized 112 children (83 boys, 29 girls) age 5-12 with systematically diagnosed ADHD to placebo or ALC in weight-based doses from 500 to 1500 mg b.i.d. The 2001 revisions of the Conners' parent and teacher scales (including DSM-IV ADHD symptoms) were administered at baseline, 8, 12, and 16 weeks. Analyses were ANOVA of change from baseline to 16 weeks with treatment, center, and treatment-by-center interaction as independent variables.\n\n\nRESULTS\nThe primary intent-to-treat analysis, of 9 DSM-IV teacher-rated inattentive symptoms, was not significant. However, secondary analyses were interesting. There was significant (p = 0.02) moderation by subtype: superiority of ALC over placebo in the inattentive type, with an opposite tendency in combined type. There was also a geographic effect (p = 0.047). Side effects were negligible; electrocardiograms, lab work, and physical exam unremarkable.\n\n\nCONCLUSION\nALC appears safe, but with no effect on the overall ADHD population (especially combined type). It deserves further exploration for possible benefit specifically in the inattentive type.",
"title": ""
},
{
"docid": "cae9e77074db114690a6ed1330d9b14c",
"text": "BACKGROUND\nOn December 8th, 2015, World Health Organization published a priority list of eight pathogens expected to cause severe outbreaks in the near future. To better understand global research trends and characteristics of publications on these emerging pathogens, we carried out this bibliometric study hoping to contribute to global awareness and preparedness toward this topic.\n\n\nMETHOD\nScopus database was searched for the following pathogens/infectious diseases: Ebola, Marburg, Lassa, Rift valley, Crimean-Congo, Nipah, Middle Eastern Respiratory Syndrome (MERS), and Severe Respiratory Acute Syndrome (SARS). Retrieved articles were analyzed to obtain standard bibliometric indicators.\n\n\nRESULTS\nA total of 8619 journal articles were retrieved. Authors from 154 different countries contributed to publishing these articles. Two peaks of publications, an early one for SARS and a late one for Ebola, were observed. Retrieved articles received a total of 221,606 citations with a mean ± standard deviation of 25.7 ± 65.4 citations per article and an h-index of 173. International collaboration was as high as 86.9%. The Centers for Disease Control and Prevention had the highest share (344; 5.0%) followed by the University of Hong Kong with 305 (4.5%). The top leading journal was Journal of Virology with 572 (6.6%) articles while Feldmann, Heinz R. was the most productive researcher with 197 (2.3%) articles. China ranked first on SARS, Turkey ranked first on Crimean-Congo fever, while the United States of America ranked first on the remaining six diseases. Of retrieved articles, 472 (5.5%) were on vaccine - related research with Ebola vaccine being most studied.\n\n\nCONCLUSION\nNumber of publications on studied pathogens showed sudden dramatic rise in the past two decades representing severe global outbreaks. Contribution of a large number of different countries and the relatively high h-index are indicative of how international collaboration can create common health agenda among distant different countries.",
"title": ""
},
{
"docid": "180a840a22191da6e9a99af3d41ab288",
"text": "The hippocampal CA3 region is classically viewed as a homogeneous autoassociative network critical for associative memory and pattern completion. However, recent evidence has demonstrated a striking heterogeneity along the transverse, or proximodistal, axis of CA3 in spatial encoding and memory. Here we report the presence of striking proximodistal gradients in intrinsic membrane properties and synaptic connectivity for dorsal CA3. A decreasing gradient of mossy fiber synaptic strength along the proximodistal axis is mirrored by an increasing gradient of direct synaptic excitation from entorhinal cortex. Furthermore, we uncovered a nonuniform pattern of reactivation of fear memory traces, with the most robust reactivation during memory retrieval occurring in mid-CA3 (CA3b), the region showing the strongest net recurrent excitation. Our results suggest that heterogeneity in both intrinsic properties and synaptic connectivity may contribute to the distinct spatial encoding and behavioral role of CA3 subregions along the proximodistal axis.",
"title": ""
},
{
"docid": "6a82dfa1d79016388c38ccba77c56ae5",
"text": "Scripts define knowledge about how everyday scenarios (such as going to a restaurant) are expected to unfold. One of the challenges to learning scripts is the hierarchical nature of the knowledge. For example, a suspect arrested might plead innocent or guilty, and a very different track of events is then expected to happen. To capture this type of information, we propose an autoencoder model with a latent space defined by a hierarchy of categorical variables. We utilize a recently proposed vector quantization based approach, which allows continuous embeddings to be associated with each latent variable value. This permits the decoder to softly decide what portions of the latent hierarchy to condition on by attending over the value embeddings for a given setting. Our model effectively encodes and generates scripts, outperforming a recent language modeling-based method on several standard tasks, and allowing the autoencoder model to achieve substantially lower perplexity scores compared to the previous language modelingbased method.",
"title": ""
},
{
"docid": "bb799a3aac27f4ac764649e1f58ee9fb",
"text": "White grubs (larvae of Coleoptera: Scarabaeidae) are abundant in below-ground systems and can cause considerable damage to a wide variety of crops by feeding on roots. White grub populations may be controlled by natural enemies, but the predator guild of the European species is barely known. Trophic interactions within soil food webs are difficult to study with conventional methods. Therefore, a polymerase chain reaction (PCR)-based approach was developed to investigate, for the first time, a soil insect predator-prey system. Can, however, highly sensitive detection methods identify carrion prey in predators, as has been shown for fresh prey? Fresh Melolontha melolontha (L.) larvae and 1- to 9-day-old carcasses were presented to Poecilus versicolor Sturm larvae. Mitochondrial cytochrome oxidase subunit I fragments of the prey, 175, 327 and 387 bp long, were detectable in 50% of the predators 32 h after feeding. Detectability decreased to 18% when a 585 bp sequence was amplified. Meal size and digestion capacity of individual predators had no influence on prey detection. Although prey consumption was negatively correlated with cadaver age, carrion prey could be detected by PCR as efficiently as fresh prey irrespective of carrion age. This is the first proof that PCR-based techniques are highly efficient and sensitive, both in fresh and carrion prey detection. Thus, if active predation has to be distinguished from scavenging, then additional approaches are needed to interpret the picture of prey choice derived by highly sensitive detection methods.",
"title": ""
},
{
"docid": "97adb3a003347f579706cd01a762bdc9",
"text": "The Universal Serial Bus (USB) is an extremely popular interface standard for computer peripheral connections and is widely used in consumer Mass Storage Devices (MSDs). While current consumer USB MSDs provide relatively high transmission speed and are convenient to carry, the use of USB MSDs has been prohibited in many commercial and everyday environments primarily due to security concerns. Security protocols have been previously proposed and a recent approach for the USB MSDs is to utilize multi-factor authentication. This paper proposes significant enhancements to the three-factor control protocol that now makes it secure under many types of attacks including the password guessing attack, the denial-of-service attack, and the replay attack. The proposed solution is presented with a rigorous security analysis and practical computational cost analysis to demonstrate the usefulness of this new security protocol for consumer USB MSDs.",
"title": ""
}
] |
scidocsrr
|
8c75c8b4274533b14d267aed457d651c
|
Building Neuromorphic Circuits with Memristive Devices
|
[
{
"docid": "5deaf3ef06be439ad0715355d3592cff",
"text": "Hybrid reconfigurable logic circuits were fabricated by integrating memristor-based crossbars onto a foundry-built CMOS (complementary metal-oxide-semiconductor) platform using nanoimprint lithography, as well as materials and processes that were compatible with the CMOS. Titanium dioxide thin-film memristors served as the configuration bits and switches in a data routing network and were connected to gate-level CMOS components that acted as logic elements, in a manner similar to a field programmable gate array. We analyzed the chips using a purpose-built testing system, and demonstrated the ability to configure individual devices, use them to wire up various logic gates and a flip-flop, and then reconfigure devices.",
"title": ""
}
] |
[
{
"docid": "0ab14a40df6fe28785262d27a4f5b8ce",
"text": "State-of-the-art 3D shape classification and retrieval algorithms, hereinafter referred to as shape analysis, are often based on comparing signatures or descriptors that capture the main geometric and topological properties of 3D objects. None of the existing descriptors, however, achieve best performance on all shape classes. In this article, we explore, for the first time, the usage of covariance matrices of descriptors, instead of the descriptors themselves, in 3D shape analysis. Unlike histogram -based techniques, covariance-based 3D shape analysis enables the fusion and encoding of different types of features and modalities into a compact representation. Covariance matrices, however, are elements of the non-linear manifold of symmetric positive definite (SPD) matrices and thus \\BBL2 metrics are not suitable for their comparison and clustering. In this article, we study geodesic distances on the Riemannian manifold of SPD matrices and use them as metrics for 3D shape matching and recognition. We then: (1) introduce the concepts of bag of covariance (BoC) matrices and spatially-sensitive BoC as a generalization to the Riemannian manifold of SPD matrices of the traditional bag of features framework, and (2) generalize the standard kernel methods for supervised classification of 3D shapes to the space of covariance matrices. We evaluate the performance of the proposed BoC matrices framework and covariance -based kernel methods and demonstrate their superiority compared to their descriptor-based counterparts in various 3D shape matching, retrieval, and classification setups.",
"title": ""
},
{
"docid": "d5ac5e10fc2cc61e625feb28fc9095b5",
"text": "Article history: Received 8 July 2016 Received in revised form 15 November 2016 Accepted 29 December 2016 Available online 25 January 2017 As part of the post-2015 United Nations sustainable development agenda, the world has its first urban sustainable development goal (USDG) “to make cities and human settlements inclusive, safe, resilient and sustainable”. This paper provides an overview of the USDG and explores some of the difficulties around using this goal as a tool for improving cities. We argue that challenges emerge around selecting the indicators in the first place and also around the practical use of these indicators once selected. Three main practical problems of indicator use include 1) the poor availability of standardized, open and comparable data 2) the lack of strong data collection institutions at the city scale to support monitoring for the USDG and 3) “localization” the uptake and context specific application of the goal by diverse actors in widely different cities. Adding to the complexity, the USDG conversation is taking place at the same time as the proliferation of a bewildering array of indicator systems at different scales. Prompted by technological change, debates on the “data revolution” and “smart city” also have direct bearing on the USDG. We argue that despite these many complexities and challenges, the USDG framework has the potential to encourage and guide needed reforms in our cities but only if anchored in local institutions and initiatives informed by open, inclusive and contextually sensitive data collection and monitoring. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f86e3894a6c61c3734e1aabda3500ef0",
"text": "We perform sensitivity analyses on a mathematical model of malaria transmission to determine the relative importance of model parameters to disease transmission and prevalence. We compile two sets of baseline parameter values: one for areas of high transmission and one for low transmission. We compute sensitivity indices of the reproductive number (which measures initial disease transmission) and the endemic equilibrium point (which measures disease prevalence) to the parameters at the baseline values. We find that in areas of low transmission, the reproductive number and the equilibrium proportion of infectious humans are most sensitive to the mosquito biting rate. In areas of high transmission, the reproductive number is again most sensitive to the mosquito biting rate, but the equilibrium proportion of infectious humans is most sensitive to the human recovery rate. This suggests strategies that target the mosquito biting rate (such as the use of insecticide-treated bed nets and indoor residual spraying) and those that target the human recovery rate (such as the prompt diagnosis and treatment of infectious individuals) can be successful in controlling malaria.",
"title": ""
},
{
"docid": "15da6453d3580a9f26ecb79f9bc8e270",
"text": "In 2005 the Commission for Africa noted that ‘Tackling HIV and AIDS requires a holistic response that recognises the wider cultural and social context’ (p. 197). Cultural factors that range from beliefs and values regarding courtship, sexual networking, contraceptive use, perspectives on sexual orientation, explanatory models for disease and misfortune and norms for gender and marital relations have all been shown to be factors in the various ways that HIV/AIDS has impacted on African societies (UNESCO, 2002). Increasingly the centrality of culture is being recognised as important to HIV/AIDS prevention, treatment, care and support. With culture having both positive and negative influences on health behaviour, international donors and policy makers are beginning to acknowledge the need for cultural approaches to the AIDS crisis (Nguyen et al., 2008). The development of cultural approaches to HIV/AIDS presents two major challenges for South Africa. First, the multi-cultural nature of the country means that there is no single sociocultural context in which the HIV/AIDS epidemic is occurring. South Africa is home to a rich tapestry of racial, ethnic, religious and linguistic groups. As a result of colonial history and more recent migration, indigenous Africans have come to live alongside large populations of people with European, Asian and mixed descent, all of whom could lay claim to distinctive cultural practices and spiritual beliefs. Whilst all South Africans are affected by the spread of HIV, the burden of the disease lies with the majority black African population (see Shisana et al., 2005; UNAIDS, 2007). Therefore, this chapter will focus on some sociocultural aspects of life within the majority black African population of South Africa, most of whom speak languages that are classified within the broad linguistic grouping of Bantu languages. This large family of linguistically related ethnic groups span across southern Africa and comprise the bulk of the African people who reside in South Africa today (Hammond-Tooke, 1974). A second challenge involves the legitimacy of the culture concept. Whilst race was used in apartheid as the rationale for discrimination, notions of culture and cultural differences were legitimised by segregating the country into various ‘homelands’. Within the homelands, the majority black South Africans could presumably",
"title": ""
},
{
"docid": "bc6be8b5fd426e7f8d88645a2b21ff6a",
"text": "irtually everyone would agree that a primary, yet insufficiently met, goal of schooling is to enable students to think critically. In layperson’s terms, critical thinking consists of seeing both sides of an issue, being open to new evidence that disconfirms your ideas, reasoning dispassionately, demanding that claims be backed by evidence, deducing and inferring conclusions from available facts, solving problems, and so forth. Then too, there are specific types of critical thinking that are characteristic of different subject matter: That’s what we mean when we refer to “thinking like a scientist” or “thinking like a historian.” This proper and commonsensical goal has very often been translated into calls to teach “critical thinking skills” and “higher-order thinking skills”—and into generic calls for teaching students to make better judgments, reason more logically, and so forth. In a recent survey of human resource officials and in testimony delivered just a few months ago before the Senate Finance Committee, business leaders have repeatedly exhorted schools to do a better job of teaching students to think critically. And they are not alone. Organizations and initiatives involved in education reform, such as the National Center on Education and the Economy, the American Diploma Project, and the Aspen Institute, have pointed out the need for students to think and/or reason critically. The College Board recently revamped the SAT to better assess students’ critical thinking. And ACT, Inc. offers a test of critical thinking for college students. These calls are not new. In 1983, A Nation At Risk, a report by the National Commission on Excellence in Education, found that many 17-year-olds did not possess the “‘higher-order’ intellectual skills” this country needed. It claimed that nearly 40 percent could not draw inferences from written material and only onefifth could write a persuasive essay. Following the release of A Nation At Risk, programs designed to teach students to think critically across the curriculum became extremely popular. By 1990, most states had initiatives designed to encourage educators to teach critical thinking, and one of the most widely used programs, Tactics for Thinking, sold 70,000 teacher guides. But, for reasons I’ll explain, the programs were not very effective—and today we still lament students’ lack of critical thinking. After more than 20 years of lamentation, exhortation, and little improvement, maybe it’s time to ask a fundamental question: Can critical thinking actually be taught? Decades of cognitive research point to a disappointing answer: not really. People who have sought to teach critical thinking have assumed that it is a skill, like riding a bicycle, and that, like other skills, once you learn it, you can apply it in any situation. Research from cognitive science shows that thinking is not that sort of skill. The processes of thinking are intertwined with the content of thought (that is, domain knowledge). Thus, if you remind a student to “look at an issue from multiple perspectives” often enough, he will learn that he ought to do so, but if he doesn’t know much about Critical Thinking",
"title": ""
},
{
"docid": "bcd81794f9e1fc6f6b92fd36ccaa8dac",
"text": "Reliable detection and avoidance of obstacles is a crucial prerequisite for autonomously navigating robots as both guarantee safety and mobility. To ensure safe mobility, the obstacle detection needs to run online, thereby taking limited resources of autonomous systems into account. At the same time, robust obstacle detection is highly important. Here, a too conservative approach might restrict the mobility of the robot, while a more reckless one might harm the robot or the environment it is operating in. In this paper, we present a terrain-adaptive approach to obstacle detection that relies on 3D-Lidar data and combines computationally cheap and fast geometric features, like step height and steepness, which are updated with the frequency of the lidar sensor, with semantic terrain information, which is updated with at lower frequency. We provide experiments in which we evaluate our approach on a real robot on an autonomous run over several kilometers containing different terrain types. The experiments demonstrate that our approach is suitable for autonomous systems that have to navigate reliable on different terrain types including concrete, dirt roads and grass.",
"title": ""
},
{
"docid": "fc5f80f0554d248524f2aa67ad628773",
"text": "Personality plays an important role in the way people manage the images they convey in self-presentations and employment interviews, trying to affect the other\"s first impressions and increase effectiveness. This paper addresses the automatically detection of the Big Five personality traits from short (30-120 seconds) self-presentations, by investigating the effectiveness of 29 simple acoustic and visual non-verbal features. Our results show that Conscientiousness and Emotional Stability/Neuroticism are the best recognizable traits. The lower accuracy levels for Extraversion and Agreeableness are explained through the interaction between situational characteristics and the differential activation of the behavioral dispositions underlying those traits.",
"title": ""
},
{
"docid": "969ba9848fa6d02f74dabbce2f1fe3ab",
"text": "With the rapid growth of social media, massive misinformation is also spreading widely on social media, e.g., Weibo and Twitter, and brings negative effects to human life. Today, automatic misinformation identification has drawn attention from academic and industrial communities. Whereas an event on social media usually consists of multiple microblogs, current methods are mainly constructed based on global statistical features. However, information on social media is full of noise, which should be alleviated. Moreover, most of the microblogs about an event have little contribution to the identification of misinformation, where useful information can be easily overwhelmed by useless information. Thus, it is important to mine significant microblogs for constructing a reliable misinformation identification method. In this article, we propose an attention-based approach for identification of misinformation (AIM). Based on the attention mechanism, AIM can select microblogs with the largest attention values for misinformation identification. The attention mechanism in AIM contains two parts: content attention and dynamic attention. Content attention is the calculated-based textual features of each microblog. Dynamic attention is related to the time interval between the posting time of a microblog and the beginning of the event. To evaluate AIM, we conduct a series of experiments on the Weibo and Twitter datasets, and the experimental results show that the proposed AIM model outperforms the state-of-the-art methods.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "08ab7142ae035c3594d3f3ae339d3e27",
"text": "Sudoku is a very popular puzzle which consists of placing several numbers in a squared grid according to some simple rules. In this paper, we present a Sudoku solving technique named Boolean Sudoku Solver (BSS) using only simple Boolean algebras. Use of Boolean algebra increases the execution speed of the Sudoku solver. Simulation results show that our method returns the solution of the Sudoku in minimum number of iterations and outperforms the existing popular approaches.",
"title": ""
},
{
"docid": "51a859f71bd2ec82188826af18204f02",
"text": "This study examines the accuracy of 54 online dating photographs posted by heterosexual daters. We report data on (a1) online daters’ self-reported accuracy, (b) independent judges’ perceptions of accuracy, and (c) inconsistencies in the profile photograph identified by trained coders. While online daters rated their photos as relatively accurate, independent judges rated approximately 1/3 of the photographs as not accurate. Female photographs were judged as less accurate than male photographs, and were more likely to be older, to be retouched or taken by a professional photographer, and to contain inconsistencies, including changes in hair style and skin quality. The findings are discussed in terms of the tensions experienced by online daters to (a) enhance their physical attractiveness and (b) present a photograph that would not be judged deceptive in subsequent face-to-face meetings. The paper extends the theoretical concept of selective self-presentation to online photographs, and discusses issues of self-deception and social desirability bias.",
"title": ""
},
{
"docid": "ac4c1d903e20b90da555b11ef2edd2f5",
"text": "Program translation is an important tool to migrate legacy code in one language into an ecosystem built in a different language. In this work, we are the first to employ deep neural networks toward tackling this problem. We observe that program translation is a modular procedure, in which a sub-tree of the source tree is translated into the corresponding target sub-tree at each step. To capture this intuition, we design a tree-to-tree neural network to translate a source tree into a target one. Meanwhile, we develop an attention mechanism for the tree-to-tree model, so that when the decoder expands one non-terminal in the target tree, the attention mechanism locates the corresponding sub-tree in the source tree to guide the expansion of the decoder. We evaluate the program translation capability of our tree-to-tree model against several state-of-the-art approaches. Compared against other neural translation models, we observe that our approach is consistently better than the baselines with a margin of up to 15 points. Further, our approach can improve the previous state-of-the-art program translation approaches by a margin of 20 points on the translation of real-world projects.",
"title": ""
},
{
"docid": "194156892cbdb0161e9aae6a01f78703",
"text": "Model repositories play a central role in the model driven development of complex software-intensive systems by offering means to persist and manipulate models obtained from heterogeneous languages and tools. Complex models can be assembled by interconnecting model fragments by hard links, i.e., regular references, where the target end points to external resources using storage-specific identifiers. This approach, in certain application scenarios, may prove to be a too rigid and error prone way of interlinking models. As a flexible alternative, we propose to combine derived features with advanced incremental model queries as means for soft interlinking of model elements residing in different model resources. These soft links can be calculated on-demand with graceful handling for temporarily unresolved references. In the background, the links are maintained efficiently and flexibly by using incremental model query evaluation. The approach is applicable to modeling environments or even property graphs for representing query results as first-class relations, which also allows the chaining of soft links that is useful for modular applications. The approach is evaluated using the Eclipse Modeling Framework (EMF) and EMF-IncQuery in two complex industrial case studies. The first case study is motivated by a knowledge management project from the financial domain, involving a complex interlinked structure of concept and business process models. The second case study is set in the avionics domain with strict traceability requirements enforced by certification standards (DO-178b). It consists of multiple domain models describing the allocation scenario of software functions to hardware components.",
"title": ""
},
{
"docid": "16c56a9ca685cb1100d175268b6e8ba6",
"text": "In this paper, we study the stochastic gradient descent method in analyzing nonconvex statistical optimization problems from a diffusion approximation point of view. Using the theory of large deviation of random dynamical system, we prove in the small stepsize regime and the presence of omnidirectional noise the following: starting from a local minimizer (resp. saddle point) the SGD iteration escapes in a number of iteration that is exponentially (resp. linearly) dependent on the inverse stepsize. We take the deep neural network as an example to study this phenomenon. Based on a new analysis of the mixing rate of multidimensional Ornstein-Uhlenbeck processes, our theory substantiate a very recent empirical results by Keskar et al. (2016), suggesting that large batch sizes in training deep learning for synchronous optimization leads to poor generalization error.",
"title": ""
},
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "e0fc6fc1425bb5786847c3769c1ec943",
"text": "Developing manufacturing simulation models usually requires experts with knowledge of multiple areas including manufacturing, modeling, and simulation software. The expertise requirements increase for virtual factory models that include representations of manufacturing at multiple resolution levels. This paper reports on an initial effort to automatically generate virtual factory models using manufacturing configuration data in standard formats as the primary input. The execution of the virtual factory generates time series data in standard formats mimicking a real factory. Steps are described for auto-generation of model components in a software environment primarily oriented for model development via a graphic user interface. Advantages and limitations of the approach and the software environment used are discussed. The paper concludes with a discussion of challenges in verification and validation of the virtual factory prototype model with its multiple hierarchical models and future directions.",
"title": ""
},
{
"docid": "0a5e2cc403ba9a4397d04c084b25f43e",
"text": "Ebola virus disease (EVD) distinguishes its feature as high infectivity and mortality. Thus, it is urgent for governments to draw up emergency plans against Ebola. However, it is hard to predict the possible epidemic situations in practice. Luckily, in recent years, computational experiments based on artificial society appeared, providing a new approach to study the propagation of EVD and analyze the corresponding interventions. Therefore, the rationality of artificial society is the key to the accuracy and reliability of experiment results. Individuals' behaviors along with travel mode directly affect the propagation among individuals. Firstly, artificial Beijing is reconstructed based on geodemographics and machine learning is involved to optimize individuals' behaviors. Meanwhile, Ebola course model and propagation model are built, according to the parameters in West Africa. Subsequently, propagation mechanism of EVD is analyzed, epidemic scenario is predicted, and corresponding interventions are presented. Finally, by simulating the emergency responses of Chinese government, the conclusion is finally drawn that Ebola is impossible to outbreak in large scale in the city of Beijing.",
"title": ""
},
{
"docid": "58c4c9bd2033645ece7db895d368cda6",
"text": "Nanorobotics is the technology of creating machines or robots of the size of few hundred nanometres and below consisting of components of nanoscale or molecular size. There is an all around development in nanotechnology towards realization of nanorobots in the last two decades. In the present work, the compilation of advancement in nanotechnology in context to nanorobots is done. The challenges and issues in movement of a nanorobot and innovations present in nature to overcome the difficulties in moving at nano-size regimes are discussed. The efficiency aspect in context to artificial nanorobot is also presented.",
"title": ""
},
{
"docid": "bb01b5e24d7472ab52079dcb8a65358d",
"text": "There are plenty of classification methods that perform well when training and testing data are drawn from the same distribution. However, in real applications, this condition may be violated, which causes degradation of classification accuracy. Domain adaptation is an effective approach to address this problem. In this paper, we propose a general domain adaptation framework from the perspective of prediction reweighting, from which a novel approach is derived. Different from the major domain adaptation methods, our idea is to reweight predictions of the training classifier on testing data according to their signed distance to the domain separator, which is a classifier that distinguishes training data (from source domain) and testing data (from target domain). We then propagate the labels of target instances with larger weights to ones with smaller weights by introducing a manifold regularization method. It can be proved that our reweighting scheme effectively brings the source and target domains closer to each other in an appropriate sense, such that classification in target domain becomes easier. The proposed method can be implemented efficiently by a simple two-stage algorithm, and the target classifier has a closed-form solution. The effectiveness of our approach is verified by the experiments on artificial datasets and two standard benchmarks, a visual object recognition task and a cross-domain sentiment analysis of text. Experimental results demonstrate that our method is competitive with the state-of-the-art domain adaptation algorithms.",
"title": ""
}
] |
scidocsrr
|
513882e9992781626e656c002f99dbdf
|
Rectangular Dielectric Resonator Antenna Array for 28 GHz Applications
|
[
{
"docid": "364eb800261105453f36b005ba1faf68",
"text": "This article presents empirically-based large-scale propagation path loss models for fifth-generation cellular network planning in the millimeter-wave spectrum, based on real-world measurements at 28 GHz and 38 GHz in New York City and Austin, Texas, respectively. We consider industry-standard path loss models used for today's microwave bands, and modify them to fit the propagation data measured in these millimeter-wave bands for cellular planning. Network simulations with the proposed models using a commercial planning tool show that roughly three times more base stations are required to accommodate 5G networks (cell radii up to 200 m) compared to existing 3G and 4G systems (cell radii of 500 m to 1 km) when performing path loss simulations based on arbitrary pointing angles of directional antennas. However, when directional antennas are pointed in the single best directions at the base station and mobile, coverage range is substantially improved with little increase in interference, thereby reducing the required number of 5G base stations. Capacity gains for random pointing angles are shown to be 20 times greater than today's fourth-generation Long Term Evolution networks, and can be further improved when using directional antennas pointed in the strongest transmit and receive directions with the help of beam combining techniques.",
"title": ""
}
] |
[
{
"docid": "5f5828952aa0a0a95e348a0c0b2296fb",
"text": "Indoor positioning has grasped great attention in recent years. A number of efforts have been exerted to achieve high positioning accuracy. However, there exists no technology that proves its efficacy in various situations. In this paper, we propose a novel positioning method based on fusing trilateration and dead reckoning. We employ Kalman filtering as a position fusion algorithm. Moreover, we adopt an Android device with Bluetooth Low Energy modules as the communication platform to avoid excessive energy consumption and to improve the stability of the received signal strength. To further improve the positioning accuracy, we take the environmental context information into account while generating the position fixes. Extensive experiments in a testbed are conducted to examine the performance of three approaches: trilateration, dead reckoning and the fusion method. Additionally, the influence of the knowledge of the environmental context is also examined. Finally, our proposed fusion method outperforms both trilateration and dead reckoning in terms of accuracy: experimental results show that the Kalman-based fusion, for our settings, achieves a positioning accuracy of less than one meter.",
"title": ""
},
{
"docid": "f90efcef80233888fb8c218d1e5365a6",
"text": "BACKGROUND\nMany low- and middle-income countries are undergoing a nutrition transition associated with rapid social and economic transitions. We explore the coexistence of over and under- nutrition at the neighborhood and household level, in an urban poor setting in Nairobi, Kenya.\n\n\nMETHODS\nData were collected in 2010 on a cohort of children aged under five years born between 2006 and 2010. Anthropometric measurements of the children and their mothers were taken. Additionally, dietary intake, physical activity, and anthropometric measurements were collected from a stratified random sample of adults aged 18 years and older through a separate cross-sectional study conducted between 2008 and 2009 in the same setting. Proportions of stunting, underweight, wasting and overweight/obesity were dettermined in children, while proportions of underweight and overweight/obesity were determined in adults.\n\n\nRESULTS\nOf the 3335 children included in the analyses with a total of 6750 visits, 46% (51% boys, 40% girls) were stunted, 11% (13% boys, 9% girls) were underweight, 2.5% (3% boys, 2% girls) were wasted, while 9% of boys and girls were overweight/obese respectively. Among their mothers, 7.5% were underweight while 32% were overweight/obese. A large proportion (43% and 37%%) of overweight and obese mothers respectively had stunted children. Among the 5190 adults included in the analyses, 9% (6% female, 11% male) were underweight, and 22% (35% female, 13% male) were overweight/obese.\n\n\nCONCLUSION\nThe findings confirm an existing double burden of malnutrition in this setting, characterized by a high prevalence of undernutrition particularly stunting early in life, with high levels of overweight/obesity in adulthood, particularly among women. In the context of a rapid increase in urban population, particularly in urban poor settings, this calls for urgent action. Multisectoral action may work best given the complex nature of prevailing circumstances in urban poor settings. Further research is needed to understand the pathways to this coexistence, and to test feasibility and effectiveness of context-specific interventions to curb associated health risks.",
"title": ""
},
{
"docid": "e04cccfd59c056678e39fc4aed0eaa2b",
"text": "BACKGROUND\nBreast cancer is by far the most frequent cancer of women. However the preventive measures for such problem are probably less than expected. The objectives of this study are to assess breast cancer knowledge and attitudes and factors associated with the practice of breast self examination (BSE) among female teachers of Saudi Arabia.\n\n\nPATIENTS AND METHODS\nWe conducted a cross-sectional survey of teachers working in female schools in Buraidah, Saudi Arabia using a self-administered questionnaire to investigate participants' knowledge about the risk factors of breast cancer, their attitudes and screening behaviors. A sample of 376 female teachers was randomly selected. Participants lived in urban areas, and had an average age of 34.7 ±5.4 years.\n\n\nRESULTS\nMore than half of the women showed a limited knowledge level. Among participants, the most frequently reported risk factors were non-breast feeding and the use of female sex hormones. The printed media was the most common source of knowledge. Logistic regression analysis revealed that high income was the most significant predictor of better knowledge level. Knowing a non-relative case with breast cancer and having a high knowledge level were identified as the significant predictors for practicing BSE.\n\n\nCONCLUSIONS\nThe study points to the insufficient knowledge of female teachers about breast cancer and identified the negative influence of low knowledge on the practice of BSE. Accordingly, relevant educational programs to improve the knowledge level of women regarding breast cancer are needed.",
"title": ""
},
{
"docid": "0872a229806a1055ec6e42d7a36ef626",
"text": "Attribute selection (AS) refers to the problem of selecting those input attributes or features that are most predictive of a given outcome; a problem encountered in many areas such as machine learning, pattern recognition and signal processing. Unlike other dimensionality reduction methods, attribute selectors preserve the original meaning of the attributes after reduction. This has found application in tasks that involve datasets containing huge numbers of attributes (in the order of tens of thousands) which, for some learning algorithms, might be impossible to process further. Recent examples include text processing and web content classification. AS techniques have also been applied to small and medium-sized datasets in order to locate the most informative attributes for later use. One of the many successful applications of rough set theory has been to this area. The rough set ideology of using only the supplied data and no other information has many benefits in AS, where most other methods require supplementary knowledge. However, the main limitation of rough set-based attribute selection in the literature is the restrictive requirement that all data is discrete. In classical rough set theory, it is not possible to consider real-valued or noisy data. This paper investigates a novel approach based on fuzzy-rough sets, fuzzy rough feature selection (FRFS), that addresses these problems and retains dataset semantics. FRFS is applied to two challenging domains where a feature reducing step is important; namely, web content classification and complex systems monitoring. The utility of this approach is demonstrated and is compared empirically with several dimensionality reducers. In the experimental studies, FRFS is shown to equal or improve classification accuracy when compared to the results from unreduced data. Classifiers that use a lower dimensional set of attributes which are retained by fuzzy-rough reduction outperform those that employ more attributes returned by the existing crisp rough reduction method. In addition, it is shown that FRFS is more powerful than the other AS techniques in the comparative study",
"title": ""
},
{
"docid": "8492ba0660b06ca35ab3f4e96f3a33c3",
"text": "Young men who have sex with men (YMSM) are increasingly using mobile smartphone applications (“apps”), such as Grindr, to meet sex partners. A probability sample of 195 Grindr-using YMSM in Southern California were administered an anonymous online survey to assess patterns of and motivations for Grindr use in order to inform development and tailoring of smartphone-based HIV prevention for YMSM. The number one reason for using Grindr (29 %) was to meet “hook ups.” Among those participants who used both Grindr and online dating sites, a statistically significantly greater percentage used online dating sites for “hook ups” (42 %) compared to Grindr (30 %). Seventy percent of YMSM expressed a willingness to participate in a smartphone app-based HIV prevention program. Development and testing of smartphone apps for HIV prevention delivery has the potential to engage YMSM in HIV prevention programming, which can be tailored based on use patterns and motivations for use. Los hombres que mantienen relaciones sexuales con hombres (YMSM por las siglas en inglés de Young Men Who Have Sex with Men) están utilizando más y más aplicaciones para teléfonos inteligentes (smartphones), como Grindr, para encontrar parejas sexuales. En el Sur de California, se administró de forma anónima un sondeo en internet a una muestra de probabilidad de 195 YMSM usuarios de Grindr, para evaluar los patrones y motivaciones del uso de Grindr, con el fin de utilizar esta información para el desarrollo y personalización de prevención del VIH entre YMSM con base en teléfonos inteligentes. La principal razón para utilizar Grindr (29 %) es para buscar encuentros sexuales casuales (hook-ups). Entre los participantes que utilizan tanto Grindr como otro sitios de citas online, un mayor porcentaje estadísticamente significativo utilizó los sitios de citas online para encuentros casuales sexuales (42 %) comparado con Grindr (30 %). Un setenta porciento de los YMSM expresó su disposición para participar en programas de prevención del VIH con base en teléfonos inteligentes. El desarrollo y evaluación de aplicaciones para teléfonos inteligentes para el suministro de prevención del VIH tiene el potencial de involucrar a los YMSM en la programación de la prevención del VIH, que puede ser adaptada según los patrones y motivaciones de uso.",
"title": ""
},
{
"docid": "32b292c3ea5c95411a5e67d664d6ce30",
"text": "Many difficult combinatorial optimization problems have been modeled as static problems. However, in practice, many problems are dynamic and changing, while some decisions have to be made before all the design data are known. For example, in the Dynamic Vehicle Routing Problem (DVRP), new customer orders appear over time, and new routes must be reconfigured while executing the current solution. Montemanni et al. [1] considered a DVRP as an extension to the standard vehicle routing problem (VRP) by decomposing a DVRP as a sequence of static VRPs, and then solving them with an ant colony system (ACS) algorithm. This paper presents a genetic algorithm (GA) methodology for providing solutions for the DVRP model employed in [1]. The effectiveness of the proposed GA is evaluated using a set of benchmarks found in the literature. Compared with a tabu search approach implemented herein and the aforementioned ACS, the proposed GA methodology performs better in minimizing travel costs.",
"title": ""
},
{
"docid": "8a8c1099dfe0cf45746f11da7d6923d8",
"text": "The future of procedural content generation (PCG) lies beyond the dominant motivations of “replayability” and creating large environments for players to explore. This paper explores both the past and potential future for PCG, identifying five major lenses through which we can view PCG and its role in a game: data vs. process intensiveness, the interactive extent of the content, who has control over the generator, how many players interact with it, and the aesthetic purpose for PCG being used in the game. Using these lenses, the paper proposes several new research directions for PCG that require both deep technical research and innovative game design.",
"title": ""
},
{
"docid": "3eeaf56aaf9dda0f2b16c1c46f6c1c75",
"text": "In satellite earth station antenna systems there is an increasing demand for complex single aperture, multi-function and multi-frequency band capable feed systems. In this work, a multi band feed system (6/12 GHz) is described which employs quadrature junctions (QJ) and supports transmit and receive functionality in the C and Ku bands respectively. This feed system is designed for a 16.4 m diameter shaped cassegrain antenna. It is a single aperture, 4 port system with transmit capability in circular polarization (CP) mode over the 6.625-6.69 GHz band and receive in the linear polarization (LP) mode over the 12.1-12.3 GHz band",
"title": ""
},
{
"docid": "c744354fcc6115a83c916dcc71b381f4",
"text": "The spread of false rumours during emergencies can jeopardise the well-being of citizens as they are monitoring the stream of news from social media to stay abreast of the latest updates. In this paper, we describe the methodology we have developed within the PHEME project for the collection and sampling of conversational threads, as well as the tool we have developed to facilitate the annotation of these threads so as to identify rumourous ones. We describe the annotation task conducted on threads collected during the 2014 Ferguson unrest and we present and analyse our findings. Our results show that we can collect effectively social media rumours and identify multiple rumours associated with a range of stories that would have been hard to identify by relying on existing techniques that need manual input of rumour-specific keywords.",
"title": ""
},
{
"docid": "e227e21d9b0523fdff82ca898fea0403",
"text": "As computer games become more complex and consumers demand more sophisticated computer controlled agents, developers are required to place a greater emphasis on the artificial intelligence aspects of their games. One source of sophisticated AI techniques is the artificial intelligence research community. This paper discusses recent efforts by our group at the University of Michigan Artificial Intelligence Lab to apply state of the art artificial intelligence techniques to computer games. Our experience developing intelligent air combat agents for DARPA training exercises, described in John Laird's lecture at the 1998 Computer Game Developer's Conference, suggested that many principles and techniques from the research community are applicable to games. A more recent project, called the Soar/Games project, has followed up on this by developing agents for computer games, including Quake II and Descent 3. The result of these two research efforts is a partially implemented design of an artificial intelligence engine for games based on well established AI systems and techniques.",
"title": ""
},
{
"docid": "debe25489a0176c48c07d1f2d5b8513e",
"text": "In order to formulate a high-level understanding of driver behavior from massive naturalistic driving data, an effective approach is needed to automatically process or segregate data into low-level maneuvers. Besides traditional computer vision processing, this study addresses the lane-change detection problem by using vehicle dynamic signals (steering angle and vehicle speed) extracted from the CAN-bus, which is collected with 58 drivers around Dallas, TX area. After reviewing the literature, this study proposes a machine learning-based segmentation and classification algorithm, which is stratified into three stages. The first stage is preprocessing and prefiltering, which is intended to reduce noise and remove clear left and right turning events. Second, a spectral time-frequency analysis segmentation approach is employed to generalize all potential time-variant lane-change and lane-keeping candidates. The final stage compares two possible classification methods—1) dynamic time warping feature with k -nearest neighbor classifier and 2) hidden state sequence prediction with a combined hidden Markov model. The overall optimal classification accuracy can be obtained at 80.36% for lane-change-left and 83.22% for lane-change-right. The effectiveness and issues of failures are also discussed. With the availability of future large-scale naturalistic driving data, such as SHRP2, this proposed effective lane-change detection approach can further contribute to characterize both automatic route recognition as well as distracted driving state analysis.",
"title": ""
},
{
"docid": "b1b1af81e84e1f79a0193773a22199d4",
"text": "Layered multicast is an efficient technique to deliver video to heterogeneous receivers over wired and wireless networks. In this paper, we consider such a multicast system in which the server adapts the bandwidth and forward-error correction code (FEC) of each layer so as to maximize the overall video quality, given the heterogeneous client characteristics in terms of their end-to-end bandwidth, packet drop rate over the wired network, and bit-error rate in the wireless hop. In terms of FECs, we also study the value of a gateway which “transcodes” packet-level FECs to byte-level FECs before forwarding packets from the wired network to the wireless clients. We present an analysis of the system, propose an efficient algorithm on FEC allocation for the base layer, and formulate a dynamic program with a fast and accurate approximation for the joint bandwidth and FEC allocation of the enhancement layers. Our results show that a transcoding gateway performs only slightly better than the nontranscoding one in terms of end-to-end loss rate, and our allocation is effective in terms of FEC parity and bandwidth served to each user.",
"title": ""
},
{
"docid": "3dfb419706ae85d232753a085dc145f7",
"text": "This chapter describes the different steps of designing, building, simulating, and testing an intelligent flight control module for an increasingly popular unmanned aerial vehicle (UAV), known as a quadrotor. It presents an in-depth view of the modeling of the kinematics, dynamics, and control of such an interesting UAV. A quadrotor offers a challenging control problem due to its highly unstable nature. An effective control methodology is therefore needed for such a unique airborne vehicle. The chapter starts with a brief overview on the quadrotor's background and its applications, in light of its advantages. Comparisons with other UAVs are made to emphasize the versatile capabilities of this special design. For a better understanding of the vehicle's behavior, the quadrotor's kinematics and dynamics are then detailed. This yields the equations of motion, which are used later as a guideline for developing the proposed intelligent flight control scheme. In this chapter, fuzzy logic is adopted for building the flight controller of the quadrotor. It has been witnessed that fuzzy logic control offers several advantages over certain types of conventional control methods, specifically in dealing with highly nonlinear systems and modeling uncertainties. Two types of fuzzy inference engines are employed in the design of the flight controller, each of which is explained and evaluated. For testing the designed intelligent flight controller, a simulation environment was first developed. The simulations were made as realistic as possible by incorporating environmental disturbances such as wind gust and the ever-present sensor noise. The proposed controller was then tested on a real test-bed built specifically for this project. Both the simulator and the real quadrotor were later used for conducting different attitude stabilization experiments to evaluate the performance of the proposed control strategy. The controller's performance was also benchmarked against conventional control techniques such as input-output linearization, backstepping and sliding mode control strategies. Conclusions were then drawn based on the conducted experiments and their results.",
"title": ""
},
{
"docid": "479fbdcd776904e9ba20fd95b4acb267",
"text": "Tall building developments have been rapidly increasing worldwide. This paper reviews the evolution of tall building’s structural systems and the technological driving force behind tall building developments. For the primary structural systems, a new classification – interior structures and exterior structures – is presented. While most representative structural systems for tall buildings are discussed, the emphasis in this review paper is on current trends such as outrigger systems and diagrid structures. Auxiliary damping systems controlling building motion are also discussed. Further, contemporary “out-of-the-box” architectural design trends, such as aerodynamic and twisted forms, which directly or indirectly affect the structural performance of tall buildings, are reviewed. Finally, the future of structural developments in tall buildings is envisioned briefly.",
"title": ""
},
{
"docid": "bf5874dc1fc1c968d7c41eb573d8d04a",
"text": "As creativity is increasingly recognised as a vital component of entrepreneurship, researchers and educators struggle to reform enterprise pedagogy. To help in this effort, we use a personality test and open-ended interviews to explore creativity between two groups of entrepreneurship masters’ students: one at a business school and one at an engineering school. The findings indicate that both groups had high creative potential, but that engineering students channelled this into practical and incremental efforts whereas the business students were more speculative and had a clearer market focus. The findings are drawn on to make some suggestions for entrepreneurship education.",
"title": ""
},
{
"docid": "5ce00014f84277aca0a4b7dfefc01cbb",
"text": "The design of a planar dual-band wide-scan phased array is presented. The array uses novel dual-band comb-slot-loaded patch elements supporting two separate bands with a frequency ratio of 1.4:1. The antenna maintains consistent radiation patterns and incorporates a feeding configuration providing good bandwidths in both bands. The design has been experimentally validated with an X-band planar 9 × 9 array. The array supports wide-angle scanning up to a maximum of 60 ° and 50 ° at the low and high frequency bands respectively.",
"title": ""
},
{
"docid": "e9582d921b783a378e91c7b5ddaf9d16",
"text": "Pneumatic soft actuators produce flexion and meet the new needs of collaborative robotics, which is rapidly emerging in the industry landscape 4.0. The soft actuators are not only aimed at industrial progress, but their application ranges in the field of medicine and rehabilitation. Safety and reliability are the main requirements for coexistence and human-robot interaction; such requirements, together with the versatility and lightness, are the precious advantages that is offered by this new category of actuators. The objective is to develop an actuator with high compliance, low cost, high versatility and easy to produce, aimed at the realization of the fingers of a robotic hand that can faithfully reproduce the motion of a real hand. The proposed actuator is equipped with an intrinsic compliance thanks to the hyper-elastic silicone rubber used for its realization; the bending is allowed by the high compliance of the silicone and by a square-meshed gauze which contains the expansion and guides the movement through appropriate cuts in correspondence of the joints. A numerical model of the actuator is developed and an optimal configuration of the five fingers of the hand is achieved; finally, the index finger is built, on which the experimental validation tests are carried out.",
"title": ""
},
{
"docid": "81f9a52b6834095cd7be70b39af0e7f0",
"text": "In this paper we present BatchDB, an in-memory database engine designed for hybrid OLTP and OLAP workloads. BatchDB achieves good performance, provides a high level of data freshness, and minimizes load interaction between the transactional and analytical engines, thus enabling real time analysis over fresh data under tight SLAs for both OLTP and OLAP workloads.\n BatchDB relies on primary-secondary replication with dedicated replicas, each optimized for a particular workload type (OLTP, OLAP), and a light-weight propagation of transactional updates. The evaluation shows that for standard TPC-C and TPC-H benchmarks, BatchDB can achieve competitive performance to specialized engines for the corresponding transactional and analytical workloads, while providing a level of performance isolation and predictable runtime for hybrid workload mixes (OLTP+OLAP) otherwise unmet by existing solutions.",
"title": ""
},
{
"docid": "3eec1e9abcb677a4bc8f054fa8827f4f",
"text": "We present a neural semantic parser that translates natural language questions into executable SQL queries with two key ideas. First, we develop an encoder-decoder model, where the decoder uses a simple type system of SQL to constraint the output prediction, and propose a value-based loss when copying from input tokens. Second, we explore using the execution semantics of SQL to repair decoded programs that result in runtime error or return empty result. We propose two modelagnostics repair approaches, an ensemble model and a local program repair, and demonstrate their effectiveness over the original model. We evaluate our model on the WikiSQL dataset and show that our model achieves close to state-of-the-art results with lesser model complexity.",
"title": ""
},
{
"docid": "241f5a88f53c929cc11ce0edce191704",
"text": "Enabled by mobile and wearable technology, personal health data delivers immense and increasing value for healthcare, benefiting both care providers and medical research. The secure and convenient sharing of personal health data is crucial to the improvement of the interaction and collaboration of the healthcare industry. Faced with the potential privacy issues and vulnerabilities existing in current personal health data storage and sharing systems, as well as the concept of self-sovereign data ownership, we propose an innovative user-centric health data sharing solution by utilizing a decentralized and permissioned blockchain to protect privacy using channel formation scheme and enhance the identity management using the membership service supported by the blockchain. A mobile application is deployed to collect health data from personal wearable devices, manual input, and medical devices, and synchronize data to the cloud for data sharing with healthcare providers and health insurance companies. To preserve the integrity of health data, within each record, a proof of integrity and validation is permanently retrievable from cloud database and is anchored to the blockchain network. Moreover, for scalable and performance considerations, we adopt a tree-based data processing and batching method to handle large data sets of personal health data collected and uploaded by the mobile platform.",
"title": ""
}
] |
scidocsrr
|
d2e63cfca2fea6b2e02ea3e37e6d077a
|
BLACKLISTED SPEAKER IDENTIFICATION USING TRIPLET NEURAL NETWORKS
|
[
{
"docid": "c9ecb6ac5417b5fea04e5371e4250361",
"text": "Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by Wang et al. (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.",
"title": ""
}
] |
[
{
"docid": "3d5fb6eff6d0d63c17ef69c8130d7a77",
"text": "A new measure of event-related brain dynamics, the event-related spectral perturbation (ERSP), is introduced to study event-related dynamics of the EEG spectrum induced by, but not phase-locked to, the onset of the auditory stimuli. The ERSP reveals aspects of event-related brain dynamics not contained in the ERP average of the same response epochs. Twenty-eight subjects participated in daily auditory evoked response experiments during a 4 day study of the effects of 24 h free-field exposure to intermittent trains of 89 dB low frequency tones. During evoked response testing, the same tones were presented through headphones in random order at 5 sec intervals. No significant changes in behavioral thresholds occurred during or after free-field exposure. ERSPs induced by target pips presented in some inter-tone intervals were larger than, but shared common features with, ERSPs induced by the tones, most prominently a ridge of augmented EEG amplitude from 11 to 18 Hz, peaking 1-1.5 sec after stimulus onset. Following 3-11 h of free-field exposure, this feature was significantly smaller in tone-induced ERSPs; target-induced ERSPs were not similarly affected. These results, therefore, document systematic effects of exposure to intermittent tones on EEG brain dynamics even in the absence of changes in auditory thresholds.",
"title": ""
},
{
"docid": "bea412d20a95c853fe06e7640acb9158",
"text": "We propose a novel approach to synthesizing images that are effective for training object detectors. Starting from a small set of real images, our algorithm estimates the rendering parameters required to synthesize similar images given a coarse 3D model of the target object. These parameters can then be reused to generate an unlimited number of training images of the object of interest in arbitrary 3D poses, which can then be used to increase classification performances. A key insight of our approach is that the synthetically generated images should be similar to real images, not in terms of image quality, but rather in terms of features used during the detector training. We show in the context of drone, plane, and car detection that using such synthetically generated images yields significantly better performances than simply perturbing real images or even synthesizing images in such way that they look very realistic, as is often done when only limited amounts of training data are available. 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "779d5380c72827043111d00510e32bfd",
"text": "OBJECTIVE\nThe purpose of this review is 2-fold. The first is to provide a review for physiatrists already providing care for women with musculoskeletal pelvic floor pain and a resource for physiatrists who are interested in expanding their practice to include this patient population. The second is to describe how musculoskeletal dysfunctions involving the pelvic floor can be approached by the physiatrist using the same principles used to evaluate and treat others dysfunctions in the musculoskeletal system. This discussion clarifies that evaluation and treatment of pelvic floor pain of musculoskeletal origin is within the scope of practice for physiatrists. The authors review the anatomy of the pelvic floor, including the bony pelvis and joints, muscle and fascia, and the peripheral and autonomic nervous systems. Pertinent history and physical examination findings are described. The review concludes with a discussion of differential diagnosis and treatment of musculoskeletal pelvic floor pain in women. Improved recognition of pelvic floor dysfunction by healthcare providers will reduce impairment and disability for women with pelvic floor pain. A physiatrist is in the unique position to treat the musculoskeletal causes of this condition because it requires an expert grasp of anatomy, function, and the linked relationship between the spine and pelvis. Further research regarding musculoskeletal causes and treatment of pelvic floor pain will help validate these concepts and improve awareness and care for women limited by this condition.",
"title": ""
},
{
"docid": "337b03633afacc96b443880ad996f013",
"text": "Mobile security becomes a hot topic recently, especially in mobile payment and privacy data fields. Traditional solution can't keep a good balance between convenience and security. Against this background, a dual OS security solution named Trusted Execution Environment (TEE) is proposed and implemented by many institutions and companies. However, it raised TEE fragmentation and control problem. Addressing this issue, a mobile security infrastructure named Trusted Execution Environment Integration (TEEI) is presented to integrate multiple different TEEs. By using Trusted Virtual Machine (TVM) tech-nology, TEEI allows multiple TEEs running on one secure world on one mobile device at the same time and isolates them safely. Furthermore, a Virtual Network protocol is proposed to enable communication and cooperation among TEEs which includes TEE on TVM and TEE on SE. At last, a SOA-like Internal Trusted Service (ITS) framework is given to facilitate the development and maintenance of TEEs.",
"title": ""
},
{
"docid": "452f71b953ddffad88cec65a4c7fbece",
"text": "The password based authorization scheme for all available security systems can effortlessly be hacked by the hacker or a malicious user. One might not be able to guarantee that the person who is using the password is authentic or not. Only biometric systems are one which make offered automated authentication. There are very exceptional chances of losing the biometric identity, only if the accident of an individual may persists. Footprint based biometric system has been evaluated so far. In this paper a number of approaches of footprint recognition have been deliberated. General Terms Biometric pattern recognition, Image processing.",
"title": ""
},
{
"docid": "8183fe0c103e2ddcab5b35549ed8629f",
"text": "The performance of Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) (i.e. Douglas-Rachford splitting on the dual problem) are sensitive to conditioning of the problem data. For a restricted class of problems that enjoy a linear rate of convergence, we show in this paper how to precondition the optimization data to optimize a bound on that rate. We also generalize the preconditioning methods to problems that do not satisfy all assumptions needed to guarantee a linear convergence. The efficiency of the proposed preconditioning is confirmed in a numerical example, where improvements of more than one order of magnitude are observed compared to when no preconditioning is used.",
"title": ""
},
{
"docid": "f4aa06f7782a22eeb5f30d0ad27eaff9",
"text": "Friction effects are particularly critical for industrial robots, since they can induce large positioning errors, stick-slip motions, and limit cycles. This paper offers a reasoned overview of the main friction compensation techniques that have been developed in the last years, regrouping them according to the adopted kind of control strategy. Some experimental results are reported, to show how the control performances can be affected not only by the chosen method, but also by the characteristics of the available robotic architecture and of the executed task.",
"title": ""
},
{
"docid": "6f9ae554513bba3c685f86909e31645f",
"text": "Triboelectric energy harvesting has been applied to various fields, from large-scale power generation to small electronics. Triboelectric energy is generated when certain materials come into frictional contact, e.g., static electricity from rubbing a shoe on a carpet. In particular, textile-based triboelectric energy-harvesting technologies are one of the most promising approaches because they are not only flexible, light, and comfortable but also wearable. Most previous textile-based triboelectric generators (TEGs) generate energy by vertically pressing and rubbing something. However, we propose a corrugated textile-based triboelectric generator (CT-TEG) that can generate energy by stretching. Moreover, the CT-TEG is sewn into a corrugated structure that contains an effective air gap without additional spacers. The resulting CT-TEG can generate considerable energy from various deformations, not only by pressing and rubbing but also by stretching. The maximum output performances of the CT-TEG can reach up to 28.13 V and 2.71 μA with stretching and releasing motions. Additionally, we demonstrate the generation of sufficient energy from various activities of a human body to power about 54 LEDs. These results demonstrate the potential application of CT-TEGs for self-powered systems.",
"title": ""
},
{
"docid": "719c945e9f45371f8422648e0e81178f",
"text": "As technology in the cloud increases, there has been a lot of improvements in the maturity and firmness of cloud storage technologies. Many end-users and IT managers are getting very excited about the potential benefits of cloud storage, such as being able to store and retrieve data in the cloud and capitalizing on the promise of higher-performance, more scalable and cut-price storage. In this thesis, we present a typical Cloud Storage system architecture, a referral Cloud Storage model and Multi-Tenancy Cloud Storage model, value the past and the state-ofthe-art of Cloud Storage, and examine the Edge and problems that must be addressed to implement Cloud Storage. Use cases in diverse Cloud Storage offerings were also abridged. KEYWORDS—Cloud Storage, Cloud Computing, referral model, Multi-Tenancy, survey",
"title": ""
},
{
"docid": "5956e9399cfe817aa1ddec5553883bef",
"text": "Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g. Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.",
"title": ""
},
{
"docid": "b72d0d187fe12d1f006c8e17834af60e",
"text": "Pseudoangiomatous stromal hyperplasia (PASH) is a rare benign mesenchymal proliferative lesion of the breast. In this study, we aimed to show a case of PASH with mammographic and sonographic features, which fulfill the criteria for benign lesions and to define its recently discovered elastography findings. A 49-year-old premenopausal female presented with breast pain in our outpatient surgery clinic. In ultrasound images, a hypoechoic solid mass located at the 3 o'clock position in the periareolar region of the right breast was observed. Due to it was not detected on earlier mammographies, the patient underwent a tru-cut biopsy, although the mass fulfilled the criteria for benign lesions on mammography, ultrasound, and elastography. Elastography is a new technique differentiating between benign and malignant lesions. It is also useful to determine whether a biopsy is necessary or follow-up is sufficient.",
"title": ""
},
{
"docid": "c851bad8a1f7c8526d144453b3f2aa4f",
"text": "Taxonomies of person characteristics are well developed, whereas taxonomies of psychologically important situation characteristics are underdeveloped. A working model of situation perception implies the existence of taxonomizable dimensions of psychologically meaningful, important, and consequential situation characteristics tied to situation cues, goal affordances, and behavior. Such dimensions are developed and demonstrated in a multi-method set of 6 studies. First, the \"Situational Eight DIAMONDS\" dimensions Duty, Intellect, Adversity, Mating, pOsitivity, Negativity, Deception, and Sociality (Study 1) are established from the Riverside Situational Q-Sort (Sherman, Nave, & Funder, 2010, 2012, 2013; Wagerman & Funder, 2009). Second, their rater agreement (Study 2) and associations with situation cues and goal/trait affordances (Studies 3 and 4) are examined. Finally, the usefulness of these dimensions is demonstrated by examining their predictive power of behavior (Study 5), particularly vis-à-vis measures of personality and situations (Study 6). Together, we provide extensive and compelling evidence that the DIAMONDS taxonomy is useful for organizing major dimensions of situation characteristics. We discuss the DIAMONDS taxonomy in the context of previous taxonomic approaches and sketch future research directions.",
"title": ""
},
{
"docid": "aefa4559fa6f8e0c046cd7e02d3e1b92",
"text": "The concept of smart city is considered as the new engine for economic and social growths since it is supported by the rapid development of information and communication technologies. However, each technology not only brings its advantages, but also the challenges that cities have to face in order to implement it. So, this paper addresses two research questions : « What are the most important technologies that drive the development of smart cities ?» and « what are the challenges that cities will face when adopting these technologies ? » Relying on a literature review of studies published between 1990 and 2017, the ensuing results show that Artificial Intelligence and Internet of Things represent the most used technologies for smart cities. So, the focus of this paper will be on these two technologies by showing their advantages and their challenges.",
"title": ""
},
{
"docid": "123a21d9913767e1a8d1d043f6feab01",
"text": "Permanent magnet synchronous machines generate parasitic torque pulsations owing to distortion of the stator flux linkage distribution, variable magnetic reluctance at the stator slots, and secondary phenomena. The consequences are speed oscillations which, although small in magnitude, deteriorate the performance of the drive in demanding applications. The parasitic effects are analysed and modelled using the complex state-variable approach. A fast current control system is employed to produce highfrequency electromagnetic torque components for compensation. A self-commissioning scheme is described which identifies the machine parameters, particularly the torque ripple functions which depend on the angular position of the rotor. Variations of permanent magnet flux density with temperature are compensated by on-line adaptation. The algorithms for adaptation and control are implemented in a standard microcontroller system without additional hardware. The effectiveness of the adaptive torque ripple compensation is demonstrated by experiments.",
"title": ""
},
{
"docid": "ccc4994ba255084af5456925ba6c164e",
"text": "This letter proposes a novel, small, printed monopole antenna for ultrawideband (UWB) applications with dual band-notch function. By cutting an inverted fork-shaped slit in the ground plane, additional resonance is excited, and hence much wider impedance bandwidth can be produced. To generate dual band-notch characteristics, we use a coupled inverted U-ring strip in the radiating patch. The measured results reveal that the presented dual band-notch monopole antenna offers a wide bandwidth with two notched bands, covering all the 5.2/5.8-GHz WLAN, 3.5/5.5-GHz WiMAX, and 4-GHz C-bands. The proposed antenna has a small size of 12<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times\\,$</tex> </formula>18 mm<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{2}$</tex> </formula> or about <formula formulatype=\"inline\"><tex Notation=\"TeX\">$0.15 \\lambda \\times 0.25 \\lambda$</tex></formula> at 4.2 GHz (first resonance frequency), which has a size reduction of 28% with respect to the previous similar antenna. Simulated and measured results are presented to validate the usefulness of the proposed antenna structure UWB applications.",
"title": ""
},
{
"docid": "e75ec4137b0c559a1c375d97993448b0",
"text": "In recent years, consumer-class UAVs have come into public view and cyber security starts to attract the attention of researchers and hackers. The tasks of positioning, navigation and return-to-home (RTH) of UAV heavily depend on GPS. However, the signal structure of civil GPS used by UAVs is completely open and unencrypted, and the signal received by ground devices is very weak. As a result, GPS signals are vulnerable to jamming and spoofing. The development of software define radio (SDR) has made GPS-spoofing easy and costless. GPS-spoofing may cause UAVs to be out of control or even hijacked. In this paper, we propose a novel method to detect GPS-spoofing based on monocular camera and IMU sensor of UAV. Our method was demonstrated on the UAV of DJI Phantom 4.",
"title": ""
},
{
"docid": "bd20bbe7deb2383b6253ec3f576dcf56",
"text": "Despite recent advances, the remaining bottlenecks in deep generative models are necessity of extensive training and difficulties with generalization from small number of training examples. We develop a new generative model called Generative Matching Network which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks. By conditioning on the additional input dataset, our model can instantly learn new concepts that were not available in the training data but conform to a similar generative process. The proposed framework does not explicitly restrict diversity of the conditioning data and also does not require an extensive inference procedure for training or adaptation. Our experiments on the Omniglot dataset demonstrate that Generative Matching Networks significantly improve predictive performance on the fly as more additional data is available and outperform existing state of the art conditional generative models.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "8f957dab2aa6b186b61bc309f3f2b5c3",
"text": "Learning deeper convolutional neural networks has become a tendency in recent years. However, many empirical evidences suggest that performance improvement cannot be attained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method Relay Backpropagation, which encourages the propagation of effective information through the network in training stage. By virtue of the method, we achieved the first place in ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large scale datasets demonstrate the effectiveness of our method is not restricted to a specific dataset or network architecture.",
"title": ""
},
{
"docid": "455c080ab112cd4f71a29ab84af019f5",
"text": "We propose a novel image inpainting approach in which the exemplar and the sparse representation are combined together skillfully. In the process of image inpainting, often there will be such a situation: although the sum of squared differences (SSD) of exemplar patch is the smallest among all the candidate patches, there may be a noticeable visual discontinuity in the recovered image when using the exemplar patch to replace the target patch. In this case, we cleverly use the sparse representation of image over a redundant dictionary to recover the target patch, instead of using the exemplar patch to replace it, so that we can promptly prevent the occurrence and accumulation of errors, and obtain satisfied results. Experiments on a number of real and synthetic images demonstrate the effectiveness of proposed algorithm, and the recovered images can better meet the requirements of human vision.",
"title": ""
}
] |
scidocsrr
|
acba07b0f0738c55be978ceeccf1a993
|
Emotion Recognition Based on Joint Visual and Audio Cues
|
[
{
"docid": "8877d6753d6b7cd39ba36c074ca56b00",
"text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.",
"title": ""
},
{
"docid": "d9ffb9e4bba1205892351b1328977f6c",
"text": "Bayesian network models provide an attractive framework for multimodal sensor fusion. They combine an intuitive graphical representation with efficient algorithms for inference and learning. However, the unsupervised nature of standard parameter learning algorithms for Bayesian networks can lead to poor performance in classification tasks. We have developed a supervised learning framework for Bayesian networks, which is based on the Adaboost algorithm of Schapire and Freund. Our framework covers static and dynamic Bayesian networks with both discrete and continuous states. We have tested our framework in the context of a novel multimodal HCI application: a speech-based command and control interface for a Smart Kiosk. We provide experimental evidence for the utility of our boosted learning approach.",
"title": ""
},
{
"docid": "c8e321ac8b32643ac9cbe151bb9e5f8f",
"text": "The most expressive way humans display emotions is through facial expressions. In this work we report on several advances we have made in building a system for classification of facial expressions from continuous video input. We introduce and test different Bayesian network classifiers for classifying expressions from video, focusing on changes in distribution assumptions, and feature dependency structures. In particular we use Naive–Bayes classifiers and change the distribution from Gaussian to Cauchy, and use Gaussian Tree-Augmented Naive Bayes (TAN) classifiers to learn the dependencies among different facial motion features. We also introduce a facial expression recognition from live video input using temporal cues. We exploit the existing methods and propose a new architecture of hidden Markov models (HMMs) for automatically segmenting and recognizing human facial expression from video sequences. The architecture performs both segmentation and recognition of the facial expressions automatically using a multi-level architecture composed of an HMM layer and a Markov model layer. We explore both person-dependent and person-independent recognition of expressions and compare the different methods. 2003 Elsevier Inc. All rights reserved. * Corresponding author. E-mail addresses: iracohen@ifp.uiuc.edu (I. Cohen), nicu@science.uva.nl (N. Sebe), ashutosh@ us.ibm.com (A. Garg), lawrence.chen@kodak.com (L. Chen), huang@ifp.uiuc.edu (T.S. Huang). 1077-3142/$ see front matter 2003 Elsevier Inc. All rights reserved. doi:10.1016/S1077-3142(03)00081-X I. Cohen et al. / Computer Vision and Image Understanding 91 (2003) 160–187 161",
"title": ""
}
] |
[
{
"docid": "e0ee4f306bb7539d408f606d3c036ac5",
"text": "Despite the growing popularity of mobile web browsing, the energy consumed by a phone browser while surfing the web is poorly understood. We present an infrastructure for measuring the precise energy used by a mobile browser to render web pages. We then measure the energy needed to render financial, e-commerce, email, blogging, news and social networking sites. Our tools are sufficiently precise to measure the energy needed to render individual web elements, such as cascade style sheets (CSS), Javascript, images, and plug-in objects. Our results show that for popular sites, downloading and parsing cascade style sheets and Javascript consumes a significant fraction of the total energy needed to render the page. Using the data we collected we make concrete recommendations on how to design web pages so as to minimize the energy needed to render the page. As an example, by modifying scripts on the Wikipedia mobile site we reduced by 30% the energy needed to download and render Wikipedia pages with no change to the user experience. We conclude by estimating the point at which offloading browser computations to a remote proxy can save energy on the phone.",
"title": ""
},
{
"docid": "10994a99bb4da87a34d835720d005668",
"text": "Wireless sensor networks (WSNs), consisting of a large number of nodes to detect ambient environment, are widely deployed in a predefined area to provide more sophisticated sensing, communication, and processing capabilities, especially concerning the maintenance when hundreds or thousands of nodes are required to be deployed over wide areas at the same time. Radio frequency identification (RFID) technology, by reading the low-cost passive tags installed on objects or people, has been widely adopted in the tracing and tracking industry and can support an accurate positioning within a limited distance. Joint utilization of WSN and RFID technologies is attracting increasing attention within the Internet of Things (IoT) community, due to the potential of providing pervasive context-aware applications with advantages from both fields. WSN-RFID convergence is considered especially promising in context-aware systems with indoor positioning capabilities, where data from deployed WSN and RFID systems can be opportunistically exploited to refine and enhance the collected data with position information. In this papera, we design and evaluate a hybrid system which combines WSN and RFID technologies to provide an indoor positioning service with the capability of feeding position information into a general-purpose IoT environment. Performance of the proposed system is evaluated by means of simulations and a small-scale experimental set-up. The performed analysis demonstrates that the joint use of heterogeneous technologies can increase the robustness and the accuracy of the indoor positioning systems.",
"title": ""
},
{
"docid": "1c6bf44a2fea9e9b1ffc015759f8986f",
"text": "Convolutional neural networks (CNNs) typically suffer from slow convergence rates in training, which limits their wider application. This paper presents a new CNN learning approach, based on second-order methods, aimed at improving: a) Convergence rates of existing gradient-based methods, and b) Robustness to the choice of learning hyper-parameters (e.g., learning rate). We derive an efficient back-propagation algorithm for simultaneously computing both gradients and second derivatives of the CNN's learning objective. These are then input to a Long Short Term Memory (LSTM) to predict optimal updates of CNN parameters in each learning iteration. Both meta-learning of the LSTM and learning of the CNN are conducted jointly. Evaluation on image classification demonstrates that our second-order backpropagation has faster convergences rates than standard gradient-based learning for the same CNN, and that it converges to better optima leading to better performance under a budgeted time for learning. We also show that an LSTM learned to learn a small CNN network can be readily used for learning a larger network.",
"title": ""
},
{
"docid": "564045d00d2e347252fda301a332f30a",
"text": "In this contribution, the control of a reverse osmosis desalination plant by using an optimal multi-loop approach is presented. Controllers are assumed to be players of a cooperative game, whose solution is obtained by multi-objective optimization (MOO). The MOO problem is solved by applying a genetic algorithm and the final solution is found from this Pareto set. For the reverse osmosis plant a control scheme consisting of two PI control loops are proposed. Simulation results show that in some cases, as for example this desalination plant, multi-loop control with several controllers, which have been obtained by join multi-objective optimization, perform as good as more complex controllers but with less implementation effort.",
"title": ""
},
{
"docid": "848e56ec20ccab212567087178e36979",
"text": "The technologies of mobile communications pervade our society and wireless networks sense the movement of people, generating large volumes of mobility data, such as mobile phone call records and Global Positioning System (GPS) tracks. In this work, we illustrate the striking analytical power of massive collections of trajectory data in unveiling the complexity of human mobility. We present the results of a large-scale experiment, based on the detailed trajectories of tens of thousands private cars with on-board GPS receivers, tracked during weeks of ordinary mobile activity. We illustrate the knowledge discovery process that, based on these data, addresses some fundamental questions of mobility analysts: what are the frequent patterns of people’s travels? How big attractors and extraordinary events influence mobility? How to predict areas of dense traffic in the near future? How to characterize traffic jams and congestions? We also describe M-Atlas, the querying and mining language and system that makes this analytical process possible, providing the mechanisms to master the complexity of transforming raw GPS tracks into mobility knowledge. M-Atlas is centered onto the concept of a trajectory, and the mobility knowledge discovery process can be specified by M-Atlas queries that realize data transformations, data-driven estimation of the parameters of the mining methods, the quality assessment of the obtained results, the quantitative and visual exploration of the discovered behavioral patterns and models, the composition of mined patterns, models and data with further analyses and mining, and the incremental mining strategies to address scalability.",
"title": ""
},
{
"docid": "e8e658d677a3b1a23650b25edd32fc84",
"text": "The aim of the study is to facilitate the suture on the sacral promontory for laparoscopic sacrocolpopexy. We hypothesised that a new method of sacral anchorage using a biosynthetic material, the polyether ether ketone (PEEK) harpoon, might be adequate because of its tensile strength, might reduce complications owing to its well-known biocompatibility, and might shorten the duration of surgery. We verified the feasibility of insertion and quantified the stress resistance of the harpoons placed in the promontory in nine fresh cadavers, using four stress tests in each case. Mean values were analysed and compared using the Wilcoxon and Fisher’s exact tests. The harpoon resists for at least 30 s against a pulling force of 1 N, 5 N and 10 N. Maximum tensile strength is 21 N for the harpoon and 32 N for the suture. Harpoons broke in 6 % and threads in 22 % of cases. Harpoons detached owing to ligament rupture in 64 % of the cases. Regarding failures of the whole complex, the failure involves the harpoon in 92 % of cases and the thread in 56 %. The four possible placements of the harpoon in the promontory were equally safe in terms of resistance to traction. The PEEK harpoon can be easily anchored in the promontory. Thread is more resistant to traction than the harpoon, but the latter makes the surgical technique easier. Any of the four locations tested is feasible for anchoring the device.",
"title": ""
},
{
"docid": "4d383a53c180d5dc4473ab9d7795639a",
"text": "With pervasive applications of medical imaging in health-care, biomedical image segmentation plays a central role in quantitative analysis, clinical diagnosis, and medical intervention. Since manual annotation suffers limited reproducibility, arduous efforts, and excessive time, automatic segmentation is desired to process increasingly larger scale histopathological data. Recently, deep neural networks (DNNs), particularly fully convolutional networks (FCNs), have been widely applied to biomedical image segmentation, attaining much improved performance. At the same time, quantization of DNNs has become an active research topic, which aims to represent weights with less memory (precision) to considerably reduce memory and computation requirements of DNNs while maintaining acceptable accuracy. In this paper, we apply quantization techniques to FCNs for accurate biomedical image segmentation. Unlike existing literatures on quantization which primarily targets memory and computation complexity reduction, we apply quantization as a method to reduce overfitting in FCNs for better accuracy. Specifically, we focus on a state-of-the-art segmentation framework, suggestive annotation [26], which judiciously extracts representative annotation samples from the original training dataset, obtaining an effective small-sized balanced training dataset. We develop two new quantization processes for this framework: (1) suggestive annotation with quantization for highly representative training samples, and (2) network training with quantization for high accuracy. Extensive experiments on the MICCAI Gland dataset show that both quantization processes can improve the segmentation performance, and our proposed method exceeds the current state-of-the-art performance by up to 1%. In addition, our method has a reduction of up to 6.4x on memory usage.",
"title": ""
},
{
"docid": "71b31941082d639dfc6178ff74fba487",
"text": "This paper describes ETH Zurich’s submission to the TREC 2016 Clinical Decision Support (CDS) track. In three successive stages, we apply query expansion based on literal as well as semantic term matches, rank documents in a negation-aware manner and, finally, re-rank them based on clinical intent types as well as semantic and conceptual affinity to the medical case in question. Empirical results show that the proposed method can distill patient representations from raw clinical notes that result in a retrieval performance superior to that of manually constructed case descriptions.",
"title": ""
},
{
"docid": "3be0bd7f02c941f32903f6ad2379f45b",
"text": "Spinal cord injury induces the disruption of blood-spinal cord barrier and triggers a complex array of tissue responses, including endoplasmic reticulum (ER) stress and autophagy. However, the roles of ER stress and autophagy in blood-spinal cord barrier disruption have not been discussed in acute spinal cord trauma. In the present study, we respectively detected the roles of ER stress and autophagy in blood-spinal cord barrier disruption after spinal cord injury. Besides, we also detected the cross-talking between autophagy and ER stress both in vivo and in vitro. ER stress inhibitor, 4-phenylbutyric acid, and autophagy inhibitor, chloroquine, were respectively or combinedly administrated in the model of acute spinal cord injury rats. At day 1 after spinal cord injury, blood-spinal cord barrier was disrupted and activation of ER stress and autophagy were involved in the rat model of trauma. Inhibition of ER stress by treating with 4-phenylbutyric acid decreased blood-spinal cord barrier permeability, prevented the loss of tight junction (TJ) proteins and reduced autophagy activation after spinal cord injury. On the contrary, inhibition of autophagy by treating with chloroquine exacerbated blood-spinal cord barrier permeability, promoted the loss of TJ proteins and enhanced ER stress after spinal cord injury. When 4-phenylbutyric acid and chloroquine were combinedly administrated in spinal cord injury rats, chloroquine abolished the blood-spinal cord barrier protective effect of 4-phenylbutyric acid by exacerbating ER stress after spinal cord injury, indicating that the cross-talking between autophagy and ER stress may play a central role on blood-spinal cord barrier integrity in acute spinal cord injury. The present study illustrates that ER stress induced by spinal cord injury plays a detrimental role on blood-spinal cord barrier integrity, on the contrary, autophagy induced by spinal cord injury plays a furthersome role in blood-spinal cord barrier integrity in acute spinal cord injury.",
"title": ""
},
{
"docid": "c27ba892408391234da524ffab0e7418",
"text": "Sunlight and skylight are rarely rendered correctly in computer graphics. A major reason for this is high computational expense. Another is that precise atmospheric data is rarely available. We present an inexpensive analytic model that approximates full spectrum daylight for various atmospheric conditions. These conditions are parameterized using terms that users can either measure or estimate. We also present an inexpensive analytic model that approximates the effects of atmosphere (aerial perspective). These models are fielded in a number of conditions and intermediate results verified against standard literature from atmospheric science. Our goal is to achieve as much accuracy as possible without sacrificing usability.",
"title": ""
},
{
"docid": "be3640467394a0e0b5a5035749b442e9",
"text": "Data pre-processing is an important and critical step in the data mining process and it has a huge impact on the success of a data mining project.[1](3) Data pre-processing is a step of the Knowledge discovery in databases (KDD) process that reduces the complexity of the data and offers better conditions to subsequent analysis. Through this the nature of the data is better understood and the data analysis is performed more accurately and efficiently. Data pre-processing is challenging as it involves extensive manual effort and time in developing the data operation scripts. There are a number of different tools and methods used for pre-processing, including: sampling, which selects a representative subset from a large population of data; transformation, which manipulates raw data to produce a single input; denoising, which removes noise from data; normalization, which organizes data for more efficient access; and feature extraction, which pulls out specified data that is significant in some particular context. Pre-processing technique is also useful for association rules algo. LikeAprior, Partitioned, Princer-search algo. and many more algos.",
"title": ""
},
{
"docid": "566913d3a3d2e8fe24d6f5ff78440b94",
"text": "We describe a Digital Advertising System Simulation (DASS) for modeling advertising and its impact on user behavior. DASS is both flexible and general, and can be applied to research on a wide range of topics, such as digital attribution, ad fatigue, campaign optimization, and marketing mix modeling. This paper introduces the basic DASS simulation framework and illustrates its application to digital attribution. We show that common position-based attribution models fail to capture the true causal effects of advertising across several simple scenarios. These results lay a groundwork for the evaluation of more complex attribution models, and the development of improved models.",
"title": ""
},
{
"docid": "3aa36b86391a2596ea1fe1fe75470362",
"text": "Experimental and computational studies of the hovering performance of microcoaxial shrouded rotors were carried out. The ATI Mini Multi-Axis Force/Torque Transducer system was used to measure all six components of the force and moment. Meanwhile, numerical simulation of flow field around rotor was carried out using sliding mesh method and multiple reference frame technique by ANASYS FLUENT. The computational results were well agreed with experimental data. Several important factors, such as blade pitch angle, rotor spacing and tip clearance, which influence the performance of shrouded coaxial rotor are studied in detail using CFD method in this paper. Results shows that, evaluated in terms of Figure of Merit, open coaxial rotor is suited for smaller pitch angle condition while shrouded coaxial rotor is suited for larger pitch angle condition. The negative pressure region around the shroud lip is the main source of the thrust generation. In order to have a better performance for shrouded coaxial rotor, the tip clearance must be smaller. The thrust sharing of upper- and lower-rotor is also discussed in this paper.",
"title": ""
},
{
"docid": "785bd7171800d3f2f59f90838a84dc37",
"text": "BACKGROUND\nCancer is considered to develop due to disruptions in the tissue microenvironment in addition to genetic disruptions in the tumor cells themselves. The two most important microenvironmental disruptions in cancer are arguably tissue hypoxia and disrupted circadian rhythmicity. Endothelial cells, which line the luminal side of all blood vessels transport oxygen or endocrine circadian regulators to the tissue and are therefore of key importance for circadian disruption and hypoxia in tumors.\n\n\nSCOPE OF REVIEW\nHere I review recent findings on the role of circadian rhythms and hypoxia in cancer and metastasis, with particular emphasis on how these pathways link tumor metastasis to pathological functions of blood vessels. The involvement of disrupted cell metabolism and redox homeostasis in this context and the use of novel zebrafish models for such studies will be discussed.\n\n\nMAJOR CONCLUSIONS\nCircadian rhythms and hypoxia are involved in tumor metastasis on all levels from pathological deregulation of the cell to the tissue and the whole organism. Pathological tumor blood vessels cause hypoxia and disruption in circadian rhythmicity which in turn drives tumor metastasis. Zebrafish models may be used to increase our understanding of the mechanisms behind hypoxia and circadian regulation of metastasis.\n\n\nGENERAL SIGNIFICANCE\nDisrupted blood flow in tumors is currently seen as a therapeutic goal in cancer treatment, but may drive invasion and metastasis via pathological hypoxia and circadian clock signaling. Understanding the molecular details behind such regulation is important to optimize treatment for patients with solid tumors in the future. This article is part of a Special Issue entitled Redox regulation of differentiation and de-differentiation.",
"title": ""
},
{
"docid": "a398f3f5b670a9d2c9ae8ad84a4a3cb8",
"text": "This project deals with online simultaneous localization and mapping (SLAM) problem without taking any assistance from Global Positioning System (GPS) and Inertial Measurement Unit (IMU). The main aim of this project is to perform online odometry and mapping in real time using a 2-axis lidar mounted on a robot. This involves use of two algorithms, the first of which runs at a higher frequency and uses the collected data to estimate velocity of the lidar which is fed to the second algorithm, a scan registration and mapping algorithm, to perform accurate matching of point cloud data.",
"title": ""
},
{
"docid": "fada1434ec6e060eee9a2431688f82f3",
"text": "Neural language models (NLMs) have been able to improve machine translation (MT) thanks to their ability to generalize well to long contexts. Despite recent successes of deep neural networks in speech and vision, the general practice in MT is to incorporate NLMs with only one or two hidden layers and there have not been clear results on whether having more layers helps. In this paper, we demonstrate that deep NLMs with three or four layers outperform those with fewer layers in terms of both the perplexity and the translation quality. We combine various techniques to successfully train deep NLMs that jointly condition on both the source and target contexts. When reranking nbest lists of a strong web-forum baseline, our deep models yield an average boost of 0.5 TER / 0.5 BLEU points compared to using a shallow NLM. Additionally, we adapt our models to a new sms-chat domain and obtain a similar gain of 1.0 TER / 0.5 BLEU points.",
"title": ""
},
{
"docid": "3ca76a840ac35d94677fa45c767e61f1",
"text": "A three dimensional (3-D) imaging system is implemented by employing 2-D range migration algorithm (RMA) for frequency modulated continuous wave synthetic aperture radar (FMCW-SAR). The backscattered data of a 1-D synthetic aperture at specific altitudes are coherently integrated to form 2-D images. These 2-D images at different altitudes are stitched vertically to form a 3-D image. Numerical simulation for near-field scenario are also presented to validate the proposed algorithm.",
"title": ""
},
{
"docid": "e82681b5140f3a9b283bbd02870f18d5",
"text": "Employee turnover has been identified as a key issue for organizations because of its adverse impact on work place productivity and long term growth strategies. To solve this problem, organizations use machine learning techniques to predict employee turnover. Accurate predictions enable organizations to take action for retention or succession planning of employees. However, the data for this modeling problem comes from HR Information Systems (HRIS); these are typically under-funded compared to the Information Systems of other domains in the organization which are directly related to its priorities. This leads to the prevalence of noise in the data that renders predictive models prone to over-fitting and hence inaccurate. This is the key challenge that is the focus of this paper, and one that has not been addressed historically. The novel contribution of this paper is to explore the application of Extreme Gradient Boosting (XGBoost) technique which is more robust because of its regularization formulation. Data from the HRIS of a global retailer is used to compare XGBoost against six historically used supervised classifiers and demonstrate its significantly higher accuracy for predicting employee turnover. Keywords—turnover prediction; machine learning; extreme gradient boosting; supervised classification; regularization",
"title": ""
},
{
"docid": "ba573c3dd5206e7f71be11d030060484",
"text": "The availability of camera phones provides people with a mobile platform for decoding bar codes, whereas conventional scanners lack mobility. However, using a normal camera phone in such applications is challenging due to the out-of-focus problem. In this paper, we present the research effort on the bar code reading algorithms using a VGA camera phone, NOKIA 7650. EAN-13, a widely used 1D bar code standard, is taken as an example to show the efficiency of the method. A wavelet-based bar code region location and knowledge-based bar code segmentation scheme is applied to extract bar code characters from poor-quality images. All the segmented bar code characters are input to the recognition engine, and based on the recognition distance, the bar code character string with the smallest total distance is output as the final recognition result of the bar code. In order to train an efficient recognition engine, the modified Generalized Learning Vector Quantization (GLVQ) method is designed for optimizing a feature extraction matrix and the class reference vectors. 19 584 samples segmented from more than 1000 bar code images captured by NOKIA 7650 are involved in the training process. Testing on 292 bar code images taken by the same phone, the correct recognition rate of the entire bar code set reaches 85.62%. We are confident that auto focus or macro modes on camera phones will bring the presented method into real world mobile use.",
"title": ""
}
] |
scidocsrr
|
9705b47395ef0884d8739af8b47e69b1
|
Tell me a story--a conceptual exploration of storytelling in healthcare education.
|
[
{
"docid": "4ade01af5fd850722fd690a5d8f938f4",
"text": "IT may appear blasphemous to paraphrase the title of the classic article of Vannevar Bush but it may be a mitigating factor that it is done to pay tribute to another legendary scientist, Eugene Garfield. His ideas of citationbased searching, resource discovery and quantitative evaluation of publications serve as the basis for many of the most innovative and powerful online information services these days. Bush 60 years ago contemplated – among many other things – an information workstation, the Memex. A researcher would use it to annotate, organize, link, store, and retrieve microfilmed documents. He is acknowledged today as the forefather of the hypertext system, which in turn, is the backbone of the Internet. He outlined his thoughts in an essay published in the Atlantic Monthly. Maybe because of using a nonscientific outlet the paper was hardly quoted and cited in scholarly and professional journals for 30 years. Understandably, the Atlantic Monthly was not covered by the few, specialized abstracting and indexing databases of scientific literature. Such general interest magazines are not source journals in either the Web of Science (WoS), or Scopus databases. However, records for items which cite the ‘As We May Think’ article of Bush (also known as the ‘Memex’ paper) are listed with appropriate bibliographic information. Google Scholar (G-S) lists the records for the Memex paper and many of its citing papers. It is a rather confusing list with many dead links or otherwise dysfunctional links, and a hodge-podge of information related to Bush. It is quite telling that (based on data from the 1945– 2005 edition of WoS) the article of Bush gathered almost 90% of all its 712 citations in WoS between 1975 and 2005, peaking in 1999 with 45 citations in that year alone. Undoubtedly, this proportion is likely to be distorted because far fewer source articles from far fewer journals were processed by the Institute for Scientific Information for 1945–1974 than for 1975–2005. Scopus identifies 267 papers citing the Bush article. The main reason for the discrepancy is that Scopus includes cited references only from 1995 onward, while WoS does so from 1945. Bush’s impatience with the limitations imposed by the traditional classification and indexing tools and practices of the time is palpable. It is worth to quote it as a reminder. Interestingly, he brings up the terms ‘web of trails’ and ‘association of thoughts’ which establishes the link between him and Garfield.",
"title": ""
}
] |
[
{
"docid": "4da3f01ac76da39be45ab39c1e46bcf0",
"text": "Depth cameras are low-cost, plug & play solution to generate point cloud. 3D depth camera yields depth images which do not convey the actual distance. A 3D camera driver does not support raw depth data output, these are usually filtered and calibrated as per the sensor specifications and hence a method is required to map every pixel back to its original point in 3D space. This paper demonstrates the method to triangulate a pixel from the 2D depth image back to its actual position in 3D space. Further this method illustrates the independence of this mapping operation, which facilitates parallel computing. Triangulation method and ratios between the pixel positions and camera parameters are used to estimate the true position in 3D space. The algorithm performance can be increased by 70% by the usage of TPL libraries. This performance differs from processor to processor",
"title": ""
},
{
"docid": "9d5d667c6d621bd90a688c993065f5df",
"text": "Creative individuals increasingly rely on online crowdfunding platforms to crowdsource funding for new ventures. For novice crowdfunding project creators, however, there are few resources to turn to for assistance in the planning of crowdfunding projects. We are building a tool for novice project creators to get feedback on their project designs. One component of this tool is a comparison to existing projects. As such, we have applied a variety of machine learning classifiers to learn the concept of a successful online crowdfunding project at the time of project launch. Currently our classifier can predict with roughly 68% accuracy, whether a project will be successful or not. The classification results will eventually power a prediction segment of the proposed feedback tool. Future work involves turning the results of the machine learning algorithms into human-readable content and integrating this content into the feedback tool.",
"title": ""
},
{
"docid": "80e26e5bcbadf034896fcd206cd16099",
"text": "This paper focuses on localization that serves as a smart service. Among the primary services provided by Internet of Things (IoT), localization offers automatically discoverable services. Knowledge relating to an object's position, especially when combined with other information collected from sensors and shared with other smart objects, allows us to develop intelligent systems to fast respond to changes in an environment. Today, wireless sensor networks (WSNs) have become a critical technology for various kinds of smart environments through which different kinds of devices can connect with each other coinciding with the principles of IoT. Among various WSN techniques designed for positioning an unknown node, the trilateration approach based on the received signal strength is the most suitable for localization due to its implementation simplicity and low hardware requirement. However, its performance is susceptible to external factors, such as the number of people present in a room, the shape and dimension of an environment, and the positions of objects and devices. To improve the localization accuracy of trilateration, we develop a novel distributed localization algorithm with a dynamic-circle-expanding mechanism capable of more accurately establishing the geometric relationship between an unknown node and reference nodes. The results of real world experiments and computer simulation show that the average error of position estimation is 0.67 and 0.225 m in the best cases, respectively. This suggests that the proposed localization algorithm outperforms other existing methods.",
"title": ""
},
{
"docid": "d0ad2b6a36dce62f650323cb5dd40bc9",
"text": "If two hospitals are providing identical services in all respects, except for the brand name, why are customers willing to pay more for one hospital than the other? That is, the brand name is not just a name, but a name that contains value (brand equity). Brand equity is the value that the brand name endows to the product, such that consumers are willing to pay a premium price for products with the particular brand name. Accordingly, a company needs to manage its brand carefully so that its brand equity does not depreciate. Although measuring brand equity is important, managers have no brand equity index that is psychometrically robust and parsimonious enough for practice. Indeed, index construction is quite different from conventional scale development. Moreover, researchers might still be unaware of the potential appropriateness of formative indicators for operationalizing particular constructs. Toward this end, drawing on the brand equity literature and following the index construction procedure, this study creates a brand equity index for a hospital. The results reveal a parsimonious five-indicator brand equity index that can adequately capture the full domain of brand equity. This study also illustrates the differences between index construction and scale development.",
"title": ""
},
{
"docid": "9a522060a52474850ff328cef5ea4121",
"text": "Mild cognitive impairment (MCI) is the prodromal stage of Alzheimer's disease (AD). Identifying MCI subjects who are at high risk of converting to AD is crucial for effective treatments. In this study, a deep learning approach based on convolutional neural networks (CNN), is designed to accurately predict MCI-to-AD conversion with magnetic resonance imaging (MRI) data. First, MRI images are prepared with age-correction and other processing. Second, local patches, which are assembled into 2.5 dimensions, are extracted from these images. Then, the patches from AD and normal controls (NC) are used to train a CNN to identify deep learning features of MCI subjects. After that, structural brain image features are mined with FreeSurfer to assist CNN. Finally, both types of features are fed into an extreme learning machine classifier to predict the AD conversion. The proposed approach is validated on the standardized MRI datasets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) project. This approach achieves an accuracy of 79.9% and an area under the receiver operating characteristic curve (AUC) of 86.1% in leave-one-out cross validations. Compared with other state-of-the-art methods, the proposed one outperforms others with higher accuracy and AUC, while keeping a good balance between the sensitivity and specificity. Results demonstrate great potentials of the proposed CNN-based approach for the prediction of MCI-to-AD conversion with solely MRI data. Age correction and assisted structural brain image features can boost the prediction performance of CNN.",
"title": ""
},
{
"docid": "44bee5e310c91c778e874d347c64bc18",
"text": "In this paper, we consider a deterministic global optimization algorithm for solving a general linear sum of ratios (LFP). First, an equivalent optimization problem (LFP1) of LFP is derived by exploiting the characteristics of the constraints of LFP. By a new linearizing method the linearization relaxation function of the objective function of LFP1 is derived, then the linear relaxation programming (RLP) of LFP1 is constructed and the proposed branch and bound algorithm is convergent to the global minimum through the successive refinement of the linear relaxation of the feasible region of the objection function and the solutions of a series of RLP. And finally the numerical experiments are given to illustrate the feasibility of the proposed algorithm. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "7c3c06529ae52055de668cbefce39c5f",
"text": "Context-aware recommendation algorithms focus on refining recommendations by considering additional information, available to the system. This topic has gained a lot of attention recently. Among others, several factorization methods were proposed to solve the problem, although most of them assume explicit feedback which strongly limits their real-world applicability. While these algorithms apply various loss functions and optimization strategies, the preference modeling under context is less explored due to the lack of tools allowing for easy experimentation with various models. As context dimensions are introduced beyond users and items, the space of possible preference models and the importance of proper modeling largely increases. In this paper we propose a general factorization framework (GFF), a single flexible algorithm that takes the preference model as an input and computes latent feature matrices for the input dimensions. GFF allows us to easily experiment with various linear models on any context-aware recommendation task, be it explicit or implicit feedback based. The scaling properties makes it usable under real life circumstances as well. We demonstrate the framework’s potential by exploring various preference models on a 4-dimensional context-aware problem with contexts that are available for almost any real life datasets. We show in our experiments—performed on five real life, implicit feedback datasets—that proper preference modelling significantly increases recommendation accuracy, and previously unused models outperform the traditional ones. Novel models in GFF also outperform state-of-the-art factorization algorithms. We also extend the method to be fully compliant to the Multidimensional Dataspace Model, one of the most extensive data models of context-enriched data. Extended GFF allows the seamless incorporation of information into the factorization framework beyond context, like item metadata, social networks, session information, etc. Preliminary experiments show great potential of this capability.",
"title": ""
},
{
"docid": "c59cae78ce3482450776755b9d9d5199",
"text": "Traditional information systems return answers after a user submits a complete query. Users often feel “left in the dark” when they have limited knowledge about the underlying data and have to use a try-and-see approach for finding information. A recent trend of supporting autocomplete in these systems is a first step toward solving this problem. In this paper, we study a new information-access paradigm, called “type-ahead search” in which the system searches the underlying data “on the fly” as the user types in query keywords. It extends autocomplete interfaces by allowing keywords to appear at different places in the underlying data. This framework allows users to explore data as they type, even in the presence of minor errors. We study research challenges in this framework for large amounts of data. Since each keystroke of the user could invoke a query on the backend, we need efficient algorithms to process each query within milliseconds. We develop various incremental-search algorithms for both single-keyword queries and multi-keyword queries, using previously computed and cached results in order to achieve a high interactive speed. We develop novel techniques to support fuzzy search by allowing mismatches between query keywords and answers. We have deployed several real prototypes using these techniques. One of them has been deployed to support type-ahead search on the UC Irvine people directory, which has been used regularly and well received by users due to its friendly interface and high efficiency.",
"title": ""
},
{
"docid": "07a4f79dbe16be70877724b142013072",
"text": "Safety planning in the construction industry is generally done separately from the project execution planning. This separation creates difficulties for safety engineers to analyze what, when, why and where safety measures are needed for preventing accidents. Lack of information and integration of available data (safety plan, project schedule, 2D project drawings) during the planning stage often results in scheduling work activities with overlapping space needs that then can create hazardous conditions, for example, work above other crew. These space requirements are time dependent and often neglected due to the manual effort that is required to handle the data. Representation of project-specific activity space requirements in 4D models hardly happen along with schedule and work break-down structure. Even with full cooperation of all related stakeholders, current safety planning and execution still largely depends on manual observation and past experiences. The traditional manual observation is inefficient, error-prone, and the observed result can be easily effected by subjective judgments. This paper will demonstrate the development of an automated safety code checking tool for Building Information Modeling (BIM), work breakdown structure, and project schedules in conjunction with safety criteria to reduce the potential for accidents on construction projects. The automated safety compliance rule checker code builds on existing applications for building code compliance checking, structural analysis, and constructability analysis etc. and also the advances in 4D simulations for scheduling. Preliminary results demonstrate a computer-based automated tool can assist in safety planning and execution of projects on a day to day basis.",
"title": ""
},
{
"docid": "70374d2cbf730fab13c3e126359b59e8",
"text": "We define a new distance measure the resistor-average distance between two probability distributions that is closely related to the Kullback-Leibler distance. While the KullbackLeibler distance is asymmetric in the two distributions, the resistor-average distance is not. It arises from geometric considerations similar to those used to derive the Chernoff distance. Determining its relation to well-known distance measures reveals a new way to depict how commonly used distance measures relate to each other.",
"title": ""
},
{
"docid": "d3b0957b31f47620c0fa8e65a1cc086a",
"text": "In this paper, we propose series of algorithms for detecting change points in time-series data based on subspace identification, meaning a geometric approach for estimating linear state-space models behind time-series data. Our algorithms are derived from the principle that the subspace spanned by the columns of an observability matrix and the one spanned by the subsequences of time-series data are approximately equivalent. In this paper, we derive a batch-type algorithm applicable to ordinary time-series data, i.e. consisting of only output series, and then introduce the online version of the algorithm and the extension to be available with input-output time-series data. We illustrate the effectiveness of our algorithms with comparative experiments using some artificial and real datasets.",
"title": ""
},
{
"docid": "aa80e0ad489c03ec94a1835d6d4907a3",
"text": "Cloud computing is a term coined to a network that offers incredible processing power, a wide array of storage space and unbelievable speed of computation. Social media channels, corporate structures and individual consumers are all switching to the magnificent world of cloud computing. The flip side to this coin is that with cloud storage emerges the security issues of confidentiality, data integrity and data availability. Since the “cloud” is a mere collection of tangible super computers spread across the world, authentication and authorization for data access is more than a necessity. Our work attempts to overcome these security threats. The proposed methodology suggests the encryption of the files to be uploaded on the cloud. The integrity and confidentiality of the data uploaded by the user is ensured doubly by not only encrypting it but also providing access to the data only on successful authentication. KeywordsCloud computing, security, encryption, password based AES algorithm",
"title": ""
},
{
"docid": "2126c47fe320af2d908ec01a426419ce",
"text": "Stretching has long been used in many physical activities to increase range of motion (ROM) around a joint. Stretching also has other acute effects on the neuromuscular system. For instance, significant reductions in maximal voluntary strength, muscle power or evoked contractile properties have been recorded immediately after a single bout of static stretching, raising interest in other stretching modalities. Thus, the effects of dynamic stretching on subsequent muscular performance have been questioned. This review aimed to investigate performance and physiological alterations following dynamic stretching. There is a substantial amount of evidence pointing out the positive effects on ROM and subsequent performance (force, power, sprint and jump). The larger ROM would be mainly attributable to reduced stiffness of the muscle-tendon unit, while the improved muscular performance to temperature and potentiation-related mechanisms caused by the voluntary contraction associated with dynamic stretching. Therefore, if the goal of a warm-up is to increase joint ROM and to enhance muscle force and/or power, dynamic stretching seems to be a suitable alternative to static stretching. Nevertheless, numerous studies reporting no alteration or even performance impairment have highlighted possible mitigating factors (such as stretch duration, amplitude or velocity). Accordingly, ballistic stretching, a form of dynamic stretching with greater velocities, would be less beneficial than controlled dynamic stretching. Notwithstanding, the literature shows that inconsistent description of stretch procedures has been an important deterrent to reaching a clear consensus. In this review, we highlight the need for future studies reporting homogeneous, clearly described stretching protocols, and propose a clarified stretching terminology and methodology.",
"title": ""
},
{
"docid": "9f6da52c8ea3ba605ecbed71e020d31a",
"text": "With the exponential growth of information being transmitted as a result of various networks, the issues related to providing security to transmit information have considerably increased. Mathematical models were proposed to consolidate the data being transmitted and to protect the same from being tampered with. Work was carried out on the application of 1D and 2D cellular automata (CA) rules for data encryption and decryption in cryptography. A lot more work needs to be done to develop suitable algorithms and 3D CA rules for encryption and description of 3D chaotic information systems. Suitable coding for the algorithms are developed and the results are evaluated for the performance of the algorithms. Here 3D cellular automata encryption and decryption algorithms are used to provide security of data by arranging plain texts and images into layers of cellular automata by using the cellular automata neighbourhood system. This has resulted in highest order of security for transmitted data.",
"title": ""
},
{
"docid": "19067b3d0f951bad90c80688371532fc",
"text": "Research in Artificial Intelligence is breaking technology barriers every day. New algorithms and high performance computing are making things possible which we could only have imagined earlier. Though the enhancements in AI are making life easier for human beings day by day, there is constant fear that AI based systems will pose a threat to humanity. People in AI community have diverse set of opinions regarding the pros and cons of AI mimicking human behavior. Instead of worrying about AI advancements, we propose a novel idea of cognitive agents, including both human and machines, living together in a complex adaptive ecosystem, collaborating on human computation for producing essential social goods while promoting sustenance, survival and evolution of the agents’ life cycle. We highlight several research challenges and technology barriers in achieving this goal. We propose a governance mechanism around this ecosystem to ensure ethical behaviors of all cognitive agents. Along with a novel set of use-cases of Cogniculture , we discuss the road map ahead",
"title": ""
},
{
"docid": "7e4c00d8f17166cbfb3bdac8d5e5ad09",
"text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility.",
"title": ""
},
{
"docid": "59323291555a82ef99013bd4510b3020",
"text": "This paper aims to classify and analyze recent as well as classic image registration techniques. Image registration is the process of super imposing images of the same scene taken at different times, location and by different sensors. It is a key enabling technology in medical image analysis for integrating and analyzing information from various modalities. Basically image registration finds temporal correspondences between the set of images and uses transformation model to infer features from these correspondences.The approaches for image registration can beclassified according to their nature vizarea-based and feature-based and dimensionalityvizspatial domain and frequency domain. The procedure of image registration by intensity based model, spatial domain transform, Rigid transform and Non rigid transform based on the above mentioned classification has been performed and the eminence of image is measured by the three quality parameters such as SNR, PSNR and MSE. The techniques have been implemented and inferred thatthe non-rigid transform exhibit higher perceptual quality and offer visually sharper image than other techniques.Problematic issues of image registration techniques and outlook for the future research are discussed. This work may be one of the comprehensive reference sources for the researchers involved in image registration.",
"title": ""
},
{
"docid": "68dc61e0c6b33729f08cdd73e8e86096",
"text": "Many important data analysis applications present with severely imbalanced datasets with respect to the target variable. A typical example is medical image analysis, where positive samples are scarce, while performance is commonly estimated against the correct detection of these positive examples. We approach this challenge by formulating the problem as anomaly detection with generative models. We train a generative model without supervision on the ‘negative’ (common) datapoints and use this model to estimate the likelihood of unseen data. A successful model allows us to detect the ‘positive’ case as low likelihood datapoints. In this position paper, we present the use of state-of-the-art deep generative models (GAN and VAE) for the estimation of a likelihood of the data. Our results show that on the one hand both GANs and VAEs are able to separate the ‘positive’ and ‘negative’ samples in the MNIST case. On the other hand, for the NLST case, neither GANs nor VAEs were able to capture the complexity of the data and discriminate anomalies at the level that this task requires. These results show that even though there are a number of successes presented in the literature for using generative models in similar applications, there remain further challenges for broad successful implementation.",
"title": ""
},
{
"docid": "a6a364819f397a8e28ac0b19480253cc",
"text": "News agencies and other news providers or consumers are confronted with the task of extracting events from news articles. This is done i) either to monitor and, hence, to be informed about events of specific kinds over time and/or ii) to react to events immediately. In the past, several promising approaches to extracting events from text have been proposed. Besides purely statistically-based approaches there are methods to represent events in a semantically-structured form, such as graphs containing actions (predicates), participants (entities), etc. However, it turns out to be very difficult to automatically determine whether an event is real or not. In this paper, we give an overview of approaches which proposed solutions for this research problem. We show that there is no gold standard dataset where real events are annotated in text documents in a fine-grained, semantically-enriched way. We present a methodology of creating such a dataset with the help of crowdsourcing and present preliminary results.",
"title": ""
},
{
"docid": "a6f08476ea81c50a36497bd65137ca16",
"text": "In this paper we tackle the inversion of large-scale dense matrices via conventional matrix factorizations (LU, Cholesky, LDL ) and the Gauss-Jordan method on hybrid platforms consisting of a multi-core CPU and a many-core graphics processor (GPU). Specifically, we introduce the different matrix inversion algorithms using a unified framework based on the notation from the FLAME project; we develop hybrid implementations for those matrix operations underlying the algorithms, alternative to those in existing libraries for singleGPU systems; and we perform an extensive experimental study on a platform equipped with state-of-the-art general-purpose architectures from Intel and a “Fermi” GPU from NVIDIA that exposes the efficiency of the different inversion approaches. Our study and experimental results show the simplicity and performance advantage of the GJE-based inversion methods, and the difficulties associated with the symmetric indefinite case.",
"title": ""
}
] |
scidocsrr
|
ad1d0433a6ca7d8d26521c8a6206608c
|
Actions speak as loud as words: predicting relationships from social behavior data
|
[
{
"docid": "cae43bdbf48e694b7fb509ea3b3392f1",
"text": "As user-generated content and interactions have overtaken the web as the default mode of use, questions of whom and what to trust have become increasingly important. Fortunately, online social networks and social media have made it easy for users to indicate whom they trust and whom they do not. However, this does not solve the problem since each user is only likely to know a tiny fraction of other users, we must have methods for inferring trust - and distrust - between users who do not know one another. In this paper, we present a new method for computing both trust and distrust (i.e., positive and negative trust). We do this by combining an inference algorithm that relies on a probabilistic interpretation of trust based on random graphs with a modified spring-embedding algorithm. Our algorithm correctly classifies hidden trust edges as positive or negative with high accuracy. These results are useful in a wide range of social web applications where trust is important to user behavior and satisfaction.",
"title": ""
},
{
"docid": "b12d3dfe42e5b7ee06821be7dcd11ab9",
"text": "Social media is a place where users present themselves to the world, revealing personal details and insights into their lives. We are beginning to understand how some of this information can be utilized to improve the users' experiences with interfaces and with one another. In this paper, we are interested in the personality of users. Personality has been shown to be relevant to many types of interactions, it has been shown to be useful in predicting job satisfaction, professional and romantic relationship success, and even preference for different interfaces. Until now, to accurately gauge users' personalities, they needed to take a personality test. This made it impractical to use personality analysis in many social media domains. In this paper, we present a method by which a user's personality can be accurately predicted through the publicly available information on their Twitter profile. We will describe the type of data collected, our methods of analysis, and the machine learning techniques that allow us to successfully predict personality. We then discuss the implications this has for social media design, interface design, and broader domains.",
"title": ""
}
] |
[
{
"docid": "5e0110f6ae9698e8dd92aad22f1d9fcf",
"text": "Social networking sites (SNS) are especially attractive for adolescents, but it has also been shown that these users can suffer from negative psychological consequences when using these sites excessively. We analyze the role of fear of missing out (FOMO) and intensity of SNS use for explaining the link between psychopathological symptoms and negative consequences of SNS use via mobile devices. In an online survey, 1468 Spanish-speaking Latin-American social media users between 16 and 18 years old completed the Hospital Anxiety and Depression Scale (HADS), the Social Networking Intensity scale (SNI), the FOMO scale (FOMOs), and a questionnaire on negative consequences of using SNS via mobile device (CERM). Using structural equation modeling, it was found that both FOMO and SNI mediate the link between psychopathology and CERM, but by different mechanisms. Additionally, for girls, feeling depressed seems to trigger higher SNS involvement. For boys, anxiety triggers higher SNS involvement.",
"title": ""
},
{
"docid": "0441fb016923cd0b7676d3219951c230",
"text": "Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet [15, 16], ResNeXt [33], SE-Net [18] and DPN [9], for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.",
"title": ""
},
{
"docid": "48623054af5217d48b05aed57a67ae66",
"text": "This paper proposes an ontology-based approach to analyzing and assessing the security posture for software products. It provides measurements of trust for a software product based on its security requirements and evidence of assurance, which are retrieved from an ontology built for vulnerability management. Our approach differentiates with the previous work in the following aspects: (1) It is a holistic approach emphasizing that the system assurance cannot be determined or explained by its component assurance alone. Instead, the software system as a whole determines its assurance level. (2) Our approach is based on widely accepted standards such as CVSS, CVE, CWE, CPE, and CAPEC. Our ontology integrated these standards seamlessly thus provides a solid foundation for security assessment. (3) Automated tools have been built to support our approach, delivering the environmental scores for software products.",
"title": ""
},
{
"docid": "0c0388754f2964f1db05df3b62cd7389",
"text": "Considerable research has been devoted to utilizing multimodal features for better understanding multimedia data. However, two core research issues have not yet been adequately addressed. First, given a set of features extracted from multiple media sources (e.g., extracted from the visual, audio, and caption track of videos), how do we determine the best modalities? Second, once a set of modalities has been identified, how do we best fuse them to map to semantics? In this paper, we propose a two-step approach. The first step finds <i>statistically independent modalities</i> from raw features. In the second step, we use <i>super-kernel fusion</i> to determine the optimal combination of individual modalities. We carefully analyze the tradeoffs between three design factors that affect fusion performance: <i>modality independence</i>, <i>curse of dimensionality</i>, and <i>fusion-model complexity</i>. Through analytical and empirical studies, we demonstrate that our two-step approach, which achieves a careful balance of the three design factors, can improve class-prediction accuracy over traditional techniques.",
"title": ""
},
{
"docid": "8dd6a3cbe9ddb4c50beb83355db5aa5a",
"text": "Fuzzy logic controllers have gained popularity in the past few decades with highly successful implementation in many fields. Fuzzy logic enables designers to control complex systems more effectively than traditional methods. Teaching students fuzzy logic in a laboratory can be a time-consuming and an expensive task. This paper presents a low-cost educational microcontroller-based tool for fuzzy logic controlled line following mobile robot. The robot is used in the second year of undergraduate teaching in an elective course in the department of computer engineering of the Near East University. Hardware details of the robot and the software implementing the fuzzy logic control algorithm are given in the paper. 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20347",
"title": ""
},
{
"docid": "0da4b25ce3d4449147f7258d0189165f",
"text": "We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1% without a dictionary or an external language model and 10.3% with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0% on the same set.",
"title": ""
},
{
"docid": "62e445cabbb5c79375f35d7b93f9a30d",
"text": "The recent outbreak of indie games has popularized volumetric terrains to a new level, although video games have used them for decades. These terrains contain geological data, such as materials or cave systems. To improve the exploration experience and due to the large amount of data needed to construct volumetric terrains, industry uses procedural methods to generate them. However, they use their own methods, which are focused on their specific problem domains, lacking customization features. Besides, the evaluation of the procedural terrain generators remains an open issue in this field since no standard metrics have been established yet. In this paper, we propose a new approach to procedural volumetric terrains. It generates completely customizable volumetric terrains with layered materials and other features (e.g., mineral veins, underground caves, material mixtures and underground material flow). The method allows the designer to specify the characteristics of the terrain using intuitive parameters. Additionally, it uses a specific representation for the terrain based on stacked material structures, reducing memory requirements. To overcome the problem in the evaluation of the generators, we propose a new set of metrics for the generated content.",
"title": ""
},
{
"docid": "3afea784f4a9eb635d444a503266d7cd",
"text": "Gallium nitride high-electron mobility transistors (GaN HEMTs) have attractive properties, low on-resistances and fast switching speeds. This paper presents the characteristics of a normally-on GaN HEMT that we fabricated. Further, the circuit operation of a Class-E amplifier is analyzed. Experimental results demonstrate the excellent performance of the gate drive circuit for the normally-on GaN HEMT and the 13.56MHz radio frequency (RF) power amplifier.",
"title": ""
},
{
"docid": "61c3f890943c34736564680dca3aae4a",
"text": "Secondary nocturnal enuresis accounts for about one quarter of patients with bed-wetting. Although a psychological cause is responsible in some children, various other causes are possible and should be considered. This article reviews the epidemiology, psychological and social impact, causes, investigation, management, and prognosis of secondary nocturnal enuresis.",
"title": ""
},
{
"docid": "a2246533e2973193586e2a3c8e672c10",
"text": "Krill Herd (KH) optimization algorithm was recently proposed based on herding behavior of krill individuals in the nature for solving optimization problems. In this paper, we develop Standard Krill Herd (SKH) algorithm and propose Fuzzy Krill Herd (FKH) optimization algorithm which is able to dynamically adjust the participation amount of exploration and exploitation by looking the progress of solving the problem in each step. In order to evaluate the proposed FKH algorithm, we utilize some standard benchmark functions and also Inventory Control Problem. Experimental results indicate the superiority of our proposed FKH optimization algorithm in comparison with the standard KH optimization algorithm.",
"title": ""
},
{
"docid": "991a388d1159667a5b2494ded71c5abe",
"text": "Organizations around the world have called for the responsible development of nanotechnology. The goals of this approach are to emphasize the importance of considering and controlling the potential adverse impacts of nanotechnology in order to develop its capabilities and benefits. A primary area of concern is the potential adverse impact on workers, since they are the first people in society who are exposed to the potential hazards of nanotechnology. Occupational safety and health criteria for defining what constitutes responsible development of nanotechnology are needed. This article presents five criterion actions that should be practiced by decision-makers at the business and societal levels-if nanotechnology is to be developed responsibly. These include (1) anticipate, identify, and track potentially hazardous nanomaterials in the workplace; (2) assess workers' exposures to nanomaterials; (3) assess and communicate hazards and risks to workers; (4) manage occupational safety and health risks; and (5) foster the safe development of nanotechnology and realization of its societal and commercial benefits. All these criteria are necessary for responsible development to occur. Since it is early in the commercialization of nanotechnology, there are still many unknowns and concerns about nanomaterials. Therefore, it is prudent to treat them as potentially hazardous until sufficient toxicology, and exposure data are gathered for nanomaterial-specific hazard and risk assessments. In this emergent period, it is necessary to be clear about the extent of uncertainty and the need for prudent actions.",
"title": ""
},
{
"docid": "b83eb2f78c4b48cf9b1ca07872d6ea1a",
"text": "Network Function Virtualization (NFV) is emerging as one of the most innovative concepts in the networking landscape. By migrating network functions from dedicated mid-dleboxes to general purpose computing platforms, NFV can effectively reduce the cost to deploy and to operate large networks. However, in order to achieve its full potential, NFV needs to encompass also the radio access network allowing Mobile Virtual Network Operators to deploy custom resource allocation solutions within their virtual radio nodes. Such requirement raises several challenges in terms of performance isolation and resource provisioning. In this work we formalize the Virtual Network Function (VNF) placement problem for radio access networks as an integer linear programming problem and we propose a VNF placement heuristic. Moreover, we also present a proof-of-concept implementation of an NFV management and orchestration framework for Enterprise WLANs. The proposed architecture builds upon a programmable network fabric where pure forwarding nodes are mixed with radio and packet processing nodes leveraging on general computing platforms.",
"title": ""
},
{
"docid": "7697aa5665f4699f2000779db2b0d24f",
"text": "The majority of smart devices used nowadays (e.g., smartphones, laptops, tablets) is capable of both Wi-Fi and Bluetooth wireless communications. Both network interfaces are identified by a unique 48-bits MAC address, assigned during the manufacturing process and unique worldwide. Such addresses, fundamental for link-layer communications and contained in every frame transmitted by the device, can be easily collected through packet sniffing and later used to perform higher level analysis tasks (user tracking, crowd density estimation, etc.). In this work we propose a system to pair the Wi-Fi and Bluetooth MAC addresses belonging to a physical unique device, starting from packets captured through a network of wireless sniffers. We propose several algorithms to perform such a pairing and we evaluate their performance through experiments in a controlled scenario. We show that the proposed algorithms can pair the MAC addresses with good accuracy. The findings of this paper may be useful to improve the precision of indoor localization and crowd density estimation systems and open some questions on the privacy issues of Wi-Fi and Bluetooth enabled devices.",
"title": ""
},
{
"docid": "adf57fe7ec7ab1481561f7664110a1e8",
"text": "This paper presents a scalable 28-GHz phased-array architecture suitable for fifth-generation (5G) communication links based on four-channel ( $2\\times 2$ ) transmit/receive (TRX) quad-core chips in SiGe BiCMOS with flip-chip packaging. Each channel of the quad-core beamformer chip has 4.6-dB noise figure (NF) in the receive (RX) mode and 10.5-dBm output 1-dB compression point (OP1dB) in the transmit (TX) mode with 6-bit phase control and 14-dB gain control. The phase change with gain control is only ±3°, allowing orthogonality between the variable gain amplifier and the phase shifter. The chip has high RX linearity (IP1dB = −22 dBm/channel) and consumes 130 mW in the RX mode and 200 mW in the TX mode at P1dB per channel. Advantages of the scalable all-RF beamforming architecture and circuit design techniques are discussed in detail. 4- and 32-element phased-arrays are demonstrated with detailed data link measurements using a single or eight of the four-channel TRX core chips on a low-cost printed circuit board with microstrip antennas. The 32-element array achieves an effective isotropic radiated power (EIRP) of 43 dBm at P1dB, a 45-dBm saturated EIRP, and a record-level system NF of 5.2 dB when the beamformer loss and transceiver NF are taken into account and can scan to ±50° in azimuth and ±25° in elevation with < −12-dB sidelobes and without any phase or amplitude calibration. A wireless link is demonstrated using two 32-element phased-arrays with a state-of-the-art data rate of 1.0–1.6 Gb/s in a single beam using 16-QAM waveforms over all scan angles at a link distance of 300 m.",
"title": ""
},
{
"docid": "0496af98bbef3d4d6f5e7a67e9ef5508",
"text": "Cancer is second only to heart disease as a cause of death in the US, with a further negative economic impact on society. Over the past decade, details have emerged which suggest that different glycosylphosphatidylinositol (GPI)-anchored proteins are fundamentally involved in a range of cancers. This post-translational glycolipid modification is introduced into proteins via the action of the enzyme GPI transamidase (GPI-T). In 2004, PIG-U, one of the subunits of GPI-T, was identified as an oncogene in bladder cancer, offering a direct connection between GPI-T and cancer. GPI-T is a membrane-bound, multi-subunit enzyme that is poorly understood, due to its structural complexity and membrane solubility. This review is divided into three sections. First, we describe our current understanding of GPI-T, including what is known about each subunit and their roles in the GPI-T reaction. Next, we review the literature connecting GPI-T to different cancers with an emphasis on the variations in GPI-T subunit over-expression. Finally, we discuss some of the GPI-anchored proteins known to be involved in cancer onset and progression and that serve as potential biomarkers for disease-selective therapies. Given that functions for only one of GPI-T's subunits have been robustly assigned, the separation between healthy and malignant GPI-T activity is poorly defined.",
"title": ""
},
{
"docid": "e5f2e7b7dfdfaee33a2187a0a7183cfb",
"text": "BACKGROUND\nPossible associations between television viewing and video game playing and children's aggression have become public health concerns. We did a systematic review of studies that examined such associations, focussing on children and young people with behavioural and emotional difficulties, who are thought to be more susceptible.\n\n\nMETHODS\nWe did computer-assisted searches of health and social science databases, gateways, publications from relevant organizations and for grey literature; scanned bibliographies; hand-searched key journals; and corresponded with authors. We critically appraised all studies.\n\n\nRESULTS\nA total of 12 studies: three experiments with children with behavioural and emotional difficulties found increased aggression after watching aggressive as opposed to low-aggressive content television programmes, one found the opposite and two no clear effect, one found such children no more likely than controls to imitate aggressive television characters. One case-control study and one survey found that children and young people with behavioural and emotional difficulties watched more television than controls; another did not. Two studies found that children and young people with behavioural and emotional difficulties viewed more hours of aggressive television programmes than controls. One study on video game use found that young people with behavioural and emotional difficulties viewed more minutes of violence and played longer than controls. In a qualitative study children with behavioural and emotional difficulties, but not their parents, did not associate watching television with aggression. All studies had significant methodological flaws. None was based on power calculations.\n\n\nCONCLUSION\nThis systematic review found insufficient, contradictory and methodologically flawed evidence on the association between television viewing and video game playing and aggression in children and young people with behavioural and emotional difficulties. If public health advice is to be evidence-based, good quality research is needed.",
"title": ""
},
{
"docid": "d60f812bb8036a2220dab8740f6a74c4",
"text": "UNLABELLED\nThe limit of the Colletotrichum gloeosporioides species complex is defined genetically, based on a strongly supported clade within the Colletotrichum ITS gene tree. All taxa accepted within this clade are morphologically more or less typical of the broadly defined C. gloeosporioides, as it has been applied in the literature for the past 50 years. We accept 22 species plus one subspecies within the C. gloeosporioides complex. These include C. asianum, C. cordylinicola, C. fructicola, C. gloeosporioides, C. horii, C. kahawae subsp. kahawae, C. musae, C. nupharicola, C. psidii, C. siamense, C. theobromicola, C. tropicale, and C. xanthorrhoeae, along with the taxa described here as new, C. aenigma, C. aeschynomenes, C. alatae, C. alienum, C. aotearoa, C. clidemiae, C. kahawae subsp. ciggaro, C. salsolae, and C. ti, plus the nom. nov. C. queenslandicum (for C. gloeosporioides var. minus). All of the taxa are defined genetically on the basis of multi-gene phylogenies. Brief morphological descriptions are provided for species where no modern description is available. Many of the species are unable to be reliably distinguished using ITS, the official barcoding gene for fungi. Particularly problematic are a set of species genetically close to C. musae and another set of species genetically close to C. kahawae, referred to here as the Musae clade and the Kahawae clade, respectively. Each clade contains several species that are phylogenetically well supported in multi-gene analyses, but within the clades branch lengths are short because of the small number of phylogenetically informative characters, and in a few cases individual gene trees are incongruent. Some single genes or combinations of genes, such as glyceraldehyde-3-phosphate dehydrogenase and glutamine synthetase, can be used to reliably distinguish most taxa and will need to be developed as secondary barcodes for species level identification, which is important because many of these fungi are of biosecurity significance. In addition to the accepted species, notes are provided for names where a possible close relationship with C. gloeosporioides sensu lato has been suggested in the recent literature, along with all subspecific taxa and formae speciales within C. gloeosporioides and its putative teleomorph Glomerella cingulata.\n\n\nTAXONOMIC NOVELTIES\nName replacement - C. queenslandicum B. Weir & P.R. Johnst. New species - C. aenigma B. Weir & P.R. Johnst., C. aeschynomenes B. Weir & P.R. Johnst., C. alatae B. Weir & P.R. Johnst., C. alienum B. Weir & P.R. Johnst, C. aotearoa B. Weir & P.R. Johnst., C. clidemiae B. Weir & P.R. Johnst., C. salsolae B. Weir & P.R. Johnst., C. ti B. Weir & P.R. Johnst. New subspecies - C. kahawae subsp. ciggaro B. Weir & P.R. Johnst. Typification: Epitypification - C. queenslandicum B. Weir & P.R. Johnst.",
"title": ""
},
{
"docid": "32bb9f12da68d89a897c8fc7937c0a7d",
"text": "In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing request of creating immersive game experiences. The Microsoft Kinect sensor allows acquiring RGB, IR and depth images with a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of Kinect for Xbox One imaging sensors, focusing on the depth camera. The mathematical model that describes the error committed by the sensor as a function of the distance between the sensor itself and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize every single imaging sensor. Experimental results show that the quality of the delivered model improved applying the proposed calibration procedure, which is applicable to both point clouds and the mesh model created with the Microsoft Fusion Libraries.",
"title": ""
},
{
"docid": "549d486d6ff362bc016c6ce449e29dc9",
"text": "Aging is very often associated with magnesium (Mg) deficit. Total plasma magnesium concentrations are remarkably constant in healthy subjects throughout life, while total body Mg and Mg in the intracellular compartment tend to decrease with age. Dietary Mg deficiencies are common in the elderly population. Other frequent causes of Mg deficits in the elderly include reduced Mg intestinal absorption, reduced Mg bone stores, and excess urinary loss. Secondary Mg deficit in aging may result from different conditions and diseases often observed in the elderly (i.e. insulin resistance and/or type 2 diabetes mellitus) and drugs (i.e. use of hypermagnesuric diuretics). Chronic Mg deficits have been linked to an increased risk of numerous preclinical and clinical outcomes, mostly observed in the elderly population, including hypertension, stroke, atherosclerosis, ischemic heart disease, cardiac arrhythmias, glucose intolerance, insulin resistance, type 2 diabetes mellitus, endothelial dysfunction, vascular remodeling, alterations in lipid metabolism, platelet aggregation/thrombosis, inflammation, oxidative stress, cardiovascular mortality, asthma, chronic fatigue, as well as depression and other neuropsychiatric disorders. Both aging and Mg deficiency have been associated to excessive production of oxygen-derived free radicals and low-grade inflammation. Chronic inflammation and oxidative stress are also present in several age-related diseases, such as many vascular and metabolic conditions, as well as frailty, muscle loss and sarcopenia, and altered immune responses, among others. Mg deficit associated to aging may be at least one of the pathophysiological links that may help to explain the interactions between inflammation and oxidative stress with the aging process and many age-related diseases.",
"title": ""
},
{
"docid": "939f05a2265c6ab21b273a8127806279",
"text": "Acne is a common inflammatory disease. Scarring is an unwanted end point of acne. Both atrophic and hypertrophic scar types occur. Soft-tissue augmentation aims to improve atrophic scars. In this review, we will focus on the use of dermal fillers for acne scar improvement. Therefore, various filler types are characterized, and available data on their use in acne scar improvement are analyzed.",
"title": ""
}
] |
scidocsrr
|
534d8debd1364fafb2acd2fe01e62619
|
Cost-Efficient Strategies for Restraining Rumor Spreading in Mobile Social Networks
|
[
{
"docid": "d056e5ea017eb3e5609dcc978e589158",
"text": "In this paper we study and evaluate rumor-like methods for combating the spread of rumors on a social network. We model rumor spread as a diffusion process on a network and suggest the use of an \"anti-rumor\" process similar to the rumor process. We study two natural models by which these anti-rumors may arise. The main metrics we study are the belief time, i.e., the duration for which a person believes the rumor to be true and point of decline, i.e., point after which anti-rumor process dominates the rumor process. We evaluate our methods by simulating rumor spread and anti-rumor spread on a data set derived from the social networking site Twitter and on a synthetic network generated according to the Watts and Strogatz model. We find that the lifetime of a rumor increases if the delay in detecting it increases, and the relationship is at least linear. Further our findings show that coupling the detection and anti-rumor strategy by embedding agents in the network, we call them beacons, is an effective means of fighting the spread of rumor, even if these beacons do not share information.",
"title": ""
}
] |
[
{
"docid": "37e644b7b2d47e6830e30ae191bc453c",
"text": "Technological forecasting is now poised to respond to the emerging needs of private and public sector organizations in the highly competitive global environment. The history of the subject and its variant forms, including impact assessment, national foresight studies, roadmapping, and competitive technological intelligence, shows how it has responded to changing institutional motivations. Renewed focus on innovation, attention to science-based opportunities, and broad social and political factors will bring renewed attention to technological forecasting in industry, government, and academia. Promising new tools are anticipated, borrowing variously from fields such as political science, computer science, scientometrics, innovation management, and complexity science. 2001 Elsevier Science Inc. Introduction Technological forecasting—its purpose, methods, terminology, and uses—will be shaped in the future, as in the past, by the needs of corporations and government agencies.1 These have a continual pressing need to anticipate and cope with the direction and rate of technological change. The future of technological forecasting will also depend on the views of the public and their elected representatives about technological progress, economic competition, and the government’s role in technological development. In the context of this article, “technological forecasting” (TF) includes several new forms—for example, national foresight studies, roadmapping, and competitive technological intelligence—that have evolved to meet the changing demands of user institutions. It also encompasses technology assessment (TA) or social impact analysis, which emphasizes the downstream effects of technology’s invention, innovation, and evolution. VARY COATES is associated with the Institute for Technology Assessment, Washington, DC. MAHMUD FAROQUE is with George Mason University, Fairfax, VA. RICHARD KLAVANS is with CRP, Philadelphia, PA. KOTY LAPID is with Softblock, Beer Sheba, Israel. HAROLD LINSTONE is with Portland State University. CARL PISTORIUS is with the University of Pretoria, South Africa. ALAN PORTER is with the Georgia Institute of Technology, Atlanta, GA. We also thank Joseph Coates and Joseph Martino for helpful critiques. 1 The term “technological forecasting” is used in this article to apply to all purposeful and systematic attempts to anticipate and understand the potential direction, rate, characteristics, and effects of technological change, especially invention, innovation, adoption, and use. No distinction is intended between “technological forecasting” “technology forecasting,” or “technology foresight,” except as specifically described in the text. Technological Forecasting and Social Change 67, 1–17 (2001) 2001 Elsevier Science Inc. All rights reserved. 0040-1625/01/$–see front matter 655 Avenue of the Americas, New York, NY 10010 PII S0040-1625(00)00122-0",
"title": ""
},
{
"docid": "fdbb5f67eb2f9b651c0d2e1cf8077923",
"text": "The periodical maintenance of railway systems is very important in terms of maintaining safe and comfortable transportation. In particular, the monitoring and diagnosis of faults in the pantograph catenary system are required to provide a transmission from the catenary line to the electric energy locomotive. Surface wear that is caused by the interaction between the pantograph and catenary and nonuniform distribution on the surface of a pantograph of the contact points can cause serious accidents. In this paper, a novel approach is proposed for image processing-based monitoring and fault diagnosis in terms of the interaction and contact points between the pantograph and catenary in a moving train. For this purpose, the proposed method consists of two stages. In the first stage, the pantograph catenary interaction has been modeled; the simulation results were given a failure analysis with a variety of scenarios. In the second stage, the contact points between the pantograph and catenary were detected and implemented in real time with image processing algorithms using actual video images. The pantograph surface for a fault analysis was divided into three regions: safe, dangerous, and fault. The fault analysis of the system was presented using the number of contact points in each region. The experimental results demonstrate the effectiveness, applicability, and performance of the proposed approach.",
"title": ""
},
{
"docid": "7c5ce3005c4529e0c34220c538412a26",
"text": "Six studies investigate whether and how distant future time perspective facilitates abstract thinking and impedes concrete thinking by altering the level at which mental representations are construed. In Experiments 1-3, participants who envisioned their lives and imagined themselves engaging in a task 1 year later as opposed to the next day subsequently performed better on a series of insight tasks. In Experiments 4 and 5 a distal perspective was found to improve creative generation of abstract solutions. Moreover, Experiment 5 demonstrated a similar effect with temporal distance manipulated indirectly, by making participants imagine their lives in general a year from now versus tomorrow prior to performance. In Experiment 6, distant time perspective undermined rather than enhanced analytical problem solving.",
"title": ""
},
{
"docid": "062d366387e6161ba6faadc32c53e820",
"text": "Image processing has been proved to be effective tool for analysis in various fields and applications. Agriculture sector where the parameters like canopy, yield, quality of product were the important measures from the farmers' point of view. Many times expert advice may not be affordable, majority times the availability of expert and their services may consume time. Image processing along with availability of communication network can change the situation of getting the expert advice well within time and at affordable cost since image processing was the effective tool for analysis of parameters. This paper intends to focus on the survey of application of image processing in agriculture field such as imaging techniques, weed detection and fruit grading. The analysis of the parameters has proved to be accurate and less time consuming as compared to traditional methods. Application of image processing can improve decision making for vegetation measurement, irrigation, fruit sorting, etc.",
"title": ""
},
{
"docid": "7dfbb5e01383b5f50dbeb87d55ceb719",
"text": "In recent years, a number of network forensics techniques have been proposed to investigate the increasing number of cybercrimes. Network forensics techniques assist in tracking internal and external network attacks by focusing on inherent network vulnerabilities and communication mechanisms. However, investigation of cybercrime becomes more challenging when cyber criminals erase the traces in order to avoid detection. Therefore, network forensics techniques employ mechanisms to facilitate investigation by recording every single packet and event that is disseminated into the network. As a result, it allows identification of the origin of the attack through reconstruction of the recorded data. In the current literature, network forensics techniques are studied on the basis of forensic tools, process models and framework implementations. However, a comprehensive study of cybercrime investigation using network forensics frameworks along with a critical review of present network forensics techniques is lacking. In other words, our study is motivated by the diversity of digital evidence and the difficulty of addressing numerous attacks in the network using network forensics techniques. Therefore, this paper reviews the fundamental mechanism of network forensics techniques to determine how network attacks are identified in the network. Through an extensive review of related literature, a thematic taxonomy is proposed for the classification of current network forensics techniques based on its implementation as well as target data sets involved in the conducting of forensic investigations. The critical aspects and significant features of the current network forensics techniques are investigated using qualitative analysis technique. We derive significant parameters from the literature for discussing the similarities and differences in existing network forensics techniques. The parameters include framework nature, mechanism, target dataset, target instance, forensic processing, time of investigation, execution definition, and objective function. Finally, open research challenges are discussed in network forensics to assist researchers in selecting the appropriate domains for further research and obtain ideas for exploring optimal techniques for investigating cyber-crimes. & 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fcf8649ff7c2972e6ef73f837a3d3f4d",
"text": "The kitchen environment is one of the scenarios in the home where users can benefit from Ambient Assisted Living (AAL) applications. Moreover, it is the place where old people suffer from most domestic injuries. This paper presents a novel design, implementation and assessment of a Smart Kitchen which provides Ambient Assisted Living services; a smart environment that increases elderly and disabled people's autonomy in their kitchen-related activities through context and user awareness, appropriate user interaction and artificial intelligence. It is based on a modular architecture which integrates a wide variety of home technology (household appliances, sensors, user interfaces, etc.) and associated communication standards and media (power line, radio frequency, infrared and cabled). Its software architecture is based on the Open Services Gateway initiative (OSGi), which allows building a complex system composed of small modules, each one providing the specific functionalities required, and can be easily scaled to meet our needs. The system has been evaluated by a large number of real users (63) and carers (31) in two living labs in Spain and UK. Results show a large potential of system functionalities combined with good usability and physical, sensory and cognitive accessibility.",
"title": ""
},
{
"docid": "2dfad4f4b0d69085341dfb64d6b37d54",
"text": "Modern applications and progress in deep learning research have created renewed interest for generative models of text and of images. However, even today it is unclear what objective functions one should use to train and evaluate these models. In this paper we present two contributions. Firstly, we present a critique of scheduled sampling, a state-of-the-art training method that contributed to the winning entry to the MSCOCO image captioning benchmark in 2015. Here we show that despite this impressive empirical performance, the objective function underlying scheduled sampling is improper and leads to an inconsistent learning algorithm. Secondly, we revisit the problems that scheduled sampling was meant to address, and present an alternative interpretation. We argue that maximum likelihood is an inappropriate training objective when the end-goal is to generate natural-looking samples. We go on to derive an ideal objective function to use in this situation instead. We introduce a generalisation of adversarial training, and show how such method can interpolate between maximum likelihood training and our ideal training objective. To our knowledge this is the first theoretical analysis that explains why adversarial training tends to produce samples with higher perceived quality.",
"title": ""
},
{
"docid": "3654827519075eac6bfe5ee442c6d4b2",
"text": "We examined the relations among phonological awareness, music perception skills, and early reading skills in a population of 100 4- and 5-year-old children. Music skills were found to correlate significantly with both phonological awareness and reading development. Regression analyses indicated that music perception skills contributed unique variance in predicting reading ability, even when variance due to phonological awareness and other cognitive abilities (math, digit span, and vocabulary) had been accounted for. Thus, music perception appears to tap auditory mechanisms related to reading that only partially overlap with those related to phonological awareness, suggesting that both linguistic and nonlinguistic general auditory mechanisms are involved in reading.",
"title": ""
},
{
"docid": "7843fb4bbf2e94a30c18b359076899ab",
"text": "In the area of magnetic resonance imaging (MRI), an extensive range of non-linear reconstruction algorithms has been proposed which can be used with general Fourier subsampling patterns. However, the design of these subsampling patterns has typically been considered in isolation from the reconstruction rule and the anatomy under consideration. In this paper, we propose a learning-based framework for optimizing MRI subsampling patterns for a specific reconstruction rule and anatomy, considering both the noiseless and noisy settings. Our learning algorithm has access to a representative set of training signals, and searches for a sampling pattern that performs well on average for the signals in this set. We present a novel parameter-free greedy mask selection method and show it to be effective for a variety of reconstruction rules and performance metrics. Moreover, we also support our numerical findings by providing a rigorous justification of our framework via statistical learning theory.",
"title": ""
},
{
"docid": "7e127a6f25e932a67f333679b0d99567",
"text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.",
"title": ""
},
{
"docid": "627aee14031293785224efdb7bac69f0",
"text": "Data on characteristics of metal-oxide surge arresters indicates that for fast front surges, those with rise times less than 8μs, the peak of the voltage wave occurs before the peak of the current wave and the residual voltage across the arrester increases as the time to crest of the arrester discharge current decreases. Several models have been proposed to simulate this frequency-dependent characteristic. These models differ in the calculation and adjustment of their parameters. In the present paper, a simulation of metal oxide surge arrester (MOSA) dynamic behavior during fast electromagnetic transients on power systems is done. Some models proposed in the literature are used. The simulations are performed with the Alternative Transients Program (ATP) version of Electromagnetic Transient Program (EMTP) to evaluate some metal oxide surge arrester models and verify their accuracy.",
"title": ""
},
{
"docid": "94b84ed0bb69b6c4fc7a268176146eea",
"text": "We consider the problem of representing image matrices with a set of basis functions. One common solution for that problem is to first transform the 2D image matrices into 1D image vectors and then to represent those 1D image vectors with eigenvectors, as done in classical principal component analysis. In this paper, we adopt a natural representation for the 2D image matrices using eigenimages, which are 2D matrices with the same size of original images and can be directly computed from original 2D image matrices. We discuss how to compute those eigenimages effectively. Experimental result on ORL image database shows the advantages of eigenimages method in representing the 2D images.",
"title": ""
},
{
"docid": "2e5ce96ba3c503704a9152ae667c24ec",
"text": "We use methods of classical and quantum mechanics for mathematical modeling of price dynamics at the financial market. The Hamiltonian formalism on the price/price-change phase space is used to describe the classical-like evolution of prices. This classical dynamics of prices is determined by ”hard” conditions (natural resources, industrial production, services and so on). These conditions as well as ”hard” relations between traders at the financial market are mathematically described by the classical financial potential. At the real financial market ”hard” conditions are not the only source of price changes. The information exchange and market psychology play important (and sometimes determining) role in price dynamics. We propose to describe this ”soft” financial factors by using the pilot wave (Bohmian) model of quantum mechanics. The theory of financial mental (or psychological) waves is used to take into account market psychology. The real trajectories of prices are determined (by the financial analogue of the second Newton law) by two financial potentials: classical-like (”hard” market conditions) and quantum-like (”soft” market conditions).",
"title": ""
},
{
"docid": "fa42192f3ffd08332e35b98019e622ff",
"text": "Human immunodeficiency virus 1 (HIV-1) and other retroviruses synthesize a DNA copy of their genome after entry into the host cell. Integration of this DNA into the host cell's genome is an essential step in the viral replication cycle. The viral DNA is synthesized in the cytoplasm and is associated with viral and cellular proteins in a large nucleoprotein complex. Before integration into the host genome can occur, this complex must be transported to the nucleus and must cross the nuclear envelope. This Review summarizes our current knowledge of how this journey is accomplished.",
"title": ""
},
{
"docid": "9b17dd1fc2c7082fa8daecd850fab91c",
"text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was design as an embedded control. It has a data base of the angles of orientation horizontal axle, therefore it has no sensor inlet signal and it function as an open loop control system. Combined of above mention characteristics in one the tracker system is a new technique of the active type. It is also a rotational robot of 1 degree of freedom.",
"title": ""
},
{
"docid": "a757624e5fd2d4a364f484d55a430702",
"text": "The main challenge in P2P computing is to design and implement a robust and scalable distributed system composed of inexpensive, individually unreliable computers in unrelated administrative domains. The participants in a typical P2P system might include computers at homes, schools, and businesses, and can grow to several million concurrent participants.",
"title": ""
},
{
"docid": "6149a6aaa9c39a1e02ab8fbe64fcb62b",
"text": "The thoracic diaphragm is a dome-shaped septum, composed of muscle surrounding a central tendon, which separates the thoracic and abdominal cavities. The function of the diaphragm is to expand the chest cavity during inspiration and to promote occlusion of the gastroesophageal junction. This article provides an overview of the normal anatomy of the diaphragm.",
"title": ""
},
{
"docid": "6524efda795834105bae7d65caf15c53",
"text": "PURPOSE\nThis paper examines respondents' relationship with work following a stroke and explores their experiences including the perceived barriers to and facilitators of a return to employment.\n\n\nMETHOD\nOur qualitative study explored the experiences and recovery of 43 individuals under 60 years who had survived a stroke. Participants, who had experienced a first stroke less than three months before and who could engage in in-depth interviews, were recruited through three stroke services in South East England. Each participant was invited to take part in four interviews over an 18-month period and to complete a diary for one week each month during this period.\n\n\nRESULTS\nAt the time of their stroke a minority of our sample (12, 28% of the original sample) were not actively involved in the labour market and did not return to the work during the period that they were involved in the study. Of the 31 participants working at the time of the stroke, 13 had not returned to work during the period that they were involved in the study, six returned to work after three months and nine returned in under three months and in some cases virtually immediately after their stroke. The participants in our study all valued work and felt that working, especially in paid employment, was more desirable than not working. The participants who were not working at the time of their stroke or who had not returned to work during the period of the study also endorsed these views. However they felt that there were a variety of barriers and practical problems that prevented them working and in some cases had adjusted to a life without paid employment. Participants' relationship with work was influenced by barriers and facilitators. The positive valuations of work were modified by the specific context of stroke, for some participants work was a cause of stress and therefore potentially risky, for others it was a way of demonstrating recovery from stroke. The value and meaning varied between participants and this variation was related to past experience and biography. Participants who wanted to work indicated that their ability to work was influenced by the nature and extent of their residual disabilities. A small group of participants had such severe residual disabilities that managing everyday life was a challenge and that working was not a realistic prospect unless their situation changed radically. The remaining participants all reported residual disabilities. The extent to which these disabilities formed a barrier to work depended on an additional range of factors that acted as either barriers or facilitator to return to work. A flexible working environment and supportive social networks were cited as facilitators of return to paid employment.\n\n\nCONCLUSION\nParticipants in our study viewed return to work as an important indicator of recovery following a stroke. Individuals who had not returned to work felt that paid employment was desirable but they could not overcome the barriers. Individuals who returned to work recognized the barriers but had found ways of managing them.",
"title": ""
},
{
"docid": "1168c9e6ce258851b15b7e689f60e218",
"text": "Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naïve adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representation to produce competitive semantic segmentation in real-time with low memory requirement. ContextNet combines a deep network branch at low resolution that captures global context information efficiently with a shallow branch that focuses on highresolution segmentation details. We analyse our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024× 2048) resolution (23.2 fps with pipelined computations for streamed data).",
"title": ""
},
{
"docid": "71efff25f494a8b7a83099e7bdd9d9a8",
"text": "Background: Problems with intubation of the ampulla Vateri during diagnostic and therapeutic endoscopic maneuvers are a well-known feature. The ampulla Vateri was analyzed three-dimensionally to determine whether these difficulties have a structural background. Methods: Thirty-five human greater duodenal papillae were examined by light and scanning electron microscopy as well as immunohistochemically. Results: Histologically, highly vascularized finger-like mucosal folds project far into the lumen of the ampulla Vateri. The excretory ducts of seromucous glands containing many lysozyme-secreting Paneth cells open close to the base of the mucosal folds. Scanning electron microscopy revealed large mucosal folds inside the ampulla that continued into the pancreatic and bile duct, comparable to valves arranged in a row. Conclusions: Mucosal folds form pocket-like valves in the lumen of the ampulla Vateri. They allow a unidirectional flow of secretions into the duodenum and prevent reflux from the duodenum into the ampulla Vateri. Subepithelial mucous gland secretions functionally clean the valvular crypts and protect the epithelium. The arrangement of pocket-like mucosal folds may explain endoscopic difficulties experienced when attempting to penetrate the papilla of Vater during endoscopic retrograde cholangiopancreaticographic procedures.",
"title": ""
}
] |
scidocsrr
|
39617ab96f7fadab45c84dec7c02a77e
|
A Self-Powered Insole for Human Motion Recognition
|
[
{
"docid": "8e02a76799f72d86e7240384bea563fd",
"text": "We have developed the suspended-load backpack, which converts mechanical energy from the vertical movement of carried loads (weighing 20 to 38 kilograms) to electricity during normal walking [generating up to 7.4 watts, or a 300-fold increase over previous shoe devices (20 milliwatts)]. Unexpectedly, little extra metabolic energy (as compared to that expended carrying a rigid backpack) is required during electricity generation. This is probably due to a compensatory change in gait or loading regime, which reduces the metabolic power required for walking. This electricity generation can help give field scientists, explorers, and disaster-relief workers freedom from the heavy weight of replacement batteries and thereby extend their ability to operate in remote areas.",
"title": ""
}
] |
[
{
"docid": "f4401e483c519e1f2d33ee18ea23b8d7",
"text": "Cultivation of mindfulness, the nonjudgmental awareness of experiences in the present moment, produces beneficial effects on well-being and ameliorates psychiatric and stress-related symptoms. Mindfulness meditation has therefore increasingly been incorporated into psychotherapeutic interventions. Although the number of publications in the field has sharply increased over the last two decades, there is a paucity of theoretical reviews that integrate the existing literature into a comprehensive theoretical framework. In this article, we explore several components through which mindfulness meditation exerts its effects: (a) attention regulation, (b) body awareness, (c) emotion regulation (including reappraisal and exposure, extinction, and reconsolidation), and (d) change in perspective on the self. Recent empirical research, including practitioners' self-reports and experimental data, provides evidence supporting these mechanisms. Functional and structural neuroimaging studies have begun to explore the neuroscientific processes underlying these components. Evidence suggests that mindfulness practice is associated with neuroplastic changes in the anterior cingulate cortex, insula, temporo-parietal junction, fronto-limbic network, and default mode network structures. The authors suggest that the mechanisms described here work synergistically, establishing a process of enhanced self-regulation. Differentiating between these components seems useful to guide future basic research and to specifically target areas of development in the treatment of psychological disorders.",
"title": ""
},
{
"docid": "f052fae696370910cc59f48552ddd889",
"text": "Decisions involve many intangibles that need to be traded off. To do that, they have to be measured along side tangibles whose measurements must also be evaluated as to, how well, they serve the objectives of the decision maker. The Analytic Hierarchy Process (AHP) is a theory of measurement through pairwise comparisons and relies on the judgements of experts to derive priority scales. It is these scales that measure intangibles in relative terms. The comparisons are made using a scale of absolute judgements that represents, how much more, one element dominates another with respect to a given attribute. The judgements may be inconsistent, and how to measure inconsistency and improve the judgements, when possible to obtain better consistency is a concern of the AHP. The derived priority scales are synthesised by multiplying them by the priority of their parent nodes and adding for all such nodes. An illustration is included.",
"title": ""
},
{
"docid": "cce2d168e49620ead88953617cce52b0",
"text": "We analyze state-of-the-art deep learning models for three tasks: question answering on (1) images, (2) tables, and (3) passages of text. Using the notion of attribution (word importance), we find that these deep networks often ignore important question terms. Leveraging such behavior, we perturb questions to craft a variety of adversarial examples. Our strongest attacks drop the accuracy of a visual question answering model from 61.1% to 19%, and that of a tabular question answering model from 33.5% to 3.3%. Additionally, we show how attributions can strengthen attacks proposed by Jia and Liang (2017) on paragraph comprehension models. Our results demonstrate that attributions can augment standard measures of accuracy and empower investigation of model performance. When a model is accurate but for the wrong reasons, attributions can surface erroneous logic in the model that indicates inadequacies in the test data.",
"title": ""
},
{
"docid": "85f2e049dc90bf08ecb0d34899d8b3c5",
"text": "Here is little doubt that the Internet represents the spearhead of the industrial revolution. I love new technologies and gadgets that promise new and better ways of doing things. I have many such gadgets myself and I even manage to use a few of them (though not without some pain).A new piece of technology is like a new relationship, fun and exciting at first, but eventually it requires some hard work to maintain, usually in the form of time and energy. I doubt technology’s promise to improve the quality of life and I am still surprised how time-distorting and dissociating the computer and the Internet can be for me, along with the thousands of people I’ve interviewed, studied and treated in my clinical practice. It seems clear that the Internet can be used and abused in a compulsive fashion, and that there are numerous psychological factors that contribute to the Internet’s power and appeal. It appears that the very same features that drive the potency of the Net are potentially habit-forming. This study examined the self-reported Internet behavior of nearly 18,000 people who answered a survey on the ABCNEWS.com web site. Results clearly support the psychoactive nature of the Internet, and the potential for compulsive use and abuse of the Internet for certain individuals. Introduction Technology, and most especially, computers and the Internet, seem to be at best easily overused/abused, and at worst, addictive. The combination of available stimulating content, ease of access, convenience, low cost, visual stimulation, autonomy, and anonymity—all contribute to a highly psychoactive experience. By psychoactive, that is Running Head: Virtual Addiction to say mood altering, and potentially behaviorally impacting. In other words these technologies affect the manner in which we live and love. It is my contention that some of these effects are indeed less than positive, and may contribute to various negative psychological effects. The Internet and other digital technologies are only the latest in a series of “improvements” to our world which may have unintended negative effects. The experience of problems with new and unknown technologies is far from new; we have seen countless examples of newer and better things that have had unintended and unexpected deleterious effects. Remember Thalidomide, PVC/PCB’s, Atomic power, fossil fuels, even television, along with other seemingly innocuous conveniences which have been shown to be conveniently helpful, but on other levels harmful. Some of these harmful effects are obvious and tragic, while others are more subtle and insidious. Even seemingly innocuous advances such as the elevator, remote controls, credit card gas pumps, dishwashers, and drive-through everything, have all had unintended negative effects. They all save time and energy, but the energy they save may dissuade us from using our physical bodies as they were designed to be used. In short we have convenience ourselves to a sedentary lifestyle. Technology is amoral; it is not inherently good or evil, but it is impact on the manner in which we live our lives. American’s love technology and for some of us this trust and blind faith almost parallels a religious fanaticism. Perhaps most of all, we love it Running Head: Virtual Addiction because of the hope for the future it promises; it is this promise of a better today and a longer tomorrow which captivates us to attend to the call for new better things to come. 
We live in the age were computer and digital technology are always on the cusp of great things-Newer, better ways of doing things (which in some ways is true). The old becomes obsolete within a year or two. Newer is always better. Computers and the Internet purport to make our lives easier, simpler, and therefore more fulfilling, but it may not be that simple. People have become physically and psychologically dependent on many behaviors and substances for centuries. This compulsive pattern does not reflect a casual interest, but rather consists of a driven pattern of use that can frequently escalate to negatively impact our lives. The key life-areas that seem to be impacted are marriages and relationships, employment, health, and legal/financial status. The fact that substances, such as alcohol and other mood-altering drugs can create a physical and/or psychological dependence is well known and accepted. And certain behaviors such as gambling, eating, work, exercise, shopping, and sex have gained more recent acceptance with regard to their addictive potential. More recently however, there has been an acknowledgement that the compulsive performance of these behaviors may mimic the compulsive process found with drugs, alcohol and other substances. This same process appears to also be found with certain aspects of the Internet. Running Head: Virtual Addiction The Internet can and does produce clear alterations in mood; nearly 30 percent of Internet users admit to using the Net to alter their mood so as to relieve a negative mood state. In other words, they use the Internet like a drug (Greenfield, 1999). In addressing the phenomenon of Internet behavior, initial behavioral research (Young, 1996, 1998) focused on conceptual definitions of Internet use and abuse, and demonstrated similar patterns of abuse as found in compulsive gambling. There have been further recent studies on the nature and effects of the Internet. Cooper, Scherer, Boies, and Gordon (1998) examined sexuality on the Internet utilizing an extensive online survey of 9,177 Web users, and Greenfield (1999) surveyed nearly 18,000 Web users on ABCNEWS.com to examine Internet use and abuse behavior. The later study did yield some interesting trends and patterns, but also raised further areas that require clarification. There has been very little research that actually examined and measured specific behavior related to Internet use. The Carnegie Mellon University study (Kraut, Patterson, Lundmark, Kiesler, Mukopadhyay, and Scherlis, 1998) did attempt to examine and verify actual Internet use among 173 people in 73 households. This initial study did seem to demonstrate that there may be some deleterious effects from heavy Internet use, which appeared to increase some measures of social isolation and depression. What seems to be abundantly clear from the limited research to date is that we know very little about the human/Internet interface. Theoretical suppositions abound, but we are only just beginning to understand the nature and implications of Internet use and Running Head: Virtual Addiction abuse. There is an abundance of clinical, legal, and anecdotal evidence to suggest that there is something unique about being online that seems to produce a powerful impact on people. 
It is my belief that as we expand our analysis of this new and exciting area we will likely discover that there are many subcategories of Internet abuse, some of which will undoubtedly exist as concomitant disorders alongside of other addictions including sex, gambling, and compulsive shopping/spending. There are probably two types of Internet based problems: the first is defined as a primary problem where the Internet itself becomes the focus on the compulsive pattern, and secondary, where a preexisting problem (or compulsive behavior) is exacerbated via the use of the Internet. In a secondary problem, necessity is no longer the mother of invention, but rather convenience is. The Internet simply makes everything easier to acquire, and therefore that much more easily abused. The ease of access, availability, low cost, anonymity, timelessness, disinhibition, and loss of boundaries all appear to contribute to the total Internet experience. This has particular relevance when it comes to well-established forms of compulsive consumer behavior such as gambling, shopping, stock trading, and compulsive sexual behavior where traditional modalities of engaging in these behaviors pale in comparison to the speed and efficiency of the Internet. There has been considerable debate regarding the terms and definitions in describing pathological Internet behavior. Many terms have been used, including Internet abuse, Internet addiction, and compulsive Internet use. The concern over terminology Running Head: Virtual Addiction seems spurious to me, as it seems irrelevant as to what the addictive process is labeled. The underlying neurochemical changes (probably Dopamine) that occur during any pleasurable act have proven themselves to be potentially habit-forming on a brainbehavior level. The net effect is ultimately the same with regard to potential life impact, which in the case of compulsive behavior can be quite large. Any time there is a highly pleasurable human behavior that can be acquired without human interface (as can be accomplished on the Net) there seems to be greater potential for abuse. The ease of purchasing a stock, gambling, or shopping online allows for a boundless and disinhibited experience. Without the normal human interaction there is a far greater likelihood of abusive and/or compulsive behavior in these areas. Research in the field of Internet behavior is in its relative infancy. This is in part due to the fact that the depth and breadth of the Internet and World Wide Web are changing at exponential rates. With thousands of new subscribers a day and approaching (perhaps exceeding) 200 million worldwide users, the Internet represents a communications, social, and economic revolution. The Net now serves at the pinnacle of the digital industrial revolution, and with any revolution come new problems and difficulties.",
"title": ""
},
{
"docid": "a2314ce56557135146e43f0d4a02782d",
"text": "This paper proposes a carrier-based pulse width modulation (CB-PWM) method with synchronous switching technique for a Vienna rectifier. In this paper, a Vienna rectifier is one of the 3-level converter topologies. It is similar to a 3-level T-type topology the used back-to-back switches. When CB-PWM switching method is used, a Vienna rectifier is operated with six PWM signals. On the other hand, when the back-to-back switches are synchronized, PWM signals can be reduced to three from six. However, the synchronous switching method has a problem that the current distortion around zero-crossing point is worse than one of the conventional CB-PWM switching method. To improve current distortions, this paper proposes a reactive current injection technique. The performance and effectiveness of the proposed synchronous switching method are verified by simulation with a 5-kW Vienna rectifier.",
"title": ""
},
{
"docid": "faaa921bce23eeca714926acb1901447",
"text": "This paper provides an overview along with our findings of the Chinese Spelling Check shared task at NLPTEA 2017. The goal of this task is to develop a computerassisted system to automatically diagnose typing errors in traditional Chinese sentences written by students. We defined six types of errors which belong to two categories. Given a sentence, the system should detect where the errors are, and for each detected error determine its type and provide correction suggestions. We designed, constructed, and released a benchmark dataset for this task.",
"title": ""
},
{
"docid": "b6f0c5a136de9b85899814a436e7a497",
"text": "The 'ferrule effect' is a long standing, accepted concept in dentistry that is a foundation principle for the restoration of teeth that have suffered advanced structure loss. A review of the literature based on a search in PubMed was performed looking at the various components of the ferrule effect, with particular attention to some of the less explored dimensions that influence the effectiveness of the ferrule when restoring severely broken down teeth. These include the width of the ferrule, the effect of a partial ferrule, the influence of both, the type of the restored tooth and the lateral loads present as well as the well established 2 mm ferrule height rule. The literature was collaborated and a classification based on risk assessment was derived from the available evidence. The system categorises teeth according to the effectiveness of ferrule effect that can be achieved based on the remaining amount of sound tooth structure. Furthermore, risk assessment for failure can be performed so that the practitioner and patient can better understand the prognosis of restoring a particular tooth. Clinical recommendations were extrapolated and presented as guidelines so as to improve the predictability and outcome of treatment when restoring structurally compromised teeth. The evidence relating to restoring the endodontic treated tooth with extensive destruction is deficient. This article aims to rethink ferrule by looking at other aspects of this accepted concept, and proposes a paradigm shift in the way it is thought of and utilised.",
"title": ""
},
{
"docid": "0a1f6c27cd13735858e7a6686fc5c2c9",
"text": "We address the problem of learning hierarchical deep neural network policies for reinforcement learning. In contrast to methods that explicitly restrict or cripple lower layers of a hierarchy to force them to use higher-level modulating signals, each layer in our framework is trained to directly solve the task, but acquires a range of diverse strategies via a maximum entropy reinforcement learning objective. Each layer is also augmented with latent random variables, which are sampled from a prior distribution during the training of that layer. The maximum entropy objective causes these latent variables to be incorporated into the layer’s policy, and the higher level layer can directly control the behavior of the lower layer through this latent space. Furthermore, by constraining the mapping from latent variables to actions to be invertible, higher layers retain full expressivity: neither the higher layers nor the lower layers are constrained in their behavior. Our experimental evaluation demonstrates that we can improve on the performance of single-layer policies on standard benchmark tasks simply by adding additional layers, and that our method can solve more complex sparse-reward tasks by learning higher-level policies on top of high-entropy skills optimized for simple low-level objectives.",
"title": ""
},
{
"docid": "856a6fa093e0cf6e0512d83e1382d3c9",
"text": "00Month2017 CORRIGENDUM: ACMG recommendations for reporting of incidental findings in clinical exome and genome sequencing Robert C. Green MD, MPH, Jonathan S. Berg MD, PhD, Wayne W. Grody MD, PhD, Sarah S. Kalia ScM, CGC, Bruce R. Korf MD, PhD, Christa L. Martin PhD, FACMG, Amy L. McGuire JD, PhD, Robert L. Nussbaum MD, Julianne M. O’Daniel MS, CGC, Kelly E. Ormond MS, CGC, Heidi L. Rehm PhD, FACMG, Michael S. Watson PhD, FACMG, Marc S. Williams MD, FACMG & Leslie G. Biesecker MD Genet Med (2013) 15, 565–574 doi:10.1038/gim.2013.73 In the published version of this paper, on page 567, on the 16th line in the last paragraph of the left column, the abbreviation of Expected Pathogenic is incorrect. The correct sentence should read, “For the purposes of these recommendations, variants fitting these descriptions were labeled as Known Pathogenic (KP) and Expected Pathogenic (EP), respectively.”",
"title": ""
},
{
"docid": "665f109e8263b687764de476befcbab9",
"text": "In this work we analyze the behavior on a company-internal social network site to determine which interaction patterns signal closeness between colleagues. Regression analysis suggests that employee behavior on social network sites (SNSs) reveals information about both professional and personal closeness. While some factors are predictive of general closeness (e.g. content recommendations), other factors signal that employees feel personal closeness towards their colleagues, but not professional closeness (e.g. mutual profile commenting). This analysis contributes to our understanding of how SNS behavior reflects relationship multiplexity: the multiple facets of our relationships with SNS connections.",
"title": ""
},
{
"docid": "91b924c8dbb22ca4593150c5fadfd38b",
"text": "This paper investigates the power allocation problem of full-duplex cooperative non-orthogonal multiple access (FD-CNOMA) systems, in which the strong users relay data for the weak users via a full duplex relaying mode. For the purpose of fairness, our goal is to maximize the minimum achievable user rate in a NOMA user pair. More specifically, we consider the power optimization problem for two different relaying schemes, i.e., the fixed relaying power scheme and the adaptive relaying power scheme. For the fixed relaying scheme, we demonstrate that the power allocation problem is quasi-concave and a closed-form optimal solution is obtained. Then, based on the derived results of the fixed relaying scheme, the optimal power allocation policy for the adaptive relaying scheme is also obtained by transforming the optimization objective function as a univariate function of the relay transmit power $P_R$. Simulation results show that the proposed FD- CNOMA scheme with adaptive relaying can always achieve better or at least the same performance as the conventional NOMA scheme. In addition, there exists a switching point between FD-CNOMA and half- duplex cooperative NOMA.",
"title": ""
},
{
"docid": "159e040b0e74ad1b6124907c28e53daf",
"text": "People (pedestrians, drivers, passengers in public transport) use different services on small mobile gadgets on a daily basis. So far, mobile applications don't react to context changes. Running services should adapt to the changing environment and new services should be installed and deployed automatically. We propose a classification of context elements that influence the behavior of the mobile services, focusing on the challenges of the transportation domain. Malware Detection on Mobile Devices Asaf Shabtai*, Ben-Gurion University, Israel Abstract: We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. Dynamic Approximative Data Caching in Wireless Sensor Networks Nils Hoeller*, IFIS, University of Luebeck Abstract: Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations for processing queries by using adaptive caching structures are discussed. Results can be retrieved from caches that are placed nearer to the query source. As a result the communication demand is reduced and hence energy is saved by using the cached results. To verify cache coherence in networks with non-reliable communication channels, an approximate update policy is presented. A degree of result quality can be defined for a query to find the adequate cache adaptively. Gossip-based Data Fusion Framework for Radio Resource Map Jin Yang*, Ilmenau University of Technology Abstract: In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. 
In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. Dynamic Social Grouping Based Routing in a Mobile Ad-Hoc Network Roy Cabaniss*, Missouri S&T Abstract: Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. MobileSOA framework for Context-Aware Mobile Applications Aaratee Shrestha*, University of Leipzig Abstract: Mobile application development is more challenging when context-awareness is taken into account. 
This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Mobile application development is more challenging when context-awareness is taken into account. This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Performance Analysis of Secure Hierarchical Data Aggregation in Wireless Sensor Networks Vimal Kumar*, Missouri S&T Abstract: Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. A Communication Efficient Framework for Finding Outliers in Wireless Sensor Networks Dylan McDonald*, MS&T Abstract: Outlier detection is a well studied problem in various fields. The unique challenges of wireless sensor networks make this problem especially challenging. Sensors can detect outliers for a plethora of reasons and these reasons need to be inferred in real time. Here, we present a new communication technique to find outliers in a wireless sensor network. Communication is minimized through controlling sensor when sensors are allowed to communicate. At the same time, minimal assumptions are made about the nature of the data set as to ",
"title": ""
},
{
"docid": "15486c4dc2dfc0f2f5ccfc0cf6197af4",
"text": "Nostalgia is a frequently experienced complex emotion, understood by laypersons in the United Kingdom and United States of America to (a) refer prototypically to fond, self-relevant, social memories and (b) be more pleasant (e.g., happy, warm) than unpleasant (e.g., sad, regretful). This research examined whether people across cultures conceive of nostalgia in the same way. Students in 18 countries across 5 continents (N = 1,704) rated the prototypicality of 35 features of nostalgia. The samples showed high levels of agreement on the rank-order of features. In all countries, participants rated previously identified central (vs. peripheral) features as more prototypical of nostalgia, and showed greater interindividual agreement regarding central (vs. peripheral) features. Cluster analyses revealed subtle variation among groups of countries with respect to the strength of these pancultural patterns. All except African countries manifested the same factor structure of nostalgia features. Additional exemplars generated by participants in an open-ended format did not entail elaboration of the existing set of 35 features. Findings identified key points of cross-cultural agreement regarding conceptions of nostalgia, supporting the notion that nostalgia is a pancultural emotion.",
"title": ""
},
{
"docid": "dc2d5f9bfe41246ae9883aa6c0537c40",
"text": "Phosphatidylinositol 3-kinases (PI3Ks) are crucial coordinators of intracellular signalling in response to extracellular stimuli. Hyperactivation of PI3K signalling cascades is one of the most common events in human cancers. In this Review, we discuss recent advances in our knowledge of the roles of specific PI3K isoforms in normal and oncogenic signalling, the different ways in which PI3K can be upregulated, and the current state and future potential of targeting this pathway in the clinic.",
"title": ""
},
{
"docid": "5b763dbb9f06ff67e44b5d38920e92bf",
"text": "With the growing popularity of the internet, everything is available at our doorstep and convenience. The rapid increase in e-commerce applications has resulted in the increased usage of the credit card for offline and online payments. Though there are various benefits of using credit cards such as convenience, instant cash, but when it comes to security credit card holders, banks, and the merchants are affected when the card is being stolen, lost or misused without the knowledge of the cardholder (Fraud activity). Streaming analytics is a time-based processing of data and it is used to enable near real-time decision making by inspecting, correlating and analyzing the data even as it is streaming into applications and database from myriad different sources. We are making use of streaming analytics to detect and prevent the credit card fraud. Rather than singling out specific transactions, our solution analyses the historical transaction data to model a system that can detect fraudulent patterns. This model is then used to analyze transactions in real-time.",
"title": ""
},
{
"docid": "fd5b9187c6720c3408b5c2324b03905d",
"text": "Recent anchor-based deep face detectors have achieved promising performance, but they are still struggling to detect hard faces, such as small, blurred and partially occluded faces. A reason is that they treat all images and faces equally, without putting more effort on hard ones; however, many training images only contain easy faces, which are less helpful to achieve better performance on hard images. In this paper, we propose that the robustness of a face detector against hard faces can be improved by learning small faces on hard images. Our intuitions are (1) hard images are the images which contain at least one hard face, thus they facilitate training robust face detectors; (2) most hard faces are small faces and other types of hard faces can be easily converted to small faces by shrinking. We build an anchor-based deep face detector, which only output a single feature map with small anchors, to specifically learn small faces and train it by a novel hard image mining strategy. Extensive experiments have been conducted on WIDER FACE, FDDB, Pascal Faces, and AFW datasets to show the effectiveness of our method. Our method achieves APs of 95.7, 94.9 and 89.7 on easy, medium and hard WIDER FACE val dataset respectively, which surpass the previous state-of-the-arts, especially on the hard subset. Code and model are available at https://github.com/bairdzhang/smallhardface.",
"title": ""
},
{
"docid": "221b5ba25bff2522ab3ca65ffc94723f",
"text": "This paper describes the design and implementation of HERD, a key-value system designed to make the best use of an RDMA network. Unlike prior RDMA-based key-value systems, HERD focuses its design on reducing network round trips while using efficient RDMA primitives; the result is substantially lower latency, and throughput that saturates modern, commodity RDMA hardware.\n HERD has two unconventional decisions: First, it does not use RDMA reads, despite the allure of operations that bypass the remote CPU entirely. Second, it uses a mix of RDMA and messaging verbs, despite the conventional wisdom that the messaging primitives are slow. A HERD client writes its request into the server's memory; the server computes the reply. This design uses a single round trip for all requests and supports up to 26 million key-value operations per second with 5μs average latency. Notably, for small key-value items, our full system throughput is similar to native RDMA read throughput and is over 2X higher than recent RDMA-based key-value systems. We believe that HERD further serves as an effective template for the construction of RDMA-based datacenter services.",
"title": ""
},
{
"docid": "a6bc752bd6a4fc070fa01a5322fb30a1",
"text": "The formulation of a generalized area-based confusion matrix for exploring the accuracy of area estimates is presented. The generalized confusion matrix is appropriate for both traditional classi cation algorithms and sub-pixel area estimation models. An error matrix, derived from the generalized confusion matrix, allows the accuracy of maps generated using area estimation models to be assessed quantitatively and compared to the accuracies obtained from traditional classi cation techniques. The application of this approach is demonstrated for an area estimation model applied to Landsat data of an urban area of the United Kingdom.",
"title": ""
},
{
"docid": "449f984469b40fe10f7a2e0e3a359d1d",
"text": "The correlation of phenotypic outcomes with genetic variation and environmental factors is a core pursuit in biology and biomedicine. Numerous challenges impede our progress: patient phenotypes may not match known diseases, candidate variants may be in genes that have not been characterized, model organisms may not recapitulate human or veterinary diseases, filling evolutionary gaps is difficult, and many resources must be queried to find potentially significant genotype-phenotype associations. Non-human organisms have proven instrumental in revealing biological mechanisms. Advanced informatics tools can identify phenotypically relevant disease models in research and diagnostic contexts. Large-scale integration of model organism and clinical research data can provide a breadth of knowledge not available from individual sources and can provide contextualization of data back to these sources. The Monarch Initiative (monarchinitiative.org) is a collaborative, open science effort that aims to semantically integrate genotype-phenotype data from many species and sources in order to support precision medicine, disease modeling, and mechanistic exploration. Our integrated knowledge graph, analytic tools, and web services enable diverse users to explore relationships between phenotypes and genotypes across species.",
"title": ""
},
{
"docid": "e0ec608baa5af1c35672efbccbd618df",
"text": "The “similarity-attraction” effect stands as one of the most well-known findings in social psychology. However, some research contends that perceived but not actual similarity influences attraction. The current study is the first to examine the effects of actual and perceived similarity simultaneously during a face-to-face initial romantic encounter. Participants attending a speed-dating event interacted with ∼12 members of the opposite sex for 4 min each. Actual and perceived similarity for each pair were calculated from questionnaire responses assessed before the event and after each date. Data revealed that perceived, but not actual, similarity significantly predicted romantic liking in this speed-dating context. Furthermore, perceived similarity was a far weaker predictor of attraction when assessed using specific traits rather than generally. Over the past 60 years, researchers have examined thoroughly the role that similarity between partners plays in predicting interpersonal attraction. Until recently, the general consensus has been that participants report stronger attraction to objectively similar others (i.e., actual similarity) than to those with whom they share fewer traits, beliefs, and/or attitudes. The similarity-attraction effect, commonly dubbed “Byrne’s law of attraction” or “Byrne’s law of similarity,” is a central Natasha D. Tidwell, Department of Psychology, Texas A&M University; Paul W. Eastwick, Department of Psychology, Texas A&M University; Eli J. Finkel, Department of Psychology, Northwestern University. We thank Jacob Matthews for his masterful programming of the Northwestern Speed-dating Study and the Northwestern Speed-Dating Team for conducting the studies themselves. We also thank David Kenny for his assistance with the social relations model analyses. Correspondence should be addressed to Natasha D. Tidwell, Texas A&M University, Department of Psychology, 4235 TAMU, College Station, TX 778434235, e-mail: ndtidwell@gmail.com or Paul W. Eastwick, Texas A&M University, Department of Psychology, 4235 TAMU, College Station, TX 77843-4235, e-mail: eastwick@tamu.edu. feature of textbook reviews of attraction and relationship initiation.1 Research on the actual similarity-attraction effect has most frequently examined similarity of attitudes, finding that participants are more likely to become attracted to a stranger with whom they share many common attitudes than to one with whom they share few (Byrne, 1961; Byrne, Ervin, & Lamberth, 1970). Scholars have also found that actual similarity of personality traits predicts initial attraction, but the results are not as robust as those for attitude similarity (Klohnen & Luo, 2003). Furthermore, some research has suggested that actual similarity in external qualities (e.g., age, hairstyle) is more predictive of 1. Researchers have also found that actual similarity predicts satisfaction and stability in existing relationships (e.g., Gaunt, 2006; Luo et al., 2008; Luo & Klohnen, 2005), suggesting that Byrne’s law of attraction may extend well beyond initial attraction per se. Although we review prior work on similarity in both initial attraction and established relationship contexts below, the present data specifically examine the association between similarity and attraction in an initial face-toface encounter.",
"title": ""
}
] |
scidocsrr
|
4c5c9a90a4890c72422be643dbd864ce
|
Operation of Compressor and Electronic Expansion Valve via Different Controllers
|
[
{
"docid": "8b3ad3d48da22c529e65c26447265372",
"text": "It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described.",
"title": ""
}
] |
[
{
"docid": "fb11b937a3c07fd4b76cda1ed1eadc07",
"text": "Depth information plays an important role in a variety of applications, including manufacturing, medical imaging, computer vision, graphics, and virtual/augmented reality (VR/AR). Depth sensing has thus attracted sustained attention from both academia and industry communities for decades. Mainstream depth cameras can be divided into three categories: stereo, time of flight (ToF), and structured light. Stereo cameras require no active illumination and can be used outdoors, but they are fragile for homogeneous surfaces. Recently, off-the-shelf light field cameras have demonstrated improved depth estimation capability with a multiview stereo configuration. ToF cameras operate at a high frame rate and fit time-critical scenarios well, but they are susceptible to noise and limited to low resolution [3]. Structured light cameras can produce high-resolution, high-accuracy depth, provided that a number of patterns are sequentially used. Due to its promising and reliable performance, the structured light approach has been widely adopted for three-dimensional (3-D) scanning purposes. However, achieving real-time depth with structured light either requires highspeed (and thus expensive) hardware or sacrifices depth resolution and accuracy by using a single pattern instead.",
"title": ""
},
{
"docid": "af628819a5392543266668b94c579a96",
"text": "Elephantopus scaber is an ethnomedicinal plant used by the Zhuang people in Southwest China to treat headaches, colds, diarrhea, hepatitis, and bronchitis. A new δ -truxinate derivative, ethyl, methyl 3,4,3',4'-tetrahydroxy- δ -truxinate (1), was isolated from the ethyl acetate extract of the entire plant, along with 4 known compounds. The antioxidant activity of these 5 compounds was determined by ABTS radical scavenging assay. Compound 1 was also tested for its cytotoxicity effect against HepG2 by MTT assay (IC50 = 60 μ M), and its potential anti-inflammatory, antibiotic, and antitumor bioactivities were predicted using target fishing method software.",
"title": ""
},
{
"docid": "bdefc8bcd92aefe966d4fcd98ab1fdbb",
"text": "The automatic identification system (AIS) tracks vessel movement by means of electronic exchange of navigation data between vessels, with onboard transceiver, terrestrial, and/or satellite base stations. The gathered data contain a wealth of information useful for maritime safety, security, and efficiency. Because of the close relationship between data and methodology in marine data mining and the importance of both of them in marine intelligence research, this paper surveys AIS data sources and relevant aspects of navigation in which such data are or could be exploited for safety of seafaring, namely traffic anomaly detection, route estimation, collision prediction, and path planning.",
"title": ""
},
{
"docid": "f14272db4779239dc7d392ef7dfac52d",
"text": "3 The Rotating Calipers Algorithm 3 3.1 Computing the Initial Rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3.2 Updating the Rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.1 Distinct Supporting Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.2 Duplicate Supporting Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.3 Multiple Polygon Edges Attain Minimum Angle . . . . . . . . . . . . . . . . . . . . . 8 3.2.4 The General Update Step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10",
"title": ""
},
{
"docid": "19d4662287a5c3ce1cef85fa601b74ba",
"text": "This paper compares two approaches in identifying outliers in multivariate datasets; Mahalanobis distance (MD) and robust distance (RD). MD has been known suffering from masking and swamping effects and RD is an approach that was developed to overcome problems that arise in MD. There are two purposes of this paper, first is to identify outliers using MD and RD and the second is to show that RD performs better than MD in identifying outliers. An observation is classified as an outlier if MD or RD is larger than a cut-off value. Outlier generating model is used to generate a set of data and MD and RD are computed from this set of data. The results showed that RD can identify outliers better than MD. However, in non-outliers data the performance for both approaches are similar. The results for RD also showed that RD can identify multivariate outliers much better when the number of dimension is large.",
"title": ""
},
{
"docid": "33d530e8e74cd5b5dbfa2035a7608664",
"text": "This paper presents an area-efficient ultra-low-power 32 kHz clock source for low power wireless communication systems using a temperature-compensated charge-pump-based digitally controlled oscillator (DCO). A highly efficient digital calibration method is proposed to achieve frequency stability over process variation and temperature drifts. This calibration method locks the DCO's output frequency to the reference clock of the wireless communication system during its active state. The introduced calibration scheme offers high jitter immunity and short locking periods overcoming frequency calibration errors for typical ultra-low-power DCO's. The circuit area of the proposed ultra-low-power clock source is 100μm × 140μm in a 130nm RF CMOS technology. In measurements the proposed ultra-low-power clock source achieves a frequency stability of 10 ppm/°C from 10 °C to 100 °C for temperature drifts of less than 1 °C/s with 80nW power consumption.",
"title": ""
},
{
"docid": "5b16933905d36ba54ab74743251d7ca7",
"text": "The explosive growth of the user-generated content on the Web has offered a rich data source for mining opinions. However, the large number of diverse review sources challenges the individual users and organizations on how to use the opinion information effectively. Therefore, automated opinion mining and summarization techniques have become increasingly important. Different from previous approaches that have mostly treated product feature and opinion extraction as two independent tasks, we merge them together in a unified process by using probabilistic models. Specifically, we treat the problem of product feature and opinion extraction as a sequence labeling task and adopt Conditional Random Fields models to accomplish it. As part of our work, we develop a computational approach to construct domain specific sentiment lexicon by combining semi-structured reviews with general sentiment lexicon, which helps to identify the sentiment orientations of opinions. Experimental results on two real world datasets show that the proposed method is effective.",
"title": ""
},
{
"docid": "4b5c5b76d7370a82f96f36659cd63850",
"text": "For force control of robot and collision detection with humans, robots that has joint torque sensors have been developed. However, existing torque sensors cannot measure correct torque because of crosstalk error. In order to solve this problem, we proposed a novel torque sensor that can measure the pure torque without crosstalk. The hexaform of the proposed sensor with truss structure increase deformation of the sensor and restoration, and the Wheatstone bridge circuit of strain gauge removes crosstalk error. Sensor performance is verified with FEM analysis.",
"title": ""
},
{
"docid": "f09733894d94052707ed768aea8d26e6",
"text": "The aim of this paper is to investigate the rules and constraints of code-switching (CS) in Hindi-English mixed language data. In this paper, we’ll discuss how we collected the mixed language corpus. This corpus is primarily made up of student interview speech. The speech was manually transcribed and verified by bilingual speakers of Hindi and English. The code-switching cases in the corpus are discussed and the reasons for code-switching are explained.",
"title": ""
},
{
"docid": "2ca050b562ed14688dd9d68b454928e0",
"text": "Electronic waste (e-waste) is one of the fastest-growing pollution problems worldwide given the presence if a variety of toxic substances which can contaminate the environment and threaten human health, if disposal protocols are not meticulously managed. This paper presents an overview of toxic substances present in e-waste, their potential environmental and human health impacts together with management strategies currently being used in certain countries. Several tools including life cycle assessment (LCA), material flow analysis (MFA), multi criteria analysis (MCA) and extended producer responsibility (EPR) have been developed to manage e-wastes especially in developed countries. The key to success in terms of e-waste management is to develop eco-design devices, properly collect e-waste, recover and recycle material by safe methods, dispose of e-waste by suitable techniques, forbid the transfer of used electronic devices to developing countries, and raise awareness of the impact of e-waste. No single tool is adequate but together they can complement each other to solve this issue. A national scheme such as EPR is a good policy in solving the growing e-waste problems.",
"title": ""
},
{
"docid": "ced98c32f887001d40e783ab7b294e1a",
"text": "This paper proposes a two-layer High Dynamic Range (HDR) coding scheme using a new tone mapping. Our tone mapping method transforms an HDR image onto a Low Dynamic Range (LDR) image by using a base map that is a smoothed version of the HDR luminance. In our scheme, the HDR image can be reconstructed from the tone mapped LDR image. Our method makes use of this property to realize a two-layer HDR coding by encoding both of the tone mapped LDR image and the base map. This paper validates its effectiveness of our approach through some experiments.",
"title": ""
},
{
"docid": "e40a60aec86433eaac618e6b391e2a57",
"text": "Marine microalgae have been used for a long time as food for humans, such as Arthrospira (formerly, Spirulina), and for animals in aquaculture. The biomass of these microalgae and the compounds they produce have been shown to possess several biological applications with numerous health benefits. The present review puts up-to-date the research on the biological activities and applications of polysaccharides, active biocompounds synthesized by marine unicellular algae, which are, most of the times, released into the surrounding medium (exo- or extracellular polysaccharides, EPS). It goes through the most studied activities of sulphated polysaccharides (sPS) or their derivatives, but also highlights lesser known applications as hypolipidaemic or hypoglycaemic, or as biolubricant agents and drag-reducers. Therefore, the great potentials of sPS from marine microalgae to be used as nutraceuticals, therapeutic agents, cosmetics, or in other areas, such as engineering, are approached in this review.",
"title": ""
},
{
"docid": "390b0dbd01e88fec7f7a4b59cb753978",
"text": "In this paper, we propose a segmentation method based on normalized cut and superpixels. The method relies on color and texture cues for fast computation and efficient use of memory. The method is used for food image segmentation as part of a mobile food record system we have developed for dietary assessment and management. The accurate estimate of nutrients relies on correctly labelled food items and sufficiently well-segmented regions. Our method achieves competitive results using the Berkeley Segmentation Dataset and outperforms some of the most popular techniques in a food image dataset.",
"title": ""
},
{
"docid": "9e5aa162d1eecefe11abe5ecefbc11e3",
"text": "Efficient algorithms for 3D character control in continuous control setting remain an open problem in spite of the remarkable recent advances in the field. We present a sampling-based model-predictive controller that comes in the form of a Monte Carlo tree search (MCTS). The tree search utilizes information from multiple sources including two machine learning models. This allows rapid development of complex skills such as 3D humanoid locomotion with less than a million simulation steps, in less than a minute of computing on a modest personal computer. We demonstrate locomotion of 3D characters with varying topologies under disturbances such as heavy projectile hits and abruptly changing target direction. In this paper we also present a new way to combine information from the various sources such that minimal amount of information is lost. We furthermore extend the neural network, involved in the algorithm, to represent stochastic policies. Our approach yields a robust control algorithm that is easy to use. While learning, the algorithm runs in near real-time, and after learning the sampling budget can be reduced for real-time operation.",
"title": ""
},
{
"docid": "b5df59d926ca4778c306b255d60870a1",
"text": "In this paper the transcription and evaluation of the corpus DIMEx100 for Mexican Spanish is presented. First we describe the corpus and explain the linguistic and computational motivation for its design and collection process; then, the phonetic antecedents and the alphabet adopted for the transcription task are presented; the corpus has been transcribed at three different granularity levels, which are also specified in detail. The corpus statistics for each transcription level are also presented. A set of phonetic rules describing phonetic context observed empirically in spontaneous conversation is also validated with the transcription. The corpus has been used for the construction of acoustic models and a phonetic dictionary for the construction of a speech recognition system. Initial performance results suggest that the data can be used to train good quality acoustic models.",
"title": ""
},
{
"docid": "e777bb21d57393a4848fcb04c6d5b913",
"text": "A 2.5 GHz fully integrated voltage controlled oscillator (VCO) for wireless application has been designed in a 0.35μm CMOS technology. A method for compensating the effect of temperature on the carrier oscillation frequency has been presented in this work. We compare also different VCOs topologies in order to select one with low phase noise, low supply sensitivity and large tuning frequency. Good results are obtained with a simple NMOS –Gm VCO. This proposed VCO has a wide operating range from 300 MHz with a good linearity between the output frequency and the control input voltage, with a temperature coefficient of -5 ppm/°C from 20°C to 120°C range. The phase noise is about -135.2dBc/Hz at 1MHz from the carrier with a power consumption of 5mW.",
"title": ""
},
{
"docid": "cb641fc639b86abadec4f85efc226c14",
"text": "The modernization of the US electric power infrastructure, especially in lieu of its aging, overstressed networks; shifts in social, energy and environmental policies, and also new vulnerabilities, is a national concern. Our system are required to be more adaptive and secure more than every before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities are discussed herein. This reference paper also outlines research focus for developing next generation of advance tools for efficient and flexible power systems operation and control.",
"title": ""
},
{
"docid": "29ce7251e5237b0666cef2aee7167126",
"text": "Chinese characters have a huge set of character categories, more than 20, 000 and the number is still increasing as more and more novel characters continue being created. However, the enormous characters can be decomposed into a compact set of about 500 fundamental and structural radicals. This paper introduces a novel radical analysis network (RAN) to recognize printed Chinese characters by identifying radicals and analyzing two-dimensional spatial structures among them. The proposed RAN first extracts visual features from input by employing convolutional neural networks as an encoder. Then a decoder based on recurrent neural networks is employed, aiming at generating captions of Chinese characters by detecting radicals and two-dimensional structures through a spatial attention mechanism. The manner of treating a Chinese character as a composition of radicals rather than a single character class largely reduces the size of vocabulary and enables RAN to possess the ability of recognizing unseen Chinese character classes, namely zero-shot learning.",
"title": ""
},
{
"docid": "33285ad9f7bc6e33b48e3f1e27a1ccc9",
"text": "Information visualization is a very important tool in BigData analytics. BigData, structured and unstructured data which contains images, videos, texts, audio and other forms of data, collected from multiple datasets, is too big, too complex and moves too fast to analyse using traditional methods. This has given rise to two issues; 1) how to reduce multidimensional data without the loss of any data patterns for multiple datasets, 2) how to visualize BigData patterns for analysis. In this paper, we have classified the BigData attributes into `5Ws' data dimensions, and then established a `5Ws' density approach that represents the characteristics of data flow patterns. We use parallel coordinates to display the `5Ws' sending and receiving densities which provide more analytic features for BigData analysis. The experiment shows that this new model with parallel coordinate visualization can be efficiently used for BigData analysis and visualization.",
"title": ""
},
{
"docid": "6844deb3346756b1858778a4cec26098",
"text": "Deep Learning has recently been introduced as a new alternative to perform Side-Channel analysis [1]. Until now, studies have been focused on applying Deep Learning techniques to perform Profiled SideChannel attacks where an attacker has a full control of a profiling device and is able to collect a large amount of traces for different key values in order to characterize the device leakage prior to the attack. In this paper we introduce a new method to apply Deep Learning techniques in a Non-Profiled context, where an attacker can only collect a limited number of side-channel traces for a fixed unknown key value from a closed device. We show that by combining key guesses with observations of Deep Learning metrics, it is possible to recover information about the secret key. The main interest of this method, is that it is possible to use the power of Deep Learning and Neural Networks in a Non-Profiled scenario. We show that it is possible to exploit the translation-invariance property of Convolutional Neural Networks [2] against de-synchronized traces and use Data Augmentation techniques also during Non-Profiled side-channel attacks. Additionally, the present work shows that in some conditions, this method can outperform classic Non-Profiled attacks as Correlation Power Analysis. We also highlight that it is possible to target masked implementations without leakages combination pre-preprocessing and with less assumptions than classic high-order attacks. To illustrate these properties, we present a series of experiments performed on simulated data and real traces collected from the ChipWhisperer board and from the ASCAD database [3]. The results of our experiments demonstrate the interests of this new method and show that this attack can be performed in practice.",
"title": ""
}
] |
scidocsrr
|
8c6fd0aedbea7938ae0b08297b62d4a7
|
Screening for Depression Patients in Family Medicine
|
[
{
"docid": "f84f279b6ef3b112a0411f5cba82e1b0",
"text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed",
"title": ""
}
] |
[
{
"docid": "5945132041b353b72af11e88b6ba5b97",
"text": "Oblivious RAM (ORAM) protocols are powerful techniques that hide a client’s data as well as access patterns from untrusted service providers. We present an oblivious cloud storage system, ObliviSync, that specifically targets one of the most widely-used personal cloud storage paradigms: synchronization and backup services, popular examples of which are Dropbox, iCloud Drive, and Google Drive. This setting provides a unique opportunity because the above privacy properties can be achieved with a simpler form of ORAM called write-only ORAM, which allows for dramatically increased efficiency compared to related work. Our solution is asymptotically optimal and practically efficient, with a small constant overhead of approximately 4x compared with non-private file storage, depending only on the total data size and parameters chosen according to the usage rate, and not on the number or size of individual files. Our construction also offers protection against timing-channel attacks, which has not been previously considered in ORAM protocols. We built and evaluated a full implementation of ObliviSync that supports multiple simultaneous read-only clients and a single concurrent read/write client whose edits automatically and seamlessly propagate to the readers. We show that our system functions under high work loads, with realistic file size distributions, and with small additional latency (as compared to a baseline encrypted file system) when paired with Dropbox as the synchronization service.",
"title": ""
},
{
"docid": "38666c5299ee67e336dc65f23f528a56",
"text": "Different modalities of magnetic resonance imaging (MRI) can indicate tumor-induced tissue changes from different perspectives, thus benefit brain tumor segmentation when they are considered together. Meanwhile, it is always interesting to examine the diagnosis potential from single modality, considering the cost of acquiring multi-modality images. Clinically, T1-weighted MRI is the most commonly used MR imaging modality, although it may not be the best option for contouring brain tumor. In this paper, we investigate whether synthesizing FLAIR images from T1 could help improve brain tumor segmentation from the single modality of T1. This is achieved by designing a 3D conditional Generative Adversarial Network (cGAN) for FLAIR image synthesis and a local adaptive fusion method to better depict the details of the synthesized FLAIR images. The proposed method can effectively handle the segmentation task of brain tumors that vary in appearance, size and location across samples.",
"title": ""
},
{
"docid": "28b2bbcfb8960ff40f2fe456a5b00729",
"text": "This paper presents an adaptation of Lesk’s dictionary– based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. This method is evaluated using the English lexical sample data from the Senseval-2 word sense disambiguation exercise, and attains an overall accuracy of 32%. This represents a significant improvement over the 16% and 23% accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation",
"title": ""
},
{
"docid": "07af60525d625fd50e75f61dca4107db",
"text": "Spell checking is a well-known task in Natural Language Processing. Nowadays, spell checkers are an important component of a number of computer software such as web browsers, word processors and others. Spelling error detection and correction is the process that will check the spelling of words in a document, and in occurrence of any error, list out the correct spelling in the form of suggestions. This survey paper covers different spelling error detection and correction techniques in various languages. KeywordsNLP, Spell Checker, Spelling Errors, Error detection techniques, Error correction techniques.",
"title": ""
},
{
"docid": "d6c34d138692851efdbb807a89d0fcca",
"text": "Vaccine hesitancy reflects concerns about the decision to vaccinate oneself or one's children. There is a broad range of factors contributing to vaccine hesitancy, including the compulsory nature of vaccines, their coincidental temporal relationships to adverse health outcomes, unfamiliarity with vaccine-preventable diseases, and lack of trust in corporations and public health agencies. Although vaccination is a norm in the U.S. and the majority of parents vaccinate their children, many do so amid concerns. The proportion of parents claiming non-medical exemptions to school immunization requirements has been increasing over the past decade. Vaccine refusal has been associated with outbreaks of invasive Haemophilus influenzae type b disease, varicella, pneumococcal disease, measles, and pertussis, resulting in the unnecessary suffering of young children and waste of limited public health resources. Vaccine hesitancy is an extremely important issue that needs to be addressed because effective control of vaccine-preventable diseases generally requires indefinite maintenance of extremely high rates of timely vaccination. The multifactorial and complex causes of vaccine hesitancy require a broad range of approaches on the individual, provider, health system, and national levels. These include standardized measurement tools to quantify and locate clustering of vaccine hesitancy and better understand issues of trust; rapid, independent, and transparent review of an enhanced and appropriately funded vaccine safety system; adequate reimbursement for vaccine risk communication in doctors' offices; and individually tailored messages for parents who have vaccine concerns, especially first-time pregnant women. The potential of vaccines to prevent illness and save lives has never been greater. Yet, that potential is directly dependent on parental acceptance of vaccines, which requires confidence in vaccines, healthcare providers who recommend and administer vaccines, and the systems to make sure vaccines are safe.",
"title": ""
},
{
"docid": "a1fcf0d2b9a619c0a70b210c70cf4bfd",
"text": "This paper demonstrates a reliable navigation of a mobile robot in outdoor environment. We fuse differential GPS and odometry data using the framework of extended Kalman filter to localize a mobile robot. And also, we propose an algorithm to detect curbs through the laser range finder. An important feature of road environment is the existence of curbs. The mobile robot builds the map of the curbs of roads and the map is used for tracking and localization. The navigation system for the mobile robot consists of a mobile robot and a control station. The mobile robot sends the image data from a camera to the control station. The control station receives and displays the image data and the teleoperator commands the mobile robot based on the image data. Since the image data does not contain enough data for reliable navigation, a hybrid strategy for reliable mobile robot in outdoor environment is suggested. When the mobile robot is faced with unexpected obstacles or the situation that, if it follows the command, it can happen to collide, it sends a warning message to the teleoperator and changes the mode from teleoperated to autonomous to avoid the obstacles by itself. After avoiding the obstacles or the collision situation, the mode of the mobile robot is returned to teleoperated mode. We have been able to confirm that the appropriate change of navigation mode can help the teleoperator perform reliable navigation in outdoor environment through experiments in the road.",
"title": ""
},
{
"docid": "0ce06f95b1dafcac6dad4413c8b81970",
"text": "User acceptance of artificial intelligence agents might depend on their ability to explain their reasoning, which requires adding an interpretability layer that facilitates users to understand their behavior. This paper focuses on adding an interpretable layer on top of Semantic Textual Similarity (STS), which measures the degree of semantic equivalence between two sentences. The interpretability layer is formalized as the alignment between pairs of segments across the two sentences, where the relation between the segments is labeled with a relation type and a similarity score. We present a publicly available dataset of sentence pairs annotated following the formalization. We then develop a system trained on this dataset which, given a sentence pair, explains what is similar and different, in the form of graded and typed segment alignments. When evaluated on the dataset, the system performs better than an informed baseline, showing that the dataset and task are well-defined and feasible. Most importantly, two user studies show how the system output can be used to automatically produce explanations in natural language. Users performed better when having access to the explanations, providing preliminary evidence that our dataset and method to automatically produce explanations is useful in real applications.",
"title": ""
},
{
"docid": "b1f3f0dac49d6613f381b30ebf5b0ad7",
"text": "In the current Web scenario a video browsing tool that produces on-the-fly storyboards is more and more a need. Video summary techniques can be helpful but, due to their long processing time, they are usually unsuitable for on-the-fly usage. Therefore, it is common to produce storyboards in advance, penalizing users customization. The lack of customization is more and more critical, as users have different demands and might access the Web with several different networking and device technologies. In this paper we propose STIMO, a summarization technique designed to produce on-the-fly video storyboards. STIMO produces still and moving storyboards and allows advanced users customization (e.g., users can select the storyboard length and the maximum time they are willing to wait to get the storyboard). STIMO is based on a fast clustering algorithm that selects the most representative video contents using HSV frame color distribution. Experimental results show that STIMO produces storyboards with good quality and in a time that makes on-the-fly usage possible.",
"title": ""
},
{
"docid": "16afaad8bfdc64f9d97e9829f2029bc6",
"text": "The combination of limited individual information and costly information acquisition in markets for experience goods leads us to believe that significant peer effects drive demand in these markets. In this paper we model the effects of peers on the demand patterns of products in the market experience goods microfunding. By analyzing data from an online crowdfunding platform from 2006 to 2010 we are able to ascertain that peer effects, and not network externalities, influence consumption.",
"title": ""
},
{
"docid": "c6283ee48fd5115d28e4ea0812150f25",
"text": "Stochastic regular bi-languages has been recently proposed to model the joint probability distributions appearing in some statistical approaches of Spoken Dialog Systems. To this end a deterministic and probabilistic finite state biautomaton was defined to model the distribution probabilities for the dialog model. In this work we propose and evaluate decision strategies over the defined probabilistic finite state bi-automaton to select the best system action at each step of the interaction. To this end the paper proposes some heuristic decision functions that consider both action probabilities learn from a corpus and number of known attributes at running time. We compare either heuristics based on a single next turn or based on entire paths over the automaton. Experimental evaluation was carried out to test the model and the strategies over the Let’s Go Bus Information system. The results obtained show good system performances. They also show that local decisions can lead to better system performances than best path-based decisions due to the unpredictability of the user behaviors.",
"title": ""
},
{
"docid": "dffe5305558e10a0ceba499f3a01f4d8",
"text": "A simple framework Probabilistic Multi-view Graph Embedding (PMvGE) is proposed for multi-view feature learning with many-to-many associations so that it generalizes various existing multi-view methods. PMvGE is a probabilistic model for predicting new associations via graph embedding of the nodes of data vectors with links of their associations. Multi-view data vectors with many-to-many associations are transformed by neural networks to feature vectors in a shared space, and the probability of new association between two data vectors is modeled by the inner product of their feature vectors. While existing multi-view feature learning techniques can treat only either of many-to-many association or non-linear transformation, PMvGE can treat both simultaneously. By combining Mercer’s theorem and the universal approximation theorem, we prove that PMvGE learns a wide class of similarity measures across views. Our likelihoodbased estimator enables efficient computation of non-linear transformations of data vectors in largescale datasets by minibatch SGD, and numerical experiments illustrate that PMvGE outperforms existing multi-view methods.",
"title": ""
},
{
"docid": "477769b83e70f1d46062518b1d692664",
"text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.",
"title": ""
},
{
"docid": "83e0fdbaa10c01aecdbe9cf853511230",
"text": "We use an online travel context to test three aspects of communication",
"title": ""
},
{
"docid": "58af6565b74f68371a1c61eab44a72c5",
"text": "Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.",
"title": ""
},
{
"docid": "ea937e1209c270a7b6ab2214e0989fed",
"text": "With current projections regarding the growth of Internet sales, online retailing raises many questions about how to market on the Net. While convenience impels consumers to purchase items on the web, quality remains a significant factor in deciding where to shop online. The competition is increasing and personalization is considered to be the competitive advantage that will determine the winners in the market of online shopping in the following years. Recommender systems are a means of personalizing a site and a solution to the customer’s information overload problem. As such, many e-commerce sites already use them to facilitate the buying process. In this paper we present a recommender system for online shopping focusing on the specific characteristics and requirements of electronic retailing. We use a hybrid model supporting dynamic recommendations, which eliminates the problems the underlying techniques have when applied solely. At the end, we conclude with some ideas for further development and research in this area.",
"title": ""
},
{
"docid": "fa9571673fe848d1d119e2d49f21d28d",
"text": "Convolutional Neural Networks (CNNs) trained on large scale RGB databases have become the secret sauce in the majority of recent approaches for object categorization from RGB-D data. Thanks to colorization techniques, these methods exploit the filters learned from 2D images to extract meaningful representations in 2.5D. Still, the perceptual signature of these two kind of images is very different, with the first usually strongly characterized by textures, and the second mostly by silhouettes of objects. Ideally, one would like to have two CNNs, one for RGB and one for depth, each trained on a suitable data collection, able to capture the perceptual properties of each channel for the task at hand. This has not been possible so far, due to the lack of a suitable depth database. This paper addresses this issue, proposing to opt for synthetically generated images rather than collecting by hand a 2.5D large scale database. While being clearly a proxy for real data, synthetic images allow to trade quality for quantity, making it possible to generate a virtually infinite amount of data. We show that the filters learned from such data collection, using the very same architecture typically used on visual data, learns very different filters, resulting in depth features (a) able to better characterize the different facets of depth images, and (b) complementary with respect to those derived from CNNs pre-trained on 2D datasets. Experiments on two publicly available databases show the power of our approach.",
"title": ""
},
{
"docid": "b54ca99ae8818517d5c04100bad0f3b4",
"text": "Finding the sparsest solutions to a tensor complementarity problem is generally NP-hard due to the nonconvexity and noncontinuity of the involved 0 norm. In this paper, a special type of tensor complementarity problems with Z -tensors has been considered. Under some mild conditions, we show that to pursuit the sparsest solutions is equivalent to solving polynomial programming with a linear objective function. The involved conditions guarantee the desired exact relaxation and also allow to achieve a global optimal solution to the relaxednonconvexpolynomial programming problem. Particularly, in comparison to existing exact relaxation conditions, such as RIP-type ones, our proposed conditions are easy to verify. This research was supported by the National Natural Science Foundation of China (11301022, 11431002), the State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University (RCS2014ZT20, RCS2014ZZ01), and the Hong Kong Research Grant Council (Grant No. PolyU 502111, 501212, 501913 and 15302114). B Ziyan Luo starkeynature@hotmail.com Liqun Qi liqun.qi@polyu.edu.hk Naihua Xiu nhxiu@bjtu.edu.cn 1 State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing 100044, People’s Repubic of China 2 Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, People’s Repubic of China 3 Department of Mathematics, School of Science, Beijing Jiaotong University, Beijing, People’s Repubic of China 123 Author's personal copy",
"title": ""
},
{
"docid": "fbcad29c075e8d58b9f6df5ee70aa0be",
"text": "We present a motion planning framework for autonomous on-road driving considering both the uncertainty caused by an autonomous vehicle and other traffic participants. The future motion of traffic participants is predicted using a local planner, and the uncertainty along the predicted trajectory is computed based on Gaussian propagation. For the autonomous vehicle, the uncertainty from localization and control is estimated based on a Linear-Quadratic Gaussian (LQG) framework. Compared with other safety assessment methods, our framework allows the planner to avoid unsafe situations more efficiently, thanks to the direct uncertainty information feedback to the planner. We also demonstrate our planner's ability to generate safer trajectories compared to planning only with a LQG framework.",
"title": ""
},
{
"docid": "6fb8b461530af2c56ec0fac36dd85d3a",
"text": "Psoriatic arthritis is one of the spondyloarthritis. It is a disease of clinical heterogenicity, which may affect peripheral joints, as well as axial spine, with presence of inflammatory lesions in soft tissue, in a form of dactylitis and enthesopathy. Plain radiography remains the basic imaging modality for PsA diagnosis, although early inflammatory changes affecting soft tissue and bone marrow cannot be detected with its use, or the image is indistinctive. Typical radiographic features of PsA occur in an advanced disease, mainly within the synovial joints, but also in fibrocartilaginous joints, such as sacroiliac joints, and additionally in entheses of tendons and ligaments. Moll and Wright classified PsA into 5 subtypes: asymmetric oligoarthritis, symmetric polyarthritis, arthritis mutilans, distal interphalangeal arthritis of the hands and feet and spinal column involvement. In this part of the paper we discuss radiographic features of the disease. The next one will address magnetic resonance imaging and ultrasonography.",
"title": ""
}
] |
scidocsrr
|
d3b2ea56837b774bdd1ba56a171bd547
|
Automating image segmentation verification and validation by learning test oracles
|
[
{
"docid": "a5fc5e1bf35863d030b20c219732bc2b",
"text": "Measures of overlap of labelled regions of images, such as the Dice and Tanimoto coefficients, have been extensively used to evaluate image registration and segmentation algorithms. Modern studies can include multiple labels defined on multiple images yet most evaluation schemes report one overlap per labelled region, simply averaged over multiple images. In this paper, common overlap measures are generalized to measure the total overlap of ensembles of labels defined on multiple test images and account for fractional labels using fuzzy set theory. This framework allows a single \"figure-of-merit\" to be reported which summarises the results of a complex experiment by image pair, by label or overall. A complementary measure of error, the overlap distance, is defined which captures the spatial extent of the nonoverlapping part and is related to the Hausdorff distance computed on grey level images. The generalized overlap measures are validated on synthetic images for which the overlap can be computed analytically and used as similarity measures in nonrigid registration of three-dimensional magnetic resonance imaging (MRI) brain images. Finally, a pragmatic segmentation ground truth is constructed by registering a magnetic resonance atlas brain to 20 individual scans, and used with the overlap measures to evaluate publicly available brain segmentation algorithms",
"title": ""
},
{
"docid": "892cfde6defce89783f0c290df4822f2",
"text": "Metamorphic testing has been shown to be a simple yet effective technique in addressing the quality assurance of applications that do not have test oracles, i.e., for which it is difficult or impossible to know what the correct output should be for arbitrary input. In metamorphic testing, existing test case input is modified to produce new test cases in such a manner that, when given the new input, the application should produce an output that can easily be computed based on the original output. That is, if input x produces output f(x), then we create input x' such that we can predict f(x') based on f(x); if the application does not produce the expected output, then a defect must exist, and either f(x), or f(x') (or both) is wrong.\n In practice, however, metamorphic testing can be a manually intensive technique for all but the simplest cases. The transformation of input data can be laborious for large data sets, or practically impossible for input that is not in human-readable format. Similarly, comparing the outputs can be error-prone for large result sets, especially when slight variations in the results are not actually indicative of errors (i.e., are false positives), for instance when there is non-determinism in the application and multiple outputs can be considered correct.\n In this paper, we present an approach called Automated Metamorphic System Testing. This involves the automation of metamorphic testing at the system level by checking that the metamorphic properties of the entire application hold after its execution. The tester is able to easily set up and conduct metamorphic tests with little manual intervention, and testing can continue in the field with minimal impact on the user. Additionally, we present an approach called Heuristic Metamorphic Testing which seeks to reduce false positives and address some cases of non-determinism. We also describe an implementation framework called Amsterdam, and present the results of empirical studies in which we demonstrate the effectiveness of the technique on real-world programs without test oracles.",
"title": ""
},
{
"docid": "d4aaea0107cbebd7896f4cb57fa39c05",
"text": "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs",
"title": ""
}
] |
[
{
"docid": "69d42340c09303b69eafb19de7170159",
"text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.",
"title": ""
},
{
"docid": "47501c171c7b3f8e607550c958852be1",
"text": "Fundus images provide an opportunity for early detection of diabetes. Generally, retina fundus images of diabetic patients exhibit exudates, which are lesions indicative of Diabetic Retinopathy (DR). Therefore, computational tools can be considered to be used in assisting ophthalmologists and medical doctor for the early screening of the disease. Hence in this paper, we proposed visualisation of exudates in fundus images using radar chart and Color Auto Correlogram (CAC) technique. The proposed technique requires that the Optic Disc (OD) from the fundus image be removed. Next, image normalisation was performed to standardise the colors in the fundus images. The exudates from the modified image are then extracted using Artificial Neural Network (ANN) and visualised using radar chart and CAC technique. The proposed technique was tested on 149 images of the publicly available MESSIDOR database. Experimental results suggest that the method has potential to be used for early indication of DR, by visualising the overlap between CAC features of the fundus images.",
"title": ""
},
{
"docid": "07e91583f63660a6b4aa4bb2063bd2b7",
"text": "ScanSAR interferometry is an attractive option for efficient topographic mapping of large areas and for monitoring of large-scale motions. Only ScanSAR interferometry made it possible to map almost the entire landmass of the earth in the 11-day Shuttle Radar Topography Mission. Also the operational satellites RADARSAT and ENVISAT offer ScanSAR imaging modes and thus allow for repeat-pass ScanSAR interferometry. This paper gives a complete description of ScanSAR and burst-mode interferometric signal properties and compares different processing algorithms. The problems addressed are azimuth scanning pattern synchronization, spectral shift filtering in the presence of high squint, Doppler centroid estimation, different phase-preserving ScanSAR processing algorithms, ScanSAR interferogram formation, coregistration, and beam alignment. Interferograms and digital elevation models from RADARSAT ScanSAR Narrow modes are presented. The novel “pack-and-go” algorithm for efficient burst-mode range processing and a new time-variant fast interpolator for interferometric coregistration are introduced.",
"title": ""
},
{
"docid": "8cf10c84e6e389c0c10238477c619175",
"text": "Based on self-determination theory, this study proposes and tests a motivational model of intraindividual changes in teacher burnout (emotional exhaustion, depersonalization, and reduced personal accomplishment). Participants were 806 French-Canadian teachers in public elementary and high schools. Results show that changes in teachers’ perceptions of classroom overload and students’ disruptive behavior are negatively related to changes in autonomous motivation, which in turn negatively predict changes in emotional exhaustion. Results also indicate that changes in teachers’ perceptions of students’ disruptive behaviors and school principal’s leadership behaviors are related to changes in self-efficacy, which in turn negatively predict changes in three burnout components. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "98e9d8fb4a04ad141b3a196fe0a9c08b",
"text": "ÐGraphs are a powerful and universal data structure useful in various subfields of science and engineering. In this paper, we propose a new algorithm for subgraph isomorphism detection from a set of a priori known model graphs to an input graph that is given online. The new approach is based on a compact representation of the model graphs that is computed offline. Subgraphs that appear multiple times within the same or within different model graphs are represented only once, thus reducing the computational effort to detect them in an input graph. In the extreme case where all model graphs are highly similar, the run-time of the new algorithm becomes independent of the number of model graphs. Both a theoretical complexity analysis and practical experiments characterizing the performance of the new approach will be given. Index TermsÐGraph matching, graph isomorphism, subgraph isomorphism, preprocessing.",
"title": ""
},
{
"docid": "5507f3199296478abbc6e106943a53ba",
"text": "Hiding a secret is needed in many situations. One might need to hide a password, an encryption key, a secret recipe, and etc. Information can be secured with encryption, but the need to secure the secret key used for such encryption is important too. Imagine you encrypt your important files with one secret key and if such a key is lost then all the important files will be inaccessible. Thus, secure and efficient key management mechanisms are required. One of them is secret sharing scheme (SSS) that lets you split your secret into several parts and distribute them among selected parties. The secret can be recovered once these parties collaborate in some way. This paper will study these schemes and explain the need for them and their security. Across the years, various schemes have been presented. This paper will survey some of them varying from trivial schemes to threshold based ones. Explanations on these schemes constructions are presented. The paper will also look at some applications of SSS.",
"title": ""
},
{
"docid": "7275ce89ea2f5ab8eb8b6651e2487dcb",
"text": "A major challenge of semantic parsing is the vocabulary mismatch problem between natural language and target ontology. In this paper, we propose a sentence rewriting based semantic parsing method, which can effectively resolve the mismatch problem by rewriting a sentence into a new form which has the same structure with its target logical form. Specifically, we propose two sentence-rewriting methods for two common types of mismatch: a dictionary-based method for 1N mismatch and a template-based method for N-1 mismatch. We evaluate our sentence rewriting based semantic parser on the benchmark semantic parsing dataset – WEBQUESTIONS. Experimental results show that our system outperforms the base system with a 3.4% gain in F1, and generates logical forms more accurately and parses sentences more robustly.",
"title": ""
},
{
"docid": "f0c9db6cab187463162c8bba71ea011a",
"text": "Traditional Network-on-Chips (NoCs) employ simple arbitration strategies, such as round-robin or oldest-first, to decide which packets should be prioritized in the network. This is counter-intuitive since different packets can have very different effects on system performance due to, e.g., different level of memory-level parallelism (MLP) of applications. Certain packets may be performance-critical because they cause the processor to stall, whereas others may be delayed for a number of cycles with no effect on application-level performance as their latencies are hidden by other outstanding packets'latencies. In this paper, we define slack as a key measure that characterizes the relative importance of a packet. Specifically, the slack of a packet is the number of cycles the packet can be delayed in the network with no effect on execution time. This paper proposes new router prioritization policies that exploit the available slack of interfering packets in order to accelerate performance-critical packets and thus improve overall system performance. When two packets interfere with each other in a router, the packet with the lower slack value is prioritized. We describe mechanisms to estimate slack, prevent starvation, and combine slack-based prioritization with other recently proposed application-aware prioritization mechanisms.\n We evaluate slack-based prioritization policies on a 64-core CMP with an 8x8 mesh NoC using a suite of 35 diverse applications. For a representative set of case studies, our proposed policy increases average system throughput by 21.0% over the commonlyused round-robin policy. Averaged over 56 randomly-generated multiprogrammed workload mixes, the proposed policy improves system throughput by 10.3%, while also reducing application-level unfairness by 30.8%.",
"title": ""
},
{
"docid": "1bbd0eca854737c94e62442ee4cedac8",
"text": "Most convolutional neural networks (CNNs) lack midlevel layers that model semantic parts of objects. This limits CNN-based methods from reaching their full potential in detecting and utilizing small semantic parts in recognition. Introducing such mid-level layers can facilitate the extraction of part-specific features which can be utilized for better recognition performance. This is particularly important in the domain of fine-grained recognition. In this paper, we propose a new CNN architecture that integrates semantic part detection and abstraction (SPDACNN) for fine-grained classification. The proposed network has two sub-networks: one for detection and one for recognition. The detection sub-network has a novel top-down proposal method to generate small semantic part candidates for detection. The classification sub-network introduces novel part layers that extract features from parts detected by the detection sub-network, and combine them for recognition. As a result, the proposed architecture provides an end-to-end network that performs detection, localization of multiple semantic parts, and whole object recognition within one framework that shares the computation of convolutional filters. Our method outperforms state-of-theart methods with a large margin for small parts detection (e.g. our precision of 93.40% vs the best previous precision of 74.00% for detecting the head on CUB-2011). It also compares favorably to the existing state-of-the-art on finegrained classification, e.g. it achieves 85.14% accuracy on CUB-2011.",
"title": ""
},
{
"docid": "f9f54cf8c057d2d9f9b559eb62a94e38",
"text": "The proliferation of malware has presented a serious threat to the security of computer systems. Traditional signature-based anti-virus systems fail to detect polymorphic/metamorphic and new, previously unseen malicious executables. Data mining methods such as Naive Bayes and Decision Tree have been studied on small collections of executables. In this paper, resting on the analysis of Windows APIs called by PE files, we develop the Intelligent Malware Detection System (IMDS) using Objective-Oriented Association (OOA) mining based classification. IMDS is an integrated system consisting of three major modules: PE parser, OOA rule generator, and rule based classifier. An OOA_Fast_FP-Growth algorithm is adapted to efficiently generate OOA rules for classification. A comprehensive experimental study on a large collection of PE files obtained from the anti-virus laboratory of KingSoft Corporation is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our IMDS system outperform popular anti-virus software such as Norton AntiVirus and McAfee VirusScan, as well as previous data mining based detection systems which employed Naive Bayes, Support Vector Machine (SVM) and Decision Tree techniques. Our system has already been incorporated into the scanning tool of KingSoft’s Anti-Virus software.",
"title": ""
},
{
"docid": "b18f5df68581789312d48c65ba7afb9d",
"text": "In this study, an efficient addressing scheme for radix-4 FFT processor is presented. The proposed method uses extra registers to buffer and reorder the data inputs of the butterfly unit. It avoids the modulo-r addition in the address generation; hence, the critical path is significantly shorter than the conventional radix-4 FFT implementations. A significant property of the proposed method is that the critical path of the address generator is independent from the FFT transform length N, making it extremely efficient for large FFT transforms. For performance evaluation, the new FFT architecture has been implemented by FPGA (Altera Stratix) hardware and also synthesized by CMOS 0.18µm technology. The results confirm the speed and area advantages for large FFTs. Although only radix-4 FFT address generation is presented in the paper, it can be used for higher radix FFT.",
"title": ""
},
{
"docid": "b68e09f879e51aad3ed0ce8b696da957",
"text": "The status of current model-driven engineering technologies has matured over the last years whereas the infrastructure supporting model management is still in its infancy. Infrastructural means include version control systems, which are successfully used for the management of textual artifacts like source code. Unfortunately, they are only limited suitable for models. Consequently, dedicated solutions emerge. These approaches are currently hard to compare, because no common quality measure has been established yet and no structured test cases are available. In this paper, we analyze the challenges coming along with merging different versions of one model and derive a first categorization of typical changes and the therefrom resulting conflicts. On this basis we create a set of test cases on which we apply state-of-the-art versioning systems and report our experiences.",
"title": ""
},
{
"docid": "a4197ab8a70142ac331599c506996bc9",
"text": "This paper presents the findings of two studies that replicate previous work by Fred Davis on the subject of perceived usefulness, ease of use, and usage of information technology. The two studies focus on evaluating the psychometric properties of the ease of use and usefulness scales, while examining the relationship between ease of use, usefulness, and system usage. Study 1 provides a strong assessment of the convergent validity of the two scales by examining heterogeneous user groups dealing with heterogeneous implementations of messaging technology. In addition, because one might expect users to share similar perspectives about voice and electronic mail, the study also represents a strong test of discriminant validity. In this study a total of 118 respondents from 10 different organizations were surveyed for their attitudes toward two messaging technologies: voice and electronic mail. Study 2 complements the approach taken in Study 1 by focusing on the ability to demonstrate discriminant validity. Three popular software applications (WordPerfect, Lotus 1-2-3, and Harvard Graphics) were examined based on the expectation that they would all be rated highly on both scales. In this study a total of 73 users rated the three packages in terms of ease of use and usefulness. The results of the studies demonstrate reliable and valid scales for measurement of perceived ease of use and usefulness. In addition, the paper tests the relationships between ease of use, usefulness, and usage using structural equation modelling. The results of this model are consistent with previous research for Study 1, suggesting that usefulness is an important determinant of system use. For Study 2 the results are somewhat mixed, but indicate the importance of both ease of use and usefulness. Differences in conditions of usage are explored to explain these findings.",
"title": ""
},
{
"docid": "d638bf6a0ec3354dd6ba90df0536aa72",
"text": "Selected elements of dynamical system (DS) theory approach to nonlinear time series analysis are introduced. Key role in this concept plays a method of time delay. The method enables us reconstruct phase space trajectory of DS without knowledge of its governing equations. Our variant is tested and compared with wellknown TISEAN package for Lorenz and Hénon systems. Introduction There are number of methods of nonlinear time series analysis (e.g. nonlinear prediction or noise reduction) that work in a phase space (PS) of dynamical systems. We assume that a given time series of some variable is generated by a dynamical system. A specific state of the system can be represented by a point in the phase space and time evolution of the system creates a trajectory in the phase space. From this point of view we consider our time series to be a projection of trajectory of DS to one (or more – when we have more simultaneously measured variables) coordinates of phase space. This view was enabled due to formulation of embedding theorem [1], [2] at the beginning of the 1980s. It says that it is possible to reconstruct the phase space from the time series. One of the most frequently used methods of phase space reconstruction is the method of time delay. The main task while using this method is to determine values of time delay τ and embedding dimension m. We tested individual steps of this method on simulated data generated by Lorenz and Hénon systems. We compared results computed by our own programs with outputs of program package TISEAN created by R. Hegger, H. Kantz, and T. Schreiber [3]. Method of time delay The most frequently used method of PS reconstruction is the method of time delay. If we have a time series of a scalar variable we construct a vector ( ) , ,..., 1 , N i t x i = in phase space in time ti as following: ( ) ( ) ( ) ( ) ( ) ( ) [ ], 1 ,..., 2 , , τ τ τ − + + + = m t x t x t x t x t i i i i i X where i goes from 1 to N – (m – 1)τ, τ is time delay, m is a dimension of reconstructed space (embedding dimension) and M = N – (m – 1)τ is number of points (states) in the phase space. According to embedding theorem, when this is done in a proper way, dynamics reconstructed using this formula is equivalent to the dynamics on an attractor in the origin phase space in the sense that characteristic invariants of the system are conserved. The time delay method and related aspects are described in literature, e.g. [4]. We estimated the two parameters—time delay and embedding dimension—using algorithms below. Choosing a time delay To determine a suitable time delay we used average mutual information (AMI), a certain generalization of autocorrelation function. Average mutual information between sets of measurements A and B is defined [5]:",
"title": ""
},
{
"docid": "57334078030a2b2d393a7c236d6a3a1c",
"text": "Neural Architecture Search (NAS) aims at finding one “single” architecture that achieves the best accuracy for a given task such as image recognition. In this paper, we study the instance-level variation, and demonstrate that instance-awareness is an important yet currently missing component of NAS. Based on this observation, we propose InstaNAS for searching toward instance-level architectures; the controller is trained to search and form a “distribution of architectures” instead of a single final architecture. Then during the inference phase, the controller selects an architecture from the distribution, tailored for each unseen image to achieve both high accuracy and short latency. The experimental results show that InstaNAS reduces the inference latency without compromising classification accuracy. On average, InstaNAS achieves 48.9% latency reduction on CIFAR-10 and 40.2% latency reduction on CIFAR-100 with respect to MobileNetV2 architecture.",
"title": ""
},
{
"docid": "51f47a5e873f7b24cd15aff4ceb8d35c",
"text": "We introduce the Adaptive Skills, Adaptive Partitions (ASAP) framework that (1) learns skills (i.e., temporally extended actions or options) as well as (2) where to apply them. We believe that both (1) and (2) are necessary for a truly general skill learning framework, which is a key building block needed to scale up to lifelong learning agents. The ASAP framework can also solve related new tasks simply by adapting where it applies its existing learned skills. We prove that ASAP converges to a local optimum under natural conditions. Finally, our experimental results, which include a RoboCup domain, demonstrate the ability of ASAP to learn where to reuse skills as well as solve multiple tasks with considerably less experience than solving each task from scratch.",
"title": ""
},
{
"docid": "148b7445ec2cd811d64fd81c61c20e02",
"text": "Using sensors to measure parameters of interest in rotating environments and communicating the measurements in real-time over wireless links, requires a reliable power source. In this paper, we have investigated the possibility to generate electric power locally by evaluating six different energy-harvesting technologies. The applicability of the technology is evaluated by several parameters that are important to the functionality in an industrial environment. All technologies are individually presented and evaluated, a concluding table is also summarizing the technologies strengths and weaknesses. To support the technology evaluation on a more theoretical level, simulations has been performed to strengthen our claims. Among the evaluated and simulated technologies, we found that the variable reluctance-based harvesting technology is the strongest candidate for further technology development for the considered use-case.",
"title": ""
},
{
"docid": "eab514f5951a9e2d3752002c7ba799d8",
"text": "In industrial fabric productions, automated real time systems are needed to find out the minor defects. It will save the cost by not transporting defected products and also would help in making compmay image of quality fabrics by sending out only undefected products. A real time fabric defect detection system (FDDS), implementd on an embedded DSP platform is presented here. Textural features of fabric image are extracted based on gray level co-occurrence matrix (GLCM). A sliding window technique is used for defect detection where window moves over the whole image computing a textural energy from the GLCM of the fabric image. The energy values are compared to a reference and the deviations beyond a threshold are reported as defects and also visually represented by a window. The implementation is carried out on a TI TMS320DM642 platform and programmed using code composer studio software. The real time output of this implementation was shown on a monitor. KeywordsFabric Defects, Texture, Grey Level Co-occurrence Matrix, DSP Kit, Energy Computation, Sliding Window, FDDS",
"title": ""
},
{
"docid": "5ae1191a27958704ab5f33749c6b30b5",
"text": "Much of Bluetooth’s data remains confidential in practice due to the difficulty of eavesdropping it. We present mechanisms for doing so, therefore eliminating the data confidentiality properties of the protocol. As an additional security measure, devices often operate in “undiscoverable mode” in order to hide their identity and provide access control. We show how the full MAC address of such master devices can be obtained, therefore bypassing the access control of this feature. Our work results in the first open-source Bluetooth sniffer.",
"title": ""
},
{
"docid": "657087aaadc0537e9fb19c422c27b485",
"text": "Swarms of embedded devices provide new challenges for privacy and security. We propose Permissioned Blockchains as an effective way to secure and manage these systems of systems. A long view of blockchain technology yields several requirements absent in extant blockchain implementations. Our approach to Permissioned Blockchains meets the fundamental requirements for longevity, agility, and incremental adoption. Distributed Identity Management is an inherent feature of our Permissioned Blockchain and provides for resilient user and device identity and attribute management.",
"title": ""
}
] |
scidocsrr
|
419e64d3afee302db4f7fabe52be4e3b
|
Offline signature verification using classifier combination of HOG and LBP features
|
[
{
"docid": "7489989ecaa16bc699949608f9ffc8a1",
"text": "A method for conducting off-line handwritten signature verification is described. It works at the global image level and measures the grey level variations in the image using statistical texture features. The co-occurrence matrix and local binary pattern are analysed and used as features. This method begins with a proposed background removal. A histogram is also processed to reduce the influence of different writing ink pens used by signers. Genuine samples and random forgeries have been used to train an SVM model and random and skilled forgeries have been used for testing it. Results are reasonable according to the state-of-the-art and approaches that use the same two databases: MCYT-75 and GPDS100 Corpuses. The combination of the proposed features and those proposed by other authors, based on geometric information, also promises improvements in performance. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "7e1f0cd43cdc9685474e19b7fd65791b",
"text": "Understanding human actions is a key problem in computer vision. However, recognizing actions is only the first step of understanding what a person is doing. In this paper, we introduce the problem of predicting why a person has performed an action in images. This problem has many applications in human activity understanding, such as anticipating or explaining an action. To study this problem, we introduce a new dataset of people performing actions annotated with likely motivations. However, the information in an image alone may not be sufficient to automatically solve this task. Since humans can rely on their lifetime of experiences to infer motivation, we propose to give computer vision systems access to some of these experiences by using recently developed natural language models to mine knowledge stored in massive amounts of text. While we are still far away from fully understanding motivation, our results suggest that transferring knowledge from language into vision can help machines understand why people in images might be performing an action.",
"title": ""
},
{
"docid": "dc2770a8318dd4aa1142efebe5547039",
"text": "The purpose of this study was to describe how reaching onset affects the way infants explore objects and their own bodies. We followed typically developing infants longitudinally from 2 through 5 months of age. At each visit we coded the behaviors infants performed with their hand when an object was attached to it versus when the hand was bare. We found increases in the performance of most exploratory behaviors after the emergence of reaching. These increases occurred both with objects and with bare hands. However, when interacting with objects, infants performed the same behaviors they performed on their bare hands but they performed them more often and in unique combinations. The results support the tenets that: (1) the development of object exploration begins in the first months of life as infants learn to selectively perform exploratory behaviors on their bodies and objects, (2) the onset of reaching is accompanied by significant increases in exploration of both objects and one's own body, (3) infants adapt their self-exploratory behaviors by amplifying their performance and combining them in unique ways to interact with objects.",
"title": ""
},
{
"docid": "f2707d7fcd5d8d9200d4cc8de8ff1042",
"text": "This paper describes recent work on the “Crosswatch” project, which is a computer vision-based smartphone system developed for providing guidance to blind and visually impaired travelers at traffic intersections. A key function of Crosswatch is self-localization - the estimation of the user's location relative to the crosswalks in the current traffic intersection. Such information may be vital to users with low or no vision to ensure that they know which crosswalk they are about to enter, and are properly aligned and positioned relative to the crosswalk. However, while computer vision-based methods have been used for finding crosswalks and helping blind travelers align themselves to them, these methods assume that the entire crosswalk pattern can be imaged in a single frame of video, which poses a significant challenge for a user who lacks enough vision to know where to point the camera so as to properly frame the crosswalk. In this paper we describe work in progress that tackles the problem of crosswalk detection and self-localization, building on recent work describing techniques enabling blind and visually impaired users to acquire 360° image panoramas while turning in place on a sidewalk. The image panorama is converted to an aerial (overhead) view of the nearby intersection, centered on the location that the user is standing at, so as to facilitate matching with a template of the intersection obtained from Google Maps satellite imagery. The matching process allows crosswalk features to be detected and permits the estimation of the user's precise location relative to the crosswalk of interest. We demonstrate our approach on intersection imagery acquired by blind users, thereby establishing the feasibility of the approach.",
"title": ""
},
{
"docid": "f9876540ce148d7b27bab53839f1bf19",
"text": "Recent research endeavors have shown the potential of using feed-forward convolutional neural networks to accomplish fast style transfer for images. In this work, we take one step further to explore the possibility of exploiting a feed-forward network to perform style transfer for videos and simultaneously maintain temporal consistency among stylized video frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames. To calculate the temporal loss during the training stage, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our proposed method employs the trained network to yield temporally consistent stylized videos which are much more visually pleasant. In contrast to the prior video style transfer method which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results.",
"title": ""
},
{
"docid": "eb6572344dbaf8e209388f888fba1c10",
"text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.",
"title": ""
},
{
"docid": "c39836282acc36e77c95e732f4f1c1bc",
"text": "In this paper, a new dataset, HazeRD, is proposed for benchmarking dehazing algorithms under more realistic haze conditions. HazeRD contains fifteen real outdoor scenes, for each of which five different weather conditions are simulated. As opposed to prior datasets that made use of synthetically generated images or indoor images with unrealistic parameters for haze simulation, our outdoor dataset allows for more realistic simulation of haze with parameters that are physically realistic and justified by scattering theory. All images are of high resolution, typically six to eight megapixels. We test the performance of several state-of-the-art dehazing techniques on HazeRD. The results exhibit a significant difference among algorithms across the different datasets, reiterating the need for more realistic datasets such as ours and for more careful benchmarking of the methods.",
"title": ""
},
{
"docid": "49680e94843e070a5ed0179798f66f33",
"text": "Routing protocols for Wireless Sensor Networks (WSN) are designed to select parent nodes so that data packets can reach their destination in a timely and efficient manner. Typically neighboring nodes with strongest connectivity are more selected as parents. This Greedy Routing approach can lead to unbalanced routing loads in the network. Consequently, the network experiences the early death of overloaded nodes causing permanent network partition. Herein, we propose a framework for load balancing of routing in WSN. In-network path tagging is used to monitor network traffic load of nodes. Based on this, nodes are identified as being relatively overloaded, balanced or underloaded. A mitigation algorithm finds suitable new parents for switching from overloaded nodes. The routing engine of the child of the overloaded node is then instructed to switch parent. A key future of the proposed framework is that it is primarily implemented at the Sink and so requires few changes to existing routing protocols. The framework was implemented in TinyOS on TelosB motes and its performance was assessed in a testbed network and in TOSSIM simulation. The algorithm increased the lifetime of the network by 41 % as recorded in the testbed experiment. The Packet Delivery Ratio was also improved from 85.97 to 99.47 %. Finally a comparative study was performed using the proposed framework with various existing routing protocols.",
"title": ""
},
{
"docid": "c9c44cc22c71d580f4b2a24cd91ac274",
"text": "One of the first steps in the utterance interpretation pipeline of many task-oriented conversational AI systems is to identify user intents and the corresponding slots. Neural sequence labeling models have achieved very high accuracy on these tasks when trained on large amounts of training data. However, collecting this data is very time-consuming and therefore it is unfeasible to collect large amounts of data for many languages. For this reason, it is desirable to make use of existing data in a high-resource language to train models in low-resource languages. In this paper, we investigate the performance of three different methods for cross-lingual transfer learning, namely (1) translating the training data, (2) using cross-lingual pre-trained embeddings, and (3) a novel method of using a multilingual machine translation encoder as contextual word representations. We find that given several hundred training examples in the the target language, the latter two methods outperform translating the training data. Further, in very low-resource settings, we find that multilingual contextual word representations give better results than using crosslingual static embeddings. We release a dataset of around 57k annotated utterances in English (43k), Spanish (8.6k) and Thai (5k) for three task oriented domains at https://fb.me/multilingual_task_oriented_data.",
"title": ""
},
{
"docid": "1969bf5a07349cc5a9b498e0437e41fe",
"text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.",
"title": ""
},
{
"docid": "49d6b3f314b61ace11afc5eea7b652e3",
"text": "Euler diagrams visually represent containment, intersection and exclusion using closed curves. They first appeared several hundred years ago, however, there has been a resurgence in Euler diagram research in the twenty-first century. This was initially driven by their use in visual languages, where they can be used to represent logical expressions diagrammatically. This work lead to the requirement to automatically generate Euler diagrams from an abstract description. The ability to generate diagrams has accelerated their use in information visualization, both in the standard case where multiple grouping of data items inside curves is required and in the area-proportional case where the area of curve intersections is important. As a result, examining the usability of Euler diagrams has become an important aspect of this research. Usability has been investigated by empirical studies, but much research has concentrated on wellformedness, which concerns how curves and other features of the diagram interrelate. This work has revealed the drawability of Euler diagrams under various wellformedness properties and has developed embedding methods that meet these properties. Euler diagram research surveyed in this paper includes theoretical results, generation techniques, transformation methods and the development of automated reasoning systems for Euler diagrams. It also overviews application areas and the ways in which Euler diagrams have been extended.",
"title": ""
},
{
"docid": "db1cdc2a4e3fe26146a1f9c8b0926f9e",
"text": "Sememes are defined as the minimum semantic units of human languages. People have manually annotated lexical sememes for words and form linguistic knowledge bases. However, manual construction is time-consuming and labor-intensive, with significant annotation inconsistency and noise. In this paper, we for the first time explore to automatically predict lexical sememes based on semantic meanings of words encoded by word embeddings. Moreover, we apply matrix factorization to learn semantic relations between sememes and words. In experiments, we take a real-world sememe knowledge base HowNet for training and evaluation, and the results reveal the effectiveness of our method for lexical sememe prediction. Our method will be of great use for annotation verification of existing noisy sememe knowledge bases and annotation suggestion of new words and phrases.",
"title": ""
},
{
"docid": "681641e2593cad85fb1633d1027a9a4f",
"text": "Overview Aggressive driving is a major concern of the American public, ranking at or near the top of traffic safety issues in national surveys of motorists. However, the concept of aggressive driving is not well defined, and its overall impact on traffic safety has not been well quantified due to inadequacies and limitation of available data. This paper reviews published scientific literature on aggressive driving; discusses various definitions of aggressive driving; cites several specific behaviors that are typically associated with aggressive driving; and summarizes past research on the individuals or groups most likely to behave aggressively. Since adequate data to precisely quantify the percentage of fatal crashes that involve aggressive driving do not exist, in this review, we have quantified the number of fatal crashes in which one or more driver actions typically associated with aggressive driving were reported. We found these actions were reported in 56 percent of fatal crashes from 2003 through 2007, with excessive speed being the number one factor. Ideally, an estimate of the prevalence of aggressive driving would include only instances in which such actions were performed intentionally; however, available data on motor vehicle crashes do not contain such information, thus it is important to recognize that this 56 percent may to some degree overestimate the contribution of aggressive driving to fatal crashes. On the other hand, it is likely that aggressive driving contributes to at least some crashes in which it is not reported due to lack of evidence. Despite the clear limitations associated with our attempt to estimate the contribution of potentially-aggressive driver actions to fatal crashes, it is clear that aggressive driving poses a serious traffic safety threat. In addition, our review further indicated that the \" Do as I say, not as I do \" culture, previously reported in the Foundation's Traffic Safety Culture Index, very much applies to aggressive driving.",
"title": ""
},
{
"docid": "237437eae6a6154fb3b32c4c6c1fed07",
"text": "Ontology is playing an increasingly important role in knowledge management and the Semantic Web. This study presents a novel episode-based ontology construction mechanism to extract domain ontology from unstructured text documents. Additionally, fuzzy numbers for conceptual similarity computing are presented for concept clustering and taxonomic relation definitions. Moreover, concept attributes and operations can be extracted from episodes to construct a domain ontology, while non-taxonomic relations can be generated from episodes. The fuzzy inference mechanism is also applied to obtain new instances for ontology learning. Experimental results show that the proposed approach can effectively construct a Chinese domain ontology from unstructured text documents. 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "462afb864b255f94deefb661174a598b",
"text": "Due to the heterogeneous and resource-constrained characters of Internet of Things (IoT), how to guarantee ubiquitous network connectivity is challenging. Although LTE cellular technology is the most promising solution to provide network connectivity in IoTs, information diffusion by cellular network not only occupies its saturating bandwidth, but also costs additional fees. Recently, NarrowBand-IoT (NB-IoT), introduced by 3GPP, is designed for low-power massive devices, which intends to refarm wireless spectrum and increase network coverage. For the sake of providing high link connectivity and capacity, we stimulate effective cooperations among user equipments (UEs), and propose a social-aware group formation framework to allocate resource blocks (RBs) effectively following an in-band NB-IoT solution. Specifically, we first introduce a social-aware multihop device-to-device (D2D) communication scheme to upload information toward the eNodeB within an LTE, so that a logical cooperative D2D topology can be established. Then, we formulate the D2D group formation as a scheduling optimization problem for RB allocation, which selects the feasible partition for the UEs by jointly considering relay method selection and spectrum reuse for NB-IoTs. Since the formulated optimization problem has a high computational complexity, we design a novel heuristic with a comprehensive consideration of power control and relay selection. Performance evaluations based on synthetic and real trace simulations manifest that the presented method can significantly increase link connectivity, link capacity, network throughput, and energy efficiency comparing with the existing solutions.",
"title": ""
},
{
"docid": "3440de9ea0f76ba39949edcb5e2a9b54",
"text": "This document is not intended to create, does not create, and may not be relied upon to create any rights, substantive or procedural, enforceable by law by any party in any matter civil or criminal. Findings and conclusions of the research reported here are those of the authors and do not necessarily reflect the official position or policies of the U.S. Department of Justice. The products, manufacturers, and organizations discussed in this document are presented for informational purposes only and do not constitute product approval or endorsement by the Much of crime mapping is devoted to detecting high-crime-density areas known as hot spots. Hot spot analysis helps police identify high-crime areas, types of crime being committed, and the best way to respond. This report discusses hot spot analysis techniques and software and identifies when to use each one. The visual display of a crime pattern on a map should be consistent with the type of hot spot and possible police action. For example, when hot spots are at specific addresses, a dot map is more appropriate than an area map, which would be too imprecise. In this report, chapters progress in sophis tication. Chapter 1 is for novices to crime mapping. Chapter 2 is more advanced, and chapter 3 is for highly experienced analysts. The report can be used as a com panion to another crime mapping report ■ Identifying hot spots requires multiple techniques; no single method is suffi cient to analyze all types of crime. ■ Current mapping technologies have sig nificantly improved the ability of crime analysts and researchers to understand crime patterns and victimization. ■ Crime hot spot maps can most effective ly guide police action when production of the maps is guided by crime theories (place, victim, street, or neighborhood).",
"title": ""
},
{
"docid": "e4e97569f53ddde763f4f28559c96ba6",
"text": "With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.",
"title": ""
},
{
"docid": "5f4e761af11ace5a4d6819431893a605",
"text": "The high power density converter is required due to the strict demands of volume and weight in more electric aircraft, which makes SiC extremely attractive for this application. In this work, a prototype of 50 kW SiC high power density converter with the topology of two-level three-phase voltage source inverter is demonstrated. This converter is driven at high switching speed based on the optimization in switching characterization. It operates at a switching frequency up to 100 kHz and a low dead time of 250 ns. And the converter efficiency is measured to be 99% at 40 kHz and 97.8% at 100 kHz.",
"title": ""
},
{
"docid": "6cf4315ecce8a06d9354ca2f2684113c",
"text": "BACKGROUND\nNutritional supplementation may be used to treat muscle loss with aging (sarcopenia). However, if physical activity does not increase, the elderly tend to compensate for the increased energy delivered by the supplements with reduced food intake, which results in a calorie substitution rather than supplementation. Thus, an effective supplement should stimulate muscle anabolism more efficiently than food or common protein supplements. We have shown that balanced amino acids stimulate muscle protein anabolism in the elderly, but it is unknown whether all amino acids are necessary to achieve this effect.\n\n\nOBJECTIVE\nWe assessed whether nonessential amino acids are required in a nutritional supplement to stimulate muscle protein anabolism in the elderly.\n\n\nDESIGN\nWe compared the response of muscle protein metabolism to either 18 g essential amino acids (EAA group: n = 6, age 69 +/- 2 y; +/- SD) or 40 g balanced amino acids (18 g essential amino acids + 22 g nonessential amino acids, BAA group; n = 8, age 71 +/- 2 y) given orally in small boluses every 10 min for 3 h to healthy elderly volunteers. Muscle protein metabolism was measured in the basal state and during amino acid administration via L-[ring-(2)H(5)]phenylalanine infusion, femoral arterial and venous catheterization, and muscle biopsies.\n\n\nRESULTS\nPhenylalanine net balance (in nmol x min(-1). 100 mL leg volume(-1)) increased from the basal state (P < 0.01), with no differences between groups (BAA: from -16 +/- 5 to 16 +/- 4; EAA: from -18 +/- 5 to 14 +/- 13) because of an increase (P < 0.01) in muscle protein synthesis and no change in breakdown.\n\n\nCONCLUSION\nEssential amino acids are primarily responsible for the amino acid-induced stimulation of muscle protein anabolism in the elderly.",
"title": ""
},
{
"docid": "09168164e47fd781e4abeca45fb76c35",
"text": "AUTOSAR is a standard for the development of software for embedded devices, primarily created for the automotive domain. It specifies a software architecture with more than 80 software modules that provide services to one or more software components. With the trend towards integrating safety-relevant systems into embedded devices, conformance with standards such as ISO 26262 [ISO11] or ISO/IEC 61508 [IEC10] becomes increasingly important. This article presents an approach to providing freedom from interference between software components by using the MPU available on many modern microcontrollers. Each software component gets its own dedicated memory area, a so-called memory partition. This concept is well known in other industries like the aerospace industry, where the IMA architecture is now well established. The memory partitioning mechanism is implemented by a microkernel, which integrates seamlessly into the architecture specified by AUTOSAR. The development has been performed as SEooC as described in ISO 26262, which is a new development approach. We describe the procedure for developing an SEooC. AUTOSAR: AUTomotive Open System ARchitecture, see [ASR12]. MPU: Memory Protection Unit. 3 IMA: Integrated Modular Avionics, see [RTCA11]. 4 SEooC: Safety Element out of Context, see [ISO11].",
"title": ""
}
] |
scidocsrr
|
c7e75410ac860e6c15d26fac2db620a2
|
Vertical Versus Shared Leadership as Predictors of the Effectiveness of Change Management Teams: An Examination of Aversive, Directive, Transactional, Transformational, and Empowering Leader Behaviors
|
[
{
"docid": "54850f62bf84e01716bc009f68aac3d7",
"text": "© 1966 by the Massachusetts Institute of Technology. From Leadership and Motivation, Essays of Douglas McGregor, edited by W. G. Bennis and E. H. Schein (Cambridge, MA: MIT Press, 1966): 3–20. Reprinted with permission. I t has become trite to say that the most significant developments of the next quarter century will take place not in the physical but in the social sciences, that industry—the economic organ of society—has the fundamental know-how to utilize physical science and technology for the material benefit of mankind, and that we must now learn how to utilize the social sciences to make our human organizations truly effective. Many people agree in principle with such statements; but so far they represent a pious hope—and little else. Consider with me, if you will, something of what may be involved when we attempt to transform the hope into reality.",
"title": ""
},
{
"docid": "a6872c1cab2577547c9a7643a6acd03e",
"text": "Current theories and models of leadership seek to explain the influence of the hierarchical superior upon the satisfaction and performance of subordinates. While disagreeing with one another in important respects, these theories and models share an implicit assumption that while the style of leadership likely to be effective may vary according to the situation, some leadership style will be effective regardless of the situation. It has been found, however, that certain individual, task, and organizational variables act as \"substitutes for leadership,\" negating the hierarchical superior's ability to exert either positive or negative influence over subordinate attitudes and effectiveness. This paper identifies a number of such substitutes for leadership, presents scales of questionnaire items for their measurement, and reports some preliminary tests.",
"title": ""
}
] |
[
{
"docid": "6bbcbe9f4f4ede20d2b86f6da9167110",
"text": "Avoiding vehicle-to-pedestrian crashes is a critical requirement for nowadays advanced driver assistant systems (ADAS) and future self-driving vehicles. Accordingly, detecting pedestrians from raw sensor data has a history of more than 15 years of research, with vision playing a central role. During the last years, deep learning has boosted the accuracy of image-based pedestrian detectors. However, detection is just the first step towards answering the core question, namely is the vehicle going to crash with a pedestrian provided preventive actions are not taken? Therefore, knowing as soon as possible if a detected pedestrian has the intention of crossing the road ahead of the vehicle is essential for performing safe and comfortable maneuvers that prevent a crash. However, compared to pedestrian detection, there is relatively little literature on detecting pedestrian intentions. This paper aims to contribute along this line by presenting a new vision-based approach which analyzes the pose of a pedestrian along several frames to determine if he or she is going to enter the road or not. We present experiments showing 750 ms of anticipation for pedestrians crossing the road, which at a typical urban driving speed of 50 km/h can provide 15 additional meters (compared to a pure pedestrian detector) for vehicle automatic reactions or to warn the driver. Moreover, in contrast with state-of-the-art methods, our approach is monocular, neither requiring stereo nor optical flow information.",
"title": ""
},
{
"docid": "496e0a7bfd230f00bafefd6c1c8f29da",
"text": "Modern society depends on information technology in nearly every facet of human activity including, finance, transportation, education, government, and defense. Organizations are exposed to various and increasing kinds of risks, including information technology risks. Several standards, best practices, and frameworks have been created to help organizations manage these risks. The purpose of this research work is to highlight the challenges facing enterprises in their efforts to properly manage information security risks when adopting international standards and frameworks. To assist in selecting the best framework to use in risk management, the article presents an overview of the most popular and widely used standards and identifies selection criteria. It suggests an approach to proper implementation as well. A set of recommendations is put forward with further research opportunities on the subject. KeywordsInformation security; risk management; security frameworks; security standards; security management.",
"title": ""
},
{
"docid": "2b3c507c110452aa54c046f9e7f9200d",
"text": "Word embeddings are crucial to many natural language processing tasks. The quality of embeddings relies on large nonnoisy corpora. Arabic dialects lack large corpora and are noisy, being linguistically disparate with no standardized spelling. We make three contributions to address this noise. First, we describe simple but effective adaptations to word embedding tools to maximize the informative content leveraged in each training sentence. Second, we analyze methods for representing disparate dialects in one embedding space, either by mapping individual dialects into a shared space or learning a joint model of all dialects. Finally, we evaluate via dictionary induction, showing that two metrics not typically reported in the task enable us to analyze our contributions’ effects on low and high frequency words. In addition to boosting performance between 2-53%, we specifically improve on noisy, low frequency forms without compromising accuracy on high frequency forms.",
"title": ""
},
{
"docid": "e706c5071b87561f08ee8f9610e41e2e",
"text": "Machine learning models are vulnerable to simple model stealing attacks if the adversary can obtain output labels for chosen inputs. To protect against these attacks, it has been proposed to limit the information provided to the adversary by omitting probability scores, significantly impacting the utility of the provided service. In this work, we illustrate how a service provider can still provide useful, albeit misleading, class probability information, while significantly limiting the success of the attack. Our defense forces the adversary to discard the class probabilities, requiring significantly more queries before they can train a model with comparable performance. We evaluate several attack strategies, model architectures, and hyperparameters under varying adversarial models, and evaluate the efficacy of our defense against the strongest adversary. Finally, we quantify the amount of noise injected into the class probabilities to mesure the loss in utility, e.g., adding 1.26 nats per query on CIFAR-10 and 3.27 on MNIST. Our evaluation shows our defense can degrade the accuracy of the stolen model at least 20%, or require up to 64 times more queries while keeping the accuracy of the protected model almost intact.",
"title": ""
},
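A hedged illustration of the defense idea summarized in the preceding record: return perturbed class probabilities that keep the predicted label, and quantify the injected noise per query in nats. The Dirichlet noise model and the helper names (`perturb_probs`, `kl_nats`) are assumptions made for this sketch, not the paper's actual mechanism.

```python
import numpy as np

def perturb_probs(p, alpha=50.0, rng=None):
    """Return a noisy copy of probability vector p that keeps the same argmax.

    Dirichlet resampling centered on p is an assumed noise model, used here
    only to illustrate the general idea of misleading class probabilities.
    """
    rng = np.random.default_rng() if rng is None else rng
    while True:
        q = rng.dirichlet(alpha * p + 1e-3)
        if q.argmax() == p.argmax():      # preserve the top-1 label for utility
            return q

def kl_nats(p, q):
    """KL(p || q) in nats, one way to quantify the injected noise per query."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

if __name__ == "__main__":
    p = np.array([0.70, 0.20, 0.05, 0.05])   # hypothetical model output for one query
    q = perturb_probs(p)
    print("returned:", np.round(q, 3), "noise (nats):", round(kl_nats(p, q), 3))
```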
{
"docid": "1364758783c75a39112d01db7e7cfc63",
"text": "Steganography plays an important role in secret communication in digital worlds and open environments like Internet. Undetectability and imperceptibility of confidential data are major challenges of steganography methods. This article presents a secure steganography method in frequency domain based on partitioning approach. The cover image is partitioned into 8×8 blocks and then integer wavelet transform through lifting scheme is performed for each block. The symmetric RC4 encryption method is applied to secret message to obtain high security and authentication. Tree Scan Order is performed in frequency domain to find proper location for embedding secret message. Secret message is embedded in cover image with minimal degrading of the quality. Experimental results demonstrate that the proposed method has achieved superior performance in terms of high imperceptibility of stego-image and it is secure against statistical attack in comparison with existing methods.",
"title": ""
},
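To make two of the building blocks named above concrete (8×8 block partitioning and an integer wavelet transform computed with the lifting scheme), the sketch below applies a one-level integer Haar transform (S-transform) to each block of a grayscale image. The RC4 encryption and tree-scan-order embedding steps are omitted, and the helper names are assumptions rather than the paper's implementation.

```python
import numpy as np

def haar_lift_1d(x):
    """One-level integer Haar (S-transform) via lifting on a 1-D integer array."""
    a, b = x[0::2].astype(int), x[1::2].astype(int)
    d = b - a                # predict step: detail coefficients
    s = a + (d >> 1)         # update step: integer approximation coefficients
    return np.concatenate([s, d])

def haar_lift_2d(block):
    """Apply the 1-D lifting step to every row, then every column, of a block."""
    rows = np.array([haar_lift_1d(r) for r in block])
    return np.array([haar_lift_1d(c) for c in rows.T]).T

def blocks_8x8(img):
    """Yield the 8x8 partitions of an image whose sides are multiples of 8."""
    h, w = img.shape
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            yield img[i:i + 8, j:j + 8]

if __name__ == "__main__":
    img = np.arange(16 * 16).reshape(16, 16) % 256      # toy 16x16 "cover image"
    for blk in blocks_8x8(img):
        coeffs = haar_lift_2d(blk)   # a real embedder would modify chosen coefficients here
    print(coeffs)
```

The lifting form is exactly invertible over integers (a = s - (d >> 1), b = d + a), which is what makes it attractive for low-distortion embedding.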
{
"docid": "a25338ae0035e8a90d6523ee5ef667f7",
"text": "Activity recognition in video is dominated by low- and mid-level features, and while demonstrably capable, by nature, these features carry little semantic meaning. Inspired by the recent object bank approach to image representation, we present Action Bank, a new high-level representation of video. Action bank is comprised of many individual action detectors sampled broadly in semantic space as well as viewpoint space. Our representation is constructed to be semantically rich and even when paired with simple linear SVM classifiers is capable of highly discriminative performance. We have tested action bank on four major activity recognition benchmarks. In all cases, our performance is better than the state of the art, namely 98.2% on KTH (better by 3.3%), 95.0% on UCF Sports (better by 3.7%), 57.9% on UCF50 (baseline is 47.9%), and 26.9% on HMDB51 (baseline is 23.2%). Furthermore, when we analyze the classifiers, we find strong transfer of semantics from the constituent action detectors to the bank classifier.",
"title": ""
},
{
"docid": "b525081979bebe54e2262086170cbb31",
"text": " Activity recognition strategies assume large amounts of labeled training data which require tedious human labor to label. They also use hand engineered features, which are not best for all applications, hence required to be done separately for each application. Several recognition strategies have benefited from deep learning for unsupervised feature selection, which has two important property – fine tuning and incremental update. Question! Can deep learning be leveraged upon for continuous learning of activity models from streaming videos? Contributions",
"title": ""
},
{
"docid": "cc204a8e12f47259059488bb421f8d32",
"text": "Phishing is a web-based attack that uses social engineering techniques to exploit internet users and acquire sensitive data. Most phishing attacks work by creating a fake version of the real site's web interface to gain the user's trust.. We applied different methods for detecting phishing using known as well as new features. In this we used the heuristic-based approach to handle phishing attacks, in this approached several website features are collected and used to identify the type of the website. The heuristic-based approach can recognize newly created fake websites in real-time. One intelligent approach based on genetic algorithm seems a potential solution that may effectively detect phishing websites with high accuracy and prevent it by blocking them.",
"title": ""
},
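As a rough sketch of how a genetic algorithm can be applied to heuristic phishing features, the toy code below evolves per-feature weights for a simple threshold classifier. The feature list, fitness function, and GA parameters are illustrative assumptions and are not taken from the cited work.

```python
import random

FEATURES = ["has_ip_url", "long_url", "at_symbol", "no_https", "many_subdomains"]  # assumed heuristics

def classify(weights, sample, threshold=0.5):
    score = sum(w * sample[f] for w, f in zip(weights, FEATURES))
    return 1 if score >= threshold else 0                 # 1 = phishing

def fitness(weights, dataset):
    return sum(classify(weights, x) == y for x, y in dataset) / len(dataset)

def evolve(dataset, pop_size=30, generations=50, mut=0.1):
    pop = [[random.random() for _ in FEATURES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, dataset), reverse=True)
        parents = pop[: pop_size // 2]                     # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(FEATURES))       # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                      # point mutation
                child[random.randrange(len(FEATURES))] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda w: fitness(w, dataset))

if __name__ == "__main__":
    data = [({"has_ip_url": 1, "long_url": 1, "at_symbol": 0, "no_https": 1, "many_subdomains": 1}, 1),
            ({"has_ip_url": 0, "long_url": 0, "at_symbol": 0, "no_https": 0, "many_subdomains": 0}, 0)]
    print([round(w, 2) for w in evolve(data)])
```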
{
"docid": "a55881d3cd1091c0b7f614142022718c",
"text": "Successful teams are characterized by high levels of trust between team members, allowing the team to learn from mistakes, take risks, and entertain diverse ideas. We investigated a robot's potential to shape trust within a team through the robot's expressions of vulnerability. We conducted a between-subjects experiment (N = 35 teams, 105 participants) comparing the behavior of three human teammates collaborating with either a social robot making vulnerable statements or with a social robot making neutral statements. We found that, in a group with a robot making vulnerable statements, participants responded more to the robot's comments and directed more of their gaze to the robot, displaying a higher level of engagement with the robot. Additionally, we discovered that during times of tension, human teammates in a group with a robot making vulnerable statements were more likely to explain their failure to the group, console team members who had made mistakes, and laugh together, all actions that reduce the amount of tension experienced by the team. These results suggest that a robot's vulnerable behavior can have \"ripple effects\" on their human team members' expressions of trust-related behavior.",
"title": ""
},
{
"docid": "e8e8e6d288491e715177a03601500073",
"text": "Protein–protein interactions constitute the regulatory network that coordinates diverse cellular functions. Co-immunoprecipitation (co-IP) is a widely used and effective technique to study protein–protein interactions in living cells. However, the time and cost for the preparation of a highly specific antibody is the major disadvantage associated with this technique. In the present study, a co-IP system was developed to detect protein–protein interactions based on an improved protoplast transient expression system by using commercially available antibodies. This co-IP system eliminates the need for specific antibody preparation and transgenic plant production. Leaf sheaths of rice green seedlings were used for the protoplast transient expression system which demonstrated high transformation and co-transformation efficiencies of plasmids. The transient expression system developed by this study is suitable for subcellular localization and protein detection. This work provides a rapid, reliable, and cost-effective system to study transient gene expression, protein subcellular localization, and characterization of protein–protein interactions in vivo.",
"title": ""
},
{
"docid": "de0c3f4d5cbad1ce78e324666937c232",
"text": "We propose an unsupervised method for learning multi-stage hierarchies of sparse convolutional features. While sparse coding has become an in creasingly popular method for learning visual features, it is most often traine d at the patch level. Applying the resulting filters convolutionally results in h ig ly redundant codes because overlapping patches are encoded in isolation. By tr aining convolutionally over large image windows, our method reduces the redudancy b etween feature vectors at neighboring locations and improves the efficienc y of the overall representation. In addition to a linear decoder that reconstruct s the image from sparse features, our method trains an efficient feed-forward encod er that predicts quasisparse features from the input. While patch-based training r arely produces anything but oriented edge detectors, we show that convolution al training produces highly diverse filters, including center-surround filters, corner detectors, cross detectors, and oriented grating detectors. We show that using these filters in multistage convolutional network architecture improves perfor mance on a number of visual recognition and detection tasks.",
"title": ""
},
{
"docid": "f174469e907b60cd481da6b42bafa5f9",
"text": "A static program checker that performs modular checking can check one program module for errors without needing to analyze the entire program. Modular checking requires that each module be accompanied by annotations that specify the module. To help reduce the cost of writing specifications, this paper presents Houdini, an annotation assistant for the modular checker ESC/Java. To infer suitable ESC/Java annotations for a given program, Houdini generates a large number of candidate annotations and uses ESC/Java to verify or refute each of these annotations. The paper describes the design, implementation, and preliminary evaluation of Houdini.",
"title": ""
},
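The core of Houdini is a refutation loop: guess a large candidate annotation set, ask the checker which candidates it cannot verify, drop them, and repeat until a fixed point. The sketch below assumes a `check` callable standing in for a run of ESC/Java; the dependency model in the usage example is purely hypothetical.

```python
def houdini_loop(program, candidates, check):
    """Greatest-fixed-point refutation loop in the style of the Houdini annotation assistant.

    `check(program, annotations)` is assumed to return the subset of annotations
    that the underlying modular checker could not verify on this iteration.
    """
    annotations = set(candidates)
    while True:
        refuted = check(program, annotations)
        if not refuted:
            return annotations            # the survivors are mutually consistent
        annotations -= refuted

if __name__ == "__main__":
    # Fake checker: an annotation survives only if all of its (hypothetical)
    # premises are still present in the candidate set.
    deps = {"inv_a": set(), "inv_b": {"inv_a"}, "inv_c": {"missing"}}
    fake_check = lambda prog, anns: {a for a in anns if not deps[a] <= anns}
    print(houdini_loop("Demo.java", deps.keys(), fake_check))   # {'inv_a', 'inv_b'}
```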
{
"docid": "a086686928333e06592cd901e8a346bd",
"text": "BACKGROUND\nClosed-loop artificial pancreas device (APD) systems are externally worn medical devices that are being developed to enable people with type 1 diabetes to regulate their blood glucose levels in a more automated way. The innovative concept of this emerging technology is that hands-free, continuous, glycemic control can be achieved by using digital communication technology and advanced computer algorithms.\n\n\nMETHODS\nA horizon scanning review of this field was conducted using online sources of intelligence to identify systems in development. The systems were classified into subtypes according to their level of automation, the hormonal and glycemic control approaches used, and their research setting.\n\n\nRESULTS\nEighteen closed-loop APD systems were identified. All were being tested in clinical trials prior to potential commercialization. Six were being studied in the home setting, 5 in outpatient settings, and 7 in inpatient settings. It is estimated that 2 systems may become commercially available in the EU by the end of 2016, 1 during 2017, and 2 more in 2018.\n\n\nCONCLUSIONS\nThere are around 18 closed-loop APD systems progressing through early stages of clinical development. Only a few of these are currently in phase 3 trials and in settings that replicate real life.",
"title": ""
},
{
"docid": "6c68bccf376da1f963aaa8ec5e08b646",
"text": "The composition of the gut microbiota is in constant flow under the influence of factors such as the diet, ingested drugs, the intestinal mucosa, the immune system, and the microbiota itself. Natural variations in the gut microbiota can deteriorate to a state of dysbiosis when stress conditions rapidly decrease microbial diversity and promote the expansion of specific bacterial taxa. The mechanisms underlying intestinal dysbiosis often remain unclear given that combinations of natural variations and stress factors mediate cascades of destabilizing events. Oxidative stress, bacteriophages induction and the secretion of bacterial toxins can trigger rapid shifts among intestinal microbial groups thereby yielding dysbiosis. A multitude of diseases including inflammatory bowel diseases but also metabolic disorders such as obesity and diabetes type II are associated with intestinal dysbiosis. The characterization of the changes leading to intestinal dysbiosis and the identification of the microbial taxa contributing to pathological effects are essential prerequisites to better understand the impact of the microbiota on health and disease.",
"title": ""
},
{
"docid": "742498bfa62278bd5c070145ad3750b0",
"text": "In this paper we address the demand for flexibility and economic efficiency in industrial autonomous guided vehicle (AGV) systems by the use of cloud computing. We propose a cloud-based architecture that moves parts of mapping, localization and path planning tasks to a cloud server. We use a cooperative longterm Simultaneous Localization and Mapping (SLAM) approach which merges environment perception of stationary sensors and mobile robots into a central Holistic Environment Model (HEM). Further, we deploy a hierarchical cooperative path planning approach using Conflict-Based Search (CBS) to find optimal sets of paths which are then provided to the mobile robots. For communication we utilize the Manufacturing Service Bus (MSB) which is a component of the manufacturing cloud platform Virtual Fort Knox (VFK). We demonstrate the feasibility of this approach in a real-life industrial scenario. Additionally, we evaluate the system's communication and the planner for various numbers of agents.",
"title": ""
},
{
"docid": "3a1b9a47a7fe51ab19f53ae6aaa18d6d",
"text": "The overall context proposed in this paper is part of our long-standing goal to contribute to a group of community that suffers from Autism Spectrum Disorder (ASD); a lifelong developmental disability. The objective of this paper is to present the development of our pilot experiment protocol where children with ASD will be exposed to the humanoid robot NAO. This fully programmable humanoid offers an ideal research platform for human-robot interaction (HRI). This study serves as the platform for fundamental investigation to observe the initial response and behavior of the children in the said environment. The system utilizes external cameras, besides the robot's own visual system. Anticipated results are the real initial response and reaction of ASD children during the HRI with the humanoid robot. This shall leads to adaptation of new procedures in ASD therapy based on HRI, especially for a non-technical-expert person to be involved in the robotics intervention during the therapy session.",
"title": ""
},
{
"docid": "823c00a4cbbfb3ca5fc302dfeff0fbb3",
"text": "Given that the synthesis of cumulated knowledge is an essential condition for any field to grow and develop, we believe that the enhanced role of IS reviews requires that this expository form be given careful scrutiny. Over the past decade, several senior scholars have made calls for more review papers in our field. While the number of IS review papers has substantially increased in recent years, no prior research has attempted to develop a general framework to conduct and evaluate the rigor of standalone reviews. In this paper, we fill this gap. More precisely, we present a set of guidelines for guiding and evaluating IS literature reviews and specify to which review types they apply. To do so, we first distinguish between four broad categories of review papers and then propose a set of guidelines that are grouped according to the generic phases and steps of the review process. We hope our work will serve as a valuable source for those conducting, evaluating, and/or interpreting reviews in our field.",
"title": ""
},
{
"docid": "56266e0f3be7a58cfed1c9bdd54798e5",
"text": "In this paper, the design methods for four-way power combiners based on eight-port and nine-port mode networks are proposed. The eight-port mode network is fundamentally a two-stage binary four-way power combiner composed of three magic-Ts: two compact H-plane magic-Ts and one magic-T with coplanar arms. The two compact H-plane magic-Ts and the magic-T with coplanar arms function as the first and second stages, respectively. Thus, four-way coaxial-to-coaxial power combiners can be designed. A one-stage four-way power combiner based on a nine-port mode network is also proposed. Two matched coaxial ports and two matched rectangular ports are used to provide high isolation along the E-plane and the H-plane, respectively. The simulations agree well with the measured results. The designed four-way power combiners are superior in terms of their compact cross-sectional areas, a high degree of isolation, low insertion loss, low output-amplitude imbalance, and low phase imbalance, which make them well suited for solid-state power combination.",
"title": ""
},
{
"docid": "a4c80a334a6f9cd70fe5c7000740c18f",
"text": "CMOS SRAM cell is very less power consuming and have less read and write time. Higher cell ratios can decrease the read and write time and improve stability. PMOS transistor with less width reduces the power consumption. This paper implements 6T SRAM cell with reduced read and write time, area and power consumption. It has been noticed often that increased memory capacity increases the bit-line parasitic capacitance which in turn slows down voltage sensing and make bit-line voltage swings energy expensive. This result in slower and more energy hungry memories.. In this paper Two SRAM cell is being designed for 4 Kb of memory core with supply voltage 1.8 V. A technique of global bit line is used for reducing the power consumption and increasing the memory capacity.",
"title": ""
},
{
"docid": "08d8e372c5ae4eef9848552ee87fbd64",
"text": "What chiefly distinguishes cerebral cortex from other parts of the central nervous system is the great diversity of its cell types and inter-connexions. It would be astonishing if such a structure did not profoundly modify the response patterns of fibres coming into it. In the cat's visual cortex, the receptive field arrangements of single cells suggest that there is indeed a degree of complexity far exceeding anything yet seen at lower levels in the visual system. In a previous paper we described receptive fields of single cortical cells, observing responses to spots of light shone on one or both retinas (Hubel & Wiesel, 1959). In the present work this method is used to examine receptive fields of a more complex type (Part I) and to make additional observations on binocular interaction (Part II). This approach is necessary in order to understand the behaviour of individual cells, but it fails to deal with the problem of the relationship of one cell to its neighbours. In the past, the technique of recording evoked slow waves has been used with great success in studies of functional anatomy. It was employed by Talbot & Marshall (1941) and by Thompson, Woolsey & Talbot (1950) for mapping out the visual cortex in the rabbit, cat, and monkey. Daniel & Whitteiidge (1959) have recently extended this work in the primate. Most of our present knowledge of retinotopic projections, binocular overlap, and the second visual area is based on these investigations. Yet the method of evoked potentials is valuable mainly for detecting behaviour common to large populations of neighbouring cells; it cannot differentiate functionally between areas of cortex smaller than about 1 mm2. To overcome this difficulty a method has in recent years been developed for studying cells separately or in small groups during long micro-electrode penetrations through nervous tissue. Responses are correlated with cell location by reconstructing the electrode tracks from histological material. These techniques have been applied to CAT VISUAL CORTEX 107 the somatic sensory cortex of the cat and monkey in a remarkable series of studies by Mountcastle (1957) and Powell & Mountcastle (1959). Their results show that the approach is a powerful one, capable of revealing systems of organization not hinted at by the known morphology. In Part III of the present paper we use this method in studying the functional architecture of the visual cortex. It helped us attempt to explain on anatomical …",
"title": ""
}
] |
scidocsrr
|
45ccd6eb242f7eb66191c737e6f6b719
|
Fundamental movement skills in children and adolescents: review of associated health benefits.
|
[
{
"docid": "61eb3c9f401ec9d6e89264297395f9d3",
"text": "PURPOSE\nCross-sectional evidence has demonstrated the importance of motor skill proficiency to physical activity participation, but it is unknown whether skill proficiency predicts subsequent physical activity.\n\n\nMETHODS\nIn 2000, children's proficiency in object control (kick, catch, throw) and locomotor (hop, side gallop, vertical jump) skills were assessed in a school intervention. In 2006/07, the physical activity of former participants was assessed using the Australian Physical Activity Recall Questionnaire. Linear regressions examined relationships between the reported time adolescents spent participating in moderate-to-vigorous or organized physical activity and their childhood skill proficiency, controlling for gender and school grade. A logistic regression examined the probability of participating in vigorous activity.\n\n\nRESULTS\nOf 481 original participants located, 297 (62%) consented and 276 (57%) were surveyed. All were in secondary school with females comprising 52% (144). Adolescent time in moderate-to-vigorous and organized activity was positively associated with childhood object control proficiency. Respective models accounted for 12.7% (p = .001), and 18.2% of the variation (p = .003). Object control proficient children became adolescents with a 10% to 20% higher chance of vigorous activity participation.\n\n\nCONCLUSIONS\nObject control proficient children were more likely to become active adolescents. Motor skill development should be a key strategy in childhood interventions aiming to promote long-term physical activity.",
"title": ""
}
] |
[
{
"docid": "ae46639adab554a921b5213b385a4472",
"text": "We develop a framework for rendering photographic images by directly optimizing their perceptual similarity to the original visual scene. Specifically, over the set of all images that can be rendered on a given display, we minimize the normalized Laplacian pyramid distance (NLPD), a measure of perceptual dissimilarity that is derived from a simple model of the early stages of the human visual system. When rendering images acquired with a higher dynamic range than that of the display, we find that the optimization boosts the contrast of low-contrast features without introducing significant artifacts, yielding results of comparable visual quality to current state-of-the-art methods, but without manual intervention or parameter adjustment. We also demonstrate the effectiveness of the framework for a variety of other display constraints, including limitations on minimum luminance (black point), mean luminance (as a proxy for energy consumption), and quantized luminance levels (halftoning). We show that the method may generally be used to enhance details and contrast, and, in particular, can be used on images degraded by optical scattering (e.g., fog). Finally, we demonstrate the necessity of each of the NLPD components-an initial power function, a multiscale transform, and local contrast gain control-in achieving these results and we show that NLPD is competitive with the current state-of-the-art image quality metrics.",
"title": ""
},
{
"docid": "22d78ead5b703225b34f3c29a5ff07ad",
"text": "Children's experiences in early childhood have significant lasting effects in their overall development and in the United States today the majority of young children spend considerable amounts of time in early childhood education settings. At the national level, there is an expressed concern about the low levels of student interest and success in science, technology, engineering, and mathematics (STEM). Bringing these two conversations together our research focuses on how young children of preschool age exhibit behaviors that we consider relevant in engineering. There is much to be explored in STEM education at such an early age, and in order to proceed we created an experimental observation protocol in which we identified various pre-engineering behaviors based on pilot observations, related literature and expert knowledge. This protocol is intended for use by preschool teachers and other professionals interested in studying engineering in the preschool classroom.",
"title": ""
},
{
"docid": "6c270eaa2b9b9a0e140e0d8879f5d383",
"text": "More than 75% of hospital-acquired or nosocomial urinary tract infections are initiated by urinary catheters, which are used during the treatment of 15-25% of hospitalized patients. Among other purposes, urinary catheters are primarily used for draining urine after surgeries and for urinary incontinence. During catheter-associated urinary tract infections, bacteria travel up to the bladder and cause infection. A major cause of catheter-associated urinary tract infection is attributed to the use of non-ideal materials in the fabrication of urinary catheters. Such materials allow for the colonization of microorganisms, leading to bacteriuria and infection, depending on the severity of symptoms. The ideal urinary catheter is made out of materials that are biocompatible, antimicrobial, and antifouling. Although an abundance of research has been conducted over the last forty-five years on the subject, the ideal biomaterial, especially for long-term catheterization of more than a month, has yet to be developed. The aim of this review is to highlight the recent advances (over the past 10years) in developing antimicrobial materials for urinary catheters and to outline future requirements and prospects that guide catheter materials selection and design.\n\n\nSTATEMENT OF SIGNIFICANCE\nThis review article intends to provide an expansive insight into the various antimicrobial agents currently being researched for urinary catheter coatings. According to CDC, approximately 75% of urinary tract infections are caused by urinary catheters and 15-25% of hospitalized patients undergo catheterization. In addition to these alarming statistics, the increasing cost and health related complications associated with catheter associated UTIs make the research for antimicrobial urinary catheter coatings even more pertinent. This review provides a comprehensive summary of the history, the latest progress in development of the coatings and a brief conjecture on what the future entails for each of the antimicrobial agents discussed.",
"title": ""
},
{
"docid": "1d6e20debb1fc89079e0c5e4861e3ca4",
"text": "BACKGROUND\nThe aims of this study were to identify the independent factors associated with intermittent addiction and addiction to the Internet and to examine the psychiatric symptoms in Korean adolescents when the demographic and Internet-related factors were controlled.\n\n\nMETHODS\nMale and female students (N = 912) in the 7th-12th grades were recruited from 2 junior high schools and 2 academic senior high schools located in Seoul, South Korea. Data were collected from November to December 2004 using the Internet-Related Addiction Scale and the Symptom Checklist-90-Revision. A total of 851 subjects were analyzed after excluding the subjects who provided incomplete data.\n\n\nRESULTS\nApproximately 30% (n = 258) and 4.3% (n = 37) of subjects showed intermittent Internet addiction and Internet addiction, respectively. Multivariate logistic regression analysis showed that junior high school students and students having a longer period of Internet use were significantly associated with intermittent addiction. In addition, male gender, chatting, and longer Internet use per day were significantly associated with Internet addiction. When the demographic and Internet-related factors were controlled, obsessive-compulsive and depressive symptoms were found to be independently associated factors for intermittent addiction and addiction to the Internet, respectively.\n\n\nCONCLUSIONS\nStaff working in junior or senior high schools should pay closer attention to those students who have the risk factors for intermittent addiction and addiction to the Internet. Early preventive intervention programs are needed that consider the individual severity level of Internet addiction.",
"title": ""
},
{
"docid": "3f1a2efdff6be4df064f3f5b978febee",
"text": "D-galactose injection has been shown to induce many changes in mice that represent accelerated aging. This mouse model has been widely used for pharmacological studies of anti-aging agents. The underlying mechanism of D-galactose induced aging remains unclear, however, it appears to relate to glucose and 1ipid metabolic disorders. Currently, there has yet to be a study that focuses on investigating gene expression changes in D-galactose aging mice. In this study, integrated analysis of gas chromatography/mass spectrometry-based metabonomics and gene expression profiles was used to investigate the changes in transcriptional and metabolic profiles in mimetic aging mice injected with D-galactose. Our findings demonstrated that 48 mRNAs were differentially expressed between control and D-galactose mice, and 51 potential biomarkers were identified at the metabolic level. The effects of D-galactose on aging could be attributed to glucose and 1ipid metabolic disorders, oxidative damage, accumulation of advanced glycation end products (AGEs), reduction in abnormal substance elimination, cell apoptosis, and insulin resistance.",
"title": ""
},
{
"docid": "03fc999e12a705e5228d44d97e126ee1",
"text": "This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition where skeleton joint information, depth and RGB images, are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatiotemporal representations using deep neural networks suited to the input modality: a Gaussian-Bernouilli Deep Belief Network (DBN) to handle skeletal dynamics, and a 3D Convolutional Neural Network (3DCNN) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, therefore opening the door to the use of deep learning techniques in order to further explore multimodal time series data.",
"title": ""
},
{
"docid": "b5dd3b83c680a9b3717597b92b03bb6b",
"text": "In this correspondence we have not addressed the problem of constructing actual codebooks. Information theory indicates that, in principle , one can construct a codebook by drawing each component of each codeword independently, using the distribution obtained from the Blahut algorithm. This procedure is not in general practical. Practical ways to construct codewords may be found in the extensive literature on vector quantization (see, e.g., the tutorial paper by R. M. Gray [19] or the book [20]). It is not clear at this point if codebook constructing methods from the vector quantizer literature are practical in the setting of this correspondence. Alternatively, one can trade complexity and performance and construct a scalar quantizer. In this case, the distribution obtained from the Blahut algorithm may be used in the Max–Lloyd algorithm [21], [22]. Grenander, \" Conditional-mean estimation via jump-diffusion processes in multiple target tracking/recog-nition, \" IEEE Trans.matic target recognition organized via jump-diffusion algorithms, \" IEEE bounds for estimators on matrix lie groups for atr, \" IEEE Trans. Abstract—A hyperspectral image can be considered as an image cube where the third dimension is the spectral domain represented by hundreds of spectral wavelengths. As a result, a hyperspectral image pixel is actually a column vector with dimension equal to the number of spectral bands and contains valuable spectral information that can be used to account for pixel variability, similarity, and discrimination. In this correspondence, we present a new hyperspectral measure, Spectral Information Measure (SIM), to describe spectral variability and two criteria, spectral information divergence and spectral discriminatory probability, for spectral similarity and discrimination, respectively. The spectral information measure is an information-theoretic measure which treats each pixel as a random variable using its spectral signature histogram as the desired probability distribution. Spectral Information Divergence (SID) compares the similarity between two pixels by measuring the probabilistic discrepancy between two corresponding spectral signatures. The spectral discriminatory probability calculates spectral probabilities of a spectral database (library) relative to a pixel to be identified so as to achieve material identification. In order to compare the discriminatory power of one spectral measure relative to another , a criterion is also introduced for performance evaluation, which is based on the power of discriminating one pixel from another relative to a reference pixel. The experimental results demonstrate that the new hyper-spectral measure can characterize spectral variability more effectively than the commonly used Spectral Angle Mapper (SAM).",
"title": ""
},
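The two measures compared in the hyperspectral record above can be written out directly: SID normalizes each signature to a probability vector and sums the two relative entropies, while SAM is the angle between the signatures. The numpy sketch below uses made-up four-band signatures purely for illustration.

```python
import numpy as np

def sid(x, y, eps=1e-12):
    """Spectral Information Divergence between two spectral signatures."""
    p = x / (x.sum() + eps)
    q = y / (y.sum() + eps)
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)))
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)))
    return float(kl_pq + kl_qp)

def sam(x, y):
    """Spectral Angle Mapper: the angle (radians) between two signatures."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

if __name__ == "__main__":
    a = np.array([0.10, 0.35, 0.40, 0.15])   # illustrative 4-band signatures
    b = np.array([0.12, 0.30, 0.42, 0.16])
    print("SID:", round(sid(a, b), 5), "SAM:", round(sam(a, b), 5))
```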
{
"docid": "f6e080319e7455fda0695f324941edcb",
"text": "The Internet of Things (IoT) is a distributed system of physical objects that requires the seamless integration of hardware (e.g., sensors, actuators, electronics) and network communications in order to collect and exchange data. IoT smart objects need to be somehow identified to determine the origin of the data and to automatically detect the elements around us. One of the best positioned technologies to perform identification is RFID (Radio Frequency Identification), which in the last years has gained a lot of popularity in applications like access control, payment cards or logistics. Despite its popularity, RFID security has not been properly handled in numerous applications. To foster security in such applications, this article includes three main contributions. First, in order to establish the basics, a detailed review of the most common flaws found in RFID-based IoT systems is provided, including the latest attacks described in the literature. Second, a novel methodology that eases the detection and mitigation of such flaws is presented. Third, the latest RFID security tools are analyzed and the methodology proposed is applied through one of them (Proxmark 3) to validate it. Thus, the methodology is tested in different scenarios where tags are commonly used for identification. In such systems it was possible to clone transponders, extract information, and even emulate both tags and readers. Therefore, it is shown that the methodology proposed is useful for auditing security and reverse engineering RFID communications in IoT applications. It must be noted that, although this paper is aimed at fostering RFID communications security in IoT applications, the methodology can be applied to any RFID communications protocol.",
"title": ""
},
{
"docid": "b9a2a41e12e259fbb646ff92956e148e",
"text": "The paper presents a concept where pairs of ordinary RFID tags are exploited for use as remotely read moisture sensors. The pair of tags is incorporated into one label where one of the tags is embedded in a moisture absorbent material and the other is left open. In a humid environment the moisture concentration is higher in the absorbent material than the surrounding environment which causes degradation to the embedded tag's antenna in terms of dielectric losses and change of input impedance. The level of relative humidity or the amount of water in the absorbent material is determined for a passive RFID system by comparing the difference in RFID reader output power required to power up respectively the open and embedded tag. It is similarly shown how the backscattered signal strength of a semi-active RFID system is proportional to the relative humidity and amount of water in the absorbent material. Typical applications include moisture detection in buildings, especially from leaking water pipe connections hidden beyond walls. Presented solution has a cost comparable to ordinary RFID tags, and the passive system also has infinite life time since no internal power supply is needed. The concept is characterized for two commercial RFID systems, one passive operating at 868 MHz and one semi-active operating at 2.45 GHz.",
"title": ""
},
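The read-out principle in the record above reduces to evaluating a calibration curve on the difference in reader output power needed to activate the embedded tag versus the open tag. The linear calibration in the sketch below is an assumed placeholder, not measured data.

```python
def humidity_from_power_delta(p_embedded_dbm, p_open_dbm, slope=8.5, offset=20.0):
    """Map the turn-on power difference (dB) between embedded and open tag to %RH.

    The linear calibration (slope, offset) is a hypothetical placeholder; a real
    deployment would fit it from measurements of the specific label design.
    """
    delta_db = p_embedded_dbm - p_open_dbm    # extra power the moisture-damped tag needs
    rh = offset + slope * delta_db
    return max(0.0, min(100.0, rh))

print(humidity_from_power_delta(p_embedded_dbm=18.0, p_open_dbm=13.0))   # -> 62.5 (%RH)
```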
{
"docid": "0df681e77b30e9143f7563b847eca5c6",
"text": "BRIDGE bot is a 158 g, 10.7 × 8.9 × 6.5 cm3, magnetic-wheeled robot designed to traverse and inspect steel bridges. Utilizing custom magnetic wheels, the robot is able to securely adhere to the bridge in any orientation. The body platform features flexible, multi-material legs that enable a variety of plane transitions as well as robot shape manipulation. The robot is equipped with a Cortex-M0 processor, inertial sensors, and a modular wireless radio. A camera is included to provide images for detection and evaluation of identified problems. The robot has been demonstrated moving through plane transitions from 45° to 340° as well as over obstacles up to 9.5 mm in height. Preliminary use of sensor feedback to improve plane transitions has also been demonstrated.",
"title": ""
},
{
"docid": "3867ff9ac24349b17e50ec2a34e84da4",
"text": "Each generation that enters the workforce brings with it its own unique perspectives and values, shaped by the times of their life, about work and the work environment; thus posing atypical human resources management challenges. Following the completion of an extensive quantitative study conducted in Cyprus, and by adopting a qualitative methodology, the researchers aim to further explore the occupational similarities and differences of the two prevailing generations, X and Y, currently active in the workplace. Moreover, the study investigates the effects of the perceptual generational differences on managing the diverse hospitality workplace. Industry implications, recommendations for stakeholders as well as directions for further scholarly research are discussed.",
"title": ""
},
{
"docid": "e55fdc146f334c9257e5b2a3e9f2d2d9",
"text": "Customer churn prediction models aim to detect customers with a high propensity to attrite. Predictive accuracy, comprehensibility, and justifiability are three key aspects of a churn prediction model. An accurate model permits to correctly target future churners in a retention marketing campaign, while a comprehensible and intuitive rule-set allows to identify the main drivers for customers to churn, and to develop an effective retention strategy in accordance with domain knowledge. This paper provides an extended overview of the literature on the use of data mining in customer churn prediction modeling. It is shown that only limited attention has been paid to the comprehensibility and the intuitiveness of churn prediction models. Therefore, two novel data mining techniques are applied to churn prediction modeling, and benchmarked to traditional rule induction techniques such as C4.5 and RIPPER. Both AntMiner+ and ALBA are shown to induce accurate as well as comprehensible classification rule-sets. AntMiner+ is a high performing data mining technique based on the principles of Ant Colony Optimization that allows to include domain knowledge by imposing monotonicity constraints on the final rule-set. ALBA on the other hand combines the high predictive accuracy of a non-linear support vector machine model with the comprehensibility of the rule-set format. The results of the benchmarking experiments show that ALBA improves learning of classification techniques, resulting in comprehensible models with increased performance. AntMiner+ results in accurate, comprehensible, but most importantly justifiable models, unlike the other modeling techniques included in this study. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b466803c9a9be5d38171ece8d207365e",
"text": "A large number of saliency models, each based on a different hypothesis, have been proposed over the past 20 years. In practice, while subscribing to one hypothesis or computational principle makes a model that performs well on some types of images, it hinders the general performance of a model on arbitrary images and large-scale data sets. One natural approach to improve overall saliency detection accuracy would then be fusing different types of models. In this paper, inspired by the success of late-fusion strategies in semantic analysis and multi-modal biometrics, we propose to fuse the state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, saliency maps generated by several models are used as confidence scores. Then, these scores are fed into our para-boosting learner (i.e., support vector machine, adaptive boosting, or probability density estimator) to generate the final saliency map. In order to explore the strength of para-boosting learners, traditional transformation-based fusion strategies, such as Sum, Min, and Max, are also explored and compared in this paper. To further reduce the computation cost of fusing too many models, only a few of them are considered in the next step. Experimental results show that score-level fusion outperforms each individual model and can further reduce the performance gap between the current models and the human inter-observer model.",
"title": ""
},
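The transformation-based fusion rules (Sum, Min, Max) that the saliency record above compares against are straightforward to sketch; the learned para-boosting fusion would replace the fixed rule with a trained model (e.g., an SVM over the stacked scores) and is not shown here. The normalization step and rule names below are a minimal assumed setup.

```python
import numpy as np

def normalize(m):
    """Rescale a saliency map to [0, 1] so maps from different models are comparable."""
    m = m.astype(float)
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

def fuse(maps, rule="sum"):
    """Transformation-based score-level fusion of per-model saliency maps."""
    stack = np.stack([normalize(m) for m in maps])
    if rule == "sum":
        return stack.mean(axis=0)
    if rule == "min":
        return stack.min(axis=0)
    if rule == "max":
        return stack.max(axis=0)
    raise ValueError(f"unknown rule: {rule}")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    maps = [rng.random((4, 4)) for _ in range(3)]   # stand-ins for three models' outputs
    print(fuse(maps, "max"))
```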
{
"docid": "d1d862185a20e1f1efc7d3dc7ca8524b",
"text": "In what ways do the online behaviors of wizards and ogres map to players’ actual leadership status in the offline world? What can we learn from players’ experience in Massively Multiplayer Online games (MMOGs) to advance our understanding of leadership, especially leadership in online settings (E-leadership)? As part of a larger agenda in the emerging field of empirically testing the ‘‘mapping’’ between the online and offline worlds, this study aims to tackle a central issue in the E-leadership literature: how have technology and technology mediated communications transformed leadership-diagnostic traits and behaviors? To answer this question, we surveyed over 18,000 players of a popular MMOG and also collected behavioral data of a subset of survey respondents over a four-month period. Motivated by leadership theories, we examined the connection between respondents’ offline leadership status and their in-game relationship-oriented and task-related-behaviors. Our results indicate that individuals’ relationship-oriented behaviors in the virtual world are particularly relevant to players’ leadership status in voluntary organizations, while their task-oriented behaviors are marginally linked to offline leadership status in voluntary organizations, but not in companies. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "86de6e4d945f0d1fa7a0b699064d7bd5",
"text": "BACKGROUND\nTo increase understanding of the relationships among sexual violence, paraphilias, and mental illness, the authors assessed the legal and psychiatric features of 113 men convicted of sexual offenses.\n\n\nMETHOD\n113 consecutive male sex offenders referred from prison, jail, or probation to a residential treatment facility received structured clinical interviews for DSM-IV Axis I and II disorders, including sexual disorders. Participants' legal, sexual and physical abuse, and family psychiatric histories were also evaluated. We compared offenders with and without paraphilias.\n\n\nRESULTS\nParticipants displayed high rates of lifetime Axis I and Axis II disorders: 96 (85%) had a substance use disorder; 84 (74%), a paraphilia; 66 (58%), a mood disorder (40 [35%], a bipolar disorder and 27 [24%], a depressive disorder); 43 (38%), an impulse control disorder; 26 (23%), an anxiety disorder; 10 (9%), an eating disorder; and 63 (56%), antisocial personality disorder. Presence of a paraphilia correlated positively with the presence of any mood disorder (p <.001), major depression (p =.007), bipolar I disorder (p =.034), any anxiety disorder (p=.034), any impulse control disorder (p =.006), and avoidant personality disorder (p =.013). Although offenders without paraphilias spent more time in prison than those with paraphilias (p =.019), paraphilic offenders reported more victims (p =.014), started offending at a younger age (p =.015), and were more likely to perpetrate incest (p =.005). Paraphilic offenders were also more likely to be convicted of (p =.001) or admit to (p <.001) gross sexual imposition of a minor. Nonparaphilic offenders were more likely to have adult victims exclusively (p =.002), a prior conviction for theft (p <.001), and a history of juvenile offenses (p =.058).\n\n\nCONCLUSIONS\nSex offenders in the study population displayed high rates of mental illness, substance abuse, paraphilias, personality disorders, and comorbidity among these conditions. Sex offenders with paraphilias had significantly higher rates of certain types of mental illness and avoidant personality disorder. Moreover, paraphilic offenders spent less time in prison but started offending at a younger age and reported more victims and more non-rape sexual offenses against minors than offenders without paraphilias. On the basis of our findings, we assert that sex offenders should be carefully evaluated for the presence of mental illness and that sex offender management programs should have a capacity for psychiatric treatment.",
"title": ""
},
{
"docid": "3d9e279afe4ba8beb1effd4f26550f67",
"text": "We propose and demonstrate a scheme for boosting the efficiency of entanglement distribution based on a decoherence-free subspace over lossy quantum channels. By using backward propagation of a coherent light, our scheme achieves an entanglement-sharing rate that is proportional to the transmittance T of the quantum channel in spite of encoding qubits in multipartite systems for the decoherence-free subspace. We experimentally show that highly entangled states, which can violate the Clauser-Horne-Shimony-Holt inequality, are distributed at a rate proportional to T.",
"title": ""
},
{
"docid": "b8702cb8d18ae53664f3dfff95152764",
"text": "Word Sense Disambiguation is a longstanding task in Natural Language Processing, lying at the core of human language understanding. However, the evaluation of automatic systems has been problematic, mainly due to the lack of a reliable evaluation framework. In this paper we develop a unified evaluation framework and analyze the performance of various Word Sense Disambiguation systems in a fair setup. The results show that supervised systems clearly outperform knowledge-based models. Among the supervised systems, a linear classifier trained on conventional local features still proves to be a hard baseline to beat. Nonetheless, recent approaches exploiting neural networks on unlabeled corpora achieve promising results, surpassing this hard baseline in most test sets.",
"title": ""
},
{
"docid": "10f32a4e0671adaee3e18f20592c4619",
"text": "This paper presents a novel flexible sliding thigh frame for a gait enhancing mechatronic system. With its two-layered unique structure, the frame is flexible in certain locations and directions, and stiff at certain other locations, so that it can fît well to the wearer's thigh and transmit the assisting torque without joint loading. The paper describes the basic mechanics of this 3D flexible frame and its stiffness characteristics. We implemented the 3D flexible frame on a gait enhancing mechatronic system and conducted experiments. The performance of the proposed mechanism is verified by simulation and experiments.",
"title": ""
},
{
"docid": "8ce97c23c5714b2032cfd8098a59a8b4",
"text": "In psychodynamic theory, trauma is associated with a life event, which is defined by its intensity, by the inability of the person to respond adequately and by its pathologic longlasting effects on the psychic organization. In this paper, we describe how neurobiological changes link to psychodynamic theory. Initially, Freud believed that all types of neurosis were the result of former traumatic experiences, mainly in the form of sexual trauma. According to the first Freudian theory (1890–1897), hysteric patients suffer mainly from relevant memories. In his later theory of ‘differed action’, i.e., the retroactive attribution of sexual or traumatic meaning to earlier events, Freud links the consequences of sexual trauma in childhood with the onset of pathology in adulthood (Boschan, 2008). The transmission of trauma from parents to children may take place from one generation to the other. The trauma that is being experienced by the child has an interpersonal character and is being reinforced by the parents’ own traumatic experience. The subject’s interpersonal exposure through the relationship with the direct victims has been recognized as a risk factor for the development of a post-traumatic stress disorder. Trauma may be transmitted from the mother to the foetus during the intrauterine life (Opendak & Sullivan, 2016). Empirical studies also demonstrate that in the first year of life infants that had witnessed violence against their mothers presented symptoms of a posttraumatic disorder. Traumatic symptomatology in infants includes eating difficulties, sleep disorders, high arousal level and excessive crying, affect disorders and relational problems with adults and peers. Infants that are directly dependant to the caregiver are more vulnerable and at a greater risk to suffer interpersonal trauma and its neurobiological consequences (Opendak & Sullivan, 2016). In older children symptoms were more related to the severity of violence they had been exposed to than to the mother’s actual emotional state, which shows that the relationship between mother’s and child’s trauma is different in each age stage. The type of attachment and the quality of the mother-child interactional relationship contribute also to the transmission of the trauma. According to Fonagy (2003), the mother who is experiencing trauma is no longer a source of security and becomes a source of danger. Thus, the mentalization ability may be destroyed by an attachment figure, which caused to the child enough stress related to its own thoughts and emotions to an extent, that the child avoids thoughts about the other’s subjective experience. At a neurobiological level, many studies have shown that the effects of environmental stress on the brain are being mediated through molecular and cellular mechanisms. More specifically, trauma causes changes at a chemical and anatomical level resulting in transforming the subject’s response to future stress. The imprinting mechanisms of traumatic experiences are directly related to the activation of the neurobiological circuits associated with emotion, in which amygdala play a central role. The traumatic experiences are strongly encoded in memory and difficult to be erased. Early stress may result in impaired cognitive function related to disrupted functioning of certain areas of the hippocampus in the short or long term. Infants or young children that have suffered a traumatic experience may are unable to recollect events in a conscious way. 
However, they may maintain latent memory of the reactions to the experience and the intensity of the emotion. The neurobiological data support the ‘deferred action’ of the psychodynamic theory according which when the impact of early interpersonal trauma is so pervasive, the effects can transcend into later stages, even after the trauma has stopped. The two approaches, psychodynamic and neurobiological, are not opposite, but complementary. Psychodynamic psychotherapists and neurobiologists, based on extended theoretical bases, combine data and enrich the understanding of psychiatric disorders in childhood. The study of interpersonal trauma offers a good example of how different approaches, biological and psychodynamic, may come closer and possibly be unified into a single model, which could result in more effective therapeutic approaches.",
"title": ""
},
{
"docid": "75f5679d9c1bab3585c1bf28d50327d8",
"text": "From medical charts to national census, healthcare has traditionally operated under a paper-based paradigm. However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records, to digitized imaging and laboratory reports, to public health datasets, today, healthcare now generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, the ability to derive accurate and informative insights requires more than the ability to execute machine learning models. Rather, a deeper understanding of the data on which the models are run is imperative for their success. While a significant effort has been undertaken to develop models able to process the volume of data obtained during the analysis of millions of digitalized patient records, it is important to remember that volume represents only one aspect of the data. In fact, drawing on data from an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter focuses on highlighting such challenges, and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion around data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques.",
"title": ""
}
] |
scidocsrr
|
4c9b2a96fac7e62bf1237a59fe45c80e
|
Multilevel secure data stream processing: Architecture and implementation
|
[
{
"docid": "24da291ca2590eb614f94f8a910e200d",
"text": "CQL, a continuous query language, is supported by the STREAM prototype data stream management system (DSMS) at Stanford. CQL is an expressive SQL-based declarative language for registering continuous queries against streams and stored relations. We begin by presenting an abstract semantics that relies only on “black-box” mappings among streams and relations. From these mappings we define a precise and general interpretation for continuous queries. CQL is an instantiation of our abstract semantics using SQL to map from relations to relations, window specifications derived from SQL-99 to map from streams to relations, and three new operators to map from relations to streams. Most of the CQL language is operational in the STREAM system. We present the structure of CQL's query execution plans as well as details of the most important components: operators, interoperator queues, synopses, and sharing of components among multiple operators and queries. Examples throughout the paper are drawn from the Linear Road benchmark recently proposed for DSMSs. We also curate a public repository of data stream applications that includes a wide variety of queries expressed in CQL. The relative ease of capturing these applications in CQL is one indicator that the language contains an appropriate set of constructs for data stream processing.",
"title": ""
}
] |
[
{
"docid": "96363ec5134359b5bf7c8b67f67971db",
"text": "Self adaptive video games are important for rehabilitation at home. Recent works have explored different techniques with satisfactory results but these have a poor use of game design concepts like Challenge and Conservative Handling of Failure. Dynamic Difficult Adjustment with Help (DDA-Help) approach is presented as a new point of view for self adaptive video games for rehabilitation. Procedural Content Generation (PCG) and automatic helpers are used to a different work on Conservative Handling of Failure and Challenge. An experience with amblyopic children showed the proposal effectiveness, increasing the visual acuity 2-3 level following the Snellen Vision Test and improving the performance curve during the game time.",
"title": ""
},
{
"docid": "d974b1ffafd9ad738303514f28a770b9",
"text": "We introduce a new algorithm for reinforcement learning called Maximum aposteriori Policy Optimisation (MPO) based on coordinate ascent on a relativeentropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings.",
"title": ""
},
{
"docid": "a6e0bbc761830bc74d58793a134fa75b",
"text": "With the explosion of multimedia data, semantic event detection from videos has become a demanding and challenging topic. In addition, when the data has a skewed data distribution, interesting event detection also needs to address the data imbalance problem. The recent proliferation of deep learning has made it an essential part of many Artificial Intelligence (AI) systems. Till now, various deep learning architectures have been proposed for numerous applications such as Natural Language Processing (NLP) and image processing. Nonetheless, it is still impracticable for a single model to work well for different applications. Hence, in this paper, a new ensemble deep learning framework is proposed which can be utilized in various scenarios and datasets. The proposed framework is able to handle the over-fitting issue as well as the information losses caused by single models. Moreover, it alleviates the imbalanced data problem in real-world multimedia data. The whole framework includes a suite of deep learning feature extractors integrated with an enhanced ensemble algorithm based on the performance metrics for the imbalanced data. The Support Vector Machine (SVM) classifier is utilized as the last layer of each deep learning component and also as the weak learners in the ensemble module. The framework is evaluated on two large-scale and imbalanced video datasets (namely, disaster and TRECVID). The extensive experimental results illustrate the advantage and effectiveness of the proposed framework. It also demonstrates that the proposed framework outperforms several well-known deep learning methods, as well as the conventional features integrated with different classifiers.",
"title": ""
},
{
"docid": "e9250f1b7c471c522d8a311a18f5c07b",
"text": "In this paper, we explored a learning approach which combines di erent learning methods in inductive logic programming (ILP) to allow a learner to produce more expressive hypotheses than that of each individual learner. Such a learning approach may be useful when the performance of the task depends on solving a large amount of classication problems and each has its own characteristics which may or may not t a particular learning method. The task of semantic parser acquisition in two di erent domains was attempted and preliminary results demonstrated that such an approach is promising.",
"title": ""
},
{
"docid": "9955b14187e172e34f233fec70ae0a38",
"text": "Neural network language models (NNLM) have become an increasingly popular choice for large vocabulary continuous speech recognition (LVCSR) tasks, due to their inherent generalisation and discriminative power. This paper present two techniques to improve performance of standard NNLMs. First, the form of NNLM is modelled by introduction an additional output layer node to model the probability mass of out-of-shortlist (OOS) words. An associated probability normalisation scheme is explicitly derived. Second, a novel NNLM adaptation method using a cascaded network is proposed. Consistent WER reductions were obtained on a state-of-the-art Arabic LVCSR task over conventional NNLMs. Further performance gains were also observed after NNLM adaptation.",
"title": ""
},
{
"docid": "80fed8845ca14843855383d714600960",
"text": "In this paper, a methodology is developed to use data acquisition derived from condition monitoring and standard diagnosis for rehabilitation purposes of transformers. The interpretation and understanding of the test data are obtained from international test standards to determine the current condition of transformers. In an attempt to ascertain monitoring priorities, the effective test methods are selected for transformer diagnosis. In particular, the standardization of diagnostic and analytical techniques are being improved that will enable field personnel to more easily use the test results and will reduce the need for interpretation by experts. In addition, the advanced method has the potential to reduce the time greatly and increase the accuracy of diagnostics. The important aim of the standardization is to develop the multiple diagnostic models that combine results from the different tests and give an overall assessment of reliability and maintenance for transformers.",
"title": ""
},
{
"docid": "7e70955671d2ad8728fdba0fc3ec5548",
"text": "Detection of drowsiness based on extraction of IMF’s from EEG signal using EMD process and characterizing the features using trained Artificial Neural Network (ANN) is introduced in this paper. Our subjects are 8 volunteers who have not slept for last 24 hour due to travelling. EEG signal was recorded when the subject is sitting on a chair facing video camera and are obliged to see camera only. ANN is trained using a utility made in Matlab to mark the EEG data for drowsy state and awaked state and then extract IMF’s of marked data using EMD to prepare feature inputs for Neural Network. Once the neural network is trained, IMFs of New subjects EEG Signals is given as input and ANN will give output in two different states i.e. ‘drowsy’ or ‘awake’. The system is tested on 8 different subjects and it provided good results with more than 84.8% of correct detection of drowsy states.",
"title": ""
},
{
"docid": "17ed052368311073f7f18fd423c817e9",
"text": "We adopt and analyze a synchronous K-step averaging stochastic gradient descent algorithm which we call K-AVG for solving large scale machine learning problems. We establish the convergence results of K-AVG for nonconvex objectives. Our analysis of K-AVG applies to many existing variants of synchronous SGD. We explain why the Kstep delay is necessary and leads to better performance than traditional parallel stochastic gradient descent which is equivalent to K-AVG withK = 1. We also show that K-AVG scales better with the number of learners than asynchronous stochastic gradient descent (ASGD). Another advantage of K-AVG over ASGD is that it allows larger stepsizes and facilitates faster convergence. On a cluster of 128 GPUs, K-AVG is faster than ASGD implementations and achieves better accuracies and faster convergence for training with the CIFAR-10 dataset.",
"title": ""
},
{
"docid": "dd6ed8448043868d17ddb015c98a4721",
"text": "Social networking sites, especially Facebook, are an integral part of the lifestyle of contemporary youth. The facilities are increasingly being used by older persons as well. Usage is mainly for social purposes, but the groupand discussion facilities of Facebook hold potential for focused academic use. This paper describes and discusses a venture in which postgraduate distancelearning students joined an optional group for the purpose of discussions on academic, contentrelated topics, largely initiated by the students themselves. Learning and insight were enhanced by these discussions and the students, in their environment of distance learning, are benefiting by contact with fellow students.",
"title": ""
},
{
"docid": "5d2190a63468e299bf755895488bd7ba",
"text": "We use logical inference techniques for recognising textual entailment, with theorem proving operating on deep semantic interpretations as the backbone of our system. However, the performance of theorem proving on its own turns out to be highly dependent on a wide range of background knowledge, which is not necessarily included in publically available knowledge sources. Therefore, we achieve robustness via two extensions. Firstly, we incorporate model building, a technique borrowed from automated reasoning, and show that it is a useful robust method to approximate entailment. Secondly, we use machine learning to combine these deep semantic analysis techniques with simple shallow word overlap. The resulting hybrid model achieves high accuracy on the RTE testset, given the state of the art. Our results also show that the various techniques that we employ perform very differently on some of the subsets of the RTE corpus and as a result, it is useful to use the nature of the dataset as a feature.",
"title": ""
},
{
"docid": "850f51897e97048a376c60a3a989426f",
"text": "With the advent of high dimensionality, adequate identification of relevant features of the data has become indispensable in real-world scenarios. In this context, the importance of feature selection is beyond doubt and different methods have been developed. However, with such a vast body of algorithms available, choosing the adequate feature selection method is not an easy-to-solve question and it is necessary to check their effectiveness on different situations. Nevertheless, the assessment of relevant features is difficult in real datasets and so an interesting option is to use artificial data. In this paper, several synthetic datasets are employed for this purpose, aiming at reviewing the performance of feature selection methods in the presence of a crescent number or irrelevant features, noise in the data, redundancy and interaction between attributes, as well as a small ratio between number of samples and number of features. Seven filters, two embedded methods, and two wrappers are applied over eleven synthetic datasets, tested by four classifiers, so as to be able to choose a robust method, paving the way for its application to real datasets.",
"title": ""
},
{
"docid": "0103439813a724a3df2e3bd827680abd",
"text": "Unsupervised automatic topic discovery in micro-blogging social networks is a very challenging task, as it involves the analysis of very short, noisy, ungrammatical and uncontextual messages. Most of the current approaches to this problem are basically syntactic, as they focus either on the use of statistical techniques or on the analysis of the co-occurrences between the terms. This paper presents a novel topic discovery methodology, based on the mapping of hashtags to WordNet terms and their posterior clustering, in which semantics plays a centre role. The paper also presents a detailed case study in the field of Oncology, in which the discovered topics are thoroughly compared to a golden standard, showing promising results. 2015 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "90abf21c7a6929a47d789c3e1c56f741",
"text": "Nearly 40 years ago, Dr. R.J. Gibbons made the first reports of the clinical relevance of what we now know as bacterial biofilms when he published his observations of the role of polysaccharide glycocalyx formation on teeth by Streptococcus mutans [Sci. Am. 238 (1978) 86]. As the clinical relevance of bacterial biofilm formation became increasingly apparent, interest in the phenomenon exploded. Studies are rapidly shedding light on the biomolecular pathways leading to this sessile mode of growth but many fundamental questions remain. The intent of this review is to consider the reasons why bacteria switch from a free-floating to a biofilm mode of growth. The currently available wealth of data pertaining to the molecular genetics of biofilm formation in commonly studied, clinically relevant, single-species biofilms will be discussed in an effort to decipher the motivation behind the transition from planktonic to sessile growth in the human body. Four potential incentives behind the formation of biofilms by bacteria during infection are considered: (1) protection from harmful conditions in the host (defense), (2) sequestration to a nutrient-rich area (colonization), (3) utilization of cooperative benefits (community), (4) biofilms normally grow as biofilms and planktonic cultures are an in vitro artifact (biofilms as the default mode of growth).",
"title": ""
},
{
"docid": "c0350ac9bd1c38252e04a3fd097ae6ee",
"text": "In contrast to the increasing popularity of REpresentational State Transfer (REST), systematic testing of RESTful Application Programming Interfaces (API) has not attracted much attention so far. This paper describes different aspects of automated testing of RESTful APIs. Later, we focus on functional and security tests, for which we apply a technique called model-based software development. Based on an abstract model of the RESTful API that comprises resources, states and transitions a software generator not only creates the source code of the RESTful API but also creates a large number of test cases that can be immediately used to test the implementation. This paper describes the process of developing a software generator for test cases using state-of-the-art tools and provides an example to show the feasibility of our approach.",
"title": ""
},
{
"docid": "4d2461f0fe7cd85ed2d4678f3a3b164b",
"text": "BACKGROUND\nProblematic Internet addiction or excessive Internet use is characterized by excessive or poorly controlled preoccupations, urges, or behaviors regarding computer use and Internet access that lead to impairment or distress. Currently, there is no recognition of internet addiction within the spectrum of addictive disorders and, therefore, no corresponding diagnosis. It has, however, been proposed for inclusion in the next version of the Diagnostic and Statistical Manual of Mental Disorder (DSM).\n\n\nOBJECTIVE\nTo review the literature on Internet addiction over the topics of diagnosis, phenomenology, epidemiology, and treatment.\n\n\nMETHODS\nReview of published literature between 2000-2009 in Medline and PubMed using the term \"internet addiction.\n\n\nRESULTS\nSurveys in the United States and Europe have indicated prevalence rate between 1.5% and 8.2%, although the diagnostic criteria and assessment questionnaires used for diagnosis vary between countries. Cross-sectional studies on samples of patients report high comorbidity of Internet addiction with psychiatric disorders, especially affective disorders (including depression), anxiety disorders (generalized anxiety disorder, social anxiety disorder), and attention deficit hyperactivity disorder (ADHD). Several factors are predictive of problematic Internet use, including personality traits, parenting and familial factors, alcohol use, and social anxiety.\n\n\nCONCLUSIONS AND SCIENTIFIC SIGNIFICANCE\nAlthough Internet-addicted individuals have difficulty suppressing their excessive online behaviors in real life, little is known about the patho-physiological and cognitive mechanisms responsible for Internet addiction. Due to the lack of methodologically adequate research, it is currently impossible to recommend any evidence-based treatment of Internet addiction.",
"title": ""
},
{
"docid": "412b616f4fcb9399c8220c542ecac83e",
"text": "Image cropping aims at improving the aesthetic quality of images by adjusting their composition. Most weakly supervised cropping methods (without bounding box supervision) rely on the sliding window mechanism. The sliding window mechanism requires fixed aspect ratios and limits the cropping region with arbitrary size. Moreover, the sliding window method usually produces tens of thousands of windows on the input image which is very time-consuming. Motivated by these challenges, we firstly formulate the aesthetic image cropping as a sequential decision-making process and propose a weakly supervised Aesthetics Aware Reinforcement Learning (A2-RL) framework to address this problem. Particularly, the proposed method develops an aesthetics aware reward function which especially benefits image cropping. Similar to human's decision making, we use a comprehensive state representation including both the current observation and the historical experience. We train the agent using the actor-critic architecture in an end-to-end manner. The agent is evaluated on several popular unseen cropping datasets. Experiment results show that our method achieves the state-of-the-art performance with much fewer candidate windows and much less time compared with previous weakly supervised methods.",
"title": ""
},
{
"docid": "d80070cf7ab3d3e75c2da1525e59be67",
"text": "This paper presents for the first time the analysis and experimental validation of a six-slot four-pole synchronous reluctance motor with nonoverlapping fractional slot-concentrated windings. The machine exhibits high torque density and efficiency due to its high fill factor coils with very short end windings, facilitated by a segmented stator and bobbin winding of the coils. These advantages are coupled with its inherent robustness and low cost. The topology is presented as a logical step forward in advancing synchronous reluctance machines that have been universally wound with a sinusoidally distributed winding. The paper presents the motor design, performance evaluation through finite element studies and validation of the electromagnetic model, and thermal specification through empirical testing. It is shown that high performance synchronous reluctance motors can be constructed with single tooth wound coils, but considerations must be given regarding torque quality and the d-q axis inductances.",
"title": ""
},
{
"docid": "8bd9a5cf3ca49ad8dd38750410a462b0",
"text": "Most regional anesthesia in breast surgeries is performed as postoperative pain management under general anesthesia, and not as the primary anesthesia. Regional anesthesia has very few cardiovascular or pulmonary side-effects, as compared with general anesthesia. Pectoral nerve block is a relatively new technique, with fewer complications than other regional anesthesia. We performed Pecs I and Pec II block simultaneously as primary anesthesia under moderate sedation with dexmedetomidine for breast conserving surgery in a 49-year-old female patient with invasive ductal carcinoma. Block was uneventful and showed no complications. Thus, Pecs block with sedation could be an alternative to general anesthesia for breast surgeries.",
"title": ""
},
{
"docid": "e84856804fd03b5334353937e9db4f81",
"text": "The probabilistic method comes up in various fields in mathematics. In these notes, we will give a brief introduction to graph theory and applications of the probabilistic method in proving bounds for Ramsey numbers and a theorem in graph cuts. This method is based on the following idea: in order to prove the existence of an object with some desired property, one defines a probability space on some larger class of objects, and then shows that an element of this space has the desired property with positive probability. The elements contained in this probability space may be of any kind. We will illustrate the probabilistic method by giving applications in graph theory.",
"title": ""
},
{
"docid": "7f5f267e7628f3d9968c940ee3a5a370",
"text": "Let G=(V,E) be a complete undirected graph, with node set V={v 1 , . . ., v n } and edge set E . The edges (v i ,v j ) ∈ E have nonnegative weights that satisfy the triangle inequality. Given a set of integers K = { k i } i=1 p $(\\sum_{i=1}^p k_i \\leq |V|$) , the minimum K-cut problem is to compute disjoint subsets with sizes { k i } i=1 p , minimizing the total weight of edges whose two ends are in different subsets. We demonstrate that for any fixed p it is possible to obtain in polynomial time an approximation of at most three times the optimal value. We also prove bounds on the ratio between the weights of maximum and minimum cuts.",
"title": ""
}
] |
scidocsrr
|
ccb69c95b57ab3b3a726e8ee0c27059c
|
Improving ChangeDistiller Improving Abstract Syntax Tree based Source Code Change Detection
|
[
{
"docid": "cd8eeaeb81423fcb1c383f2b60e928df",
"text": "Detecting and representing changes to data is important for active databases, data warehousing, view maintenance, and version and configuration management. Most previous work in change management has dealt with flat-file and relational data; we focus on hierarchically structured data. Since in many cases changes must be computed from old and new versions of the data, we define the hierarchical change detection problem as the problem of finding a \"minimum-cost edit script\" that transforms one data tree to another, and we present efficient algorithms for computing such an edit script. Our algorithms make use of some key domain characteristics to achieve substantially better performance than previous, general-purpose algorithms. We study the performance of our algorithms both analytically and empirically, and we describe the application of our techniques to hierarchically structured documents.",
"title": ""
}
] |
[
{
"docid": "1945d4663a49a5e1249e43dc7f64d15b",
"text": "The current generation of adolescents grows up in a media-saturated world. However, it is unclear how media influences the maturational trajectories of brain regions involved in social interactions. Here we review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use. We argue that adolescents are highly sensitive to acceptance and rejection through social media, and that their heightened emotional sensitivity and protracted development of reflective processing and cognitive control may make them specifically reactive to emotion-arousing media. This review illustrates how neuroscience may help understand the mutual influence of media and peers on adolescents’ well-being and opinion formation. The current generation of adolescents grows up in a media-saturated world. Here, Crone and Konijn review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use.",
"title": ""
},
{
"docid": "963d6b615ffd025723c82c1aabdbb9c6",
"text": "A single high-directivity microstrip patch antenna (MPA) having a rectangular profile, which can substitute a linear array is proposed. It is designed by using genetic algorithms with the advantage of not requiring a feeding network. The patch fits inside an area of 2.54 x 0.25, resulting in a broadside pattern with a directivity of 12 dBi and a fractional impedance bandwidth of 4 %. The antenna is fabricated and the measurements are in good agreement with the simulated results. The genetic MPA provides a similar directivity as linear arrays using a corporate or series feeding, with the advantage that the genetic MPA results in more bandwidth.",
"title": ""
},
{
"docid": "909405e3c06f22273107cb70a40d88c6",
"text": "This paper reports a 6-bit 220-MS/s time-interleaving successive approximation register analog-to-digital converter (SAR ADC) for low-power low-cost CMOS integrated systems. The major concept of the design is based on the proposed set-and-down capacitor switching method in the DAC capacitor array. Compared to the conventional switching method, the average switching energy is reduced about 81%. At 220-MS/s sampling rate, the measured SNDR and SFDR are 32.62 dB and 48.96 dB respectively. The resultant ENOB is 5.13 bits. The total power consumption is 6.8 mW. Fabricated in TSMC 0.18-µm 1P5M Digital CMOS technology, the ADC only occupies 0.032 mm2 active area.",
"title": ""
},
{
"docid": "f8947be81285e037eef69c5d2fcb94fb",
"text": "To build a flexible and an adaptable architecture network supporting variety of services and their respective requirements, 5G NORMA introduced a network of functions based architecture breaking the major design principles followed in the current network of entities based architecture. This revolution exploits the advantages of the new technologies like Software-Defined Networking (SDN) and Network Function Virtualization (NFV) in conjunction with the network slicing and multitenancy concepts. In this paper we focus on the concept of Software Defined for Mobile Network Control (SDM-C) network: its definition, its role in controlling the intra network slices resources, its specificity to be QoE aware thanks to the QoE/QoS monitoring and modeling component and its complementarity with the orchestration component called SDM-O. To operate multiple network slices on the same infrastructure efficiently through controlling resources and network functions sharing among instantiated network slices, a common entity named SDM-X is introduced. The proposed design brings a set of new capabilities to make the network energy efficient, a feature that is discussed through some use cases.",
"title": ""
},
{
"docid": "49791684a7a455acc9daa2ca69811e74",
"text": "This paper analyzes the basic method of digital video image processing, studies the vehicle license plate recognition system based on image processing in intelligent transport system, presents a character recognition approach based on neural network perceptron to solve the vehicle license plate recognition in real-time traffic flow. Experimental results show that the approach can achieve better positioning effect, has a certain robustness and timeliness.",
"title": ""
},
{
"docid": "704f4681b724a0e4c7c10fd129f3378b",
"text": "We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of xed width and minimum height, a classical NP-hard cutting-stock problem. The algorithm nds a packing of n rectangles whose total height is within a factor of (1 +) of optimal (up to an additive term), and has running time polynomial both in n and in 1==. It is based on a reduction to fractional bin-packing. R esum e Nous pr esentons un sch ema totalement polynomial d'approximation pour la mise en boite de rectangles dans une boite de largeur x ee, avec hauteur mi-nimale, qui est un probleme NP-dur classique, de coupes par guillotine. L'al-gorithme donne un placement des rectangles, dont la hauteur est au plus egale a (1 +) (hauteur optimale) et a un temps d'execution polynomial en n et en 1==. Il utilise une reduction au probleme de la mise en boite fractionaire. Abstract We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of xed width and minimum height, a classical N P-hard cutting-stock problem. The algorithm nds a packing of n rectangles whose total height is within a factor of (1 +) of optimal (up to an additive term), and has running time polynomial both in n and in 1==. It is based on a reduction to fractional bin-packing.",
"title": ""
},
{
"docid": "6470c8a921a9095adb96afccaa0bf97b",
"text": "Complex tasks with a visually rich component, like diagnosing seizures based on patient video cases, not only require the acquisition of conceptual but also of perceptual skills. Medical education has found that besides biomedical knowledge (knowledge of scientific facts) clinical knowledge (actual experience with patients) is crucial. One important aspect of clinical knowledge that medical education has hardly focused on, yet, are perceptual skills, like visually searching, detecting, and interpreting relevant features. Research on instructional design has shown that in a visually rich, but simple classification task perceptual skills could be conveyed by means of showing the eye movements of a didactically behaving expert. The current study applied this method to medical education in a complex task. This was done by example video cases, which were verbally explained by an expert. In addition the experimental groups saw a display of the expert’s eye movements recorded, while he performed the task. Results show that blurring non-attended areas of the expert enhances diagnostic performance of epileptic seizures by medical students in contrast to displaying attended areas as a circle and to a control group without attention guidance. These findings show that attention guidance fosters learning of perceptual aspects of clinical knowledge, if implemented in a spotlight manner.",
"title": ""
},
{
"docid": "3d7e7ec8d4d0c2b3167805b2c3ad6e94",
"text": "The Electric Vehicle Routing Problem with Time Windows (EVRPTW) is an extension to the well-known Vehicle Routing Problem with Time Windows (VRPTW) where the fleet consists of electric vehicles (EVs). Since EVs have limited driving range due to their battery capacities they may need to visit recharging stations while servicing the customers along their route. The recharging may take place at any battery level and after the recharging the battery is assumed to be full. In this paper, we relax the full recharge restriction and allow partial recharging (EVRPTW-PR) which is more practical in the real world due to shorter recharging duration. We formulate this problem as 0-1 mixed integer linear program and develop an Adaptive Large Neighborhood Search (ALNS) algorithm to solve it efficiently. We apply several removal and insertion mechanisms by selecting them dynamically and adaptively based on their past performances, including new mechanisms specifically designed for EVRPTW and EVRPTWPR. We test the performance of ALNS by using benchmark instances from the recent literature. The computational results show that the proposed method is effective in finding high quality solutions and the partial recharging option may significantly improve the routing decisions.",
"title": ""
},
{
"docid": "4cd868f43a4a468791d014515800fb04",
"text": "Rescue operations play an important role in disaster management and in most of the cases rescue operation are challenged by the conditions where human intervention is highly unlikely allowed, in such cases a device which can replace human limitations with advanced technology in robotics and humanoids which can track or follow a route to find the targets. In this paper we use Cellular mobile communication technology as communication channel between the transmitter and the receiving robot device. A phone is established between the transmitter mobile phone and the one on robot with a DTMF decoder which receives the motion control commands from the keypad via mobile phone. The implemented system is built around on the ARM7 LPC2148. It processes the information came from sensors and DTMF module and send to the motor driver bridge to control the motors to change direction and position of the robot. This system is designed to use best in the conditions of accidents or incidents happened in coal mining, fire accidents, bore well incidents and so on.",
"title": ""
},
{
"docid": "56f18b39a740dd65fc2907cdef90ac99",
"text": "This paper describes a dynamic artificial neural network based mobile robot motion and path planning system. The method is able to navigate a robot car on flat surface among static and moving obstacles, from any starting point to any endpoint. The motion controlling ANN is trained online with an extended backpropagation through time algorithm, which uses potential fields for obstacle avoidance. The paths of the moving obstacles are predicted with other ANNs for better obstacle avoidance. The method is presented through the realization of the navigation system of a mobile robot.",
"title": ""
},
{
"docid": "1277b7b45f5a54eec80eb8ab47ee3fbb",
"text": "Latent variable models, and probabilistic graphical models more generally, provide a declarative language for specifying prior knowledge and structural relationships in complex datasets. They have a long and rich history in natural language processing, having contributed to fundamental advances such as statistical alignment for translation (Brown et al., 1993), topic modeling (Blei et al., 2003), unsupervised part-of-speech tagging (Brown et al., 1992), and grammar induction (Klein and Manning, 2004), among others. Deep learning, broadly construed, is a toolbox for learning rich representations (i.e., features) of data through numerical optimization. Deep learning is the current dominant paradigm in natural language processing, and some of the major successes include language modeling (Bengio et al., 2003; Mikolov et al., 2010; Zaremba et al., 2014), machine translation (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), and natural language understanding tasks such as question answering and natural language inference.",
"title": ""
},
{
"docid": "5c47c2de88f662f8c6e735b5bb9cd37a",
"text": "Neural Machine Translation (NMT) models are often trained on heterogeneous mixtures of domains, from news to parliamentary proceedings, each with unique distributions and language. In this work we show that training NMT systems on naively mixed data can degrade performance versus models fit to each constituent domain. We demonstrate that this problem can be circumvented, and propose three models that do so by jointly learning domain discrimination and translation. We demonstrate the efficacy of these techniques by merging pairs of domains in three languages: Chinese, French, and Japanese. After training on composite data, each approach outperforms its domain-specific counterparts, with a model based on a discriminator network doing so most reliably. We obtain consistent performance improvements and an average increase of 1.1 BLEU.",
"title": ""
},
{
"docid": "448dc3c1c5207e606f1bd3b386f8bbde",
"text": "Variational autoencoders (VAE) are a powerful and widely-used class of models to learn complex data distributions in an unsupervised fashion. One important limitation of VAEs is the prior assumption that latent sample representations are independent and identically distributed. However, for many important datasets, such as time-series of images, this assumption is too strong: accounting for covariances between samples, such as those in time, can yield to a more appropriate model specification and improve performance in downstream tasks. In this work, we introduce a new model, the Gaussian Process (GP) Prior Variational Autoencoder (GPPVAE), to specifically address this issue. The GPPVAE aims to combine the power of VAEs with the ability to model correlations afforded by GP priors. To achieve efficient inference in this new class of models, we leverage structure in the covariance matrix, and introduce a new stochastic backpropagation strategy that allows for computing stochastic gradients in a distributed and low-memory fashion. We show that our method outperforms conditional VAEs (CVAEs) and an adaptation of standard VAEs in two image data applications.",
"title": ""
},
{
"docid": "b9b5c187df7a83392244d51b2b4f30a7",
"text": "OBJECTIVE\nTo compare the prevalence of anxiety, depression, and stress in medical students from all semesters of a Brazilian medical school and assess their respective associated factors.\n\n\nMETHOD\nA cross-sectional study of students from the twelve semesters of a Brazilian medical school was carried out. Students filled out a questionnaire including sociodemographics, religiosity (DUREL - Duke Religion Index), and mental health (DASS-21 - Depression, Anxiety, and Stress Scale). The students were compared for mental health variables (Chi-squared/ANOVA). Linear regression models were employed to assess factors associated with DASS-21 scores.\n\n\nRESULTS\n761 (75.4%) students answered the questionnaire; 34.6% reported depressive symptomatology, 37.2% showed anxiety symptoms, and 47.1% stress symptoms. Significant differences were found for: anxiety - ANOVA: [F = 2.536, p=0.004] between first and tenth (p=0.048) and first and eleventh (p=0.025) semesters; depression - ANOVA: [F = 2.410, p=0.006] between first and second semesters (p=0.045); and stress - ANOVA: [F = 2.968, p=0.001] between seventh and twelfth (p=0.044), tenth and twelfth (p=0.011), and eleventh and twelfth (p=0.001) semesters. The following factors were associated with (a) stress: female gender, anxiety, and depression; (b) depression: female gender, intrinsic religiosity, anxiety, and stress; and (c) anxiety: course semester, depression, and stress.\n\n\nCONCLUSION\nOur findings revealed high levels of depression, anxiety, and stress symptoms in medical students, with marked differences among course semesters. Gender and religiosity appeared to influence the mental health of the medical students.",
"title": ""
},
{
"docid": "2cd1edeccd5d8b2f8471864a938e7438",
"text": "A large body of evidence supports the hypothesis that mesolimbic dopamine (DA) mediates, in animal models, the reinforcing effects of central nervous system stimulants such as cocaine and amphetamine. The role DA plays in mediating amphetamine-type subjective effects of stimulants in humans remains to be established. Both amphetamine and cocaine increase norepinephrine (NE) via stimulation of release and inhibition of reuptake, respectively. If increases in NE mediate amphetamine-type subjective effects of stimulants in humans, then one would predict that stimulant medications that produce amphetamine-type subjective effects in humans should share the ability to increase NE. To test this hypothesis, we determined, using in vitro methods, the neurochemical mechanism of action of amphetamine, 3,4-methylenedioxymethamphetamine (MDMA), (+)-methamphetamine, ephedrine, phentermine, and aminorex. As expected, their rank order of potency for DA release was similar to their rank order of potency in published self-administration studies. Interestingly, the results demonstrated that the most potent effect of these stimulants is to release NE. Importantly, the oral dose of these stimulants, which produce amphetamine-type subjective effects in humans, correlated with the their potency in releasing NE, not DA, and did not decrease plasma prolactin, an effect mediated by DA release. These results suggest that NE may contribute to the amphetamine-type subjective effects of stimulants in humans.",
"title": ""
},
{
"docid": "ced0dfa1447b86cc5af2952012960511",
"text": "OBJECTIVE\nThe pathophysiology of peptic ulcer disease (PUD) in liver cirrhosis (LC) and chronic hepatitis has not been established. The aim of this study was to assess the role of portal hypertension from PUD in patients with LC and chronic hepatitis.\n\n\nMATERIALS AND METHODS\nWe analyzed the medical records of 455 hepatic vein pressure gradient (HVPG) and esophagogastroduodenoscopy patients who had LC or chronic hepatitis in a single tertiary hospital. The association of PUD with LC and chronic hepatitis was assessed by univariate and multivariate analysis.\n\n\nRESULTS\nA total of 72 PUD cases were detected. PUD was associated with LC more than with chronic hepatitis (odds ratio [OR]: 4.13, p = 0.03). In the univariate analysis, taking an ulcerogenic medication was associated with PUD in patients with LC (OR: 4.34, p = 0.04) and smoking was associated with PUD in patients with chronic hepatitis (OR: 3.61, p = 0.04). In the multivariate analysis, taking an ulcerogenic medication was associated with PUD in patients with LC (OR: 2.93, p = 0.04). However, HVPG was not related to PUD in patients with LC or chronic hepatitis.\n\n\nCONCLUSION\nAccording to the present study, patients with LC have a higher risk of PUD than those with chronic hepatitis. The risk factor was taking ulcerogenic medication. However, HVPG reflecting portal hypertension was not associated with PUD in LC or chronic hepatitis (Clinicaltrial number NCT01944878).",
"title": ""
},
{
"docid": "27c0c6c43012139fc3e4ee64ae043c0b",
"text": "This paper presents a method for measuring signal backscattering from RFID tags, and for calculating a tag's radar cross section (RCS). We derive a theoretical formula for the RCS of an RFID tag with a minimum-scattering antenna. We describe an experimental measurement technique, which involves using a network analyzer connected to an anechoic chamber with and without the tag. The return loss measured in this way allows us to calculate the backscattered power and to find the tag's RCS. Measurements were performed using an RFID tag operating in the UHF band. To determine whether the tag was turned on, we used an RFID tag tester. The tag's RCS was also calculated theoretically, using electromagnetic simulation software. The theoretical results were found to be in good agreement with experimental data",
"title": ""
},
{
"docid": "b37064e74a2c88507eacb9062996a911",
"text": "This article builds a theoretical framework to help explain governance patterns in global value chains. It draws on three streams of literature – transaction costs economics, production networks, and technological capability and firm-level learning – to identify three variables that play a large role in determining how global value chains are governed and change. These are: (1) the complexity of transactions, (2) the ability to codify transactions, and (3) the capabilities in the supply-base. The theory generates five types of global value chain governance – hierarchy, captive, relational, modular, and market – which range from high to low levels of explicit coordination and power asymmetry. The article highlights the dynamic and overlapping nature of global value chain governance through four brief industry case studies: bicycles, apparel, horticulture and electronics.",
"title": ""
},
{
"docid": "d77a9e08115ecda71a126819bb6012d4",
"text": "Music, an abstract stimulus, can arouse feelings of euphoria and craving, similar to tangible rewards that involve the striatal dopaminergic system. Using the neurochemical specificity of [11C]raclopride positron emission tomography scanning, combined with psychophysiological measures of autonomic nervous system activity, we found endogenous dopamine release in the striatum at peak emotional arousal during music listening. To examine the time course of dopamine release, we used functional magnetic resonance imaging with the same stimuli and listeners, and found a functional dissociation: the caudate was more involved during the anticipation and the nucleus accumbens was more involved during the experience of peak emotional responses to music. These results indicate that intense pleasure in response to music can lead to dopamine release in the striatal system. Notably, the anticipation of an abstract reward can result in dopamine release in an anatomical pathway distinct from that associated with the peak pleasure itself. Our results help to explain why music is of such high value across all human societies.",
"title": ""
}
] |
scidocsrr
|
ac92d4f51a9bdcca07e144e93ce6a31a
|
Coupled-Resonator Filters With Frequency-Dependent Couplings: Coupling Matrix Synthesis
|
[
{
"docid": "359d3e06c221e262be268a7f5b326627",
"text": "A method for the synthesis of multicoupled resonators filters with frequency-dependent couplings is presented. A circuit model of the filter that accurately represents the frequency responses over a very wide frequency band is postulated. The two-port parameters of the filter based on the circuit model are obtained by circuit analysis. The values of the circuit elements are synthesized by equating the two-port parameters obtained from the circuit analysis and the filtering characteristic function. Solutions similar to the narrowband case (where all the couplings are assumed frequency independent) are obtained analytically when all coupling elements are either inductive or capacitive. The synthesis technique is generalized to include all types of coupling elements. Several examples of wideband filters are given to demonstrate the synthesis techniques.",
"title": ""
}
] |
[
{
"docid": "4ebdfc3fe891f11902fb94973b6be582",
"text": "This work introduces the CASCADE error correction protocol and LDPC (Low-Density Parity Check) error correction codes which are both parity check based. We also give the results of computer simulations that are performed for comparing their performances (redundant information, success).",
"title": ""
},
{
"docid": "561b37c506657693d27fa65341faf51e",
"text": "Currently, much of machine learning is opaque, just like a “black box”. However, in order for humans to understand, trust and effectively manage the emerging AI systems, an AI needs to be able to explain its decisions and conclusions. In this paper, I propose an argumentation-based approach to explainable AI, which has the potential to generate more comprehensive explanations than existing approaches.",
"title": ""
},
{
"docid": "27312c44c3e453ad9e5f35a45b50329c",
"text": "The immunologic processes involved in Graves' disease (GD) have one unique characteristic--the autoantibodies to the TSH receptor (TSHR)--which have both linear and conformational epitopes. Three types of TSHR antibodies (stimulating, blocking, and cleavage) with different functional capabilities have been described in GD patients, which induce different signaling effects varying from thyroid cell proliferation to thyroid cell death. The establishment of animal models of GD by TSHR antibody transfer or by immunization with TSHR antigen has confirmed its pathogenic role and, therefore, GD is the result of a breakdown in TSHR tolerance. Here we review some of the characteristics of TSHR antibodies with a special emphasis on new developments in our understanding of what were previously called \"neutral\" antibodies and which we now characterize as autoantibodies to the \"cleavage\" region of the TSHR ectodomain.",
"title": ""
},
{
"docid": "936cebe86936c6aa49758636554a4dc7",
"text": "A new kind of distributed power divider/combiner circuit for use in octave bandwidth (or more) microstrip power transistor amplifier is presented. The design, characteristics and advantages are discussed. Experimental results on a 4-way divider are presented and compared with theory.",
"title": ""
},
{
"docid": "97578b3a8f5f34c96e7888f273d4494f",
"text": "We analyze the use, advantages, and drawbacks of graph kernels in chemoin-formatics, including a comparison of kernel-based approaches with other methodology, as well as examples of applications. Kernel-based machine learning [1], now widely applied in chemoinformatics, delivers state-of-the-art performance [2] in tasks like classification and regression. Molecular graph kernels [3] are a recent development where kernels are defined directly on the molecular structure graph. This allows the adaptation of methods from graph theory to structure graphs and their direct use with kernel learning algorithms. The main advantage of kernel learning, the so-called “kernel trick”, allows for a systematic, computationally feasible, and often globally optimal search for non-linear patterns, as well as the direct use of non-numerical inputs such as strings and graphs. A drawback is that solutions are expressed indirectly in terms of similarity to training samples, and runtimes that are typically quadratic or cubic in the number of training samples. Graph kernels [3] are positive semidefinite functions defined directly on graphs. The most important types are based on random walks, subgraph patterns, optimal assignments, and graphlets. Molecular structure graphs have strong properties that can be exploited [4], e.g., they are undirected, have no self-loops and no multiple edges, are connected (except for salts), annotated, often planar in the graph-theoretic sense, and their vertex degree is bounded by a small constant. In many applications, they are small. Many graph kernels are generalpurpose, some are suitable for structure graphs, and a few have been explicitly designed for them. We present three exemplary applications of the iterative similarity optimal assignment kernel [5], which was designed for the comparison of small structure graphs: The discovery of novel agonists of the peroxisome proliferator-activated receptor g [6] (ligand-based virtual screening), the estimation of acid dissociation constants [7] (quantitative structure-property relationships), and molecular de novo design [8].",
"title": ""
},
{
"docid": "ba4ffbb6c3dc865f803cbe31b52919c5",
"text": "This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate “adaptive training.” Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable “after-effect.” A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that have been found by previous studies to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands proportional to its speed and perpendicular to its direction of motion — either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task we found that error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.",
"title": ""
},
{
"docid": "258d0290b2cc7d083800d51dfa525157",
"text": "In recent years, study of influence propagation in social networks has gained tremendous attention. In this context, we can identify three orthogonal dimensions—the number of seed nodes activated at the beginning (known as budget), the expected number of activated nodes at the end of the propagation (known as expected spread or coverage), and the time taken for the propagation. We can constrain one or two of these and try to optimize the third. In their seminal paper, Kempe et al. constrained the budget, left time unconstrained, and maximized the coverage: this problem is known as Influence Maximization (or MAXINF for short). In this paper, we study alternative optimization problems which are naturally motivated by resource and time constraints on viral marketing campaigns. In the first problem, termed minimum target set selection (or MINTSS for short), a coverage threshold η is given and the task is to find the minimum size seed set such that by activating it, at least η nodes are eventually activated in the expected sense. This naturally captures the problem of deploying a viral campaign on a budget. In the second problem, termed MINTIME, the goal is to minimize the time in which a predefined coverage is achieved. More precisely, in MINTIME, a coverage threshold η and a budget threshold k are given, and the task is to find a seed set of size at most k such that by activating it, at least η nodes are activated in the expected sense, in the minimum possible time. This problem addresses the issue of timing when deploying viral campaigns. Both these problems are NP-hard, which motivates our interest in their approximation. For MINTSS, we develop a simple greedy algorithm and show that it provides a bicriteria approximation. We also establish a generic hardness result suggesting that improving this bicriteria approximation is likely to be hard. For MINTIME, we show that even bicriteria and tricriteria approximations are hard under several conditions. We show, however, that if we allow the budget for number of seeds k to be boosted by a logarithmic factor and allow the coverage to fall short, then the problem can be solved exactly in PTIME, i.e., we can achieve the required coverage within the time achieved by the optimal solution to MINTIME with budget k and coverage threshold η. Finally, we establish the value of the approximation algorithms, by conducting an experimental evaluation, comparing their quality against that achieved by various heuristics.",
"title": ""
},
{
"docid": "3a2ae63e5b8a9132e30a24373d9262e1",
"text": "Nine projective linear measurements were taken to determine morphometric differences of the face among healthy young adult Chinese, Vietnamese, and Thais (60 in each group) and to assess the validity of six neoclassical facial canons in these populations. In addition, the findings in the Asian ethnic groups were compared to the data of 60 North American Caucasians. The canons served as criteria for determining the differences between the Asians and Caucasians. In neither Asian nor Caucasian subjects were the three sections of the facial profile equal. The validity of the five other facial canons was more frequent in Caucasians (range: 16.7–36.7%) than in Asians (range: 1.7–26.7%). Horizontal measurement results were significantly greater in the faces of the Asians (en–en, al–al, zy–zy) than in their white counterparts; as a result, the variation between the classical proportions and the actual measurements was significantly higher among Asians (range: 90–100%) than Caucasians (range: 13.3–48%). The dominant characteristics of the Asian face were a wider intercanthal distance in relation to a shorter palpebral fissure, a much wider soft nose within wide facial contours, a smaller mouth width, and a lower face smaller than the forehead height. In the absence of valid anthropometric norms of craniofacial measurements and proportion indices, our results, based on quantitative analysis of the main vertical and horizontal measurements of the face, offers surgeons guidance in judging the faces of Asian patients in preparation for corrective surgery.",
"title": ""
},
{
"docid": "66fb14019184326107647df9771046f6",
"text": "Word embeddings are well known to capture linguistic regularities of the language on which they are trained. Researchers also observe that these regularities can transfer across languages. However, previous endeavors to connect separate monolingual word embeddings typically require cross-lingual signals as supervision, either in the form of parallel corpus or seed lexicon. In this work, we show that such cross-lingual connection can actually be established without any form of supervision. We achieve this end by formulating the problem as a natural adversarial game, and investigating techniques that are crucial to successful training. We carry out evaluation on the unsupervised bilingual lexicon induction task. Even though this task appears intrinsically cross-lingual, we are able to demonstrate encouraging performance without any cross-lingual clues.",
"title": ""
},
{
"docid": "c26eabb377db5f1033ec6d354d890a6f",
"text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.",
"title": ""
},
{
"docid": "ec0da5cea716d1270b2143ffb6c610d6",
"text": "This study focuses on the development of a web-based Attendance Register System or formerly known as ARS. The development of this system is motivated due to the fact that the students’ attendance records are one of the important elements that reflect their academic achievements in the higher academic institutions. However, the current practice implemented in most of the higher academic institutions in Malaysia is becoming more prone to human errors and frauds. Assisted by the System Development Life Cycle (SDLC) methodology, the ARS has been built using the web-based applications such as PHP, MySQL and Apache to cater the recording and reporting of the students’ attendances. The development of this prototype system is inspired by the feasibility study done in Universiti Teknologi MARA, Malaysia where 550 respondents have taken part in answering the questionnaires. From the analysis done, it has revealed that a more systematic and revolutionary system is indeed needed to be reinforced in order to improve the process of recording and reporting the attendances in the higher academic institution. ARS can be easily accessed by the lecturers via the Web and most importantly, the reports can be generated in realtime processing, thus, providing invaluable information about the students’ commitments in attending the classes. This paper will discuss in details the development of ARS from the feasibility study until the design phase.",
"title": ""
},
{
"docid": "ee72a297c05a438a49e86a45b81db17f",
"text": "Screening for cyclodextrin glycosyltransferase (CGTase)-producing alkaliphilic bacteria from samples collected from hyper saline soda lakes (Wadi Natrun Valley, Egypt), resulted in isolation of potent CGTase producing alkaliphilic bacterium, termed NPST-10. 16S rDNA sequence analysis identified the isolate as Amphibacillus sp. CGTase was purified to homogeneity up to 22.1 fold by starch adsorption and anion exchange chromatography with a yield of 44.7%. The purified enzyme was a monomeric protein with an estimated molecular weight of 92 kDa using SDS-PAGE. Catalytic activities of the enzyme were found to be 88.8 U mg(-1) protein, 20.0 U mg(-1) protein and 11.0 U mg(-1) protein for cyclization, coupling and hydrolytic activities, respectively. The enzyme was stable over a wide pH range from pH 5.0 to 11.0, with a maximal activity at pH 8.0. CGTase exhibited activity over a wide temperature range from 45 °C to 70 °C, with maximal activity at 50 °C and was stable at 30 °C to 55 °C for at least 1 h. Thermal stability of the purified enzyme could be significantly improved in the presence of CaCl(2). K(m) and V(max) values were estimated using soluble starch as a substrate to be 1.7 ± 0.15 mg/mL and 100 ± 2.0 μmol/min, respectively. CGTase was significantly inhibited in the presence of Co(2+), Zn(2+), Cu(2+), Hg(2+), Ba(2+), Cd(2+), and 2-mercaptoethanol. To the best of our knowledge, this is the first report of CGTase production by Amphibacillus sp. The achieved high conversion of insoluble raw corn starch into cyclodextrins (67.2%) with production of mainly β-CD (86.4%), makes Amphibacillus sp. NPST-10 desirable for the cyclodextrin production industry.",
"title": ""
},
{
"docid": "29c32c8c447b498f43ec215633305923",
"text": "A growing body of evidence suggests that empathy for pain is underpinned by neural structures that are also involved in the direct experience of pain. In order to assess the consistency of this finding, an image-based meta-analysis of nine independent functional magnetic resonance imaging (fMRI) investigations and a coordinate-based meta-analysis of 32 studies that had investigated empathy for pain using fMRI were conducted. The results indicate that a core network consisting of bilateral anterior insular cortex and medial/anterior cingulate cortex is associated with empathy for pain. Activation in these areas overlaps with activation during directly experienced pain, and we link their involvement to representing global feeling states and the guidance of adaptive behavior for both self- and other-related experiences. Moreover, the image-based analysis demonstrates that depending on the type of experimental paradigm this core network was co-activated with distinct brain regions: While viewing pictures of body parts in painful situations recruited areas underpinning action understanding (inferior parietal/ventral premotor cortices) to a stronger extent, eliciting empathy by means of abstract visual information about the other's affective state more strongly engaged areas associated with inferring and representing mental states of self and other (precuneus, ventral medial prefrontal cortex, superior temporal cortex, and temporo-parietal junction). In addition, only the picture-based paradigms activated somatosensory areas, indicating that previous discrepancies concerning somatosensory activity during empathy for pain might have resulted from differences in experimental paradigms. We conclude that social neuroscience paradigms provide reliable and accurate insights into complex social phenomena such as empathy and that meta-analyses of previous studies are a valuable tool in this endeavor.",
"title": ""
},
{
"docid": "2d644e4146358131d43fbe25ba725c74",
"text": "Neural interface technology has made enormous strides in recent years but stimulating electrodes remain incapable of reliably targeting specific cell types (e.g. excitatory or inhibitory neurons) within neural tissue. This obstacle has major scientific and clinical implications. For example, there is intense debate among physicians, neuroengineers and neuroscientists regarding the relevant cell types recruited during deep brain stimulation (DBS); moreover, many debilitating side effects of DBS likely result from lack of cell-type specificity. We describe here a novel optical neural interface technology that will allow neuroengineers to optically address specific cell types in vivo with millisecond temporal precision. Channelrhodopsin-2 (ChR2), an algal light-activated ion channel we developed for use in mammals, can give rise to safe, light-driven stimulation of CNS neurons on a timescale of milliseconds. Because ChR2 is genetically targetable, specific populations of neurons even sparsely embedded within intact circuitry can be stimulated with high temporal precision. Here we report the first in vivo behavioral demonstration of a functional optical neural interface (ONI) in intact animals, involving integrated fiberoptic and optogenetic technology. We developed a solid-state laser diode system that can be pulsed with millisecond precision, outputs 20 mW of power at 473 nm, and is coupled to a lightweight, flexible multimode optical fiber, approximately 200 microm in diameter. To capitalize on the unique advantages of this system, we specifically targeted ChR2 to excitatory cells in vivo with the CaMKIIalpha promoter. Under these conditions, the intensity of light exiting the fiber ( approximately 380 mW mm(-2)) was sufficient to drive excitatory neurons in vivo and control motor cortex function with behavioral output in intact rodents. No exogenous chemical cofactor was needed at any point, a crucial finding for in vivo work in large mammals. Achieving modulation of behavior with optical control of neuronal subtypes may give rise to fundamental network-level insights complementary to what electrode methodologies have taught us, and the emerging optogenetic toolkit may find application across a broad range of neuroscience, neuroengineering and clinical questions.",
"title": ""
},
{
"docid": "d7b77fae980b3bc26ffb4917d6d093c1",
"text": "This work presents a combination of a teach-and-replay visual navigation and Monte Carlo localization methods. It improves a reliable teach-and-replay navigation method by replacing its dependency on precise dead-reckoning by introducing Monte Carlo localization to determine robot position along the learned path. In consequence, the navigation method becomes robust to dead-reckoning errors, can be started from at any point in the map and can deal with the ‘kidnapped robot’ problem. Furthermore, the robot is localized with MCL only along the taught path, i.e. in one dimension, which does not require a high number of particles and significantly reduces the computational cost. Thus, the combination of MCL and teach-and-replay navigation mitigates the disadvantages of both methods. The method was tested using a P3-AT ground robot and a Parrot AR.Drone aerial robot over a long indoor corridor. Experiments show the validity of the approach and establish a solid base for continuing this work.",
"title": ""
},
{
"docid": "8216a6da70affe452ec3c5998e3c77ba",
"text": "In this paper, the performance of a rectangular microstrip patch antenna fed by microstrip line is designed to operate for ultra-wide band applications. It consists of a rectangular patch with U-shaped slot on one side of the substrate and a finite ground plane on the other side. The U-shaped slot and the finite ground plane are used to achieve an excellent impedance matching to increase the bandwidth. The proposed antenna is designed and optimized based on extensive 3D EM simulation studies. The proposed antenna is designed to operate over a frequency range from 3.6 to 15 GHz.",
"title": ""
},
{
"docid": "e68da0df82ade1ef0ff2e0b26da4cb4e",
"text": "What service-quality attributes must Internet banks offer to induce consumers to switch to online transactions and keep using them?",
"title": ""
},
{
"docid": "0612781063f878c3b85321fd89026426",
"text": "A lot of research has been done on multiple-valued logic (MVL) such as ternary logic in these years. MVL reduces the number of necessary operations and also decreases the chip area that would be used. Carbon nanotube field effect transistors (CNTFETs) are considered a viable alternative for silicon transistors (MOSFETs). Combining carbon nanotube transistors and MVL can produce a unique design that is faster and more flexible. In this paper, we design a new half adder and a new multiplier by nanotechnology using a ternary logic, which decreases the power consumption and chip surface and raises the speed. The presented design is simulated using CNTFET of Stanford University and HSPICE software, and the results are compared with those of other studies.",
"title": ""
},
{
"docid": "0cccb226bb72be281ead8c614bd46293",
"text": "We introduce a model for incorporating contextual information (such as geography) in learning vector-space representations of situated language. In contrast to approaches to multimodal representation learning that have used properties of the object being described (such as its color), our model includes information about the subject (i.e., the speaker), allowing us to learn the contours of a word’s meaning that are shaped by the context in which it is uttered. In a quantitative evaluation on the task of judging geographically informed semantic similarity between representations learned from 1.1 billion words of geo-located tweets, our joint model outperforms comparable independent models that learn meaning in isolation.",
"title": ""
},
{
"docid": "c19b9828de0416b17d0e24b66c7cb0a5",
"text": "Process monitoring using indirect methods leverages on the usage of sensors. Using sensors to acquire vital process related information also presents itself with the problem of big data management and analysis. Due to uncertainty in the frequency of events occurring, a higher sampling rate is often used in real-time monitoring applications to increase the chances of capturing and understanding all possible events related to the process. Advanced signal processing methods helps to further decipher meaningful information from the acquired data. In this research work, power spectrum density (PSD) of sensor data acquired at sampling rates between 40 kHz-51.2 kHz was calculated and the co-relation between PSD and completed number of cycles/passes is presented. Here, the progress in number of cycles/passes is the event this research work intends to classify and the algorithm used to compute PSD is Welchs estimate method. A comparison between Welchs estimate method and statistical methods is also discussed. A clear co-relation was observed using Welchs estimate to classify the number of cyceles/passes.",
"title": ""
}
] |
scidocsrr
|
bd445f10eb1f0fc811869f66ed27b6d4
|
Pke: an Open Source Python-based Keyphrase Extraction Toolkit
|
[
{
"docid": "3a37bf4ffad533746d2335f2c442a6d6",
"text": "Keyphrase extraction is the task of identifying single or multi-word expressions that represent the main topics of a document. In this paper we present TopicRank, a graph-based keyphrase extraction method that relies on a topical representation of the document. Candidate keyphrases are clustered into topics and used as vertices in a complete graph. A graph-based ranking model is applied to assign a significance score to each topic. Keyphrases are then generated by selecting a candidate from each of the topranked topics. We conducted experiments on four evaluation datasets of different languages and domains. Results show that TopicRank significantly outperforms state-of-the-art methods on three datasets.",
"title": ""
}
] |
[
{
"docid": "90dfa19b821aeab985a96eba0c3037d3",
"text": "Carcass mass and carcass clothing are factors of potential high forensic importance. In casework, corpses differ in mass and kind or extent of clothing; hence, a question arises whether methods for post-mortem interval estimation should take these differences into account. Unfortunately, effects of carcass mass and clothing on specific processes in decomposition and related entomological phenomena are unclear. In this article, simultaneous effects of these factors are analysed. The experiment followed a complete factorial block design with four levels of carcass mass (small carcasses 5–15 kg, medium carcasses 15.1–30 kg, medium/large carcasses 35–50 kg, large carcasses 55–70 kg) and two levels of carcass clothing (clothed and unclothed). Pig carcasses (N = 24) were grouped into three blocks, which were separated in time. Generally, carcass mass revealed significant and frequently large effects in almost all analyses, whereas carcass clothing had only minor influence on some phenomena related to the advanced decay. Carcass mass differently affected particular gross processes in decomposition. Putrefaction was more efficient in larger carcasses, which manifested itself through earlier onset and longer duration of bloating. On the other hand, active decay was less efficient in these carcasses, with relatively low average rate, resulting in slower mass loss and later onset of advanced decay. The average rate of active decay showed a significant, logarithmic increase with an increase in carcass mass, but only in these carcasses on which active decay was driven solely by larval blowflies. If a blowfly-driven active decay was followed by active decay driven by larval Necrodes littoralis (Coleoptera: Silphidae), which was regularly found in medium/large and large carcasses, the average rate showed only a slight and insignificant increase with an increase in carcass mass. These results indicate that lower efficiency of active decay in larger carcasses is a consequence of a multi-guild and competition-related pattern of this process. Pattern of mass loss in large and medium/large carcasses was not sigmoidal, but rather exponential. The overall rate of decomposition was strongly, but not linearly, related to carcass mass. In a range of low mass decomposition rate increased with an increase in mass, then at about 30 kg, there was a distinct decrease in rate, and again at about 50 kg, the rate slightly increased. Until about 100 accumulated degree-days larger carcasses gained higher total body scores than smaller carcasses. Afterwards, the pattern was reversed; moreover, differences between classes of carcasses enlarged with the progress of decomposition. In conclusion, current results demonstrate that cadaver mass is a factor of key importance for decomposition, and as such, it should be taken into account by decomposition-related methods for post-mortem interval estimation.",
"title": ""
},
{
"docid": "49a041e18a063876dc595f33fe8239a8",
"text": "Significant vulnerabilities have recently been identified in collaborative filtering recommender systems. These vulnerabilities mostly emanate from the open nature of such systems and their reliance on userspecified judgments for building profiles. Attackers can easily introduce biased data in an attempt to force the system to “adapt” in a manner advantageous to them. Our research in secure personalization is examining a range of attack models, from the simple to the complex, and a variety of recommendation techniques. In this chapter, we explore an attack model that focuses on a subset of users with similar tastes and show that such an attack can be highly successful against both user-based and item-based collaborative filtering. We also introduce a detection model that can significantly decrease the impact of this attack.",
"title": ""
},
{
"docid": "c56c71775a0c87f7bb6c59d6607e5280",
"text": "A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulation, and use of learning strategies was administered, and performance data were obtained from work on classroom assignments. Self-efficacy and intrinsic value were positively related to cognitive engagement and performance. Regression analyses revealed that, depending on the outcome measure, self-regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences in motivational orientation for cognitive engagement and self-regulation in the classroom are discussed.",
"title": ""
},
{
"docid": "f4d060cd114ffa2c028dada876fcb735",
"text": "Mutations of SALL1 related to spalt of Drosophila have been found to cause Townes-Brocks syndrome, suggesting a function of SALL1 for the development of anus, limbs, ears, and kidneys. No function is yet known for SALL2, another human spalt-like gene. The structure of SALL2 is different from SALL1 and all other vertebrate spalt-like genes described in mouse, Xenopus, and Medaka, suggesting that SALL2-like genes might also exist in other vertebrates. Consistent with this hypothesis, we isolated and characterized a SALL2 homologous mouse gene, Msal-2. In contrast to other vertebrate spalt-like genes both SALL2 and Msal-2 encode only three double zinc finger domains, the most carboxyterminal of which only distantly resembles spalt-like zinc fingers. The evolutionary conservation of SALL2/Msal-2 suggests that two lines of sal-like genes with presumably different functions arose from an early evolutionary duplication of a common ancestor gene. Msal-2 is expressed throughout embryonic development but also in adult tissues, predominantly in brain. However, the function of SALL2/Msal-2 still needs to be determined.",
"title": ""
},
{
"docid": "fc50b185323c45e3d562d24835e99803",
"text": "The neuropeptide calcitonin gene-related peptide (CGRP) is implicated in the underlying pathology of migraine by promoting the development of a sensitized state of primary and secondary nociceptive neurons. The ability of CGRP to initiate and maintain peripheral and central sensitization is mediated by modulation of neuronal, glial, and immune cells in the trigeminal nociceptive signaling pathway. There is accumulating evidence to support a key role of CGRP in promoting cross excitation within the trigeminal ganglion that may help to explain the high co-morbidity of migraine with rhinosinusitis and temporomandibular joint disorder. In addition, there is emerging evidence that CGRP facilitates and sustains a hyperresponsive neuronal state in migraineurs mediated by reported risk factors such as stress and anxiety. In this review, the significant role of CGRP as a modulator of the trigeminal system will be discussed to provide a better understanding of the underlying pathology associated with the migraine phenotype.",
"title": ""
},
{
"docid": "97f0e7c134d2852d0bfcfec63fb060d7",
"text": "Action selection is a fundamental decision process for us, and depends on the state of both our body and the environment. Because signals in our sensory and motor systems are corrupted by variability or noise, the nervous system needs to estimate these states. To select an optimal action these state estimates need to be combined with knowledge of the potential costs or rewards of different action outcomes. We review recent studies that have investigated the mechanisms used by the nervous system to solve such estimation and decision problems, which show that human behaviour is close to that predicted by Bayesian Decision Theory. This theory defines optimal behaviour in a world characterized by uncertainty, and provides a coherent way of describing sensorimotor processes.",
"title": ""
},
{
"docid": "f10ac6d718b07a22b798ef236454b806",
"text": "The capability to operate cloud-native applications can generate enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well standardized IaaS services. The reference model can be used for codifying cloud technologies. It can guide technology identification, classification, adoption, research and development processes for cloud-native application and for vendor lock-in aware enterprise architecture engineering methodologies.",
"title": ""
},
{
"docid": "3b113b9b299987677daa2bebc7e7bf03",
"text": "The restoration of endodontic tooth is always a challenge for the clinician, not only due to excessive loss of tooth structure but also invasion of the biological width due to large decayed lesions. In this paper, the 7 most common clinical scenarios in molars with class II lesions ever deeper were examined. This includes both the type of restoration (direct or indirect) and the management of the cavity margin, such as the need for deep margin elevation (DME) or crown lengthening. It is necessary to have the DME when the healthy tooth remnant is in the sulcus or at the epithelium level. For caries that reaches the connective tissue or the bone crest, crown lengthening is required. Endocrowns are a good treatment option in the endodontically treated tooth when the loss of structure is advanced.",
"title": ""
},
{
"docid": "dbc468368059e6b676c8ece22b040328",
"text": "In medical diagnoses and treatments, e.g., endoscopy, dosage transition monitoring, it is often desirable to wirelessly track an object that moves through the human GI tract. In this paper, we propose a magnetic localization and orientation system for such applications. This system uses a small magnet enclosed in the object to serve as excitation source, so it does not require the connection wire and power supply for the excitation signal. When the magnet moves, it establishes a static magnetic field around, whose intensity is related to the magnet's position and orientation. With the magnetic sensors, the magnetic intensities in some predetermined spatial positions can be detected, and the magnet's position and orientation parameters can be computed based on an appropriate algorithm. Here, we propose a real-time tracking system developed by a cubic magnetic sensor array made of Honeywell 3-axis magnetic sensors, HMC1043. Using some efficient software modules and calibration methods, the system can achieve satisfactory tracking accuracy if the cubic sensor array has enough number of 3-axis magnetic sensors. The experimental results show that the average localization error is 1.8 mm.",
"title": ""
},
{
"docid": "ffbebb5d8f4d269353f95596c156ba5c",
"text": "Decision trees and random forests are common classifiers with widespread use. In this paper, we develop two protocols for privately evaluating decision trees and random forests. We operate in the standard two-party setting where the server holds a model (either a tree or a forest), and the client holds an input (a feature vector). At the conclusion of the protocol, the client learns only the model’s output on its input and a few generic parameters concerning the model; the server learns nothing. The first protocol we develop provides security against semi-honest adversaries. Next, we show an extension of the semi-honest protocol that obtains one-sided security against malicious adversaries. We implement both protocols and show that both variants are able to process trees with several hundred decision nodes in just a few seconds and a modest amount of bandwidth. Compared to previous semi-honest protocols for private decision tree evaluation, we demonstrate tenfold improvements in computation and bandwidth.",
"title": ""
},
{
"docid": "440a6b8b41a98e392ec13a5e13d7e7ba",
"text": "A classical heuristic in software testing is to reward diversity, which implies that a higher priority must be assigned to test cases that differ the most from those already prioritized. This approach is commonly known as similarity-based test prioritization (SBTP) and can be realized using a variety of techniques. The objective of our study is to investigate whether SBTP is more effective at finding defects than random permutation, as well as determine which SBTP implementations lead to better results. To achieve our objective, we implemented five different techniques from the literature and conducted an experiment using the defects4j dataset, which contains 395 real faults from six real-world open-source Java programs. Findings indicate that running the most dissimilar test cases early in the process is largely more effective than random permutation (Vargha–Delaney A [VDA]: 0.76–0.99 observed using normalized compression distance). No technique was found to be superior with respect to the effectiveness. Locality-sensitive hashing was, to a small extent, less effective than other SBTP techniques (VDA: 0.38 observed in comparison to normalized compression distance), but its speed largely outperformed the other techniques (i.e., it was approximately 5–111 times faster). Our results bring to mind the well-known adage, “don’t put all your eggs in one basket”. To effectively consume a limited testing budget, one should spread it evenly across different parts of the system by running the most dissimilar test cases early in the testing process.",
"title": ""
},
{
"docid": "64ec8a9073308280740c96fb0c8b4617",
"text": "Lifting is a common manual material handling task performed in the workplaces. It is considered as one of the main risk factors for Work-related Musculoskeletal Disorders. To improve work place safety, it is necessary to assess musculoskeletal and biomechanical risk exposures associated with these tasks, which requires very accurate 3D pose. Existing approaches mainly utilize marker-based sensors to collect 3D information. However, these methods are usually expensive to setup, timeconsuming in process, and sensitive to the surrounding environment. In this study, we propose a multi-view based deep perceptron approach to address aforementioned limitations. Our approach consists of two modules: a \"view-specific perceptron\" network extracts rich information independently from the image of view, which includes both 2D shape and hierarchical texture information; while a \"multi-view integration\" network synthesizes information from all available views to predict accurate 3D pose. To fully evaluate our approach, we carried out comprehensive experiments to compare different variants of our design. The results prove that our approach achieves comparable performance with former marker-based methods, i.e. an average error of 14:72 ± 2:96 mm on the lifting dataset. The results are also compared with state-of-the-art methods on HumanEva- I dataset [1], which demonstrates the superior performance of our approach.",
"title": ""
},
{
"docid": "7bf0b158d9fa4e62b38b6757887c13ed",
"text": "Examinations are the most crucial section of any educational system. They are intended to measure student's knowledge, skills and aptitude. At any institute, a great deal of manual effort is required to plan and arrange examination. It includes making seating arrangement for students as well as supervision duty chart for invigilators. Many institutes performs this task manually using excel sheets. This results in excessive wastage of time and manpower. Automating the entire system can help solve the stated problem efficiently saving a lot of time. This paper presents the automatic exam seating allocation. It works in two modules First as, Students Seating Arrangement (SSA) and second as, Supervision Duties Allocation (SDA). It assigns the classrooms and the duties to the teachers in any institution. An input-output data is obtained from the real system which is found out manually by the organizers who set up the seating arrangement and chalk out the supervision duties. The results obtained using the real system and these two models are compared. The application shows that the modules are highly efficient, low-cost, and can be widely used in various colleges and universities.",
"title": ""
},
{
"docid": "78ee892fada4ec9ff860072d0d0ecbe3",
"text": "The popularity of FPGAs is rapidly growing due to the unique advantages that they offer. However, their distinctive features also raise new questions concerning the security and communication capabilities of an FPGA-based hardware platform. In this paper, we explore the some of the limits of FPGA side-channel communication. Specifically, we identify a previously unexplored capability that significantly increases both the potential benefits and risks associated with side-channel communication on an FPGA: an in-device receiver. We designed and implemented three new communication mechanisms: speed modulation, timing modulation and pin hijacking. These non-traditional interfacing techniques have the potential to provide reliable communication with an estimated maximum bandwidth of 3.3 bit/sec, 8 Kbits/sec, and 3.4 Mbits/sec, respectively.",
"title": ""
},
{
"docid": "2bb936db4a73e009a86e2bff45f88313",
"text": "Chimeric antigen receptors (CARs) have been used to redirect the specificity of autologous T cells against leukemia and lymphoma with promising clinical results. Extending this approach to allogeneic T cells is problematic as they carry a significant risk of graft-versus-host disease (GVHD). Natural killer (NK) cells are highly cytotoxic effectors, killing their targets in a non-antigen-specific manner without causing GVHD. Cord blood (CB) offers an attractive, allogeneic, off-the-self source of NK cells for immunotherapy. We transduced CB-derived NK cells with a retroviral vector incorporating the genes for CAR-CD19, IL-15 and inducible caspase-9-based suicide gene (iC9), and demonstrated efficient killing of CD19-expressing cell lines and primary leukemia cells in vitro, with marked prolongation of survival in a xenograft Raji lymphoma murine model. Interleukin-15 (IL-15) production by the transduced CB-NK cells critically improved their function. Moreover, iC9/CAR.19/IL-15 CB-NK cells were readily eliminated upon pharmacologic activation of the iC9 suicide gene. In conclusion, we have developed a novel approach to immunotherapy using engineered CB-derived NK cells, which are easy to produce, exhibit striking efficacy and incorporate safety measures to limit toxicity. This approach should greatly improve the logistics of delivering this therapy to large numbers of patients, a major limitation to current CAR-T-cell therapies.",
"title": ""
},
{
"docid": "dc7474e5e82f06eb1feb7c579fd713a7",
"text": "OBJECTIVE\nTo determine the current values and estimate the projected values (to the year 2041) for annual number of proximal femoral fractures (PFFs), age-adjusted rates of fracture, rates of death in the acute care setting, associated length of stay (LOS) in hospital, and seasonal variation by sex and age in elderly Canadians.\n\n\nDESIGN\nHospital discharge data for fiscal year 1993-94 from the Canadian Institute for Health Information were used to determine PFF incidence, and Statistics Canada population projections were used to estimate the rate and number of PFFs to 2041.\n\n\nSETTING\nCanada.\n\n\nPARTICIPANTS\nCanadian patients 65 years of age or older who underwent hip arthroplasty.\n\n\nOUTCOME MEASURES\nPFF rates, death rates and LOS by age, sex and province.\n\n\nRESULTS\nIn 1993-94 the incidence of PFF increased exponentially with increasing age. The age-adjusted rates were 479 per 100,000 for women and 187 per 100,000 for men. The number of PFFs was estimated at 23,375 (17,823 in women and 5552 in men), with a projected increase to 88,124 in 2041. The rate of death during the acute care stay increased exponentially with increasing age. The death rates for men were twice those for women. In 1993-94 an estimated 1570 deaths occurred in the acute care setting, and 7000 deaths were projected for 2041. LOS in the acute care setting increased with advancing age, as did variability in LOS, which suggests a more heterogeneous case mix with advancing age. The LOS for 1993-94 and 2041 was estimated at 465,000 and 1.8 million patient-days respectively. Seasonal variability in the incidence of PFFs by sex was not significant. Significant season-province interactions were seen (p < 0.05); however, the differences in incidence were small (on the order of 2% to 3%) and were not considered to have a large effect on resource use in the acute care setting.\n\n\nCONCLUSIONS\nOn the assumption that current conditions contributing to hip fractures will remain constant, the number of PFFs will rise exponentially over the next 40 years. The results of this study highlight the serious implications for Canadians if incidence rates are not reduced by some form of intervention.",
"title": ""
},
{
"docid": "23c00b95cbdc39bc040ea6c3e3e128d8",
"text": "Network Security is one of the important concepts in data security as the data to be uploaded should be made secure. To make data secure, there exist number of algorithms like AES (Advanced Encryption Standard), IDEA (International Data Encryption Algorithm) etc. These techniques of making the data secure come under Cryptography. Involving lnternet of Things (IoT) in Cryptography is an emerging domain. IoT can be defined as controlling things located at any part of the world via Internet. So, IoT involves data security i.e. Cryptography. Here, in this paper we discuss how data can be made secure for IoT using Cryptography.",
"title": ""
},
{
"docid": "35f2b171f4e8fbb469ef7198d8e2116e",
"text": "Recent advances in computer vision technologies have made possible the development of intelligent monitoring systems for video surveillance and ambientassisted living. By using this technology, these systems are able to automatically interpret visual data from the environment and perform tasks that would have been unthinkable years ago. These achievements represent a radical improvement but they also suppose a new threat to individual’s privacy. The new capabilities of such systems give them the ability to collect and index a huge amount of private information about each individual. Next-generation systems have to solve this issue in order to obtain the users’ acceptance. Therefore, there is a need for mechanisms or tools to protect and preserve people’s privacy. This paper seeks to clarify how privacy can be protected in imagery data, so as a main contribution a comprehensive classification of the protection methods for visual privacy as well as an up-to-date review of them are provided. A survey of the existing privacy-aware intelligent monitoring systems and a valuable discussion of important aspects of visual privacy are also provided.",
"title": ""
},
{
"docid": "c50230c77645234564ab51a11fcf49d1",
"text": "We present an image set classification algorithm based on unsupervised clustering of labeled training and unlabeled test data where labels are only used in the stopping criterion. The probability distribution of each class over the set of clusters is used to define a true set based similarity measure. To this end, we propose an iterative sparse spectral clustering algorithm. In each iteration, a proximity matrix is efficiently recomputed to better represent the local subspace structure. Initial clusters capture the global data structure and finer clusters at the later stages capture the subtle class differences not visible at the global scale. Image sets are compactly represented with multiple Grassmannian manifolds which are subsequently embedded in Euclidean space with the proposed spectral clustering algorithm. We also propose an efficient eigenvector solver which not only reduces the computational cost of spectral clustering by many folds but also improves the clustering quality and final classification results. Experiments on five standard datasets and comparison with seven existing techniques show the efficacy of our algorithm.",
"title": ""
},
{
"docid": "4e122b71c30c6c0721d5065adcf0b52c",
"text": "License plate recognition usually contains three steps, namely license plate detection/localization, character segmentation and character recognition. When reading characters on a license plate one by one after license plate detection step, it is crucial to accurately segment the characters. The segmentation step may be affected by many factors such as license plate boundaries (frames). The recognition accuracy will be significantly reduced if the characters are not properly segmented. This paper presents an efficient algorithm for character segmentation on a license plate. The algorithm follows the step that detects the license plates using an AdaBoost algorithm. It is based on an efficient and accurate skew and slant correction of license plates, and works together with boundary (frame) removal of license plates. The algorithm is efficient and can be applied in real-time applications. The experiments are performed to show the accuracy of segmentation.",
"title": ""
}
] |
scidocsrr
|
6b59d358b108eda94fcea4c866c3c13e
|
Energy-Efficient Power Control: A Look at 5G Wireless Technologies
|
[
{
"docid": "6a2d7b29a0549e99cdd31dbd2a66fc0a",
"text": "We consider data transmissions in a full duplex (FD) multiuser multiple-input multiple-output (MU-MIMO) system, where a base station (BS) bidirectionally communicates with multiple users in the downlink (DL) and uplink (UL) channels on the same system resources. The system model of consideration has been thought to be impractical due to the self-interference (SI) between transmit and receive antennas at the BS. Interestingly, recent advanced techniques in hardware design have demonstrated that the SI can be suppressed to a degree that possibly allows for FD transmission. This paper goes one step further in exploring the potential gains in terms of the spectral efficiency (SE) and energy efficiency (EE) that can be brought by the FD MU-MIMO model. Toward this end, we propose low-complexity designs for maximizing the SE and EE, and evaluate their performance numerically. For the SE maximization problem, we present an iterative design that obtains a locally optimal solution based on a sequential convex approximation method. In this way, the nonconvex precoder design problem is approximated by a convex program at each iteration. Then, we propose a numerical algorithm to solve the resulting convex program based on the alternating and dual decomposition approaches, where analytical expressions for precoders are derived. For the EE maximization problem, using the same method, we first transform it into a concave-convex fractional program, which then can be reformulated as a convex program using the parametric approach. We will show that the resulting problem can be solved similarly to the SE maximization problem. Numerical results demonstrate that, compared to a half duplex system, the FD system of interest with the proposed designs achieves a better SE and a slightly smaller EE when the SI is small.",
"title": ""
}
] |
[
{
"docid": "88488d730255a534d3255eb5884a69a6",
"text": "As Computer curricula have developed, Human-Computer Interaction has gradually become part of many of those curricula and the recent ACM/IEEE report on the core of Computing Science and Engineering, includes HumanComputer Interaction as one of the fundamental sub-areas that should be addressed by any such curricula. However, both technology and Human-Computer Interaction are evolving rapidly, thus a continuous effort is needed to maintain a program, bibliography and a set of practical assignments up to date and adapted to the current technology. This paper briefly presents an introductory course on Human-Computer Interaction offered to Electrical and Computer Engineering students at the University of Aveiro.",
"title": ""
},
{
"docid": "4b6b9539468db238d92e9762b2650b61",
"text": "The previous chapters gave an insightful introduction into the various facets of Business Process Management. We now share a rich understanding of the essential ideas behind designing and managing processes for organizational purposes. We have also learned about the various streams of research and development that have influenced contemporary BPM. As a matter of fact, BPM has become a holistic management discipline. As such, it requires that a plethora of facets needs to be addressed for its successful und sustainable application. This chapter provides a framework that consolidates and structures the essential factors that constitute BPM as a whole. Drawing from research in the field of maturity models, we suggest six core elements of BPM: strategic alignment, governance, methods, information technology, people, and culture. These six elements serve as the structure for this BPM Handbook. 1 Why Looking for BPM Core Elements? A recent global study by Gartner confirmed the significance of BPM with the top issue for CIOs identified for the sixth year in a row being the improvement of business processes (Gartner 2010). While such an interest in BPM is beneficial for professionals in this field, it also increases the expectations and the pressure to deliver on the promises of the process-centered organization. This context demands a sound understanding of how to approach BPM and a framework that decomposes the complexity of a holistic approach such as Business Process Management. A framework highlighting essential building blocks of BPM can particularly serve the following purposes: M. Rosemann (*) Information Systems Discipline, Faculty of Science and Technology, Queensland University of Technology, Brisbane, Australia e-mail: m.rosemann@qut.edu.au J. vom Brocke and M. Rosemann (eds.), Handbook on Business Process Management 1, International Handbooks on Information Systems, DOI 10.1007/978-3-642-00416-2_5, # Springer-Verlag Berlin Heidelberg 2010 107 l Project and Program Management: How can all relevant issues within a BPM approach be safeguarded? When implementing a BPM initiative, either as a project or as a program, is it essential to individually adjust the scope and have different BPM flavors in different areas of the organization? What competencies are relevant? What approach fits best with the culture and BPM history of the organization? What is it that needs to be taken into account “beyond modeling”? People for one thing play an important role like Hammer has pointed out in his chapter (Hammer 2010), but what might be further elements of relevance? In order to find answers to these questions, a framework articulating the core elements of BPM provides invaluable advice. l Vendor Management: How can service and product offerings in the field of BPM be evaluated in terms of their overall contribution to successful BPM? What portfolio of solutions is required to address the key issues of BPM, and to what extent do these solutions need to be sourced from outside the organization? There is, for example, a large list of providers of process-aware information systems, change experts, BPM training providers, and a variety of BPM consulting services. How can it be guaranteed that these offerings cover the required capabilities? In fact, the vast number of BPM offerings does not meet the requirements as distilled in this Handbook; see for example, Hammer (2010), Davenport (2010), Harmon (2010), and Rummler and Ramias (2010). 
It is also for the purpose of BPM make-or-buy decisions and the overall vendor management, that a framework structuring core elements of BPM is highly needed. l Complexity Management: How can the complexity that results from the holistic and comprehensive nature of BPM be decomposed so that it becomes manageable? How can a number of coexisting BPM initiatives within one organization be synchronized? An overarching picture of BPM is needed in order to provide orientation for these initiatives. Following a “divide-and-conquer” approach, a shared understanding of the core elements can help to focus on special factors of BPM. For each element, a specific analysis could be carried out involving experts from the various fields. Such an assessment should be conducted by experts with the required technical, business-oriented, and socio-cultural know-how. l Standards Management: What elements of BPM need to be standardized across the organization? What BPM elements need to be mandated for every BPM initiative? What BPM elements can be configured individually within each initiative? A comprehensive framework allows an element-by-element decision for the degrees of standardization that are required. For example, it might be decided that a company-wide process model repository will be “enforced” on all BPM initiatives, while performance management and cultural change will be decentralized activities. l Strategy Management: What is the BPM strategy of the organization? How does this strategy materialize in a BPM roadmap? How will the naturally limited attention of all involved stakeholders be distributed across the various BPM elements? How do we measure progression in a BPM initiative (“BPM audit”)? 108 M. Rosemann and J. vom Brocke",
"title": ""
},
{
"docid": "8de530a30b8352e36b72f3436f47ffb2",
"text": "This paper presents a Bayesian optimization method with exponential convergencewithout the need of auxiliary optimization and without the δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence [ 1] requires access to the δ-cover sampling, which was considered to be impractical [ 1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.",
"title": ""
},
{
"docid": "7190c91917d1e1280010c66139837568",
"text": "GPUs and accelerators have become ubiquitous in modern supercomputing systems. Scientific applications from a wide range of fields are being modified to take advantage of their compute power. However, data movement continues to be a critical bottleneck in harnessing the full potential of a GPU. Data in the GPU memory has to be moved into the host memory before it can be sent over the network. MPI libraries like MVAPICH2 have provided solutions to alleviate this bottleneck using techniques like pipelining. GPUDirect RDMA is a feature introduced in CUDA 5.0, that allows third party devices like network adapters to directly access data in GPU device memory, over the PCIe bus. NVIDIA has partnered with Mellanox to make this solution available for InfiniBand clusters. In this paper, we evaluate the first version of GPUDirect RDMA for InfiniBand and propose designs in MVAPICH2 MPI library to efficiently take advantage of this feature. We highlight the limitations posed by current generation architectures in effectively using GPUDirect RDMA and address these issues through novel designs in MVAPICH2. To the best of our knowledge, this is the first work to demonstrate a solution for internode GPU-to-GPU MPI communication using GPUDirect RDMA. Results show that the proposed designs improve the latency of internode GPU-to-GPU communication using MPI Send/MPI Recv by 69% and 32% for 4Byte and 128KByte messages, respectively. The designs boost the uni-directional bandwidth achieved using 4KByte and 64KByte messages by 2x and 35%, respectively. We demonstrate the impact of the proposed designs using two end-applications: LBMGPU and AWP-ODC. They improve the communication times in these applications by up to 35% and 40%, respectively.",
"title": ""
},
{
"docid": "5c05ad44ac2bf3fb26cea62d563435f8",
"text": "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.",
"title": ""
},
{
"docid": "194bea0d713d5d167e145e43b3c8b4e2",
"text": "Users can enjoy personalized services provided by various context-aware applications that collect users' contexts through sensor-equipped smartphones. Meanwhile, serious privacy concerns arise due to the lack of privacy preservation mechanisms. Currently, most mechanisms apply passive defense policies in which the released contexts from a privacy preservation system are always real, leading to a great probability with which an adversary infers the hidden sensitive contexts about the users. In this paper, we apply a deception policy for privacy preservation and present a novel technique, FakeMask, in which fake contexts may be released to provably preserve users' privacy. The output sequence of contexts by FakeMask can be accessed by the untrusted context-aware applications or be used to answer queries from those applications. Since the output contexts may be different from the original contexts, an adversary has greater difficulty in inferring the real contexts. Therefore, FakeMask limits what adversaries can learn from the output sequence of contexts about the user being in sensitive contexts, even if the adversaries are powerful enough to have the knowledge about the system and the temporal correlations among the contexts. The essence of FakeMask is a privacy checking algorithm which decides whether to release a fake context for the current context of the user. We present a novel privacy checking algorithm and an efficient one to accelerate the privacy checking process. Extensive evaluation experiments on real smartphone context traces of users demonstrate the improved performance of FakeMask over other approaches.",
"title": ""
},
{
"docid": "b04ae3842293f5f81433afbaa441010a",
"text": "Rootkits Trojan virus, which can control attacked computers, delete import files and even steal password, are much popular now. Interrupt Descriptor Table (IDT) hook is rootkit technology in kernel level of Trojan. The paper makes deeply analysis on the IDT hooks handle procedure of rootkit Trojan according to previous other researchers methods. We compare its IDT structure and programs to find how Trojan interrupt handler code can respond the interrupt vector request in both real address mode and protected address mode. Finally, we analyze the IDT hook detection methods of rootkits Trojan by Windbg or other professional tools.",
"title": ""
},
{
"docid": "d5bf84e6b391bee0bec00924ed788bf8",
"text": "In this paper, we explore the use of the Stellar Consensus Protocol (SCP) and its Federated Byzantine Agreement (FBA) algorithm for ensuring trust and reputation between federated, cloud-based platform instances (nodes) and their participants. Our approach is grounded on federated consensus mechanisms, which promise data quality managed through computational trust and data replication, without a centralized authority. We perform our experimentation on the ground of the NIMBLE cloud manufacturing platform, which is designed to support growth of B2B digital manufacturing communities and their businesses through federated platform services, managed by peer-to-peer networks. We discuss the message exchange flow between the NIMBLE application logic and Stellar consensus logic.",
"title": ""
},
{
"docid": "e4d58b9b8775f2a30bc15fceed9cd8bf",
"text": "Latency of interactive computer systems is a product of the processing, transport and synchronisation delays inherent to the components that create them. In a virtual environment (VE) system, latency is known to be detrimental to a user's sense of immersion, physical performance and comfort level. Accurately measuring the latency of a VE system for study or optimisation, is not straightforward. A number of authors have developed techniques for characterising latency, which have become progressively more accessible and easier to use. In this paper, we characterise these techniques. We describe a simple mechanical simulator designed to simulate a VE with various amounts of latency that can be finely controlled (to within 3ms). We develop a new latency measurement technique called Automated Frame Counting to assist in assessing latency using high speed video (to within 1ms). We use the mechanical simulator to measure the accuracy of Steed's and Di Luca's measurement techniques, proposing improvements where they may be made. We use the methods to measure latency of a number of interactive systems that may be of interest to the VE engineer, with a significant level of confidence. All techniques were found to be highly capable however Steed's Method is both accurate and easy to use without requiring specialised hardware.",
"title": ""
},
{
"docid": "87c3f3ab2c5c1e9a556ed6f467f613a9",
"text": "In this study, we apply learning-to-rank algorithms to design trading strategies using relative performance of a group of stocks based on investors’ sentiment toward these stocks. We show that learning-to-rank algorithms are effective in producing reliable rankings of the best and the worst performing stocks based on investors’ sentiment. More specifically, we use the sentiment shock and trend indicators introduced in the previous studies, and we design stock selection rules of holding long positions of the top 25% stocks and short positions of the bottom 25% stocks according to rankings produced by learning-to-rank algorithms. We then apply two learning-to-rank algorithms, ListNet and RankNet, in stock selection processes and test long-only and long-short portfolio selection strategies using 10 years of market and news sentiment data. Through backtesting of these strategies from 2006 to 2014, we demonstrate that our portfolio strategies produce risk-adjusted returns superior to the S&P500 index return, the hedge fund industry average performance HFRIEMN, and some sentiment-based approaches without learning-to-rank algorithm during the same period.",
"title": ""
},
{
"docid": "895f0424cb71c79b86ecbd11a4f2eb8e",
"text": "A chronic alcoholic who had also been submitted to partial gastrectomy developed a syndrome of continuous motor unit activity responsive to phenytoin therapy. There were signs of minimal distal sensorimotor polyneuropathy. Symptoms of the syndrome of continuous motor unit activity were fasciculation, muscle stiffness, myokymia, impaired muscular relaxation and percussion myotonia. Electromyography at rest showed fasciculation, doublets, triplets, multiplets, trains of repetitive discharges and myotonic discharges. Trousseau's and Chvostek's signs were absent. No abnormality of serum potassium, calcium, magnesium, creatine kinase, alkaline phosphatase, arterial blood gases and pH were demonstrated, but the serum Vitamin B12 level was reduced. The electrophysiological findings and muscle biopsy were compatible with a mixed sensorimotor polyneuropathy. Tests of neuromuscular transmission showed a significant decrement in the amplitude of the evoked muscle action potential in the abductor digiti minimi on repetitive nerve stimulation. These findings suggest that hyperexcitability and hyperactivity of the peripheral motor axons underlie the syndrome of continuous motor unit activity in the present case. Ein chronischer Alkoholiker, mit subtotaler Gastrectomie, litt an einem Syndrom dauernder Muskelfaseraktivität, das mit Diphenylhydantoin behandelt wurde. Der Patient wies minimale Störungen im Sinne einer distalen sensori-motorischen Polyneuropathie auf. Die Symptome dieses Syndroms bestehen in: Fazikulationen, Muskelsteife, Myokymien, eine gestörte Erschlaffung nach der Willküraktivität und eine Myotonie nach Beklopfen des Muskels. Das Elektromyogramm in Ruhe zeigt: Faszikulationen, Doublets, Triplets, Multiplets, Trains repetitiver Potentiale und myotonische Entladungen. Trousseau- und Chvostek-Zeichen waren nicht nachweisbar. Gleichzeitig lagen die Kalium-, Calcium-, Magnesium-, Kreatinkinase- und Alkalinphosphatase-Werte im Serumspiegel sowie O2, CO2 und pH des arteriellen Blutes im Normbereich. Aber das Niveau des Vitamin B12 im Serumspiegel war deutlich herabgesetzt. Die muskelbioptische und elektrophysiologische Veränderungen weisen auf eine gemischte sensori-motorische Polyneuropathie hin. Die Abnahme der Amplitude der evozierten Potentiale, vom M. abductor digiti minimi abgeleitet, bei repetitiver Reizung des N. ulnaris, stellten eine Störung der neuromuskulären Überleitung dar. Aufgrund unserer klinischen und elektrophysiologischen Befunde könnten wir die Hypererregbarkeit und Hyperaktivität der peripheren motorischen Axonen als Hauptmechanismus des Syndroms dauernder motorischer Einheitsaktivität betrachten.",
"title": ""
},
{
"docid": "c47b59ea14b86fa18e69074129af72ec",
"text": "Multiple networks naturally appear in numerous high-impact applications. Network alignment (i.e., finding the node correspondence across different networks) is often the very first step for many data mining tasks. Most, if not all, of the existing alignment methods are solely based on the topology of the underlying networks. Nonetheless, many real networks often have rich attribute information on nodes and/or edges. In this paper, we propose a family of algorithms FINAL to align attributed networks. The key idea is to leverage the node/edge attribute information to guide (topology-based) alignment process. We formulate this problem from an optimization perspective based on the alignment consistency principle, and develop effective and scalable algorithms to solve it. Our experiments on real networks show that (1) by leveraging the attribute information, our algorithms can significantly improve the alignment accuracy (i.e., up to a 30% improvement over the existing methods); (2) compared with the exact solution, our proposed fast alignment algorithm leads to a more than 10 times speed-up, while preserving a 95% accuracy; and (3) our on-query alignment method scales linearly, with an around 90% ranking accuracy compared with our exact full alignment method and a near real-time response time.",
"title": ""
},
{
"docid": "8d5759855079e2ddaab2e920b93ca2a3",
"text": "In a number of information security scenarios, human beings can be better than technical security measures at detecting threats. This is particularly the case when a threat is based on deception of the user rather than exploitation of a specific technical flaw, as is the case of spear-phishing, application spoofing, multimedia masquerading and other semantic social engineering attacks. Here, we put the concept of the human-as-a-security-sensor to the test with a first case study on a small number of participants subjected to different attacks in a controlled laboratory environment and provided with a mechanism to report these attacks if they spot them. A key challenge is to estimate the reliability of each report, which we address with a machine learning approach. For comparison, we evaluate the ability of known technical security countermeasures in detecting the same threats. This initial proof of concept study shows that the concept is viable.",
"title": ""
},
{
"docid": "0df3d30837edd0e7809ed77743a848db",
"text": "Many language processing tasks can be reduced to breaking the text into segments with prescribed properties. Such tasks include sentence splitting, tokenization, named-entity extraction, and chunking. We present a new model of text segmentation based on ideas from multilabel classification. Using this model, we can naturally represent segmentation problems involving overlapping and non-contiguous segments. We evaluate the model on entity extraction and noun-phrase chunking and show that it is more accurate for overlapping and non-contiguous segments, but it still performs well on simpler data sets for which sequential tagging has been the best method.",
"title": ""
},
{
"docid": "d168bdb3f1117aac53da1fbac0906887",
"text": "Enforcing open source licenses such as the GNU General Public License (GPL), analyzing a binary for possible vulnerabilities, and code maintenance are all situations where it is useful to be able to determine the source code provenance of a binary. While previous work has either focused on computing binary-to-binary similarity or source-to-source similarity, BinPro is the first work we are aware of to tackle the problem of source-to-binary similarity. BinPro can match binaries with their source code even without knowing which compiler was used to produce the binary, or what optimization level was used with the compiler. To do this, BinPro utilizes machine learning to compute optimal code features for determining binaryto-source similarity and a static analysis pipeline to extract and compute similarity based on those features. Our experiments show that on average BinPro computes a similarity of 81% for matching binaries and source code of the same applications, and an average similarity of 25% for binaries and source code of similar but different applications. This shows that BinPro’s similarity score is useful for determining if a binary was derived from a particular source code.",
"title": ""
},
{
"docid": "f7ef3c104fe6c5f082e7dd060a82c03e",
"text": "Research about the artificial muscle made of fishing lines or sewing threads, called the twisted and coiled polymer actuator (abbreviated as TCA in this paper) has collected many interests, recently. Since TCA has a specific power surpassing the human skeletal muscle theoretically, it is expected to be a new generation of the artificial muscle actuator. In order that the TCA is utilized as a useful actuator, this paper introduces the fabrication and the modeling of the temperature-controllable TCA. With an embedded micro thermistor, the TCA is able to measure temperature directly, and feedback control is realized. The safe range of the force and temperature for the continuous use of the TCA was identified through experiments, and the closed-loop temperature control is successfully performed without the breakage of TCA.",
"title": ""
},
{
"docid": "77812e38f7250bc23e5157554bb101bc",
"text": "PinOS is an extension of the Pin dynamic instrumentation framework for whole-system instrumentation, i.e., to instrument both kernel and user-level code. It achieves this by interposing between the subject system and hardware using virtualization techniques. Specifically, PinOS is built on top of the Xen virtual machine monitor with Intel VT technology to allow instrumentation of unmodified OSes. PinOS is based on software dynamic translation and hence can perform pervasive fine-grain instrumentation. By inheriting the powerful instrumentation API from Pin, plus introducing some new API for system-level instrumentation, PinOS can be used to write system-wide instrumentation tools for tasks like program analysis and architectural studies. As of today, PinOS can boot Linux on IA-32 in uniprocessor mode, and can instrument complex applications such as database and web servers.",
"title": ""
},
{
"docid": "98110985cd175f088204db452a152853",
"text": "We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion, produces photo-realistic results that we validate via a perceptual user study.",
"title": ""
},
{
"docid": "0ff90bc5ff6ecbb5c9d89902fce1fa0a",
"text": "Improving a decision maker’s1 situational awareness of the cyber domain isn’t greatly different than enabling situation awareness in more traditional domains2. Situation awareness necessitates working with processes capable of identifying domain specific activities as well as processes capable of identifying activities that cross domains. These processes depend on the context of the environment, the domains, and the goals and interests of the decision maker but they can be defined to support any domain. This chapter will define situation awareness in its broadest sense, describe our situation awareness reference and process models, describe some of the applicable processes, and identify a set of metrics usable for measuring the performance of a capability supporting situation awareness. These techniques are independent of domain but this chapter will also describe how they apply to the cyber domain. 2.1 What is Situation Awareness (SA)? One of the challenges in working in this area is that there are a multitude of definitions and interpretations concerning the answer to this simple question. A keyword search (executed on 8 April 2009) of ‘situation awareness’ on Google yields over 18,000,000 links the first page of which ranged from a Wikipedia page through the importance of “SA while driving” and ends with a link to a free internet radio show. Also on this first search page are several links to publications by Dr. Mica Endsley whose work in SA is arguably providing a standard for SA definitions and George P. Tadda and John S. Salerno, Air Force Research Laboratory Rome NY 1 Decision maker is used very loosely to describe anyone who uses information to make decisions within a complex dynamic environment. This is necessary because, as will be discussed, situation awareness is unique and dependant on the environment being considered, the context of the decision to be made, and the user of the information. 2 Traditional domains could include land, air, or sea. S. Jajodia et al., (eds.), Cyber Situational Awareness, 15 Advances in Information Security 46, DOI 10.1007/978-1-4419-0140-8 2, c © Springer Science+Business Media, LLC 2010 16 George P. Tadda and John S. Salerno techniques particularly for dynamic environments. In [5], Dr. Endsley provides a general definition of SA in dynamic environments: “Situation awareness is the perception of the elements of the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future.” Also in [5], Endsley differentiates between situation awareness, “a state of knowledge”, and situation assessment, “process of achieving, acquiring, or maintaining SA.” This distinction becomes exceedingly important when trying to apply computer automation to SA. Since situation awareness is “a state of knowledge”, it resides primarily in the minds of humans (cognitive), while situation assessment as a process or set of processes lends itself to automated techniques. Endsley goes on to note that: “SA, decision making, and performance are different stages with different factors influencing them and with wholly different approaches for dealing with each of them; thus it is important to treat these constructs separately.” The “stages” that Endsley defines have a direct correlation with Boyd’s ubiquitous OODA loop with SA relating to Observe and Orient, decision making to Decide, and performance to Act. 
We’ll see these stages as well as Endsley’s three “levels” of SA (perception, comprehension, and projection) manifest themselves again throughout this discussion. As first mentioned, there are several definitions for SA, from the Army Field Manual 1-02 (September 2004), Situational Awareness is: “Knowledge and understanding of the current situation which promotes timely, relevant and accurate assessment of friendly, competitive and other operations within the battlespace in order to facilitate decision making. An informational perspective and skill that fosters an ability to determine quickly the context and relevance of events that are unfolding.”",
"title": ""
}
] |
scidocsrr
|
ae969b4380a452408f920e23e7508508
|
Implementing Gender-Dependent Vowel-Level Analysis for Boosting Speech-Based Depression Recognition
|
[
{
"docid": "b66be42a294208ec31d44e57ae434060",
"text": "Emotional speech recognition aims to automatically classify speech units (e.g., utterances) into emotional states, such as anger, happiness, neutral, sadness and surprise. The major contribution of this paper is to rate the discriminating capability of a set of features for emotional speech recognition when gender information is taken into consideration. A total of 87 features has been calculated over 500 utterances of the Danish Emotional Speech database. The Sequential Forward Selection method (SFS) has been used in order to discover the 5-10 features which are able to classify the samples in the best way for each gender. The criterion used in SFS is the crossvalidated correct classification rate of a Bayes classifier where the class probability distribution functions (pdfs) are approximated via Parzen windows or modeled as Gaussians. When a Bayes classifier with Gaussian pdfs is employed, a correct classification rate of 61.1% is obtained for male subjects and a corresponding rate of 57.1% for female ones. In the same experiment, a random Emotional speech recognition aims to automatically classify speech units (e.g., utterances) into emotional states, such as anger, happiness, neutral, sadness and surprise. The major contribution of this paper is to rate the discriminating capability of a set of features for emotional speech recognition when gender information is taken into consideration. A total of 87 features has been calculated over 500 utterances of the Danish Emotional Speech database. The Sequential Forward Selection method (SFS) has been used in order to discover the 5-10 features which are able to classify the samples in the best way for each gender. The criterion used in SFS is the crossvalidated correct classification rate of a Bayes classifier where the class probability distribution functions (pdfs) are approximated via Parzen windows or modeled as Gaussians. When a Bayes classifier with Gaussian PDFs is employed, a correct classification rate of 61.1% is obtained for male subjects and a corresponding rate of 57.1% for female ones. In the same experiment, a random classification would result in a correct classification rate of 20%. When gender information is not considered a correct classification score of 50.6% is obtained.classification would result in a correct classification rate of 20%. When gender information is not considered a correct classification score of 50.6% is obtained.",
"title": ""
},
{
"docid": "80bf80719a1751b16be2420635d34455",
"text": "Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence, arousal and dominance. In addition to structured self-report questionnaires, psychologists and psychiatrists use in their evaluation of a patient's level of depression the observation of facial expressions and vocal cues. It is in this context that we present the fourth Audio-Visual Emotion recognition Challenge (AVEC 2014). This edition of the challenge uses a subset of the tasks used in a previous challenge, allowing for more focussed studies. In addition, labels for a third dimension (Dominance) have been added and the number of annotators per clip has been increased to a minimum of three, with most clips annotated by 5. The challenge has two goals logically organised as sub-challenges: the first is to predict the continuous values of the affective dimensions valence, arousal and dominance at each moment in time. The second is to predict the value of a single self-reported severity of depression indicator for each recording in the dataset. This paper presents the challenge guidelines, the common data used, and the performance of the baseline system on the two tasks.",
"title": ""
}
] |
[
{
"docid": "6e4d8bde993e88fa2c729d2fafb6fd90",
"text": "The plant hormones gibberellin and abscisic acid regulate gene expression, secretion and cell death in aleurone. The emerging picture is of gibberellin perception at the plasma membrane whereas abscisic acid acts at both the plasma membrane and in the cytoplasm - although gibberellin and abscisic acid receptors have yet to be identified. A range of downstream-signalling components and events has been implicated in gibberellin and abscisic acid signalling in aleurone. These include the Galpha subunit of a heterotrimeric G protein, a transient elevation in cGMP, Ca2+-dependent and Ca2+-independent events in the cytoplasm, reversible protein phosphory-lation, and several promoter cis-elements and transcription factors, including GAMYB. In parallel, molecular genetic studies on mutants of Arabidopsis that show defects in responses to these hormones have identified components of gibberellin and abscisic acid signalling. These two approaches are yielding results that raise the possibility that specific gibberellin and abscisic acid signalling components perform similar functions in aleurone and other tissues.",
"title": ""
},
{
"docid": "45d551e2d813c37e032b90799c71f4c1",
"text": "A process is described to produce single sheets of functionalized graphene through thermal exfoliation of graphite oxide. The process yields a wrinkled sheet structure resulting from reaction sites involved in oxidation and reduction processes. The topological features of single sheets, as measured by atomic force microscopy, closely match predictions of first-principles atomistic modeling. Although graphite oxide is an insulator, functionalized graphene produced by this method is electrically conducting.",
"title": ""
},
{
"docid": "6c1d3eb9d3e39b25f32b77942b04d165",
"text": "The aim of this study is to investigate the factors influencing the consumer acceptance of mobile banking in Bangladesh. The demographic, attitudinal, and behavioural characteristics of mobile bank users were examined. 292 respondents from seven major mobile financial service users of different mobile network operators participated in the consumer survey. Infrastructural facility, selfcontrol, social influence, perceived risk, ease of use, need for interaction, perceived usefulness, and customer service were found to influence consumer attitudes towards mobile banking services. The infrastructural facility of updated user friendly technology and its availability was found to be the most important factor that motivated consumers’ attitudes in Bangladesh towards mobile banking. The sample size was not necessarily representative of the Bangladeshi population as a whole as it ignored large rural population. This study identified two additional factors i.e. infrastructural facility and customer service relevant to mobile banking that were absent in previous researches. By addressing the concerns of and benefits sought by the consumers, marketers can create positive attractions and policy makers can set regulations for the expansion of mobile banking services in Bangladesh. This study offers an insight into mobile banking in Bangladesh focusing influencing factors, which has not previously been investigated.",
"title": ""
},
{
"docid": "7113e007073184671d0bf5c9bdda1f5c",
"text": "It is widely accepted that mineral flotation is a very challenging control problem due to chaotic nature of process. This paper introduces a novel approach of combining multi-camera system and expert controllers to improve flotation performance. The system has been installed into the zinc circuit of Pyhäsalmi Mine (Finland). Long-term data analysis in fact shows that the new approach has improved considerably the recovery of the zinc circuit, resulting in a substantial increase in the mill’s annual profit. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7d1a7bc7809a578cd317dfb8ba5b7678",
"text": "In this paper, we introduce a new technology, which allows people to share taste and smell sensations digitally with a remote person through existing networking technologies such as the Internet. By introducing this technology, we expect people to share their smell and taste experiences with their family and friends remotely. Sharing these senses are immensely beneficial since those are strongly associated with individual memories, emotions, and everyday experiences. As the initial step, we developed a control system, an actuator, which could digitally stimulate the sense of taste remotely. The system uses two approaches to stimulate taste sensations digitally: the electrical and thermal stimulations on tongue. Primary results suggested that sourness and saltiness are the main sensations that could be evoked through this device. Furthermore, this paper focuses on future aspects of such technology for remote smell actuation followed by applications and possibilities for further developments.",
"title": ""
},
{
"docid": "e6ca4f592446163124bcf00f87ccb8df",
"text": "A full-vector beam propagation method based on a finite-element scheme for a helicoidal system is developed. The permittivity and permeability tensors of a straight waveguide are replaced with equivalent ones for a helicoidal system, obtained by transformation optics. A cylindrical, perfectly matched layer is implemented for the absorbing boundary condition. To treat wide-angle beam propagation, a second-order differentiation term with respect to the propagation direction is directly discretized without using a conventional Padé approximation. The transmission spectra of twisted photonic crystal fibers are thoroughly investigated, and it is found that the diameters of the air holes greatly affect the spectra. The calculated results are in good agreement with the recently reported measured results, showing the validity and usefulness of the method developed here.",
"title": ""
},
{
"docid": "a6a7770857964e96f98bd4021d38f59f",
"text": "During human evolutionary history, there were \"trade-offs\" between expending time and energy on child-rearing and mating, so both men and women evolved conditional mating strategies guided by cues signaling the circumstances. Many short-term matings might be successful for some men; others might try to find and keep a single mate, investing their effort in rearing her offspring. Recent evidence suggests that men with features signaling genetic benefits to offspring should be preferred by women as short-term mates, but there are trade-offs between a mate's genetic fitness and his willingness to help in child-rearing. It is these circumstances and the cues that signal them that underlie the variation in short- and long-term mating strategies between and within the sexes.",
"title": ""
},
{
"docid": "b9f7c203717e6d2b0677b51f55e614f2",
"text": "This paper demonstrates a computer-aided diagnosis (CAD) system for lung cancer classification of CT scans with unmarked nodules, a dataset from the Kaggle Data Science Bowl 2017. Thresholding was used as an initial segmentation approach to segment out lung tissue from the rest of the CT scan. Thresholding produced the next best lung segmentation. The initial approach was to directly feed the segmented CT scans into 3D CNNs for classification, but this proved to be inadequate. Instead, a modified U-Net trained on LUNA16 data (CT scans with labeled nodules) was used to first detect nodule candidates in the Kaggle CT scans. The U-Net nodule detection produced many false positives, so regions of CTs with segmented lungs where the most likely nodule candidates were located as determined by the U-Net output were fed into 3D Convolutional Neural Networks (CNNs) to ultimately classify the CT scan as positive or negative for lung cancer. The 3D CNNs produced a test set Accuracy of 86.6%. The performance of our CAD system outperforms the current CAD systems in literature which have several training and testing phases that each requires a lot of labeled data, while our CAD system has only three major phases (segmentation, nodule candidate detection, and malignancy classification), allowing more efficient training and detection and more generalizability to other cancers. Keywords—Lung Cancer; Computed Tomography; Deep Learning; Convolutional Neural Networks; Segmentation.",
"title": ""
},
{
"docid": "883a22f7036514d87ce3af86b5853de3",
"text": "A wideband integrated RF duplexer supports 3G/4G bands I, II, III, IV, and IX, and achieves a TX-to-RX isolation of more than 55dB in the transmit-band, and greater than 45dB in the corresponding receive-band across 200MHz of bandwidth. A 65nm CMOS duplexer/LNA achieves a transmit insertion loss of 2.5dB, and a cascaded receiver noise figure of 5dB with more than 27dB of gain, exceeding the commercial external duplexers performance at considerably lower cost and area.",
"title": ""
},
{
"docid": "1f8128a4a525f32099d4fefe4bea1212",
"text": "Information overload on the Web has created enormous challenges to customers selecting products for online purchases and to online businesses attempting to identify customers’ preferences efficiently. Various recommender systems employing different data representations and recommendation methods are currently used to address these challenges. In this research, we developed a graph model that provides a generic data representation and can support different recommendation methods. To demonstrate its usefulness and flexibility, we developed three recommendation methods: direct retrieval, association mining, and high-degree association retrieval. We used a data set from an online bookstore as our research test-bed. Evaluation results showed that combining product content information and historical customer transaction information achieved more accurate predictions and relevant recommendations than using only collaborative information. However, comparisons among different methods showed that high-degree association retrieval did not perform significantly better than the association mining method or the direct retrieval method in our test-bed.",
"title": ""
},
{
"docid": "16fec520bf539ab23a5164ffef5561b4",
"text": "This article traces the major trends in TESOL methods in the past 15 years. It focuses on the TESOL profession’s evolving perspectives on language teaching methods in terms of three perceptible shifts: (a) from communicative language teaching to task-based language teaching, (b) from method-based pedagogy to postmethod pedagogy, and (c) from systemic discovery to critical discourse. It is evident that during this transitional period, the profession has witnessed a heightened awareness about communicative and task-based language teaching, about the limitations of the concept of method, about possible postmethod pedagogies that seek to address some of the limitations of method, about the complexity of teacher beliefs that inform the practice of everyday teaching, and about the vitality of the macrostructures—social, cultural, political, and historical—that shape the microstructures of the language classroom. This article deals briefly with the changes and challenges the trend-setting transition seems to be bringing about in the profession’s collective thought and action.",
"title": ""
},
{
"docid": "8a22f454a657768a3d5fd6e6ec743f5f",
"text": "In recent years, deep learning techniques have been developed to improve the performance of program synthesis from input-output examples. Albeit its significant progress, the programs that can be synthesized by state-of-the-art approaches are still simple in terms of their complexity. In this work, we move a significant step forward along this direction by proposing a new class of challenging tasks in the domain of program synthesis from input-output examples: learning a context-free parser from pairs of input programs and their parse trees. We show that this class of tasks are much more challenging than previously studied tasks, and the test accuracy of existing approaches is almost 0%. We tackle the challenges by developing three novel techniques inspired by three novel observations, which reveal the key ingredients of using deep learning to synthesize a complex program. First, the use of a non-differentiable machine is the key to effectively restrict the search space. Thus our proposed approach learns a neural program operating a domain-specific non-differentiable machine. Second, recursion is the key to achieve generalizability. Thus, we bake-in the notion of recursion in the design of our non-differentiable machine. Third, reinforcement learning is the key to learn how to operate the non-differentiable machine, but it is also hard to train the model effectively with existing reinforcement learning algorithms from a cold boot. We develop a novel two-phase reinforcement learningbased search algorithm to overcome this issue. In our evaluation, we show that using our novel approach, neural parsing programs can be learned to achieve 100% test accuracy on test inputs that are 500× longer than the training samples.",
"title": ""
},
{
"docid": "6e9edeffb12cf8e50223a933885bcb7c",
"text": "Reversible data hiding in encrypted images (RDHEI) is an effective technique to embed data in the encrypted domain. An original image is encrypted with a secret key and during or after its transmission, it is possible to embed additional information in the encrypted image, without knowing the encryption key or the original content of the image. During the decoding process, the secret message can be extracted and the original image can be reconstructed. In the last few years, RDHEI has started to draw research interest. Indeed, with the development of cloud computing, data privacy has become a real issue. However, none of the existing methods allow us to hide a large amount of information in a reversible manner. In this paper, we propose a new reversible method based on MSB (most significant bit) prediction with a very high capacity. We present two approaches, these are: high capacity reversible data hiding approach with correction of prediction errors and high capacity reversible data hiding approach with embedded prediction errors. With this method, regardless of the approach used, our results are better than those obtained with current state of the art methods, both in terms of reconstructed image quality and embedding capacity.",
"title": ""
},
{
"docid": "d34be0ce0f9894d6e219d12630166308",
"text": "The need for curricular reform in K-4 mathematics is clear. Such reform must address both the content and emphasis of the curriculum as well as approaches to instruction. A longstanding preoccupation with computation and other traditional skills has dominated both what mathematics is taught and the way mathematics is taught at this level. As a result, the present K-4 curriculum is narrow in scope; fails to foster mathematical insight, reasoning, and problem solving; and emphasizes rote activities. Even more significant is that children begin to lose their belief that learning mathematics is a sense-making experience. They become passive receivers of rules and procedures rather than active participants in creating knowledge.",
"title": ""
},
{
"docid": "9ffb4220530a4758ea6272edf6e7e531",
"text": "Process mining allows analysts to exploit logs of historical executions of business processes to extract insights regarding the actual performance of these processes. One of the most widely studied process mining operations is automated process discovery. An automated process discovery method takes as input an event log, and produces as output a business process model that captures the control-flow relations between tasks that are observed in or implied by the event log. Various automated process discovery methods have been proposed in the past two decades, striking different tradeoffs between scalability, accuracy, and complexity of the resulting models. However, these methods have been evaluated in an ad-hoc manner, employing different datasets, experimental setups, evaluation measures, and baselines, often leading to incomparable conclusions and sometimes unreproducible results due to the use of closed datasets. This article provides a systematic review and comparative evaluation of automated process discovery methods, using an open-source benchmark and covering 12 publicly-available real-life event logs, 12 proprietary real-life event logs, and nine quality metrics. The results highlight gaps and unexplored tradeoffs in the field, including the lack of scalability of some methods and a strong divergence in their performance with respect to the different quality metrics used.",
"title": ""
},
{
"docid": "fe0fa94ce6f02626fca12f21b60bec46",
"text": "Solid waste management (SWM) is a major public health and environmental concern in urban areas of many developing countries. Nairobi’s solid waste situation, which could be taken to generally represent the status which is largely characterized by low coverage of solid waste collection, pollution from uncontrolled dumping of waste, inefficient public services, unregulated and uncoordinated private sector and lack of key solid waste management infrastructure. This paper recapitulates on the public-private partnership as the best system for developing countries; challenges, approaches, practices or systems of SWM, and outcomes or advantages to the approach; the literature review focuses on surveying information pertaining to existing waste management methodologies, policies, and research relevant to the SWM. Information was sourced from peer-reviewed academic literature, grey literature, publicly available waste management plans, and through consultation with waste management professionals. Literature pertaining to SWM and municipal solid waste minimization, auditing and management were searched for through online journal databases, particularly Web of Science, and Science Direct. Legislation pertaining to waste management was also researched using the different databases. Additional information was obtained from grey literature and textbooks pertaining to waste management topics. After conducting preliminary research, prevalent references of select sources were identified and scanned for additional relevant articles. Research was also expanded to include literature pertaining to recycling, composting, education, and case studies; the manuscript summarizes with future recommendationsin terms collaborations of public/ private patternships, sensitization of people, privatization is important in improving processes and modernizing urban waste management, contract private sector, integrated waste management should be encouraged, provisional government leaders need to alter their mind set, prepare a strategic, integrated SWM plan for the cities, enact strong and adequate legislation at city and national level, evaluate the real impacts of waste management systems, utilizing locally based solutions for SWM service delivery and design, location, management of the waste collection centersand recycling and compositing activities should be",
"title": ""
},
{
"docid": "945dea6576c6131fc33cd14e5a2a0be8",
"text": "■ This article recounts the development of radar signal processing at Lincoln Laboratory. The Laboratory’s significant efforts in this field were initially driven by the need to provide detected and processed signals for air and ballistic missile defense systems. The first processing work was on the Semi-Automatic Ground Environment (SAGE) air-defense system, which led to algorithms and techniques for detection of aircraft in the presence of clutter. This work was quickly followed by processing efforts in ballistic missile defense, first in surface-acoustic-wave technology, in concurrence with the initiation of radar measurements at the Kwajalein Missile Range, and then by exploitation of the newly evolving technology of digital signal processing, which led to important contributions for ballistic missile defense and Federal Aviation Administration applications. More recently, the Laboratory has pursued the computationally challenging application of adaptive processing for the suppression of jamming and clutter signals. This article discusses several important programs in these areas.",
"title": ""
},
{
"docid": "8d4fdbdd76085391f2a80022f130459e",
"text": "Recently completed whole-genome sequencing projects marked the transition from gene-based phylogenetic studies to phylogenomics analysis of entire genomes. We developed an algorithm MGRA for reconstructing ancestral genomes and used it to study the rearrangement history of seven mammalian genomes: human, chimpanzee, macaque, mouse, rat, dog, and opossum. MGRA relies on the notion of the multiple breakpoint graphs to overcome some limitations of the existing approaches to ancestral genome reconstructions. MGRA also generates the rearrangement-based characters guiding the phylogenetic tree reconstruction when the phylogeny is unknown.",
"title": ""
},
{
"docid": "456327904250958baace54bde107f0f7",
"text": "Dependability on AI models is of utmost importance to ensure full acceptance of the AI systems. One of the key aspects of the dependable AI system is to ensure that all its decisions are fair and not biased towards any individual. In this paper, we address the problem of detecting whether a model has an individual discrimination. Such a discrimination exists when two individuals who differ only in the values of their protected attributes (such as, gender/race) while the values of their non-protected ones are exactly the same, get different decisions. Measuring individual discrimination requires an exhaustive testing, which is infeasible for a nontrivial system. In this paper, we present an automated technique to generate test inputs, which is geared towards finding individual discrimination. Our technique combines the wellknown technique called symbolic execution along with the local explainability for generation of effective test cases. Our experimental results clearly demonstrate that our technique produces 3.72 times more successful test cases than the existing state-of-the-art across all our chosen benchmarks.",
"title": ""
},
{
"docid": "7a1f244aae5f28cd9fb2d5ba54113c28",
"text": "Next generation sequencing (NGS) technology has revolutionized genomic and genetic research. The pace of change in this area is rapid with three major new sequencing platforms having been released in 2011: Ion Torrent’s PGM, Pacific Biosciences’ RS and the Illumina MiSeq. Here we compare the results obtained with those platforms to the performance of the Illumina HiSeq, the current market leader. In order to compare these platforms, and get sufficient coverage depth to allow meaningful analysis, we have sequenced a set of 4 microbial genomes with mean GC content ranging from 19.3 to 67.7%. Together, these represent a comprehensive range of genome content. Here we report our analysis of that sequence data in terms of coverage distribution, bias, GC distribution, variant detection and accuracy. Sequence generated by Ion Torrent, MiSeq and Pacific Biosciences technologies displays near perfect coverage behaviour on GC-rich, neutral and moderately AT-rich genomes, but a profound bias was observed upon sequencing the extremely AT-rich genome of Plasmodium falciparum on the PGM, resulting in no coverage for approximately 30% of the genome. We analysed the ability to call variants from each platform and found that we could call slightly more variants from Ion Torrent data compared to MiSeq data, but at the expense of a higher false positive rate. Variant calling from Pacific Biosciences data was possible but higher coverage depth was required. Context specific errors were observed in both PGM and MiSeq data, but not in that from the Pacific Biosciences platform. All three fast turnaround sequencers evaluated here were able to generate usable sequence. However there are key differences between the quality of that data and the applications it will support.",
"title": ""
}
] |
scidocsrr
|
d234c5f58bdf816d4e53862e5714cf5c
|
How Random Walks Can Help Tourism
|
[
{
"docid": "ae9469b80390e5e2e8062222423fc2cd",
"text": "Social media such as those residing in the popular photo sharing websites is attracting increasing attention in recent years. As a type of user-generated data, wisdom of the crowd is embedded inside such social media. In particular, millions of users upload to Flickr their photos, many associated with temporal and geographical information. In this paper, we investigate how to rank the trajectory patterns mined from the uploaded photos with geotags and timestamps. The main objective is to reveal the collective wisdom recorded in the seemingly isolated photos and the individual travel sequences reflected by the geo-tagged photos. Instead of focusing on mining frequent trajectory patterns from geo-tagged social media, we put more effort into ranking the mined trajectory patterns and diversifying the ranking results. Through leveraging the relationships among users, locations and trajectories, we rank the trajectory patterns. We then use an exemplar-based algorithm to diversify the results in order to discover the representative trajectory patterns. We have evaluated the proposed framework on 12 different cities using a Flickr dataset and demonstrated its effectiveness.",
"title": ""
},
{
"docid": "51d950dfb9f71b9c8948198c147b9884",
"text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.",
"title": ""
}
] |
[
{
"docid": "573fd558864a9c05fef5935a6074c3bc",
"text": "Recurrent Neural Networks (RNNs) play a major role in the field of sequential learning, and have outperformed traditional algorithms on many benchmarks. Training deep RNNs still remains a challenge, and most of the state-of-the-art models are structured with a transition depth of 2-4 layers. Recurrent Highway Networks (RHNs) were introduced in order to tackle this issue. These have achieved state-of-the-art performance on a few benchmarks using a depth of 10 layers. However, the performance of this architecture suffers from a bottleneck, and ceases to improve when an attempt is made to add more layers. In this work, we analyze the causes for this, and postulate that the main source is the way that the information flows through time. We introduce a novel and simple variation for the RHN cell, called Highway State Gating (HSG), which allows adding more layers, while continuing to improve performance. By using a gating mechanism for the state, we allow the net to ”choose” whether to pass information directly through time, or to gate it. This mechanism also allows the gradient to back-propagate directly through time and, therefore, results in a slightly faster convergence. We use the Penn Treebank (PTB) dataset as a platform for empirical proof of concept. Empirical results show that the improvement due to Highway State Gating is for all depths, and as the depth increases, the improvement also increases.",
"title": ""
},
{
"docid": "0b29e6813c08637d8df1a472e0e323b6",
"text": "A significant number of promising applications for vehicular ad hoc networks (VANETs) are becoming a reality. Most of these applications require a variety of heterogenous content to be delivered to vehicles and to their on-board users. However, the task of content delivery in such dynamic and large-scale networks is easier said than done. In this article, we propose a classification of content delivery solutions applied to VANETs while highlighting their new characteristics and describing their underlying architectural design. First, the two fundamental building blocks that are part of an entire content delivery system are identified: replica allocation and content delivery. The related solutions are then classified according to their architectural definition. Within each category, solutions are described based on the techniques and strategies that have been adopted. As result, we present an in-depth discussion on the architecture, techniques, and strategies adopted by studies in the literature that tackle problems related to vehicular content delivery networks.",
"title": ""
},
{
"docid": "1d1f14cb78693e56d014c89eacfcc3ef",
"text": "We undertook a meta-analysis of six Crohn's disease genome-wide association studies (GWAS) comprising 6,333 affected individuals (cases) and 15,056 controls and followed up the top association signals in 15,694 cases, 14,026 controls and 414 parent-offspring trios. We identified 30 new susceptibility loci meeting genome-wide significance (P < 5 × 10−8). A series of in silico analyses highlighted particular genes within these loci and, together with manual curation, implicated functionally interesting candidate genes including SMAD3, ERAP2, IL10, IL2RA, TYK2, FUT2, DNMT3A, DENND1B, BACH2 and TAGAP. Combined with previously confirmed loci, these results identify 71 distinct loci with genome-wide significant evidence for association with Crohn's disease.",
"title": ""
},
{
"docid": "eee9b5301c83faf4fe8fd786f0d99efd",
"text": "We present a named entity recognition and classification system that uses only probabilistic character-level features. Classifications by multiple orthographic tries are combined in a hidden Markov model framework to incorporate both internal and contextual evidence. As part of the system, we perform a preprocessing stage in which capitalisation is restored to sentence-initial and all-caps words with high accuracy. We report f-values of 86.65 and 79.78 for English, and 50.62 and 54.43 for the German datasets.",
"title": ""
},
{
"docid": "408d3db3b2126990611fdc3a62a985ea",
"text": "Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.",
"title": ""
},
{
"docid": "62a7c4bd564a7741cd966f3e11487236",
"text": "This paper presents an implementation method for the people counting system which detects and tracks moving people using a fixed single camera. The main contribution of this paper is the novel head detection method based on body’s geometry. A novel body descriptor is proposed for finding people’s head which is defined as Body Feature Rectangle (BFR). First, a vertical projection method is used to get the line which divides touching persons into individuals. Second, a special inscribed rectangle is found to locate the neck position which describes the torso area. Third, locations of people’s heads can be got according to its neck-positions. Last, a robust counting method named MEA is proposed to get the real counts of walking people flows. The proposed method can divide the multiple-people image into individuals whatever people merge with each other or not. Moreover, the passing people can be counted accurately under the influence of wearing hats. Experimental results show that our proposed method can nearly reach to an accuracy of 100% if the number of a people-merging pattern is less than six. Keywords-People Counting; Head Detection; BFR; People-flow Tracking",
"title": ""
},
{
"docid": "bc3c7f4fb6d9a2fd12fb702a69a35b23",
"text": "Vestibular migraine is a chameleon among the episodic vertigo syndromes because considerable variation characterizes its clinical manifestation. The attacks may last from seconds to days. About one-third of patients presents with monosymptomatic attacks of vertigo or dizziness without headache or other migrainous symptoms. During attacks most patients show spontaneous or positional nystagmus and in the attack-free interval minor ocular motor and vestibular deficits. Women are significantly more often affected than men. Symptoms may begin at any time in life, with the highest prevalence in young adults and between the ages of 60 and 70. Over the last 10 years vestibular migraine has evolved into a medical entity in dizziness units. It is the most common cause of spontaneous recurrent episodic vertigo and accounts for approximately 10% of patients with vertigo and dizziness. Its broad spectrum poses a diagnostic problem of how to rule out Menière's disease or vestibular paroxysmia. Vestibular migraine should be included in the International Headache Classification of Headache Disorders (ICHD) as a subcategory of migraine. It should, however, be kept separate and distinct from basilar-type migraine and benign paroxysmal vertigo of childhood. We prefer the term \"vestibular migraine\" to \"migrainous vertigo,\" because the latter may also refer to various vestibular and non-vestibular symptoms. Antimigrainous medication to treat the single attack and to prevent recurring attacks appears to be effective, but the published evidence is weak. A randomized, double-blind, placebo-controlled study is required to evaluate medical treatment of this condition.",
"title": ""
},
{
"docid": "5f1a273e8419836388faa49df63330c4",
"text": "In this paper, the traditional k-modes clustering algorithm is extended by weighting attribute value matches in dissimilarity computation. The use of attribute value weighting technique makes it possible to generate clusters with stronger intra-similarities, and therefore achieve better clustering performance. Experimental results on real life datasets show that these value weighting based k-modes algorithms are superior to the standard k-modes algorithm with respect to clustering accuracy.",
"title": ""
},
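The value-weighted dissimilarity described in the abstract above can be illustrated with a short sketch. This is not the authors' implementation; the specific weighting (a match costs 1 minus the within-cluster frequency of the matched value) and the helper names are assumptions chosen only to show the general idea.

```python
def weighted_kmodes_dissimilarity(record, mode, cluster_records):
    """Dissimilarity between a categorical record and a cluster mode.

    A mismatch on attribute j costs 1. A match costs 1 - f_j, where f_j is the
    relative frequency of the mode's value for attribute j within the cluster,
    so agreement on a value that dominates the cluster is rewarded more.
    (Illustrative weighting; the paper may define the weights differently.)
    """
    n = len(cluster_records)
    cost = 0.0
    for j, (x_j, m_j) in enumerate(zip(record, mode)):
        if x_j != m_j:
            cost += 1.0
        else:
            freq = sum(1 for r in cluster_records if r[j] == m_j) / n
            cost += 1.0 - freq
    return cost

# Example: a small cluster of categorical records and its mode.
cluster = [("red", "suv"), ("red", "sedan"), ("blue", "suv")]
mode = ("red", "suv")
print(weighted_kmodes_dissimilarity(("red", "coupe"), mode, cluster))  # (1 - 2/3) + 1 = 1.33
```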
{
"docid": "8a42bc2dec684cf087d19bbbd2e815f8",
"text": "Carefully managing the presentation of self via technology is a core practice on all modern social media platforms. Recently, selfies have emerged as a new, pervasive genre of identity performance. In many ways unique, selfies bring us fullcircle to Goffman—blending the online and offline selves together. In this paper, we take an empirical, Goffman-inspired look at the phenomenon of selfies. We report a large-scale, mixed-method analysis of the categories in which selfies appear on Instagram—an online community comprising over 400M people. Applying computer vision and network analysis techniques to 2.5M selfies, we present a typology of emergent selfie categories which represent emphasized identity statements. To the best of our knowledge, this is the first large-scale, empirical research on selfies. We conclude, contrary to common portrayals in the press, that selfies are really quite ordinary: they project identity signals such as wealth, health and physical attractiveness common to many online media, and to offline life.",
"title": ""
},
{
"docid": "81b82ae24327c7d5c0b0bf4a04904826",
"text": "AIM\nTo identify key predictors and moderators of mental health 'help-seeking behavior' in adolescents.\n\n\nBACKGROUND\nMental illness is highly prevalent in adolescents and young adults; however, individuals in this demographic group are among the least likely to seek help for such illnesses. Very little quantitative research has examined predictors of help-seeking behaviour in this demographic group.\n\n\nDESIGN\nA cross-sectional design was used.\n\n\nMETHODS\nA group of 180 volunteers between the ages of 17-25 completed a survey designed to measure hypothesized predictors and moderators of help-seeking behaviour. Predictors included a range of health beliefs, personality traits and attitudes. Data were collected in August 2010 and were analysed using two standard and three hierarchical multiple regression analyses.\n\n\nFINDINGS\nThe standard multiple regression analyses revealed that extraversion, perceived benefits of seeking help, perceived barriers to seeking help and social support were direct predictors of help-seeking behaviour. Tests of moderated relationships (using hierarchical multiple regression analyses) indicated that perceived benefits were more important than barriers in predicting help-seeking behaviour. In addition, perceived susceptibility did not predict help-seeking behaviour unless individuals were health conscious to begin with or they believed that they would benefit from help.\n\n\nCONCLUSION\nA range of personality traits, attitudes and health beliefs can predict help-seeking behaviour for mental health problems in adolescents. The variable 'Perceived Benefits' is of particular importance as it is: (1) a strong and robust predictor of help-seeking behaviour; and (2) a factor that can theoretically be modified based on health promotion programmes.",
"title": ""
},
{
"docid": "2f04cd1b83b2ec17c9930515e8b36b95",
"text": "Traditionally, visualization design assumes that the e↵ectiveness of visualizations is based on how much, and how clearly, data are presented. We argue that visualization requires a more nuanced perspective. Data are not ends in themselves, but means to an end (such as generating knowledge or assisting in decision-making). Focusing on the presentation of data per se can result in situations where these higher goals are ignored. This is especially the case for situations where cognitive or perceptual biases make the presentation of “just” the data as misleading as willful distortion. We argue that we need to de-sanctify data, and occasionally promote designs which distort or obscure data in service of understanding. We discuss examples of beneficial embellishment, distortion, and obfuscation in visualization, and argue that these examples are representative of a wider class of techniques for going beyond simplistic presentations of data.",
"title": ""
},
{
"docid": "ae94106e02e05a38aa50842d7978c2c0",
"text": "Fast and reliable face and facial feature detection are required abilities for any Human Computer Interaction approach based on Computer Vision. Since the publication of the Viola-Jones object detection framework and the more recent open source implementation, an increasing number of applications have appeared, particularly in the context of facial processing. In this respect, the OpenCV community shares a collection of public domain classifiers for this scenario. However, as far as we know these classifiers have never been evaluated and/or compared. In this paper we analyze the individual performance of all those public classifiers getting the best performance for each target. These results are valid to define a baseline for future approaches. Additionally we propose a simple hierarchical combination of those classifiers to increase the facial feature detection rate while reducing the face false detection rate.",
"title": ""
},
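For readers who want to try the kind of public OpenCV cascade classifiers evaluated above, the following sketch loads a stock Haar cascade and runs face detection. The cascade file, input image name, and parameter values are illustrative assumptions, not the exact configuration used in the paper.

```python
import cv2

# Load one of the public-domain cascades shipped with OpenCV
# (resolving the path via cv2.data assumes a standard opencv-python install).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")          # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns bounding boxes (x, y, w, h); scaleFactor and
# minNeighbors trade detection rate against false positives.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)
```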
{
"docid": "db26de1462b3e8e53bf54846849ae2c2",
"text": "The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill set of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.",
"title": ""
},
{
"docid": "9ddddb7775122ed13544b37c70607507",
"text": "We present results from a multi-generational study of collocated group console gaming. We examine the intergenerational gaming practices of four generations of gamers, from ages 3 to 83 and, in particular, the roles that gamers of different generations take on when playing together in groups. Our findings highlight the extent to which existing gaming technologies are amenable to interactions within collocated intergenerational groups and the broader set of roles that have emerged in these computer-mediated interactions than have previously been documented by studies of more traditional collocated, intergenerational interactions. We articulate attributes of the games that encourage intergenerational interaction.",
"title": ""
},
{
"docid": "1c365e6256ae1c404c6f3f145eb04924",
"text": "Progress in signal processing continues to enable welcome advances in high-frequency (HF) radio performance and efficiency. The latest data waveforms use channels wider than 3 kHz to boost data throughput and robustness. This has driven the need for a more capable Automatic Link Establishment (ALE) system that links faster and adapts the wideband HF (WBHF) waveform to efficiently use available spectrum. In this paper, we investigate the possibility and advantages of using various non-scanning ALE techniques with the new wideband ALE (WALE) to further improve spectrum awareness and linking speed.",
"title": ""
},
{
"docid": "24bd1a178fde153c8ee8a4fa332611cf",
"text": "This paper proposes a comprehensive methodology for the design of a controllable electric vehicle charger capable of making the most of the interaction with an autonomous smart energy management system (EMS) in a residential setting. Autonomous EMSs aim achieving the potential benefits associated with energy exchanges between consumers and the grid, using bidirectional and power-controllable electric vehicle chargers. A suitable design for a controllable charger is presented, including the sizing of passive elements and controllers. This charger has been implemented using an experimental setup with a digital signal processor to validate its operation. The experimental results obtained foresee an adequate interaction between the proposed charger and a compatible autonomous EMS in a typical residential setting.",
"title": ""
},
{
"docid": "c773efb805899ee9e365b5f19ddb40bc",
"text": "In this paper, we overview the 2009 Simulated Car Racing Championship-an event comprising three competitions held in association with the 2009 IEEE Congress on Evolutionary Computation (CEC), the 2009 ACM Genetic and Evolutionary Computation Conference (GECCO), and the 2009 IEEE Symposium on Computational Intelligence and Games (CIG). First, we describe the competition regulations and the software framework. Then, the five best teams describe the methods of computational intelligence they used to develop their drivers and the lessons they learned from the participation in the championship. The organizers provide short summaries of the other competitors. Finally, we summarize the championship results, followed by a discussion about what the organizers learned about 1) the development of high-performing car racing controllers and 2) the organization of scientific competitions.",
"title": ""
},
{
"docid": "90b502cb72488529ec0d389ca99b57b8",
"text": "The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but does not enforce some of the key properties. We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common practice of dropout. We show that our regularized loss function can be efficiently minimized using Information Dropout, a generalization of dropout rooted in information theoretic principles that automatically adapts to the data and can better exploit architectures of limited capacity. When the task is the reconstruction of the input, we show that our loss function yields a Variational Autoencoder as a special case, thus providing a link between representation learning, information theory and variational inference. Finally, we prove that we can promote the creation of optimal disentangled representations simply by enforcing a factorized prior, a fact that has been observed empirically in recent work. Our experiments validate the theoretical intuitions behind our method, and we find that Information Dropout achieves a comparable or better generalization performance than binary dropout, especially on smaller models, since it can automatically adapt the noise to the structure of the network, as well as to the test sample.",
"title": ""
},
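A rough sketch of the core mechanism described above, multiplicative noise whose scale is predicted from the activations, is given below in PyTorch. The clamping of the noise scale and the 1x1 convolution used to produce it are assumptions; the paper's Information Dropout layer also ties the noise to a KL regularization term that is omitted here.

```python
import torch
import torch.nn as nn

class MultiplicativeNoise(nn.Module):
    """Inject log-normal multiplicative noise whose scale depends on the input.

    Rough sketch only: a small head predicts a per-unit noise scale alpha(x),
    and activations are multiplied by noise ~ LogNormal(0, alpha(x)^2) during
    training. Expects NCHW feature maps; the cap on alpha is an assumption.
    """
    def __init__(self, channels, max_alpha=0.7):
        super().__init__()
        self.alpha_head = nn.Conv2d(channels, channels, kernel_size=1)
        self.max_alpha = max_alpha

    def forward(self, x):
        if not self.training:
            return x
        alpha = torch.sigmoid(self.alpha_head(x)) * self.max_alpha
        eps = torch.randn_like(x) * alpha   # log of the multiplicative noise
        return x * torch.exp(eps)           # multiplicative log-normal noise
```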
{
"docid": "2ae96a524ba3b6c43ea6bfa112f71a30",
"text": "Accurate quantification of gluconeogenic flux following alcohol ingestion in overnight-fasted humans has yet to be reported. [2-13C1]glycerol, [U-13C6]glucose, [1-2H1]galactose, and acetaminophen were infused in normal men before and after the consumption of 48 g alcohol or a placebo to quantify gluconeogenesis, glycogenolysis, hepatic glucose production, and intrahepatic gluconeogenic precursor availability. Gluconeogenesis decreased 45% vs. the placebo (0.56 ± 0.05 to 0.44 ± 0.04 mg ⋅ kg-1 ⋅ min-1vs. 0.44 ± 0.05 to 0.63 ± 0.09 mg ⋅ kg-1 ⋅ min-1, respectively, P < 0.05) in the 5 h after alcohol ingestion, and total gluconeogenic flux was lower after alcohol compared with placebo. Glycogenolysis fell over time after both the alcohol and placebo cocktails, from 1.46-1.47 mg ⋅ kg-1 ⋅ min-1to 1.35 ± 0.17 mg ⋅ kg-1 ⋅ min-1(alcohol) and 1.26 ± 0.20 mg ⋅ kg-1 ⋅ min-1, respectively (placebo, P < 0.05 vs. baseline). Hepatic glucose output decreased 12% after alcohol consumption, from 2.03 ± 0.21 to 1.79 ± 0.21 mg ⋅ kg-1 ⋅ min-1( P < 0.05 vs. baseline), but did not change following the placebo. Estimated intrahepatic gluconeogenic precursor availability decreased 61% following alcohol consumption ( P < 0.05 vs. baseline) but was unchanged after the placebo ( P < 0.05 between treatments). We conclude from these results that gluconeogenesis is inhibited after alcohol consumption in overnight-fasted men, with a somewhat larger decrease in availability of gluconeogenic precursors but a smaller effect on glucose production and no effect on plasma glucose concentrations. Thus inhibition of flux into the gluconeogenic precursor pool is compensated by changes in glycogenolysis, the fate of triose-phosphates, and peripheral tissue utilization of plasma glucose.",
"title": ""
}
] |
scidocsrr
|
2b87c4c7c558342c8daf9fbc3234cb48
|
The particle swarm optimization algorithm: convergence analysis and parameter selection
|
[
{
"docid": "555ad116b9b285051084423e2807a0ba",
"text": "The performance of particle swarm optimization using an inertia weight is compared with performance using a constriction factor. Five benchmark functions are used for the comparison. It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension. This approach provides performance on the benchmark functions superior to any other published results known by the authors. '",
"title": ""
}
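The two velocity-update rules being compared in the passage above can be sketched as follows. The constants (c1 = c2 = 2.05, giving a constriction coefficient of about 0.7298) are the commonly cited choices, not necessarily the exact experimental settings; clamping the velocity to Vmax, as the passage recommends, is optional here.

```python
import numpy as np

def pso_velocity_update(v, x, pbest, gbest, mode="constriction",
                        w=0.729, c1=2.05, c2=2.05, vmax=None):
    """One PSO velocity update for a single particle (vectorized over dimensions).

    mode="inertia":      v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    mode="constriction": v <- chi*(v + c1*r1*(pbest - x) + c2*r2*(gbest - x)),
    with chi derived from phi = c1 + c2 > 4 (Clerc's constriction coefficient).
    Clamping |v| to vmax (e.g. the dynamic range Xmax) is optional.
    """
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    cognitive = c1 * r1 * (pbest - x)
    social = c2 * r2 * (gbest - x)
    if mode == "constriction":
        phi = c1 + c2
        chi = 2.0 / abs(2.0 - phi - np.sqrt(phi * phi - 4.0 * phi))
        v_new = chi * (v + cognitive + social)
    else:
        v_new = w * v + cognitive + social
    if vmax is not None:
        v_new = np.clip(v_new, -vmax, vmax)
    return v_new
```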
] |
[
{
"docid": "e198dab977ba3e97245ecdd07fd25690",
"text": "The majority of the human genome consists of non-coding regions that have been called junk DNA. However, recent studies have unveiled that these regions contain cis-regulatory elements, such as promoters, enhancers, silencers, insulators, etc. These regulatory elements can play crucial roles in controlling gene expressions in specific cell types, conditions, and developmental stages. Disruption to these regions could contribute to phenotype changes. Precisely identifying regulatory elements is key to deciphering the mechanisms underlying transcriptional regulation. Cis-regulatory events are complex processes that involve chromatin accessibility, transcription factor binding, DNA methylation, histone modifications, and the interactions between them. The development of next-generation sequencing techniques has allowed us to capture these genomic features in depth. Applied analysis of genome sequences for clinical genetics has increased the urgency for detecting these regions. However, the complexity of cis-regulatory events and the deluge of sequencing data require accurate and efficient computational approaches, in particular, machine learning techniques. In this review, we describe machine learning approaches for predicting transcription factor binding sites, enhancers, and promoters, primarily driven by next-generation sequencing data. Data sources are provided in order to facilitate testing of novel methods. The purpose of this review is to attract computational experts and data scientists to advance this field.",
"title": ""
},
{
"docid": "895da346d947feba89cb171accb3f142",
"text": "A six-phase six-step voltage-fed induction motor is presented. The inverter is a transistorized six-step voltage source inverter, while the motor is a modified standard three-phase squirrel-cage motor. The stator is rewound with two three-phase winding sets displaced from each other by 30 electrical degrees. A model for the system is developed to simulate the drive and predict its performance. The simulation results for steady-state conditions and experimental measurements show very good correlation. It is shown that this winding configuration results in the elimination of all air-gap flux time harmonics of the order (6v ±1, v = 1,3,5,...). Consequently, all rotor copper losses produced by these harmonics as well as all torque harmonics of the order (6v, v = 1,3,5,...) are eliminated. A comparison between-the measured instantaneous torque of both three-phase and six-phase six-step voltage-fed induction machines shows the advantage of the six-phase system over the three-phase system in eliminating the sixth harmonic dominant torque ripple.",
"title": ""
},
{
"docid": "c692dd35605c4af62429edef6b80c121",
"text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. Experimental results demonstrate the advantage of using chord features for music classification and retrieval.",
"title": ""
},
{
"docid": "dd8222a589e824b5189194ab697f27d7",
"text": "Facial expression recognition has been investigated for many years, and there are two popular models: Action Units (AUs) and the Valence-Arousal space (V-A space) that have been widely used. However, most of the databases for estimating V-A intensity are captured in laboratory settings, and the benchmarks \"in-the-wild\" do not exist. Thus, the First Affect-In-The-Wild Challenge released a database for V-A estimation while the videos were captured in wild condition. In this paper, we propose an integrated deep learning framework for facial attribute recognition, AU detection, and V-A estimation. The key idea is to apply AUs to estimate the V-A intensity since both AUs and V-A space could be utilized to recognize some emotion categories. Besides, the AU detector is trained based on the convolutional neural network (CNN) for facial attribute recognition. In experiments, we will show the results of the above three tasks to verify the performances of our proposed network framework.",
"title": ""
},
{
"docid": "b98585e7ed4b34afb72f81aeae2ebdcc",
"text": "The capability of transcribing music audio into music notation is a fascinating example of human intelligence. It involves perception (analyzing complex auditory scenes), cognition (recognizing musical objects), knowledge representation (forming musical structures), and inference (testing alternative hypotheses). Automatic music transcription (AMT), i.e., the design of computational algorithms to convert acoustic music signals into some form of music notation, is a challenging task in signal processing and artificial intelligence. It comprises several subtasks, including multipitch estimation (MPE), onset and offset detection, instrument recognition, beat and rhythm tracking, interpretation of expressive timing and dynamics, and score typesetting.",
"title": ""
},
{
"docid": "acf4645478c28811d41755b0ed81fb39",
"text": "Make more knowledge even in less time every day. You may not always spend your time and money to go abroad and get the experience and knowledge by yourself. Reading is a good alternative to do in getting this desirable knowledge and experience. You may gain many things from experiencing directly, but of course it will spend much money. So here, by reading social network data analytics social network data analytics, you can take more advantages with limited budget.",
"title": ""
},
{
"docid": "eaad298fce83ade590a800d2318a2928",
"text": "Space vector modulation (SVM) is the best modulation technique to drive 3-phase load such as 3-phase induction motor. In this paper, the pulse width modulation strategy with SVM is analyzed in detail. The modulation strategy uses switching time calculator to calculate the timing of voltage vector applied to the three-phase balanced-load. The principle of the space vector modulation strategy is performed using Matlab/Simulink. The simulation result indicates that this algorithm is flexible and suitable to use for advance vector control. The strategy of the switching minimizes the distortion of load current as well as loss due to minimize number of commutations in the inverter.",
"title": ""
},
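As a rough illustration of the switching-time calculation mentioned in the abstract above (not the authors' Simulink model), the sketch below computes the dwell times of the two adjacent active vectors and the zero vectors for a given reference voltage vector; the variable names follow common textbook conventions.

```python
import math

def svm_switching_times(v_ref, theta, v_dc, t_s):
    """Dwell times for the two adjacent active vectors and the zero vectors.

    v_ref : magnitude of the reference voltage vector
    theta : angle of the reference vector in radians (0..2*pi)
    v_dc  : DC-link voltage
    t_s   : switching period
    Returns (sector, t1, t2, t0) with t1 + t2 + t0 == t_s.
    """
    sector = int(theta // (math.pi / 3)) % 6 + 1        # sectors 1..6
    alpha = theta - (sector - 1) * math.pi / 3          # angle inside the sector
    m = math.sqrt(3) * v_ref / v_dc                     # modulation index
    t1 = t_s * m * math.sin(math.pi / 3 - alpha)
    t2 = t_s * m * math.sin(alpha)
    t0 = t_s - t1 - t2                                  # split between the two zero vectors
    return sector, t1, t2, t0

print(svm_switching_times(v_ref=230.0, theta=math.radians(50), v_dc=600.0, t_s=1e-4))
```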
{
"docid": "10aca07789cf8e465443ac9813eef189",
"text": "INTRODUCTION\nThe faculty of Medicine, (FOM) Makerere University Kampala was started in 1924 and has been running a traditional curriculum for 79 years. A few years back it embarked on changing its curriculum from traditional to Problem Based Learning (PBL) and Community Based Education and Service (COBES) as well as early clinical exposure. This curriculum has been implemented since the academic year 2003/2004. The study was done to describe the steps taken to change and implement the curriculum at the Faculty of Medicine, Makerere University Kampala.\n\n\nOBJECTIVE\nTo describe the steps taken to change and implement the new curriculum at the Faculty of Medicine.\n\n\nMETHODS\nThe stages taken during the process were described and analysed.\n\n\nRESULTS\nThe following stages were recognized characterization of Uganda's health status, analysis of government policy, analysis of old curriculum, needs assessment, adoption of new model (SPICES), workshop/retreats for faculty sensitization, incremental development of programs by faculty, implementation of new curriculum.\n\n\nCONCLUSION\nThe FOM has successfully embarked on curriculum change. This has not been without challenges. However, challenges have been taken on and handled as they arose and this has led to the implementation of new curriculum. Problem based learning can be adopted even in a low resourced country like Uganda.",
"title": ""
},
{
"docid": "2f2c99ac066dd2875fcfa2dc42467757",
"text": "The popularity of wireless networks has increased in recent years and is becoming a common addition to LANs. In this paper we investigate a novel use for a wireless network based on the IEEE 802.11 standard: inferring the location of a wireless client from signal quality measures. Similar work has been limited to prototype systems that rely on nearest-neighbor techniques to infer location. In this paper, we describe Nibble, a Wi-Fi location service that uses Bayesian networks to infer the location of a device. We explain the general theory behind the system and how to use the system, along with describing our experiences at a university campus building and at a research lab. We also discuss how probabilistic modeling can be applied to a diverse range of applications that use sensor data.",
"title": ""
},
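The inference step described above can be approximated with a much simpler probabilistic model. The sketch below uses a naive Bayes classifier over discretized signal-quality readings as a stand-in; the real Nibble system uses a richer Bayesian network, so treat the class names, Laplace smoothing, and discretization here as assumptions.

```python
import math
from collections import defaultdict

class NaiveBayesLocalizer:
    """Simplified stand-in for Bayesian-network location inference.

    Training data: (location, {access_point: discretized_signal_level}) pairs.
    At query time we pick the location maximizing P(loc) * prod P(level | loc, ap).
    """
    def __init__(self):
        self.loc_counts = defaultdict(int)
        self.level_counts = defaultdict(int)   # (loc, ap, level) -> count

    def fit(self, samples):
        for loc, readings in samples:
            self.loc_counts[loc] += 1
            for ap, level in readings.items():
                self.level_counts[(loc, ap, level)] += 1

    def predict(self, readings, num_levels=4):
        total = sum(self.loc_counts.values())
        best_loc, best_score = None, float("-inf")
        for loc, n in self.loc_counts.items():
            score = math.log(n / total)
            for ap, level in readings.items():
                c = self.level_counts[(loc, ap, level)]
                score += math.log((c + 1) / (n + num_levels))  # Laplace smoothing
            if score > best_score:
                best_loc, best_score = loc, score
        return best_loc

loc = NaiveBayesLocalizer()
loc.fit([("room_a", {"ap1": 3, "ap2": 1}), ("room_b", {"ap1": 0, "ap2": 3})])
print(loc.predict({"ap1": 3, "ap2": 1}))  # -> "room_a"
```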
{
"docid": "9978f33847a09c651ccce68c3b88287f",
"text": "We propose a method for discovering the dependency relationships between the topics of documents shared in social networks using the latent social interactions, attempting to answer the question: given a seemingly new topic, from where does this topic evolve? In particular, we seek to discover the pair-wise probabilistic dependency in topics of documents which associate social actors from a latent social network, where these documents are being shared. By viewing the evolution of topics as a Markov chain, we estimate a Markov transition matrix of topics by leveraging social interactions and topic semantics. Metastable states in a Markov chain are applied to the clustering of topics. Applied to the CiteSeer dataset, a collection of documents in academia, we show the trends of research topics, how research topics are related and which are stable. We also show how certain social actors, authors, impact these topics and propose new ways for evaluating author impact.",
"title": ""
},
{
"docid": "261ef8b449727b615f8cd5bd458afa91",
"text": "Luck (2009) argues that gamers face a dilemma when it comes to performing certain virtual acts. Most gamers regularly commit acts of virtual murder, and take these acts to be morally permissible. They are permissible because unlike real murder, no one is harmed in performing them; their only victims are computer-controlled characters, and such characters are not moral patients. What Luck points out is that this justification equally applies to virtual pedophelia, but gamers intuitively think that such acts are not morally permissible. The result is a dilemma: either gamers must reject the intuition that virtual pedophelic acts are impermissible and so accept partaking in such acts, or they must reject the intuition that virtual murder acts are permissible, and so abstain from many (if not most) extant games. While the prevailing solution to this dilemma has been to try and find a morally relevant feature to distinguish the two cases, I argue that a different route should be pursued. It is neither the case that all acts of virtual murder are morally permissible, nor are all acts of virtual pedophelia impermissible. Our intuitions falter and produce this dilemma because they are not sensitive to the different contexts in which games present virtual acts.",
"title": ""
},
{
"docid": "024168795536bc141bb07af74486ef78",
"text": "Video-based person re-identification matches video clips of people across non-overlapping cameras. Most existing methods tackle this problem by encoding each video frame in its entirety and computing an aggregate representation across all frames. In practice, people are often partially occluded, which can corrupt the extracted features. Instead, we propose a new spatiotemporal attention model that automatically discovers a diverse set of distinctive body parts. This allows useful information to be extracted from all frames without succumbing to occlusions and misalignments. The network learns multiple spatial attention models and employs a diversity regularization term to ensure multiple models do not discover the same body part. Features extracted from local image regions are organized by spatial attention model and are combined using temporal attention. As a result, the network learns latent representations of the face, torso and other body parts using the best available image patches from the entire video sequence. Extensive evaluations on three datasets show that our framework outperforms the state-of-the-art approaches by large margins on multiple metrics.",
"title": ""
},
{
"docid": "139d9d5866a1e455af954b2299bdbcf6",
"text": "1 . I n t r o d u c t i o n Reasoning about knowledge and belief has long been an issue of concern in philosophy and artificial intelligence (cf. [Hil],[MH],[Mo]). Recently we have argued that reasoning about knowledge is also crucial in understanding and reasoning about protocols in distributed systems, since messages can be viewed as changing the state of knowledge of a system [HM]; knowledge also seems to be of v i tal importance in cryptography theory [Me] and database theory. In order to formally reason about knowledge, we need a good semantic model. Part of the difficulty in providing such a model is that there is no agreement on exactly what the properties of knowledge are or should * This author's work was supported in part by DARPA contract N00039-82-C-0250. be. For example, is it the case that you know what facts you know? Do you know what you don't know? Do you know only true things, or can something you \"know\" actually be false? Possible-worlds semantics provide a good formal tool for \"customizing\" a logic so that, by making minor changes in the semantics, we can capture different sets of axioms. The idea, first formalized by Hintikka [Hi l ] , is that in each state of the world, an agent (or knower or player: we use all these words interchangeably) has other states or worlds that he considers possible. An agent knows p exactly if p is true in all the worlds that he considers possible. As Kripke pointed out [Kr], by imposing various conditions on this possibil i ty relation, we can capture a number of interesting axioms. For example, if we require that the real world always be one of the possible worlds (which amounts to saying that the possibility relation is reflexive), then it follows that you can't know anything false. Similarly, we can show that if the relation is transitive, then you know what you know. If the relation is transitive and symmetric, then you also know what you don't know. (The one-knower models where the possibility relation is reflexive corresponds to the classical modal logic T, while the reflexive and transitive case corresponds to S4, and the reflexive, symmetric and transitive case corresponds to S5.) Once we have a general framework for modelling knowledge, a reasonable question to ask is how hard it is to reason about knowledge. In particular, how hard is it to decide if a given formula is valid or satisfiable? The answer to this question depends crucially on the choice of axioms. For example, in the oneknower case, Ladner [La] has shown that for T and S4 the problem of deciding satisfiability is complete in polynomial space, while for S5 it is NP-complete, J. Halpern and Y. Moses 481 and thus no harder than the satisf iabi l i ty problem for propos i t iona l logic. Our a im in th is paper is to reexamine the possiblewor lds f ramework for knowledge and belief w i t h four par t icu lar po ints of emphasis: (1) we show how general techniques for f inding decision procedures and complete ax iomat izat ions apply to models for knowledge and belief, (2) we show how sensitive the di f f icul ty of the decision procedure is to such issues as the choice of moda l operators and the ax iom system, (3) we discuss how not ions of common knowledge and impl ic i t knowl edge among a group of agents fit in to the possibleworlds f ramework, and, f inal ly, (4) we consider to what extent the possible-worlds approach is a viable one for model l ing knowledge and belief. 
We begin in Section 2 by reviewing possible-world semantics in deta i l , and prov ing tha t the many-knower versions of T, S4, and S5 do indeed capture some of the more common axiomatizat ions of knowledge. In Section 3 we t u r n to complexity-theoret ic issues. We review some standard not ions f rom complexi ty theory, and then reprove and extend Ladner's results to show tha t the decision procedures for the many-knower versions of T, S4, and S5 are a l l complete in po lynomia l space.* Th is suggests tha t for S5, reasoning about many agents' knowledge is qual i ta t ive ly harder than jus t reasoning about one agent's knowledge of the real wor ld and of his own knowledge. In Section 4 we t u rn our at tent ion to mod i fy ing the model so tha t i t can deal w i t h belief rather than knowledge, where one can believe something tha t is false. Th is turns out to be somewhat more compl i cated t han dropp ing the assumption of ref lexivi ty, but i t can s t i l l be done in the possible-worlds f ramework. Results about decision procedures and complete axiomat i i a t i ons for belief paral le l those for knowledge. In Section 5 we consider what happens when operators for common knowledge and implicit knowledge are added to the language. A group has common knowledge of a fact p exact ly when everyone knows tha t everyone knows tha t everyone knows ... tha t p is t rue. (Common knowledge is essentially wha t McCar thy 's \" f oo l \" knows; cf. [MSHI] . ) A group has i m p l ic i t knowledge of p i f, roughly speaking, when the agents poo l the i r knowledge together they can deduce p. (Note our usage of the not ion of \" imp l i c i t knowl edge\" here differs s l ight ly f rom the way it is used in [Lev2] and [FH].) As shown in [ H M l ] , common knowl edge is an essential state for reaching agreements and * A problem is said to be complete w i th respect to a complexity class if, roughly speaking, it is the hardest problem in that class (see Section 3 for more details). coordinating action. For very similar reasons, common knowledge also seems to play an important role in human understanding of speech acts (cf. [CM]). The notion of implicit knowledge arises when reasoning about what states of knowledge a group can attain through communication, and thus is also crucial when reasoning about the efficacy of speech acts and about communication protocols in distributed systems. It turns out that adding an implicit knowledge operator to the language does not substantially change the complexity of deciding the satisfiability of formulas in the language, but this is not the case for common knowledge. Using standard techniques from PDL (Propositional Dynamic Logic; cf. [FL],[Pr]), we can show that when we add common knowledge to the language, the satisfiability problem for the resulting logic (whether it is based on T, S4, or S5) is complete in deterministic exponential time, as long as there at least two knowers. Thus, adding a common knowledge operator renders the decision procedure qualitatively more complex. (Common knowledge does not seem to be of much interest in the in the case of one knower. In fact, in the case of S4 and S5, if there is only one knower, knowledge and common knowledge are identical.) We conclude in Section 6 with some discussion of the appropriateness of the possible-worlds approach for capturing knowledge and belief, particularly in light of our results on computational complexity. 
Detailed proofs of the theorems stated here, as well as further discussion of these results, can be found in the ful l paper ([HM2]). 482 J. Halpern and Y. Moses 2.2 Possib le-wor lds semant ics: Following Hintikka [H i l ] , Sato [Sa], Moore [Mo], and others, we use a posaible-worlds semantics to model knowledge. This provides us wi th a general framework for our semantical investigations of knowledge and belief. (Everything we say about \"knowledge* in this subsection applies equally well to belief.) The essential idea behind possible-worlds semantics is that an agent's state of knowledge corresponds to the extent to which he can determine what world he is in. In a given world, we can associate wi th each agent the set of worlds that, according to the agent's knowledge, could possibly be the real world. An agent is then said to know a fact p exactly if p is true in all the worlds in this set; he does not know p if there is at least one world that he considers possible where p does not hold. * We discuss the ramifications of this point in Section 6. ** The name K (m) is inspired by the fact that for one knower, the system reduces to the well-known modal logic K. J. Halpern and Y. Moses 483 484 J. Halpern and Y. Moses that can be said is that we are modelling a rather idealised reaaoner, who knows all tautologies and all the logical consequences of his knowledge. If we take the classical interpretation of knowledge as true, justified belief, then an axiom such as A3 seems to be necessary. On the other hand, philosophers have shown that axiom A5 does not hold wi th respect to this interpretation ([Len]). However, the S5 axioms do capture an interesting interpretation of knowledge appropriate for reasoning about distributed systems (see [HM1] and Section 6). We continue here wi th our investigation of all these logics, deferring further comments on their appropriateness to Section 6. Theorem 3 implies that the provable formulas of K (m) correspond precisely to the formulas that are valid for Kripke worlds. As Kripke showed [Kr], there are simple conditions that we can impose on the possibility relations Pi so that the valid formulas of the resulting worlds are exactly the provable formulas of T ( m ) , S4 (m) , and S5(m) respectively. We wi l l try to motivate these conditions, but first we need a few definitions. * Since Lemma 4(b) says that a relation that is both reflexive and Euclidean must also be transitive, the reader may auspect that axiom A4 ia redundant in S5. Thia indeed ia the caae. J. Halpern and Y. Moses 485 486 J. Halpern and Y. Moses",
"title": ""
},
{
"docid": "9dac75a40e421163c4e05cfd5d36361f",
"text": "In recent years, many data mining methods have been proposed for finding useful and structured information from market basket data. The association rule model was recently proposed in order to discover useful patterns and dependencies in such data. This paper discusses a method for indexing market basket data efficiently for similarity search. The technique is likely to be very useful in applications which utilize the similarity in customer buying behavior in order to make peer recommendations. We propose an index called the signature table, which is very flexible in supporting a wide range of similarity functions. The construction of the index structure is independent of the similarity function, which can be specified at query time. The resulting similarity search algorithm shows excellent scalability with increasing memory availability and database size.",
"title": ""
},
{
"docid": "b3c9d10efd071659336a1521ce0f8465",
"text": "The traditional diet in Okinawa is anchored by root vegetables (principally sweet potatoes), green and yellow vegetables, soybean-based foods, and medicinal plants. Marine foods, lean meats, fruit, medicinal garnishes and spices, tea, alcohol are also moderately consumed. Many characteristics of the traditional Okinawan diet are shared with other healthy dietary patterns, including the traditional Mediterranean diet, DASH diet, and Portfolio diet. All these dietary patterns are associated with reduced risk for cardiovascular disease, among other age-associated diseases. Overall, the important shared features of these healthy dietary patterns include: high intake of unrefined carbohydrates, moderate protein intake with emphasis on vegetables/legumes, fish, and lean meats as sources, and a healthy fat profile (higher in mono/polyunsaturated fats, lower in saturated fat; rich in omega-3). The healthy fat intake is likely one mechanism for reducing inflammation, optimizing cholesterol, and other risk factors. Additionally, the lower caloric density of plant-rich diets results in lower caloric intake with concomitant high intake of phytonutrients and antioxidants. Other shared features include low glycemic load, less inflammation and oxidative stress, and potential modulation of aging-related biological pathways. This may reduce risk for chronic age-associated diseases and promote healthy aging and longevity.",
"title": ""
},
{
"docid": "94ec2b6c24cbbbb8a648bd83873aa0c5",
"text": "s since January 1975, a full-text search capacity, and a personal archive for saving articles and search results of interest. All articles can be printed in a format that is virtually identical to that of the typeset pages. Beginning six months after publication, the full text of all Original Articles and Special Articles is available free to nonsubscribers who have completed a brief registration. Copyright © 2003 Massachusetts Medical Society. All rights reserved. Downloaded from www.nejm.org at UNIV OF CINCINNATI SERIALS DEPT on August 8, 2007 .",
"title": ""
},
{
"docid": "4185d65971d7345afbd7189368ed9303",
"text": "Ticket annotation and search has become an essential research subject for the successful delivery of IT operational analytics. Millions of tickets are created yearly to address business users' IT related problems. In IT service desk management, it is critical to first capture the pain points for a group of tickets to determine root cause; secondly, to obtain the respective distributions in order to layout the priority of addressing these pain points. An advanced ticket analytics system utilizes a combination of topic modeling, clustering and Information Retrieval (IR) technologies to address the above issues and the corresponding architecture which integrates of these features will allow for a wider distribution of this technology and progress to a significant financial benefit for the system owner. Topic modeling has been used to extract topics from given documents; in general, each topic is represented by a unigram language model. However, it is not clear how to interpret the results in an easily readable/understandable way until now. Due to the inefficiency to render top concepts using existing techniques, in this paper, we propose a probabilistic framework, which consists of language modeling (especially the topic models), Part-Of-Speech (POS) tags, query expansion, retrieval modeling and so on for the practical challenge. The rigorously empirical experiments demonstrate the consistent and utility performance of the proposed method on real datasets.",
"title": ""
},
{
"docid": "d22e8f2029e114b0c648a2cdfba4978a",
"text": "This paper considers innovative marketing within the context of a micro firm, exploring how such firm’s marketing practices can take advantage of digital media. Factors that influence a micro firm’s innovative activities are examined and the development and implementation of digital media in the firm’s marketing practice is explored. Despite the significance of marketing and innovation to SMEs, a lack of literature and theory on innovation in marketing theory exists. Research suggests that small firms’ marketing practitioners and entrepreneurs have identified their marketing focus on the 4Is. This paper builds on knowledge in innovation and marketing and examines the process in a micro firm. A qualitative approach is applied using action research and case study approach. The relevant literature is reviewed as the starting point to diagnose problems and issues anticipated by business practitioners. A longitudinal study is used to illustrate the process of actions taken with evaluations and reflections presented. The exploration illustrates that in practice much of the marketing activities within micro firms are driven by incremental innovation. This research emphasises that integrating Information Communication Technologies (ICTs) successfully in marketing requires marketers to take an active managerial role far beyond their traditional areas of competence and authority.",
"title": ""
},
{
"docid": "cf6a7252039826211635cc9221f1db66",
"text": "Blockchain technologies are gaining massive momentum in the last few years. Blockchains are distributed ledgers that enable parties who do not fully trust each other to maintain a set of global states. The parties agree on the existence, values, and histories of the states. As the technology landscape is expanding rapidly, it is both important and challenging to have a firm grasp of what the core technologies have to offer, especially with respect to their data processing capabilities. In this paper, we first survey the state of the art, focusing on private blockchains (in which parties are authenticated). We analyze both in-production and research systems in four dimensions: distributed ledger, cryptography, consensus protocol, and smart contract. We then present BLOCKBENCH, a benchmarking framework for understanding performance of private blockchains against data processing workloads. We conduct a comprehensive evaluation of three major blockchain systems based on BLOCKBENCH, namely Ethereum, Parity, and Hyperledger Fabric. The results demonstrate several trade-offs in the design space, as well as big performance gaps between blockchain and database systems. Drawing from design principles of database systems, we discuss several research directions for bringing blockchain performance closer to the realm of databases.",
"title": ""
}
] |
scidocsrr
|
9277968249a44de6d80e829cdafc1e57
|
A Vision-Based Counting and Recognition System for Flying Insects in Intelligent Agriculture
|
[
{
"docid": "84ca7dc9cac79fe14ea2061919c44a05",
"text": "We describe two new color indexing techniques. The rst one is a more robust version of the commonly used color histogram indexing. In the index we store the cumulative color histograms. The L 1-, L 2-, or L 1-distance between two cumulative color histograms can be used to deene a similarity measure of these two color distributions. We show that while this method produces only slightly better results than color histogram methods, it is more robust with respect to the quantization parameter of the histograms. The second technique is an example of a new approach to color indexing. Instead of storing the complete color distributions, the index contains only their dominant features. We implement this approach by storing the rst three moments of each color channel of an image in the index, i.e., for a HSV image we store only 9 oating point numbers per image. The similarity function which is used for the retrieval is a weighted sum of the absolute diierences between corresponding moments. Our tests clearly demonstrate that a retrieval based on this technique produces better results and runs faster than the histogram-based methods.",
"title": ""
}
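The second technique above (indexing each image by nine color moments and comparing them with a weighted sum of absolute differences) can be sketched directly. The use of the cube root of the third central moment as the third feature and the uniform default weights are assumptions consistent with common practice, not necessarily the paper's exact choices.

```python
import numpy as np

def color_moments(image_hsv):
    """First three moments (mean, std, cube root of skewness) per HSV channel.

    image_hsv: H x W x 3 array. Returns a length-9 feature vector, matching
    the "9 floating point numbers per image" index entry described above.
    """
    feats = []
    for c in range(3):
        chan = image_hsv[:, :, c].astype(np.float64).ravel()
        mean = chan.mean()
        std = chan.std()
        skew = np.cbrt(((chan - mean) ** 3).mean())
        feats.extend([mean, std, skew])
    return np.array(feats)

def moment_distance(f1, f2, weights=None):
    """Weighted sum of absolute differences between corresponding moments."""
    if weights is None:
        weights = np.ones_like(f1)
    return float(np.sum(weights * np.abs(f1 - f2)))
```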
] |
[
{
"docid": "c4676e3c0fea689408e27ee197f993a3",
"text": "20140530 is provided in screen-viewable form for personal use only by members of MIT CogNet. Unauthorized use or dissemination of this information is expressly forbidden. If you have any questions about this material, please contact cognetadmin@cognet.mit.edu.",
"title": ""
},
{
"docid": "19d2c60e0c293d8104c0e6b4005c996e",
"text": "An electronic scanning antenna (ESA) that uses a beam former, such as a Rotman lens, has the ability to form multiple beams for shared-aperture applications. This characteristic makes the antenna suitable for integration into systems exploiting the multi-function radio frequency (MFRF) concept, meeting the needs for a future combat system (FCS) RF sensor. An antenna which electronically scans 45/spl deg/ in azimuth has been built and successfully tested at ARL to demonstrate this multiple-beam, shared-aperture approach at K/sub a/ band. Subsequent efforts are focused on reducing the component size and weight while extending the scanning ability of the antenna to a full hemisphere with both azimuth and elevation scanning. Primary emphasis has been on the beamformer, a Rotman lens or similar device, and the switches used to select the beams. Approaches described include replacing the cavity Rotman lens used in the prototype MFRF system with a dielectrically loaded Rotman lens having a waveguide-fed cavity, a microstrip-fed parallel plate, or a surface-wave configuration in order to reduce the overall size. The paper discusses the challenges and progress in the development of Rotman lens beam formers to support such an antenna.",
"title": ""
},
{
"docid": "4e7582d4e8db248f10f8fbe97522190a",
"text": "Recent advances in semantic epistemolo-gies and flexible symmetries offer a viable alternative to the lookaside buffer. Here, we verify the analysis of systems. Though such a hypothesis is never an appropriate purpose, it mostly conflicts with the need to provide model checking to scholars. We show that though link-level acknowledge-99] can be made electronic, game-theoretic, and virtual, model checking and architecture can agree to solve this question.",
"title": ""
},
{
"docid": "43e3d3639d30d9e75da7e3c5a82db60a",
"text": "This paper studies deep network architectures to address the problem of video classification. A multi-stream framework is proposed to fully utilize the rich multimodal information in videos. Specifically, we first train three Convolutional Neural Networks to model spatial, short-term motion and audio clues respectively. Long Short Term Memory networks are then adopted to explore long-term temporal dynamics. With the outputs of the individual streams, we propose a simple and effective fusion method to generate the final predictions, where the optimal fusion weights are learned adaptively for each class, and the learning process is regularized by automatically estimated class relationships. Our contributions are two-fold. First, the proposed multi-stream framework is able to exploit multimodal features that are more comprehensive than those previously attempted. Second, we demonstrate that the adaptive fusion method using the class relationship as a regularizer outperforms traditional alternatives that estimate the weights in a “free” fashion. Our framework produces significantly better results than the state of the arts on two popular benchmarks, 92.2% on UCF-101 (without using audio) and 84.9% on Columbia Consumer Videos.",
"title": ""
},
{
"docid": "21147cc465a671b2513bf87edb202b6d",
"text": "We present a new o -line electronic cash system based on a problem, called the representation problem, of which little use has been made in literature thus far. Our system is the rst to be based entirely on discrete logarithms. Using the representation problem as a basic concept, some techniques are introduced that enable us to construct protocols for withdrawal and payment that do not use the cut and choose methodology of earlier systems. As a consequence, our cash system is much more e cient in both computation and communication complexity than previously proposed systems. Another important aspect of our system concerns its provability. Contrary to previously proposed systems, its correctness can be mathematically proven to a very great extent. Speci cally, if we make one plausible assumption concerning a single hash-function, the ability to break the system seems to imply that one can break the Di e-Hellman problem. Our system o ers a number of extensions that are hard to achieve in previously known systems. In our opinion the most interesting of these is that the entire cash system (including all the extensions) can be incorporated straightforwardly in a setting based on wallets with observers, which has the important advantage that doublespending can be prevented in the rst place, rather than detecting the identity of a double-spender after the fact. In particular, it can be incorporated even under the most stringent requirements conceivable about the privacy of the user, which seems to be impossible to do with previously proposed systems. Another bene t of our system is that framing attempts by a bank have negligible probability of success (independent of computing power) by a simple mechanism from within the system, which is something that previous solutions lack entirely. Furthermore, the basic cash system can be extended to checks, multi-show cash and divisibility, while retaining its computational e ciency. Although in this paper we only make use of the representation problem in groups of prime order, similar intractable problems hold in RSA-groups (with computational equivalence to factoring and computing RSAroots). We discuss how one can use these problems to construct an e cient cash system with security related to factoring or computation of RSA-roots, in an analogous way to the discrete log based system. Finally, we discuss a decision problem (the decision variant of the Di e-Hellman problem) that is strongly related to undeniable signatures, which to our knowledge has never been stated in literature and of which we do not know whether it is inBPP . A proof of its status would be of interest to discrete log based cryptography in general. Using the representation problem, we show in the appendix how to batch the con rmation protocol of undeniable signatures such that polynomially many undeniable signatures can be veri ed in four moves. AMS Subject Classi cation (1991): 94A60 CR Subject Classi cation (1991): D.4.6",
"title": ""
},
{
"docid": "fa665333f76eaa4dd5861d3b127b0f40",
"text": "A four-layer transmitarray operating at 30 GHz is designed using a dual-resonant double square ring as the unit cell element. The two resonances of the double ring are used to increase the per-layer phase variation while maintaining a wide transmission magnitude bandwidth of the unit cell. The design procedure for both the single-layer unit cell and the cascaded connection of four layers is described and it leads to a 50% increase in the -1 dB gain bandwidth over that of previous transmitarrays. Results of a 7.5% -1 dB gain bandwidth and 47% radiation efficiency are reported.",
"title": ""
},
{
"docid": "e95fa624bb3fd7ea45650213088a43b0",
"text": "In recent years, much research has been conducted on image super-resolution (SR). To the best of our knowledge, however, few SR methods were concerned with compressed images. The SR of compressed images is a challenging task due to the complicated compression artifacts, while many images suffer from them in practice. The intuitive solution for this difficult task is to decouple it into two sequential but independent subproblems, i.e., compression artifacts reduction (CAR) and SR. Nevertheless, some useful details may be removed in CAR stage, which is contrary to the goal of SR and makes the SR stage more challenging. In this paper, an end-to-end trainable deep convolutional neural network is designed to perform SR on compressed images (CISRDCNN), which reduces compression artifacts and improves image resolution jointly. Experiments on compressed images produced by JPEG (we take the JPEG as an example in this paper) demonstrate that the proposed CISRDCNN yields state-of-the-art SR performance on commonly used test images and imagesets. The results of CISRDCNN on real low quality web images are also very impressive, with obvious quality enhancement. Further, we explore the application of the proposed SR method in low bit-rate image coding, leading to better rate-distortion performance than JPEG.",
"title": ""
},
{
"docid": "a6defeca542d1586e521a56118efc56f",
"text": "We expose and explore technical and trust issues that arise in acquiring forensic evidence from infrastructure-as-aservice cloud computing and analyze some strategies for addressing these challenges. First, we create a model to show the layers of trust required in the cloud. Second, we present the overarching context for a cloud forensic exam and analyze choices available to an examiner. Third, we provide for the first time an evaluation of popular forensic acquisition tools including Guidance EnCase and AccesData Forensic Toolkit, and show that they can successfully return volatile and non-volatile data from the cloud. We explain, however, that with those techniques judge and jury must accept a great deal of trust in the authenticity and integrity of the data from many layers of the cloud model. In addition, we explore four other solutions for acquisition—Trusted Platform Modules, the management plane, forensics as a service, and legal solutions, which assume less trust but require more cooperation from the cloud service provider. Our work lays a foundation for future development of new acquisition methods for the cloud that will be trustworthy and forensically sound. Our work also helps forensic examiners, law enforcement, and the court evaluate confidence in evidence from the cloud.",
"title": ""
},
{
"docid": "eaf3d25c7babb067e987b2586129e0e4",
"text": "Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations. Only one step requires higher precision arithmetic. If sufficiently high precision is used, the final result is shown to be very accurate.",
"title": ""
},
{
"docid": "ebca43d1e96ead6d708327d807b9e72f",
"text": "Weakly supervised semantic segmentation has been a subject of increased interest due to the scarcity of fully annotated images. We introduce a new approach for solving weakly supervised semantic segmentation with deep Convolutional Neural Networks (CNNs). The method introduces a novel layer which applies simplex projection on the output of a neural network using area constraints of class objects. The proposed method is general and can be seamlessly integrated into any CNN architecture. Moreover, the projection layer allows strongly supervised models to be adapted to weakly supervised models effortlessly by substituting ground truth labels. Our experiments have shown that applying such an operation on the output of a CNN improves the accuracy of semantic segmentation in a weakly supervised setting with image-level labels.",
"title": ""
},
{
"docid": "90faa9a8dc3fd87614a61bfbdf24cab6",
"text": "The methods proposed recently for specializing word embeddings according to a particular perspective generally rely on external knowledge. In this article, we propose Pseudofit, a new method for specializing word embeddings according to semantic similarity without any external knowledge. Pseudofit exploits the notion of pseudo-sense for building several representations for each word and uses these representations for making the initial embeddings more generic. We illustrate the interest of Pseudofit for acquiring synonyms and study several variants of Pseudofit according to this perspective.",
"title": ""
},
{
"docid": "e9b6bceebe87a5a97fbcbb01f6e6544b",
"text": "OBJECTIVES\nTo investigate the prevalence, location, size and course of the anastomosis between the dental branch of the posterior superior alveolar artery (PSAA), known as alveolar antral artery (AAA), and the infraorbital artery (IOA).\n\n\nMATERIAL AND METHODS\nThe first part of the study was performed on 30 maxillary sinuses deriving from 15 human cadaver heads. In order to visualize such anastomosis, the vascular network afferent to the sinus was injected with liquid latex mixed with green India ink through the external carotid artery. The second part of the study consisted of 100 CT scans from patients scheduled for sinus lift surgery.\n\n\nRESULTS\nAn anastomosis between the AAA and the IOA was found by dissection in the context of the sinus anterolateral wall in 100% of cases, while a well-defined bony canal was detected radiographically in 94 out of 200 sinuses (47% of cases). The mean vertical distance from the lowest point of this bony canal to the alveolar crest was 11.25 ± 2.99 mm (SD) in maxillae examined by CT. The canal diameter was <1 mm in 55.3% of cases, 1-2 mm in 40.4% of cases and 2-3 mm in 4.3% of cases. In 100% of cases, the AAA was found to be partially intra-osseous, that is between the Schneiderian membrane and the lateral bony wall of the sinus, in the area selected for sinus antrostomy.\n\n\nCONCLUSIONS\nA sound knowledge of the maxillary sinus vascular anatomy and its careful analysis by CT scan is essential to prevent complications during surgical interventions involving this region.",
"title": ""
},
{
"docid": "8800dba6bb4cea195c8871eb5be5b0a8",
"text": "Text summarization and sentiment classification, in NLP, are two main tasks implemented on text analysis, focusing on extracting the major idea of a text at different levels. Based on the characteristics of both, sentiment classification can be regarded as a more abstractive summarization task. According to the scheme, a Self-Attentive Hierarchical model for jointly improving text Summarization and Sentiment Classification (SAHSSC) is proposed in this paper. This model jointly performs abstractive text summarization and sentiment classification within a hierarchical end-to-end neural framework, in which the sentiment classification layer on top of the summarization layer predicts the sentiment label in the light of the text and the generated summary. Furthermore, a self-attention layer is also proposed in the hierarchical framework, which is the bridge that connects the summarization layer and the sentiment classification layer and aims at capturing emotional information at text-level as well as summary-level. The proposed model can generate a more relevant summary and lead to a more accurate summary-aware sentiment prediction. Experimental results evaluated on SNAP amazon online review datasets show that our model outperforms the state-of-the-art baselines on both abstractive text summarization and sentiment classification by a considerable margin.",
"title": ""
},
{
"docid": "d67ee0219625f02ff7023e4d0d39e8d8",
"text": "In information retrieval, pseudo-relevance feedback (PRF) refers to a strategy for updating the query model using the top retrieved documents. PRF has been proven to be highly effective in improving the retrieval performance. In this paper, we look at the PRF task as a recommendation problem: the goal is to recommend a number of terms for a given query along with weights, such that the final weights of terms in the updated query model better reflect the terms' contributions in the query. To do so, we propose RFMF, a PRF framework based on matrix factorization which is a state-of-the-art technique in collaborative recommender systems. Our purpose is to predict the weight of terms that have not appeared in the query and matrix factorization techniques are used to predict these weights. In RFMF, we first create a matrix whose elements are computed using a weight function that shows how much a term discriminates the query or the top retrieved documents from the collection. Then, we re-estimate the created matrix using a matrix factorization technique. Finally, the query model is updated using the re-estimated matrix. RFMF is a general framework that can be employed with any retrieval model. In this paper, we implement this framework for two widely used document retrieval frameworks: language modeling and the vector space model. Extensive experiments over several TREC collections demonstrate that the RFMF framework significantly outperforms competitive baselines. These results indicate the potential of using other recommendation techniques in this task.",
"title": ""
},
{
"docid": "0d1193978e4f8be0b78c6184d7ece3fe",
"text": "Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks [1]. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. In particular, selecting well-known and well-studied features that have been used throughout social network analysis and network science [2, 3] and then classifying with methods such as random forests [4] that are of special utility in the presence of feature collinearity, we find that we achieve higher accuracy, in shorter computation time, with greater interpretability of the network classification results. Past work in the area of network classification has primarily focused on distinguishing networks from different categories using two different broad classes of approaches. In the first approach , network classification is carried out by examining certain specific structural features and investigating whether networks belonging to the same category are similar across one or more dimensions as defined by these features [5, 6, 7, 8]. In other words, in this approach the investigator manually chooses the structural characteristics of interest and more or less manually (informally) determines the regions of the feature space that correspond to different classes. These methods are scalable to large networks and yield results that are easily interpreted in terms of the characteristics of interest, but in practice they tend to lead to suboptimal classification accuracy. In the second approach, network classification is done by using very flexible machine learning classi-fiers that, when presented with a network as an input, classify its category or class as an output To somewhat oversimplify, the first approach relies on manual feature specification followed by manual selection of a classification system, whereas the second approach is its opposite, relying on automated feature detection followed by automated classification. While …",
"title": ""
},
{
"docid": "e584549afba4c444c32dfe67ee178a84",
"text": "Bayesian networks (BNs) provide a means for representing, displaying, and making available in a usable form the knowledge of experts in a given Weld. In this paper, we look at the performance of an expert constructed BN compared with other machine learning (ML) techniques for predicting the outcome (win, lose, or draw) of matches played by Tottenham Hotspur Football Club. The period under study was 1995–1997 – the expert BN was constructed at the start of that period, based almost exclusively on subjective judgement. Our objective was to determine retrospectively the comparative accuracy of the expert BN compared to some alternative ML models that were built using data from the two-year period. The additional ML techniques considered were: MC4, a decision tree learner; Naive Bayesian learner; Data Driven Bayesian (a BN whose structure and node probability tables are learnt entirely from data); and a K-nearest neighbour learner. The results show that the expert BN is generally superior to the other techniques for this domain in predictive accuracy. The results are even more impressive for BNs given that, in a number of key respects, the study assumptions place them at a disadvantage. For example, we have assumed that the BN prediction is ‘incorrect’ if a BN predicts more than one outcome as equally most likely (whereas, in fact, such a prediction would prove valuable to somebody who could place an ‘each way’ bet on the outcome). Although the expert BN has now long been irrelevant (since it contains variables relating to key players who have retired or left the club) the results here tend to conWrm the excellent potential of BNs when they are built by a reliable domain expert. The ability to provide accurate predictions without requiring much learning data are an obvious bonus in any domain where data are scarce. Moreover, the BN was relatively simple for the expert to build and its structure could be used again in this and similar types of problems. © 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "cb632cd4d78d85834838b7ac7a126efc",
"text": "We present an approach to combining distributional semantic representations induced from text corpora with manually constructed lexical-semantic networks. While both kinds of semantic resources are available with high lexical coverage, our aligned resource combines the domain specificity and availability of contextual information from distributional models with the conciseness and high quality of manually crafted lexical networks. We start with a distributional representation of induced senses of vocabulary terms, which are accompanied with rich context information given by related lexical items. We then automatically disambiguate such representations to obtain a full-fledged proto-conceptualization, i.e. a typed graph of induced word senses. In a final step, this proto-conceptualization is aligned to a lexical ontology, resulting in a hybrid aligned resource. Moreover, unmapped induced senses are associated with a semantic type in order to connect them to the core resource. Manual evaluations against ground-truth judgments for different stages of our method as well as an extrinsic evaluation on a knowledge-based Word Sense Disambiguation benchmark all indicate the high quality of the new hybrid resource. Additionally, we show the benefits of enriching top-down lexical knowledge resources with bottom-up distributional information from text for addressing high-end knowledge acquisition tasks such as cleaning hypernym graphs and learning taxonomies from scratch.",
"title": ""
},
{
"docid": "0ef2a90669c0469df0dc2281a414cf37",
"text": "Web Intelligence is a direction for scientific research that explores practical applications of Artificial Intelligence to the next generation of Web-empowered systems. In this paper, we present a Web-based intelligent tutoring system for computer programming. The decision making process conducted in our intelligent system is guided by Bayesian networks, which are a formal framework for uncertainty management in Artificial Intelligence based on probability theory. Whereas many tutoring systems are static HTML Web pages of a class textbook or lecture notes, our intelligent system can help a student navigate through the online course materials, recommend learning goals, and generate appropriate reading sequences.",
"title": ""
},
{
"docid": "96f13d8d4e12ef65948216286a0982c9",
"text": "Regression test case selection techniques attempt to increase the testing effectiveness based on the measurement capabilities, such as cost, coverage, and fault detection. This systematic literature review presents state-of-the-art research in effective regression test case selection techniques. We examined 47 empirical studies published between 2007 and 2015. The selected studies are categorized according to the selection procedure, empirical study design, and adequacy criteria with respect to their effectiveness measurement capability and methods used to measure the validity of these results.\n The results showed that mining and learning-based regression test case selection was reported in 39% of the studies, unit level testing was reported in 18% of the studies, and object-oriented environment (Java) was used in 26% of the studies. Structural faults, the most common target, was used in 55% of the studies. Overall, only 39% of the studies conducted followed experimental guidelines and are reproducible.\n There are 7 different cost measures, 13 different coverage types, and 5 fault-detection metrics reported in these studies. It is also observed that 70% of the studies being analyzed used cost as the effectiveness measure compared to 31% that used fault-detection capability and 16% that used coverage.",
"title": ""
}
] |
scidocsrr
|
3973b47c48100d90604ee1a64dbea1df
|
Hierarchical Parsing Net: Semantic Scene Parsing From Global Scene to Objects
|
[
{
"docid": "4d2be7aac363b77c6abd083947bc28c7",
"text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.",
"title": ""
}
] |
[
{
"docid": "d11c2dd512f680e79706f73d4cd3d0aa",
"text": "We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented in terms of a lowrank matrix, and the rank constraint can be relaxed so as to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layerwise manner. Empirically, we find that CCNNs achieve competitive or better performance than CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.",
"title": ""
},
{
"docid": "b35d58ad8987bb4fd9d7df2c09a4daab",
"text": "Visual search is necessary for rapid scene analysis because information processing in the visual system is limited to one or a few regions at one time [3]. To select potential regions or objects of interest rapidly with a task-independent manner, the so-called \"visual saliency\", is important for reducing the complexity of scenes. From the perspective of engineering, modeling visual saliency usually facilitates subsequent higher visual processing, such as image re-targeting [10], image compression [12], object recognition [16], etc. Visual attention model is deeply studied in recent decades. Most of existing models are built on the biologically-inspired architecture based on the famous Feature Integration Theory (FIT) [19, 20]. For instance, Itti et al. proposed a famous saliency model which computes the saliency map with local contrast in multiple feature dimensions, such as color, orientation, etc. [15] [23]. However, FIT-based methods perhaps risk being immersed in local saliency (e.g., object boundaries), because they employ local contrast of features in limited regions and ignore the global information. Visual attention models usually provide location information of the potential objects, but miss some object-related information (e.g., object surfaces) that is necessary for further object detection and recognition. Distinguished from FIT, Guided Search Theory (GST) [3] [24] provides a mechanism to search the regions of interest (ROI) or objects with the guidance from scene layout or top-down sources. The recent version of GST claims that the visual system searches objects of interest along two parallel pathways, i.e., the non-selective pathway and the selective pathway [3]. This new visual search strategy allows observers to extract spatial layout (or gist) information rapidly from entire scene via non-selective pathway. Then, this context information of scene acts as top-down modulation to guide the salient object search along the selective pathway. This two-pathway-based search strategy provides a parallel processing of global and local information for rapid visual search. Referring to the GST, we assume that the non-selective pathway provides \"where\" information and prior of multiple objects for visual search, a counterpart to visual selective saliency, and we use certain simple and fast fixation prediction method to provide an initial estimate of where the objects present. At the same time, the bottom-up visual selective pathway extracts fine image features in multiple cue channels, which could be regarded as a counterpart to the \"what\" pathway in visual system for object recognition. When these bottom-up features meet \"where\" information of objects, the visual system …",
"title": ""
},
{
"docid": "06ae65d560af6e99cdc96495d32379d1",
"text": "Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step-forward in the overall goal of achieving a practical user-independent BCI system.",
"title": ""
},
{
"docid": "e9f9be3fad4a9a71e26a75023929147d",
"text": "BACKGROUND\nAesthetic surgery of female genitalia is an uncommon procedure, and of the techniques available, labia minora reduction can achieve excellent results. Recently, more conservative labia minora reduction techniques have been developed, because the simple isolated strategy of straight amputation does not ensure a favorable outcome. This study was designed to review a series of labia minora reductions using inferior wedge resection and superior pedicle flap reconstruction.\n\n\nMETHODS\nTwenty-one patients underwent inferior wedge resection and superior pedicle flap reconstruction. The mean follow-up was 46 months. Aesthetic results and postoperative outcomes were collected retrospectively and evaluated.\n\n\nRESULTS\nTwenty patients (95.2 percent) underwent bilateral procedures, and 90.4 percent of patients had a congenital labia minora hypertrophy. Five complications occurred in 21 patients (23.8 percent). Wound-healing problems were observed more frequently. The cosmetic result was considered to be good or very good in 85.7 percent of patients, and 95.2 percent were very satisfied with the procedure. All complications except one were observed immediately after the procedure.\n\n\nCONCLUSIONS\nThe results of this study demonstrate that inferior wedge resection and superior pedicle flap reconstruction is a simple and consistent technique and deserves a place among the main procedures available. The complications observed were not unexpected and did not extend hospital stay or interfere with the normal postoperative period. The success of the procedure depends on patient selection, careful preoperative planning, and adequate intraoperative management.",
"title": ""
},
{
"docid": "881a495a8329c71a0202c3510e21b15d",
"text": "We apply basic statistical reasoning to signal reconstruction by machine learning – learning to map corrupted observations to clean signals – with a simple and powerful conclusion: it is possible to learn to restore images by only looking at corrupted examples, at performance at and sometimes exceeding training using clean data, without explicit image priors or likelihood models of the corruption. In practice, we show that a single model learns photographic noise removal, denoising synthetic Monte Carlo images, and reconstruction of undersampled MRI scans – all corrupted by different processes – based on noisy data only.",
"title": ""
},
{
"docid": "5d3ae892c7cbe056734c9b098e018377",
"text": "Information on the Nuclear Magnetic Resonance Gyro under development by Northrop Grumman Corporation is presented. The basics of Operation are summarized, a review of the completed phases is presented, and the current state of development and progress in phase 4 is discussed. Many details have been left out for the sake of brevity, but the principles are still complete.",
"title": ""
},
{
"docid": "b9efcefffc894501f7cfc42d854d6068",
"text": "Disruption of electric power operations can be catastrophic on the national security and economy. Due to the complexity of widely dispersed assets and the interdependency between computer, communication, and power systems, the requirement to meet security and quality compliance on the operations is a challenging issue. In recent years, NERC's cybersecurity standard was initiated to require utilities compliance on cybersecurity in control systems - NERC CIP 1200. This standard identifies several cyber-related vulnerabilities that exist in control systems and recommends several remedial actions (e.g., best practices). This paper is an overview of the cybersecurity issues for electric power control and automation systems, the control architectures, and the possible methodologies for vulnerability assessment of existing systems.",
"title": ""
},
{
"docid": "3d10793b2e4e63e7d639ff1e4cdf04b6",
"text": "Research in signal processing shows that a variety of transforms have been introduced to map the data from the original space into the feature space, in order to efficiently analyze a signal. These techniques differ in their basis functions, that is used for projecting the signal into a higher dimensional space. One of the widely used schemes for quasi-stationary and non-stationary signals is the time-frequency (TF) transforms, characterized by specific kernel functions. This work introduces a novel class of Ramanujan Fourier Transform (RFT) based TF transform functions, constituted by Ramanujan sums (RS) basis. The proposed special class of transforms offer high immunity to noise interference, since the computation is carried out only on co-resonant components, during analysis of signals. Further, we also provide a 2-D formulation of the RFT function. Experimental validation using synthetic examples, indicates that this technique shows potential for obtaining relatively sparse TF-equivalent representation and can be optimized for characterization of certain real-life signals.",
"title": ""
},
{
"docid": "c8e8d82af2d8d2c6c51b506b4f26533f",
"text": "We present an efficient method for detecting anomalies in videos. Recent applications of convolutional neural networks have shown promises of convolutional layers for object detection and recognition, especially in images. However, convolutional neural networks are supervised and require labels as learning signals. We propose a spatiotemporal architecture for anomaly detection in videos including crowded scenes. Our architecture includes two main components, one for spatial feature representation, and one for learning the temporal evolution of the spatial features. Experimental results on Avenue, Subway and UCSD benchmarks confirm that the detection accuracy of our method is comparable to state-of-the-art methods at a considerable speed of up to 140 fps.",
"title": ""
},
{
"docid": "d2c6e2e807376b63828da4037028f891",
"text": "Cortical circuits in the brain are refined by experience during critical periods early in postnatal life. Critical periods are regulated by the balance of excitatory and inhibitory (E/I) neurotransmission in the brain during development. There is now increasing evidence of E/I imbalance in autism, a complex genetic neurodevelopmental disorder diagnosed by abnormal socialization, impaired communication, and repetitive behaviors or restricted interests. The underlying cause is still largely unknown and there is no fully effective treatment or cure. We propose that alteration of the expression and/or timing of critical period circuit refinement in primary sensory brain areas may significantly contribute to autistic phenotypes, including cognitive and behavioral impairments. Dissection of the cellular and molecular mechanisms governing well-established critical periods represents a powerful tool to identify new potential therapeutic targets to restore normal plasticity and function in affected neuronal circuits.",
"title": ""
},
{
"docid": "4b049e3fee1adfba2956cb9111a38bd2",
"text": "This paper presents an optimization based algorithm for underwater image de-hazing problem. Underwater image de-hazing is the most prominent area in research. Underwater images are corrupted due to absorption and scattering. With the effect of that, underwater images have the limitation of low visibility, low color and poor natural appearance. To avoid the mentioned problems, Enhanced fuzzy intensification method is proposed. For each color channel, enhanced fuzzy membership function is derived. Second, the correction of fuzzy based pixel intensification is carried out for each channel to remove haze and to enhance visibility and color. The post processing of fuzzy histogram equalization is implemented for red channel alone when the captured image is having highest value of red channel pixel values. The proposed method provides better results in terms maximum entropy and PSNR with minimum MSE with very minimum computational time compared to existing methodologies.",
"title": ""
},
{
"docid": "925aacab817a20ff527afd4100c2a8bd",
"text": "This paper presents an efficient design approach for band-pass post filters in waveguides, based on mode-matching technique. With this technique, the characteristics of symmetrical cylindrical post arrangements in the cross-section of the considered waveguides can be analyzed accurately and quickly. Importantly, the approach is applicable to post filters in waveguide but can be extended to Substrate Integrated Waveguide (SIW) technologies. The fast computations provide accurate relationships for the K factors as a function of the post radii and the distances between posts, and allow analyzing the influence of machining tolerances on the filter performance. The computations are used to choose reasonable posts for designing band-pass filters, while the error analysis helps to judge whether a given machining precision is sufficient. The approach is applied to a Chebyshev band-pass post filter and a band-pass SIW filter with a center frequency of 10.5 GHz and a fractional bandwidth of 9.52% with verification via full-wave simulations using HFSS and measurements on manufactured prototypes.",
"title": ""
},
{
"docid": "5bf8b65e644f0db9920d3dd7fdf4d281",
"text": "Software developers face a number of challenges when creating applications that attempt to keep important data confidential. Even with diligent attention paid to correct software design and implementation practices, secrets can still be exposed through a single flaw in any of the privileged code on the platform, code which may have been written by thousands of developers from hundreds of organizations throughout the world. Intel is developing innovative security technology which provides the ability for software developers to maintain control of the security of sensitive code and data by creating trusted domains within applications to protect critical information during execution and at rest. This paper will describe how this technology has been effectively used in lab exercises to protect private information in applications including enterprise rights management, video chat, trusted financial transactions, and others. Examples will include both protection of local processing and the establishment of secure communication with cloud services. It will illustrate useful software design patterns that can be followed to create many additional types of trusted software solutions.",
"title": ""
},
{
"docid": "9911063e58b5c2406afd761d8826538a",
"text": "BACKGROUND\nThe purpose of our study was to evaluate inter-observer reliability of the Three-Column classifications with conventional Schatzker and AO/OTA of Tibial Plateau Fractures.\n\n\nMETHODS\n50 cases involving all kinds of the fracture patterns were collected from 278 consecutive patients with tibial plateau fractures who were internal fixed in department of Orthopedics and Trauma III in Shanghai Sixth People's Hospital. The series were arranged randomly, numbered 1 to 50. Four observers were chosen to classify these cases. Before the research, a classification training session was held to each observer. They were given as much time as they required evaluating the radiographs accurately and independently. The classification choices made at the first viewing were not available during the second viewing. The observers were not provided with any feedback after the first viewing. The kappa statistic was used to analyze the inter-observer reliability of the three fracture classification made by the four observers.\n\n\nRESULTS\nThe mean kappa values for inter-observer reliability regarding Schatzker classification was 0.567 (range: 0.513-0.589), representing \"moderate agreement\". The mean kappa values for inter-observer reliability regarding AO/ASIF classification systems was 0.623 (range: 0.510-0.710) representing \"substantial agreement\". The mean kappa values for inter-observer reliability regarding Three-Column classification systems was 0.766 (range: 0.706-0.890), representing \"substantial agreement\".\n\n\nCONCLUSION\nThree-Column classification, which is dependent on the understanding of the fractures using CT scans as well as the 3D reconstruction can identity the posterior column fracture or fragment. It showed \"substantial agreement\" in the assessment of inter-observer reliability, higher than the conventional Schatzker and AO/OTA classifications. We finally conclude that Three-Column classification provides a higher agreement among different surgeons and could be popularized and widely practiced in other clinical centers.",
"title": ""
},
{
"docid": "ad9f3510ffaf7d0bdcf811a839401b83",
"text": "The stator permanent magnet (PM) machines have simple and robust rotor structure as well as high torque density. The hybrid excitation topology can realize flux regulation and wide constant power operating capability of the stator PM machines when used in dc power systems. This paper compares and analyzes the electromagnetic performance of different hybrid excitation stator PM machines according to different combination modes of PMs, excitation winding, and iron flux bridge. Then, the control strategies for voltage regulation of dc power systems are discussed based on different critical control variables including the excitation current, the armature current, and the electromagnetic torque. Furthermore, an improved direct torque control (DTC) strategy is investigated to improve system performance. A parallel hybrid excitation flux-switching generator employing the improved DTC which shows excellent dynamic and steady-state performance has been achieved experimentally.",
"title": ""
},
{
"docid": "2c2fd7484d137a2ac01bdd4d3f176b44",
"text": "This paper presents a novel two-stage low dropout regulator (LDO) that minimizes output noise via a pre-regulator stage and achieves high power supply rejection via a simple subtractor circuit in the power driver stage. The LDO is fabricated with a standard 0.35mum CMOS process and occupies 0.26 mm2 and 0.39mm2 for single and dual output respectively. Measurement showed PSR is 60dB at 10kHz and integrated noise is 21.2uVrms ranging from 1kHz to 100kHz",
"title": ""
},
{
"docid": "c501b2c5d67037b7ca263ec9c52503a9",
"text": "Edith Penrose’s (1959) book, The Theory of the Growth of the Firm, is considered by many scholars in the strategy field to be the seminal work that provided the intellectual foundations for the modern, resource-based theory of the firm. However, the present paper suggests that Penrose’s direct or intended contribution to resource-based thinking has been misinterpreted. Penrose never aimed to provide useful strategy prescriptions for managers to create a sustainable stream of rents; rather, she tried to rigorously describe the processes through which firms grow. In her theory, rents were generally assumed not to occur. If they arose this reflected an inefficient macro-level outcome of an otherwise efficient micro-level growth process. Nevertheless, her ideas have undoubtedly stimulated ‘good conversation’ within the strategy field in the spirit of Mahoney and Pandian (1992); their emerging use by some scholars as building blocks in models that show how sustainable competitive advantage and rents can be achieved is undeniable, although such use was never intended by Edith Penrose herself. Copyright 2002 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "d2268cd9a2ea751ea2080a4d86e32e17",
"text": "Predicting panic is of critical importance in many areas of human and animal behavior, notably in the context of economics. The recent financial crisis is a case in point. Panic may be due to a specific external threat or self-generated nervousness. Here we show that the recent economic crisis and earlier large single-day panics were preceded by extended periods of high levels of market mimicry--direct evidence of uncertainty and nervousness, and of the comparatively weak influence of external news. High levels of mimicry can be a quite general indicator of the potential for self-organized crises.",
"title": ""
},
{
"docid": "35a063ab339f32326547cc54bee334be",
"text": "We present a model for attacking various cryptographic schemes by taking advantage of random hardware faults. The model consists of a black-box containing some cryptographic secret. The box interacts with the outside world by following a cryptographic protocol. The model supposes that from time to time the box is affected by a random hardware fault causing it to output incorrect values. For example, the hardware fault flips an internal register bit at some point during the computation. We show that for many digital signature and identification schemes these incorrect outputs completely expose the secrets stored in the box. We present the following results: (1) The secret signing key used in an implementation of RSA based on the Chinese Remainder Theorem (CRT) is completely exposed from a single erroneous RSA signature, (2) for non-CRT implementations of RSA the secret key is exposed given a large number (e.g. 1000) of erroneous signatures, (3) the secret key used in Fiat—Shamir identification is exposed after a small number (e.g. 10) of faulty executions of the protocol, and (4) the secret key used in Schnorr's identification protocol is exposed after a much larger number (e.g. 10,000) of faulty executions. Our estimates for the number of necessary faults are based on standard security parameters such as a 1024-bit modulus, and a 2 -40 identification error probability. Our results demonstrate the importance of preventing errors in cryptographic computations. We conclude the paper with various methods for preventing these attacks.",
"title": ""
},
{
"docid": "8272e9a13d2cae8b76cfc3e64b14297d",
"text": "Whether they are made to entertain you, or to educate you, good video games engage you. Significant research has tried to understand engagement in games by measuring player experience (PX). Traditionally, PX evaluation has focused on the enjoyment of game, or the motivation of players; these factors no doubt contribute to engagement, but do decisions regarding play environment (e.g., the choice of game controller) affect the player more deeply than that? We apply self-determination theory (specifically satisfaction of needs and self-discrepancy represented using the five factors model of personality) to explain PX in an experiment with controller type as the manipulation. Our study shows that there are a number of effects of controller on PX and in-game player personality. These findings provide both a lens with which to view controller effects in games and a guide for controller choice in the design of new games. Our research demonstrates that including self-characteristics assessment in the PX evaluation toolbox is valuable and useful for understanding player experience.",
"title": ""
}
] |
scidocsrr
|
1c83c6ee0cf7f8b5e72e8b9a00e6b0fe
|
Deep automatic license plate recognition system
|
[
{
"docid": "b4316fcbc00b285e11177811b61d2b99",
"text": "Automatic license plate recognition (ALPR) is one of the most important aspects of applying computer techniques towards intelligent transportation systems. In order to recognize a license plate efficiently, however, the location of the license plate, in most cases, must be detected in the first place. Due to this reason, detecting the accurate location of a license plate from a vehicle image is considered to be the most crucial step of an ALPR system, which greatly affects the recognition rate and speed of the whole system. In this paper, a region-based license plate detection method is proposed. In this method, firstly, mean shift is used to filter and segment a color vehicle image in order to get candidate regions. These candidate regions are then analyzed and classified in order to decide whether a candidate region contains a license plate. Unlike other existing license plate detection methods, the proposed method focuses on regions, which demonstrates to be more robust to interference characters and more accurate when compared with other methods.",
"title": ""
},
{
"docid": "8d5dd3f590dee87ea609278df3572f6e",
"text": "In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine – synthetic data that is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. This excess of data exposes new possibilities for word recognition models, and here we consider three models, each one “reading” words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs.",
"title": ""
},
{
"docid": "7c796d22e9875bc4fe1a5267d28e5d40",
"text": "A simple approach to learning invariances in image classification consists in augmenting the training set with transformed versions of the original images. However, given a large set of possible transformations, selecting a compact subset is challenging. Indeed, all transformations are not equally informative and adding uninformative transformations increases training time with no gain in accuracy. We propose a principled algorithm -- Image Transformation Pursuit (ITP) -- for the automatic selection of a compact set of transformations. ITP works in a greedy fashion, by selecting at each iteration the one that yields the highest accuracy gain. ITP also allows to efficiently explore complex transformations, that combine basic transformations. We report results on two public benchmarks: the CUB dataset of bird images and the ImageNet 2010 challenge. Using Fisher Vector representations, we achieve an improvement from 28.2% to 45.2% in top-1 accuracy on CUB, and an improvement from 70.1% to 74.9% in top-5 accuracy on ImageNet. We also show significant improvements for deep convnet features: from 47.3% to 55.4% on CUB and from 77.9% to 81.4% on ImageNet.",
"title": ""
}
] |
[
{
"docid": "f4adaf2cbb8d176b72939a9a81c92da7",
"text": "This paper describes a new method for recognizing overtraced strokes to 2D geometric primitives, which are further interpreted as 2D line drawings. This method can support rapid grouping and fitting of overtraced polylines or conic curves based on the classified characteristics of each stroke during its preprocessing stage. The orientation and its endpoints of a classified stroke are used in the stroke grouping process. The grouped strokes are then fitted with 2D geometry. This method can deal with overtraced sketch strokes in both solid and dash linestyles, fit grouped polylines as a whole polyline and simply fit conic strokes without computing the direction of a stroke. It avoids losing joint information due to segmentation of a polyline into line-segments. The proposed method has been tested with our freehand sketch recognition system (FSR), which is robust and easier to use by removing some limitations embedded with most existing sketching systems which only accept non-overtraced stroke drawing. The test results showed that the proposed method can support freehand sketching based conceptual design with no limitations on drawing sequence, directions and overtraced cases while achieving a satisfactory interpretation rate.",
"title": ""
},
{
"docid": "8234cd43e0bfba657bf81b6ca9b6825a",
"text": "We derive upper and lower limits on the majority vote accuracy with respect to individual accuracy p, the number of classifiers in the pool (L), and the pairwise dependence between classifiers, measured by Yule’s Q statistic. Independence between individual classifiers is typically viewed as an asset in classifier fusion. We show that the majority vote with dependent classifiers can potentially offer a dramatic improvement both over independent classifiers and over an individual classifier with accuracy p. A functional relationship between the limits and the pairwise dependence Q is derived. Two patterns of the joint distribution for classifier outputs (correct/incorrect) are identified to derive the limits: the pattern of success and the pattern of failure. The results support the intuition that negative pairwise dependence is beneficial although not straightforwardly related to the accuracy. The pattern of success showed that for the highest improvement over p, all pairs of classifiers in the pool should have the same negative dependence.",
"title": ""
},
{
"docid": "b354f4f9bd12caef2a22ebfeae315cb5",
"text": "In order to advance action generation and creation in robots beyond simple learned schemas we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions of seen longer actions in video in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation. Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by “watching” unconstrained videos with high accuracy.",
"title": ""
},
{
"docid": "997502358acec488a3c02b4c711c6fc2",
"text": "This study presents the first results of an analysis primarily based on semi-structured interviews with government officials and managers who are responsible for smart city initiatives in four North American cities—Philadelphia and Seattle in the United States, Quebec City in Canada, and Mexico City in Mexico. With the reference to the Smart City Initiatives Framework that we suggested in our previous research, this study aims to build a new understanding of smart city initiatives. Main findings are categorized into eight aspects including technology, management and organization, policy context, governance, people and communities, economy, built infrastructure, and natural environ-",
"title": ""
},
{
"docid": "0070d6e21bdb8bac260178603cfbf67d",
"text": "Sound is a medium that conveys functional and emotional information in a form of multilayered streams. With the use of such advantage, robot sound design can open a way for being more efficient communication in human-robot interaction. As the first step of research, we examined how individuals perceived the functional and emotional intention of robot sounds and whether the perceived information from sound is associated with their previous experience with science fiction movies. The sound clips were selected based on the context of the movie scene (i.e., Wall-E, R2-D2, BB8, Transformer) and classified as functional (i.e., platform, monitoring, alerting, feedback) and emotional (i.e., positive, neutral, negative). A total of 12 participants were asked to identify the perceived properties for each of the 30 items. We found that the perceived emotional and functional messages varied from those originally intended and differed by previous experience.",
"title": ""
},
{
"docid": "6dcf25b450d8c4eea6b61556d505a729",
"text": "Skip connections made the training of very deep neural networks possible and have become an indispendable component in a variety of neural architectures. A satisfactory explanation for their success remains elusive. Here, we present an explanation for the benefits of skip connections in training very deep neural networks. We argue that skip connections help break symmetries inherent in the loss landscapes of deep networks, leading to drastically simplified landscapes. In particular, skip connections between adjacent layers in a multilayer network break the permutation symmetry of nodes in a given layer, and the recently proposed DenseNet architecture, where each layer projects skip connections to every layer above it, also breaks the rescaling symmetry of connectivity matrices between different layers. This hypothesis is supported by evidence from a toy model with binary weights and from experiments with fully-connected networks suggesting (i) that skip connections do not necessarily improve training unless they help break symmetries and (ii) that alternative ways of breaking the symmetries also lead to significant performance improvements in training deep networks, hence there is nothing special about skip connections in this respect. We find, however, that skip connections confer additional benefits over and above symmetry-breaking, such as the ability to deal effectively with the vanishing gradients problem.",
"title": ""
},
{
"docid": "18f13858b5f9e9a8e123d80b159c4d72",
"text": "Cryptocurrency, and its underlying technologies, has been gaining popularity for transaction management beyond financial transactions. Transaction information is maintained in the blockchain, which can be used to audit the integrity of the transaction. The focus on this paper is the potential availability of block-chain technology of other transactional uses. Block-chain is one of the most stable open ledgers that preserves transaction information, and is difficult to forge. Since the information stored in block-chain is not related to personally identifiable information, it has the characteristics of anonymity. Also, the block-chain allows for transparent transaction verification since all information in the block-chain is open to the public. These characteristics are the same as the requirements for a voting system. That is, strong robustness, anonymity, and transparency. In this paper, we propose an electronic voting system as an application of blockchain, and describe block-chain based voting at a national level through examples.",
"title": ""
},
{
"docid": "9bbc3e426c7602afaa857db85e754229",
"text": "Knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform link prediction, i.e., predict whether a relationship not in the knowledge base is likely to be true. This paper combines insights from several previous link prediction models into a new embedding model STransE that represents each entity as a lowdimensional vector, and each relation by two matrices and a translation vector. STransE is a simple combination of the SE and TransE models, but it obtains better link prediction performance on two benchmark datasets than previous embedding models. Thus, STransE can serve as a new baseline for the more complex models in the link prediction task.",
"title": ""
},
{
"docid": "bdce7ff18a7a5b7ed8c09fa98e426378",
"text": "Following Ebbinghaus (1885/1964), a number of procedures have been devised to measure short-term memory using immediate serial recall: digit span, Knox's (1913) cube imitation test and Corsi's (1972) blocks task. Understanding the cognitive processes involved in these tasks was obstructed initially by the lack of a coherent concept of short-term memory and later by the mistaken assumption that short-term and long-term memory reflected distinct processes as well as different kinds of experimental task. Despite its apparent conceptual simplicity, a variety of cognitive mechanisms are responsible for short-term memory, and contemporary theories of working memory have helped to clarify these. Contrary to the earliest writings on the subject, measures of short-term memory do not provide a simple measure of mental capacity, but they do provide a way of understanding some of the key mechanisms underlying human cognition.",
"title": ""
},
{
"docid": "bec4932c66f8a8a87c1967ca42ad4315",
"text": "Nowadays, the number of layers and of neurons in each layer of a deep network are typically set manually. While very deep and wide networks have proven effective in general, they come at a high memory and computation cost, thus making them impractical for constrained platforms. These networks, however, are known to have many redundant parameters, and could thus, in principle, be replaced by more compact architectures. In this paper, we introduce an approach to automatically determining the number of neurons in each layer of a deep network during learning. To this end, we propose to make use of a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron. Starting from an overcomplete network, we show that our approach can reduce the number of parameters by up to 80% while retaining or even improving the network accuracy.",
"title": ""
},
{
"docid": "554d234697cd98bf790444fe630c179b",
"text": "This paper presents a novel approach for search engine results clustering that relies on the semantics of the retrieved documents rather than the terms in those documents. The proposed approach takes into consideration both lexical and semantics similarities among documents and applies activation spreading technique in order to generate semantically meaningful clusters. This approach allows documents that are semantically similar to be clustered together rather than clustering documents based on similar terms. A prototype is implemented and several experiments are conducted to test the prospered solution. The result of the experiment confirmed that the proposed solution achieves remarkable results in terms of precision.",
"title": ""
},
{
"docid": "28d01dba790cf55591a84ef88b70ebbf",
"text": "A novel method for simultaneous keyphrase extraction and generic text summarization is proposed by modeling text documents as weighted undirected and weighted bipartite graphs. Spectral graph clustering algorithms are useed for partitioning sentences of the documents into topical groups with sentence link priors being exploited to enhance clustering quality. Within each topical group, saliency scores for keyphrases and sentences are generated based on a mutual reinforcement principle. The keyphrases and sentences are then ranked according to their saliency scores and selected for inclusion in the top keyphrase list and summaries of the document. The idea of building a hierarchy of summaries for documents capturing different levels of granularity is also briefly discussed. Our method is illustrated using several examples from news articles, news broadcast transcripts and web documents.",
"title": ""
},
{
"docid": "229605eada4ca390d17c5ff168c6199a",
"text": "The sharing economy is a new online community that has important implications for offline behavior. This study evaluates whether engagement in the sharing economy is associated with an actor’s aversion to risk. Using a web-based survey and a field experiment, we apply an adaptation of Holt and Laury’s (2002) risk lottery game to a representative sample of sharing economy participants. We find that frequency of activity in the sharing economy predicts risk aversion, but only in interaction with satisfaction. While greater satisfaction with sharing economy websites is associated with a decrease in risk aversion, greater frequency of usage is associated with greater risk aversion. This analysis shows the limitations of a static perspective on how risk attitudes relate to participation in the sharing economy.",
"title": ""
},
{
"docid": "ada4554ce6e6180075459557409f524d",
"text": "The conceptualization of the notion of a system in systems engineering, as exemplified in, for instance, the engineering standard IEEE Std 1220-1998, is problematic when applied to the design of socio-technical systems. This is argued using Intelligent Transportation Systems as an example. A preliminary conceptualization of socio-technical systems is introduced which includes technical and social elements and actors, as well as four kinds of relations. Current systems engineering practice incorporates technical elements and actors in the system but sees social elements exclusively as contextual. When designing socio-technical systems, however, social elements and the corresponding relations must also be considered as belonging to the system.",
"title": ""
},
{
"docid": "479089fb59b5b810f95272d04743f571",
"text": "We address offensive tactic recognition in broadcast basketball videos. As a crucial component towards basketball video content understanding, tactic recognition is quite challenging because it involves multiple independent players, each of which has respective spatial and temporal variations. Motivated by the observation that most intra-class variations are caused by non-key players, we present an approach that integrates key player detection into tactic recognition. To save the annotation cost, our approach can work on training data with only video-level tactic annotation, instead of key players labeling. Specifically, this task is formulated as an MIL (multiple instance learning) problem where a video is treated as a bag with its instances corresponding to subsets of the five players. We also propose a representation to encode the spatio-temporal interaction among multiple players. It turns out that our approach not only effectively recognizes the tactics but also precisely detects the key players.",
"title": ""
},
{
"docid": "a25839666b7e208810979dc93d20f950",
"text": "Energy consumption management has become an essential concept in cloud computing. In this paper, we propose a new power aware load balancing, named Bee-MMT (artificial bee colony algorithm-Minimal migration time), to decline power consumption in cloud computing; as a result of this decline, CO2 production and operational cost will be decreased. According to this purpose, an algorithm based on artificial bee colony algorithm (ABC) has been proposed to detect over utilized hosts and then migrate one or more VMs from them to reduce their utilization; following that we detect underutilized hosts and, if it is possible, migrate all VMs which have been allocated to these hosts and then switch them to the sleep mode. However, there is a trade-off between energy consumption and providing high quality of service to the customers. Consequently, we consider SLA Violation as a metric to qualify the QOS that require to satisfy the customers. The results show that the proposed method can achieve greater power consumption saving than other methods like LR-MMT (local regression-Minimal migration time), DVFS (Dynamic Voltage Frequency Scaling), IQR-MMT (Interquartile Range-MMT), MAD-MMT (Median Absolute Deviation) and non-power aware.",
"title": ""
},
{
"docid": "c9878a454c91fec094fce02e1ac49348",
"text": "Autonomous walking bipedal machines, possibly useful for rehabilitation and entertainment purposes, need a high energy efficiency, offered by the concept of ‘Passive Dynamic Walking’ (exploitation of the natural dynamics of the robot). 2D passive dynamic bipeds have been shown to be inherently stable, but in the third dimension two problematic degrees of freedom are introduced: yaw and roll. We propose a design for a 3D biped with a pelvic body as a passive dynamic compensator, which will compensate for the undesired yaw and roll motion, and allow the rest of the robot to move as if it were a 2D machine. To test our design, we perform numerical simulations on a multibody model of the robot. With limit cycle analysis we calculate the stability of the robot when walking at its natural speed. The simulation shows that the compensator, indeed, effectively compensates for both the yaw and the roll motion, and that the walker is stable.",
"title": ""
},
{
"docid": "1994e427b1d00f1f64ed91559ffa5daa",
"text": "We started investigating the collection of HTML tables on the Web and developed the WebTables system a few years ago [4]. Since then, our work has been motivated by applying WebTables in a broad set of applications at Google, resulting in several product launches. In this paper, we describe the challenges faced, lessons learned, and new insights that we gained from our efforts. The main challenges we faced in our efforts were (1) identifying tables that are likely to contain high-quality data (as opposed to tables used for navigation, layout, or formatting), and (2) recovering the semantics of these tables or signals that hint at their semantics. The result is a semantically enriched table corpus that we used to develop several services. First, we created a search engine for structured data whose index includes over a hundred million HTML tables. Second, we enabled users of Google Docs (through its Research Panel) to find relevant data tables and to insert such data into their documents as needed. Most recently, we brought WebTables to a much broader audience by using the table corpus to provide richer tabular snippets for fact-seeking web search queries on Google.com.",
"title": ""
},
{
"docid": "a1348a9823fc85d22bc73f3fe177e0ba",
"text": "Ultrasound imaging makes use of backscattering of waves during their interaction with scatterers present in biological tissues. Simulation of synthetic ultrasound images is a challenging problem on account of inability to accurately model various factors of which some include intra-/inter scanline interference, transducer to surface coupling, artifacts on transducer elements, inhomogeneous shadowing and nonlinear attenuation. Current approaches typically solve wave space equations making them computationally expensive and slow to operate. We propose a generative adversarial network (GAN) inspired approach for fast simulation of patho-realistic ultrasound images. We apply the framework to intravascular ultrasound (IVUS) simulation. A stage 0 simulation performed using pseudo B-mode ultrasound image simulator yields speckle mapping of a digitally defined phantom. The stage I GAN subsequently refines them to preserve tissue specific speckle intensities. The stage II GAN further refines them to generate high resolution images with patho-realistic speckle profiles. We evaluate patho-realism of simulated images with a visual Turing test indicating an equivocal confusion in discriminating simulated from real. We also quantify the shift in tissue specific intensity distributions of the real and simulated images to prove their similarity.",
"title": ""
},
{
"docid": "1772d22c19635b6636e42f8bb1b1a674",
"text": "• MacArthur Fellowship, 2010 • Guggenheim Fellowship, 2010 • Li Ka Shing Foundation Women in Science Distinguished Lectu re Series Award, 2010 • MIT Technology Review TR-35 Award (recognizing the world’s top innovators under the age of 35), 2009. • Okawa Foundation Research Award, 2008. • Sloan Research Fellow, 2007. • Best Paper Award, 2007 USENIX Security Symposium. • George Tallman Ladd Research Award, Carnegie Mellon Univer sity, 2007. • Highest ranked paper, 2006 IEEE Security and Privacy Sympos ium; paper invited to a special issue of the IEEE Transactions on Dependable and Secure Computing. • NSF CAREER Award on “Exterminating Large Scale Internet Att acks”, 2005. • IBM Faculty Award, 2005. • Highest ranked paper, 1999 IEEE Computer Security Foundati on Workshop; paper invited to a special issue of Journal of Computer Security.",
"title": ""
}
] |
scidocsrr
|
611e2f512e9bcf17f66f557d8a61e545
|
Visual Analytics for MOOC Data
|
[
{
"docid": "c995426196ad943df2f5a4028a38b781",
"text": "Today it is quite common for people to exchange hundreds of comments in online conversations (e.g., blogs). Often, it can be very difficult to analyze and gain insights from such long conversations. To address this problem, we present a visual text analytic system that tightly integrates interactive visualization with novel text mining and summarization techniques to fulfill information needs of users in exploring conversations. At first, we perform a user requirement analysis for the domain of blog conversations to derive a set of design principles. Following these principles, we present an interface that visualizes a combination of various metadata and textual analysis results, supporting the user to interactively explore the blog conversations. We conclude with an informal user evaluation, which provides anecdotal evidence about the effectiveness of our system and directions for further design.",
"title": ""
}
] |
[
{
"docid": "eb888ba37e7e97db36c330548569508d",
"text": "Since the first online demonstration of Neural Machine Translation (NMT) by LISA (Bahdanau et al., 2014), NMT development has recently moved from laboratory to production systems as demonstrated by several entities announcing rollout of NMT engines to replace their existing technologies. NMT systems have a large number of training configurations and the training process of such systems is usually very long, often a few weeks, so role of experimentation is critical and important to share. In this work, we present our approach to production-ready systems simultaneously with release of online demonstrators covering a large variety of languages ( 12 languages, for32 language pairs). We explore different practical choices: an efficient and evolutive open-source framework; data preparation; network architecture; additional implemented features; tuning for production; etc. We discuss about evaluation methodology, present our first findings and we finally outline further work. Our ultimate goal is to share our expertise to build competitive production systems for ”generic” translation. We aim at contributing to set up a collaborative framework to speed-up adoption of the technology, foster further research efforts and enable the delivery and adoption to/by industry of use-case specific engines integrated in real production workflows. Mastering of the technology would allow us to build translation engines suited for particular needs, outperforming current simplest/uniform systems.",
"title": ""
},
{
"docid": "821be0a049a74abf5b009b012022af2f",
"text": "BACKGROUND\nIn theory, infections that arise after female genital mutilation (FGM) in childhood might ascend to the internal genitalia, causing inflammation and scarring and subsequent tubal-factor infertility. Our aim was to investigate this possible association between FGM and primary infertility.\n\n\nMETHODS\nWe did a hospital-based case-control study in Khartoum, Sudan, to which we enrolled women (n=99) with primary infertility not caused by hormonal or iatrogenic factors (previous abdominal surgery), or the result of male-factor infertility. These women underwent diagnostic laparoscopy. Our controls were primigravidae women (n=180) recruited from antenatal care. We used exact conditional logistic regression, stratifying for age and controlling for socioeconomic status, level of education, gonorrhoea, and chlamydia, to compare these groups with respect to FGM.\n\n\nFINDINGS\nOf the 99 infertile women examined, 48 had adnexal pathology indicative of previous inflammation. After controlling for covariates, these women had a significantly higher risk than controls of having undergone the most extensive form of FGM, involving the labia majora (odds ratio 4.69, 95% CI 1.49-19.7). Among women with primary infertility, both those with tubal pathology and those with normal laparoscopy findings were at a higher risk than controls of extensive FGM, both with borderline significance (p=0.054 and p=0.055, respectively). The anatomical extent of FGM, rather than whether or not the vulva had been sutured or closed, was associated with primary infertility.\n\n\nINTERPRETATION\nOur findings indicate a positive association between the anatomical extent of FGM and primary infertility. Laparoscopic postinflammatory adnexal changes are not the only explanation for this association, since cases without such pathology were also affected. The association between FGM and primary infertility is highly relevant for preventive work against this ancient practice.",
"title": ""
},
{
"docid": "5d6c2580602945084d5a643c335c40f2",
"text": "Probabilistic topic models are a suite of algorithms whose aim is to discover the hidden thematic structure in large archives of documents. In this article, we review the main ideas of this field, survey the current state-of-the-art, and describe some promising future directions. We first describe latent Dirichlet allocation (LDA) [8], which is the simplest kind of topic model. We discuss its connections to probabilistic modeling, and describe two kinds of algorithms for topic discovery. We then survey the growing body of research that extends and applies topic models in interesting ways. These extensions have been developed by relaxing some of the statistical assumptions of LDA, incorporating meta-data into the analysis of the documents, and using similar kinds of models on a diversity of data types such as social networks, images and genetics. Finally, we give our thoughts as to some of the important unexplored directions for topic modeling. These include rigorous methods for checking models built for data exploration, new approaches to visualizing text and other high dimensional data, and moving beyond traditional information engineering applications towards using topic models for more scientific ends.",
"title": ""
},
{
"docid": "66e8940044bb58971da01cc059b8ef09",
"text": "The use of Bayesian methods for data analysis is creating a revolution in fields ranging from genetics to marketing. Yet, results of our literature review, including more than 10,000 articles published in 15 journals from January 2001 and December 2010, indicate that Bayesian approaches are essentially absent from the organizational sciences. Our article introduces organizational science researchers to Bayesian methods and describes why and how they should be used. We use multiple linear regression as the framework to offer a step-by-step demonstration, including the use of software, regarding how to implement Bayesian methods. We explain and illustrate how to determine the prior distribution, compute the posterior distribution, possibly accept the null value, and produce a write-up describing the entire Bayesian process, including graphs, results, and their interpretation. We also offer a summary of the advantages of using Bayesian analysis and examples of how specific published research based on frequentist analysis-based approaches failed to benefit from the advantages offered by a Bayesian approach and how using Bayesian analyses would have led to richer and, in some cases, different substantive conclusions. We hope that our article will serve as a catalyst for the adoption of Bayesian methods in organizational science research.",
"title": ""
},
{
"docid": "162823edcbd50579a1d386f88931d59d",
"text": "Elevated liver enzymes are a common scenario encountered by physicians in clinical practice. For many physicians, however, evaluation of such a problem in patients presenting with no symptoms can be challenging. Evidence supporting a standardized approach to evaluation is lacking. Although alterations of liver enzymes could be a normal physiological phenomenon in certain cases, it may also reflect potential liver injury in others, necessitating its further assessment and management. In this article, we provide a guide to primary care clinicians to interpret abnormal elevation of liver enzymes in asymptomatic patients using a step-wise algorithm. Adopting a schematic approach that classifies enzyme alterations on the basis of pattern (hepatocellular, cholestatic and isolated hyperbilirubinemia), we review an approach to abnormal alteration of liver enzymes within each section, the most common causes of enzyme alteration, and suggest initial investigations.",
"title": ""
},
{
"docid": "f008e38cd63db0e4cf90705cc5e8860e",
"text": "6 Abstract— The purpose of this paper is to propose a MATLAB/ Simulink simulators for PV cell/module/array based on the Two-diode model of a PV cell.This model is known to have better accuracy at low irradiance levels which allows for more accurate prediction of PV systems performance.To reduce computational time , the input parameters are reduced as the values of Rs and Rp are estimated by an efficient iteration method. Furthermore ,all of the inputs to the simulators are information available on a standard PV module datasheet. The present paper present first abrief introduction to the behavior and functioning of a PV device and write the basic equation of the two-diode model,without the intention of providing an indepth analysis of the photovoltaic phenomena and the semicondutor physics. The introduction on PV devices is followed by the modeling and simulation of PV cell/PV module/PV array, which is the main subject of this paper. A MATLAB Simulik based simulation study of PV cell/PV module/PV array is carried out and presented .The simulation model makes use of the two-diode model basic circuit equations of PV solar cell, taking the effect of sunlight irradiance and cell temperature into consideration on the output current I-V characteristic and output power P-V characteristic . A particular typical 50W solar panel was used for model evaluation. The simulation results , compared with points taken directly from the data sheet and curves pubblished by the manufacturers, show excellent correspondance to the model.",
"title": ""
},
{
"docid": "33f86056827e1e8958ab17e11d7e4136",
"text": "The successful integration of Information and Communications Technology (ICT) into the teaching and learning of English Language is largely dependent on the level of teacher’s ICT competence, the actual utilization of ICT in the language classroom and factors that challenge teachers to use it in language teaching. The study therefore assessed the Secondary School English language teachers’ ICT literacy, the extent of ICT utilization in English language teaching and the challenges that prevent language teachers to integrate ICT in teaching. To answer the problems, three sets of survey questionnaires were distributed to 30 English teachers in the 11 schools of Cluster 1 (CarCanMadCarLan). Data gathered were analyzed using descriptive statistics and frequency count. The results revealed that the teachers’ ICT literacy was moderate. The findings provided evidence that there was only a limited use of ICT in language teaching. Feedback gathered from questionnaires show that teachers faced many challenges that demotivate them from using ICT in language activities. Based on these findings, it is recommended the teachers must be provided with intensive ICT-based trainings to equip them with knowledge of ICT and its utilization in language teaching. School administrators as well as stakeholders may look for interventions to upgrade school’s ICTbased resources for its optimum use in teaching and learning. Most importantly, a larger school-wide ICT development plan may be implemented to ensure coherence of ICT implementation in the teaching-learning activities. ‘ICT & Innovations in Education’ International Journal International Electronic Journal | ISSN 2321 – 7189 | www.ictejournal.com Volume 2, Issue 1 | February 2014",
"title": ""
},
{
"docid": "5f92491cb7da547ba3ea6945832342ac",
"text": "SwitchKV is a new key-value store system design that combines high-performance cache nodes with resourceconstrained backend nodes to provide load balancing in the face of unpredictable workload skew. The cache nodes absorb the hottest queries so that no individual backend node is over-burdened. Compared with previous designs, SwitchKV exploits SDN techniques and deeply optimized switch hardware to enable efficient contentbased routing. Programmable network switches keep track of cached keys and route requests to the appropriate nodes at line speed, based on keys encoded in packet headers. A new hybrid caching strategy keeps cache and switch forwarding rules updated with low overhead and ensures that system load is always well-balanced under rapidly changing workloads. Our evaluation results demonstrate that SwitchKV can achieve up to 5× throughput and 3× latency improvements over traditional system designs.",
"title": ""
},
{
"docid": "a2a4936ca3600dc4fb2369c43ffc9016",
"text": "Intuitive and efficient retrieval of motion capture data is essential for effective use of motion capture databases. In this paper, we describe a system that allows the user to retrieve a particular sequence by performing an approximation of the motion with an instrumented puppet. This interface is intuitive because both adults and children have experience playacting with puppets and toys to express particular behaviors or to tell stories with style and emotion. The puppet has 17 degrees of freedom and can therefore represent a variety of motions. We develop a novel similarity metric between puppet and human motion by computing the reconstruction errors of the puppet motion in the latent space of the human motion and those of the human motion in the latent space of the puppet motion. This metric works even for relatively large databases. We conducted a user study of the system and subjects could find the desired motion with reasonable accuracy from a database consisting of everyday, exercise, and acrobatic behaviors.",
"title": ""
},
{
"docid": "59aa4318fa39c1d6ec086af7041148b2",
"text": "Two of the most important outcomes of learning analytics are predicting students’ learning and providing effective feedback. Learning Management Systems (LMS), which are widely used to support online and face-to-face learning, provide extensive research opportunities with detailed records of background data regarding users’ behaviors. The purpose of this study was to investigate the effects of undergraduate students’ LMS learning behaviors on their academic achievements. In line with this purpose, the participating students’ online learning behaviors in LMS were examined by using learning analytics for 14 weeks, and the relationship between students’ behaviors and their academic achievements was analyzed, followed by an analysis of their views about the influence of LMS on their academic achievement. The present study, in which quantitative and qualitative data were collected, was carried out with the explanatory mixed method. A total of 71 undergraduate students participated in the study. The results revealed that the students used LMSs as a support to face-to-face education more intensively on course days (at the beginning of the related lessons and at nights on course days) and that they activated the content elements the most. Lastly, almost all the students agreed that LMSs helped increase their academic achievement only when LMSs included such features as effectiveness, interaction, reinforcement, attractive design, social media support, and accessibility.",
"title": ""
},
{
"docid": "3c8cc4192ee6ddd126e53c8ab242f396",
"text": "There are several approaches for automated functional web testing and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. In this paper, we present an empirical cost/benefit analysis of two different categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE); and, (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these testing approaches both when developing the initial test suites from scratch and when the test suites are maintained, upon the release of a new software version. Results indicate that, on the one hand, the development of the test suites is more expensive in terms of time required (between 32% and 112%) when the programmable web testing approach is adopted, but on the other hand, test suite maintenance is less expensive when this approach is used (with a saving between 16% and 51%). We found that, in the majority of the cases, after a small number of releases (from one to three), the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing and the cost saving gets amplified over the successive releases.",
"title": ""
},
{
"docid": "7f65d625ca8f637a6e2e9cb7006d1778",
"text": "Recent work in machine learning for information extraction has focused on two distinct sub-problems: the conventional problem of filling template slots from natural language text, and the problem of wrapper induction, learning simple extraction procedures (“wrappers”) for highly structured text such as Web pages produced by CGI scripts. For suitably regular domains, existing wrapper induction algorithms can efficiently learn wrappers that are simple and highly accurate, but the regularity bias of these algorithms makes them unsuitable for most conventional information extraction tasks. Boosting is a technique for improving the performance of a simple machine learning algorithm by repeatedly applying it to the training set with different example weightings. We describe an algorithm that learns simple, low-coverage wrapper-like extraction patterns, which we then apply to conventional information extraction problems using boosting. The result is BWI, a trainable information extraction system with a strong precision bias and F1 performance better than state-of-the-art techniques in many domains.",
"title": ""
},
{
"docid": "78982bfdcf476081bd708c8aa2e5c5bd",
"text": "Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While sparse point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. On the other hand, state of the art object detection methods provide rich information about entities present in the scene from a single image. This work incorporates a real-time deep-learned object detector to the monocular SLAM framework for representing generic objects as quadrics that permit detections to be seamlessly integrated while allowing the real-time performance. Finer reconstruction of an object, learned by a CNN network, is also incorporated and provides a shape prior for the quadric leading further refinement. To capture the dominant structure of the scene, additional planar landmarks are detected by a CNN-based plane detector and modelled as landmarks in the map. Experiments show that the introduced plane and object landmarks and the associated constraints, using the proposed monocular plane detector and incorporated object detector, significantly improve camera localization and lead to a richer semantically more meaningful map.",
"title": ""
},
{
"docid": "cefcd78be7922f4349f1bb3aa59d2e1d",
"text": "The paper presents performance analysis of modified SEPIC dc-dc converter with low input voltage and wide output voltage range. The operational analysis and the design is done for the 380W power output of the modified converter. The simulation results of modified SEPIC converter are obtained with PI controller for the output voltage. The results obtained with the modified converter are compared with the basic SEPIC converter topology for the rise time, peak time, settling time and steady state error of the output response for open loop. Voltage tracking curve is also shown for wide output voltage range. I. Introduction Dc-dc converters are widely used in regulated switched mode dc power supplies and in dc motor drive applications. The input to these converters is often an unregulated dc voltage, which is obtained by rectifying the line voltage and it will therefore fluctuate due to variations of the line voltages. Switched mode dc-dc converters are used to convert this unregulated dc input into a controlled dc output at a desired voltage level. The recent growth of battery powered applications and low voltage storage elements are increasing the demand of efficient step-up dc–dc converters. Typical applications are in adjustable speed drives, switch-mode power supplies, uninterrupted power supplies, and utility interface with nonconventional energy sources, battery energy storage systems, battery charging for electric vehicles, and power supplies for telecommunication systems etc.. These applications demand high step-up static gain, high efficiency and reduced weight, volume and cost. The step-up stage normally is the critical point for the design of high efficiency converters due to the operation with high input current and high output voltage [1]. The boost converter topology is highly effective in these applications but at low line voltage in boost converter, the switching losses are high because the input current has the maximum value and the highest step-up conversion is required. The inductor has to be oversized for the large current at low line input. As a result, a boost converter designed for universal-input applications is heavily oversized compared to a converter designed for a narrow range of input ac line voltage [2]. However, recently new non-isolated dc–dc converter topologies with basic boost are proposed, showing that it is possible to obtain high static gain, low voltage stress and low losses, improving the performance with respect to the classical topologies. Some single stage high power factor rectifiers are presented in [3-6]. A new …",
"title": ""
},
{
"docid": "33c06f0ee7d3beb0273a47790f2a84cd",
"text": "This study presents the clinical results of a surgical technique that expands a narrow ridge when its orofacial width precludes the placement of dental implants. In 170 people, 329 implants were placed in sites needing ridge enlargement using the endentulous ridge expansion procedure. This technique involves a partial-thickness flap, crestal and vertical intraosseous incisions into the ridge, and buccal displacement of the buccal cortical plate, including a portion of the underiying spongiosa. Implants were placed in the expanded ridge and allowed to heal for 4 to 5 months. When indicated, the implants were exposed during a second-stage surgery to allow visualization of the implant site. Occlusal loading was applied during the following 3 to 5 months by provisional prostheses. The final phase was the placement of the permanent prostheses. The results yielded a success rate of 98.8%.",
"title": ""
},
{
"docid": "e546f81fbdc57765956c22d94c9f54ac",
"text": "Internet technology is revolutionizing education. Teachers are developing massive open online courses (MOOCs) and using innovative practices such as flipped learning in which students watch lectures at home and engage in hands-on, problem solving activities in class. This work seeks to explore the design space afforded by these novel educational paradigms and to develop technology for improving student learning. Our design, based on the technique of adaptive content review, monitors student attention during educational presentations and determines which lecture topic students might benefit the most from reviewing. An evaluation of our technology within the context of an online art history lesson demonstrated that adaptively reviewing lesson content improved student recall abilities 29% over a baseline system and was able to match recall gains achieved by a full lesson review in less time. Our findings offer guidelines for a novel design space in dynamic educational technology that might support both teachers and online tutoring systems.",
"title": ""
},
{
"docid": "76c42d10b008bdcbfd90d6eb238280c9",
"text": "In this paper a review of architectures suitable for nonlinear real-time audio signal processing is presented. The computational and structural complexity of neural networks (NNs) represent in fact, the main drawbacks that can hinder many practical NNs multimedia applications. In particular e,cient neural architectures and their learning algorithm for real-time on-line audio processing are discussed. Moreover, applications in the -elds of (1) audio signal recovery, (2) speech quality enhancement, (3) nonlinear transducer linearization, (4) learning based pseudo-physical sound synthesis, are brie1y presented and discussed. c © 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b31ce7aa527336d10a5ddb2540e9c61c",
"text": "OBJECTIVE\nOptimal mental health care is dependent upon sensitive and early detection of mental health problems. We have introduced a state-of-the-art method for the current study for remote behavioral monitoring that transports assessment out of the clinic and into the environments in which individuals negotiate their daily lives. The objective of this study was to examine whether the information captured with multimodal smartphone sensors can serve as behavioral markers for one's mental health. We hypothesized that (a) unobtrusively collected smartphone sensor data would be associated with individuals' daily levels of stress, and (b) sensor data would be associated with changes in depression, stress, and subjective loneliness over time.\n\n\nMETHOD\nA total of 47 young adults (age range: 19-30 years) were recruited for the study. Individuals were enrolled as a single cohort and participated in the study over a 10-week period. Participants were provided with smartphones embedded with a range of sensors and software that enabled continuous tracking of their geospatial activity (using the Global Positioning System and wireless fidelity), kinesthetic activity (using multiaxial accelerometers), sleep duration (modeled using device-usage data, accelerometer inferences, ambient sound features, and ambient light levels), and time spent proximal to human speech (i.e., speech duration using microphone and speech detection algorithms). Participants completed daily ratings of stress, as well as pre- and postmeasures of depression (Patient Health Questionnaire-9; Spitzer, Kroenke, & Williams, 1999), stress (Perceived Stress Scale; Cohen et al., 1983), and loneliness (Revised UCLA Loneliness Scale; Russell, Peplau, & Cutrona, 1980).\n\n\nRESULTS\nMixed-effects linear modeling showed that sensor-derived geospatial activity (p < .05), sleep duration (p < .05), and variability in geospatial activity (p < .05), were associated with daily stress levels. Penalized functional regression showed associations between changes in depression and sensor-derived speech duration (p < .05), geospatial activity (p < .05), and sleep duration (p < .05). Changes in loneliness were associated with sensor-derived kinesthetic activity (p < .01).\n\n\nCONCLUSIONS AND IMPLICATIONS FOR PRACTICE\nSmartphones can be harnessed as instruments for unobtrusive monitoring of several behavioral indicators of mental health. Creative leveraging of smartphone sensing could provide novel opportunities for close-to-invisible psychiatric assessment at a scale and efficiency that far exceeds what is currently feasible with existing assessment technologies.",
"title": ""
},
{
"docid": "94f94af75b17c0b4a2ad59908e07e462",
"text": "Metric learning has the aim to improve classification accuracy by learning a distance measure which brings data points from the same class closer together and pushes data points from different classes further apart. Recent research has demonstrated that metric learning approaches can also be applied to trees, such as molecular structures, abstract syntax trees of computer programs, or syntax trees of natural language, by learning the cost function of an edit distance, i.e. the costs of replacing, deleting, or inserting nodes in a tree. However, learning such costs directly may yield an edit distance which violates metric axioms, is challenging to interpret, and may not generalize well. In this contribution, we propose a novel metric learning approach for trees which we call embedding edit distance learning (BEDL) and which learns an edit distance indirectly by embedding the tree nodes as vectors, such that the Euclidean distance between those vectors supports class discrimination. We learn such embeddings by reducing the distance to prototypical trees from the same class and increasing the distance to prototypical trees from different classes. In our experiments, we show that BEDL improves upon the state-of-the-art in metric learning for trees on six benchmark data sets, ranging from computer science over biomedical data to a natural-language processing data set containing over 300,000 nodes.",
"title": ""
},
{
"docid": "5547f8ad138a724c2cc05ce65f50ebd2",
"text": "As machine learning (ML) technology continues to spread by rapid evolution, the system or service using Machine Learning technology, called ML product, makes big impact on our life, society and economy. Meanwhile, Quality Assurance (QA) for ML product is quite more difficult than hardware, non-ML software and service because performance of ML technology is much better than non-ML technology in exchange for the characteristics of ML product, e.g. low explainability. We must keep rapid evolution and reduce quality risk of ML product simultaneously. In this paper, we show a Quality Assurance Framework for Machine Learning product. Scope of QA in this paper is limited to product evaluation. First, a policy of QA for ML Product is proposed. General principles of product evaluation is introduced and applied to ML product evaluation as a part of the policy. They are composed of A-ARAI: Allowability, Achievability, Robustness, Avoidability and Improvability. A strategy of ML Product Evaluation is constructed as another part of the policy. Quality Integrity Level for ML product is also modelled. Second, we propose a test architecture of ML product testing. It consists of test levels and fundamental test types of ML product testing, including snapshot testing, learning testing and confrontation testing. Finally, we defines QA activity levels for ML product.",
"title": ""
}
] |
scidocsrr
|
6cb4e14d677a2519baac016b6799858f
|
Impacts of implementing Enterprise Content Management Systems
|
[
{
"docid": "8e6677e03f964984e87530afad29aef3",
"text": "University of Jyväskylä, Department of Computer Science and Information Systems, PO Box 35, FIN-40014, Finland; Agder University College, Department of Information Systems, PO Box 422, 4604, Kristiansand, Norway; University of Toronto, Faculty of Information Studies, 140 St. George Street, Toronto, ON M5S 3G6, Canada; University of Oulu, Department of Information Processing Science, University of Oulu, PO Box 3000, FIN-90014, Finland Abstract Innovations in network technologies in the 1990’s have provided new ways to store and organize information to be shared by people and various information systems. The term Enterprise Content Management (ECM) has been widely adopted by software product vendors and practitioners to refer to technologies used to manage the content of assets like documents, web sites, intranets, and extranets In organizational or inter-organizational contexts. Despite this practical interest ECM has received only little attention in the information systems research community. This editorial argues that ECM provides an important and complex subfield of Information Systems. It provides a framework to stimulate and guide future research, and outlines research issues specific to the field of ECM. European Journal of Information Systems (2006) 15, 627–634. doi:10.1057/palgrave.ejis.3000648",
"title": ""
}
] |
[
{
"docid": "e08e42c8f146e6a74213643e306446c6",
"text": "Disclaimer The opinions and positions expressed in this practice guide are the authors' and do not necessarily represent the opinions and positions of the Institute of Education Sciences or the U.S. Department of Education. This practice guide should be reviewed and applied according to the specific needs of the educators and education agencies using it and with full realization that it represents only one approach that might be taken, based on the research that was available at the time of publication. This practice guide should be used as a tool to assist in decision-making rather than as a \" cookbook. \" Any references within the document to specific education products are illustrative and do not imply endorsement of these products to the exclusion of other products that are not referenced. Alternative Formats On request, this publication can be made available in alternative formats, such as Braille, large print, audiotape, or computer diskette. For more information, call the Alternative Format Center at (202) 205-8113.",
"title": ""
},
{
"docid": "edeefde21bbe1ace9a34a0ebe7bc6864",
"text": "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.",
"title": ""
},
{
"docid": "134ecc62958fa9bb930ff934c5fad7a3",
"text": "We extend our methods from [24] to reprove the Local Langlands Correspondence for GLn over p-adic fields as well as the existence of `-adic Galois representations attached to (most) regular algebraic conjugate self-dual cuspidal automorphic representations, for which we prove a local-global compatibility statement as in the book of Harris-Taylor, [10]. In contrast to the proofs of the Local Langlands Correspondence given by Henniart, [13], and Harris-Taylor, [10], our proof completely by-passes the numerical Local Langlands Correspondence of Henniart, [11]. Instead, we make use of a previous result from [24] describing the inertia-invariant nearby cycles in certain regular situations.",
"title": ""
},
{
"docid": "34f6603912c9775fc48329e596467107",
"text": "Turbo generator with evaporative cooling stator and air cooling rotor possesses many excellent qualities for mid unit. The stator bars and core are immerged in evaporative coolant, which could be cooled fully. The rotor bars are cooled by air inner cooling mode, and the cooling effect compared with hydrogen and water cooling mode is limited. So an effective ventilation system has to been employed to insure the reliability of rotor. This paper presents the comparisons of stator temperature distribution between evaporative cooling mode and air cooling mode, and the designing of rotor ventilation system combined with evaporative cooling stator.",
"title": ""
},
{
"docid": "646097feed29f603724f7ec6b8bbeb8b",
"text": "Online reviews provide valuable information about products and services to consumers. However, spammers are joining the community trying to mislead readers by writing fake reviews. Previous attempts for spammer detection used reviewers' behaviors, text similarity, linguistics features and rating patterns. Those studies are able to identify certain types of spammers, e.g., those who post many similar reviews about one target entity. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like genuine reviewers, and thus cannot be detected by the available techniques. In this paper, we propose a novel concept of a heterogeneous review graph to capture the relationships among reviewers, reviews and stores that the reviewers have reviewed. We explore how interactions between nodes in this graph can reveal the cause of spam and propose an iterative model to identify suspicious reviewers. This is the first time such intricate relationships have been identified for review spam detection. We also develop an effective computation method to quantify the trustiness of reviewers, the honesty of reviews, and the reliability of stores. Different from existing approaches, we don't use review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results.",
"title": ""
},
{
"docid": "a203839d7ec2ca286ac93435aa552159",
"text": "Boxer is a semantic parser for English texts with many input and output possibilities, and various ways to perform meaning analysis based on Discourse Representation Theory. This involves the various ways that meaning representations can be computed, as well as their possible semantic ingredients.",
"title": ""
},
{
"docid": "c6ad38fa33666cf8d28722b9a1127d07",
"text": "Weakly-supervised semantic image segmentation suffers from lacking accurate pixel-level annotations. In this paper, we propose a novel graph convolutional network-based method, called GraphNet, to learn pixel-wise labels from weak annotations. Firstly, we construct a graph on the superpixels of a training image by combining the low-level spatial relation and high-level semantic content. Meanwhile, scribble or bounding box annotations are embedded into the graph, respectively. Then, GraphNet takes the graph as input and learns to predict high-confidence pseudo image masks by a convolutional network operating directly on graphs. At last, a segmentation network is trained supervised by these pseudo image masks. We comprehensively conduct experiments on the PASCAL VOC 2012 and PASCAL-CONTEXT segmentation benchmarks. Experimental results demonstrate that GraphNet is effective to predict the pixel labels with scribble or bounding box annotations. The proposed framework yields state-of-the-art results in the community.",
"title": ""
},
{
"docid": "cc34a912fb5e1fbb2a1b87d1c79ac01f",
"text": "Amyotrophic lateral sclerosis (ALS) is a devastating neurodegenerative disorder characterized by death of motor neurons leading to muscle wasting, paralysis, and death, usually within 2-3 years of symptom onset. The causes of ALS are not completely understood, and the neurodegenerative processes involved in disease progression are diverse and complex. There is substantial evidence implicating oxidative stress as a central mechanism by which motor neuron death occurs, including elevated markers of oxidative damage in ALS patient spinal cord and cerebrospinal fluid and mutations in the antioxidant enzyme superoxide dismutase 1 (SOD1) causing approximately 20% of familial ALS cases. However, the precise mechanism(s) by which mutant SOD1 leads to motor neuron degeneration has not been defined with certainty, and the ultimate trigger for increased oxidative stress in non-SOD1 cases remains unclear. Although some antioxidants have shown potential beneficial effects in animal models, human clinical trials of antioxidant therapies have so far been disappointing. Here, the evidence implicating oxidative stress in ALS pathogenesis is reviewed, along with how oxidative damage triggers or exacerbates other neurodegenerative processes, and we review the trials of a variety of antioxidants as potential therapies for ALS.",
"title": ""
},
{
"docid": "aeb56fbd60165c34c91fa0366c335f7d",
"text": "The advent of technology in the 1990s was seen as having the potential to revolutionise electronic management of student assignments. While there were advantages and disadvantages, the potential was seen as a necessary part of the future of this aspect of academia. A number of studies (including Dalgarno et al in 2006) identified issues that supported positive aspects of electronic assignment management but consistently identified drawbacks, suggesting that the maximum achievable potential for these processes may have been reached. To confirm the perception that the technology and process are indeed ‘marking time’ a further study was undertaken at the University of South Australia (UniSA). This paper deals with the study of online receipt, assessment and feedback of assessment utilizing UniSA technology referred to as AssignIT. The study identified that students prefer a paperless approach to marking however there are concerns with the nature, timing and quality of feedback. Staff have not embraced all of the potential elements of electronic management of assignments, identified Occupational Health Safety and Welfare issues, and tended to drift back to traditional manual marking processes through a lack of understanding or confidence in their ability to properly use the technology.",
"title": ""
},
{
"docid": "5aa8c418b63a3ecb71dc60d4863f35cc",
"text": "Based on the sense definition of words available in the Bengali WordNet, an attempt is made to classify the Bengali sentences automatically into different groups in accordance with their underlying senses. The input sentences are collected from 50 different categories of the Bengali text corpus developed in the TDIL project of the Govt. of India, while information about the different senses of particular ambiguous lexical item is collected from Bengali WordNet. In an experimental basis we have used Naive Bayes probabilistic model as a useful classifier of sentences. We have applied the algorithm over 1747 sentences that contain a particular Bengali lexical item which, because of its ambiguous nature, is able to trigger different senses that render sentences in different meanings. In our experiment we have achieved around 84% accurate result on the sense classification over the total input sentences. We have analyzed those residual sentences that did not comply with our experiment and did affect the results to note that in many cases, wrong syntactic structures and less semantic information are the main hurdles in semantic classification of sentences. The applicational relevance of this study is attested in automatic text classification, machine learning, information extraction, and word sense disambiguation.",
"title": ""
},
{
"docid": "9b470feac9ae4edd11b87921934c9fc2",
"text": "Cutaneous melanoma may in some instances be confused with seborrheic keratosis, which is a very common neoplasia, more often mistaken for actinic keratosis and verruca vulgaris. Melanoma may clinically resemble seborrheic keratosis and should be considered as its possible clinical simulator. We report a case of melanoma with dermatoscopic characteristics of seborrheic keratosis and emphasize the importance of the dermatoscopy algorithm in differentiating between a melanocytic and a non-melanocytic lesion, of the excisional biopsy for the establishment of the diagnosis of cutaneous tumors, and of the histopathologic examination in all surgically removed samples.",
"title": ""
},
{
"docid": "1564a94998151d52785dd0429b4ee77d",
"text": "Location management refers to the problem of updating and searching the current location of mobile nodes in a wireless network. To make it efficient, the sum of update costs of location database must be minimized. Previous work relying on fixed location databases is unable to fully exploit the knowledge of user mobility patterns in the system so as to achieve this minimization. The study presents an intelligent location management approach which has interacts between intelligent information system and knowledge-base technologies, so we can dynamically change the user patterns and reduce the transition between the VLR and HLR. The study provides algorithms are ability to handle location registration and call delivery.",
"title": ""
},
{
"docid": "2343e18c8a36bc7da6357086c10f43d4",
"text": "Sensor networks offer a powerful combination of distributed sensing, computing and communication. They lend themselves to countless applications and, at the same time, offer numerous challenges due to their peculiarities, primarily the stringent energy constraints to which sensing nodes are typically subjected. The distinguishing traits of sensor networks have a direct impact on the hardware design of the nodes at at least four levels: power source, processor, communication hardware, and sensors. Various hardware platforms have already been designed to test the many ideas spawned by the research community and to implement applications to virtually all fields of science and technology. We are convinced that CAS will be able to provide a substantial contribution to the development of this exciting field.",
"title": ""
},
{
"docid": "a56d43bd191147170e1df87878ca1b11",
"text": "Although problem solving is regarded by most educators as among the most important learning outcomes, few instructional design prescriptions are available for designing problem-solving instruction and engaging learners. This paper distinguishes between well-structured problems and ill-structured problems. Well-structured problems are constrained problems with convergent solutions that engage the application of a limited number of rules and principles within welldefined parameters. Ill-structured problems possess multiple solutions, solution paths, fewer parameters which are less manipulable, and contain uncertainty about which concepts, rules, and principles are necessary for the solution or how they are organized and which solution is best. For both types of problems, this paper presents models for how learners solve them and models for designing instruction to support problem-solving skill development. The model for solving wellstructured problems is based on information processing theories of learning, while the model for solving ill-structured problems relies on an emerging theory of ill-structured problem solving and on constructivist and situated cognition approaches to learning. PROBLEM: INSTRUCTIONAL-DESIGN MODELS FOR PROBLEM SOLVING",
"title": ""
},
{
"docid": "a9d4c193693b060f6f2527e92c07e110",
"text": "We introduce a novel method for describing and controlling a 3D smoke simulation. Using harmonic analysis and principal component analysis, we define an underlying description of the fluid flow that is compact and meaningful to non-expert users. The motion of the smoke can be modified with high level tools, such as animated current curves, attractors and tornadoes. Our simulation is controllable, interactive and stable for arbitrarily long periods of time. The simulation's computational cost increases linearly in the number of motion samples and smoke particles. Our adaptive smoke particle representation conveniently incorporates the surface-like characteristics of real smoke.",
"title": ""
},
{
"docid": "3d173f723b4f60e2bb15efe22af5e450",
"text": "Microblogging websites such as twitter and Sina Weibo have attracted many users to share their experiences and express their opinions on a variety of topics. Sentiment classification of microblogging texts is of great significance in analyzing users' opinion on products, persons and hot topics. However, conventional bag-of-words-based sentiment classification methods may meet some problems in processing Chinese microblogging texts because they does not consider semantic meanings of texts. In this paper, we proposed a global RNN-based sentiment method, which use the outputs of all the time-steps as features to extract the global information of texts, for sentiment classification of Chinese microblogging texts and explored different RNN-models. The experiments on two Chinese microblogging datasets show that the proposed method achieves better performance than conventional bag-of-words-based methods.",
"title": ""
},
{
"docid": "1bdf1bfe81bf6f947df2254ae0d34227",
"text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.",
"title": ""
},
{
"docid": "fdd63e1c0027f21af7dea9db9e084b26",
"text": "To bring down the number of traffic accidents and increase people’s mobility companies, such as Robot Engineering Systems (RES) try to put automated vehicles on the road. RES is developing the WEpod, a shuttle capable of autonomously navigating through mixed traffic. This research has been done in cooperation with RES to improve the localization capabilities of the WEpod. The WEpod currently localizes using its GPS and lidar sensors. These have proven to be not accurate and reliable enough to safely navigate through traffic. Therefore, other methods of localization and mapping have been investigated. The primary method investigated in this research is monocular Simultaneous Localization and Mapping (SLAM). Based on literature and practical studies, ORB-SLAM has been chosen as the implementation of SLAM. Unfortunately, ORB-SLAM is unable to initialize the setup when applied on WEpod images. Literature has shown that this problem can be solved by adding depth information to the inputs of ORB-SLAM. Obtaining depth information for the WEpod images is not an arbitrary task. The sensors on the WEpod are not capable of creating the required dense depth-maps. A Convolutional Neural Network (CNN) could be used to create the depth-maps. This research investigates whether adding a depth-estimating CNN solves this initialization problem and increases the tracking accuracy of monocular ORB-SLAM. A well performing CNN is chosen and combined with ORB-SLAM. Images pass through the depth estimating CNN to obtain depth-maps. These depth-maps together with the original images are used in ORB-SLAM, keeping the whole setup monocular. ORB-SLAM with the CNN is first tested on the Kitti dataset. The Kitti dataset is used since monocular ORBSLAM initializes on Kitti images and ground-truth depth-maps can be obtained for Kitti images. Monocular ORB-SLAM’s tracking accuracy has been compared to ORB-SLAM with ground-truth depth-maps and to ORB-SLAM with estimated depth-maps. This comparison shows that adding estimated depth-maps increases the tracking accuracy of ORB-SLAM, but not as much as the ground-truth depth images. The same setup is tested on WEpod images. The CNN is fine-tuned on 7481 Kitti images as well as on 642 WEpod images. The performance on WEpod images of both CNN versions are compared, and used in combination with ORB-SLAM. The CNN fine-tuned on the WEpod images does not perform well, missing details in the estimated depth-maps. However, this is enough to solve the initialization problem of ORB-SLAM. The combination of ORB-SLAM and the Kitti fine-tuned CNN has a better tracking accuracy than ORB-SLAM with the WEpod fine-tuned CNN. It has been shown that the initialization problem on WEpod images is solved as well as the tracking accuracy is increased. These results show that the initialization problem of monocular ORB-SLAM on WEpod images is solved by adding the CNN. This makes it applicable to improve the current localization methods on the WEpod. Using only this setup for localization on the WEpod is not possible yet, more research is necessary. Adding this setup to the current localization methods of the WEpod could increase the localization of the WEpod. This would make it safer for the WEpod to navigate through traffic. This research sets the next step into creating a fully autonomous vehicle which reduces traffic accidents and increases the mobility of people.",
"title": ""
},
{
"docid": "70cad4982e42d44eec890faf6ddc5c75",
"text": "Both translation arrest and proteasome stress associated with accumulation of ubiquitin-conjugated protein aggregates were considered as a cause of delayed neuronal death after transient global brain ischemia; however, exact mechanisms as well as possible relationships are not fully understood. The aim of this study was to compare the effect of chemical ischemia and proteasome stress on cellular stress responses and viability of neuroblastoma SH-SY5Y and glioblastoma T98G cells. Chemical ischemia was induced by transient treatment of the cells with sodium azide in combination with 2-deoxyglucose. Proteasome stress was induced by treatment of the cells with bortezomib. Treatment of SH-SY5Y cells with sodium azide/2-deoxyglucose for 15 min was associated with cell death observed 24 h after treatment, while glioblastoma T98G cells were resistant to the same treatment. Treatment of both SH-SY5Y and T98G cells with bortezomib was associated with cell death, accumulation of ubiquitin-conjugated proteins, and increased expression of Hsp70. These typical cellular responses to proteasome stress, observed also after transient global brain ischemia, were not observed after chemical ischemia. Finally, chemical ischemia, but not proteasome stress, was in SH-SY5Y cells associated with increased phosphorylation of eIF2α, another typical cellular response triggered after transient global brain ischemia. Our results showed that short chemical ischemia of SH-SY5Y cells is not sufficient to induce both proteasome stress associated with accumulation of ubiquitin-conjugated proteins and stress response at the level of heat shock proteins despite induction of cell death and eIF2α phosphorylation.",
"title": ""
},
{
"docid": "5f68e7d03c48d842add703ce0492c453",
"text": "This paper presents a summary of the available single-phase ac-dc topologies used for EV/PHEV, level-1 and -2 on-board charging and for providing reactive power support to the utility grid. It presents the design motives of single-phase on-board chargers in detail and makes a classification of the chargers based on their future vehicle-to-grid usage. The pros and cons of each different ac-dc topology are discussed to shed light on their suitability for reactive power support. This paper also presents and analyzes the differences between charging-only operation and capacitive reactive power operation that results in increased demand from the dc-link capacitor (more charge/discharge cycles and increased second harmonic ripple current). Moreover, battery state of charge is spared from losses during reactive power operation, but converter output power must be limited below its rated power rating to have the same stress on the dc-link capacitor.",
"title": ""
}
] |
scidocsrr
|
0a7dd51ff1a23ab6f28c0dcf7963e1eb
|
Radio Frequency Time-of-Flight Distance Measurement for Low-Cost Wireless Sensor Localization
|
[
{
"docid": "df09834abe25199ac7b3205d657fffb2",
"text": "In modern wireless communications products it is required to incorporate more and more different functions to comply with current market trends. A very attractive function with steadily growing market penetration is local positioning. To add this feature to low-cost mass-market devices without additional power consumption, it is desirable to use commercial communication chips and standards for localization of the wireless units. In this paper we present a concept to measure the distance between two IEEE 802.15.4 (ZigBee) compliant devices. The presented prototype hardware consists of a low- cost 2.45 GHz ZigBee chipset. For localization we use standard communication packets as transmit signals. Thus simultaneous data transmission and transponder localization is feasible. To achieve high positioning accuracy even in multipath environments, a coherent synthesis of measurements in multiple channels and a special signal phase evaluation concept is applied. With this technique the full available ISM bandwidth of 80 MHz is utilized. In first measurements with two different frequency references-a low-cost oscillator and a temperatur-compensated crystal oscillator-a positioning bias error of below 16 cm and 9 cm was obtained. The standard deviation was less than 3 cm and 1 cm, respectively. It is demonstrated that compared to signal correlation in time, the phase processing technique yields an accuracy improvement of roughly an order of magnitude.",
"title": ""
}
] |
[
{
"docid": "ff572d9c74252a70a48d4ba377f941ae",
"text": "This paper considers how design fictions in the form of 'imaginary abstracts' can be extended into complete 'fictional papers'. Imaginary abstracts are a type of design fiction that are usually included within the content of 'real' research papers, they comprise brief accounts of fictional problem frames, prototypes, user studies and findings. Design fiction abstracts have been proposed as a means to move beyond solutionism to explore the potential societal value and consequences of new HCI concepts. In this paper we contrast the properties of imaginary abstracts, with the properties of a published paper that presents fictional research, Game of Drones. Extending the notion of imaginary abstracts so that rather than including fictional abstracts within a 'non-fiction' research paper, Game of Drones is fiction from start to finish (except for the concluding paragraph where the fictional nature of the paper is revealed). In this paper we review the scope of design fiction in HCI research before contrasting the properties of imaginary abstracts with the properties of our example fictional research paper. We argue that there are clear merits and weaknesses to both approaches, but when used tactfully and carefully fictional research papers may further empower HCI's burgeoning design discourse with compelling new methods.",
"title": ""
},
{
"docid": "5ac63b0be4561f126c90b65e834e1d14",
"text": "Conventional security exploits have relied on overwriting the saved return pointer on the stack to hijack the path of execution. Under Sun Microsystem’s Sparc processor architecture, we were able to implement a kernel modification to transparently and automatically guard applications’ return pointers. Our implementation called StackGhost under OpenBSD 2.8 acts as a ghost in the machine. StackGhost advances exploit prevention in that it protects every application run on the system without their knowledge nor does it require their source or binary modification. We will document several of the methods devised to preserve the sanctity of the system and will explore the performance ramifications of StackGhost.",
"title": ""
},
{
"docid": "ad59ca3f7c945142baf9353eeb68e504",
"text": "This essay considers dynamic security design and corporate financing, with particular emphasis on informational micro-foundations. The central idea is that firm insiders must retain an appropriate share of firm risk, either to align their incentives with those of outside investors (moral hazard) or to signal favorable information about the quality of the firm’s assets. Informational problems lead to inevitable inefficiencies imperfect risk sharing, the possibility of bankruptcy, investment distortions, etc. The design of contracts that minimize these inefficiencies is a central question. This essay explores the implications of dynamic security design on firm operations and asset prices.",
"title": ""
},
{
"docid": "e0c3dfd45d422121e203955979e23719",
"text": "Machine Learning (ML) models are applied in a variety of tasks such as network intrusion detection or malware classification. Yet, these models are vulnerable to a class of malicious inputs known as adversarial examples. These are slightly perturbed inputs that are classified incorrectly by the ML model. The mitigation of these adversarial inputs remains an open problem. As a step towards a model-agnostic defense against adversarial examples, we show that they are not drawn from the same distribution than the original data, and can thus be detected using statistical tests. As the number of malicious points included in samples presented to the test diminishes, its detection confidence decreases. Hence, we introduce a complimentary approach to identify specific inputs that are adversarial among sets of inputs flagged by the statistical test. Specifically, we augment our ML model with an additional output, in which the model is trained to classify all adversarial inputs. We evaluate our approach on multiple adversarial example crafting methods (including the fast gradient sign and Jacobian-based saliency map methods) with several datasets. The statistical test flags sample sets containing adversarial inputs with confidence above 80%. Furthermore, our augmented model either detects adversarial examples with high accuracy (> 80%) or increases the adversary’s cost—the perturbation added—by more than 150%. In this way, we show that statistical properties of adversarial examples are essential to their detection.",
"title": ""
},
{
"docid": "7091deeeea31ed1e2e8fba821e85db6e",
"text": "Protein folding is a complex process that can lead to disease when it fails. Especially poorly understood are the very early stages of protein folding, which are likely defined by intrinsic local interactions between amino acids close to each other in the protein sequence. We here present EFoldMine, a method that predicts, from the primary amino acid sequence of a protein, which amino acids are likely involved in early folding events. The method is based on early folding data from hydrogen deuterium exchange (HDX) data from NMR pulsed labelling experiments, and uses backbone and sidechain dynamics as well as secondary structure propensities as features. The EFoldMine predictions give insights into the folding process, as illustrated by a qualitative comparison with independent experimental observations. Furthermore, on a quantitative proteome scale, the predicted early folding residues tend to become the residues that interact the most in the folded structure, and they are often residues that display evolutionary covariation. The connection of the EFoldMine predictions with both folding pathway data and the folded protein structure suggests that the initial statistical behavior of the protein chain with respect to local structure formation has a lasting effect on its subsequent states.",
"title": ""
},
{
"docid": "d5665efd0e4a91e9be4c84fecd5fd4ad",
"text": "Hardware accelerators are being increasingly deployed to boost the performance and energy efficiency of deep neural network (DNN) inference. In this paper we propose Thundervolt, a new framework that enables aggressive voltage underscaling of high-performance DNN accelerators without compromising classification accuracy even in the presence of high timing error rates. Using post-synthesis timing simulations of a DNN acceleratormodeled on theGoogle TPU,we show that Thundervolt enables between 34%-57% energy savings on stateof-the-art speech and image recognition benchmarks with less than 1% loss in classification accuracy and no performance loss. Further, we show that Thundervolt is synergistic with and can further increase the energy efficiency of commonly used run-timeDNNpruning techniques like Zero-Skip.",
"title": ""
},
{
"docid": "06f421d0f63b9dc08777c573840654d5",
"text": "This paper presents the implementation of a modified state observer-based adaptive dynamic inverse controller for the Black Kite micro aerial vehicle. The pitch and velocity adaptations are computed by the modified state observer in the presence of turbulence to simulate atmospheric conditions. This state observer uses the estimation error to generate the adaptations and, hence, is more robust than model reference adaptive controllers which use modeling or tracking error. In prior work, a traditional proportional-integral-derivative control law was tested in simulation for its adaptive capability in the longitudinal dynamics of the Black Kite micro aerial vehicle. This controller tracks the altitude and velocity commands during normal conditions, but fails in the presence of both parameter uncertainties and system failures. The modified state observer-based adaptations, along with the proportional-integral-derivative controller enables tracking despite these conditions. To simulate flight of the micro aerial vehicle with turbulence, a Dryden turbulence model is included. The turbulence levels used are based on the absolute load factor experienced by the aircraft. The length scale was set to 2.0 meters with a turbulence intensity of 5.0 m/s that generates a moderate turbulence. Simulation results for various flight conditions show that the modified state observer-based adaptations were able to adapt to the uncertainties and the controller tracks the commanded altitude and velocity. The summary of results for all of the simulated test cases and the response plots of various states for typical flight cases are presented.",
"title": ""
},
{
"docid": "93810dab9ff258d6e11edaffa1e4a0ff",
"text": "Ishaq, O. 2016. Image Analysis and Deep Learning for Applications in Microscopy. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1371. 76 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9567-1. Quantitative microscopy deals with the extraction of quantitative measurements from samples observed under a microscope. Recent developments in microscopy systems, sample preparation and handling techniques have enabled high throughput biological experiments resulting in large amounts of image data, at biological scales ranging from subcellular structures such as fluorescently tagged nucleic acid sequences to whole organisms such as zebrafish embryos. Consequently, methods and algorithms for automated quantitative analysis of these images have become increasingly important. These methods range from traditional image analysis techniques to use of deep learning architectures. Many biomedical microscopy assays result in fluorescent spots. Robust detection and precise localization of these spots are two important, albeit sometimes overlapping, areas for application of quantitative image analysis. We demonstrate the use of popular deep learning architectures for spot detection and compare them against more traditional parametric model-based approaches. Moreover, we quantify the effect of pre-training and change in the size of training sets on detection performance. Thereafter, we determine the potential of training deep networks on synthetic and semi-synthetic datasets and their comparison with networks trained on manually annotated real data. In addition, we present a two-alternative forced-choice based tool for assisting in manual annotation of real image data. On a spot localization track, we parallelize a popular compressed sensing based localization method and evaluate its performance in conjunction with different optimizers, noise conditions and spot densities. We investigate its sensitivity to different point spread function estimates. Zebrafish is an important model organism, attractive for whole-organism image-based assays for drug discovery campaigns. The effect of drug-induced neuronal damage may be expressed in the form of zebrafish shape deformation. First, we present an automated method for accurate quantification of tail deformations in multi-fish micro-plate wells using image analysis techniques such as illumination correction, segmentation, generation of branch-free skeletons of partial tail-segments and their fusion to generate complete tails. Later, we demonstrate the use of a deep learning-based pipeline for classifying micro-plate wells as either drug-affected or negative controls, resulting in competitive performance, and compare the performance from deep learning against that from traditional image analysis approaches.",
"title": ""
},
{
"docid": "149ffd270f39a330f4896c7d3aa290be",
"text": "The pathogenesis underlining many neurodegenerative diseases remains incompletely understood. The lack of effective biomarkers and disease preventative medicine demands the development of new techniques to efficiently probe the mechanisms of disease and to detect early biomarkers predictive of disease onset. Raman spectroscopy is an established technique that allows the label-free fingerprinting and imaging of molecules based on their chemical constitution and structure. While analysis of isolated biological molecules has been widespread in the chemical community, applications of Raman spectroscopy to study clinically relevant biological species, disease pathogenesis, and diagnosis have been rapidly increasing since the past decade. The growing number of biomedical applications has shown the potential of Raman spectroscopy for detection of novel biomarkers that could enable the rapid and accurate screening of disease susceptibility and onset. Here we provide an overview of Raman spectroscopy and related techniques and their application to neurodegenerative diseases. We further discuss their potential utility in research, biomarker detection, and diagnosis. Challenges to routine use of Raman spectroscopy in the context of neuroscience research are also presented.",
"title": ""
},
{
"docid": "7d0d68f2dd9e09540cb2ba71646c21d2",
"text": "INTRODUCTION: Back in time dentists used to place implants in locations with sufficient bone-dimensions only, with less regard to placement of final definitive restoration but most of the times, the placement of implant is not as accurate as intended and even a minor variation in comparison to ideal placement causes difficulties in fabrication of final prosthesis. The use of bone substitutes and membranes is now one of the standard therapeutic approaches. In order to accelerate healing of bone graft over the bony defect, numerous techniques utilizing platelet and fibrinogen concentrates have been introduced in the literature.. OBJECTIVES: This study was designed to evaluate the efficacy of using Autologous Concentrated Growth Factors (CGF) Enriched Bone Graft Matrix (Sticky Bone) and CGF-Enriched Fibrin Membrane in management of dehiscence defect around dental implant in narrow maxillary anterior ridge. MATERIALS AND METHODS: Eleven DIO implants were inserted in six adult patients presenting an upper alveolar ridge width of less than 4mm determined by cone beam computed tomogeraphy (CBCT). After implant placement, the resultant vertical labial dehiscence defect was augmented utilizing Sticky Bone and CGF-Enriched Fibrin Membrane. Three CBCTs were made, pre-operatively, immediately postoperatively and six-months post-operatively. The change in vertical defect size was calculated radiographically then statistically analyzed. RESULTS: Vertical dehiscence defect was sufficiently recovered in 5 implant-sites while in the other 6 sites it was decreased to mean value of 1.25 mm ± 0.69 SD, i.e the defect coverage in 6 implants occurred with mean value of 4.59 mm ±0.49 SD. Also the results of the present study showed that the mean of average implant stability was 59.89 mm ± 3.92 CONCLUSIONS: The combination of PRF mixed with CGF with bone graft (allograft) can increase the quality (density) of the newly formed bone and enhance the rate of new bone formation.",
"title": ""
},
{
"docid": "b045350bfb820634046bff907419d1bf",
"text": "Action recognition and human pose estimation are closely related but both problems are generally handled as distinct tasks in the literature. In this work, we propose a multitask framework for jointly 2D and 3D pose estimation from still images and human action recognition from video sequences. We show that a single architecture can be used to solve the two problems in an efficient way and still achieves state-of-the-art results. Additionally, we demonstrate that optimization from end-to-end leads to significantly higher accuracy than separated learning. The proposed architecture can be trained with data from different categories simultaneously in a seamlessly way. The reported results on four datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the effectiveness of our method on the targeted tasks.",
"title": ""
},
{
"docid": "a7dff1f19690e31f90e0fa4a85db5d97",
"text": "This paper presents BOOM version 2, an updated version of the Berkeley Out-of-Order Machine first presented in [3]. The design exploration was performed through synthesis, place and route using the foundry-provided standard-cell library and the memory compiler in the TSMC 28 nm HPM process (high performance mobile). BOOM is an open-source processor that implements the RV64G RISC-V Instruction Set Architecture (ISA). Like most contemporary high-performance cores, BOOM is superscalar (able to execute multiple instructions per cycle) and out-oforder (able to execute instructions as their dependencies are resolved and not restricted to their program order). BOOM is implemented as a parameterizable generator written using the Chisel hardware construction language [2] that can used to generate synthesizable implementations targeting both FPGAs and ASICs. BOOMv2 is an update in which the design effort has been informed by analysis of synthesized, placed and routed data provided by a contemporary industrial tool flow. We also had access to standard singleand dual-ported memory compilers provided by the foundry, allowing us to explore design trade-offs using different SRAM memories and comparing against synthesized flip-flop arrays. The main distinguishing features of BOOMv2 include an updated 3-stage front-end design with a bigger set-associative Branch Target Buffer (BTB); a pipelined register rename stage; split floating point and integer register files; a dedicated floating point pipeline; separate issue windows for floating point, integer, and memory micro-operations; and separate stages for issue-select and register read. Managing the complexity of the register file was the largest obstacle to improving BOOM’s clock frequency. We spent considerable effort on placing-and-routing a semi-custom 9port register file to explore the potential improvements over a fully synthesized design, in conjunction with microarchitectural techniques to reduce the size and port count of the register file. BOOMv2 has a 37 fanout-of-four (FO4) inverter delay after synthesis and 50 FO4 after place-and-route, a 24% reduction from BOOMv1’s 65 FO4 after place-and-route. Unfortunately, instruction per cycle (IPC) performance drops up to 20%, mostly due to the extra latency between load instructions and dependent instructions. However, the new BOOMv2 physical design paves the way for IPC recovery later. BOOMv1-2f3i int/idiv/fdiv",
"title": ""
},
{
"docid": "66108bc186971cc1f69a20e7b7e0283f",
"text": "Mining frequent itemsets and association rules is a popular and well researched approach for discovering interesting relationships between variables in large databases. The R package arules presented in this paper provides a basic infrastructure for creating and manipulating input data sets and for analyzing the resulting itemsets and rules. The package also includes interfaces to two fast mining algorithms, the popular C implementations of Apriori and Eclat by Christian Borgelt. These algorithms can be used to mine frequent itemsets, maximal frequent itemsets, closed frequent itemsets and association rules.",
"title": ""
},
{
"docid": "f78fcf875104f8bab2fa465c414331c6",
"text": "In this paper, we present a systematic framework for recognizing realistic actions from videos “in the wild”. Such unconstrained videos are abundant in personal collections as well as on the Web. Recognizing action from such videos has not been addressed extensively, primarily due to the tremendous variations that result from camera motion, background clutter, changes in object appearance, and scale, etc. The main challenge is how to extract reliable and informative features from the unconstrained videos. We extract both motion and static features from the videos. Since the raw features of both types are dense yet noisy, we propose strategies to prune these features. We use motion statistics to acquire stable motion features and clean static features. Furthermore, PageRank is used to mine the most informative static features. In order to further construct compact yet discriminative visual vocabularies, a divisive information-theoretic algorithm is employed to group semantically related features. Finally, AdaBoost is chosen to integrate all the heterogeneous yet complementary features for recognition. We have tested the framework on the KTH dataset and our own dataset consisting of 11 categories of actions collected from YouTube and personal videos, and have obtained impressive results for action recognition and action localization.",
"title": ""
},
{
"docid": "53d1ddf4809ab735aa61f4059a1a38b1",
"text": "In this paper we present a wearable Haptic Feedback Device to convey intuitive motion direction to the user through haptic feedback based on vibrotactile illusions. Vibrotactile illusions occur on the skin when two or more vibrotactile actuators in proximity are actuated in coordinated sequence, causing the user to feel combined sensations, instead of separate ones. By combining these illusions we can produce various sensation patterns that are discernible by the user, thus allowing to convey different information with each pattern. A method to provide information about direction through vibrotactile illusions is introduced on this paper. This method uses a grid of vibrotactile actuators around the arm actuated in coordination. The sensation felt on the skin is consistent with the desired direction of motion, so the desired motion can be intuitively understood. We show that the users can recognize the conveyed direction, and implemented a proof of concept of the proposed method to guide users' elbow flexion/extension motion.",
"title": ""
},
{
"docid": "994f37328a1e27290af874769d41c5e7",
"text": "In the article by Powers et al, “2018 Guidelines for the Early Management of Patients With Acute Ischemic Stroke: A Guideline for Healthcare Professionals From the American Heart Association/American Stroke Association,” which published ahead of print January 24, 2018, and appeared in the March 2018 issue of the journal (Stroke. 2018;49:e46–e110. DOI: 10.1161/ STR.0000000000000158), a few corrections were needed. 1. On page e46, the text above the byline read: Reviewed for evidence-based integrity and endorsed by the American Association of Neurological Surgeons and Congress of Neurological Surgeons Endorsed by the Society for Academic Emergency Medicine It has been updated to read: Reviewed for evidence-based integrity and endorsed by the American Association of Neurological Surgeons and Congress of Neurological Surgeons Endorsed by the Society for Academic Emergency Medicine and Neurocritical Care Society The American Academy of Neurology affirms the value of this guideline as an educational tool for neurologists. 2. On page e60, in the section “2.2. Brain Imaging,” in the knowledge byte text below recommendation 12: • The seventh sentence read, “Therefore, only the eligibility criteria from these trials should be used for patient selection.” It has been updated to read, “Therefore, only the eligibility criteria from one or the other of these trials should be used for patient selection.” • The eighth sentence read, “...at this time, the DAWN and DEFUSE 3 eligibility should be strictly adhered to in clinical practice.” It has been updated to read, “...at this time, the DAWN or DEFUSE 3 eligibility should be strictly adhered to in clinical practice.” 3. On page e73, in the section “3.7. Mechanical Thrombectomy,” recommendation 8 read, “In selected patients with AIS within 6 to 24 hours....” It has been updated to read, “In selected patients with AIS within 16 to 24 hours....” 4. On page e73, in the section “3.7. Mechanical Thrombectomy,” in the knowledge byte text below recommendation 8: • The seventh sentence read, “Therefore, only the eligibility criteria from these trials should be used for patient selection.” It has been updated to read, “Therefore, only the eligibility criteria from one or the other of these trials should be used for patient selection.” • The eighth sentence read, “...at this time, the DAWN and DEFUSE-3 eligibility should be strictly adhered to in clinical practice.” It has been updated to read, “...at this time, the DAWN or DEFUSE-3 eligibility should be strictly adhered to in clinical practice.” 5. On page e76, in the section “3.10. Anticoagulants,” in the knowledge byte text below recommendation 1, the third sentence read, “...(LMWH, 64.2% versus aspirin, 6.52%; P=0.33).” It has been updated to read, “...(LMWH, 64.2% versus aspirin, 62.5%; P=0.33).” These corrections have been made to the current online version of the article, which is available at http://stroke.ahajournals.org/lookup/doi/10.1161/STR.0000000000000158. Correction",
"title": ""
},
{
"docid": "e67b9b48507dcabae92debdb9df9cb08",
"text": "This paper presents an annotation scheme for events that negatively or positively affect entities (benefactive/malefactive events) and for the attitude of the writer toward their agents and objects. Work on opinion and sentiment tends to focus on explicit expressions of opinions. However, many attitudes are conveyed implicitly, and benefactive/malefactive events are important for inferring implicit attitudes. We describe an annotation scheme and give the results of an inter-annotator agreement study. The annotated corpus is available online.",
"title": ""
},
{
"docid": "4b4306cddcbf62a93dab81676e2b4461",
"text": "The use of drones in agriculture is becoming more and more popular. The paper presents a novel approach to distinguish between different field's plowing techniques by means of an RGB-D sensor. The presented system can be easily integrated in commercially available Unmanned Aerial Vehicles (UAVs). In order to successfully classify the plowing techniques, two different measurement algorithms have been developed. Experimental tests show that the proposed methodology is able to provide a good classification of the field's plowing depths.",
"title": ""
},
{
"docid": "e35669db2d6c016cf71107eb00db820d",
"text": "Mobile payments will gain significant traction in the coming years as the mobile and payment technologies mature and become widely available. Various technologies are competing to become the established standards for physical and virtual mobile payments, yet it is ultimately the users who will determine the level of success of the technologies through their adoption. Only if it becomes easier and cheaper to transact business using mobile payment applications than by using conventional methods will they become popular, either with users or providers. This document is a state of the art review of mobile payment technologies. It covers all of the technologies involved in a mobile payment solution, including mobile networks in section 2, mobile services in section 3, mobile platforms in section 4, mobile commerce in section 5 and different mobile payment solutions in sections 6 to 8.",
"title": ""
},
{
"docid": "f75ae6fedddde345109d33499853256d",
"text": "Deaths due to prescription and illicit opioid overdose have been rising at an alarming rate, particularly in the USA. Although naloxone injection is a safe and effective treatment for opioid overdose, it is frequently unavailable in a timely manner due to legal and practical restrictions on its use by laypeople. As a result, an effort spanning decades has resulted in the development of strategies to make naloxone available for layperson or \"take-home\" use. This has included the development of naloxone formulations that are easier to administer for nonmedical users, such as intranasal and autoinjector intramuscular delivery systems, efforts to distribute naloxone to potentially high-impact categories of nonmedical users, as well as efforts to reduce regulatory barriers to more widespread distribution and use. Here we review the historical and current literature on the efficacy and safety of naloxone for use by nonmedical persons, provide an evidence-based discussion of the controversies regarding the safety and efficacy of different formulations of take-home naloxone, and assess the status of current efforts to increase its public distribution. Take-home naloxone is safe and effective for the treatment of opioid overdose when administered by laypeople in a community setting, shortening the time to reversal of opioid toxicity and reducing opioid-related deaths. Complementary strategies have together shown promise for increased dissemination of take-home naloxone, including 1) provision of education and training; 2) distribution to critical populations such as persons with opioid addiction, family members, and first responders; 3) reduction of prescribing barriers to access; and 4) reduction of legal recrimination fears as barriers to use. Although there has been considerable progress in decreasing the regulatory and legal barriers to effective implementation of community naloxone programs, significant barriers still exist, and much work remains to be done to integrate these programs into efforts to provide effective treatment of opioid use disorders.",
"title": ""
}
] |
scidocsrr
|
be8810dc31c4b77df6092a2b3d52911e
|
YAMAMA: Yet Another Multi-Dialect Arabic Morphological Analyzer
|
[
{
"docid": "4292a60a5f76fd3e794ce67d2ed6bde3",
"text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.",
"title": ""
},
{
"docid": "aafda1cab832f1fe92ce406676e3760f",
"text": "In this paper, we present MADAMIRA, a system for morphological analysis and disambiguation of Arabic that combines some of the best aspects of two previously commonly used systems for Arabic processing, MADA (Habash and Rambow, 2005; Habash et al., 2009; Habash et al., 2013) and AMIRA (Diab et al., 2007). MADAMIRA improves upon the two systems with a more streamlined Java implementation that is more robust, portable, extensible, and is faster than its ancestors by more than an order of magnitude. We also discuss an online demo (see http://nlp.ldeo.columbia.edu/madamira/) that highlights these aspects.",
"title": ""
}
] |
[
{
"docid": "530ef3f5d2f7cb5cc93243e2feb12b8e",
"text": "Online personal health record (PHR) enables patients to manage their own medical records in a centralized way, which greatly facilitates the storage, access and sharing of personal health data. With the emergence of cloud computing, it is attractive for the PHR service providers to shift their PHR applications and storage into the cloud, in order to enjoy the elastic resources and reduce the operational cost. However, by storing PHRs in the cloud, the patients lose physical control to their personal health data, which makes it necessary for each patient to encrypt her PHR data before uploading to the cloud servers. Under encryption, it is challenging to achieve fine-grained access control to PHR data in a scalable and efficient way. For each patient, the PHR data should be encrypted so that it is scalable with the number of users having access. Also, since there are multiple owners (patients) in a PHR system and every owner would encrypt her PHR files using a different set of cryptographic keys, it is important to reduce the key distribution complexity in such multi-owner settings. Existing cryptographic enforced access control schemes are mostly designed for the single-owner scenarios. In this paper, we propose a novel framework for access control to PHRs within cloud computing environment. To enable fine-grained and scalable access control for PHRs, we leverage attribute based encryption (ABE) techniques to encrypt each patients’ PHR data. To reduce the key distribution complexity, we divide the system into multiple security domains, where each domain manages only a subset of the users. In this way, each patient has full control over her own privacy, and the key management complexity is reduced dramatically. Our proposed scheme is also flexible, in that it supports efficient and on-demand revocation of user access rights, and break-glass access under emergency scenarios.",
"title": ""
},
{
"docid": "d452700b9c919ba62156beecb0d50b91",
"text": "In this paper we propose a solution to the problem of body part segmentation in noisy silhouette images. In developing this solution we revisit the issue of insufficient labeled training data, by investigating how synthetically generated data can be used to train general statistical models for shape classification. In our proposed solution we produce sequences of synthetically generated images, using three dimensional rendering and motion capture information. Each image in these sequences is labeled automatically as it is generated and this labeling is based on the hand labeling of a single initial image.We use shape context features and Hidden Markov Models trained based on this labeled synthetic data. This model is then used to segment silhouettes into four body parts; arms, legs, body and head. Importantly, in all the experiments we conducted the same model is employed with no modification of any parameters after initial training.",
"title": ""
},
{
"docid": "f5d04dd0fe3e717bbbab23eb8330109c",
"text": "Unmanned Aerial Vehicle (UAV) surveillance systems allow for highly advanced and safe surveillance of hazardous locations. Further, multi-purpose drones can be widely deployed for not only gathering information but also analyzing the situation from sensed data. However, mobile drone systems have limited computing resources and battery power which makes it a challenge to use these systems for long periods of time or in fully autonomous modes. In this paper, we propose an Adaptive Computation Offloading Drone System (ACODS) architecture with reliable communication for increasing drone operating time. We design not only the response time prediction module for mission critical task offloading decision but also task offloading management module via the Multipath TCP (MPTCP). Through performance evaluation via our prototype implementation, we show that the proposed algorithm achieves significant increase in drone operation time and significantly reduces the response time.",
"title": ""
},
{
"docid": "b3962fd4000fced796f3764d009c929e",
"text": "Low-field extremity magnetic resonance imaging (lfMRI) is currently commercially available and has been used clinically to evaluate rheumatoid arthritis (RA). However, one disadvantage of this new modality is that the field of view (FOV) is too small to assess hand and wrist joints simultaneously. Thus, we have developed a new lfMRI system, compacTscan, with a FOV that is large enough to simultaneously assess the entire wrist to proximal interphalangeal joint area. In this work, we examined its clinical value compared to conventional 1.5 tesla (T) MRI. The comparison involved evaluating three RA patients by both 0.3 T compacTscan and 1.5 T MRI on the same day. Bone erosion, bone edema, and synovitis were estimated by our new compact MRI scoring system (cMRIS) and the kappa coefficient was calculated on a joint-by-joint basis. We evaluated a total of 69 regions. Bone erosion was detected in 49 regions by compacTscan and in 48 regions by 1.5 T MRI, while the total erosion score was 77 for compacTscan and 76.5 for 1.5 T MRI. These findings point to excellent agreement between the two techniques (kappa = 0.833). Bone edema was detected in 14 regions by compacTscan and in 19 by 1.5 T MRI, and the total edema score was 36.25 by compacTscan and 47.5 by 1.5 T MRI. Pseudo-negative findings were noted in 5 regions. However, there was still good agreement between the techniques (kappa = 0.640). Total number of evaluated joints was 33. Synovitis was detected in 13 joints by compacTscan and 14 joints by 1.5 T MRI, while the total synovitis score was 30 by compacTscan and 32 by 1.5 T MRI. Thus, although 1 pseudo-positive and 2 pseudo-negative findings resulted from the joint evaluations, there was again excellent agreement between the techniques (kappa = 0.827). Overall, the data obtained by our compacTscan system showed high agreement with those obtained by conventional 1.5 T MRI with regard to diagnosis and the scoring of bone erosion, edema, and synovitis. We conclude that compacTscan is useful for diagnosis and estimation of disease activity in patients with RA.",
"title": ""
},
{
"docid": "d54e33049b3f5170ec8bd09d8f17c05c",
"text": "Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. The objective is to make these higherlevel representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P (x) is structurally related to some task of interest, say predicting P (y|x). This paper focusses on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.",
"title": ""
},
{
"docid": "8433df9d46df33f1389c270a8f48195d",
"text": "BACKGROUND\nFingertip injuries involve varying degree of fractures of the distal phalanx and nail bed or nail plate disruptions. The treatment modalities recommended for these injuries include fracture fixation with K-wire and meticulous repair of nail bed after nail removal and later repositioning of nail or stent substitute into the nail fold by various methods. This study was undertaken to evaluate the functional outcome of vertical figure-of-eight tension band suture for finger nail disruptions with fractures of distal phalanx.\n\n\nMATERIALS AND METHODS\nA series of 40 patients aged between 4 and 58 years, with 43 fingernail disruptions and fracture of distal phalanges, were treated with vertical figure-of-eight tension band sutures without formal fixation of fracture fragments and the results were reviewed. In this method, the injuries were treated by thoroughly cleaning the wound, reducing the fracture fragments, anatomical replacement of nail plate, and securing it by vertical figure-of-eight tension band suture.\n\n\nRESULTS\nAll patients were followed up for a minimum of 3 months. The clinical evaluation of the patients was based on radiological fracture union and painless pinch to determine fingertip stability. Every single fracture united and every fingertip was clinically stable at the time of final followup. We also evaluated our results based on visual analogue scale for pain and range of motion of distal interphalangeal joint. Two sutures had to be revised due to over tensioning and subsequent vascular compromise within minutes of repair; however, this did not affect the final outcome.\n\n\nCONCLUSION\nThis technique is simple, secure, and easily reproducible. It neither requires formal repair of injured nail bed structures nor fixation of distal phalangeal fracture and results in uncomplicated reformation of nail plate and uneventful healing of distal phalangeal fractures.",
"title": ""
},
{
"docid": "175f82940aa18fe390d1ef03835de8cc",
"text": "We address personalization issues of image captioning, which have not been discussed yet in previous research. For a query image, we aim to generate a descriptive sentence, accounting for prior knowledge such as the users active vocabularies in previous documents. As applications of personalized image captioning, we tackle two post automation tasks: hashtag prediction and post generation, on our newly collected Instagram dataset, consisting of 1.1M posts from 6.3K users. We propose a novel captioning model named Context Sequence Memory Network (CSMN). Its unique updates over previous memory network models include (i) exploiting memory as a repository for multiple types of context information, (ii) appending previously generated words into memory to capture long-term information without suffering from the vanishing gradient problem, and (iii) adopting CNN memory structure to jointly represent nearby ordered memory slots for better context understanding. With quantitative evaluation and user studies via Amazon Mechanical Turk, we show the effectiveness of the three novel features of CSMN and its performance enhancement for personalized image captioning over state-of-the-art captioning models.",
"title": ""
},
{
"docid": "ee80447709188fab5debfcf9b50a9dcb",
"text": "Prior research by Kornell and Bjork (2007) and Hartwig and Dunlosky (2012) has demonstrated that college students tend to employ study strategies that are far from optimal. We examined whether individuals in the broader—and typically older—population might hold different beliefs about how best to study and learn, given their more extensive experience outside of formal coursework and deadlines. Via a web-based survey, however, we found striking similarities: Learners’ study decisions tend to be driven by deadlines, and the benefits of activities such as self-testing and reviewing studied materials are elf-regulated learning etacognition indset tudy strategies mostly unappreciated. We also found evidence, however, that one’s mindset with respect to intelligence is related to one’s habits and beliefs: Individuals who believe that intelligence can be increased through effort were more likely to value the pedagogical benefits of self-testing, to restudy, and to be intrinsically motivated to learn, compared to individuals who believe that intelligence is fixed. © 2014 Society for Applied Research in Memory and Cognition. Published by Elsevier Inc. All rights With the world’s knowledge at our fingertips, there are increasng opportunities to learn on our own, not only during the years f formal education, but also across our lifespan as our careers, obbies, and interests change. The rapid pace of technological hange has also made such self-directed learning necessary: the bility to effectively self-regulate one’s learning—monitoring one’s wn learning and implementing beneficial study strategies—is, rguably, more important than ever before. Decades of research have revealed the efficacy of various study trategies (see Dunlosky, Rawson, Marsh, Nathan, & Willingham, 013, for a review of effective—and less effective—study techiques). Bjork (1994) coined the term, “desirable difficulties,” to efer to the set of study conditions or study strategies that appear to low down the acquisition of to-be-learned materials and make the earning process seem more effortful, but then enhance long-term etention and transfer, presumably because contending with those ifficulties engages processes that support learning and retention. xamples of desirable difficulties include generating information or esting oneself (instead of reading or re-reading information—a relPlease cite this article in press as: Yan, V. X., et al. Habits and beliefs Journal of Applied Research in Memory and Cognition (2014), http://dx.d tively passive activity), spacing out repeated study opportunities instead of cramming), and varying conditions of practice (rather han keeping those conditions constant and predictable). ∗ Corresponding author at: 1285 Franz Hall, Department of Psychology, University f California, Los Angeles, CA 90095, United States. Tel.: +1 310 954 6650. E-mail address: veronicayan@ucla.edu (V.X. Yan). ttp://dx.doi.org/10.1016/j.jarmac.2014.04.003 211-3681/© 2014 Society for Applied Research in Memory and Cognition. Published by reserved. Many recent findings, however—both survey-based and experimental—have revealed that learners continue to study in non-optimal ways. 
Learners do not appear, for example, to understand two of the most robust effects from the cognitive psychology literature—namely, the testing effect (that practicing retrieval leads to better long-term retention, compared even to re-reading; e.g., Roediger & Karpicke, 2006a) and the spacing effect (that spacing repeated study sessions leads to better long-term retention than does massing repetitions; e.g., Cepeda, Pashler, Vul, Wixted, & Rohrer, 2006; Dempster, 1988). A survey of 472 undergraduate students by Kornell and Bjork (2007)—which was replicated by Hartwig and Dunlosky (2012)—showed that students underappreciate the learning benefits of testing. Similarly, Karpicke, Butler, and Roediger (2009) surveyed students’ study strategies and found that re-reading was by far the most popular study strategy and that self-testing tended to be used only to assess whether some level of learning had been achieved, not to enhance subsequent recall. Even when students have some appreciation of effective strategies they often do not implement those strategies. Susser and McCabe (2013), for example, showed that even though students reported understanding the benefits of spaced learning over massed learning, they often do not space their study sessions on a given topic, particularly if their upcoming test is going to have a that guide self-regulated learning: Do they vary with mindset? oi.org/10.1016/j.jarmac.2014.04.003 multiple-choice format, or if they think the material is relatively easy, or if they are simply too busy. In fact, Kornell and Bjork’s (2007) survey showed that students’ study decisions tended to be driven by impending deadlines, rather than by learning goals, Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "397036a265637f5a84256bdba80d93a2",
"text": "0167-4730/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.strusafe.2008.06.002 * Corresponding author. E-mail address: abliel@stanford.edu (A.B. Liel). The primary goal of seismic provisions in building codes is to protect life safety through the prevention of structural collapse. To evaluate the extent to which current and past building code provisions meet this objective, the authors have conducted detailed assessments of collapse risk of reinforced-concrete moment frame buildings, including both ‘ductile’ frames that conform to current building code requirements, and ‘non-ductile’ frames that are designed according to out-dated (pre-1975) building codes. Many aspects of the assessment process can have a significant impact on the evaluated collapse performance; this study focuses on methods of representing modeling parameter uncertainties in the collapse assessment process. Uncertainties in structural component strength, stiffness, deformation capacity, and cyclic deterioration are considered for non-ductile and ductile frame structures of varying heights. To practically incorporate these uncertainties in the face of the computationally intensive nonlinear response analyses needed to simulate collapse, the modeling uncertainties are assessed through a response surface, which describes the median collapse capacity as a function of the model random variables. The response surface is then used in conjunction with Monte Carlo methods to quantify the effect of these modeling uncertainties on the calculated collapse fragilities. Comparisons of the response surface based approach and a simpler approach, namely the first-order second-moment (FOSM) method, indicate that FOSM can lead to inaccurate results in some cases, particularly when the modeling uncertainties cause a shift in the prediction of the median collapse point. An alternate simplified procedure is proposed that combines aspects of the response surface and FOSM methods, providing an efficient yet accurate technique to characterize model uncertainties, accounting for the shift in median response. The methodology for incorporating uncertainties is presented here with emphasis on the collapse limit state, but is also appropriate for examining the effects of modeling uncertainties on other structural limit states. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0f79acbbac311e9005112da00ee4a692",
"text": "Eight years ago the journal Transcultural Psychiatry published the results of an epidemiological study (Chandler and Lalonde 1998) in which the highly variable rates of youth suicide among British Columbia’s First Nations were related to six markers of “cultural continuity” – community-level variables meant to document the extent to which each of the province’s almost 200 Aboriginal “bands” had taken steps to preserve their cultural past and to secure future control of their civic lives. Two key findings emerged from these earlier efforts. The first was that, although the province-wide rate of Aboriginal youth suicide was sharply elevated (more than 5 times the national average), this commonly reported summary statistic was labelled an “actuarial fiction” that failed to capture the local reality of even one of the province’s First Nations communities. Counting up all of the deaths by suicide and then simply dividing through by the total number of available Aboriginal youth obscures what is really interesting – the dramatic differences in the incidence of youth suicide that actually distinguish one band or tribal council from the next. In fact, more than half of the province’s bands reported no youth suicides during the 6-year period (1987-1992) covered by this study, while more than 90% of the suicides occurred in less than 10% of the bands. Clearly, our data demonstrated, youth suicide is not an “Aboriginal” problem per se but a problem confined to only some Aboriginal communities. Second, all six of the “cultural continuity” factors originally identified – measures intended to mark the degree to which individual Aboriginal communities had successfully taken steps to secure their cultural past in light of an imagined future – proved to be strongly related to the presence or absence of youth suicide. Every community characterized by all six of these protective factors experienced no youth suicides during the 6-year reporting period, whereas those bands in which none of these factors were present suffered suicide rates more than 10 times the national average. Because these findings were seen by us, and have come to be seen by others,1 not only as clarifying the link between cultural continuity and reduced suicide risk but also as having important policy implications, we have undertaken to replicate and broaden our earlier research efforts. We have done this in three ways. First, we have extended our earlier examination of the community-by-community incidence of Aboriginal youth suicides to include also the additional",
"title": ""
},
{
"docid": "14f127a8dd4a0fab5acd9db2a3924657",
"text": "Pesticides (herbicides, fungicides or insecticides) play an important role in agriculture to control the pests and increase the productivity to meet the demand of foods by a remarkably growing population. Pesticides application thus became one of the important inputs for the high production of corn and wheat in USA and UK, respectively. It also increased the crop production in China and India [1-4]. Although extensive use of pesticides improved in securing enough crop production worldwide however; these pesticides are equally toxic or harmful to nontarget organisms like mammals, birds etc and thus their presence in excess can cause serious health and environmental problems. Pesticides have thus become environmental pollutants as they are often found in soil, water, atmosphere and agricultural products, in harmful levels, posing an environmental threat. Its residual presence in agricultural products and foods can also exhibit acute or chronic toxicity on human health. Even at low levels, it can cause adverse effects on humans, plants, animals and ecosystems. Thus, monitoring of these pesticide and its residues become extremely important to ensure that agricultural products have permitted levels of pesticides [5-6]. Majority of pesticides belong to four classes, namely organochlorines, organophosphates, carbamates and pyrethroids. Organophosphates pesticides are a class of insecticides, of which many are highly toxic [7]. Until the 21st century, they were among the most widely used insecticides which included parathion, malathion, methyl parathion, chlorpyrifos, diazinon, dichlorvos, dimethoate, monocrotophos and profenofos. Organophosphate pesticides cause toxicity by inhibiting acetylcholinesterase enzyme [8]. It acts as a poison to insects and other animals, such as birds, amphibians and mammals, primarily by phosphorylating the acetylcholinesterase enzyme (AChE) present at nerve endings. This leads to the loss of available AChE and because of the excess acetylcholine (ACh, the impulse-transmitting substance), the effected organ becomes over stimulated. The enzyme is critical to control the transmission of nerve impulse from nerve fibers to the smooth and skeletal muscle cells, secretary cells and autonomic ganglia, and within the central nervous system (CNS). Once the enzyme reaches a critical level due to inactivation by phosphorylation, symptoms and signs of cholinergic poisoning get manifested [9].",
"title": ""
},
{
"docid": "8cd666c0796c0fe764bc8de0d7a20fa3",
"text": "$$\\mathcal{Q}$$ -learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for $$\\mathcal{Q}$$ -learning based on that outlined in Watkins (1989). We show that $$\\mathcal{Q}$$ -learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many $$\\mathcal{Q}$$ values can be changed each iteration, rather than just one.",
"title": ""
},
{
"docid": "28bc08b0e0f71b99a7f223b2285f2725",
"text": "Evidence has accrued to suggest that there are 2 distinct dimensions of narcissism, which are often labeled grandiose and vulnerable narcissism. Although individuals high on either of these dimensions interact with others in an antagonistic manner, they differ on other central constructs (e.g., Neuroticism, Extraversion). In the current study, we conducted an exploratory factor analysis of 3 prominent self-report measures of narcissism (N=858) to examine the convergent and discriminant validity of the resultant factors. A 2-factor structure was found, which supported the notion that these scales include content consistent with 2 relatively distinct constructs: grandiose and vulnerable narcissism. We then compared the similarity of the nomological networks of these dimensions in relation to indices of personality, interpersonal behavior, and psychopathology in a sample of undergraduates (n=238). Overall, the nomological networks of vulnerable and grandiose narcissism were unrelated. The current results support the need for a more explicit parsing of the narcissism construct at the level of conceptualization and assessment.",
"title": ""
},
{
"docid": "7e61652a45c490c230d368d653ef63e8",
"text": "Deep embeddings answer one simple question: How similar are two images? Learning these embeddings is the bedrock of verification, zero-shot learning, and visual search. The most prominent approaches optimize a deep convolutional network with a suitable loss function, such as contrastive loss or triplet loss. While a rich line of work focuses solely on the loss functions, we show in this paper that selecting training examples plays an equally important role. We propose distance weighted sampling, which selects more informative and stable examples than traditional approaches. In addition, we show that a simple margin based loss is sufficient to outperform all other loss functions. We evaluate our approach on the Stanford Online Products, CAR196, and the CUB200-2011 datasets for image retrieval and clustering, and on the LFW dataset for face verification. Our method achieves state-of-the-art performance on all of them.",
"title": ""
},
{
"docid": "ba302b1ee508edc2376160b3ad0a751f",
"text": "During the last years terrestrial laser scanning became a standard method of data acquisition for various applications in close range domain, like industrial production, forest inventories, plant engineering and construction, car navigation and – one of the most important fields – the recording and modelling of buildings. To use laser scanning data in an adequate way, a quality assessment of the laser scanner is inevitable. In the literature some publications can be found concerning the data quality of terrestrial laser scanners. Most of these papers concentrate on the geometrical accuracy of the scanner (errors of instrument axis, range accuracy using target etc.). In this paper a special aspect of quality assessment will be discussed: the influence of different materials and object colours on the recorded measurements of a TLS. The effects on the geometric accuracy as well as on the simultaneously acquired intensity values are the topics of our investigations. A TRIMBLE GX scanner was used for several test series. The study of different effects refer to materials commonly used at building façades, i.e. grey scaled and coloured sheets, various species of wood, a metal plate, plasters of different particle size, light-transmissive slides and surfaces of different conditions of wetness. The tests concerning a grey wedge show a dependence on the brightness where the mean square error (MSE) decrease from black to white, and therefore, confirm previous results of other research groups. Similar results had been obtained with coloured sheets. In this context an important result is that the accuracy of measurements at night-time has proved to be much better than at day time. While different species of wood and different conditions of wetness have no significant effect on the range accuracy the study of a metal plate delivers MSE values considerably higher than the accuracy of the scanner, if the angle of incidence is approximately orthogonal. Also light-transmissive slides cause enormous MSE values. It can be concluded that high precision measurements should be carried out at night-time and preferable on bright surfaces without specular characteristics.",
"title": ""
},
{
"docid": "4f1c2748a5f2e50ac1efe80c5bcd3a37",
"text": "Recently the RoboCup@Work league emerged in the world's largest robotics competition, intended for competitors wishing to compete in the field of mobile robotics for manipulation tasks in industrial environments. This competition consists of several tasks with one reflected in this work (Basic Navigation Test). This project involves the simulation in Virtual Robot Experimentation Platform (V-REP) of the behavior of a KUKA youBot. The goal is to verify that the robots can navigate in their environment, in a standalone mode, in a robust and secure way. To achieve the proposed objectives, it was necessary to create a program in Lua and test it in simulation. This involved the study of robot kinematics and mechanics, Simultaneous Localization And Mapping (SLAM) and perception from sensors. In this work is introduced an algorithm developed for a KUKA youBot platform to perform the SLAM while reaching for the goal position, which works according to the requirements of this competition BNT. This algorithm also minimizes the errors in the built map and in the path travelled by the robot.",
"title": ""
},
{
"docid": "fc26ebb8329c84d96a714065117dda02",
"text": "Technological advances in genomics and imaging have led to an explosion of molecular and cellular profiling data from large numbers of samples. This rapid increase in biological data dimension and acquisition rate is challenging conventional analysis strategies. Modern machine learning methods, such as deep learning, promise to leverage very large data sets for finding hidden structure within them, and for making accurate predictions. In this review, we discuss applications of this new breed of analysis approaches in regulatory genomics and cellular imaging. We provide background of what deep learning is, and the settings in which it can be successfully applied to derive biological insights. In addition to presenting specific applications and providing tips for practical use, we also highlight possible pitfalls and limitations to guide computational biologists when and how to make the most use of this new technology.",
"title": ""
},
{
"docid": "dd5c0dc27c0b195b1b8f2c6e6a5cea88",
"text": "The increasing dependence on information networks for business operations has focused managerial attention on managing risks posed by failure of these networks. In this paper, we develop models to assess the risk of failure on the availability of an information network due to attacks that exploit software vulnerabilities. Software vulnerabilities arise from software installed on the nodes of the network. When the same software stack is installed on multiple nodes on the network, software vulnerabilities are shared among them. These shared vulnerabilities can result in correlated failure of multiple nodes resulting in longer repair times and greater loss of availability of the network. Considering positive network effects (e.g., compatibility) alone without taking the risks of correlated failure and the resulting downtime into account would lead to overinvestment in homogeneous software deployment. Exploiting characteristics unique to information networks, we present a queuing model that allows us to quantify downtime loss faced by a rm as a function of (1) investment in security technologies to avert attacks, (2) software diversification to limit the risk of correlated failure under attacks, and (3) investment in IT resources to repair failures due to attacks. The novelty of this method is that we endogenize the failure distribution and the node correlation distribution, and show how the diversification strategy and other security measures/investments may impact these two distributions, which in turn determine the security loss faced by the firm. We analyze and discuss the effectiveness of diversification strategy under different operating conditions and in the presence of changing vulnerabilities. We also take into account the benefits and costs of a diversification strategy. Our analysis provides conditions under which diversification strategy is advantageous.",
"title": ""
},
{
"docid": "4e63f4a95d501641b80fcdf9bc0f89f6",
"text": "Streptococcus milleri was isolated from the active lesions of three patients with perineal hidradenitis suppurativa. In each patient, elimination of this organism by appropriate antibiotic therapy was accompanied by marked clinical improvement.",
"title": ""
},
{
"docid": "db597c88e71a8397b81216282d394623",
"text": "In many real applications, graph data is subject to uncertainties due to incompleteness and imprecision of data. Mining such uncertain graph data is semantically different from and computationally more challenging than mining conventional exact graph data. This paper investigates the problem of mining uncertain graph data and especially focuses on mining frequent subgraph patterns on an uncertain graph database. A novel model of uncertain graphs is presented, and the frequent subgraph pattern mining problem is formalized by introducing a new measure, called expected support. This problem is proved to be NP-hard. An approximate mining algorithm is proposed to find a set of approximately frequent subgraph patterns by allowing an error tolerance on expected supports of discovered subgraph patterns. The algorithm uses efficient methods to determine whether a subgraph pattern can be output or not and a new pruning method to reduce the complexity of examining subgraph patterns. Analytical and experimental results show that the algorithm is very efficient, accurate, and scalable for large uncertain graph databases. To the best of our knowledge, this paper is the first one to investigate the problem of mining frequent subgraph patterns from uncertain graph data.",
"title": ""
}
] |
scidocsrr
|
0693209386b1531a62d4e5726c021392
|
Loughborough University Institutional Repository Understanding Generation Y and their use of social media : a review and research agenda
|
[
{
"docid": "b4880ddb59730f465f585f3686d1d2b1",
"text": "The authors study the effect of word-of-mouth (WOM) marketing on member growth at an Internet social networking site and compare it with traditional marketing vehicles. Because social network sites record the electronic invitations sent out by existing members, outbound WOM may be precisely tracked. WOM, along with traditional marketing, can then be linked to the number of new members subsequently joining the site (signups). Due to the endogeneity among WOM, new signups, and traditional marketing activity, the authors employ a Vector Autoregression (VAR) modeling approach. Estimates from the VAR model show that word-ofmouth referrals have substantially longer carryover effects than traditional marketing actions. The long-run elasticity of signups with respect to WOM is estimated to be 0.53 (substantially larger than the average advertising elasticities reported in the literature) and the WOM elasticity is about 20 times higher than the elasticity for marketing events, and 30 times that of media appearances. Based on revenue from advertising impressions served to a new member, the monetary value of a WOM referral can be calculated; this yields an upper bound estimate for the financial incentives the firm might offer to stimulate word-of-mouth.",
"title": ""
}
] |
[
{
"docid": "fe397e4124ef517268aaabd999bc02c4",
"text": "A new frequency-reconfigurable quasi-Yagi dipole antenna is presented. It consists of a driven dipole element with two varactors in two arms, a director with an additional varactor, a truncated ground plane reflector, a microstrip-to-coplanar-stripline (CPS) transition, and a novel biasing circuit. The effective electrical length of the director element and that of the driven arms are adjusted together by changing the biasing voltages. A 35% continuously frequency-tuning bandwidth, from 1.80 to 2.45 GHz, is achieved. This covers a number of wireless communication systems, including 3G UMTS, US WCS, and WLAN. The length-adjustable director allows the endfire pattern with relatively high gain to be maintained over the entire tuning bandwidth. Measured results show that the gain varies from 5.6 to 7.6 dBi and the front-to-back ratio is better than 10 dB. The H-plane cross polarization is below -15 dB, and that in the E-plane is below -20 dB.",
"title": ""
},
{
"docid": "7e1c0505e40212ef0e8748229654169f",
"text": "This article addresses the concept of quality risk in outsourcing. Recent trends in outsourcing extend a contract manufacturer’s (CM’s) responsibility to several functional areas, such as research and development and design in addition to manufacturing. This trend enables an original equipment manufacturer (OEM) to focus on sales and pricing of its product. However, increasing CM responsibilities also suggest that the OEM’s product quality is mainly determined by its CM. We identify two factors that cause quality risk in this outsourcing relationship. First, the CM and the OEM may not be able to contract on quality; second, the OEM may not know the cost of quality to the CM. We characterize the effects of these two quality risk factors on the firms’ profits and on the resulting product quality. We determine how the OEM’s pricing strategy affects quality risk. We show, for example, that the effect of noncontractible quality is higher than the effect of private quality cost information when the OEM sets the sales price after observing the product’s quality. We also show that committing to a sales price mitigates the adverse effect of quality risk. To obtain these results, we develop and analyze a three-stage decision model. This model is also used to understand the impact of recent information technologies on profits and product quality. For example, we provide a decision tree that an OEM can use in deciding whether to invest in an enterprise-wide quality management system that enables accounting of quality-related activities across the supply chain. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 56: 669–685, 2009",
"title": ""
},
{
"docid": "46072702edbe5177e48510fe37b77943",
"text": "Due to the explosive increase of online images, content-based image retrieval has gained a lot of attention. The success of deep learning techniques such as convolutional neural networks have motivated us to explore its applications in our context. The main contribution of our work is a novel end-to-end supervised learning framework that learns probability-based semantic-level similarity and feature-level similarity simultaneously. The main advantage of our novel hashing scheme that it is able to reduce the computational cost of retrieval significantly at the state-of-the-art efficiency level. We report on comprehensive experiments using public available datasets such as Oxford, Holidays and ImageNet 2012 retrieval datasets.",
"title": ""
},
{
"docid": "7d0020ff1a7500df1458ddfd568db7b4",
"text": "In this position paper, we address the problems of automated road congestion detection and alerting systems and their security properties. We review different theoretical adaptive road traffic control approaches, and three widely deployed adaptive traffic control systems (ATCSs), namely, SCATS, SCOOT and InSync. We then discuss some related research questions, and the corresponding possible approaches, as well as the adversary model and potential attack scenarios. Two theoretical concepts of automated road congestion alarm systems (including system architecture, communication protocol, and algorithms) are proposed on top of ATCSs, such as SCATS, SCOOT and InSync, by incorporating secure wireless vehicle-to-infrastructure (V2I) communications. Finally, the security properties of the proposed system have been discussed and analysed using the ProVerif protocol verification tool.",
"title": ""
},
{
"docid": "0882fc46d918957e73d0381420277bdc",
"text": "The term ‘resource use efficiency in agriculture’ may be broadly defined to include the concepts of technical efficiency, allocative efficiency and environmental efficiency. An efficient farmer allocates his land, labour, water and other resources in an optimal manner, so as to maximise his income, at least cost, on sustainable basis. However, there are countless studies showing that farmers often use their resources sub-optimally. While some farmers may attain maximum physical yield per unit of land at a high cost, some others achieve maximum profit per unit of inputs used. Also in the process of achieving maximum yield and returns, some farmers may ignore the environmentally adverse consequences, if any, of their resource use intensity. Logically all enterprising farmers would try to maximise their farm returns by allocating resources in an efficient manner. But as resources (both qualitatively and quantitatively) and managerial efficiency of different farmers vary widely, the net returns per unit of inputs used also vary significantly from farm to farm. Also a farmer’s access to technology, credit, market and other infrastructure and policy support, coupled with risk perception and risk management capacity under erratic weather and price situations would determine his farm efficiency. Moreover, a farmer knowingly or unknowingly may over-exploit his land and water resources for maximising farm income in the short run, thereby resulting in soil and water degradation and rapid depletion of ground water, and also posing a problem of sustainability of agriculture in the long run. In fact, soil degradation, depletion of groundwater and water pollution due to farmers’ managerial inefficiency or otherwise, have a social cost, while farmers who forego certain agricultural practices which cause any such sustainability problem may have a high opportunity cost. Furthermore, a farmer may not be often either fully aware or properly guided and aided for alternative, albeit best possible uses of his scarce resources like land and water. Thus, there are economic as well as environmental aspects of resource use efficiency. In addition, from the point of view of public exchequer, the resource use efficiency would mean that public investment, subsidies and credit for agriculture are",
"title": ""
},
{
"docid": "f611ccffbe10acb7dcbd6cb8f7ffaeaa",
"text": "We study the problem of single-image depth estimation for images in the wild. We collect human annotated surface normals and use them to help train a neural network that directly predicts pixel-wise depth. We propose two novel loss functions for training with surface normal annotations. Experiments on NYU Depth, KITTI, and our own dataset demonstrate that our approach can significantly improve the quality of depth estimation in the wild.",
"title": ""
},
{
"docid": "6cf2ffb0d541320b1ad04dc3b9e1c9a4",
"text": "Prediction of potential fraudulent activities may prevent both the stakeholders and the appropriate regulatory authorities of national or international level from being deceived. The objective difficulties on collecting adequate data that are obsessed by completeness affects the reliability of the most supervised Machine Learning methods. This work examines the effectiveness of forecasting fraudulent financial statements using semi-supervised classification techniques (SSC) that require just a few labeled examples for achieving robust learning behaviors mining useful data patterns from a larger pool of unlabeled examples. Based on data extracted from Greek firms, a number of comparisons between supervised and semi-supervised algorithms has been conducted. According to the produced results, the later algorithms are favored being examined over several scenarios of different Labeled Ratio (R) values.",
"title": ""
},
{
"docid": "b4ed57258b85ab4d81d5071fc7ad2cc9",
"text": "We present LEAR (Lexical Entailment AttractRepel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymyhypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNetstyle hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.",
"title": ""
},
{
"docid": "04756d4dfc34215c8acb895ecfcfb406",
"text": "The author describes five separate projects he has undertaken in the intersection of computer science and Canadian income tax law. They are:A computer-assisted instruction (CAI) course for teaching income tax, programmed using conventional CAI techniques;\nA “document modeling” computer program for generating the documentation for a tax-based transaction and advising the lawyer-user as to what decisions should be made and what the tax effects will be, programmed in a conventional language;\nA prototype expert system for determining the income tax effects of transactions and tax-defined relationships, based on a PROLOG representation of the rules of the Income Tax Act;\nAn intelligent CAI (ICAI) system for generating infinite numbers of randomized quiz questions for students, computing the answers, and matching wrong answers to particular student errors, based on a PROLOG representation of the rules of the Income Tax Act; and\nA Hypercard stack for providing information about income tax, enabling both education and practical research to follow the user's needs path.\n\nThe author shows that non-AI approaches are a way to produce packages quickly and efficiently. Their primary disadvantage is the massive rewriting required when the tax law changes. AI approaches based on PROLOG, on the other hand, are harder to develop to a practical level but will be easier to audit and maintain. The relationship between expert systems and CAI is discussed.",
"title": ""
},
{
"docid": "9500dfc92149c5a808cec89b140fc0c3",
"text": "We present a new approach to the geometric alignment of a point cloud to a surface and to related registration problems. The standard algorithm is the familiar ICP algorithm. Here we provide an alternative concept which relies on instantaneous kinematics and on the geometry of the squared distance function of a surface. The proposed algorithm exhibits faster convergence than ICP; this is supported both by results of a local convergence analysis and by experiments.",
"title": ""
},
{
"docid": "a2258145e9366bfbf515b3949b2d70fa",
"text": "Affect intensity (AI) may reconcile 2 seemingly paradoxical findings: Women report more negative affect than men but equal happiness as men. AI describes people's varying response intensity to identical emotional stimuli. A college sample of 66 women and 34 men was assessed on both positive and negative affect using 4 measurement methods: self-report, peer report, daily report, and memory performance. A principal-components analysis revealed an affect balance component and an AI component. Multimeasure affect balance and AI scores were created, and t tests were computed that showed women to be as happy as and more intense than men. Gender accounted for less than 1% of the variance in happiness but over 13% in AI. Thus, depression findings of more negative affect in women do not conflict with well-being findings of equal happiness across gender. Generally, women's more intense positive emotions balance their higher negative affect.",
"title": ""
},
{
"docid": "47505c95f8a3cf136b3b5a76847990fc",
"text": "We present a hybrid algorithm to compute the convex hull of points in three or higher dimensional spaces. Our formulation uses a GPU-based interior point filter to cull away many of the points that do not lie on the boundary. The convex hull of remaining points is computed on a CPU. The GPU-based filter proceeds in an incremental manner and computes a pseudo-hull that is contained inside the convex hull of the original points. The pseudo-hull computation involves only localized operations and maps well to GPU architectures. Furthermore, the underlying approach extends to high dimensional point sets and deforming points. In practice, our culling filter can reduce the number of candidate points by two orders of magnitude. We have implemented the hybrid algorithm on commodity GPUs, and evaluated its performance on several large point sets. In practice, the GPU-based filtering algorithm can cull up to 85M interior points per second on an NVIDIA GeForce GTX 580 and the hybrid algorithm improves the overall performance of convex hull computation by 10 − 27 times (for static point sets) and 22 − 46 times (for deforming point sets).",
"title": ""
},
{
"docid": "a83ba31bdf54c9dec09788bfb1c972fc",
"text": "In 1999, ISPOR formed the Quality of Life Special Interest group (QoL-SIG)--Translation and Cultural Adaptation group (TCA group) to stimulate discussion on and create guidelines and standards for the translation and cultural adaptation of patient-reported outcome (PRO) measures. After identifying a general lack of consistency in current methods and published guidelines, the TCA group saw a need to develop a holistic perspective that synthesized the full spectrum of published methods. This process resulted in the development of Translation and Cultural Adaptation of Patient Reported Outcomes Measures--Principles of Good Practice (PGP), a report on current methods, and an appraisal of their strengths and weaknesses. The TCA Group undertook a review of evidence from current practice, a review of the literature and existing guidelines, and consideration of the issues facing the pharmaceutical industry, regulators, and the broader outcomes research community. Each approach to translation and cultural adaptation was considered systematically in terms of rationale, components, key actors, and the potential benefits and risks associated with each approach and step. The results of this review were subjected to discussion and challenge within the TCA group, as well as consultation with the outcomes research community at large. Through this review, a consensus emerged on a broad approach, along with a detailed critique of the strengths and weaknesses of the differing methodologies. The results of this review are set out as \"Translation and Cultural Adaptation of Patient Reported Outcomes Measures--Principles of Good Practice\" and are reported in this document.",
"title": ""
},
{
"docid": "ba65c99adc34e05cf0cd1b5618a21826",
"text": "We investigate a family of bugs in blockchain-based smart contracts, which we call event-ordering (or EO) bugs. These bugs are intimately related to the dynamic ordering of contract events, i.e., calls of its functions on the blockchain, and enable potential exploits of millions of USD worth of Ether. Known examples of such bugs and prior techniques to detect them have been restricted to a small number of event orderings, typicall 1 or 2. Our work provides a new formulation of this general class of EO bugs as finding concurrency properties arising in long permutations of such events. The technical challenge in detecting our formulation of EO bugs is the inherent combinatorial blowup in path and state space analysis, even for simple contracts. We propose the first use of partial-order reduction techniques, using happen-before relations extracted automatically for contracts, along with several other optimizations built on a dynamic symbolic execution technique. We build an automatic tool called ETHRACER that requires no hints from users and runs directly on Ethereum bytecode. It flags 7-11% of over ten thousand contracts analyzed in roughly 18.5 minutes per contract, providing compact event traces that human analysts can run as witnesses. These witnesses are so compact that confirmations require only a few minutes of human effort. Half of the flagged contracts have subtle EO bugs, including in ERC-20 contracts that carry hundreds of millions of dollars worth of Ether. Thus, ETHRACER is effective at detecting a subtle yet dangerous class of bugs which existing tools miss.",
"title": ""
},
{
"docid": "70c6da9da15ad40b4f64386b890ccf51",
"text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. The result is shown.",
"title": ""
},
{
"docid": "0ec0b6797069ee5bd737ea787cba43ef",
"text": "Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. This article discusses the advantages and shortcomings of the performance measures currently used. Problems such as a common image database for performance comparisons and a means of getting relevance judgments (or ground truth) for queries are explained. The relationship between CBIR and information retrieval (IR) is made clear, since IR researchers have decades of experience with the evaluation problem. Many of their solutions can be used for CBIR, despite the differences between the fields. Several methods used in text retrieval are explained. Proposals for performance measures and means of developing a standard test suite for CBIR, similar to that used in IR at the annual Text REtrieval Conference (TREC), are presented. MULLER, Henning, et al. Performance Evaluation in Content-Based Image Retrieval: Overview and Proposals. Genève : 1999",
"title": ""
},
{
"docid": "c26e9f486621e37d66bf0925d8ff2a3e",
"text": "We report the first two Malaysian children with partial deletion 9p syndrome, a well delineated but rare clinical entity. Both patients had trigonocephaly, arching eyebrows, anteverted nares, long philtrum, abnormal ear lobules, congenital heart lesions and digital anomalies. In addition, the first patient had underdeveloped female genitalia and anterior anus. The second patient had hypocalcaemia and high arched palate and was initially diagnosed with DiGeorge syndrome. Chromosomal analysis revealed a partial deletion at the short arm of chromosome 9. Karyotyping should be performed in patients with craniostenosis and multiple abnormalities as an early syndromic diagnosis confers prognostic, counselling and management implications.",
"title": ""
},
{
"docid": "c9c98e50a49bbc781047dc425a2d6fa1",
"text": "Understanding wound healing today involves much more than simply stating that there are three phases: \"inflammation, proliferation, and maturation.\" Wound healing is a complex series of reactions and interactions among cells and \"mediators.\" Each year, new mediators are discovered and our understanding of inflammatory mediators and cellular interactions grows. This article will attempt to provide a concise report of the current literature on wound healing by first reviewing the phases of wound healing followed by \"the players\" of wound healing: inflammatory mediators (cytokines, growth factors, proteases, eicosanoids, kinins, and more), nitric oxide, and the cellular elements. The discussion will end with a pictorial essay summarizing the wound-healing process.",
"title": ""
},
{
"docid": "ceedf70c92099fc8612a38f91f2c9507",
"text": "Recent work has demonstrated the value of social media monitoring for health surveillance (e.g., tracking influenza or depression rates). It is an open question whether such data can be used to make causal inferences (e.g., determining which activities lead to increased depression rates). Even in traditional, restricted domains, estimating causal effects from observational data is highly susceptible to confounding bias. In this work, we estimate the effect of exercise on mental health from Twitter, relying on statistical matching methods to reduce confounding bias. We train a text classifier to estimate the volume of a user’s tweets expressing anxiety, depression, or anger, then compare two groups: those who exercise regularly (identified by their use of physical activity trackers like Nike+), and a matched control group. We find that those who exercise regularly have significantly fewer tweets expressing depression or anxiety; there is no significant difference in rates of tweets expressing anger. We additionally perform a sensitivity analysis to investigate how the many experimental design choices in such a study impact the final conclusions, including the quality of the classifier and the construction of the control group.",
"title": ""
},
{
"docid": "fd32bf580b316634e44a8c37adfab2eb",
"text": "In a previous paper we reported the successful use of graph coloring techniques for doing global register allocation in an experimental PL/I optimizing compiler. When the compiler cannot color the register conflict graph with a number of colors equal to the number of available machine registers, it must add code to spill and reload registers to and from storage. Previously the compiler produced spill code whose quality sometimes left much to be desired, and the ad hoc techniques used took considerable amounts of compile time. We have now discovered how to extend the graph coloring approach so that it naturally solves the spilling problem. Spill decisions are now made on the basis of the register conflict graph and cost estimates of the value of keeping the result of a computation in a register rather than in storage. This new approach produces better object code and takes much less compile time.",
"title": ""
}
] |
scidocsrr
|
794d168e82a8e468067707d0e2c62f40
|
Signed networks in social media
|
[
{
"docid": "31a1a5ce4c9a8bc09cbecb396164ceb4",
"text": "In trying out this hypothesis we shall understand by attitude the positive or negative relationship of a person p to another person o or to an impersonal entity x which may be a situation, an event, an idea, or a thing, etc. Examples are: to like, to love, to esteem, to value, and their opposites. A positive relation of this kind will be written L, a negative one ~L. Thus, pLo means p likes, loves, or values o, or, expressed differently, o is positive for p.",
"title": ""
}
] |
[
{
"docid": "4d4219d8e4fd1aa86724f3561aea414b",
"text": "Trajectory search has long been an attractive and challenging topic which blooms various interesting applications in spatial-temporal databases. In this work, we study a new problem of searching trajectories by locations, in which context the query is only a small set of locations with or without an order specified, while the target is to find the k Best-Connected Trajectories (k-BCT) from a database such that the k-BCT best connect the designated locations geographically. Different from the conventional trajectory search that looks for similar trajectories w.r.t. shape or other criteria by using a sample query trajectory, we focus on the goodness of connection provided by a trajectory to the specified query locations. This new query can benefit users in many novel applications such as trip planning.\n In our work, we firstly define a new similarity function for measuring how well a trajectory connects the query locations, with both spatial distance and order constraint being considered. Upon the observation that the number of query locations is normally small (e.g. 10 or less) since it is impractical for a user to input too many locations, we analyze the feasibility of using a general-purpose spatial index to achieve efficient k-BCT search, based on a simple Incremental k-NN based Algorithm (IKNN). The IKNN effectively prunes and refines trajectories by using the devised lower bound and upper bound of similarity. Our contributions mainly lie in adapting the best-first and depth-first k-NN algorithms to the basic IKNN properly, and more importantly ensuring the efficiency in both search effort and memory usage. An in-depth study on the adaption and its efficiency is provided. Further optimization is also presented to accelerate the IKNN algorithm. Finally, we verify the efficiency of the algorithm by extensive experiments.",
"title": ""
},
{
"docid": "a65d1881f5869f35844064d38b684ac8",
"text": "Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.",
"title": ""
},
{
"docid": "350f7694198d1b2c0a2c8cc1b75fc3c2",
"text": "We present a methodology, called fast repetition rate (FRR) fluorescence, that measures the functional absorption cross-section (sigmaPS II) of Photosystem II (PS II), energy transfer between PS II units (p), photochemical and nonphotochemical quenching of chlorophyll fluorescence, and the kinetics of electron transfer on the acceptor side of PS II. The FRR fluorescence technique applies a sequence of subsaturating excitation pulses ('flashlets') at microsecond intervals to induce fluorescence transients. This approach is extremely flexible and allows the generation of both single-turnover (ST) and multiple-turnover (MT) flashes. Using a combination of ST and MT flashes, we investigated the effect of excitation protocols on the measured fluorescence parameters. The maximum fluorescence yield induced by an ST flash applied shortly (10 &mgr;s to 5 ms) following an MT flash increased to a level comparable to that of an MT flash, while the functional absorption cross-section decreased by about 40%. We interpret this phenomenon as evidence that an MT flash induces an increase in the fluorescence-rate constant, concomitant with a decrease in the photosynthetic-rate constant in PS II reaction centers. The simultaneous measurements of sigmaPS II, p, and the kinetics of Q-A reoxidation, which can be derived only from a combination of ST and MT flash fluorescence transients, permits robust characterization of the processes of photosynthetic energy-conversion.",
"title": ""
},
{
"docid": "2f83b2ef8f71c56069304b0962074edc",
"text": "Abstract: Printed antennas are becoming one of the most popular designs in personal wireless communications systems. In this paper, the design of a novel tapered meander line antenna is presented. The design analysis and characterization of the antenna is performed using the finite difference time domain technique and experimental verifications are performed to ensure the effectiveness of the numerical model. The new design features an operating frequency of 2.55 GHz with a 230 MHz bandwidth, which supports future generations of mobile communication systems.",
"title": ""
},
{
"docid": "5d851687f9a69db7419ff054623f03d8",
"text": "Attention mechanisms are a design trend of deep neural networks that stands out in various computer vision tasks. Recently, some works have attempted to apply attention mechanisms to single image super-resolution (SR) tasks. However, they apply the mechanisms to SR in the same or similar ways used for high-level computer vision problems without much consideration of the different nature between SR and other problems. In this paper, we propose a new attention method, which is composed of new channelwise and spatial attention mechanisms optimized for SR and a new fused attention to combine them. Based on this, we propose a new residual attention module (RAM) and a SR network using RAM (SRRAM). We provide in-depth experimental analysis of different attention mechanisms in SR. It is shown that the proposed method can construct both deep and lightweight SR networks showing improved performance in comparison to existing state-of-the-art methods.",
"title": ""
},
{
"docid": "8eb96feea999ce77f2b56b7941af2587",
"text": "The term cyber security is often used interchangeably with the term information security. This paper argues that, although there is a substantial overlap between cyber security and information security, these two concepts are not totally analogous. Moreover, the paper posits that cyber security goes beyond the boundaries of traditional information security to include not only the protection of information resources, but also that of other assets, including the person him/herself. In information security, reference to the human factor usually relates to the role(s) of humans in the security process. In cyber security this factor has an additional dimension, namely, the humans as potential targets of cyber attacks or even unknowingly participating in a cyber attack. This additional dimension has ethical implications for society as a whole, since the protection of certain vulnerable groups, for example children, could be seen as a societal responsibility. a 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0cd1f01d1b2a5afd8c6eba13ef5082fa",
"text": "Automatic differentiation—the mechanical transformation of numeric computer programs to calculate derivatives efficiently and accurately—dates to the origin of the computer age. Reverse mode automatic differentiation both antedates and generalizes the method of backwards propagation of errors used in machine learning. Despite this, practitioners in a variety of fields, including machine learning, have been little influenced by automatic differentiation, and make scant use of available tools. Here we review the technique of automatic differentiation, describe its two main modes, and explain how it can benefit machine learning practitioners. To reach the widest possible audience our treatment assumes only elementary differential calculus, and does not assume any knowledge of linear algebra.",
"title": ""
},
{
"docid": "1c4e1feed1509e0a003dca23ad3a902c",
"text": "With an expansive and ubiquitously available gold mine of educational data, Massive Open Online courses (MOOCs) have become the an important foci of learning analytics research. The hope is that this new surge of development will bring the vision of equitable access to lifelong learning opportunities within practical reach. MOOCs offer many valuable learning experiences to students, from video lectures, readings, assignments and exams, to opportunities to connect and collaborate with others through threaded discussion forums and other Web 2.0 technologies. Nevertheless, despite all this potential, MOOCs have so far failed to produce evidence that this potential is being realized in the current instantiation of MOOCs. In this work, we primarily explore video lecture interaction in Massive Open Online Courses (MOOCs), which is central to student learning experience on these educational platforms. As a research contribution, we operationalize video lecture clickstreams of students into behavioral actions, and construct a quantitative information processing index, that can aid instructors to better understand MOOC hurdles and reason about unsatisfactory learning outcomes. Our results illuminate the effectiveness of developing such a metric inspired by cognitive psychology, towards answering critical questions regarding students’ engagement, their future click interactions and participation trajectories that lead to in-video dropouts. We leverage recurring click behaviors to differentiate distinct video watching profiles for students in MOOCs. Additionally, we discuss about prediction of complete course dropouts, incorporating diverse perspectives from statistics and machine learning, to offer a more nuanced view into how the second generation of MOOCs be benefited, if course instructors were to better comprehend factors that lead to student attrition. Implications for research and practice are discussed.",
"title": ""
},
{
"docid": "0d30cfe8755f146ded936aab55cb80d3",
"text": "In this study, we investigated a pattern-recognition technique based on an artificial neural network (ANN), which is called a massive training artificial neural network (MTANN), for reduction of false positives in computerized detection of lung nodules in low-dose computed tomography (CT) images. The MTANN consists of a modified multilayer ANN, which is capable of operating on image data directly. The MTANN is trained by use of a large number of subregions extracted from input images together with the teacher images containing the distribution for the \"likelihood of being a nodule.\" The output image is obtained by scanning an input image with the MTANN. The distinction between a nodule and a non-nodule is made by use of a score which is defined from the output image of the trained MTANN. In order to eliminate various types of non-nodules, we extended the capability of a single MTANN, and developed a multiple MTANN (Multi-MTANN). The Multi-MTANN consists of plural MTANNs that are arranged in parallel. Each MTANN is trained by using the same nodules, but with a different type of non-nodule. Each MTANN acts as an expert for a specific type of non-nodule, e.g., five different MTANNs were trained to distinguish nodules from various-sized vessels; four other MTANNs were applied to eliminate some other opacities. The outputs of the MTANNs were combined by using the logical AND operation such that each of the trained MTANNs eliminated none of the nodules, but removed the specific type of non-nodule with which the MTANN was trained, and thus removed various types of non-nodules. The Multi-MTANN consisting of nine MTANNs was trained with 10 typical nodules and 10 non-nodules representing each of nine different non-nodule types (90 training non-nodules overall) in a training set. The trained Multi-MTANN was applied to the reduction of false positives reported by our current computerized scheme for lung nodule detection based on a database of 63 low-dose CT scans (1765 sections), which contained 71 confirmed nodules including 66 biopsy-confirmed primary cancers, from a lung cancer screening program. The Multi-MTANN was applied to 58 true positives (nodules from 54 patients) and 1726 false positives (non-nodules) reported by our current scheme in a validation test; these were different from the training set. The results indicated that 83% (1424/1726) of non-nodules were removed with a reduction of one true positive (nodule), i.e., a classification sensitivity of 98.3% (57 of 58 nodules). By using the Multi-MTANN, the false-positive rate of our current scheme was improved from 0.98 to 0.18 false positives per section (from 27.4 to 4.8 per patient) at an overall sensitivity of 80.3% (57/71).",
"title": ""
},
{
"docid": "e4c33ca67526cb083cae1543e5564127",
"text": "Given e-commerce scenarios that user profiles are invisible, session-based recommendation is proposed to generate recommendation results from short sessions. Previous work only considers the user's sequential behavior in the current session, whereas the user's main purpose in the current session is not emphasized. In this paper, we propose a novel neural networks framework, i.e., Neural Attentive Recommendation Machine (NARM), to tackle this problem. Specifically, we explore a hybrid encoder with an attention mechanism to model the user's sequential behavior and capture the user's main purpose in the current session, which are combined as a unified session representation later. We then compute the recommendation scores for each candidate item with a bi-linear matching scheme based on this unified session representation. We train NARM by jointly learning the item and session representations as well as their matchings. We carried out extensive experiments on two benchmark datasets. Our experimental results show that NARM outperforms state-of-the-art baselines on both datasets. Furthermore, we also find that NARM achieves a significant improvement on long sessions, which demonstrates its advantages in modeling the user's sequential behavior and main purpose simultaneously.",
"title": ""
},
{
"docid": "9464f2e308b5c8ab1f2fac1c008042c0",
"text": "Data governance has become a significant approach that drives decision making in public organisations. Thus, the loss of data governance is a concern to decision makers, acting as a barrier to achieving their business plans in many countries and also influencing both operational and strategic decisions. The adoption of cloud computing is a recent trend in public sector organisations, that are looking to move their data into the cloud environment. The literature shows that data governance is one of the main concerns of decision makers who are considering adopting cloud computing; it also shows that data governance in general and for cloud computing in particular is still being researched and requires more attention from researchers. However, in the absence of a cloud data governance framework, this paper seeks to develop a conceptual framework for cloud data governance-driven decision making in the public sector.",
"title": ""
},
{
"docid": "96af2e34acf9f1e9c0c57cc24795d0f9",
"text": "Poker games provide a useful testbed for modern Artificial Intelligence techniques. Unlike many classical game domains such as chess and checkers, poker includes elements of imperfect information, stochastic events, and one or more adversarial agents to interact with. Furthermore, in poker it is possible to win or lose by varying degrees. Therefore, it can be advantageous to adapt ones’ strategy to exploit a weak opponent. A poker agent must address these challenges, acting in uncertain environments and exploiting other agents, in order to be highly successful. Arguably, poker games more closely resemble many real world problems than games with perfect information. In this brief paper, we outline Polaris, a Texas Hold’em poker program. Polaris recently defeated top human professionals at the Man vs. Machine Poker Championship and it is currently the reigning AAAI Computer Poker Competition winner in the limit equilibrium and no-limit events.",
"title": ""
},
{
"docid": "fcbb5b1adf14b443ef0d4a6f939140fe",
"text": "In this paper we make the case for IoT edge offloading, which strives to exploit the resources on edge computing devices by offloading fine-grained computation tasks from the cloud closer to the users and data generators (i.e., IoT devices). The key motive is to enhance performance, security and privacy for IoT services. Our proposal bridges the gap between cloud computing and IoT by applying a divide and conquer approach over the multi-level (cloud, edge and IoT) information pipeline. To validate the design of IoT edge offloading, we developed a unikernel-based prototype and evaluated the system under various hardware and network conditions. Our experimentation has shown promising results and revealed the limitation of existing IoT hardware and virtualization platforms, shedding light on future research of edge computing and IoT.",
"title": ""
},
{
"docid": "11a1c92620d58100194b735bfc18c695",
"text": "Stabilization by static output feedback (SOF) is a long-standing open problem in control: given an n by n matrix A and rectangular matrices B and C, find a p by q matrix K such that A + BKC is stable. Low-order controller design is a practically important problem that can be cast in the same framework, with (p+k)(q+k) design parameters instead of pq, where k is the order of the controller, and k << n. Robust stabilization further demands stability in the presence of perturbation and satisfactory transient as well as asymptotic system response. We formulate two related nonsmooth, nonconvex optimization problems over K, respectively with the following objectives: minimization of the -pseudospectral abscissa of A+BKC, for a fixed ≥ 0, and maximization of the complex stability radius of A + BKC. Finding global optimizers of these functions is hard, so we use a recently developed gradient sampling method that approximates local optimizers. For modest-sized systems, local optimization can be carried out from a large number of starting points with no difficulty. The best local optimizers may then be investigated as candidate solutions to the static output feedback or low-order controller design problem. We show results for two problems published in the control literature. The first is a turbo-generator example that allows us to show how different choices of the optimization objective lead to stabilization with qualitatively different properties, conveniently visualized by pseudospectral plots. The second is a well known model of a Boeing 767 aircraft at a flutter condition. For this problem, we are not aware of any SOF stabilizing K published in the literature. Our method was not only able to find an SOF stabilizing K, but also to locally optimize the complex stability radius of A + BKC. We also found locally optimizing order–1 and order–2 controllers for this problem. All optimizers are visualized using pseudospectral plots.",
"title": ""
},
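The static output feedback abstract above casts stabilization as minimizing a nonsmooth spectral objective over the gain matrix K. A minimal sketch of that formulation on a small random system follows, using a derivative-free Nelder-Mead search as a stand-in for the gradient sampling method named in the abstract; the system matrices, dimensions, and optimizer settings are illustrative assumptions.

```python
# Minimal sketch: minimize the spectral abscissa of A + B K C over K.
# Assumptions: small random system matrices; Nelder-Mead stands in for the
# gradient sampling method mentioned in the abstract (this is NOT that method).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p, q = 6, 2, 3                      # state, input, output dimensions
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, p))
C = rng.standard_normal((q, n))

def spectral_abscissa(K_flat):
    K = K_flat.reshape(p, q)
    return np.max(np.linalg.eigvals(A + B @ K @ C).real)

# Multi-start local search; keep the best locally optimal gain found.
best = None
for _ in range(20):
    K0 = rng.standard_normal(p * q)
    res = minimize(spectral_abscissa, K0, method="Nelder-Mead",
                   options={"maxiter": 2000, "xatol": 1e-8, "fatol": 1e-8})
    if best is None or res.fun < best.fun:
        best = res

K_best = best.x.reshape(p, q)
print("best spectral abscissa:", best.fun)   # < 0 means A + B K C is stable
```

A negative objective value indicates a stabilizing K; the epsilon-pseudospectral abscissa or complex stability radius objectives from the abstract would replace `spectral_abscissa` in the same loop.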
{
"docid": "080e7880623a09494652fd578802c156",
"text": "Whole-cell biosensors are a good alternative to enzyme-based biosensors since they offer the benefits of low cost and improved stability. In recent years, live cells have been employed as biosensors for a wide range of targets. In this review, we will focus on the use of microorganisms that are genetically modified with the desirable outputs in order to improve the biosensor performance. Different methodologies based on genetic/protein engineering and synthetic biology to construct microorganisms with the required signal outputs, sensitivity, and selectivity will be discussed.",
"title": ""
},
{
"docid": "8724a0d439736a419835c1527f01fe43",
"text": "Shuffled frog-leaping algorithm (SFLA) is a new memetic meta-heuristic algorithm with efficient mathematical function and global search capability. Traveling salesman problem (TSP) is a complex combinatorial optimization problem, which is typically used as benchmark for testing the effectiveness as well as the efficiency of a newly proposed optimization algorithm. When applying the shuffled frog-leaping algorithm in TSP, memeplex and submemeplex are built and the evolution of the algorithm, especially the local exploration in submemeplex is carefully adapted based on the prototype SFLA. Experimental results show that the shuffled frog leaping algorithm is efficient for small-scale TSP. Particularly for TSP with 51 cities, the algorithm manages to find six tours which are shorter than the optimal tour provided by TSPLIB. The shortest tour length is 428.87 instead of 429.98 which can be found cited elsewhere.",
"title": ""
},
{
"docid": "827396df94e0bca08cee7e4d673044ef",
"text": "Localization in Wireless Sensor Networks (WSNs) is regarded as an emerging technology for numerous cyberphysical system applications, which equips wireless sensors with the capability to report data that is geographically meaningful for location based services and applications. However, due to the increasingly pervasive existence of smart sensors in WSN, a single localization technique that affects the overall performance is not sufficient for all applications. Thus, there have been many significant advances on localization techniques in WSNs in the past few years. The main goal in this paper is to present the state-of-the-art research results and approaches proposed for localization in WSNs. Specifically, we present the recent advances on localization techniques in WSNs by considering a wide variety of factors and categorizing them in terms of data processing (centralized vs. distributed), transmission range (range free vs. range based), mobility (static vs. mobile), operating environments (indoor vs. outdoor), node density (sparse vs dense), routing, algorithms, etc. The recent localization techniques in WSNs are also summarized in the form of tables. With this paper, readers can have a more thorough understanding of localization in sensor networks, as well as research trends and future research directions in this area.",
"title": ""
},
{
"docid": "fb7fc0398c951a584726a31ae307c53c",
"text": "In this paper, we use a advanced method called Faster R-CNN to detect traffic signs. This new method represents the highest level in object recognition, which don't need to extract image feature manually anymore and can segment image to get candidate region proposals automatically. Our experiment is based on a traffic sign detection competition in 2016 by CCF and UISEE company. The mAP(mean average precision) value of the result is 0.3449 that means Faster R-CNN can indeed be applied in this field. Even though the experiment did not achieve the best results, we explore a new method in the area of the traffic signs detection. We believe that we can get a better achievement in the future.",
"title": ""
},
{
"docid": "45885c7c86a05d2ba3979b689f7ce5c8",
"text": "Existing Markov Chain Monte Carlo (MCMC) methods are either based on generalpurpose and domain-agnostic schemes, which can lead to slow convergence, or problem-specific proposals hand-crafted by an expert. In this paper, we propose ANICE-MC, a novel method to automatically design efficient Markov chain kernels tailored for a specific domain. First, we propose an efficient likelihood-free adversarial training method to train a Markov chain and mimic a given data distribution. Then, we leverage flexible volume preserving flows to obtain parametric kernels for MCMC. Using a bootstrap approach, we show how to train efficient Markov chains to sample from a prescribed posterior distribution by iteratively improving the quality of both the model and the samples. Empirical results demonstrate that A-NICE-MC combines the strong guarantees of MCMC with the expressiveness of deep neural networks, and is able to significantly outperform competing methods such as Hamiltonian Monte Carlo.",
"title": ""
},
{
"docid": "190ec7d12156c298e8a545a5655df969",
"text": "The Linked Movie Database (LinkedMDB) project provides a demonstration of the first open linked dataset connecting several major existing (and highly popular) movie web resources. The database exposed by LinkedMDB contains millions of RDF triples with hundreds of thousands of RDF links to existing web data sources that are part of the growing Linking Open Data cloud, as well as to popular movierelated web pages such as IMDb. LinkedMDB uses a novel way of creating and maintaining large quantities of high quality links by employing state-of-the-art approximate join techniques for finding links, and providing additional RDF metadata about the quality of the links and the techniques used for deriving them.",
"title": ""
}
] |
scidocsrr
|
7d2dcba86295187b3e3b788600ae3558
|
Model-based Software Testing
|
[
{
"docid": "e94596df0531345dcb3026e9d3edcf2b",
"text": "The use of context-free grammars to improve functional testing of very-large-scale integrated circuits is described. It is shown that enhanced context-free grammars are effective tools for generating test data. The discussion covers preliminary considerations, the first tests, generating systematic tests, and testing subroutines. The author's experience using context-free grammars to generate tests for VLSI circuit simulators indicates that they are remarkably effective tools that virtually anyone can use to debug virtually any program.<<ETX>>",
"title": ""
}
] |
[
{
"docid": "d395193924613f6818511650d24cf9ae",
"text": "Assortment planning of substitutable products is a major operational issue that arises in many industries, such as retailing, airlines and consumer electronics. We consider a single-period joint assortment and inventory planning problem under dynamic substitution with stochastic demands, and provide complexity and algorithmic results as well as insightful structural characterizations of near-optimal solutions for important variants of the problem. First, we show that the assortment planning problem is NP-hard even for a very simple consumer choice model, where each customer is willing to buy only two products. In fact, we show that the problem is hard to approximate within a factor better than 1− 1/e. Secondly, we show that for several interesting and practical choice models, one can devise a polynomial-time approximation scheme (PTAS), i.e., the problem can be solved efficiently to within any level of accuracy. To the best of our knowledge, this is the first efficient algorithm with provably near-optimal performance guarantees for assortment planning problems under dynamic substitution. Quite surprisingly, the algorithm we propose stocks only a constant number of different product types; this constant depends only on the desired accuracy level. This provides an important managerial insight that assortments with a relatively small number of product types can obtain almost all of the potential revenue. Furthermore, we show that our algorithm can be easily adapted for more general choice models, and present numerical experiments to show that it performs significantly better than other known approaches.",
"title": ""
},
{
"docid": "c82c32d057557903184e55f0f76c7a4e",
"text": "An experimental program of steel panel shear walls is outlined and some results are presented. The tested specimens utilized low yield strength (LYS) steel infill panels and reduced beam sections (RBS) at the beam-ends. Two specimens make allowances for penetration of the panel by utilities, which would exist in a retrofit situation. The first, consisting of multiple holes, or perforations, in the steel panel, also has the characteristic of further reducing the corresponding solid panel strength (as compared with the use of traditional steel). The second such specimen utilizes quarter-circle cutouts in the panel corners, which are reinforced to transfer the panel forces to the adjacent framing.",
"title": ""
},
{
"docid": "659cc5b1999c962c9fb0b3544c8b928a",
"text": "During the recent years the mainstream framework for HCI research — the informationprocessing cognitive psychology —has gained more and more criticism because of serious problems in applying it both in research and practical design. In a debate within HCI research the capability of information processing psychology has been questioned and new theoretical frameworks searched. This paper presents an overview of the situation and discusses potentials of Activity Theory as an alternative framework for HCI research and design.",
"title": ""
},
{
"docid": "3fd7a368b1b35f96593ac79d8a1658bc",
"text": "Musical training has emerged as a useful framework for the investigation of training-related plasticity in the human brain. Learning to play an instrument is a highly complex task that involves the interaction of several modalities and higher-order cognitive functions and that results in behavioral, structural, and functional changes on time scales ranging from days to years. While early work focused on comparison of musical experts and novices, more recently an increasing number of controlled training studies provide clear experimental evidence for training effects. Here, we review research investigating brain plasticity induced by musical training, highlight common patterns and possible underlying mechanisms of such plasticity, and integrate these studies with findings and models for mechanisms of plasticity in other domains.",
"title": ""
},
{
"docid": "369cdea246738d5504669e2f9581ae70",
"text": "Content Security Policy (CSP) is an emerging W3C standard introduced to mitigate the impact of content injection vulnerabilities on websites. We perform a systematic, large-scale analysis of four key aspects that impact on the effectiveness of CSP: browser support, website adoption, correct configuration and constant maintenance. While browser support is largely satisfactory, with the exception of few notable issues, our analysis unveils several shortcomings relative to the other three aspects. CSP appears to have a rather limited deployment as yet and, more crucially, existing policies exhibit a number of weaknesses and misconfiguration errors. Moreover, content security policies are not regularly updated to ban insecure practices and remove unintended security violations. We argue that many of these problems can be fixed by better exploiting the monitoring facilities of CSP, while other issues deserve additional research, being more rooted into the CSP design.",
"title": ""
},
{
"docid": "7eba71bb191a31bd87cd9d2678a7b860",
"text": "In winter, rainbow smelt (Osmerus mordax) accumulate glycerol and produce an antifreeze protein (AFP), which both contribute to freeze resistance. The role of differential gene expression in the seasonal pattern of these adaptations was investigated. First, cDNAs encoding smelt and Atlantic salmon (Salmo salar) phosphoenolpyruvate carboxykinase (PEPCK) and smelt glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were cloned so that all sequences required for expression analysis would be available. Using quantitative PCR, expression of beta actin in rainbow smelt liver was compared with that of GAPDH in order to determine its validity as a reference gene. Then, levels of glycerol-3-phosphate dehydrogenase (GPDH), PEPCK, and AFP relative to beta actin were measured in smelt liver over a fall-winter-spring interval. Levels of GPDH mRNA increased in the fall just before plasma glycerol accumulation, implying a driving role in glycerol synthesis. GPDH mRNA levels then declined during winter, well in advance of serum glycerol, suggesting the possibility of GPDH enzyme or glycerol conservation in smelt during the winter months. PEPCK mRNA levels rose in parallel with serum glycerol in the fall, consistent with an increasing requirement for amino acids as metabolic precursors, remained elevated for much of the winter, and then declined in advance of the decline in plasma glycerol. AFP mRNA was elevated at the onset of fall sampling in October and remained elevated until April, implying separate regulation from GPDH and PEPCK. Thus, winter freezing point depression in smelt appears to result from a seasonal cycle of GPDH gene expression, with an ensuing increase in the expression of PEPCK, and a similar but independent cycle of AFP gene expression.",
"title": ""
},
{
"docid": "da17a995148ffcb4e219bb3f56f5ce4a",
"text": "As education communities grow more interested in STEM (science, technology, engineering, and mathematics), schools have integrated more technology and engineering opportunities into their curricula. Makerspaces for all ages have emerged as a way to support STEM learning through creativity, community building, and hands-on learning. However, little research has evaluated the learning that happens in these spaces, especially in young children. One framework that has been used successfully as an evaluative tool in informal and technology-rich learning spaces is Positive Technological Development (PTD). PTD is an educational framework that describes positive behaviors children exhibit while engaging in digital learning experiences. In this exploratory case study, researchers observed children in a makerspace to determine whether the environment (the space and teachers) contributed to children’s Positive Technological Development. N = 20 children and teachers from a Kindergarten classroom were observed over 6 hours as they engaged in makerspace activities. The children’s activity, teacher’s facilitation, and the physical space were evaluated for alignment with the PTD framework. Results reveal that children showed high overall PTD engagement, and that teachers and the space supported children’s learning in complementary aspects of PTD. Recommendations for practitioners hoping to design and implement a young children’s makerspace are discussed.",
"title": ""
},
{
"docid": "f9ee2d57aa034ea14749de81e241d856",
"text": "Advances in computing technology and computer graphics engulfed with huge collections of data have introduced new visualization techniques. This gives users many choices of visualization techniques to gain an insight about the dataset at hand. However, selecting the most suitable visualization for a given dataset and the task to be performed on the data is subjective. The work presented here introduces a set of visualization metrics to quantify visualization techniques. Based on a comprehensive literature survey, we propose effectiveness, expressiveness, readability, and interactivity as the visualization metrics. Using these metrics, a framework for optimizing the layout of a visualization technique is also presented. The framework is based on an evolutionary algorithm (EA) which uses treemaps as a case study. The EA starts with a randomly initialized population, where each chromosome of the population represents one complete treemap. Using the genetic operators and the proposed visualization metrics as an objective function, the EA finds the optimum visualization layout. The visualizations that evolved are compared with the state-of-the-art treemap visualization tool through a user study. The user study utilizes benchmark tasks for the evaluation. A comparison is also performed using direct assessment, where internal and external visualization metrics are used. Results are further verified using analysis of variance (ANOVA) test. The results suggest better performance of the proposed metrics and the EA-based framework for optimizing visualization layout. The proposed methodology can also be extended to other visualization techniques. © 2017 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "bfe58868ab05a6ba607ef1f288d37f33",
"text": "There is much debate as to whether online offenders are a distinct group of sex offenders or if they are simply typical sex offenders using a new technology. A meta-analysis was conducted to examine the extent to which online and offline offenders differ on demographic and psychological variables. Online offenders were more likely to be Caucasian and were slightly younger than offline offenders. In terms of psychological variables, online offenders had greater victim empathy, greater sexual deviancy, and lower impression management than offline offenders. Both online and offline offenders reported greater rates of childhood physical and sexual abuse than the general population. Additionally, online offenders were more likely to be Caucasian, younger, single, and unemployed compared with the general population. Many of the observed differences can be explained by assuming that online offenders, compared with offline offenders, have greater self-control and more psychological barriers to acting on their deviant interests.",
"title": ""
},
{
"docid": "7f0023af2f3df688aa58ae3317286727",
"text": "Time-parameterized queries (TP queries for short) retrieve (i) the actual result at the time that the query is issued, (ii) the validity period of the result given the current motion of the query and the database objects, and (iii) the change that causes the expiration of the result. Due to the highly dynamic nature of several spatio-temporal applications, TP queries are important both as standalone methods, as well as building blocks of more complex operations. However, little work has been done towards their efficient processing. In this paper, we propose a general framework that covers time-parameterized variations of the most common spatial queries, namely window queries, k-nearest neighbors and spatial joins. In particular, each of these TP queries is reduced to nearest neighbor search where the distance functions are defined according to the query type. This reduction allows the application and extension of well-known branch and bound techniques to the current problem. The proposed methods can be applied with mobile queries, mobile objects or both, given a suitable indexing method. Our experimental evaluation is based on R-trees and their extensions for dynamic objects.",
"title": ""
},
{
"docid": "1eac633f903fb5fa1d37405ef0ca59a5",
"text": "OBJECTIVE\nTo examine psychometric properties of the Self-Care Inventory-revised (SCI-R), a self-report measure of perceived adherence to diabetes self-care recommendations, among adults with diabetes.\n\n\nRESEARCH DESIGN AND METHODS\nWe used three data sets of adult type 1 and type 2 diabetic patients to examine psychometric properties of the SCI-R. Principal component and factor analyses examined whether a general factor or common factors were present. Associations with measures of theoretically related concepts were examined to assess SCI-R concurrent and convergent validity. Internal reliability coefficients were calculated. Responsiveness was assessed using paired t tests, effect size, and Guyatt's statistic for type 1 patients who completed psychoeducation.\n\n\nRESULTS\nPrincipal component and factor analyses identified a general factor but no consistent common factors. Internal consistency of the SCI-R was alpha = 0.87. Correlation with a measure of frequency of diabetes self-care behaviors was r = 0.63, providing evidence for SCI-R concurrent validity. The SCI-R correlated with diabetes-related distress (r = -0.36), self-esteem (r = 0.25), self-efficacy (r = 0.47), depression (r = -0.22), anxiety (r = -0.24), and HbA(1c) (r = -0.37), supporting construct validity. Responsiveness analyses showed SCI-R scores improved with diabetes psychoeducation with a medium effect size of 0.62 and a Guyatt's statistic of 0.85.\n\n\nCONCLUSIONS\nThe SCI-R is a brief, psychometrically sound measure of perceptions of adherence to recommended diabetes self-care behaviors of adults with type 1 or type 2 diabetes.",
"title": ""
},
{
"docid": "9d34b66f9d387cb61e358c46568f03dd",
"text": "This and the companion paper present an analysis of the amplitude and time-dependent changes of the apparent frequency of a seven-story reinforced-concrete hotel building in Van Nuys, Calif. Data of recorded response to 12 earthquakes are used, representing very small, intermediate, and large excitations (peak ground velocity, vmax = 0.6 2 11, 23, and 57 cm/s, causing no minor and major damage). This paper presents a description of the building structure, foundation, and surrounding soil, the strong motion data used in the analysis, the soil-structure interaction model assumed, and results of Fourier analysis of the recorded response. The results show that the apparent frequency changes form one earthquake to another. The general trend is a reduction with increasing amplitudes of motion. The smallest values (measured during the damaging motions) are 0.4 and 0.5 Hz for the longitudinal and transverse directions. The largest values are 1.1 and 1.4 Hz, respectively, determined from response to ambient noise after the damage occurred. This implies 64% reduction of the system frequency, or a factor '3 change, from small to large response amplitudes, and is interpreted to be caused by nonlinearities in the soil.",
"title": ""
},
{
"docid": "201377a4c2d29c907c33f8cdfe6d7084",
"text": "•Sequence of tokens mapped to word embeddings. •Bidirectional LSTM builds context-dependent representations for each word. •A small feedforward layer encourages generalisation. •Conditional Random Field (CRF) at the top outputs the most optimal label sequence for the sentence. •Unable to model unseen words, learns poor representations for infrequent words, and unable to capture character-level patterns.",
"title": ""
},
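The bullet-point abstract above describes a tagger built from word embeddings, a bidirectional LSTM, a small feedforward layer, and a CRF output layer. A minimal PyTorch sketch of the encoder part follows; the CRF decoder is omitted and replaced by per-token tag scores, and all layer sizes and the vocabulary size are illustrative assumptions.

```python
# Minimal sketch of the tagger described above: word embeddings -> BiLSTM ->
# small feedforward layer -> per-token tag scores. The CRF layer from the
# abstract is omitted; a real system would decode with it instead of argmax.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128, n_tags=9):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.ff = nn.Sequential(nn.Linear(2 * hidden, 64), nn.Tanh())
        self.out = nn.Linear(64, n_tags)

    def forward(self, token_ids):              # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))  # (batch, seq_len, 2*hidden)
        return self.out(self.ff(h))            # (batch, seq_len, n_tags)

model = BiLSTMTagger()
tokens = torch.randint(0, 10000, (2, 12))      # two dummy sentences
scores = model(tokens)
print(scores.shape)                            # torch.Size([2, 12, 9])
```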
{
"docid": "cc56706151e027c89eea5639486d4cd3",
"text": "To refine user interest profiling, this paper focuses on extending scientific subject ontology via keyword clustering and on improving the accuracy and effectiveness of recommendation of the electronic academic publications in online services. A clustering approach is proposed for domain keywords for the purpose of the subject ontology extension. Based on the keyword clusters, the construction of user interest profiles is presented on a rather fine granularity level. In the construction of user interest profiles, we apply two types of interest profiles: explicit profiles and implicit profiles. The explicit eighted keyword graph",
"title": ""
},
{
"docid": "f4e67e19f5938f475a2757282082b695",
"text": "Classrooms are complex social systems, and student-teacher relationships and interactions are also complex, multicomponent systems. We posit that the nature and quality of relationship interactions between teachers and students are fundamental to understanding student engagement, can be assessed through standardized observation methods, and can be changed by providing teachers knowledge about developmental processes relevant for classroom interactions and personalized feedback/support about their interactive behaviors and cues. When these supports are provided to teachers’ interactions, student engagement increases. In this chapter, we focus on the theoretical and empirical links between interactions and engagement and present an approach to intervention designed to increase the quality of such interactions and, in turn, increase student engagement and, ultimately, learning and development. Recognizing general principles of development in complex systems, a theory of the classroom as a setting for development, and a theory of change specifi c to this social setting are the ultimate goals of this work. Engagement, in this context, is both an outcome in its own R. C. Pianta , Ph.D. (*) Curry School of Education , University of Virginia , PO Box 400260 , Charlottesville , VA 22904-4260 , USA e-mail: rcp4p@virginia.edu B. K. Hamre , Ph.D. Center for Advanced Study of Teaching and Learning , University of Virginia , Charlottesville , VA , USA e-mail: bkh3d@virginia.edu J. P. Allen , Ph.D. Department of Psychology , University of Virginia , Charlottesville , VA , USA e-mail: allen@virginia.edu Teacher-Student Relationships and Engagement: Conceptualizing, Measuring, and Improving the Capacity of Classroom Interactions* Robert C. Pianta , Bridget K. Hamre , and Joseph P. Allen *Preparation of this chapter was supported in part by the Wm. T. Grant Foundation, the Foundation for Child Development, and the Institute of Education Sciences. 366 R.C. Pianta et al.",
"title": ""
},
{
"docid": "34d16a5eb254846f431e2c716309e20a",
"text": "AIM\nWe investigated the uptake and pharmacokinetics of l-ergothioneine (ET), a dietary thione with free radical scavenging and cytoprotective capabilities, after oral administration to humans, and its effect on biomarkers of oxidative damage and inflammation.\n\n\nRESULTS\nAfter oral administration, ET is avidly absorbed and retained by the body with significant elevations in plasma and whole blood concentrations, and relatively low urinary excretion (<4% of administered ET). ET levels in whole blood were highly correlated to levels of hercynine and S-methyl-ergothioneine, suggesting that they may be metabolites. After ET administration, some decreasing trends were seen in biomarkers of oxidative damage and inflammation, including allantoin (urate oxidation), 8-hydroxy-2'-deoxyguanosine (DNA damage), 8-iso-PGF2α (lipid peroxidation), protein carbonylation, and C-reactive protein. However, most of the changes were non-significant.\n\n\nINNOVATION\nThis is the first study investigating the administration of pure ET to healthy human volunteers and monitoring its uptake and pharmacokinetics. This compound is rapidly gaining attention due to its unique properties, and this study lays the foundation for future human studies.\n\n\nCONCLUSION\nThe uptake and retention of ET by the body suggests an important physiological function. The decreasing trend of oxidative damage biomarkers is consistent with animal studies suggesting that ET may function as a major antioxidant but perhaps only under conditions of oxidative stress. Antioxid. Redox Signal. 26, 193-206.",
"title": ""
},
{
"docid": "883042a6004a5be3865da51da20fa7c9",
"text": "Green Mining is a field of MSR that studies software energy consumption and relies on software performance data. Unfortunately there is a severe lack of publicly available software power use performance data. This means that green mining researchers must generate this data themselves by writing tests, building multiple revisions of a product, and then running these tests multiple times (10+) for each software revision while measuring power use. Then, they must aggregate these measurements to estimate the energy consumed by the tests for each software revision. This is time consuming and is made more difficult by the constraints of mobile devices and their OSes. In this paper we propose, implement, and demonstrate Green Miner: the first dedicated hardware mining software repositories testbed. The Green Miner physically measures the energy consumption of mobile devices (Android phones) and automates the testing of applications, and the reporting of measurements back to developers and researchers. The Green Miner has already produced valuable results for commercial Android application developers, and has been shown to replicate other power studies' results.",
"title": ""
},
{
"docid": "ba9ee073a073c31bfa0d1845a90f12ca",
"text": "Nowadays, health disease are increasing day by day due to life style, hereditary. Especially, heart disease has become more common these days, i.e. life of people is at risk. Each individual has different values for Blood pressure, cholesterol and pulse rate. But according to medically proven results the normal values of Blood pressure is 120/90, cholesterol is and pulse rate is 72. This paper gives the survey about different classification techniques used for predicting the risk level of each person based on age, gender, Blood pressure, cholesterol, pulse rate. The patient risk level is classified using datamining classification techniques such as Naïve Bayes, KNN, Decision Tree Algorithm, Neural Network. etc., Accuracy of the risk level is high when using more number of attributes.",
"title": ""
},
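The survey above compares classifiers such as Naive Bayes, KNN, and decision trees for predicting risk level from attributes like age, gender, blood pressure, cholesterol, and pulse rate. A minimal sketch of such a comparison on purely synthetic, non-clinical data follows; the label rule and all feature values are fabricated for illustration only.

```python
# Sketch: compare a few classifiers on synthetic "risk level" data with the
# attributes named above (age, gender, blood pressure, cholesterol, pulse).
# All numbers are synthetic placeholders, not clinical data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.integers(25, 80, n),          # age
    rng.integers(0, 2, n),            # gender (0/1)
    rng.normal(120, 15, n),           # systolic blood pressure
    rng.normal(200, 30, n),           # cholesterol
    rng.normal(72, 8, n),             # pulse rate
])
# Toy rule for the label: "high risk" if BP and cholesterol are both elevated.
y = ((X[:, 2] > 130) & (X[:, 3] > 210)).astype(int)

for name, clf in [("NaiveBayes", GaussianNB()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("DecisionTree", DecisionTreeClassifier(max_depth=4))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```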
{
"docid": "b5b08bdd830144741cf900f6d41fe87d",
"text": "A wealth of research has established that practice tests improve memory for the tested material. Although the benefits of practice tests are well documented, the mechanisms underlying testing effects are not well understood. We propose the mediator effectiveness hypothesis, which states that more-effective mediators (that is, information linking cues to targets) are generated during practice involving tests with restudy versus during restudy only. Effective mediators must be retrievable at time of test and must elicit the target response. We evaluated these two components of mediator effectiveness for learning foreign language translations during practice involving either test-restudy or restudy only. Supporting the mediator effectiveness hypothesis, test-restudy practice resulted in mediators that were more likely to be retrieved and more likely to elicit targets on a final test.",
"title": ""
},
{
"docid": "bf14f996f9013351aca1e9935157c0e3",
"text": "Attributed graphs are becoming important tools for modeling information networks, such as the Web and various social networks (e.g. Facebook, LinkedIn, Twitter). However, it is computationally challenging to manage and analyze attributed graphs to support effective decision making. In this paper, we propose, Pagrol, a parallel graph OLAP (Online Analytical Processing) system over attributed graphs. In particular, Pagrol introduces a new conceptual Hyper Graph Cube model (which is an attributed-graph analogue of the data cube model for relational DBMS) to aggregate attributed graphs at different granularities and levels. The proposed model supports different queries as well as a new set of graph OLAP Roll-Up/Drill-Down operations. Furthermore, on the basis of Hyper Graph Cube, Pagrol provides an efficient MapReduce-based parallel graph cubing algorithm, MRGraph-Cubing, to compute the graph cube for an attributed graph. Pagrol employs numerous optimization techniques: (a) a self-contained join strategy to minimize I/O cost; (b) a scheme that groups cuboids into batches so as to minimize redundant computations; (c) a cost-based scheme to allocate the batches into bags (each with a small number of batches); and (d) an efficient scheme to process a bag using a single MapReduce job. Results of extensive experimental studies using both real Facebook and synthetic datasets on a 128-node cluster show that Pagrol is effective, efficient and scalable.",
"title": ""
}
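The Pagrol abstract above aggregates an attributed graph at every combination of vertex attributes, analogously to a relational data cube. A minimal single-machine sketch of that cuboid enumeration with pandas follows; the MapReduce distribution, cuboid batching, and bag scheduling described in the abstract are not modeled, and the toy attributes are assumptions.

```python
# Sketch: enumerate all cuboids over vertex attributes of a tiny attributed
# graph and count vertices per aggregate cell. This mimics the Hyper Graph
# Cube idea on one machine; Pagrol's MapReduce batching is not modeled.
from itertools import combinations
import pandas as pd

vertices = pd.DataFrame({
    "gender": ["M", "F", "F", "M", "F"],
    "city":   ["NY", "NY", "LA", "LA", "NY"],
    "job":    ["eng", "eng", "law", "law", "eng"],
})

dims = list(vertices.columns)
for r in range(len(dims) + 1):
    for cuboid in combinations(dims, r):
        if cuboid:
            cells = vertices.groupby(list(cuboid)).size()
        else:
            cells = pd.Series({"ALL": len(vertices)})   # apex cuboid
        print(cuboid or ("*",), dict(cells))
```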
] |
scidocsrr
|
590b171dde0c348430ff6e9098d7a4c6
|
Machine learning, medical diagnosis, and biomedical engineering research - commentary
|
[
{
"docid": "ea8716e339cdc51210f64436a5c91c44",
"text": "Feature selection has been the focus of interest for quite some time and much work has been done. With the creation of huge databases and the consequent requirements for good machine learning techniques, new problems arise and novel approaches to feature selection are in demand. This survey is a comprehensive overview of many existing methods from the 1970’s to the present. It identifies four steps of a typical feature selection method, and categorizes the different existing methods in terms of generation procedures and evaluation functions, and reveals hitherto unattempted combinations of generation procedures and evaluation functions. Representative methods are chosen from each category for detailed explanation and discussion via example. Benchmark datasets with different characteristics are used for comparative study. The strengths and weaknesses of different methods are explained. Guidelines for applying feature selection methods are given based on data types and domain characteristics. This survey identifies the future research areas in feature selection, introduces newcomers to this field, and paves the way for practitioners who search for suitable methods for solving domain-specific real-world applications. (Intelligent Data Analysis, Vol. I, no. 3, http:llwwwelsevier.co&ocate/ida)",
"title": ""
}
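The feature selection survey above organizes methods by generation procedure and evaluation function. As a minimal illustration of the simplest family it covers, the sketch below ranks features with a univariate filter score and keeps the top k; it is a generic example on synthetic data, not any particular method from the survey.

```python
# Sketch of a filter-style feature selector: rank features by a univariate
# score and keep the top k. This illustrates one generation/evaluation pairing
# from the survey's taxonomy, not any specific published method.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           random_state=0)
selector = SelectKBest(score_func=mutual_info_classif, k=4).fit(X, y)
print("selected feature indices:", selector.get_support(indices=True))
```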
] |
[
{
"docid": "a827d89c56521de7dff8a59039c52181",
"text": "A set of tools is being prepared in the frame of ESA activity [18191/04/NL] labelled: \"Mars Rover Chassis Evaluation Tools\" to support design, selection and optimisation of space exploration rovers in Europe. This activity is carried out jointly by Contraves Space as prime contractor, EPFL, DLR, Surrey Space Centre and EADS Space Transportation. This paper describes the current results of this study and its intended used for selection, design and optimisation on different wheeled vehicles. These tools would also allow future developments for a more efficient motion control on rover. INTRODUCTION AND MOTIVATION A set of tools is being developed to support the design of planetary rovers in Europe. The RCET will enable accurate predictions and characterisations of rover performances as related to the locomotion subsystem. This infrastructure consists of both S/W and H/W elements that will be interwoven to result in a user-friendly environment. The actual need for mobility increased in terms of range and duration. In this respect, redesigning specific aspects of the past rover concepts, in particular the development of most suitable all terrain performances is appropriate [9]. Analysis and design methodologies for terrestrial surface vehicles to operate on unprepared surfaces have been successfully applied to planet rover developments for the first time during the Apollo LRV manned lunar rover programme of the late 1960’s and early 1970’s [1,2]. Key to this accomplishment and to rational surface vehicle designs in general are quantitative descriptions of the terrain and of the interaction between the terrain and the vehicle. Not only the wheel/ground interaction is essential for efficient locomotion, but also the rover kinematics concepts. In recent terrestrial off-the-road vehicle development and acquisition, especially in the military, the so-called ‘Virtual Proving Ground’ (VPG) Simulation Technology has become essential. The integrated environments previously available to design engineers involved sophisticated hardware and software and cost hundreds of thousands of Euros. The experimentation and operational costs associated with the use of such instruments were even more alarming. The promise of VPG is to lower the risk and cost in vehicle definition and design by allowing early concept characterisation and trade-off’s based on numerical models without having to rely on prototyping for concept assessment. A similar approach is proposed for future European planetary rover programmes and is to be enabled by RCET. The first part of this paper describes the methodology used in the RCET activity and gives an overview of the different tools under development. The next section details the theory and modules used for the simulation. Finally the last section relates the first results, the future work and concludes this paper. In Proceedings of the 8th ESA Workshop on Advanced Space Technologies for Robotics and Automation 'ASTRA 2004' ESTEC, Noordwijk, The Netherlands, November 2 4, 2004",
"title": ""
},
{
"docid": "2c328d1dd45733ad8063ea89a6b6df43",
"text": "We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvement. We study RPL in five challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently.",
"title": ""
},
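Residual Policy Learning, as described above, superimposes a learned correction on the action of a hand-designed controller. A minimal sketch of that composition follows; the base controller, network sizes, and observation/action dimensions are illustrative assumptions, and the model-free RL training of the residual is not shown.

```python
# Sketch of the residual-policy composition: final action = base controller's
# action + learned residual. The residual net below is untrained; RPL would
# train it with model-free RL while keeping the base controller fixed.
import torch
import torch.nn as nn

def base_controller(obs: torch.Tensor) -> torch.Tensor:
    # Stand-in "good but imperfect" controller: a simple proportional rule.
    return -0.5 * obs[..., :2]                # assumes a 2-D action space

class ResidualPolicy(nn.Module):
    def __init__(self, obs_dim=4, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim))

    def forward(self, obs):
        return base_controller(obs) + self.net(obs)   # residual correction

policy = ResidualPolicy()
obs = torch.randn(8, 4)                       # batch of dummy observations
print(policy(obs).shape)                      # torch.Size([8, 2])
```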
{
"docid": "0b10bd76d0d78e609c6397b60257a2ed",
"text": "Persistent increase in population of world is demanding more and more supply of food. Hence there is a significant need of advancement in cultivation to meet up the future food needs. It is important to know moisture levels in soil to maximize the output. But most of farmers cannot afford high cost devices to measure soil moisture. Our research work in this paper focuses on home-made low cost moisture sensor with accuracy. In this paper we present a method to manufacture soil moisture sensor to estimate moisture content in soil hence by providing information about required water supply for good cultivation. This sensor is tested with several samples of soil and able to meet considerable accuracy. Measuring soil moisture is an effective way to determine condition of soil and get information about the quantity of water that need to be supplied for cultivation. Two separate methods are illustrated in this paper to determine soil moisture over an area and along the depth.",
"title": ""
},
{
"docid": "0038c1aaa5d9823f44c118a7048d574a",
"text": "We present the design and implementation of a system which allows a standard paper-based exam to be graded via tablet computers. The paper exam is given normally in a course, with a specialized footer that allows for automated recognition of each exam page. The exam pages are then scanned in via a high-speed scanner, graded by one or more people using tablet computers, and returned electronically to the students. The system provides many advantages over regular paper-based exam grading, and boasts a faster grading experience than traditional grading methods.",
"title": ""
},
{
"docid": "7e8feb5f8d816a0c0626f6fdc4db7c04",
"text": "In this paper, we analyze if cascade usage of the context encoder with increasing input can improve the results of the inpainting. For this purpose, we train context encoder for 64x64 pixels images in a standard way and use its resized output to fill in the missing input region of the 128x128 context encoder, both in training and evaluation phase. As the result, the inpainting is visibly more plausible. In order to thoroughly verify the results, we introduce normalized squared-distortion, a measure for quantitative inpainting evaluation, and we provide its mathematical explanation. This is the first attempt to formalize the inpainting measure, which is based on the properties of latent feature representation, instead of L2 reconstruction loss.",
"title": ""
},
{
"docid": "038e48bcae7346ef03a318bb3a280bcc",
"text": "Low back pain (LBP) is a problem worldwide with a lifetime prevalence reported to be as high as 84%. The lifetime prevalence of low back pain is reported to be as high as 84%, and the prevalence of chronic low back pain is about 23%, with 11–12% of the population being disabled by low back pain [1]. LBP is defined as pain experienced between the twelfth rib and the inferior gluteal fold, with or without associated leg pain [2]. Based on the etiology LBP is classified as Specific Low Back Pain and Non-specific Low Back Pain. Of all the LBP patients 10% are attributed to Specific and 90% are attributed to NonSpecific Low Back Pain (NSLBP) [3]. Specific LBP are those back pains which have specific etiology causes like Sponylolisthesis, Spondylosis, Ankylosing Spondylitis, Prolapsed disc etc.",
"title": ""
},
{
"docid": "67d8680a41939c58a866f684caa514a3",
"text": "Triboelectric effect works on the principle of triboelectrification and electrostatic induction. This principle is used to generate voltage by converting mechanical energy into electrical energy. This paper presents the charging behavior of different capacitors by rubbing of two different materials using mechanical motion. The numerical and simulation modeling, describes the charging performance of a TENG with a bridge rectifier. It is also demonstrated that a 10 μF capacitor can be charged to a maximum of 24.04 volt in 300 seconds and it is also provide 2800 μJ/cm3 maximum energy density. Such system can be used for ultralow power electronic devices, biomedical devices and self-powered appliances etc.",
"title": ""
},
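As a rough consistency check on the capacitor figures above, the energy stored in an ideal capacitor is E = (1/2)CV^2, which for 10 μF charged to 24.04 V gives about 2.9 mJ; note that the abstract's 2800 μJ/cm3 is a volumetric energy density, so the two numbers are related in magnitude but are not the same quantity.

```python
# Energy stored in an ideal capacitor, E = 0.5 * C * V^2, using the abstract's
# 10 uF and 24.04 V. Note the abstract's 2800 uJ/cm^3 is a volumetric energy
# density, a different (though comparable-magnitude) quantity.
C = 10e-6          # farads
V = 24.04          # volts
E = 0.5 * C * V**2
print(f"{E * 1e6:.0f} microjoules")   # ~2890 uJ
```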
{
"docid": "09f19a5e4751dc3ee4aa38817aafd3cf",
"text": "Article history: Received 10 September 2012 Received in revised form 12 March 2013 Accepted 24 March 2013 Available online 23 April 2013",
"title": ""
},
{
"docid": "44e28ba2149dce27fd0ccc9ed2065feb",
"text": "Flip chip assembly technology is an attractive solution for high I/O density and fine-pitch microelectronics packaging. Recently, high efficient GaN-based light-emitting diodes (LEDs) have undergone a rapid development and flip chip bonding has been widely applied to fabricate high-brightness GaN micro-LED arrays [1]. The flip chip GaN LED has some advantages over the traditional top-emission LED, including improved current spreading, higher light extraction efficiency, better thermal dissipation capability and the potential of further optical component integration [2, 3]. With the advantages of flip chip assembly, micro-LED (μLED) arrays with high I/O density can be performed with improved luminous efficiency than conventional p-side-up micro-LED arrays and are suitable for many potential applications, such as micro-displays, bio-photonics and visible light communications (VLC), etc. In particular, μLED array based selif-emissive micro-display has the promising to achieve high brightness and contrast, reliability, long-life and compactness, which conventional micro-displays like LCD, OLED, etc, cannot compete with. In this study, GaN micro-LED array device with flip chip assembly package process was presented. The bonding quality of flip chip high density micro-LED array is tested by daisy chain test. The p-n junction tests of the devices are measured for electrical characteristics. The illumination condition of each micro-diode pixel was examined under a forward bias. Failure mode analysis was performed using cross sectioning and scanning electron microscopy (SEM). Finally, the fully packaged micro-LED array device is demonstrated as a prototype of dice projector system.",
"title": ""
},
{
"docid": "a112cd88f637ecb0465935388bc65ca4",
"text": "This paper shows a Class-E RF power amplifier designed to obtain a flat-top transistor-voltage waveform whose peak value is 81% of the peak value of the voltage of a “Classical” Class-E amplifier.",
"title": ""
},
{
"docid": "1de10e40580ba019045baaa485f8e729",
"text": "Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on the simple patch similarity, thus not necessarily providing optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly. Specifically, sparsity constraint is imposed upon label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risks of including the misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies will be further recursively updated based on the latest labeling results to correct the possible labeling errors, which falls to the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved with comparison to the conventional patch-based labeling method, indicating the potential application of the proposed method in the future clinical studies.",
"title": ""
},
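The label-fusion abstract above imposes a sparsity constraint so that only a few atlas patches represent each target patch. A minimal sketch of that sparse-weight step with a non-negative Lasso and a weighted label vote follows; the patches are synthetic vectors, and the paper's atlas-dependency modeling and EM refinement are omitted.

```python
# Sketch of sparsity-constrained label fusion: represent the target patch as a
# sparse non-negative combination of atlas patches, then fuse atlas labels with
# those weights. Patch data here are synthetic; the paper's atlas-dependency
# term and EM updates are not modeled.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
atlas_patches = rng.normal(size=(30, 50))        # 30 atlas patches, 50 voxels
atlas_labels = rng.integers(0, 2, size=30)       # label of each patch's center
target_patch = 0.6 * atlas_patches[3] + 0.4 * atlas_patches[17]

lasso = Lasso(alpha=0.05, positive=True, fit_intercept=False)
lasso.fit(atlas_patches.T, target_patch)         # columns = atlas patches
w = lasso.coef_                                  # sparse, non-negative weights

fused = np.bincount(atlas_labels, weights=w, minlength=2)
print("nonzero weights:", np.count_nonzero(w), "-> fused label", fused.argmax())
```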
{
"docid": "753a4af9741cd3fec4e0e5effaf5fc67",
"text": "With the growing volume of online information, recommender systems have been an effective strategy to overcome information overload. The utility of recommender systems cannot be overstated, given their widespread adoption in many web applications, along with their potential impact to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing, owing not only to stellar performance but also to the attractive property of learning feature representations from scratch. The influence of deep learning is also pervasive, recently demonstrating its effectiveness when applied to information retrieval and recommender systems research. The field of deep learning in recommender system is flourishing. This article aims to provide a comprehensive review of recent research efforts on deep learning-based recommender systems. More concretely, we provide and devise a taxonomy of deep learning-based recommendation models, along with a comprehensive summary of the state of the art. Finally, we expand on current trends and provide new perspectives pertaining to this new and exciting development of the field.",
"title": ""
},
{
"docid": "18c507d6624f153cb1b7beaf503b0d54",
"text": "The critical period hypothesis for language acquisition (CP) proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. The CP hypothesis was originally proposed for spoken language but recent research has shown that it applies equally to sign language. This paper summarizes a series of experiments designed to investigate whether and how the CP affects the outcome of sign language acquisition. The results show that the CP has robust effects on the development of sign language comprehension. Effects are found at all levels of linguistic structure (phonology, morphology and syntax, the lexicon and semantics) and are greater for first as compared to second language acquisition. In addition, CP effects have been found on all measures of language comprehension examined to date, namely, working memory, narrative comprehension, sentence memory and interpretation, and on-line grammatical processing. The nature of these effects with respect to a model of language comprehension is discussed.",
"title": ""
},
{
"docid": "84f9a6913a7689a5bbeb04f3173237b2",
"text": "BACKGROUND\nPsychosocial treatments are the mainstay of management of autism in the UK but there is a notable lack of a systematic evidence base for their effectiveness. Randomised controlled trial (RCT) studies in this area have been rare but are essential because of the developmental heterogeneity of the disorder. We aimed to test a new theoretically based social communication intervention targeting parental communication in a randomised design against routine care alone.\n\n\nMETHODS\nThe intervention was given in addition to existing care and involved regular monthly therapist contact for 6 months with a further 6 months of 2-monthly consolidation sessions. It aimed to educate parents and train them in adapted communication tailored to their child's individual competencies. Twenty-eight children with autism were randomised between this treatment and routine care alone, stratified for age and baseline severity. Outcome was measured at 12 months from commencement of intervention, using standardised instruments.\n\n\nRESULTS\nAll cases studied met full Autism Diagnostic Interview (ADI) criteria for classical autism. Treatment and controls had similar routine care during the study period and there were no study dropouts after treatment had started. The active treatment group showed significant improvement compared with controls on the primary outcome measure--Autism Diagnostic Observation Schedule (ADOS) total score, particularly in reciprocal social interaction--and on secondary measures of expressive language, communicative initiation and parent-child interaction. Suggestive but non-significant results were found in Vineland Adaptive Behaviour Scales (Communication Sub-domain) and ADOS stereotyped and restricted behaviour domain.\n\n\nCONCLUSIONS\nA Randomised Treatment Trial design of this kind in classical autism is feasible and acceptable to patients. This pilot study suggests significant additional treatment benefits following a targeted (but relatively non-intensive) dyadic social communication treatment, when compared with routine care. The study needs replication on larger and independent samples. It should encourage further RCT designs in this area.",
"title": ""
},
{
"docid": "2afb992058eb720ff0baf4216e3a22c2",
"text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: Summary. — A longitudinal anthropological study of cotton farming in Warangal District of Andhra Pradesh, India, compares a group of villages before and after adoption of Bt cotton. It distinguishes \" field-level \" and \" farm-level \" impacts. During this five-year period yields rose by 18% overall, with greater increases among poor farmers with the least access to information. Insecticide sprayings dropped by 55%, although predation by non-target pests was rising. However shifting from the field to the historically-situated context of the farm recasts insect attacks as a symptom of larger problems in agricultural decision-making. Bt cotton's opponents have failed to recognize real benefits at the field level, while its backers have failed to recognize systemic problems that Bt cotton may exacerbate.",
"title": ""
},
{
"docid": "a740207cc7d4a0db263dae2b7c9402d9",
"text": "In this paper we propose a Deep Autoencoder Mixture Clustering (DAMIC) algorithm based on a mixture of deep autoencoders where each cluster is represented by an autoencoder. A clustering network transforms the data into another space and then selects one of the clusters. Next, the autoencoder associated with this cluster is used to reconstruct the data-point. The clustering algorithm jointly learns the nonlinear data representation and the set of autoencoders. The optimal clustering is found by minimizing the reconstruction loss of the mixture of autoencoder network. Unlike other deep clustering algorithms, no regularization term is needed to avoid data collapsing to a single point. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.",
"title": ""
},
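The DAMIC abstract above represents each cluster by its own autoencoder and trains a clustering network jointly with them by minimizing the mixture's reconstruction loss. A minimal PyTorch sketch of a DAMIC-style objective on synthetic data follows; the network sizes, optimizer settings, and soft-assignment weighting are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of a DAMIC-style objective: K autoencoders (one per cluster) plus a
# clustering net; the loss is the soft-assignment-weighted reconstruction error.
# Sizes, data, and training schedule are illustrative only.
import torch
import torch.nn as nn

K, d, h = 3, 10, 4
def make_ae():
    return nn.Sequential(nn.Linear(d, h), nn.ReLU(), nn.Linear(h, d))

autoencoders = nn.ModuleList([make_ae() for _ in range(K)])
cluster_net = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, K))
params = list(autoencoders.parameters()) + list(cluster_net.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(256, d)                                  # synthetic data batch
for step in range(200):
    p = torch.softmax(cluster_net(x), dim=1)             # (256, K) soft assignments
    recon_err = torch.stack(
        [((ae(x) - x) ** 2).mean(dim=1) for ae in autoencoders], dim=1)
    loss = (p * recon_err).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", loss.item(),
      "cluster sizes:", p.argmax(1).bincount().tolist())
```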
{
"docid": "a8abc8da0f2d5f8055c4ed6ea2294c6c",
"text": "This paper presents the design of a modulated metasurface (MTS) antenna capable to provide both right-hand (RH) and left-hand (LH) circularly polarized (CP) boresight radiation at Ku-band (13.5 GHz). This antenna is based on the interaction of two cylindrical-wavefront surface wave (SW) modes of transverse electric (TE) and transverse magnetic (TM) types with a rotationally symmetric, anisotropic-modulated MTS placed on top of a grounded slab. A properly designed centered circular waveguide feed excites the two orthogonal (decoupled) SW modes and guarantees the balance of the power associated with each of them. By a proper selection of the anisotropy and modulation of the MTS pattern, the phase velocities of the two modes are synchronized, and leakage is generated in broadside direction with two orthogonal linear polarizations. When the circular waveguide is excited with two mutually orthogonal TE11 modes in phase-quadrature, an LHCP or RHCP antenna is obtained. This paper explains the feeding system and the MTS requirements that guarantee the balanced conditions of the TM/TE SWs and consequent generation of dual CP boresight radiation.",
"title": ""
},
{
"docid": "c5cfe386f6561eab1003d5572443612e",
"text": "Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over {\\pounds}108bn p.a., with 3.9m employees in a truly international industry and exports {\\pounds}20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment (\"Transforming Food Production: from Farm to Fork\"). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.",
"title": ""
},
{
"docid": "b24fa0e9c208bf8ea0ea5f3fe0453884",
"text": "Bacteria and fungi are ubiquitous in the atmosphere. The diversity and abundance of airborne microbes may be strongly influenced by atmospheric conditions or even influence atmospheric conditions themselves by acting as ice nucleators. However, few comprehensive studies have described the diversity and dynamics of airborne bacteria and fungi based on culture-independent techniques. We document atmospheric microbial abundance, community composition, and ice nucleation at a high-elevation site in northwestern Colorado. We used a standard small-subunit rRNA gene Sanger sequencing approach for total microbial community analysis and a bacteria-specific 16S rRNA bar-coded pyrosequencing approach (4,864 sequences total). During the 2-week collection period, total microbial abundances were relatively constant, ranging from 9.6 x 10(5) to 6.6 x 10(6) cells m(-3) of air, and the diversity and composition of the airborne microbial communities were also relatively static. Bacteria and fungi were nearly equivalent, and members of the proteobacterial groups Burkholderiales and Moraxellaceae (particularly the genus Psychrobacter) were dominant. These taxa were not always the most abundant in freshly fallen snow samples collected at this site. Although there was minimal variability in microbial abundances and composition within the atmosphere, the number of biological ice nuclei increased significantly during periods of high relative humidity. However, these changes in ice nuclei numbers were not associated with changes in the relative abundances of the most commonly studied ice-nucleating bacteria.",
"title": ""
},
{
"docid": "9bbf9422ae450a17e0c46d14acf3a3e3",
"text": "This short paper outlines how polynomial chaos theory (PCT) can be utilized for manipulator dynamic analysis and controller design in a 4-DOF selective compliance assembly robot-arm-type manipulator with variation in both the link masses and payload. It includes a simple linear control algorithm into the formulation to show the capability of the PCT framework.",
"title": ""
}
] |
scidocsrr
|
1d42f6b0d2e62e2463e4a2b36186afc3
|
Generation Alpha at the Intersection of Technology, Play and Motivation
|
[
{
"docid": "e0ef97db18a47ba02756ba97830a0d0c",
"text": "This article reviews the literature concerning the introduction of interactive whiteboards (IWBs) in educational settings. It identifies common themes to emerge from a burgeoning and diverse literature, which includes reports and summaries available on the Internet. Although the literature reviewed is overwhelmingly positive about the impact and the potential of IWBs, it is primarily based on the views of teachers and pupils. There is insufficient evidence to identify the actual impact of such technologies upon learning either in terms of classroom interaction or upon attainment and achievement. This article examines this issue in light of varying conceptions of interactivity and research into the effects of learning with verbal and visual information.",
"title": ""
},
{
"docid": "e5a3119470420024b99df2d6eb14b966",
"text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
},
{
"docid": "9d6a0b31bf2b64f1ec624222a2222e2a",
"text": "This is the translation of a paper by Marc Prensky, the originator of the famous metaphor digital natives digital immigrants. Here, ten years after the birth of that successful metaphor, Prensky outlines that, while the distinction between digital natives and immigrants will progressively become less important, new concepts will be needed to represent the continuous evolution of the relationship between man and digital technologies. In this paper Prensky introduces the concept of digital wisdom, a human quality which develops as a result of the empowerment that the natural human skills can receive through a creative and clever use of digital technologies. KEY-WORDS Digital natives, digital immigrants, digital wisdom, digital empowerment. Prensky M. (2010). H. Sapiens Digitale: dagli Immigrati digitali e nativi digitali alla saggezza digitale. TD-Tecnologie Didattiche, 50, pp. 17-24 17 I problemi del mondo d’oggi non possono essere risolti facendo ricorso allo stesso tipo di pensiero che li ha creati",
"title": ""
}
] |
[
{
"docid": "d4c8e9ff4129b2e6e7671f11667c57d5",
"text": "Currently, the number of surveillance cameras is rapidly increasing responding to security issues. But constructing an intelligent detection system is not easy because it needs high computing performance. This study aims to construct a real-world video surveillance system that can effectively detect moving person using limited resources. To this end, we propose a simple framework to detect and recognize moving objects using outdoor CCTV video footages by combining background subtraction and Convolutional Neural Networks (CNNs). A background subtraction algorithm is first applied to each video frame to find the regions of interest (ROIs). A CNN classification is then carried out to classify the obtained ROIs into one of the predefined classes. Our approach much reduces the computation complexity in comparison to other object detection algorithms. For the experiments, new datasets are constructed by filming alleys and playgrounds, places where crimes are likely to occur. Different image sizes and experimental settings are tested to construct the best classifier for detecting people. The best classification accuracy of 0.85 was obtained for a test set from the same camera with training set and 0.82 with different cameras.",
"title": ""
},
{
"docid": "b40129a15767189a7a595db89c066cf8",
"text": "To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. In this paper, we propose a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. Keywords—Liveness Detection, Eye detection, SQI.",
"title": ""
},
{
"docid": "41468ef8950c372586485725478c80db",
"text": "Sobolevicanthus transvaalensis n.sp. is described from the Cape Teal, Anas capensis Gmelin, 1789, collected in the Republic of South Africa. The new species possesses 8 skrjabinoid hooks 78–88 μm long (mean 85 μm) and a short claviform cirrus-sac 79–143 μm long and resembles S. javanensis (Davis, 1945) and S. terraereginae (Johnston, 1913). It can be distinguished from S. javanensis by its shorter cirrus-sac and smaller cirrus diameter, and by differences in the morphology of the accessory sac and vagina and in their position relative to the cirrus-sac. It can be separated from S. terraereginae on the basis of cirrus length and diameter. The basal diameter of the cirrus in S. terraereginae is three times that in S. transvaalensis. ac]19830414",
"title": ""
},
{
"docid": "1e57a3da54c0d37bc47134961feaf981",
"text": "Software Development Life Cycle (SDLC) is a process consisting of various phases like requirements analysis, designing, coding, testing and implementation & maintenance of a software system as well as the way, in which these phases are implemented. Research studies reveal that the initial two phases, viz. requirements and design are the skeleton of the entire development life cycle. Designing has several sub-activities such as Architectural, Function-Oriented and Object- Oriented design, which aim to transform the requirements into detailed specifications covering all facets of the system in a proper way, but at the same time, there exists various related challenges too. One of the foremost challenges is the minimum interaction between construction and design teams causing numerous problems during design such as: production delays, incomplete designs, rework, change orders, etc. Prior research studies reveal that Artificial Intelligence (AI) techniques may eliminate these problems by offering several tools/techniques to automate certain processes up to a certain extent. In this paper, our major aim is to identify the challenges in each of the stages of the design phase and possibility of AI techniques to overcome these identified issues. In addition, the paper also explores the relationship between these issues and their possible AI solution/s through Venn-Diagram. For some of the issues, there exist more than one AI techniques but for some issues, no AI technique/s have been found to overcome the same and accordingly, those issues are still open for further research.",
"title": ""
},
{
"docid": "9c9e3261c293aedea006becd2177a6d5",
"text": "This paper proposes a motion-focusing method to extract key frames and generate summarization synchronously for surveillance videos. Within each pre-segmented video shot, the proposed method focuses on one constant-speed motion and aligns the video frames by fixing this focused motion into a static situation. According to the relative motion theory, the other objects in the video are moving relatively to the selected kind of motion. This method finally generates a summary image containing all moving objects and embedded with spatial and motional information, together with key frames to provide details corresponding to the regions of interest in the summary image. We apply this method to the lane surveillance system and the results provide us a new way to understand the video efficiently.",
"title": ""
},
{
"docid": "704961413b936703a1a6fe26bc64f256",
"text": "The rise of cloud computing brings virtualization technology continues to heat up. Based on Xen's I/O virtualization subsystem, under the virtual machine environment which has multi-type tasks, the existing schedulers can't achieve response with I/O-bound tasks in time. This paper presents ECredit scheduler combined complexity evaluation of I/O task in Xen's I/O virtualization subsystem. It prioritizes the I/O-bound task realizing fair scheduling. The experiments show that the optimized scheduling algorithm can reduce the response time of I/O-bound task and improve the performance of the virtual system.",
"title": ""
},
{
"docid": "bf333ff6237d875c34a5c62b0216d5d9",
"text": "The design of tall buildings essentially involves a conceptual design, approximate analysis, preliminary design and optimization, to safely carry gravity and lateral loads. The design criteria are, strength, serviceability, stability and human comfort. The strength is satisfied by limit stresses, while serviceability is satisfied by drift limits in the range of H/500 to H/1000. Stability is satisfied by sufficient factor of safety against buckling and P-Delta effects. The factor of safety is around 1.67 to 1.92. The human comfort aspects are satisfied by accelerations in the range of 10 to 25 milli-g, where g=acceleration due to gravity, about 981cms/sec^2. The aim of the structural engineer is to arrive at suitable structural schemes, to satisfy these criteria, and assess their structural weights in weight/unit area in square feet or square meters. This initiates structural drawings and specifications to enable construction engineers to proceed with fabrication and erection operations. The weight of steel in lbs/sqft or in kg/sqm is often a parameter the architects and construction managers are looking for from the structural engineer. This includes the weights of floor system, girders, braces and columns. The premium for wind, is optimized to yield drifts in the range of H/500, where H is the height of the tall building. Herein, some aspects of the design of gravity system, and the lateral system, are explored. Preliminary design and optimization steps are illustrated with examples of actual tall buildings designed by CBM Engineers, Houston, Texas, with whom the author has been associated with during the past 3 decades. Dr.Joseph P.Colaco, its President, has been responsible for the tallest buildings in Los Angeles, Houston, St. Louis, Dallas, New Orleans, and Washington, D.C, and with the author in its design staff as a Senior Structural Engineer. Research in the development of approximate methods of analysis, and preliminary design and optimization, has been conducted at WPI, with several of the author’s graduate students. These are also illustrated. Software systems to do approximate analysis of shear-wall frame, framed-tube, out rigger braced tall buildings are illustrated. Advanced Design courses in reinforced and pre-stressed concrete, as well as structural steel design at WPI, use these systems. Research herein, was supported by grants from NSF, Bethlehem Steel, and Army.",
"title": ""
},
{
"docid": "ddecb743bc098a3e31ca58bc17810cf1",
"text": "Maxout network is a powerful alternate to traditional sigmoid neural networks and is showing success in speech recognition. However, maxout network is prone to overfitting thus regularization methods such as dropout are often needed. In this paper, a stochastic pooling regularization method for max-out networks is proposed to control overfitting. In stochastic pooling, a distribution is produced for each pooling region by the softmax normalization of the piece values. The active piece is selected based on the distribution during training, and an effective probability weighting is conducted during testing. We apply the stochastic pooling maxout (SPM) networks within the DNN-HMM framework and evaluate its effectiveness under a low-resource speech recognition condition. On benchmark test sets, the SPM network yields 4.7-8.6% relative improvements over the baseline maxout network. Further evaluations show the superiority of stochastic pooling over dropout for low-resource speech recognition.",
"title": ""
},
{
"docid": "b754b1d245aa68aeeb37cf78cf54682f",
"text": "This paper postulates that water structure is altered by biomolecules as well as by disease-enabling entities such as certain solvated ions, and in turn water dynamics and structure affect the function of biomolecular interactions. Although the structural and dynamical alterations are subtle, they perturb a well-balanced system sufficiently to facilitate disease. We propose that the disruption of water dynamics between and within cells underlies many disease conditions. We survey recent advances in magnetobiology, nanobiology, and colloid and interface science that point compellingly to the crucial role played by the unique physical properties of quantum coherent nanomolecular clusters of magnetized water in enabling life at the cellular level by solving the “problems” of thermal diffusion, intracellular crowding, and molecular self-assembly. Interphase water and cellular surface tension, normally maintained by biological sulfates at membrane surfaces, are compromised by exogenous interfacial water stressors such as cationic aluminum, with consequences that include greater local water hydrophobicity, increased water tension, and interphase stretching. The ultimate result is greater “stiffness” in the extracellular matrix and either the “soft” cancerous state or the “soft” neurodegenerative state within cells. Our hypothesis provides a basis for understanding why so many idiopathic diseases of today are highly stereotyped and pluricausal. OPEN ACCESS Entropy 2013, 15 3823",
"title": ""
},
{
"docid": "473baf99a816e24cec8dec2b03eb0958",
"text": "We propose a method that allows an unskilled user to create an accurate physical replica of a digital 3D model. We use a projector/camera pair to scan a work in progress, and project multiple forms of guidance onto the object itself that indicate which areas need more material, which need less, and where any ridges, valleys or depth discontinuities are. The user adjusts the model using the guidance and iterates, making the shape of the physical object approach that of the target 3D model over time. We show how this approach can be used to create a duplicate of an existing object, by scanning the object and using that scan as the target shape. The user is free to make the reproduction at a different scale and out of different materials: we turn a toy car into cake. We extend the technique to support replicating a sequence of models to create stop-motion video. We demonstrate an end-to-end system in which real-world performance capture data is retargeted to claymation. Our approach allows users to easily and accurately create complex shapes, and naturally supports a large range of materials and model sizes.",
"title": ""
},
{
"docid": "3d1c1e507ed603488742666a9cfb45f2",
"text": "This page is dedicated to design science research in Information Systems (IS). Design science research is yet another \"lens\" or set of synthetic and analytical techniques and perspectives (complementing the Positivist and Interpretive perspectives) for performing research in IS. Design science research involves the creation of new knowledge through design of novel or innovative artifacts (things or processes that have or can have material existence) and analysis of the use and/or performance of such artifacts along with reflection and abstraction—to improve and understand the behavior of aspects of Information Systems. Such artifacts include—but certainly are not limited to—algorithms (e.g. for information retrieval), human/computer interfaces, and system design methodologies or languages. Design science researchers can be found in many disciplines and fields, notably Engineering and Computer Science; they use a variety of approaches, methods and techniques. In Information Systems, following a number of years of a general shift in IS research away from technological to managerial and organizational issues, an increasing number of observers are calling for a return to an exploration of the \"IT\" that underlies all IS research (Orlikowski and Iacono, 2001) thus underlining the need for IS design science research.",
"title": ""
},
{
"docid": "bd2c3ee69cda5c08eb106e0994a77186",
"text": "This paper explores the combination of self-organizing map (SOM) and feedback, in order to represent sequences of inputs. In general, neural networks with time-delayed feedback represent time implicitly, by combining current inputs and past activities. It has been difficult to apply this approach to SOM, because feedback generates instability during learning. We demonstrate a solution to this problem, based on a nonlinearity. The result is a generalization of SOM that learns to represent sequences recursively. We demonstrate that the resulting representations are adapted to the temporal statistics of the input series.",
"title": ""
},
{
"docid": "3188d901ab997dcabc795ad3da6af659",
"text": "This paper is about detecting incorrect arcs in a dependency parse for sentences that contain grammar mistakes. Pruning these arcs results in well-formed parse fragments that can still be useful for downstream applications. We propose two automatic methods that jointly parse the ungrammatical sentence and prune the incorrect arcs: a parser retrained on a parallel corpus of ungrammatical sentences with their corrections, and a sequence-to-sequence method. Experimental results show that the proposed strategies are promising for detecting incorrect syntactic dependencies as well as incorrect semantic dependencies.",
"title": ""
},
{
"docid": "7074c90ee464e4c1d0e3515834835817",
"text": "Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.",
"title": ""
},
{
"docid": "6fb8a5456a2bb0ce21f8ac0664aac6eb",
"text": "For autonomous driving, moving objects like vehicles and pedestrians are of critical importance as they primarily influence the maneuvering and braking of the car. Typically, they are detected by motion segmentation of dense optical flow augmented by a CNN based object detector for capturing semantics. In this paper, our aim is to jointly model motion and appearance cues in a single convolutional network. We propose a novel two-stream architecture for joint learning of object detection and motion segmentation. We designed three different flavors of our network to establish systematic comparison. It is shown that the joint training of tasks significantly improves accuracy compared to training them independently. Although motion segmentation has relatively fewer data than vehicle detection. The shared fusion encoder benefits from the joint training to learn a generalized representation. We created our own publicly available dataset (KITTI MOD) by extending KITTI object detection to obtain static/moving annotations on the vehicles. We compared against MPNet as a baseline, which is the current state of the art for CNN-based motion detection. It is shown that the proposed two-stream architecture improves the mAP score by 21.5% in KITTI MOD. We also evaluated our algorithm on the non-automotive DAVIS dataset and obtained accuracy close to the state-of-the-art performance. The proposed network runs at 8 fps on a Titan X GPU using a basic VGG16 encoder.",
"title": ""
},
{
"docid": "685e6338727b4ab899cffe2bbc1a20fc",
"text": "Existing code similarity comparison methods, whether source or binary code based, are mostly not resilient to obfuscations. In the case of software plagiarism, emerging obfuscation techniques have made automated detection increasingly difficult. In this paper, we propose a binary-oriented, obfuscation-resilient method based on a new concept, longest common subsequence of semantically equivalent basic blocks, which combines rigorous program semantics with longest common subsequence based fuzzy matching. We model the semantics of a basic block by a set of symbolic formulas representing the input-output relations of the block. This way, the semantics equivalence (and similarity) of two blocks can be checked by a theorem prover. We then model the semantics similarity of two paths using the longest common subsequence with basic blocks as elements. This novel combination has resulted in strong resiliency to code obfuscation. We have developed a prototype and our experimental results show that our method is effective and practical when applied to real-world software.",
"title": ""
},
{
"docid": "e6555beb963f40c39089959a1c417c2f",
"text": "In this paper, we consider the problem of insufficient runtime and memory-space complexities of deep convolutional neural networks for visual emotion recognition. A survey of recent compression methods and efficient neural networks architectures is provided. We experimentally compare the computational speed and memory consumption during the training and the inference stages of such methods as the weights matrix decomposition, binarization and hashing. It is shown that the most efficient optimization can be achieved with the matrices decomposition and hashing. Finally, we explore the possibility to distill the knowledge from the large neural network, if only large unlabeled sample of facial images is available.",
"title": ""
},
{
"docid": "9a4ca8c02ffb45013115124011e7417e",
"text": "Now, we come to offer you the right catalogues of book to open. multisensor data fusion a review of the state of the art is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.",
"title": ""
},
{
"docid": "b044bab52a36945cfc9d7948468a78ee",
"text": "Recently, the speech recognition is very attractive for researchers because of the very significant related applications. For this reason, the novel research has been of very importance in the academic community. The aim of this work is to find out a new and appropriate feature extraction method for Arabic language recognition. In the present study, wavelet packet transform (WPT) with modular arithmetic and neural network were investigated for Arabic vowels recognition. The number of repeating the remainder was carried out for a speech signal. 266 coefficients are given to probabilistic neural network (PNN) for classification. The claimed results showed that the proposed method can make an effectual analysis with classification rates may reach 97%. Four published methods were studied for comparison. The proposed modular wavelet packet and neural networks (MWNN) expert system could obtain the best recognition rate. [Emad F. Khalaf, Khaled Daqrouq Ali Morfeq. Arabic Vowels Recognition by Modular Arithmetic and Wavelets using Neural Network. Life Sci J 2014;11(3):33-41]. (ISSN:1097-8135). http://www.lifesciencesite.com. 6",
"title": ""
},
{
"docid": "9f1441bc10d7b0234a3736ce83d5c14b",
"text": "Conservation of genetic diversity, one of the three main forms of biodiversity, is a fundamental concern in conservation biology as it provides the raw material for evolutionary change and thus the potential to adapt to changing environments. By means of meta-analyses, we tested the generality of the hypotheses that habitat fragmentation affects genetic diversity of plant populations and that certain life history and ecological traits of plants can determine differential susceptibility to genetic erosion in fragmented habitats. Additionally, we assessed whether certain methodological approaches used by authors influence the ability to detect fragmentation effects on plant genetic diversity. We found overall large and negative effects of fragmentation on genetic diversity and outcrossing rates but no effects on inbreeding coefficients. Significant increases in inbreeding coefficient in fragmented habitats were only observed in studies analyzing progenies. The mating system and the rarity status of plants explained the highest proportion of variation in the effect sizes among species. The age of the fragment was also decisive in explaining variability among effect sizes: the larger the number of generations elapsed in fragmentation conditions, the larger the negative magnitude of effect sizes on heterozygosity. Our results also suggest that fragmentation is shifting mating patterns towards increased selfing. We conclude that current conservation efforts in fragmented habitats should be focused on common or recently rare species and mainly outcrossing species and outline important issues that need to be addressed in future research on this area.",
"title": ""
}
] |
scidocsrr
|
bb8fcd3d1a60426e69032232797ee101
|
An End-to-End Text-Independent Speaker Identification System on Short Utterances
|
[
{
"docid": "b83e537a2c8dcd24b096005ef0cb3897",
"text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.",
"title": ""
},
{
"docid": "83525470a770a036e9c7bb737dfe0535",
"text": "It is known that the performance of the i-vectors/PLDA based speaker verification systems is affected in the cases of short utterances and limited training data. The performance degradation appears because the shorter the utterance, the less reliable the extracted i-vector is, and because the total variability covariance matrix and the underlying PLDA matrices need a significant amount of data to be robustly estimated. Considering the “MIT Mobile Device Speaker Verification Corpus” (MIT-MDSVC) as a representative dataset for robust speaker verification tasks on limited amount of training data, this paper investigates which configuration and which parameters lead to the best performance of an i-vectors/PLDA based speaker verification. The i-vectors/PLDA based system achieved good performance only when the total variability matrix and the underlying PLDA matrices were trained with data belonging to the enrolled speakers. This way of training means that the system should be fully retrained when new enrolled speakers were added. The performance of the system was more sensitive to the amount of training data of the underlying PLDA matrices than to the amount of training data of the total variability matrix. Overall, the Equal Error Rate performance of the i-vectors/PLDA based system was around 1% below the performance of a GMM-UBM system on the chosen dataset. The paper presents at the end some preliminary experiments in which the utterances comprised in the CSTR VCTK corpus were used besides utterances from MIT-MDSVC for training the total variability covariance matrix and the underlying PLDA matrices.",
"title": ""
}
] |
[
{
"docid": "bb50f0ad981d3f81df6810322da7bd71",
"text": "Scale-model laboratory tests of a surface effect ship (SES) conducted in a near-shore transforming wave field are discussed. Waves approaching a beach in a wave tank were used to simulate transforming sea conditions and a series of experiments were conducted with a 1:30 scale model SES traversing in heads seas. Pitch and heave motion of the vehicle were recorded in support of characterizing the seakeeping response of the vessel in developing seas. The aircushion pressure and the vessel speed were varied over a range of values and the corresponding vehicle responses were analyzed to identify functional dependence on these parameters. The results show a distinct correlation between the air-cushion pressure and the response amplitude of both pitch and heave.",
"title": ""
},
{
"docid": "07e67cee1d0edcd3793bd2eb7520d864",
"text": "Content-based image retrieval (CBIR) has attracted much attention due to the exponential growth of digital image collections that have become available in recent years. Relevance feedback (RF) in the context of search engines is a query expansion technique, which is based on relevance judgments about the top results that are initially returned for a given query. RF can be obtained directly from end users, inferred indirectly from user interactions with a result list, or even assumed (aka pseudo relevance feedback). RF information is used to generate a new query, aiming to re-focus the query towards more relevant results.\n This paper presents a methodology for use of signature based image retrieval with a user in the loop to improve retrieval performance. The significance of this study is twofold. First, it shows how to effectively use explicit RF with signature based image retrieval to improve retrieval quality and efficiency. Second, this approach provides a mechanism for end users to refine their image queries. This is an important contribution because, to date, there is no effective way to reformulate an image query; our approach provides a solution to this problem.\n Empirical experiments have been carried out to study the behaviour and optimal parameter settings of this approach. Empirical evaluations based on standard benchmarks demonstrate the effectiveness of the proposed approach in improving the performance of CBIR in terms of recall, precision, speed and scalability.",
"title": ""
},
{
"docid": "8dc2f16d4f4ed1aa0acf6a6dca0ccc06",
"text": "This is the second paper in a four-part series detailing the relative merits of the treatment strategies, clinical techniques and dental materials for the restoration of health, function and aesthetics for the dentition. In this paper the management of wear in the anterior dentition is discussed, using three case studies as illustration.",
"title": ""
},
{
"docid": "a9f23b7a6e077d7e9ca1a3165948cdf3",
"text": "In most problem-solving activities, feedback is received at the end of an action sequence. This creates a credit-assignment problem where the learner must associate the feedback with earlier actions, and the interdependencies of actions require the learner to either remember past choices of actions (internal state information) or rely on external cues in the environment (external state information) to select the right actions. We investigated the nature of explicit and implicit learning processes in the credit-assignment problem using a probabilistic sequential choice task with and without external state information. We found that when explicit memory encoding was dominant, subjects were faster to select the better option in their first choices than in the last choices; when implicit reinforcement learning was dominant subjects were faster to select the better option in their last choices than in their first choices. However, implicit reinforcement learning was only successful when distinct external state information was available. The results suggest the nature of learning in credit assignment: an explicit memory encoding process that keeps track of internal state information and a reinforcement-learning process that uses state information to propagate reinforcement backwards to previous choices. However, the implicit reinforcement learning process is effective only when the valences can be attributed to the appropriate states in the system – either internally generated states in the cognitive system or externally presented stimuli in the environment.",
"title": ""
},
{
"docid": "73af8236cc76e386aa76c6d20378d774",
"text": "Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase1. The constructed gazetteers contains approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types, person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing ground truths from human annotators. We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC).",
"title": ""
},
{
"docid": "3a18b210d3e9f0f0cf883953b8fdd242",
"text": "Short-term traffic forecasting is becoming more important in intelligent transportation systems. The k-nearest neighbours (kNN) method is widely used for short-term traffic forecasting. However, the self-adjustment of kNN parameters has been a problem due to dynamic traffic characteristics. This paper proposes a fully automatic dynamic procedure kNN (DP-kNN) that makes the kNN parameters self-adjustable and robust without predefined models or training for the parameters. A real-world dataset with more than one year traffic records is used to conduct experiments. The results show that DP-kNN can perform better than manually adjusted kNN and other benchmarking methods in terms of accuracy on average. This study also discusses the difference between holiday and workday traffic prediction as well as the usage of neighbour distance measurement.",
"title": ""
},
{
"docid": "7b385edcbb0e3fa5bfffca2e1a9ecf13",
"text": "A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. We present taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.",
"title": ""
},
{
"docid": "5e50ff15898a96b9dec220331c62820d",
"text": "BACKGROUND AND PURPOSE\nPatients with atrial fibrillation and previous ischemic stroke (IS)/transient ischemic attack (TIA) are at high risk of recurrent cerebrovascular events despite anticoagulation. In this prespecified subgroup analysis, we compared warfarin with edoxaban in patients with versus without previous IS/TIA.\n\n\nMETHODS\nENGAGE AF-TIMI 48 (Effective Anticoagulation With Factor Xa Next Generation in Atrial Fibrillation-Thrombolysis in Myocardial Infarction 48) was a double-blind trial of 21 105 patients with atrial fibrillation randomized to warfarin (international normalized ratio, 2.0-3.0; median time-in-therapeutic range, 68.4%) versus once-daily edoxaban (higher-dose edoxaban regimen [HDER], 60/30 mg; lower-dose edoxaban regimen, 30/15 mg) with 2.8-year median follow-up. Primary end points included all stroke/systemic embolic events (efficacy) and major bleeding (safety). Because only HDER is approved, we focused on the comparison of HDER versus warfarin.\n\n\nRESULTS\nOf 5973 (28.3%) patients with previous IS/TIA, 67% had CHADS2 (congestive heart failure, hypertension, age, diabetes, prior stroke/transient ischemic attack) >3 and 36% were ≥75 years. Compared with 15 132 without previous IS/TIA, patients with previous IS/TIA were at higher risk of both thromboembolism and bleeding (stroke/systemic embolic events 2.83% versus 1.42% per year; P<0.001; major bleeding 3.03% versus 2.64% per year; P<0.001; intracranial hemorrhage, 0.70% versus 0.40% per year; P<0.001). Among patients with previous IS/TIA, annualized intracranial hemorrhage rates were lower with HDER than with warfarin (0.62% versus 1.09%; absolute risk difference, 47 [8-85] per 10 000 patient-years; hazard ratio, 0.57; 95% confidence interval, 0.36-0.92; P=0.02). No treatment subgroup interactions were found for primary efficacy (P=0.86) or for intracranial hemorrhage (P=0.28).\n\n\nCONCLUSIONS\nPatients with atrial fibrillation with previous IS/TIA are at high risk of recurrent thromboembolism and bleeding. HDER is at least as effective and is safer than warfarin, regardless of the presence or the absence of previous IS or TIA.\n\n\nCLINICAL TRIAL REGISTRATION\nURL: http://www.clinicaltrials.gov. Unique identifier: NCT00781391.",
"title": ""
},
{
"docid": "13ad3f52725d8417668ca12d5070482b",
"text": "Decoronation of ankylosed teeth in infraposition was introduced in 1984 by Malmgren and co-workers (1). This method is used all over the world today. It has been clinically shown that the procedure preserves the alveolar width and rebuilds lost vertical bone of the alveolar ridge in growing individuals. The biological explanation is that the decoronated root serves as a matrix for new bone development during resorption of the root and that the lost vertical alveolar bone is rebuilt during eruption of adjacent teeth. First a new periosteum is formed over the decoronated root, allowing vertical alveolar growth. Then the interdental fibers that have been severed by the decoronation procedure are reorganized between adjacent teeth. The continued eruption of these teeth mediates marginal bone apposition via the dental-periosteal fiber complex. The erupting teeth are linked with the periosteum covering the top of the alveolar socket and indirectly via the alveolar gingival fibers, which are inserted in the alveolar crest and in the lamina propria of the interdental papilla. Both structures can generate a traction force resulting in bone apposition on top of the alveolar crest. This theoretical biological explanation is based on known anatomical features, known eruption processes and clinical observations.",
"title": ""
},
{
"docid": "39e9fe27f70f54424df1feec453afde3",
"text": "Ontology is a sub-field of Philosophy. It is the study of the nature of existence and a branch of metaphysics concerned with identifying the kinds of things that actually exists and how to describe them. It describes formally a domain of discourse. Ontology is used to capture knowledge about some domain of interest and to describe the concepts in the domain and also to express the relationships that hold between those concepts. Ontology consists of finite list of terms (or important concepts) and the relationships among the terms (or Classes of Objects). Relationships typically include hierarchies of classes. It is an explicit formal specification of conceptualization and the science of describing the kind of entities in the world and how they are related (W3C). Web Ontology Language (OWL) is a language for defining and instantiating web ontologies (a W3C Recommendation). OWL ontology includes description of classes, properties and their instances. OWL is used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms. Such representation of terms and their interrelationships is called ontology. OWL has facilities for expressing meaning and semantics and the ability to represent machine interpretable content on the Web. OWL is designed for use by applications that need to process the content of information instead of just presenting information to humans. This is used for knowledge representation and also is useful to derive logical consequences from OWL formal semantics.",
"title": ""
},
{
"docid": "f3599d23a21ca906e615025ac3715131",
"text": "This literature review synthesized the existing research on cloud computing from a business perspective by investigating 60 sources and integrates their results in order to offer an overview about the existing body of knowledge. Using an established framework our results are structured according to the four dimensions following: cloud computing characteristics, adoption determinants, governance mechanisms, and business impact. This work reveals a shifting focus from technological aspects to a broader understanding of cloud computing as a new IT delivery model. There is a growing consensus about its characteristics and design principles. Unfortunately, research on factors driving or inhibiting the adoption of cloud services, as well as research investigating its business impact empirically, is still limited. This may be attributed to cloud computing being a rather recent research topic. Research on structures, processes and employee qualification to govern cloud services is at an early stage as well.",
"title": ""
},
{
"docid": "363236815299994c5d155ab2c64b4387",
"text": "The objective of this work is to infer the 3D shape of an object from a single image. We use sculptures as our training and test bed, as these have great variety in shape and appearance. To achieve this we build on the success of multiple view geometry (MVG) which is able to accurately provide correspondences between images of 3D objects under varying viewpoint and illumination conditions, and make the following contributions: first, we introduce a new loss function that can harness image-to-image correspondences to provide a supervisory signal to train a deep network to infer a depth map. The network is trained end-to-end by differentiating through the camera. Second, we develop a processing pipeline to automatically generate a large scale multi-view set of correspondences for training the network. Finally, we demonstrate that we can indeed obtain a depth map of a novel object from a single image for a variety of sculptures with varying shape/texture, and that the network generalises at test time to new domains (e.g. synthetic images).",
"title": ""
},
{
"docid": "b3da0c6745883ae3da10e341abc3bf4d",
"text": "Electrophysiological recording studies in the dorsocaudal region of medial entorhinal cortex (dMEC) of the rat reveal cells whose spatial firing fields show a remarkably regular hexagonal grid pattern (Fyhn et al., 2004; Hafting et al., 2005). We describe a symmetric, locally connected neural network, or spin glass model, that spontaneously produces a hexagonal grid of activity bumps on a two-dimensional sheet of units. The spatial firing fields of the simulated cells closely resemble those of dMEC cells. A collection of grids with different scales and/or orientations forms a basis set for encoding position. Simulations show that the animal's location can easily be determined from the population activity pattern. Introducing an asymmetry in the model allows the activity bumps to be shifted in any direction, at a rate proportional to velocity, to achieve path integration. Furthermore, information about the structure of the environment can be superimposed on the spatial position signal by modulation of the bump activity levels without significantly interfering with the hexagonal periodicity of firing fields. Our results support the conjecture of Hafting et al. (2005) that an attractor network in dMEC may be the source of path integration information afferent to hippocampus.",
"title": ""
},
{
"docid": "2adde1812974f2d5d35d4c7e31ca7247",
"text": "All currently available network intrusion detection (ID) systems rely upon a mechanism of data collection---passive protocol analysis---which is fundamentally flawed. In passive protocol analysis, the intrusion detection system (IDS) unobtrusively watches all traffic on the network, and scrutinizes it for patterns of suspicious activity. We outline in this paper two basic problems with the reliability of passive protocol analysis: (1) there isn't enough information on the wire on which to base conclusions about what is actually happening on networked machines, and (2) the fact that the system is passive makes it inherently \"fail-open,\" meaning that a compromise in the availability of the IDS doesn't compromise the availability of the network. We define three classes of attacks which exploit these fundamental problems---insertion, evasion, and denial of service attacks --and describe how to apply these three types of attacks to IP and TCP protocol analysis. We present the results of tests of the efficacy of our attacks against four of the most popular network intrusion detection systems on the market. All of the ID systems tested were found to be vulnerable to each of our attacks. This indicates that network ID systems cannot be fully trusted until they are fundamentally redesigned. Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection http://www.robertgraham.com/mirror/Ptacek-Newsham-Evasion-98.html (1 of 55) [17/01/2002 08:32:46 p.m.]",
"title": ""
},
{
"docid": "4b7eb2b8f4d4ec135ab1978b4811eca4",
"text": "This paper focuses on the problem of vision-based obstacle detection and tracking for unmanned aerial vehicle navigation. A real-time object localization and tracking strategy from monocular image sequences is developed by effectively integrating the object detection and tracking into a dynamic Kalman model. At the detection stage, the object of interest is automatically detected and localized from a saliency map computed via the image background connectivity cue at each frame; at the tracking stage, a Kalman filter is employed to provide a coarse prediction of the object state, which is further refined via a local detector incorporating the saliency map and the temporal information between two consecutive frames. Compared with existing methods, the proposed approach does not require any manual initialization for tracking, runs much faster than the state-of-the-art trackers of its kind, and achieves competitive tracking performance on a large number of image sequences. Extensive experiments demonstrate the effectiveness and superior performance of the proposed approach.",
"title": ""
},
{
"docid": "e0e00fdfecc4a23994315579938f740e",
"text": "Budget allocation in online advertising deals with distributing the campaign (insertion order) level budgets to different sub-campaigns which employ different targeting criteria and may perform differently in terms of return-on-investment (ROI). In this paper, we present the efforts at Turn on how to best allocate campaign budget so that the advertiser or campaign-level ROI is maximized. To do this, it is crucial to be able to correctly determine the performance of sub-campaigns. This determination is highly related to the action-attribution problem, i.e. to be able to find out the set of ads, and hence the sub-campaigns that provided them to a user, that an action should be attributed to. For this purpose, we employ both last-touch (last ad gets all credit) and multi-touch (many ads share the credit) attribution methodologies. We present the algorithms deployed at Turn for the attribution problem, as well as their parallel implementation on the large advertiser performance datasets. We conclude the paper with our empirical comparison of last-touch and multi-touch attribution-based budget allocation in a real online advertising setting.",
"title": ""
},
{
"docid": "3f7c16788bceba51f0cbf0e9c9592556",
"text": "Centralised patient monitoring systems are in huge demand as they not only reduce the labour work and cost but also the time of the clinical hospitals. Earlier wired communication was used but now Zigbee which is a wireless mesh network is preferred as it reduces the cost. Zigbee is also preferred over Bluetooth and infrared wireless communication because it is energy efficient, has low cost and long distance range (several miles). In this paper we proposed wireless transmission of data between a patient and centralised unit using Zigbee module. The paper is divided into two sections. First is patient monitoring system for multiple patients and second is the centralised patient monitoring system. These two systems are communicating using wireless transmission technology i.e. Zigbee. In the first section we have patient monitoring of multiple patients. Each patient's multiple physiological parameters like ECG, temperature, heartbeat are measured at their respective unit. If any physiological parameter value exceeds the threshold value, emergency alarm and LED blinks at each patient unit. This allows a doctor to read various physiological parameters of a patient in real time. The values are displayed on the LCD at each patient unit. Similarly multiple patients multiple physiological parameters are being measured using particular sensors and multiple patient's patient monitoring system is made. In the second section centralised patient monitoring system is made in which all multiple patients multiple parameters are displayed on a central monitor using MATLAB. ECG graph is also displayed on the central monitor using MATLAB software. The central LCD also displays parameters like heartbeat and temperature. The module is less expensive, consumes low power and has good range.",
"title": ""
},
{
"docid": "22c749b089f0bdd1a3296f59fa9cdfc5",
"text": "Inspection of printed circuit board (PCB) has been a crucial process in the electronic manufacturing industry to guarantee product quality & reliability, cut manufacturing cost and to increase production. The PCB inspection involves detection of defects in the PCB and classification of those defects in order to identify the roots of defects. In this paper, all 14 types of defects are detected and are classified in all possible classes using referential inspection approach. The proposed algorithm is mainly divided into five stages: Image registration, Pre-processing, Image segmentation, Defect detection and Defect classification. The algorithm is able to perform inspection even when captured test image is rotated, scaled and translated with respect to template image which makes the algorithm rotation, scale and translation in-variant. The novelty of the algorithm lies in its robustness to analyze a defect in its different possible appearance and severity. In addition to this, algorithm takes only 2.528 s to inspect a PCB image. The efficacy of the proposed algorithm is verified by conducting experiments on the different PCB images and it shows that the proposed afgorithm is suitable for automatic visual inspection of PCBs.",
"title": ""
},
{
"docid": "aff804f90fd1ffba5ee8c06e96ddd11b",
"text": "The area of machine learning has made considerable progress over the past decade, enabled by the widespread availability of large datasets, as well as by improved algorithms and models. Given the large computational demands of machine learning workloads, parallelism, implemented either through single-node concurrency or through multi-node distribution, has been a third key ingredient to advances in machine learning.\n The goal of this tutorial is to provide the audience with an overview of standard distribution techniques in machine learning, with an eye towards the intriguing trade-offs between synchronization and communication costs of distributed machine learning algorithms, on the one hand, and their convergence, on the other.The tutorial will focus on parallelization strategies for the fundamental stochastic gradient descent (SGD) algorithm, which is a key tool when training machine learning models, from classical instances such as linear regression, to state-of-the-art neural network architectures.\n The tutorial will describe the guarantees provided by this algorithm in the sequential case, and then move on to cover both shared-memory and message-passing parallelization strategies, together with the guarantees they provide, and corresponding trade-offs. The presentation will conclude with a broad overview of ongoing research in distributed and concurrent machine learning. The tutorial will assume no prior knowledge beyond familiarity with basic concepts in algebra and analysis.",
"title": ""
},
{
"docid": "9706819b5e4805b41e3907a7b1688578",
"text": "While advances in computing resources have made processing enormous amounts of data possible, human ability to identify patterns in such data has not scaled accordingly. Thus, efficient computational methods for condensing and simplifying data are becoming vital for extracting actionable insights. In particular, while data summarization techniques have been studied extensively, only recently has summarizing interconnected data, or graphs, become popular. This survey is a structured, comprehensive overview of the state-of-the-art methods for summarizing graph data. We first broach the motivation behind and the challenges of graph summarization. We then categorize summarization approaches by the type of graphs taken as input and further organize each category by core methodology. Finally, we discuss applications of summarization on real-world graphs and conclude by describing some open problems in the field.",
"title": ""
}
] |
scidocsrr
|
ebe0ad792e0d01a575c7b500a962f2b5
|
Prevalence and Predictors of Video Game Addiction: A Study Based on a National Representative Sample of Gamers
|
[
{
"docid": "39682fc0385d7bc85267479bf20326b3",
"text": "This study assessed how problem video game playing (PVP) varies with game type, or \"genre,\" among adult video gamers. Participants (n=3,380) were adults (18+) who reported playing video games for 1 hour or more during the past week and completed a nationally representative online survey. The survey asked about characteristics of video game use, including titles played in the past year and patterns of (problematic) use. Participants self-reported the extent to which characteristics of PVP (e.g., playing longer than intended) described their game play. Five percent of our sample reported moderate to extreme problems. PVP was concentrated among persons who reported playing first-person shooter, action adventure, role-playing, and gambling games most during the past year. The identification of a subset of game types most associated with problem use suggests new directions for research into the specific design elements and reward mechanics of \"addictive\" video games and those populations at greatest risk of PVP with the ultimate goal of better understanding, preventing, and treating this contemporary mental health problem.",
"title": ""
}
] |
[
{
"docid": "d90add899632bab1c5c2637c7080f717",
"text": "Software Testing plays a important role in Software development because it can minimize the development cost. We Propose a Technique for Test Sequence Generation using UML Model Sequence Diagram.UML models give a lot of information that should not be ignored in testing. In This paper main features extract from Sequence Diagram after that we can write the Java Source code for that Features According to ModelJunit Library. ModelJUnit is a extended library of JUnit Library. By using that Source code we can Generate Test Case Automatic and Test Coverage. This paper describes a systematic Test Case Generation Technique performed on model based testing (MBT) approaches By Using Sequence Diagram.",
"title": ""
},
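The abstract above relies on the Java ModelJUnit library, whose exact API is not reproduced here. Purely to illustrate the underlying idea — generating test sequences by walking a finite-state model derived from a sequence diagram and tracking transition coverage — the following is a small, self-contained Python sketch; the model, state names, and action names are hypothetical.

```python
import random

# Hypothetical FSM extracted from a UML sequence diagram:
# state -> {action_name: next_state}
MODEL = {
    "LoggedOut": {"login": "LoggedIn"},
    "LoggedIn":  {"search": "Results", "logout": "LoggedOut"},
    "Results":   {"open_item": "ItemView", "logout": "LoggedOut"},
    "ItemView":  {"back": "Results", "logout": "LoggedOut"},
}

def generate_test_sequence(model, start="LoggedOut", length=8, seed=0):
    """Random walk over the model; each visited transition becomes a test step."""
    rng = random.Random(seed)
    state, steps, covered = start, [], set()
    for _ in range(length):
        action, nxt = rng.choice(sorted(model[state].items()))
        steps.append((state, action, nxt))
        covered.add((state, action))
        state = nxt
    return steps, covered

steps, covered = generate_test_sequence(MODEL)
for src, action, dst in steps:
    print(f"{src} --{action}--> {dst}")
total = sum(len(v) for v in MODEL.values())
print(f"transition coverage: {len(covered)}/{total}")
```

ModelJUnit follows the same pattern in Java (a model class plus a tester that walks it), with additional strategies beyond a plain random walk.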
{
"docid": "5921f0049596d52bd3aea33e4537d026",
"text": "Various lines of evidence indicate that men generally experience greater sexual arousal (SA) to erotic stimuli than women. Yet, little is known regarding the neurobiological processes underlying such a gender difference. To investigate this issue, functional magnetic resonance imaging was used to compare the neural correlates of SA in 20 male and 20 female subjects. Brain activity was measured while male and female subjects were viewing erotic film excerpts. Results showed that the level of perceived SA was significantly higher in male than in female subjects. When compared to viewing emotionally neutral film excerpts, viewing erotic film excerpts was associated, for both genders, with bilateral blood oxygen level dependent (BOLD) signal increases in the anterior cingulate, medial prefrontal, orbitofrontal, insular, and occipitotemporal cortices, as well as in the amygdala and the ventral striatum. Only for the group of male subjects was there evidence of a significant activation of the thalamus and hypothalamus, a sexually dimorphic area of the brain known to play a pivotal role in physiological arousal and sexual behavior. When directly compared between genders, hypothalamic activation was found to be significantly greater in male subjects. Furthermore, for male subjects only, the magnitude of hypothalamic activation was positively correlated with reported levels of SA. These findings reveal the existence of similarities and dissimilarities in the way the brain of both genders responds to erotic stimuli. They further suggest that the greater SA generally experienced by men, when viewing erotica, may be related to the functional gender difference found here with respect to the hypothalamus.",
"title": ""
},
{
"docid": "51979e7cca3940cb1629f58feb8712b4",
"text": "OBJECTIVES\nThe goal of this survey is to discuss the impact of the growing availability of electronic health record (EHR) data on the evolving field of Clinical Research Informatics (CRI), which is the union of biomedical research and informatics.\n\n\nRESULTS\nMajor challenges for the use of EHR-derived data for research include the lack of standard methods for ensuring that data quality, completeness, and provenance are sufficient to assess the appropriateness of its use for research. Areas that need continued emphasis include methods for integrating data from heterogeneous sources, guidelines (including explicit phenotype definitions) for using these data in both pragmatic clinical trials and observational investigations, strong data governance to better understand and control quality of enterprise data, and promotion of national standards for representing and using clinical data.\n\n\nCONCLUSIONS\nThe use of EHR data has become a priority in CRI. Awareness of underlying clinical data collection processes will be essential in order to leverage these data for clinical research and patient care, and will require multi-disciplinary teams representing clinical research, informatics, and healthcare operations. Considerations for the use of EHR data provide a starting point for practical applications and a CRI research agenda, which will be facilitated by CRI's key role in the infrastructure of a learning healthcare system.",
"title": ""
},
{
"docid": "c6576bb8585fff4a9ac112943b1e0785",
"text": "Three-dimensional (3D) kinematic models are widely-used in videobased figure tracking. We show that these models can suffer from singularities when motion is directed along the viewing axis of a single camera. The single camera case is important because it arises in many interesting applications, such as motion capture from movie footage, video surveillance, and vision-based user-interfaces. We describe a novel two-dimensional scaled prismatic model (SPM) for figure registration. In contrast to 3D kinematic models, the SPM has fewer singularity problems and does not require detailed knowledge of the 3D kinematics. We fully characterize the singularities in the SPM and demonstrate tracking through singularities using synthetic and real examples. We demonstrate the application of our model to motion capture from movies. Fred Astaire is tracked in a clip from the film “Shall We Dance”. We also present the use of monocular hand tracking in a 3D user-interface. These results demonstrate the benefits of the SPM in tracking with a single source of video. KEY WORDS—AUTHOR: PLEASE PROVIDE",
"title": ""
},
{
"docid": "24b45f8f41daccf4bddb45f0e2b3d057",
"text": "Risk assessment is a systematic process for integrating professional judgments about relevant risk factors, their relative significance and probable adverse conditions and/or events leading to identification of auditable activities (IIA, 1995, SIAS No. 9). Internal auditors utilize risk measures to allocate critical audit resources to compliance, operational, or financial activities within the organization (Colbert, 1995). In information rich environments, risk assessment involves recognizing patterns in the data, such as complex data anomalies and discrepancies, that perhaps conceal one or more error or hazard conditions (e.g. Coakley and Brown, 1996; Bedard and Biggs, 1991; Libby, 1985). This research investigates whether neural networks can help enhance auditors’ risk assessments. Neural networks, an emerging artificial intelligence technology, are a powerful non-linear optimization and pattern recognition tool (Haykin, 1994; Bishop, 1995). Several successful, real-world business neural network application decision aids have already been built (Burger and Traver, 1996). Neural network modeling may prove invaluable in directing internal auditor attention to those aspects of financial, operating, and compliance data most informative of high-risk audit areas, thus enhancing audit efficiency and effectiveness. This paper defines risk in an internal auditing context, describes contemporary approaches to performing risk assessments, provides an overview of the backpropagation neural network architecture, outlines the methodology adopted for conducting this research project including a Delphi study and comparison with statistical approaches, and presents preliminary results, which indicate that internal auditors could benefit from using neural network technology for assessing risk. Copyright 1999 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "eb962e14f34ea53dec660dfe304756b0",
"text": "It is difficult to train a personalized task-oriented dialogue system because the data collected from each individual is often insufficient. Personalized dialogue systems trained on a small dataset can overfit and make it difficult to adapt to different user needs. One way to solve this problem is to consider a collection of multiple users’ data as a source domain and an individual user’s data as a target domain, and to perform a transfer learning from the source to the target domain. By following this idea, we propose the “PETAL” (PErsonalized Task-oriented diALogue), a transfer learning framework based on POMDP to learn a personalized dialogue system. The system first learns common dialogue knowledge from the source domain and then adapts this knowledge to the target user. This framework can avoid the negative transfer problem by considering differences between source and target users. The policy in the personalized POMDP can learn to choose different actions appropriately for different users. Experimental results on a real-world coffee-shopping data and simulation data show that our personalized dialogue system can choose different optimal actions for different users, and thus effectively improve the dialogue quality under the personalized setting.",
"title": ""
},
{
"docid": "354b35bb1c51442a7e855824ab7b91e0",
"text": "Educational games and intelligent tutoring systems (ITS) both support learning by doing, although often in different ways. The current classroom experiment compared a popular commercial game for equation solving, DragonBox and a research-based ITS, Lynnette with respect to desirable educational outcomes. The 190 participating 7th and 8th grade students were randomly assigned to work with either system for 5 class periods. We measured out-of-system transfer of learning with a paper and pencil pre- and post-test of students’ equation-solving skill. We measured enjoyment and accuracy of self-assessment with a questionnaire. The students who used DragonBox solved many more problems and enjoyed the experience more, but the students who used Lynnette performed significantly better on the post-test. Our analysis of the design features of both systems suggests possible explanations and spurs ideas for how the strengths of the two systems might be combined. The study shows that intuitions about what works, educationally, can be fallible. Therefore, there is no substitute for rigorous empirical evaluation of educational technologies.",
"title": ""
},
{
"docid": "50c0ebb4a984ea786eb86af9849436f3",
"text": "We systematically reviewed school-based skills building behavioural interventions for the prevention of sexually transmitted infections. References were sought from 15 electronic resources, bibliographies of systematic reviews/included studies and experts. Two authors independently extracted data and quality-assessed studies. Fifteen randomized controlled trials (RCTs), conducted in the United States, Africa or Europe, met the inclusion criteria. They were heterogeneous in terms of intervention length, content, intensity and providers. Data from 12 RCTs passed quality assessment criteria and provided evidence of positive changes in non-behavioural outcomes (e.g. knowledge and self-efficacy). Intervention effects on behavioural outcomes, such as condom use, were generally limited and did not demonstrate a negative impact (e.g. earlier sexual initiation). Beneficial effect on at least one, but never all behavioural outcomes assessed was reported by about half the studies, but this was sometimes limited to a participant subgroup. Sexual health education for young people is important as it increases knowledge upon which to make decisions about sexual behaviour. However, a number of factors may limit intervention impact on behavioural outcomes. Further research could draw on one of the more effective studies reviewed and could explore the effectiveness of 'booster' sessions as young people move from adolescence to young adulthood.",
"title": ""
},
{
"docid": "e3709e9df325e7a7927e882a40222b26",
"text": "In this paper, we present a system that automatically extracts the pros and cons from online reviews. Although many approaches have been developed for extracting opinions from text, our focus here is on extracting the reasons of the opinions, which may themselves be in the form of either fact or opinion. Leveraging online review sites with author-generated pros and cons, we propose a system for aligning the pros and cons to their sentences in review texts. A maximum entropy model is then trained on the resulting labeled set to subsequently extract pros and cons from online review sites that do not explicitly provide them. Our experimental results show that our resulting system identifies pros and cons with 66% precision and 76% recall.",
"title": ""
},
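The system above trains a maximum entropy model on review sentences aligned to author-provided pros and cons. A maximum entropy classifier over bag-of-words features is equivalent to (multinomial) logistic regression, so the scikit-learn sketch below shows that setup; the example sentences and labels are invented for illustration and are not from the paper's data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled sentences standing in for the aligned pro/con training data.
sentences = [
    "battery life is excellent and the screen is bright",
    "shipping was fast and setup took minutes",
    "the fan gets very loud under load",
    "customer support never answered my emails",
]
labels = ["pro", "pro", "con", "con"]

# Logistic regression over bag-of-words / bigram counts == a maximum entropy classifier.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(sentences, labels)

print(clf.predict(["the screen is very bright", "support was slow to answer"]))
```

In the paper the features are richer than plain word counts, but the classification machinery is the same.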
{
"docid": "e771009a5e1810c45db20ed70b314798",
"text": "BACKGROUND\nTo identify sources of race/ethnic differences related to post-traumatic stress disorder (PTSD), we compared trauma exposure, risk for PTSD among those exposed to trauma, and treatment-seeking among Whites, Blacks, Hispanics and Asians in the US general population.\n\n\nMETHOD\nData from structured diagnostic interviews with 34 653 adult respondents to the 2004-2005 wave of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) were analysed.\n\n\nRESULTS\nThe lifetime prevalence of PTSD was highest among Blacks (8.7%), intermediate among Hispanics and Whites (7.0% and 7.4%) and lowest among Asians (4.0%). Differences in risk for trauma varied by type of event. Whites were more likely than the other groups to have any trauma, to learn of a trauma to someone close, and to learn of an unexpected death, but Blacks and Hispanics had higher risk of child maltreatment, chiefly witnessing domestic violence, and Asians, Black men, and Hispanic women had higher risk of war-related events than Whites. Among those exposed to trauma, PTSD risk was slightly higher among Blacks [adjusted odds ratio (aOR) 1.22] and lower among Asians (aOR 0.67) compared with Whites, after adjustment for characteristics of trauma exposure. All minority groups were less likely to seek treatment for PTSD than Whites (aOR range: 0.39-0.61), and fewer than half of minorities with PTSD sought treatment (range: 32.7-42.0%).\n\n\nCONCLUSIONS\nWhen PTSD affects US race/ethnic minorities, it is usually untreated. Large disparities in treatment indicate a need for investment in accessible and culturally sensitive treatment options.",
"title": ""
},
{
"docid": "9a38b18bd69d17604b6e05b9da450c2d",
"text": "New invention of advanced technology, enhanced capacity of storage media, maturity of information technology and popularity of social media, business intelligence and Scientific invention, produces huge amount of data which made ample set of information that is responsible for birth of new concept well known as big data. Big data analytics is the process of examining large amounts of data. The analysis is done on huge amount of data which is structure, semi structure and unstructured. In big data, data is generated at exponentially for reason of increase use of social media, email, document and sensor data. The growth of data has affected all fields, whether it is business sector or the world of science. In this paper, the process of system is reviewed for managing "Big Data" and today's activities on big data tools and techniques.",
"title": ""
},
{
"docid": "51c14998480e2b1063b727bf3e4f4ad0",
"text": "With the rapid growth of multimedia information, the font library has become a part of people’s work life. Compared to the Western alphabet language, it is difficult to create new font due to huge quantity and complex shape. At present, most of the researches on automatic generation of fonts use traditional methods requiring a large number of rules and parameters set by experts, which are not widely adopted. This paper divides Chinese characters into strokes and generates new font strokes by fusing the styles of two existing font strokes and assembling them into new fonts. This approach can effectively improve the efficiency of font generation, reduce the costs of designers, and is able to inherit the style of existing fonts. In the process of learning to generate new fonts, the popular of deep learning areas, Generative Adversarial Nets has been used. Compared with the traditional method, it can generate higher quality fonts without well-designed and complex loss function.",
"title": ""
},
{
"docid": "3692954147d1a60fb683001bd379047f",
"text": "OBJECTIVE\nThe current study aimed to compare the Philadelphia collar and an open-design cervical collar with regard to user satisfaction and cervical range of motion in asymptomatic adults.\n\n\nDESIGN\nSeventy-two healthy subjects (36 women, 36 men) aged 18 to 29 yrs were recruited for this study. Neck movements, including active flexion, extension, right/left lateral flexion, and right/left axial rotation, were assessed in each subject under three conditions--without wearing a collar and while wearing two different cervical collars--using a dual digital inclinometer. Subject satisfaction was assessed using a five-item self-administered questionnaire.\n\n\nRESULTS\nBoth Philadelphia and open-design collars significantly reduced cervical motions (P < 0.05). Compared with the Philadelphia collar, the open-design collar more greatly reduced cervical motions in three planes and the differences were statistically significant except for limiting flexion. Satisfaction scores for Philadelphia and open-design collars were 15.89 (3.87) and 19.94 (3.11), respectively.\n\n\nCONCLUSION\nBased on the data of the 72 subjects presented in this study, the open-design collar adequately immobilized the cervical spine as a semirigid collar and was considered cosmetically acceptable, at least for subjects aged younger than 30 yrs.",
"title": ""
},
{
"docid": "6d2e7ce04b96a98cc2828dc33c111bd1",
"text": "This study explores how customer relationship management (CRM) systems support customer knowledge creation processes [48], including socialization, externalization, combination and internalization. CRM systems are categorized as collaborative, operational and analytical. An analysis of CRM applications in three organizations reveals that analytical systems strongly support the combination process. Collaborative systems provide the greatest support for externalization. Operational systems facilitate socialization with customers, while collaborative systems are used for socialization within an organization. Collaborative and analytical systems both support the internalization process by providing learning opportunities. Three-way interactions among CRM systems, types of customer knowledge, and knowledge creation processes are explored. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2c2e57a330157cf28e4d6d6466132432",
"text": "This paper presents an automatic method to track soccer players in soccer video recorded from a single camera where the occurrence of pan-tilt-zoom can take place. The automatic object tracking is intended to support texture extraction in a free viewpoint video authoring application for soccer video. To ensure that the identity of the tracked object can be correctly obtained, background segmentation is performed and automatically removes commercial billboards whenever it overlaps with the soccer player. Next, object tracking is performed by an attribute matching algorithm for all objects in the temporal domain to find and maintain the correlation of the detected objects. The attribute matching process finds the best match between two objects in different frames according to their pre-determined attributes: position, size, dominant color and motion information. Utilizing these attributes, the experimental results show that the tracking process can handle occlusion problems such as occlusion involving more than three objects and occluded objects with similar color and moving direction, as well as correctly identify objects in the presence of camera movements. key words: free viewpoint, attribute matching, automatic object tracking, soccer video",
"title": ""
},
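The tracker above maintains identities by matching detections across frames on position, size, dominant color, and motion. One compact way to realize such attribute matching is a weighted cost matrix solved with the Hungarian algorithm; the SciPy sketch below is a generic illustration with made-up attribute weights and toy data, not the paper's exact scoring.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def attribute_cost(track, det, w_pos=1.0, w_size=0.5, w_color=0.3, w_motion=0.5):
    """Weighted dissimilarity between an existing track and a new detection."""
    pos = np.linalg.norm(track["pos"] + track["vel"] - det["pos"])      # predicted vs observed position
    size = abs(track["size"] - det["size"]) / max(track["size"], det["size"])
    color = np.linalg.norm(track["color"] - det["color"]) / 255.0
    motion = np.linalg.norm(track["vel"] - (det["pos"] - track["pos"]))
    return w_pos * pos + w_size * size + w_color * color + w_motion * motion

def match(tracks, detections):
    cost = np.array([[attribute_cost(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)        # Hungarian algorithm: best one-to-one match
    return list(zip(rows, cols))

tracks = [
    {"pos": np.array([10.0, 20.0]), "vel": np.array([2.0, 0.0]),
     "size": 40.0, "color": np.array([200.0, 30.0, 30.0])},
    {"pos": np.array([50.0, 60.0]), "vel": np.array([0.0, -1.0]),
     "size": 42.0, "color": np.array([30.0, 30.0, 200.0])},
]
detections = [
    {"pos": np.array([49.5, 59.2]), "vel": np.zeros(2), "size": 41.0,
     "color": np.array([32.0, 28.0, 198.0])},
    {"pos": np.array([12.1, 20.3]), "vel": np.zeros(2), "size": 39.0,
     "color": np.array([198.0, 33.0, 31.0])},
]
print(match(tracks, detections))   # expected: [(0, 1), (1, 0)]
```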
{
"docid": "02dce03a41dbe6734cd3ce945db6fcb8",
"text": "Antigen-presenting, major histocompatibility complex (MHC) class II-rich dendritic cells are known to arise from bone marrow. However, marrow lacks mature dendritic cells, and substantial numbers of proliferating less-mature cells have yet to be identified. The methodology for inducing dendritic cell growth that was recently described for mouse blood now has been modified to MHC class II-negative precursors in marrow. A key step is to remove the majority of nonadherent, newly formed granulocytes by gentle washes during the first 2-4 d of culture. This leaves behind proliferating clusters that are loosely attached to a more firmly adherent \"stroma.\" At days 4-6 the clusters can be dislodged, isolated by 1-g sedimentation, and upon reculture, large numbers of dendritic cells are released. The latter are readily identified on the basis of their distinct cell shape, ultrastructure, and repertoire of antigens, as detected with a panel of monoclonal antibodies. The dendritic cells express high levels of MHC class II products and act as powerful accessory cells for initiating the mixed leukocyte reaction. Neither the clusters nor mature dendritic cells are generated if macrophage colony-stimulating factor rather than granulocyte/macrophage colony-stimulating factor (GM-CSF) is applied. Therefore, GM-CSF generates all three lineages of myeloid cells (granulocytes, macrophages, and dendritic cells). Since > 5 x 10(6) dendritic cells develop in 1 wk from precursors within the large hind limb bones of a single animal, marrow progenitors can act as a major source of dendritic cells. This feature should prove useful for future molecular and clinical studies of this otherwise trace cell type.",
"title": ""
},
{
"docid": "3192a76e421d37fbe8619a3bc01fb244",
"text": "• Develop and implement an internally consistent set of goals and functional policies (this is, a solution to the agency problem) • These internally consistent set of goals and policies aligns the firm’s strengths and weaknesses with external (industry) opportunities and threats (SWOT) in a dynamic balance • The firm’s strategy has to be concerned with the exploitation of its “distinctive competences” (early reference to RBV)",
"title": ""
},
{
"docid": "ac6d474171bfe6bc2457bfb3674cc5a6",
"text": "The energy consumption problem in the mobile industry has become crucial. For the sustainable growth of the mobile industry, energy efficiency (EE) of wireless systems has to be significantly improved. Plenty of efforts have been invested in achieving green wireless communications. This article provides an overview of network energy saving studies currently conducted in the 3GPP LTE standard body. The aim is to gain a better understanding of energy consumption and identify key EE research problems in wireless access networks. Classifying network energy saving technologies into the time, frequency, and spatial domains, the main solutions in each domain are described briefly. As presently the attention is mainly focused on solutions involving a single radio base station, we believe network solutions involving multiple networks/systems will be the most promising technologies toward green wireless access networks.",
"title": ""
},
{
"docid": "bab429bf74fe4ce3f387a716964a867f",
"text": "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.",
"title": ""
},
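The virtual adversarial training abstract describes finding the perturbation that most changes the model's output distribution around each input, without using labels, via a cheap power-iteration approximation. Below is a condensed PyTorch sketch of that loss computation; the hyperparameters `xi` and `eps` and the single power-iteration step are placeholder choices the reader would tune, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    """Virtual adversarial loss: KL(p(y|x) || p(y|x + r_adv)); no labels needed."""
    with torch.no_grad():
        logp = F.log_softmax(model(x), dim=1)          # reference distribution p(y|x)

    # Power iteration to approximate the most sensitive perturbation direction.
    d = torch.randn_like(x)
    d = d / (d.flatten(1).norm(dim=1).view(-1, *[1] * (x.dim() - 1)) + 1e-12)
    for _ in range(n_power):
        d.requires_grad_(True)
        logp_hat = F.log_softmax(model(x + xi * d), dim=1)
        adv_dist = F.kl_div(logp_hat, logp.exp(), reduction="batchmean")
        grad = torch.autograd.grad(adv_dist, d)[0]
        d = grad.detach()
        d = d / (d.flatten(1).norm(dim=1).view(-1, *[1] * (x.dim() - 1)) + 1e-12)

    r_adv = eps * d                                    # virtual adversarial perturbation
    logp_hat = F.log_softmax(model(x + r_adv), dim=1)
    return F.kl_div(logp_hat, logp.exp(), reduction="batchmean")
```

In semi-supervised training this term is simply added (with a weight) to the supervised loss on the labeled subset.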
{
"docid": "a4f960905077291bd6da9359fd803a9c",
"text": "In this paper, we propose a new framework named Data Augmentation for Domain-Invariant Learning (DADIL). In the field of manufacturing, labeling sensor data as normal or abnormal is helpful for improving productivity and avoiding problems. In practice, however, the status of equipment may change due to changes in maintenance and settings (referred to as a “domain change”), which makes it difficult to collect sufficient homogeneous data. Therefore, it is important to develop a discriminative model that can use a limited number of data samples. Moreover, real data might contain noise that could have a negative impact. We focus on the following aspect: The difficulties of a domain change are also due to the limited data. Although the number of data samples in each domain is low, we make use of data augmentation which is a promising way to mitigate the influence of noise and enhance the performance of discriminative models. In our data augmentation method, we generate “pseudo data” by combining the data for each label regardless of the domain and extract a domain-invariant representation for classification. We experimentally show that this representation is effective for obtaining the label precisely using real datasets.",
"title": ""
}
] |
scidocsrr
|
7ad2a261e3f57a43e48e1cc309174cfc
|
Degeneration in VAE: in the Light of Fisher Information Loss
|
[
{
"docid": "ff59e2a5aa984dec7805a4d9d55e69e5",
"text": "We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset.",
"title": ""
}
] |
[
{
"docid": "03bd5c0e41aa5948a5545fa3fca75bc2",
"text": "In the application of lead-acid series batteries, the voltage imbalance of each battery should be considered. Therefore, additional balancer circuits must be integrated into the battery. An active battery balancing circuit with an auxiliary storage can employ a sequential battery imbalance detection algorithm by comparing the voltage of a battery and auxiliary storage. The system is being in balance if the battery voltage imbalance is less than 10mV/cell. In this paper, a new algorithm is proposed so that the battery voltage balancing time can be improved. The battery balancing system is based on the LTC3305 working principle. The simulation verifies that the proposed algorithm can achieve permitted battery voltage imbalance faster than that of the previous algorithm.",
"title": ""
},
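The scheme above sequentially compares each battery against an auxiliary storage element and declares balance once the spread falls below 10 mV per cell. The toy Python simulation below illustrates only that control loop; the transfer step size and initial voltages are invented, and nothing here models the LTC3305 electrically.

```python
def balance(voltages, aux, threshold=0.010, step=0.002, max_cycles=10000):
    """Sequentially exchange charge between each battery and an auxiliary storage
    element until the max-min imbalance drops below the threshold (volts/cell)."""
    v = list(voltages)
    for cycle in range(max_cycles):
        if max(v) - min(v) < threshold:
            return v, cycle
        for i in range(len(v)):
            if v[i] > aux + step / 2:      # battery charges the auxiliary storage
                v[i] -= step
                aux += step
            elif v[i] < aux - step / 2:    # auxiliary storage charges the battery
                v[i] += step
                aux -= step
    return v, max_cycles

final, cycles = balance([12.48, 12.52, 12.41, 12.55], aux=12.50)
print(final, cycles)
```

A faster algorithm, as the abstract claims, would reduce the number of cycles needed before the spread check passes.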
{
"docid": "1dfe7a3e875436db76496931db34c7db",
"text": "Biologically detailed single neuron and network models are important for understanding how ion channels, synapses and anatomical connectivity underlie the complex electrical behavior of the brain. While neuronal simulators such as NEURON, GENESIS, MOOSE, NEST, and PSICS facilitate the development of these data-driven neuronal models, the specialized languages they employ are generally not interoperable, limiting model accessibility and preventing reuse of model components and cross-simulator validation. To overcome these problems we have used an Open Source software approach to develop NeuroML, a neuronal model description language based on XML (Extensible Markup Language). This enables these detailed models and their components to be defined in a standalone form, allowing them to be used across multiple simulators and archived in a standardized format. Here we describe the structure of NeuroML and demonstrate its scope by converting into NeuroML models of a number of different voltage- and ligand-gated conductances, models of electrical coupling, synaptic transmission and short-term plasticity, together with morphologically detailed models of individual neurons. We have also used these NeuroML-based components to develop an highly detailed cortical network model. NeuroML-based model descriptions were validated by demonstrating similar model behavior across five independently developed simulators. Although our results confirm that simulations run on different simulators converge, they reveal limits to model interoperability, by showing that for some models convergence only occurs at high levels of spatial and temporal discretisation, when the computational overhead is high. Our development of NeuroML as a common description language for biophysically detailed neuronal and network models enables interoperability across multiple simulation environments, thereby improving model transparency, accessibility and reuse in computational neuroscience.",
"title": ""
},
{
"docid": "4cd7f19d0413f9bab1a2cda5a5b7a9a4",
"text": "Web-based learning plays a vital role in the modern education system, where different technologies are being emerged to enhance this E-learning process. Therefore virtual and online laboratories are gaining popularity due to its easy implementation and accessibility worldwide. These types of virtual labs are useful where the setup of the actual laboratory is complicated due to several factors such as high machinery or hardware cost. This paper presents a very efficient method of building a model using JavaScript Web Graphics Library with HTML5 enabled and having controllable features inbuilt. This type of program is free from any web browser plug-ins or application and also server independent. Proprietary software has always been a bottleneck in the development of such platforms. This approach rules out this issue and can easily applicable. Here the framework has been discussed and neatly elaborated with an example of a simplified robot configuration.",
"title": ""
},
{
"docid": "97a7c48145d682a9ed45109d83c82a73",
"text": "We introduce a large dataset of narrative texts and questions about these texts, intended to be used in a machine comprehension task that requires reasoning using commonsense knowledge. Our dataset complements similar datasets in that we focus on stories about everyday activities, such as going to the movies or working in the garden, and that the questions require commonsense knowledge, or more specifically, script knowledge, to be answered. We show that our mode of data collection via crowdsourcing results in a substantial amount of such inference questions. The dataset forms the basis of a shared task on commonsense and script knowledge organized at SemEval 2018 and provides challenging test cases for the broader natural language understanding community.",
"title": ""
},
{
"docid": "e60c295d02b87d4c88e159a3343e0dcb",
"text": "In 2163 personally interviewed female twins from a population-based registry, the pattern of age at onset and comorbidity of the simple phobias (animal and situational)--early onset and low rates of comorbidity--differed significantly from that of agoraphobia--later onset and high rates of comorbidity. Consistent with an inherited \"phobia proneness\" but not a \"social learning\" model of phobias, the familial aggregation of any phobia, agoraphobia, social phobia, and animal phobia appeared to result from genetic and not from familial-environmental factors, with estimates of heritability of liability ranging from 30% to 40%. The best-fitting multivariate genetic model indicated the existence of genetic and individual-specific environmental etiologic factors common to all four phobia subtypes and others specific for each of the individual subtypes. This model suggested that (1) environmental experiences that predisposed to all phobias were most important for agoraphobia and social phobia and relatively unimportant for the simple phobias, (2) environmental experiences that uniquely predisposed to only one phobia subtype had a major impact on simple phobias, had a modest impact on social phobia, and were unimportant for agoraphobia, and (3) genetic factors that predisposed to all phobias were most important for animal phobia and least important for agoraphobia. Simple phobias appear to arise from the joint effect of a modest genetic vulnerability and phobia-specific traumatic events in childhood, while agoraphobia and, to a somewhat lesser extent, social phobia result from the combined effect of a slightly stronger genetic influence and nonspecific environmental experiences.",
"title": ""
},
{
"docid": "d4cdea26217e90002a3c4522124872a2",
"text": "Recently, several methods for single image super-resolution(SISR) based on deep neural networks have obtained high performance with regard to reconstruction accuracy and computational performance. This paper details the methodology and results of the New Trends in Image Restoration and Enhancement (NTIRE) challenge. The task of this challenge is to restore rich details (high frequencies) in a high resolution image for a single low resolution input image based on a set of prior examples with low and corresponding high resolution images. The challenge has two tracks. We present a super-resolution (SR) method, which uses three losses assigned with different weights to be regarded as optimization target. Meanwhile, the residual blocks are also used for obtaining significant improvement in the evaluation. The final model consists of 9 weight layers with four residual blocks and reconstructs the low resolution image with three color channels simultaneously, which shows better performance on these two tracks and benchmark datasets.",
"title": ""
},
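The SR method above stacks residual blocks inside a nine-layer network. Residual blocks are a standard building element; the PyTorch sketch below shows one in its usual conv-ReLU-conv-plus-skip form, with a channel count and kernel size chosen for illustration rather than taken from the challenge entry.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """conv -> ReLU -> conv, with an identity skip connection."""
    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size, padding=pad),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size, padding=pad),
        )

    def forward(self, x):
        return x + self.body(x)    # skip connection: output = input + residual

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock()(x).shape)    # torch.Size([1, 64, 32, 32])
```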
{
"docid": "e73de1e6f191fef625f75808d7fbfbb1",
"text": "Colon cancer is one of the most prevalent diseases across the world. Numerous epidemiological studies indicate that diets rich in fruit, such as berries, provide significant health benefits against several types of cancer, including colon cancer. The anticancer activities of berries are attributed to their high content of phytochemicals and to their relevant antioxidant properties. In vitro and in vivo studies have demonstrated that berries and their bioactive components exert therapeutic and preventive effects against colon cancer by the suppression of inflammation, oxidative stress, proliferation and angiogenesis, through the modulation of multiple signaling pathways such as NF-κB, Wnt/β-catenin, PI3K/AKT/PKB/mTOR, and ERK/MAPK. Based on the exciting outcomes of preclinical studies, a few berries have advanced to the clinical phase. A limited number of human studies have shown that consumption of berries can prevent colorectal cancer, especially in patients at high risk (familial adenopolyposis or aberrant crypt foci, and inflammatory bowel diseases). In this review, we aim to highlight the findings of berries and their bioactive compounds in colon cancer from in vitro and in vivo studies, both on animals and humans. Thus, this review could be a useful step towards the next phase of berry research in colon cancer.",
"title": ""
},
{
"docid": "8f24898cb21a259d9260b67202141d49",
"text": "PROBLEM\nHow can human contributions to accidents be reconstructed? Investigators can easily take the position a of retrospective outsider, looking back on a sequence of events that seems to lead to an inevitable outcome, and pointing out where people went wrong. This does not explain much, however, and may not help prevent recurrence.\n\n\nMETHOD AND RESULTS\nThis paper examines how investigators can reconstruct the role that people contribute to accidents in light of what has recently become known as the new view of human error. The commitment of the new view is to move controversial human assessments and actions back into the flow of events of which they were part and which helped bring them forth, to see why assessments and actions made sense to people at the time. The second half of the paper addresses one way in which investigators can begin to reconstruct people's unfolding mindsets.\n\n\nIMPACT ON INDUSTRY\nIn an era where a large portion of accidents are attributed to human error, it is critical to understand why people did what they did, rather than judging them for not doing what we now know they should have done. This paper helps investigators avoid the traps of hindsight by presenting a method with which investigators can begin to see how people's actions and assessments actually made sense at the time.",
"title": ""
},
{
"docid": "f9f26d8ff95aff0a361fcb321e57a779",
"text": "A novel algorithm for the detection of underwater man-made objects in forwardlooking sonar imagery is proposed. The algorithm takes advantage of the integral-image representation to quickly compute features, and progressively reduces the computational load by working on smaller portions of the image along the detection process phases. By adhering to the proposed scheme, real-time detection on sonar data onboard an autonomous vehicle is made possible. The proposed method does not require training data, as it dynamically takes into account environmental characteristics of the sensed sonar data. The proposed approach has been implemented and integrated into the software system of the Gemellina autonomous surface vehicle, and is able to run in real time. The validity of the proposed approach is demonstrated on real experiments carried out at sea with the Gemellina autonomous surface vehicle.",
"title": ""
},
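The sonar detector above leans on the integral-image representation to evaluate rectangular features quickly. A minimal NumPy sketch of that representation and a constant-time box sum follows; it is generic and not tied to the paper's sonar pipeline.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row/left column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
print(box_sum(ii, 1, 1, 3, 3))     # 30.0
```

Once the table is built in a single pass, any rectangular feature costs four lookups regardless of its size, which is what makes sliding-window detection on large sonar frames feasible in real time.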
{
"docid": "d0a6ca9838f8844077fdac61d1d75af1",
"text": "Depth-first search, as developed by Tarjan and coauthors, is a fundamental technique of efficient algorithm design for graphs [23]. This note presents depth-first search algorithms for two basic problems, strong and biconnected components. Previous algorithms either compute auxiliary quantities based on the depth-first search tree (e.g., LOWPOINT values) or require two passes. We present one-pass algorithms that only maintain a representation of the depth-first search path. This gives a simplified view of depth-first search without sacrificing efficiency. In greater detail, most depth-first search algorithms (e.g., [23,10,11]) compute so-called LOWPOINT values that are defined in terms of the depth-first search tree. Because of the success of this method LOWPOINT values have become almost synonymous with depth-first search. LOWPOINT values are regarded as crucial in the strong and biconnected component algorithms, e.g., [14, pp. 94, 514]. Tarjan’s LOWPOINT method for strong components is presented in texts [1, 7,14,16,17,21]. The strong component algorithm of Kosaraju and Sharir [22] is often viewed as conceptu-",
"title": ""
},
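The note above advocates one-pass strong-component algorithms that maintain only a representation of the depth-first search path instead of LOWPOINT values. The Python sketch below follows that idea with the usual two stacks (the full stack S and the stack P of open component roots); it is a textbook rendering of the path-based algorithm, not code from the paper, and its recursive DFS assumes graphs shallow enough for Python's recursion limit.

```python
def strong_components(graph):
    """Path-based strong components: one DFS pass, two stacks, no LOWPOINT values."""
    preorder, comp = {}, {}
    S, P = [], []      # S: vertices reached but not yet assigned; P: open component roots
    counter = 0

    def dfs(v):
        nonlocal counter
        preorder[v] = counter
        counter += 1
        S.append(v)
        P.append(v)
        for w in graph.get(v, ()):
            if w not in preorder:
                dfs(w)
            elif w not in comp:                    # w is still open: contract the cycle
                while preorder[P[-1]] > preorder[w]:
                    P.pop()
        if P and P[-1] == v:                       # v is the root of a completed component
            P.pop()
            while True:
                w = S.pop()
                comp[w] = v                        # label each member by its root
                if w == v:
                    break

    for v in graph:
        if v not in preorder:
            dfs(v)
    return comp

g = {1: [2], 2: [3], 3: [1, 4], 4: [5], 5: [4]}
print(strong_components(g))                        # {1, 2, 3} share root 1; {4, 5} share root 4
```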
{
"docid": "e7e8fe5532d1cb32a7233bc4c99ac3b8",
"text": "The concept of network slicing opens the possibilities to address the complex requirements of multi-tenancy in 5G. To this end, SDN/NFV can act as technology enabler. This paper presents a centralised and dynamic approach for creating and provisioning network slices for virtual network operators' consumption to offer services to their end customers, focusing on an SDN wireless backhaul use case. We demonstrate our approach for dynamic end-to-end slice and service provisioning in a testbed.",
"title": ""
},
{
"docid": "f75b11bc21dc711b76a7a375c2a198d3",
"text": "In many application areas like e-science and data-warehousing detailed information about the origin of data is required. This kind of information is often referred to as data provenance or data lineage. The provenance of a data item includes information about the processes and source data items that lead to its creation and current representation. The diversity of data representation models and application domains has lead to a number of more or less formal definitions of provenance. Most of them are limited to a special application domain, data representation model or data processing facility. Not surprisingly, the associated implementations are also restricted to some application domain and depend on a special data model. In this paper we give a survey of data provenance models and prototypes, present a general categorization scheme for provenance models and use this categorization scheme to study the properties of the existing approaches. This categorization enables us to distinguish between different kinds of provenance information and could lead to a better understanding of provenance in general. Besides the categorization of provenance types, it is important to include the storage, transformation and query requirements for the different kinds of provenance information and application domains in our considerations. The analysis of existing approaches will assist us in revealing open research problems in the area of data provenance.",
"title": ""
},
{
"docid": "d1a4abaa57f978858edf0d7b7dc506ba",
"text": "Abstraction in imagery results from the strategic simplification and elimination of detail to clarify the visual structure of the depicted shape. It is a mainstay of artistic practice and an important ingredient of effective visual communication. We develop a computational method for the abstract depiction of 2D shapes. Our approach works by organizing the shape into parts using a new synthesis of holistic features of the part shape, local features of the shape boundary, and global aspects of shape organization. Our abstractions are new shapes with fewer and clearer parts.",
"title": ""
},
{
"docid": "9cf8a2f73a906f7dc22c2d4fbcf8fa6b",
"text": "In this paper the effect of spoilers on aerodynamic characteristics of an airfoil were observed by CFD.As the experimental airfoil NACA 2415 was choosen and spoiler was extended from five different positions based on the chord length C. Airfoil section is designed with a spoiler extended at an angle of 7 degree with the horizontal.The spoiler extends to 0.15C.The geometry of 2-D airfoil without spoiler and with spoiler was designed in GAMBIT.The numerical simulation was performed by ANS YS Fluent to observe the effect of spoiler position on the aerodynamic characteristics of this particular airfoil. The results obtained from the computational process were plotted on graph and the conceptual assumptions were verified as the lift is reduced and the drag is increased that obeys the basic function of a spoiler. Coefficient of drag. I. INTRODUCTION An airplane wing has a special shape called an airfoil. As a wing moves through air, the air is split and passes above and below the wing. The wing's upper surface is shaped so the air rushing over the top speeds up and stretches out. This decreases the air pressure above the wing. The air flowing below the wing moves in a straighter line, so its speed and air pressure remains the same. Since high air pressure always moves toward low air pressure, the air below the wing pushes upward toward the air above the wing. The wing is in the middle, and the whole wing is ―lifted‖. The faster an airplane moves, the more lift there is and when the force of lift is greater than the force of gravity, the airplane is able to fly. [1] A spoiler, sometimes called a lift dumper is a device intended to reduce lift in an aircraft. Spoilers are plates on the top surface of a wing which can be extended upward into the airflow and spoil it. By doing so, the spoiler creates a carefully controlled stall over the portion of the wing behind it, greatly reducing the lift of that wing section. Spoilers are designed to reduce lift also making considerable increase in drag. Spoilers increase drag and reduce lift on the wing. If raised on only one wing, they aid roll control, causing that wing to drop. If the spoilers rise symmetrically in flight, the aircraft can either be slowed in level flight or can descend rapidly without an increase in airspeed. When the …",
"title": ""
},
{
"docid": "9ce232e2a49652ee7fbfe24c6913d52a",
"text": "Anthropometric quantities are widely used in epidemiologic research as possible confounders, risk factors, or outcomes. 3D laser-based body scans (BS) allow evaluation of dozens of quantities in short time with minimal physical contact between observers and probands. The aim of this study was to compare BS with classical manual anthropometric (CA) assessments with respect to feasibility, reliability, and validity. We performed a study on 108 individuals with multiple measurements of BS and CA to estimate intra- and inter-rater reliabilities for both. We suggested BS equivalents of CA measurements and determined validity of BS considering CA the gold standard. Throughout the study, the overall concordance correlation coefficient (OCCC) was chosen as indicator of agreement. BS was slightly more time consuming but better accepted than CA. For CA, OCCCs for intra- and inter-rater reliability were greater than 0.8 for all nine quantities studied. For BS, 9 of 154 quantities showed reliabilities below 0.7. BS proxies for CA measurements showed good agreement (minimum OCCC > 0.77) after offset correction. Thigh length showed higher reliability in BS while upper arm length showed higher reliability in CA. Except for these issues, reliabilities of CA measurements and their BS equivalents were comparable.",
"title": ""
},
{
"docid": "56667d286f69f8429be951ccf5d61c24",
"text": "As the Internet of Things (IoT) is emerging as an attractive paradigm, a typical IoT architecture that U2IoT (Unit IoT and Ubiquitous IoT) model has been presented for the future IoT. Based on the U2IoT model, this paper proposes a cyber-physical-social based security architecture (IPM) to deal with Information, Physical, and Management security perspectives, and presents how the architectural abstractions support U2IoT model. In particular, 1) an information security model is established to describe the mapping relations among U2IoT, security layer, and security requirement, in which social layer and additional intelligence and compatibility properties are infused into IPM; 2) physical security referring to the external context and inherent infrastructure are inspired by artificial immune algorithms; 3) recommended security strategies are suggested for social management control. The proposed IPM combining the cyber world, physical world and human social provides constructive proposal towards the future IoT security and privacy protection.",
"title": ""
},
{
"docid": "886c284d72a01db9bc4eb9467e14bbbb",
"text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.",
"title": ""
},
{
"docid": "b4586447ef1536f23793651fcd9d71b8",
"text": "State monitoring is widely used for detecting critical events and abnormalities of distributed systems. As the scale of such systems grows and the degree of workload consolidation increases in Cloud data centers, node failures and performance interferences, especially transient ones, become the norm rather than the exception. Hence, distributed state monitoring tasks are often exposed to impaired communication caused by such dynamics on different nodes. Unfortunately, existing distributed state monitoring approaches are often designed under the assumption of always-online distributed monitoring nodes and reliable inter-node communication. As a result, these approaches often produce misleading results which in turn introduce various problems to Cloud users who rely on state monitoring results to perform automatic management tasks such as auto-scaling. This paper introduces a new state monitoring approach that tackles this challenge by exposing and handling communication dynamics such as message delay and loss in Cloud monitoring environments. Our approach delivers two distinct features. First, it quantitatively estimates the accuracy of monitoring results to capture uncertainties introduced by messaging dynamics. This feature helps users to distinguish trustworthy monitoring results from ones heavily deviated from the truth, yet significantly improves monitoring utility compared with simple techniques that invalidate all monitoring results generated with the presence of messaging dynamics. Second, our approach also adapts to non-transient messaging issues by reconfiguring distributed monitoring algorithms to minimize monitoring errors. Our experimental results show that, even under severe message loss and delay, our approach consistently improves monitoring accuracy, and when applied to Cloud application auto-scaling, outperforms existing state monitoring techniques in terms of the ability to correctly trigger dynamic provisioning.",
"title": ""
},
{
"docid": "c432a44e48e777a7a3316c1474f0aa12",
"text": "In this paper, we present an algorithm that generates high dynamic range (HDR) images from multi-exposed low dynamic range (LDR) stereo images. The vast majority of cameras in the market only capture a limited dynamic range of a scene. Our algorithm first computes the disparity map between the stereo images. The disparity map is used to compute the camera response function which in turn results in the scene radiance maps. A refinement step for the disparity map is then applied to eliminate edge artifacts in the final HDR image. Existing methods generate HDR images of good quality for still or slow motion scenes, but give defects when the motion is fast. Our algorithm can deal with images taken during fast motion scenes and tolerate saturation and radiometric changes better than other stereo matching algorithms.",
"title": ""
},
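The pipeline above recovers the camera response and merges differently exposed LDR images into a radiance map. Leaving the stereo-disparity step aside, the NumPy sketch below shows the standard weighted merge of aligned exposures into radiance, assuming a linear (already-calibrated) response; the hat-shaped weighting is a common choice, not necessarily the paper's.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Weighted average of log-radiance estimates from aligned LDR exposures.

    images: list of float arrays in [0, 1], same shape, already aligned.
    Assumes a linear camera response (response curve already applied).
    """
    eps = 1e-6
    log_radiance = np.zeros_like(images[0], dtype=np.float64)
    weight_sum = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)          # hat weight: trust mid-tones most
        log_radiance += w * (np.log(img + eps) - np.log(t))
        weight_sum += w
    return np.exp(log_radiance / (weight_sum + eps))

# toy usage: one scene "captured" at three exposure times
rng = np.random.default_rng(0)
scene = rng.uniform(0.05, 5.0, size=(4, 4))        # true radiance
times = [1 / 60, 1 / 15, 1 / 4]
ldr = [np.clip(scene * t, 0, 1) for t in times]
print(merge_hdr(ldr, times))
```

In the stereo setting of the paper, the two views first have to be registered via the disparity map so that the per-pixel merge above operates on corresponding scene points.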
{
"docid": "0151ad8176711618e6cd5b0e20abf0cb",
"text": "Skeleton-based action recognition has made great progress recently, but many problems still remain unsolved. For example, the representations of skeleton sequences captured by most of the previous methods lack spatial structure information and detailed temporal dynamics features. In this paper, we propose a novel model with spatial reasoning and temporal stack learning (SR-TSL) for skeleton-based action recognition, which consists of a spatial reasoning network (SRN) and a temporal stack learning network (TSLN). The SRN can capture the high-level spatial structural information within each frame by a residual graph neural network, while the TSLN can model the detailed temporal dynamics of skeleton sequences by a composition of multiple skip-clip LSTMs. During training, we propose a clip-based incremental loss to optimize the model. We perform extensive experiments on the SYSU 3D Human-Object Interaction dataset and NTU RGB+D dataset and verify the effectiveness of each network of our model. The comparison results illustrate that our approach achieves much better results than the state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
78ef5df49c026a283f4a35ecc8afc66a
|
A vision system for traffic sign detection and recognition
|
[
{
"docid": "cdf2235bea299131929700406792452c",
"text": "Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Houghlike voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.",
"title": ""
},
{
"docid": "0789a3b04923fb5d586971ccaf75aec6",
"text": "In this paper, we propose a high-performance traffic sign recognition (TSR) framework to rapidly detect and recognize multiclass traffic signs in high-resolution images. This framework includes three parts: a novel region-of-interest (ROI) extraction method called the high-contrast region extraction (HCRE), the split-flow cascade tree detector (SFC-tree detector), and a rapid occlusion-robust traffic sign classification method based on the extended sparse representation classification (ESRC). Unlike the color-thresholding or extreme region extraction methods used by previous ROI methods, the ROI extraction method of the HCRE is designed to extract ROI with high local contrast, which can keep a good balance of the detection rate and the extraction rate. The SFC-tree detector can detect a large number of different types of traffic signs in high-resolution images quickly. The traffic sign classification method based on the ESRC is designed to classify traffic signs with partial occlusion. Instead of solving the sparse representation problem using an overcomplete dictionary, the classification method based on the ESRC utilizes a content dictionary and an occlusion dictionary to sparsely represent traffic signs, which can largely reduce the dictionary size in the occlusion-robust dictionaries and achieve high accuracy. The experiments demonstrate the advantage of the proposed approach, and our TSR framework can rapidly detect and recognize multiclass traffic signs with high accuracy.",
"title": ""
}
] |
[
{
"docid": "68cf9884548278e2b4dcec62e29d3122",
"text": "BACKGROUND\nVitamin D is crucial for maintenance of musculoskeletal health, and might also have a role in extraskeletal tissues. Determinants of circulating 25-hydroxyvitamin D concentrations include sun exposure and diet, but high heritability suggests that genetic factors could also play a part. We aimed to identify common genetic variants affecting vitamin D concentrations and risk of insufficiency.\n\n\nMETHODS\nWe undertook a genome-wide association study of 25-hydroxyvitamin D concentrations in 33 996 individuals of European descent from 15 cohorts. Five epidemiological cohorts were designated as discovery cohorts (n=16 125), five as in-silico replication cohorts (n=9367), and five as de-novo replication cohorts (n=8504). 25-hydroxyvitamin D concentrations were measured by radioimmunoassay, chemiluminescent assay, ELISA, or mass spectrometry. Vitamin D insufficiency was defined as concentrations lower than 75 nmol/L or 50 nmol/L. We combined results of genome-wide analyses across cohorts using Z-score-weighted meta-analysis. Genotype scores were constructed for confirmed variants.\n\n\nFINDINGS\nVariants at three loci reached genome-wide significance in discovery cohorts for association with 25-hydroxyvitamin D concentrations, and were confirmed in replication cohorts: 4p12 (overall p=1.9x10(-109) for rs2282679, in GC); 11q12 (p=2.1x10(-27) for rs12785878, near DHCR7); and 11p15 (p=3.3x10(-20) for rs10741657, near CYP2R1). Variants at an additional locus (20q13, CYP24A1) were genome-wide significant in the pooled sample (p=6.0x10(-10) for rs6013897). Participants with a genotype score (combining the three confirmed variants) in the highest quartile were at increased risk of having 25-hydroxyvitamin D concentrations lower than 75 nmol/L (OR 2.47, 95% CI 2.20-2.78, p=2.3x10(-48)) or lower than 50 nmol/L (1.92, 1.70-2.16, p=1.0x10(-26)) compared with those in the lowest quartile.\n\n\nINTERPRETATION\nVariants near genes involved in cholesterol synthesis, hydroxylation, and vitamin D transport affect vitamin D status. Genetic variation at these loci identifies individuals who have substantially raised risk of vitamin D insufficiency.\n\n\nFUNDING\nFull funding sources listed at end of paper (see Acknowledgments).",
"title": ""
},
{
"docid": "24c00b40221b905943efbda6a7d5121f",
"text": "In four experiments, this research sheds light on aesthetic experiences by rigorously investigating behavioral, neural, and psychological properties of package design. We find that aesthetic packages significantly increase the reaction time of consumers' choice responses; that they are chosen over products with well-known brands in standardized packages, despite higher prices; and that they result in increased activation in the nucleus accumbens and the ventromedial prefrontal cortex, according to functional magnetic resonance imaging (fMRI). The results suggest that reward value plays an important role in aesthetic product experiences. Further, a closer look at psychometric and neuroimaging data finds that a paper-and-pencil measure of affective product involvement correlates with aesthetic product experiences in the brain. Implications for future aesthetics research, package designers, and product managers are discussed. © 2010 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "6495f8c0217be9aea23e694abae248f1",
"text": "This paper describes the interactive narrative experiences in Babyz, an interactive entertainment product for the PC currently in development at PF Magic / Mindscape in San Francisco, to be released in October 1999. Babyz are believable agents designed and implemented in the tradition of Dogz and Catz, Your Virtual Petz. As virtual human characters, Babyz are more intelligent, expressive and communicative than their Petz predecessors, allowing for both broader and deeper narrative possibilities. Babyz are designed with behaviors to support entertaining short-term narrative experiences, as well as long-term emotional relationships and narratives.",
"title": ""
},
{
"docid": "d08529ef66abefda062a414acb278641",
"text": "Spend your few moment to read a book even only few pages. Reading book is not obligation and force for everybody. When you don't want to read, you can get punishment from the publisher. Read a book becomes a choice of your different characteristics. Many people with reading habit will always be enjoyable to read, or on the contrary. For some reasons, this inductive logic programming techniques and applications tends to be the representative book in this website.",
"title": ""
},
{
"docid": "dde695574d7007f6f6c5fc06b2d4468a",
"text": "A model of positive psychological functioning that emerges from diverse domains of theory and philosophy is presented. Six key dimensions of wellness are defined, and empirical research summarizing their empirical translation and sociodemographic correlates is presented. Variations in well-being are explored via studies of discrete life events and enduring human experiences. Life histories of the psychologically vulnerable and resilient, defined via the cross-classification of depression and well-being, are summarized. Implications of the focus on positive functioning for research on psychotherapy, quality of life, and mind/body linkages are reviewed.",
"title": ""
},
{
"docid": "bdd9760446a6412195e0742b5f1c7035",
"text": "Cyanobacteria are found globally due to their adaptation to various environments. The occurrence of cyanobacterial blooms is not a new phenomenon. The bloom-forming and toxin-producing species have been a persistent nuisance all over the world over the last decades. Evidence suggests that this trend might be attributed to a complex interplay of direct and indirect anthropogenic influences. To control cyanobacterial blooms, various strategies, including physical, chemical, and biological methods have been proposed. Nevertheless, the use of those strategies is usually not effective. The isolation of natural compounds from many aquatic and terrestrial plants and seaweeds has become an alternative approach for controlling harmful algae in aquatic systems. Seaweeds have received attention from scientists because of their bioactive compounds with antibacterial, antifungal, anti-microalgae, and antioxidant properties. The undesirable effects of cyanobacteria proliferations and potential control methods are here reviewed, focusing on the use of potent bioactive compounds, isolated from seaweeds, against microalgae and cyanobacteria growth.",
"title": ""
},
{
"docid": "a96209a2f6774062537baff5d072f72f",
"text": "In recent years, extensive research has been conducted in the area of Service Level Agreement (SLA) for utility computing systems. An SLA is a formal contract used to guarantee that consumers’ service quality expectation can be achieved. In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. Fundamental issue is the management of SLAs, including SLA autonomy management or trade off among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification for these extensive works. Therefore, the aim of this chapter is to present a comprehensive survey of how SLAs are created, managed and used in utility computing environment. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-art systems and emerging challenges for future research.",
"title": ""
},
{
"docid": "4f509a4fdc6bbffa45c214bc9267ea79",
"text": "Memory units have been widely used to enrich the capabilities of deep networks on capturing long-term dependencies in reasoning and prediction tasks, but little investigation exists on deep generative models (DGMs) which are good at inferring high-level invariant representations from unlabeled data. This paper presents a deep generative model with a possibly large external memory and an attention mechanism to capture the local detail information that is often lost in the bottom-up abstraction process in representation learning. By adopting a smooth attention model, the whole network is trained end-to-end by optimizing a variational bound of data likelihood via auto-encoding variational Bayesian methods, where an asymmetric recognition network is learnt jointly to infer high-level invariant representations. The asymmetric architecture can reduce the competition between bottom-up invariant feature extraction and top-down generation of instance details. Our experiments on several datasets demonstrate that memory can significantly boost the performance of DGMs on various tasks, including density estimation, image generation, and missing value imputation, and DGMs with memory can achieve state-ofthe-art quantitative results.",
"title": ""
},
{
"docid": "ec8f8f8611a4db6d70ba7913c3b80687",
"text": "Identifying building footprints is a critical and challenging problem in many remote sensing applications. Solutions to this problem have been investigated using a variety of sensing modalities as input. In this work, we consider the detection of building footprints from 3D Digital Surface Models (DSMs) created from commercial satellite imagery along with RGB orthorectified imagery. Recent public challenges (SpaceNet 1 and 2, DSTL Satellite Imagery Feature Detection Challenge, and the ISPRS Test Project on Urban Classification) approach this problem using other sensing modalities or higher resolution data. As a result of these challenges and other work, most publically available automated methods for building footprint detection using 2D and 3D data sources as input are meant for high-resolution 3D lidar and 2D airborne imagery, or make use of multispectral imagery as well to aid detection. Performance is typically degraded as the fidelity and post spacing of the 3D lidar data or the 2D imagery is reduced. Furthermore, most software packages do not work well enough with this type of data to enable a fully automated solution. We describe a public benchmark dataset consisting of 50 cm DSMs created from commercial satellite imagery, as well as coincident 50 cm RGB orthorectified imagery products. The dataset includes ground truth building outlines and we propose representative quantitative metrics for evaluating performance. In addition, we provide lessons learned and hope to promote additional research in this field by releasing this public benchmark dataset to the community.",
"title": ""
},
{
"docid": "085ec38c3e756504be93ac0b94483cea",
"text": "Low power wide area (LPWA) networks are making spectacular progress from design, standardization, to commercialization. At this time of fast-paced adoption, it is of utmost importance to analyze how well these technologies will scale as the number of devices connected to the Internet of Things inevitably grows. In this letter, we provide a stochastic geometry framework for modeling the performance of a single gateway LoRa network, a leading LPWA technology. Our analysis formulates the unique peculiarities of LoRa, including its chirp spread-spectrum modulation technique, regulatory limitations on radio duty cycle, and use of ALOHA protocol on top, all of which are not as common in today’s commercial cellular networks. We show that the coverage probability drops exponentially as the number of end-devices grows due to interfering signals using the same spreading sequence. We conclude that this fundamental limiting factor is perhaps more significant toward LoRa scalability than for instance spectrum restrictions. Our derivations for co-spreading factor interference found in LoRa networks enables rigorous scalability analysis of such networks.",
"title": ""
},
{
"docid": "a009fc320c5a61d8d8df33c19cd6037f",
"text": "Over the past decade, crowdsourcing has emerged as a cheap and efficient method of obtaining solutions to simple tasks that are difficult for computers to solve but possible for humans. The popularity and promise of crowdsourcing markets has led to both empirical and theoretical research on the design of algorithms to optimize various aspects of these markets, such as the pricing and assignment of tasks. Much of the existing theoretical work on crowdsourcing markets has focused on problems that fall into the broad category of online decision making; task requesters or the crowdsourcing platform itself make repeated decisions about prices to set, workers to filter out, problems to assign to specific workers, or other things. Often these decisions are complex, requiring algorithms that learn about the distribution of available tasks or workers over time and take into account the strategic (or sometimes irrational) behavior of workers.\n As human computation grows into its own field, the time is ripe to address these challenges in a principled way. However, it appears very difficult to capture all pertinent aspects of crowdsourcing markets in a single coherent model. In this paper, we reflect on the modeling issues that inhibit theoretical research on online decision making for crowdsourcing, and identify some steps forward. This paper grew out of the authors' own frustration with these issues, and we hope it will encourage the community to attempt to understand, debate, and ultimately address them.",
"title": ""
},
{
"docid": "ba41dfe1382ae0bc45d82d197b124382",
"text": "Business Intelligence (BI) deals with integrated approaches to management support. Currently, there are constraints to BI adoption and a new era of analytic data management for business intelligence these constraints are the integrated infrastructures that are subject to BI have become complex, costly, and inflexible, the effort required consolidating and cleansing enterprise data and Performance impact on existing infrastructure / inadequate IT infrastructure. So, in this paper Cloud computing will be used as a possible remedy for these issues. We will represent a new environment atmosphere for the business intelligence to make the ability to shorten BI implementation windows, reduced cost for BI programs compared with traditional on-premise BI software, Ability to add environments for testing, proof-of-concepts and upgrades, offer users the potential for faster deployments and increased flexibility. Also, Cloud computing enables organizations to analyze terabytes of data faster and more economically than ever before. Business intelligence (BI) in the cloud can be like a big puzzle. Users can jump in and put together small pieces of the puzzle but until the whole thing is complete the user will lack an overall view of the big picture. In this paper reading each section will fill in a piece of the puzzle.",
"title": ""
},
{
"docid": "5869ef6be3ca9a36dbf964c41e9b17b1",
"text": " The Short Messaging Service (SMS), one of the most successful cellular services, generating millions of dollars in revenue for mobile operators yearly. Current estimations indicate that billions of SMSs are sent every day. Nevertheless, text messaging is becoming a source of customer dissatisfaction due to the rapid surge of messaging abuse activities. Although spam is a well tackled problem in the email world, SMS spam experiences a yearly growth larger than 500%. In this paper we expand our previous analysis on SMS spam traffic from a tier-1 cellular operator presented in [1], aiming to highlight the main characteristics of such messaging fraud activity. Communication patterns of spammers are compared to those of legitimate cell-phone users and Machine to Machine (M2M) connected appliances. The results indicate that M2M systems exhibit communication profiles similar to spammers, which could mislead spam filters. We find the main geographical sources of messaging abuse in the US. We also find evidence of spammer mobility, voice and data traffic resembling the behavior of legitimate customers. Finally, we include new findings on the invariance of the main characteristics of spam messages and spammers over time. Also, we present results that indicate a clear device reuse strategy in SMS spam activities.",
"title": ""
},
{
"docid": "7c0ef25b2a4d777456facdfc526cf206",
"text": "The paper presents a novel approach to unsupervised text summarization. The novelty lies in exploiting the diversity of concepts in text for summarization, which has not received much attention in the summarization literature. A diversity-based approach here is a principled generalization of Maximal Marginal Relevance criterion by Carbonell and Goldstein \\cite{carbonell-goldstein98}.\nWe propose, in addition, aninformation-centricapproach to evaluation, where the quality of summaries is judged not in terms of how well they match human-created summaries but in terms of how well they represent their source documents in IR tasks such document retrieval and text categorization.\nTo find the effectiveness of our approach under the proposed evaluation scheme, we set out to examine how a system with the diversity functionality performs against one without, using the BMIR-J2 corpus, a test data developed by a Japanese research consortium. The results demonstrate a clear superiority of a diversity based approach to a non-diversity based approach.",
"title": ""
},
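The diversity criterion in the abstract above generalizes Maximal Marginal Relevance (MMR). As a rough illustration of the underlying idea rather than the paper's exact formulation, the sketch below selects each next sentence by trading off relevance to the query against similarity to sentences already chosen; the word-overlap similarity and the lambda value are assumptions for the toy example.

# Minimal MMR-style sentence selection. `relevance` scores each sentence
# against the query; `similarity` compares two sentences. Both are assumed
# to be supplied (e.g., cosine similarity over TF-IDF vectors in practice).
def mmr_select(sentences, relevance, similarity, k=3, lam=0.7):
    selected = []
    candidates = list(sentences)
    while candidates and len(selected) < k:
        def mmr_score(s):
            redundancy = max((similarity(s, t) for t in selected), default=0.0)
            return lam * relevance(s) - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage with a word-overlap similarity (purely illustrative).
def overlap(a, b):
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(1, len(wa | wb))

docs = ["cats chase mice", "cats chase small mice", "dogs fetch balls"]
query = "cats and dogs"
print(mmr_select(docs, relevance=lambda s: overlap(s, query), similarity=overlap, k=2))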
{
"docid": "45940a48b86645041726120fb066a1fa",
"text": "For large state-space Markovian Decision Problems MonteCarlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.",
"title": ""
},
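UCT applies the UCB1 bandit rule at each node of a Monte-Carlo search tree. The sketch below shows only that selection-and-update core; the exploration constant and the surrounding search loop are simplifying assumptions, not the paper's full procedure.

import math
import random

# UCB1-style action selection as used at each internal node of a UCT tree.
# counts[a] is how often action a was tried at this node, values[a] its mean return.
def uct_select(actions, counts, values, c=1.4):
    total = sum(counts[a] for a in actions)
    untried = [a for a in actions if counts[a] == 0]
    if untried:
        return random.choice(untried)  # try every action once before exploiting
    return max(actions, key=lambda a: values[a]
               + c * math.sqrt(math.log(total) / counts[a]))

# After a simulated rollout through action a returns `reward`:
def uct_update(a, reward, counts, values):
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]  # incremental mean

# Toy usage at a single node.
actions = ["left", "right"]
counts = {a: 0 for a in actions}
values = {a: 0.0 for a in actions}
a = uct_select(actions, counts, values)
uct_update(a, reward=1.0, counts=counts, values=values)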
{
"docid": "9379cad59abab5e12c97a9b92f4aeb93",
"text": "SigTur/E-Destination is a Web-based system that provides personalized recommendations of touristic activities in the region of Tarragona. The activities are properly classified and labeled according to a specific ontology, which guides the reasoning process. The recommender takes into account many different kinds of data: demographic information, travel motivations, the actions of the user on the system, the ratings provided by the user, the opinions of users with similar demographic characteristics or similar tastes, etc. The system has been fully designed and implemented in the Science and Technology Park of Tourism and Leisure. The paper presents a numerical evaluation of the correlation between the recommendations and the user’s motivations, and a qualitative evaluation performed by end users. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "641049f7bdf194b3c326298c5679c469",
"text": "Acknowledgements Research in areas where there are many possible paths to follow requires a keen eye for crucial issues. The study of learning systems is such an area. Through the years of working with Andy Barto and Rich Sutton, I have observed many instances of \" fluff cutting \" and the exposure of basic issues. I thank both Andy and Rich for the insights that have rubbed off on me. I also thank Andy for opening up an infinite world of perspectives on learning, ranging from engineering principles to neural processing theories. I thank Rich for showing me the most important step in doing \" science \" —simplify your questions by isolating the issues. Several people contributed to the readability of this dissertation. Andy spent much time carefully reading several drafts. Through his efforts the clarity is much improved. I thank Paul Utgoff, Michael Arbib, and Bill Kilmer for reading drafts of this dissertation and providing valuable criticisms. Paul provided a non-connectionist perspective that widened my view considerably. He never hesitated to work out differences in terms and methodologies that have been developed through research with connectionist vs. symbolic representations. I thank for commenting on an early draft and for many interesting discussions. and the AFOSR for starting and maintaining the research project that supported the work reported in this dis-sertation. I thank Susan Parker for the skill with which she administered the project. And I thank the COINS Department at UMass and the RCF Staff for the maintenance of the research computing environment. Much of the computer graphics software used to generate figures of this dissertation is based on graphics tools provided by Rich Sutton and Andy Cromarty. Most importantly, I thank Stacey and Joseph for always being there to lift my spirits while I pursued distant milestones and to share my excitement upon reaching them. Their faith and confidence helped me maintain a proper perspective. The difficulties of learning in multilayered networks of computational units has limited the use of connectionist systems in complex domains. This dissertation elucidates the issues of learning in a network's hidden units, and reviews methods for addressing these issues that have been developed through the years. Issues of learning in hidden units are shown to be analogous to learning issues for multilayer systems employing symbolic representations. Comparisons of a number of algorithms for learning in hidden units are made by applying them in …",
"title": ""
},
{
"docid": "d23c5fc626d0f7b1d9c6c080def550b8",
"text": "Gamification of education is a developing approach for increasing learners’ motivation and engagement by incorporating game design elements in educational environments. With the growing popularity of gamification and yet mixed success of its application in educational contexts, the current review is aiming to shed a more realistic light on the research in this field by focusing on empirical evidence rather than on potentialities, beliefs or preferences. Accordingly, it critically examines the advancement in gamifying education. The discussion is structured around the used gamification mechanisms, the gamified subjects, the type of gamified learning activities, and the study goals, with an emphasis on the reliability and validity of the reported outcomes. To improve our understanding and offer a more realistic picture of the progress of gamification in education, consistent with the presented evidence, we examine both the outcomes reported in the papers and how they have been obtained. While the gamification in education is still a growing phenomenon, the review reveals that (i) insufficient evidence exists to support the long-term benefits of gamification in educational contexts; (ii) the practice of gamifying learning has outpaced researchers’ understanding of its mechanisms and methods; (iii) the knowledge of how to gamify an activity in accordance with the specifics of the educational context is still limited. The review highlights the need for systematically designed studies and rigorously tested approaches confirming the educational benefits of gamification, if gamified learning is to become a recognized instructional approach.",
"title": ""
},
{
"docid": "2b745b41b0495ab7adad321080ce2228",
"text": "In any teaching and learning setting, there are some variables that play a highly significant role in both teachers’ and learners’ performance. Two of these influential psychological domains in educational context include self-efficacy and burnout. This study is conducted to investigate the relationship between the self-efficacy of Iranian teachers of English and their reports of burnout. The data was collected through application of two questionnaires. The Maslach Burnout Inventory (MBI; Maslach& Jackson 1981, 1986) and Teacher Efficacy Scales (Woolfolk& Hoy, 1990) were administered to ten university teachers. After obtaining the raw data, the SPSS software (version 16) was used to change the data into numerical interpretable forms. In order to determine the relationship between self-efficacy and teachers’ burnout, correlational analysis was employed. The results showed that participants’ self-efficacy has a reverse relationship with their burnout.",
"title": ""
},
{
"docid": "636ace52ca3377809326735810a08310",
"text": "BACKGROUND\nAlthough many patients with venous thromboembolism require extended treatment, it is uncertain whether it is better to use full- or lower-intensity anticoagulation therapy or aspirin.\n\n\nMETHODS\nIn this randomized, double-blind, phase 3 study, we assigned 3396 patients with venous thromboembolism to receive either once-daily rivaroxaban (at doses of 20 mg or 10 mg) or 100 mg of aspirin. All the study patients had completed 6 to 12 months of anticoagulation therapy and were in equipoise regarding the need for continued anticoagulation. Study drugs were administered for up to 12 months. The primary efficacy outcome was symptomatic recurrent fatal or nonfatal venous thromboembolism, and the principal safety outcome was major bleeding.\n\n\nRESULTS\nA total of 3365 patients were included in the intention-to-treat analyses (median treatment duration, 351 days). The primary efficacy outcome occurred in 17 of 1107 patients (1.5%) receiving 20 mg of rivaroxaban and in 13 of 1127 patients (1.2%) receiving 10 mg of rivaroxaban, as compared with 50 of 1131 patients (4.4%) receiving aspirin (hazard ratio for 20 mg of rivaroxaban vs. aspirin, 0.34; 95% confidence interval [CI], 0.20 to 0.59; hazard ratio for 10 mg of rivaroxaban vs. aspirin, 0.26; 95% CI, 0.14 to 0.47; P<0.001 for both comparisons). Rates of major bleeding were 0.5% in the group receiving 20 mg of rivaroxaban, 0.4% in the group receiving 10 mg of rivaroxaban, and 0.3% in the aspirin group; the rates of clinically relevant nonmajor bleeding were 2.7%, 2.0%, and 1.8%, respectively. The incidence of adverse events was similar in all three groups.\n\n\nCONCLUSIONS\nAmong patients with venous thromboembolism in equipoise for continued anticoagulation, the risk of a recurrent event was significantly lower with rivaroxaban at either a treatment dose (20 mg) or a prophylactic dose (10 mg) than with aspirin, without a significant increase in bleeding rates. (Funded by Bayer Pharmaceuticals; EINSTEIN CHOICE ClinicalTrials.gov number, NCT02064439 .).",
"title": ""
}
] |
scidocsrr
|
855900f2bbf809a36d65c33235267922
|
Manuka: A Batch-Shading Architecture for Spectral Path Tracing in Movie Production
|
[
{
"docid": "c491e39bbfb38f256e770d730a50b2e1",
"text": "Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.",
"title": ""
}
] |
[
{
"docid": "26d06b650cffb1bf50d059087b307119",
"text": "Algorithms and decision making based on Big Data have become pervasive in all aspects of our daily lives lives (offline and online), as they have become essential tools in personal finance, health care, hiring, housing, education, and policies. It is therefore of societal and ethical importance to ask whether these algorithms can be discriminative on grounds such as gender, ethnicity, or health status. It turns out that the answer is positive: for instance, recent studies in the context of online advertising show that ads for high-income jobs are presented to men much more often than to women [Datta et al., 2015]; and ads for arrest records are significantly more likely to show up on searches for distinctively black names [Sweeney, 2013]. This algorithmic bias exists even when there is no discrimination intention in the developer of the algorithm. Sometimes it may be inherent to the data sources used (software making decisions based on data can reflect, or even amplify, the results of historical discrimination), but even when the sensitive attributes have been suppressed from the input, a well trained machine learning algorithm may still discriminate on the basis of such sensitive attributes because of correlations existing in the data. These considerations call for the development of data mining systems which are discrimination-conscious by-design. This is a novel and challenging research area for the data mining community.\n The aim of this tutorial is to survey algorithmic bias, presenting its most common variants, with an emphasis on the algorithmic techniques and key ideas developed to derive efficient solutions. The tutorial covers two main complementary approaches: algorithms for discrimination discovery and discrimination prevention by means of fairness-aware data mining. We conclude by summarizing promising paths for future research.",
"title": ""
},
{
"docid": "110a60612f701575268fe3dbcf0d338f",
"text": "The Danish and Swedish male top football divisions were studied prospectively from January to June 2001. Exposure to football and injury incidence, severity and distribution were compared between the countries. Swedish players had greater exposure to training (171 vs. 123 h per season, P<0.001), whereas exposure to matches did not differ between the countries. There was a higher risk for injury during training in Denmark than in Sweden (11.8 vs. 6.0 per 1000 h, P<0.01), whereas for match play there was no difference (28.2 vs. 26.2 per 1000 h). The risk for incurring a major injury (absence from football more than 4 weeks) was greater in Denmark (1.8 vs. 0.7 per 1000 h, P = 0.002). The distribution of injuries according to type and location was similar in both countries. Of all injuries in Denmark and Sweden, overuse injury accounted for 39% and 38% (NS), and re-injury for 30% and 24% (P = 0.032), respectively. The greater training exposure and the long pre-season period in Sweden may explain some of the reported differences.",
"title": ""
},
{
"docid": "4deea3312fe396f81919b07462551682",
"text": "The purpose of this paper is to explore applications of blockchain technology related to the 4th Industrial Revolution (Industry 4.0) and to present an example where blockchain is employed to facilitate machine-to-machine (M2M) interactions and establish a M2M electricity market in the context of the chemical industry. The presented scenario includes two electricity producers and one electricity consumer trading with each other over a blockchain. The producers publish exchange offers of energy (in kWh) for currency (in USD) in a data stream. The consumer reads the offers, analyses them and attempts to satisfy its energy demand at a minimum cost. When an offer is accepted it is executed as an atomic exchange (multiple simultaneous transactions). Additionally, this paper describes and discusses the research and application landscape of blockchain technology in relation to the Industry 4.0. It concludes that this technology has significant under-researched potential to support and enhance the efficiency gains of the revolution and identifies areas for future research. Producer 2 • Issue energy • Post purchase offers (as atomic transactions) Consumer • Look through the posted offers • Choose cheapest and satisfy its own demand Blockchain Stream Published offers are visible here Offer sent",
"title": ""
},
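In the scenario described above, the consumer reads posted offers and tries to satisfy its demand at minimum cost. A simple greedy selection of that kind might look as follows; the offer schema (field names, units) is an assumption for illustration, not the actual stream format used in the paper.

# Greedy purchase of energy offers: buy from the cheapest offers first
# until the demanded amount of energy (in kWh) is covered.
def choose_offers(offers, demand_kwh):
    """offers: list of dicts with 'producer', 'kwh', 'usd_per_kwh' (assumed schema)."""
    plan, remaining = [], demand_kwh
    for offer in sorted(offers, key=lambda o: o["usd_per_kwh"]):
        if remaining <= 0:
            break
        take = min(offer["kwh"], remaining)
        plan.append((offer["producer"], take, take * offer["usd_per_kwh"]))
        remaining -= take
    return plan, remaining  # remaining > 0 means demand could not be met

offers = [
    {"producer": "P1", "kwh": 5, "usd_per_kwh": 0.12},
    {"producer": "P2", "kwh": 8, "usd_per_kwh": 0.09},
]
print(choose_offers(offers, demand_kwh=10))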
{
"docid": "3171587b5b4554d151694f41206bcb4e",
"text": "Embedded systems are ubiquitous in society and can contain information that could be used in criminal cases for example in a serious road traffic accident where the car management systems could provide vital forensic information concerning the engine speed etc. A critical review of a number of methods and procedures for the analysis of embedded systems were compared against a ‘standard’ methodology for use in a Forensic Computing Investigation. A Unified Forensic Methodology (UFM) has been developed that is forensically sound and capable of dealing with the analysis of a wide variety of Embedded Systems.",
"title": ""
},
{
"docid": "9c2609adae64ec8d0b4e2cc987628c05",
"text": "We propose a novel method capable of retrieving clips from untrimmed videos based on natural language queries. This cross-modal retrieval task plays a key role in visual-semantic understanding, and requires localizing clips in time and computing their similarity to the query sentence. Current methods generate sentence and video embeddings and then compare them using a late fusion approach, but this ignores the word order in queries and prevents more fine-grained comparisons. Motivated by the need for fine-grained multi-modal feature fusion, we propose a novel early fusion embedding approach that combines video and language information at the word level. Furthermore, we use the inverse task of dense video captioning as a side-task to improve the learned embedding. Our full model combines these components with an efficient proposal pipeline that performs accurate localization of potential video clips. We present a comprehensive experimental validation on two large-scale text-to-clip datasets (Charades-STA and DiDeMo) and attain state-ofthe-art retrieval results with our model.",
"title": ""
},
{
"docid": "106fefb169c7e95999fb411b4e07954e",
"text": "Additional contents in web pages, such as navigation panels, advertisements, copyrights and disclaimer notices, are typically not related to the main subject and may hamper the performance of Web data mining. They are traditionally taken as noises and need to be removed properly. To achieve this, two intuitive and crucial kinds of information—the textual information and the visual information of web pages—is considered in this paper. Accordingly, Text Density and Visual Importance are defined for the Document Object Model (DOM) nodes of a web page. Furthermore, a content extraction method with these measured values is proposed. It is a fast, accurate and general method for extracting content from diverse web pages. And with the employment of DOM nodes, the original structure of the web page can be preserved. Evaluated with the CleanEval benchmark and with randomly selected pages from well-known Web sites, where various web domains and styles are tested, the effect of the method is demonstrated. The average F1-scores with our method were 8.7 % higher than the best scores among several alternative methods.",
"title": ""
},
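The extraction method above scores DOM nodes by Text Density and Visual Importance. The fragment below sketches only a textual-density part, using a common characters-per-tag ratio over an lxml tree; lxml is an assumed dependency, and the exact density formula in the paper may differ from this one.

# Text density of a DOM node: characters of normalized text per contained tag
# (including the node itself). Nodes whose density is far below the page
# average are likely boilerplate such as navigation or disclaimers.
from lxml import html

def text_density(node):
    chars = len(" ".join(node.text_content().split()))
    tags = sum(1 for _ in node.iter()) or 1
    return chars / tags

page = html.fromstring("""
<html><body>
  <div id="nav"><a href="#">Home</a> <a href="#">About</a></div>
  <div id="main"><p>This is the main article text, long enough to dominate
  the density score of its subtree.</p></div>
</body></html>""")

for div in page.xpath("//div"):
    print(div.get("id"), round(text_density(div), 1))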
{
"docid": "1783f837b61013391f3ff4f03ac6742e",
"text": "Nowadays, many methods have been applied for data transmission of MWD system. Magnetic induction is one of the alternative technique. In this paper, detailed discussion on magnetic induction communication system is provided. The optimal coil configuration is obtained by theoretical analysis and software simulations. Based on this coil arrangement, communication characteristics of path loss and bit error rate are derived.",
"title": ""
},
{
"docid": "758692d2c0f1c2232a4c705b0a14c19f",
"text": "Process-driven spreadsheet queuing simulation is a better vehicle for understanding queue behavior than queuing theory or dedicated simulation software. Spreadsheet queuing simulation has many pedagogical benefits in a business school end-user modeling course, including developing students' intuition , giving them experience with active modeling skills, and providing access to tools. Spreadsheet queuing simulations are surprisingly easy to program, even for queues with balking and reneging. The ease of prototyping in spreadsheets invites thoughtless design, so careful spreadsheet programming practice is important. Spreadsheet queuing simulation is inferior to dedicated simulation software for analyzing queues but is more likely to be available to managers and students. Q ueuing theory has always been a staple in survey courses on management science. Although it is a powerful tool for computing certain steady-state performance measures, queuing theory is a poor vehicle for teaching students about what transpires in queues. Process-driven spreadsheet queuing simulation is a much better vehicle. Although Evans and Olson [1998, p. 170] state that \" a serious limitation of spreadsheets for waiting-line models is that it is not possible to include behavior such as balking \" and Liberatore and Ny-dick [forthcoming] indicate that a limitation of spreadsheet simulation is the in",
"title": ""
},
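Because the process-driven simulation described above is row-by-row bookkeeping, the same logic ports directly from a spreadsheet to a short script. The sketch below simulates a single-server queue with balking (an arriving customer leaves if more than three others are already in the system); the rates and the balking rule are illustrative assumptions, not values from the article.

import random

# Process-driven single-server queue with balking, in the spirit of a
# spreadsheet simulation: one "row" per customer, computed from the previous
# rows. Rates and the balking threshold are illustrative assumptions.
def simulate(n_customers=1000, arrival_rate=1.0, service_rate=1.2,
             balk_threshold=3, seed=1):
    random.seed(seed)
    arrival = 0.0
    busy_until = []          # departure times of customers still in the system
    served, balked, total_wait = 0, 0, 0.0
    for _ in range(n_customers):
        arrival += random.expovariate(arrival_rate)
        busy_until = [t for t in busy_until if t > arrival]  # drop departed customers
        if len(busy_until) > balk_threshold:
            balked += 1                                       # arriving customer balks
            continue
        start = max(arrival, max(busy_until, default=arrival))
        departure = start + random.expovariate(service_rate)
        busy_until.append(departure)
        served += 1
        total_wait += start - arrival
    return served, balked, total_wait / max(served, 1)

print(simulate())  # (customers served, customers balked, mean wait)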
{
"docid": "b7b3690f547e479627cc1262ae080b8f",
"text": "This article investigates the vulnerabilities of Supervisory Control and Data Acquisition (SCADA) systems which monitor and control the modern day irrigation canal systems. This type of monitoring and control infrastructure is also common for many other water distribution systems. We present a linearized shallow water partial differential equation (PDE) system that can model water flow in a network of canal pools which are equipped with lateral offtakes for water withdrawal and are connected by automated gates. The knowledge of the system dynamics enables us to develop a deception attack scheme based on switching the PDE parameters and proportional (P) boundary control actions, to withdraw water from the pools through offtakes. We briefly discuss the limits on detectability of such attacks. We use a known formulation based on low frequency approximation of the PDE model and an associated proportional integral (PI) controller, to create a stealthy deception scheme capable of compromising the performance of the closed-loop system. We test the proposed attack scheme in simulation, using a shallow water solver; and show that the attack is indeed realizable in practice by implementing it on a physical canal in Southern France: the Gignac canal. A successful field experiment shows that the attack scheme enables us to steal water stealthily from the canal until the end of the attack.",
"title": ""
},
{
"docid": "ec6f93bdc15283b46bc4c1a0ce1a38c8",
"text": "This paper advocates the exploration of the full state of recorded real-time strategy (RTS) games, by human or robotic players, to discover how to reason about tactics and strategy. We present a dataset of StarCraft games encompassing the most of the games’ state (not only player’s orders). We explain one of the possible usages of this dataset by clustering armies on their compositions. This reduction of armies compositions to mixtures of Gaussian allow for strategic reasoning at the level of the components. We evaluated this clustering method by predicting the outcomes of battles based on armies compositions’ mixtures components.",
"title": ""
},
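Clustering army compositions as mixtures of Gaussians, as described above, can be prototyped with a standard GMM implementation. In this sketch the unit types, the toy composition vectors and the number of components are made-up assumptions; only the general approach follows the abstract.

import numpy as np
from sklearn.mixture import GaussianMixture

# Each row is one army; each column is the fraction of the army made up of a
# given unit type (toy data; real features would come from replay parsing).
unit_types = ["marine", "marauder", "medivac", "tank"]
armies = np.array([
    [0.70, 0.20, 0.10, 0.00],
    [0.60, 0.30, 0.10, 0.00],
    [0.65, 0.25, 0.10, 0.00],
    [0.20, 0.10, 0.10, 0.60],
    [0.10, 0.20, 0.10, 0.60],
    [0.15, 0.15, 0.10, 0.60],
])

gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
labels = gmm.fit_predict(armies)
print(labels)               # cluster assignment per army
print(gmm.means_.round(2))  # mixture components, i.e. "typical" compositions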
{
"docid": "00fdc3da831aadbad0fd3410ffb0f8bb",
"text": "Removing undesired reflections from a photo taken in front of a glass is of great importance for enhancing the efficiency of visual computing systems. Various approaches have been proposed and shown to be visually plausible on small datasets collected by their authors. A quantitative comparison of existing approaches using the same dataset has never been conducted due to the lack of suitable benchmark data with ground truth. This paper presents the first captured Single-image Reflection Removal dataset ‘SIR2’ with 40 controlled and 100 wild scenes, ground truth of background and reflection. For each controlled scene, we further provide ten sets of images under varying aperture settings and glass thicknesses. We perform quantitative and visual quality comparisons for four state-of-the-art single-image reflection removal algorithms using four error metrics. Open problems for improving reflection removal algorithms are discussed at the end.",
"title": ""
},
{
"docid": "3d490d7d30dcddc3f1c0833794a0f2df",
"text": "Purpose-This study attempts to investigate (1) the effect of meditation experience on employees’ self-directed learning (SDL) readiness and organizational innovative (OI) ability as well as organizational performance (OP), and (2) the relationships among SDL, OI, and OP. Design/methodology/approach-This study conducts an empirical study of 15 technological companies (n = 412) in Taiwan, utilizing the collected survey data to test the relationships among the three dimensions. Findings-Results show that: (1) The employees’ meditation experience significantly and positively influenced employees’ SDL readiness, companies’ OI capability and OP; (2) The study found that SDL has a direct and significant impact on OI; and OI has direct and significant influences on OP. Research limitation/implications-The generalization of the present study is constrained by (1) the existence of possible biases of the participants, (2) the variations of length, type and form of meditation demonstrated by the employees in these high tech companies, and (3) the fact that local data collection in Taiwan may present different cultural characteristics which may be quite different from those in other areas or countries. Managerial implications are presented at the end of the work. Practical implications-The findings indicate that SDL can only impact organizational innovation through employees “openness to a challenge”, “inquisitive nature”, self-understanding and acceptance of responsibility for learning. Such finding implies better organizational innovative capability under such conditions, thus organizations may encourage employees to take risks or accept new opportunities through various incentives, such as monetary rewards or public recognitions. More specifically, the present study discovers that while administration innovation is the most important element influencing an organization’s financial performance, market innovation is the key component in an organization’s market performance. Social implications-The present study discovers that meditation experience positively",
"title": ""
},
{
"docid": "dc20661ca4dbf21e4dcdeeabbab7cf14",
"text": "We present our approach for developing a laboratory information management system (LIMS) software by combining Björners software triptych methodology (from domain models via requirements to software) with Arlow and Neustadt archetypes and archetype patterns based initiative. The fundamental hypothesis is that through this Archetypes Based Development (ABD) approach to domains, requirements and software, it is possible to improve the software development process as well as to develop more dependable software. We use ADB in developing LIMS software for the Clinical and Biomedical Proteomics Group (CBPG), University of Leeds.",
"title": ""
},
{
"docid": "2aabe5c6f1ccb8dfd241f0c208609738",
"text": "Exposing the weaknesses of neural models is crucial for improving their performance and robustness in real-world applications. One common approach is to examine how input perturbations affect the output. Our analysis takes this to an extreme on natural language processing tasks by removing as many words as possible from the input without changing the model prediction. For question answering and natural language inference, this often reduces the inputs to just one or two words, while model confidence remains largely unchanged. This is an undesireable behavior: the model gets the Right Answer for the Wrong Reason (RAWR). We introduce a simple training technique that mitigates this problem while maintaining performance on regular examples.",
"title": ""
},
{
"docid": "bf7cd2303c325968879da72966054427",
"text": "Object detection methods fall into two categories, i.e., two-stage and single-stage detectors. The former is characterized by high detection accuracy while the latter usually has considerable inference speed. Hence, it is imperative to fuse their metrics for a better accuracy vs. speed trade-off. To this end, we propose a dual refinement network (DRN) to boost the performance of the single-stage detector. Inheriting from the advantages of two-stage approaches (i.e., two-step regression and accurate features for detection), anchor refinement and feature offset refinement are conducted in anchor-offset detection, where the detection head is comprised of deformable convolutions. Moreover, to leverage contextual information for describing objects, we design a multi-deformable head, in which multiple detection paths with different receptive field sizes devote themselves to detecting objects. Extensive experiments on PASCAL VOC and ImageNet VID datasets are conducted, and we achieve the state-of-the-art results and a better accuracy vs. speed trade-off, i.e., 81.4% mAP vs. 42.3 FPS on VOC2007 test set. Codes will be publicly available.",
"title": ""
},
{
"docid": "567f48fef5536e9f44a6c66deea5375b",
"text": "The principle of control signal amplification is found in all actuation systems, from engineered devices through to the operation of biological muscles. However, current engineering approaches require the use of hard and bulky external switches or valves, incompatible with both the properties of emerging soft artificial muscle technology and those of the bioinspired robotic systems they enable. To address this deficiency a biomimetic molecular-level approach is developed that employs light, with its excellent spatial and temporal control properties, to actuate soft, pH-responsive hydrogel artificial muscles. Although this actuation is triggered by light, it is largely powered by the resulting excitation and runaway chemical reaction of a light-sensitive acid autocatalytic solution in which the actuator is immersed. This process produces actuation strains of up to 45% and a three-fold chemical amplification of the controlling light-trigger, realising a new strategy for the creation of highly functional soft actuating systems.",
"title": ""
},
{
"docid": "728bab0ecb94c38368e867545bfea88e",
"text": "We present a hierarchical control approach that can be used to fulfill autonomous flight, including vertical takeoff, landing, hovering, transition, and level flight, of a quadrotor tail-sitter vertical takeoff and landing unmanned aerial vehicle (VTOL UAV). A unified attitude controller, together with a moment allocation scheme between elevons and motor differential thrust, is developed for all flight modes. A comparison study via real flight tests is performed to verify the effectiveness of using elevons in addition to motor differential thrust. With the well-designed switch scheme proposed in this paper, the aircraft can transit between different flight modes with negligible altitude drop or gain. Intensive flight tests have been performed to verify the effectiveness of the proposed control approach in both manual and fully autonomous flight mode.",
"title": ""
},
{
"docid": "64bbb86981bf3cc575a02696f64109f6",
"text": "We use computational techniques to extract a large number of different features from the narrative speech of individuals with primary progressive aphasia (PPA). We examine several different types of features, including part-of-speech, complexity, context-free grammar, fluency, psycholinguistic, vocabulary richness, and acoustic, and discuss the circumstances under which they can be extracted. We consider the task of training a machine learning classifier to determine whether a participant is a control, or has the fluent or nonfluent variant of PPA. We first evaluate the individual feature sets on their classification accuracy, then perform an ablation study to determine the optimal combination of feature sets. Finally, we rank the features in four practical scenarios: given audio data only, given unsegmented transcripts only, given segmented transcripts only, and given both audio and segmented transcripts. We find that psycholinguistic features are highly discriminative in most cases, and that acoustic, context-free grammar, and part-of-speech features can also be important in some circumstances.",
"title": ""
},
{
"docid": "e42357ff2f957f6964bab00de4722d52",
"text": "We model a degraded image as an original image that has been subject to linear frequency distortion and additive noise injection. Since the psychovisual effects of frequency distortion and noise injection are independent, we decouple these two sources of degradation and measure their effect on the human visual system. We develop a distortion measure (DM) of the effect of frequency distortion, and a noise quality measure (NQM) of the effect of additive noise. The NQM, which is based on Peli's (1990) contrast pyramid, takes into account the following: 1) variation in contrast sensitivity with distance, image dimensions, and spatial frequency; 2) variation in the local luminance mean; 3) contrast interaction between spatial frequencies; 4) contrast masking effects. For additive noise, we demonstrate that the nonlinear NQM is a better measure of visual quality than peak signal-to noise ratio (PSNR) and linear quality measures. We compute the DM in three steps. First, we find the frequency distortion in the degraded image. Second, we compute the deviation of this frequency distortion from an allpass response of unity gain (no distortion). Finally, we weight the deviation by a model of the frequency response of the human visual system and integrate over the visible frequencies. We demonstrate how to decouple distortion and additive noise degradation in a practical image restoration system.",
"title": ""
},
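The distortion measure described above compares the degradation's frequency response to an all-pass response of unity gain and weights the deviation by a model of the visual system. The sketch below follows that outline in a heavily simplified form: the band-pass weighting used here stands in for the paper's contrast sensitivity model and is only an assumption.

import numpy as np

def distortion_measure(original, degraded):
    """Crude frequency-distortion score between two grayscale images (2-D arrays)."""
    F_o = np.fft.fft2(original)
    F_d = np.fft.fft2(degraded)
    # Estimated magnitude response of the degradation, with a small floor to
    # avoid dividing by near-zero spectral bins.
    H = np.abs(F_d) / np.maximum(np.abs(F_o), 1e-8)
    deviation = np.abs(H - 1.0)              # distance from an all-pass response
    # Generic band-pass, CSF-like weight over radial frequency (an assumption,
    # not the human-visual-system model used in the paper).
    fy = np.fft.fftfreq(original.shape[0])[:, None]
    fx = np.fft.fftfreq(original.shape[1])[None, :]
    r = np.sqrt(fx**2 + fy**2)
    csf = r * np.exp(-8.0 * r)
    return float(np.sum(csf * deviation) / np.sum(csf))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
blurred = (img + np.roll(img, 1, axis=0) + np.roll(img, 1, axis=1)) / 3.0
print(distortion_measure(img, img))       # ~0 for an undistorted image
print(distortion_measure(img, blurred))   # larger for a low-pass degraded image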
{
"docid": "7c2cb105e5fad90c90aea0e59aae5082",
"text": "Life often presents us with situations in which it is important to assess the “true” qualities of a person or object, but in which some factor(s) might have affected (or might yet affect) our initial perceptions in an undesired way. For example, in the Reginald Denny case following the 1993 Los Angeles riots, jurors were asked to determine the guilt or innocence of two African-American defendants who were charged with violently assaulting a Caucasion truck driver. Some of the jurors in this case might have been likely to realize that in their culture many of the popular media portrayals of African-Americans are violent in nature. Yet, these jurors ideally would not want those portrayals to influence their perceptions of the particular defendants in the case. In fact, the justice system is based on the assumption that such portrayals will not influence jury verdicts. In our work on bias correction, we have been struck by the variety of potentially biasing factors that can be identified-including situational influences such as media, social norms, and general culture, and personal influences such as transient mood states, motives (e.g., to manage impressions or agree with liked others), and salient beliefs-and we have been impressed by the apparent ubiquity of correction phenomena (which appear to span many areas of psychological inquiry). Yet, systematic investigations of bias correction are in their early stages. Although various researchers have discussed the notion of effortful cognitive processes overcoming initial (sometimes “automatic”) biases in a variety of settings (e.g., Brewer, 1988; Chaiken, Liberman, & Eagly, 1989; Devine, 1989; Kruglanski & Freund, 1983; Neuberg & Fiske, 1987; Petty & Cacioppo, 1986), little attention has been given, until recently, to the specific processes by which biases are overcome when effort is targeted toward “correction of bias.” That is, when",
"title": ""
}
] |
scidocsrr
|
d2dfa1f211432efc034679bb1662c5c5
|
Advances in Game Accessibility from 2005 to 2010
|
[
{
"docid": "e4f62bc47ca11c5e4c7aff5937d90c88",
"text": "CopyCat is an American Sign Language (ASL) game, which uses gesture recognition technology to help young deaf children practice ASL skills. We describe a brief history of the game, an overview of recent user studies, and the results of recent work on the problem of continuous, user-independent sign language recognition in classroom settings. Our database of signing samples was collected from user studies of deaf children playing aWizard of Oz version of the game at the Atlanta Area School for the Deaf (AASD). Our data set is characterized by disfluencies inherent in continuous signing, varied user characteristics including clothing and skin tones, and illumination changes in the classroom. The dataset consisted of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22 word vocabulary. Our recognition approach uses color histogram adaptation for robust hand segmentation and tracking. The children wear small colored gloves with wireless accelerometers mounted on the back of their wrists. The hand shape information is combined with accelerometer data and used to train hidden Markov models for recognition. We evaluated our approach by using leave-one-out validation; this technique iterates through each child, training on data from four children and testing on the remaining child's data. We achieved average word accuracies per child ranging from 91.75% to 73.73% for the user-independent models.",
"title": ""
}
] |
[
{
"docid": "82daa2740da14a2508138ccb6e2e2554",
"text": "In this paper, we introduce an Iterative Kalman Smoother (IKS) for tracking the 3D motion of a mobile device in real-time using visual and inertial measurements. In contrast to existing Extended Kalman Filter (EKF)-based approaches, smoothing can better approximate the underlying nonlinear system and measurement models by re-linearizing them. Additionally, by iteratively optimizing over all measurements available, the IKS increases the convergence rate of critical parameters (e.g., IMU-camera clock drift) and improves the positioning accuracy during challenging conditions (e.g., scarcity of visual features). Furthermore, and in contrast to existing inverse filters, the proposed IKS's numerical stability allows for efficient 32-bit implementations on resource-constrained devices, such as cell phones and wearables. We validate the IKS for performing vision-aided inertial navigation on Google Glass, a wearable device with limited sensing and processing, and demonstrate positioning accuracy comparable to that achieved on cell phones. To the best of our knowledge, this work presents the first proof-of-concept real-time 3D indoor localization system on a commercial-grade wearable computer.",
"title": ""
},
{
"docid": "6bcd4a5e41d300e75d877de1b83e0a18",
"text": "Medical training has traditionally depended on patient contact. However, changes in healthcare delivery coupled with concerns about lack of objectivity or standardization of clinical examinations lead to the introduction of the 'simulated patient' (SP). SPs are now used widely for teaching and assessment purposes. SPs are usually, but not necessarily, lay people who are trained to portray a patient with a specific condition in a realistic way, sometimes in a standardized way (where they give a consistent presentation which does not vary from student to student). SPs can be used for teaching and assessment of consultation and clinical/physical examination skills, in simulated teaching environments or in situ. All SPs play roles but SPs have also been used successfully to give feedback and evaluate student performance. Clearly, given this potential level of involvement in medical training, it is critical to recruit, train and use SPs appropriately. We have provided a detailed overview on how to do so, for both teaching and assessment purposes. The contents include: how to monitor and assess SP performance, both in terms of validity and reliability, and in terms of the impact on the SP; and an overview of the methods, staff costs and routine expenses required for recruiting, administrating and training an SP bank, and finally, we provide some intercultural comparisons, a 'snapshot' of the use of SPs in medical education across Europe and Asia, and briefly discuss some of the areas of SP use which require further research.",
"title": ""
},
{
"docid": "20086cff7c26a1ae4d981fc512124f94",
"text": "Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources, however not all scientists have access to sufficient high-end computing systems. Cloud computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work provides a comprehensive evaluation of EC2 cloud in different aspects. We first analyze the potentials of the cloud by evaluating the raw performance of different services of AWS such as compute, memory, network and I/O. Based on the findings on the raw performance, we then evaluate the performance of the scientific applications running in the cloud. Finally, we compare the performance of AWS with a private cloud, in order to find the root cause of its limitations while running scientific applications. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud in terms of both raw performance and scientific applications performance. Furthermore, we evaluate other services including S3, EBS and DynamoDB among many AWS services in order to assess the abilities of those to be used by scientific applications and frameworks. We also evaluate a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.",
"title": ""
},
{
"docid": "3a52576a2fdaa7f6f9632dc8c4bf0971",
"text": "As known, fractional CO2 resurfacing treatments are more effective than non-ablative ones against aging signs, but post-operative redness and swelling prolong the overall downtime requiring up to steroid administration in order to reduce these local systems. In the last years, an increasing interest has been focused on the possible use of probiotics for treating inflammatory and allergic conditions suggesting that they can exert profound beneficial effects on skin homeostasis. In this work, the Authors report their experience on fractional CO2 laser resurfacing and provide the results of a new post-operative topical treatment with an experimental cream containing probiotic-derived active principles potentially able to modulate the inflammatory reaction associated to laser-treatment. The cream containing DermaACB (CERABEST™) was administered post-operatively to 42 consecutive patients who were treated with fractional CO2 laser. All patients adopted the cream twice a day for 2 weeks. Grades were given according to outcome scale. The efficacy of the cream containing DermaACB was evaluated comparing the rate of post-operative signs vanishing with a control group of 20 patients topically treated with an antibiotic cream and a hyaluronic acid based cream. Results registered with the experimental treatment were good in 22 patients, moderate in 17, and poor in 3 cases. Patients using the study cream took an average time of 14.3 days for erythema resolution and 9.3 days for swelling vanishing. The post-operative administration of the cream containing DermaACB induces a quicker reduction of post-operative erythema and swelling when compared to a standard treatment.",
"title": ""
},
{
"docid": "57d1648391cac4ccfefd85aacef6b5ba",
"text": "Competition in the wireless telecommunications industry is fierce. To maintain profitability, wireless carriers must control churn, which is the loss of subscribers who switch from one carrier to another.We explore techniques from statistical machine learning to predict churn and, based on these predictions, to determine what incentives should be offered to subscribers to improve retention and maximize profitability to the carrier. The techniques include logit regression, decision trees, neural networks, and boosting. Our experiments are based on a database of nearly 47,000 U.S. domestic subscribers and includes information about their usage, billing, credit, application, and complaint history. Our experiments show that under a wide variety of assumptions concerning the cost of intervention and the retention rate resulting from intervention, using predictive techniques to identify potential churners and offering incentives can yield significant savings to a carrier. We also show the importance of a data representation crafted by domain experts. Finally, we report on a real-world test of the techniques that validate our simulation experiments.",
"title": ""
},
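The entry above describes a churn-prediction study rather than a single algorithm, but its modelling step is easy to illustrate. The sketch below is a hedged stand-in: it trains one of the learner families the abstract lists (gradient boosting, via scikit-learn) on synthetic data, since the proprietary subscriber database is not available; the feature count, class balance, and the cutoff of 100 subscribers are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the usage/billing/credit/application/complaint features.
X, y = make_classification(n_samples=47000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
p_churn = model.predict_proba(X_te)[:, 1]      # churn probability per subscriber
print("AUC:", roc_auc_score(y_te, p_churn))

# Rank held-out subscribers by predicted risk; the study then weighs the cost of
# offering an incentive to the top of this list against the expected retention gain.
top_risk = np.argsort(-p_churn)[:100]
```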
{
"docid": "b8fa50df3c76c2192c67cda7ae4d05f5",
"text": "Task parallelism has increasingly become a trend with programming models such as OpenMP 3.0, Cilk, Java Concurrency, X10, Chapel and Habanero-Java (HJ) to address the requirements of multicore programmers. While task parallelism increases productivity by allowing the programmer to express multiple levels of parallelism, it can also lead to performance degradation due to increased overheads. In this article, we introduce a transformation framework for optimizing task-parallel programs with a focus on task creation and task termination operations. These operations can appear explicitly in constructs such as async, finish in X10 and HJ, task, taskwait in OpenMP 3.0, and spawn, sync in Cilk, or implicitly in composite code statements such as foreach and ateach loops in X10, forall and foreach loops in HJ, and parallel loop in OpenMP.\n Our framework includes a definition of data dependence in task-parallel programs, a happens-before analysis algorithm, and a range of program transformations for optimizing task parallelism. Broadly, our transformations cover three different but interrelated optimizations: (1) finish-elimination, (2) forall-coarsening, and (3) loop-chunking. Finish-elimination removes redundant task termination operations, forall-coarsening replaces expensive task creation and termination operations with more efficient synchronization operations, and loop-chunking extracts useful parallelism from ideal parallelism. All three optimizations are specified in an iterative transformation framework that applies a sequence of relevant transformations until a fixed point is reached. Further, we discuss the impact of exception semantics on the specified transformations, and extend them to handle task-parallel programs with precise exception semantics. Experimental results were obtained for a collection of task-parallel benchmarks on three multicore platforms: a dual-socket 128-thread (16-core) Niagara T2 system, a quad-socket 16-core Intel Xeon SMP, and a quad-socket 32-core Power7 SMP. We have observed that the proposed optimizations interact with each other in a synergistic way, and result in an overall geometric average performance improvement between 6.28× and 10.30×, measured across all three platforms for the benchmarks studied.",
"title": ""
},
{
"docid": "deed140862c62fa8be4a8a58ffc1d7dc",
"text": "Gender-affirmation surgery is often the final gender-confirming medical intervention sought by those patients suffering from gender dysphoria. In the male-to-female (MtF) transgendered patient, the creation of esthetic and functional external female genitalia with a functional vaginal channel is of the utmost importance. The aim of this review and meta-analysis is to evaluate the epidemiology, presentation, management, and outcomes of neovaginal complications in the MtF transgender reassignment surgery patients. PUBMED was searched in accordance with PRISMA guidelines for relevant articles (n = 125). Ineligible articles were excluded and articles meeting all inclusion criteria went on to review and analysis (n = 13). Ultimately, studies reported on 1,684 patients with an overall complication rate of 32.5% and a reoperation rate of 21.7% for non-esthetic reasons. The most common complication was stenosis of the neo-meatus (14.4%). Wound infection was associated with an increased risk of all tissue-healing complications. Use of sacrospinous ligament fixation (SSL) was associated with a significantly decreased risk of prolapse of the neovagina. Gender-affirmation surgery is important in the treatment of gender dysphoric patients, but there is a high complication rate in the reported literature. Variability in technique and complication reporting standards makes it difficult to assess the accurately the current state of MtF gender reassignment surgery. Further research and implementation of standards is necessary to improve patient outcomes. Clin. Anat. 31:191-199, 2018. © 2017 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "515edceed7d7bb8a3d2a40f8a9ef405e",
"text": "BACKGROUND\nThe rate of bacterial meningitis declined by 55% in the United States in the early 1990s, when the Haemophilus influenzae type b (Hib) conjugate vaccine for infants was introduced. More recent prevention measures such as the pneumococcal conjugate vaccine and universal screening of pregnant women for group B streptococcus (GBS) have further changed the epidemiology of bacterial meningitis.\n\n\nMETHODS\nWe analyzed data on cases of bacterial meningitis reported among residents in eight surveillance areas of the Emerging Infections Programs Network, consisting of approximately 17.4 million persons, during 1998-2007. We defined bacterial meningitis as the presence of H. influenzae, Streptococcus pneumoniae, GBS, Listeria monocytogenes, or Neisseria meningitidis in cerebrospinal fluid or other normally sterile site in association with a clinical diagnosis of meningitis.\n\n\nRESULTS\nWe identified 3188 patients with bacterial meningitis; of 3155 patients for whom outcome data were available, 466 (14.8%) died. The incidence of meningitis changed by -31% (95% confidence interval [CI], -33 to -29) during the surveillance period, from 2.00 cases per 100,000 population (95% CI, 1.85 to 2.15) in 1998-1999 to 1.38 cases per 100,000 population (95% CI 1.27 to 1.50) in 2006-2007. The median age of patients increased from 30.3 years in 1998-1999 to 41.9 years in 2006-2007 (P<0.001 by the Wilcoxon rank-sum test). The case fatality rate did not change significantly: it was 15.7% in 1998-1999 and 14.3% in 2006-2007 (P=0.50). Of the 1670 cases reported during 2003-2007, S. pneumoniae was the predominant infective species (58.0%), followed by GBS (18.1%), N. meningitidis (13.9%), H. influenzae (6.7%), and L. monocytogenes (3.4%). An estimated 4100 cases and 500 deaths from bacterial meningitis occurred annually in the United States during 2003-2007.\n\n\nCONCLUSIONS\nThe rates of bacterial meningitis have decreased since 1998, but the disease still often results in death. With the success of pneumococcal and Hib conjugate vaccines in reducing the risk of meningitis among young children, the burden of bacterial meningitis is now borne more by older adults. (Funded by the Emerging Infections Programs, Centers for Disease Control and Prevention.).",
"title": ""
},
{
"docid": "c7dd6824c8de3e988bb7f58141458ef9",
"text": "We present a method to classify images into different categories of pornographic content to create a system for filtering pornographic images from network traffic. Although different systems for this application were presented in the past, most of these systems are based on simple skin colour features and have rather poor performance. Recent advances in the image recognition field in particular for the classification of objects have shown that bag-of-visual-words-approaches are a good method for many image classification problems. The system we present here, is based on this approach, uses a task-specific visual vocabulary and is trained and evaluated on an image database of 8500 images from different categories. It is shown that it clearly outperforms earlier systems on this dataset and further evaluation on two novel web-traffic collections shows the good performance of the proposed system.",
"title": ""
},
{
"docid": "5656c77061a3f678172ea01e226ede26",
"text": "BACKGROUND\nIn 2010, overweight and obesity were estimated to cause 3·4 million deaths, 3·9% of years of life lost, and 3·8% of disability-adjusted life-years (DALYs) worldwide. The rise in obesity has led to widespread calls for regular monitoring of changes in overweight and obesity prevalence in all populations. Comparable, up-to-date information about levels and trends is essential to quantify population health effects and to prompt decision makers to prioritise action. We estimate the global, regional, and national prevalence of overweight and obesity in children and adults during 1980-2013.\n\n\nMETHODS\nWe systematically identified surveys, reports, and published studies (n=1769) that included data for height and weight, both through physical measurements and self-reports. We used mixed effects linear regression to correct for bias in self-reports. We obtained data for prevalence of obesity and overweight by age, sex, country, and year (n=19,244) with a spatiotemporal Gaussian process regression model to estimate prevalence with 95% uncertainty intervals (UIs).\n\n\nFINDINGS\nWorldwide, the proportion of adults with a body-mass index (BMI) of 25 kg/m(2) or greater increased between 1980 and 2013 from 28·8% (95% UI 28·4-29·3) to 36·9% (36·3-37·4) in men, and from 29·8% (29·3-30·2) to 38·0% (37·5-38·5) in women. Prevalence has increased substantially in children and adolescents in developed countries; 23·8% (22·9-24·7) of boys and 22·6% (21·7-23·6) of girls were overweight or obese in 2013. The prevalence of overweight and obesity has also increased in children and adolescents in developing countries, from 8·1% (7·7-8·6) to 12·9% (12·3-13·5) in 2013 for boys and from 8·4% (8·1-8·8) to 13·4% (13·0-13·9) in girls. In adults, estimated prevalence of obesity exceeded 50% in men in Tonga and in women in Kuwait, Kiribati, Federated States of Micronesia, Libya, Qatar, Tonga, and Samoa. Since 2006, the increase in adult obesity in developed countries has slowed down.\n\n\nINTERPRETATION\nBecause of the established health risks and substantial increases in prevalence, obesity has become a major global health challenge. Not only is obesity increasing, but no national success stories have been reported in the past 33 years. Urgent global action and leadership is needed to help countries to more effectively intervene.\n\n\nFUNDING\nBill & Melinda Gates Foundation.",
"title": ""
},
{
"docid": "4147fee030667122923f420ab55e38f7",
"text": "In this paper we propose a replacement algorithm, SF-LRU (second chance-frequency - least recently used) that combines the LRU (least recently used) and the LFU (least frequently used) using the second chance concept. A comprehensive comparison is made between our algorithm and both LRU and LFU algorithms. Experimental results show that the SF-LRU significantly reduces the number of cache misses compared the other two algorithms. Simulation results show that our algorithm can provide a maximum value of approximately 6.3% improvement in the miss ratio over the LRU algorithm in data cache and approximately 9.3% improvement in miss ratio in instruction cache. This performance improvement is attributed to the fact that our algorithm provides a second chance to the block that may be deleted according to LRU's rules. This is done by comparing the frequency of the block with the block next to it in the set.",
"title": ""
},
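The abstract sketches SF-LRU only at a high level, so the code below is one plausible reading rather than the paper's exact policy: each cache set tracks both recency (LRU) and access frequency (LFU), and the LRU block earns a second chance when its frequency exceeds that of the next-least-recently-used block. The class names, the frequency-decay step, and the single-set scope are assumptions made for illustration.

```python
class Block:
    def __init__(self, tag, now):
        self.tag = tag
        self.last_used = now   # recency information, as in LRU
        self.freq = 1          # access count, as in LFU

class SFLRUSet:
    """One set of a set-associative cache under an SF-LRU-style policy."""
    def __init__(self, ways):
        self.ways = ways
        self.blocks = []
        self.clock = 0

    def access(self, tag):
        self.clock += 1
        for b in self.blocks:
            if b.tag == tag:                 # hit: refresh recency, bump frequency
                b.last_used = self.clock
                b.freq += 1
                return True
        if len(self.blocks) >= self.ways:
            self._evict()
        self.blocks.append(Block(tag, self.clock))
        return False                         # miss

    def _evict(self):
        by_recency = sorted(self.blocks, key=lambda b: b.last_used)
        victim = by_recency[0]               # the plain LRU choice
        if len(by_recency) > 1:
            runner_up = by_recency[1]
            if victim.freq > runner_up.freq: # second chance for a frequently used block
                victim.freq = runner_up.freq # decay so it cannot dodge eviction forever
                victim = runner_up
        self.blocks.remove(victim)

cache_set = SFLRUSet(ways=4)
hits = sum(cache_set.access(tag) for tag in [1, 2, 3, 1, 4, 5, 1, 2])
```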
{
"docid": "ca3ea61314d43abeac81546e66ff75e4",
"text": "OBJECTIVE\nTo describe and discuss the process used to write a narrative review of the literature for publication in a peer-reviewed journal. Publication of narrative overviews of the literature should be standardized to increase their objectivity.\n\n\nBACKGROUND\nIn the past decade numerous changes in research methodology pertaining to reviews of the literature have occurred. These changes necessitate authors of review articles to be familiar with current standards in the publication process.\n\n\nMETHODS\nNarrative overview of the literature synthesizing the findings of literature retrieved from searches of computerized databases, hand searches, and authoritative texts.\n\n\nDISCUSSION\nAn overview of the use of three types of reviews of the literature is presented. Step by step instructions for how to conduct and write a narrative overview utilizing a 'best-evidence synthesis' approach are discussed, starting with appropriate preparatory work and ending with how to create proper illustrations. Several resources for creating reviews of the literature are presented and a narrative overview critical appraisal worksheet is included. A bibliography of other useful reading is presented in an appendix.\n\n\nCONCLUSION\nNarrative overviews can be a valuable contribution to the literature if prepared properly. New and experienced authors wishing to write a narrative overview should find this article useful in constructing such a paper and carrying out the research process. It is hoped that this article will stimulate scholarly dialog amongst colleagues about this research design and other complex literature review methods.",
"title": ""
},
{
"docid": "d67a93dde102bdcd2dd1a72c80aacd6b",
"text": "Network intrusion detection systems have become a standard component in security infrastructures. Unfortunately, current systems are poor at detecting novel attacks without an unacceptable level of false alarms. We propose that the solution to this problem is the application of an ensemble of data mining techniques which can be applied to network connection data in an offline environment, augmenting existing real-time sensors. In this paper, we expand on our motivation, particularly with regard to running in an offline environment, and our interest in multisensor and multimethod correlation. We then review existing systems, from commercial systems, to research based intrusion detection systems. Next we survey the state of the art in the area. Standard datasets and feature extraction turned out to be more important than we had initially anticipated, so each can be found under its own heading. Next, we review the actual data mining methods that have been proposed or implemented. We conclude by summarizing the open problems in this area and proposing a new research project to answer some of these open problems.",
"title": ""
},
{
"docid": "3176f0a4824b2dd11d612d55b4421881",
"text": "This article reviews some of the criticisms directed towards the eclectic paradigm of international production over the past decade, and restates its main tenets. The second part of the article considers a number of possible extensions of the paradigm and concludes by asserting that it remains \"a robust general framework for explaining and analysing not only the economic rationale of economic production but many organisational nd impact issues in relation to MNE activity as well.\"",
"title": ""
},
{
"docid": "42167e7708bb73b08972e15a44a6df02",
"text": "A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.",
"title": ""
},
{
"docid": "9019e71123230c6e2f58341d4912a0dd",
"text": "How to effectively manage increasingly complex enterprise computing environments is one of the hardest challenges that most organizations have to face in the era of cloud computing, big data and IoT. Advanced automation and orchestration systems are the most valuable solutions helping IT staff to handle large-scale cloud data centers. Containers are the new revolution in the cloud computing world, they are more lightweight than VMs, and can radically decrease both the start up time of instances and the processing and storage overhead with respect to traditional VMs. The aim of this paper is to provide a comprehensive description of cloud orchestration approaches with containers, analyzing current research efforts, existing solutions and presenting issues and challenges facing this topic.",
"title": ""
},
{
"docid": "6b203b7a8958103b30701ac139eb1fb8",
"text": "Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems of these fields. We examine applications of deep learning to a variety of biomedical problems-patient classification, fundamental biological processes and treatment of patients-and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.",
"title": ""
},
{
"docid": "8123ab525ce663e44b104db2cacd59a9",
"text": "Extractive summarization is the strategy of concatenating extracts taken from a corpus into a summary, while abstractive summarization involves paraphrasing the corpus using novel sentences. We define a novel measure of corpus controversiality of opinions contained in evaluative text, and report the results of a user study comparing extractive and NLG-based abstractive summarization at different levels of controversiality. While the abstractive summarizer performs better overall, the results suggest that the margin by which abstraction outperforms extraction is greater when controversiality is high, providing aion outperforms extraction is greater when controversiality is high, providing a context in which the need for generationbased methods is especially great.",
"title": ""
},
{
"docid": "dbf5d0f6ce7161f55cf346e46150e8d7",
"text": "Loan fraud is a critical factor in the insolvency of financial institutions, so companies make an effort to reduce the loss from fraud by building a model for proactive fraud prediction. However, there are still two critical problems to be resolved for the fraud detection: (1) the lack of cost sensitivity between type I error and type II error in most prediction models, and (2) highly skewed distribution of class in the dataset used for fraud detection because of sparse fraud-related data. The objective of this paper is to examine whether classification cost is affected both by the cost-sensitive approach and by skewed distribution of class. To that end, we compare the classification cost incurred by a traditional cost-insensitive classification approach and two cost-sensitive classification approaches, Cost-Sensitive Classifier (CSC) and MetaCost. Experiments were conducted with a credit loan dataset from a major financial institution in Korea, while varying the distribution of class in the dataset and the number of input variables. The experiments showed that the lowest classification cost was incurred when the MetaCost approach was used and when non-fraud data and fraud data were balanced. In addition, the dataset that includes all delinquency variables was shown to be most effective on reducing the classification cost. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "34af5ac483483fa59eda7804918bdb1c",
"text": "Automatic spelling and grammatical correction systems are one of the most widely used tools within natural language applications. In this thesis, we assume the task of error correction as a type of monolingual machine translation where the source sentence is potentially erroneous and the target sentence should be the corrected form of the input. Our main focus in this project is building neural network models for the task of error correction. In particular, we investigate sequence-to-sequence and attention-based models which have recently shown a higher performance than the state-of-the-art of many language processing problems. We demonstrate that neural machine translation models can be successfully applied to the task of error correction. While the experiments of this research are performed on an Arabic corpus, our methods in this thesis can be easily applied to any language. Keywords— natural language error correction, recurrent neural networks, encoderdecoder models, attention mechanism",
"title": ""
}
] |
scidocsrr
|
52cb1aabd581bc09562d69de103e864e
|
Refining faster-RCNN for accurate object detection
|
[
{
"docid": "d88523afba42431989f5d3bd22f2ad85",
"text": "The visual cues from multiple support regions of different sizes and resolutions are complementary in classifying a candidate box in object detection. How to effectively integrate local and contextual visual cues from these regions has become a fundamental problem in object detection. Most existing works simply concatenated features or scores obtained from support regions. In this paper, we proposal a novel gated bi-directional CNN (GBD-Net) to pass messages between features from different support regions during both feature learning and feature extraction. Such message passing can be implemented through convolution in two directions and can be conducted in various layers. Therefore, local and contextual visual patterns can validate the existence of each other by learning their nonlinear relationships and their close iterations are modeled in a much more complex way. It is also shown that message passing is not always helpful depending on individual samples. Gated functions are further introduced to control message transmission and their on-and-off is controlled by extra visual evidence from the input sample. GBD-Net is implemented under the Fast RCNN detection framework. Its effectiveness is shown through experiments on three object detection datasets, ImageNet, Pascal VOC2007 and Microsoft COCO.",
"title": ""
}
] |
[
{
"docid": "c2f3cebd614fff668e80fa0d77e34972",
"text": "In this paper, the unknown parameters of the photovoltaic (PV) module are determined using Genetic Algorithm (GA) method. This algorithm based on minimizing the absolute difference between the maximum powers obtained from module datasheet and the maximum power obtained from the mathematical model of the PV module, at different operating conditions. This method does not need to initial values, so these parameters of the PV module are easily obtained with high accuracy. To validate the proposed method, the results obtained from it are compared with the experimental results obtained from the PV module datasheet for different operating conditions. The results obtained from the proposed model are found to be very close compared to the results given in the datasheet of the PV module.",
"title": ""
},
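As a rough illustration of the approach above, the sketch below runs a small real-coded genetic algorithm whose fitness is the absolute difference between a datasheet maximum power and the maximum power predicted by a PV model. To stay short and explicit it fits the ideal three-parameter single-diode model (no series or shunt resistance) at a single operating condition, whereas the paper fits the full module model; the datasheet value, parameter bounds, cell count and GA settings are all placeholders rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

P_MAX_DATASHEET = 200.0      # W, placeholder datasheet maximum power
NS, VT = 60, 0.0258          # assumed cells in series and thermal voltage (~25 C)

def model_pmax(iph, i0, n):
    """Maximum power of the ideal single-diode model I = Iph - I0*(exp(V/(n*Ns*Vt)) - 1)."""
    v = np.linspace(0.0, 45.0, 600)
    i = np.clip(iph - i0 * (np.exp(v / (n * NS * VT)) - 1.0), 0.0, None)
    return float(np.max(v * i))

def fitness(p):
    return abs(P_MAX_DATASHEET - model_pmax(*p))

# Real-coded GA: elitism, tournament selection, blend crossover, Gaussian mutation.
lo = np.array([7.0, 1e-10, 1.0])     # bounds for Iph (A), I0 (A), ideality factor n
hi = np.array([9.0, 1e-6, 2.0])
pop = rng.uniform(lo, hi, size=(60, 3))
for _ in range(150):
    cost = np.array([fitness(p) for p in pop])
    new = [pop[np.argmin(cost)]]                      # keep the best individual
    while len(new) < len(pop):
        t1, t2 = rng.integers(len(pop), size=2), rng.integers(len(pop), size=2)
        pa, pb = pop[t1[np.argmin(cost[t1])]], pop[t2[np.argmin(cost[t2])]]
        w = rng.random(3)
        child = w * pa + (1.0 - w) * pb               # blend crossover
        child += rng.normal(0.0, 0.02, 3) * (hi - lo) # mutation scaled to the range
        new.append(np.clip(child, lo, hi))
    pop = np.array(new)

best = min(pop, key=fitness)
print("Iph, I0, n =", best, "| power error =", fitness(best))
```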
{
"docid": "2b4b639973f54bdd7b987d5bc9bb3978",
"text": "Computational stereo is one of the classical problems in computer vision. Numerous algorithms and solutions have been reported in recent years focusing on developing methods for computing similarity, aggregating it to obtain spatial support and finally optimizing an energy function to find the final disparity. In this paper, we focus on the feature extraction component of stereo matching architecture and we show standard CNNs operation can be used to improve the quality of the features used to find point correspondences. Furthermore, we propose a simple space aggregation that hugely simplifies the correlation learning problem. Our results on benchmark data are compelling and show promising potential even without refining the solution.",
"title": ""
},
{
"docid": "c9b9ac230838ffaff404784b66862013",
"text": "On the Mathematical Foundations of Theoretical Statistics. Author(s): R. A. Fisher. Source: Philosophical Transactions of the Royal Society of London. Series A Solutions to Exercises. 325. Bibliography. 347. Index Discrete mathematics is an essential part of the foundations of (theoretical) computer science, statistics . 2) Statistical Methods by S.P.Gupta. 3) Mathematical Statistics by Saxena & Kapoor. 4) Statistics by Sancheti & Kapoor. 5) Introduction to Mathematical Statistics Fundamentals of Mathematical statistics by Guptha, S.C &Kapoor, V.K (Sulthan chand. &sons). 2. Introduction to Mathematical statistics by Hogg.R.V and and .",
"title": ""
},
{
"docid": "df833f98f7309a5ab5f79fae2f669460",
"text": "Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties that emanate from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of the system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (which are called system components.) The goal is to facilitate saving in the system component power consumption, which tends to dominate the total power consumption. The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. It employs temporal difference learning on semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run in the embedded system. The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby, enabling the LPM to do even more component power optimizations. In this hierarchical DPM framework, power and latency tradeoffs of each type of application can be precisely controlled based on a user-defined parameter. Experiments show that the amount of average power saving is up to 31.1% compared to existing approaches.",
"title": ""
},
{
"docid": "eb101664f08f0c5c7cf6bcf8e058b180",
"text": "Rapidly progressive renal failure (RPRF) is an initial clinical diagnosis in patients who present with progressive renal impairment of short duration. The underlying etiology may be a primary renal disease or a systemic disorder. Important differential diagnoses include vasculitis (systemic or renal-limited), systemic lupus erythematosus, multiple myeloma, thrombotic microangiopathy and acute interstitial nephritis. Good history taking, clinical examination and relevant investigations including serology and ultimately kidney biopsy are helpful in clinching the diagnosis. Early definitive diagnosis of RPRF is essential to reverse the otherwise relentless progression to end-stage kidney disease.",
"title": ""
},
{
"docid": "6761bd757cdd672f60c980b081d4dbc8",
"text": "Real-time eye and iris tracking is important for handsoff gaze-based password entry, instrument control by paraplegic patients, Internet user studies, as well as homeland security applications. In this project, a smart camera, LabVIEW and vision software tools are utilized to generate eye detection and tracking algorithms. The algorithms are uploaded to the smart camera for on-board image processing. Eye detection refers to finding eye features in a single frame. Eye tracking is achieved by detecting the same eye features across multiple image frames and correlating them to a particular eye. The algorithms are tested for eye detection and tracking under different conditions including different angles of the face, head motion speed, and eye occlusions to determine their usability for the proposed applications. This paper presents the implemented algorithms and performance results of these algorithms on the smart camera.",
"title": ""
},
{
"docid": "450a0ffcd35400f586e766d68b75cc98",
"text": "While there has been a success in 2D human pose estimation with convolutional neural networks (CNNs), 3D human pose estimation has not been thoroughly studied. In this paper, we tackle the 3D human pose estimation task with end-to-end learning using CNNs. Relative 3D positions between one joint and the other joints are learned via CNNs. The proposed method improves the performance of CNN with two novel ideas. First, we added 2D pose information to estimate a 3D pose from an image by concatenating 2D pose estimation result with the features from an image. Second, we have found that more accurate 3D poses are obtained by combining information on relative positions with respect to multiple joints, instead of just one root joint. Experimental results show that the proposed method achieves comparable performance to the state-of-the-art methods on Human 3.6m dataset.",
"title": ""
},
{
"docid": "cf5e6ce7313d15f33afa668f27a5e9e2",
"text": "Researchers have designed a variety of systems that promote wellness. However, little work has been done to examine how casual mobile games can help adults learn how to live healthfully. To explore this design space, we created OrderUP!, a game in which players learn how to make healthier meal choices. Through our field study, we found that playing OrderUP! helped participants engage in four processes of change identified by a well-established health behavior theory, the Transtheoretical Model: they improved their understanding of how to eat healthfully and engaged in nutrition-related analytical thinking, reevaluated the healthiness of their real life habits, formed helping relationships by discussing nutrition with others and started replacing unhealthy meals with more nutritious foods. Our research shows the promise of using casual mobile games to encourage adults to live healthier lifestyles.",
"title": ""
},
{
"docid": "e0551738e41a48ce9105b1dc44dfa980",
"text": "Abnormality detection in biomedical images is a one-class classification problem, where methods learn a statistical model to characterize the inlier class using training data solely from the inlier class. Typical methods (i) need well-curated training data and (ii) have formulations that are unable to utilize expert feedback through (a small amount of) labeled outliers. In contrast, we propose a novel deep neural network framework that (i) is robust to corruption and outliers in the training data, which are inevitable in real-world deployment, and (ii) can leverage expert feedback through high-quality labeled data. We introduce an autoencoder formulation that (i) gives robustness through a non-convex loss and a heavy-tailed distribution model on the residuals and (ii) enables semi-supervised learning with labeled outliers. Results on three large medical datasets show that our method outperforms the state of the art in abnormality-detection accuracy.",
"title": ""
},
{
"docid": "5d417375c4ce7c47a90808971f215c91",
"text": "While the RGB2GRAY conversion with fixed parameters is a classical and widely used tool for image decolorization, recent studies showed that adapting weighting parameters in a two-order multivariance polynomial model has great potential to improve the conversion ability. In this paper, by viewing the two-order model as the sum of three subspaces, it is observed that the first subspace in the two-order model has the dominating importance and the second and the third subspace can be seen as refinement. Therefore, we present a semiparametric strategy to take advantage of both the RGB2GRAY and the two-order models. In the proposed method, the RGB2GRAY result on the first subspace is treated as an immediate grayed image, and then the parameters in the second and the third subspace are optimized. Experimental results show that the proposed approach is comparable to other state-of-the-art algorithms in both quantitative evaluation and visual quality, especially for images with abundant colors and patterns. This algorithm also exhibits good resistance to noise. In addition, instead of the color contrast preserving ratio using the first-order gradient for decolorization quality metric, the color contrast correlation preserving ratio utilizing the second-order gradient is calculated as a new perceptual quality metric.",
"title": ""
},
{
"docid": "ca4e3f243b2868445ecb916c081e108e",
"text": "The task in the multi-agent path finding problem (MAPF) is to find paths for multiple agents, each with a different start and goal position, such that agents do not collide. It is possible to solve this problem optimally with algorithms that are based on the A* algorithm. Recently, we proposed an alternative algorithm called Conflict-Based Search (CBS) (Sharon et al. 2012), which was shown to outperform the A*-based algorithms in some cases. CBS is a two-level algorithm. At the high level, a search is performed on a tree based on conflicts between agents. At the low level, a search is performed only for a single agent at a time. While in some cases CBS is very efficient, in other cases it is worse than A*-based algorithms. This paper focuses on the latter case by generalizing CBS to Meta-Agent CBS (MA-CBS). The main idea is to couple groups of agents into meta-agents if the number of internal conflicts between them exceeds a given bound. MACBS acts as a framework that can run on top of any complete MAPF solver. We analyze our new approach and provide experimental results demonstrating that it outperforms basic CBS and other A*-based optimal solvers in many cases. Introduction and Background In the multi-agent path finding (MAPF) problem, we are given a graph, G(V,E), and a set of k agents labeled a1 . . . ak. Each agent ai has a start position si ∈ V and goal position gi ∈ V . At each time step an agent can either move to a neighboring location or can wait in its current location. The task is to return the least-cost set of actions for all agents that will move each of the agents to its goal without conflicting with other agents (i.e., without being in the same location at the same time or crossing the same edge simultaneously in opposite directions). MAPF has practical applications in robotics, video games, vehicle routing, and other domains (Silver 2005; Dresner & Stone 2008). In its general form, MAPF is NPcomplete, because it is a generalization of the sliding tile puzzle, which is NP-complete (Ratner & Warrnuth 1986). There are many variants to the MAPF problem. In this paper we consider the following common setting. The cumulative cost function to minimize is the sum over all agents of the number of time steps required to reach the goal location (Standley 2010; Sharon et al. 2011a). Both move Copyright c © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. and wait actions cost one. A centralized computing setting with a single CPU that controls all the agents is assumed. Note that a centralized computing setting is logically equivalent to a decentralized setting where each agent has its own computing power but agents are fully cooperative with full knowledge sharing and free communication. There are two main approaches for solving the MAPF in the centralized computing setting: the coupled and the decoupled approaches. In the decoupled approach, paths are planned for each agent separately. Algorithms from the decoupled approach run relatively fast, but optimality and even completeness are not always guaranteed (Silver 2005; Wang & Botea 2008; Jansen & Sturtevant 2008). New complete (but not optimal) decoupled algorithms were recently introduced for trees (Khorshid, Holte, & Sturtevant 2011) and for general graphs (Luna & Bekris 2011). Our aim is to solve the MAPF problem optimally and therefore the focus of this paper is on the coupled approach. In this approach MAPF is formalized as a global, singleagent search problem. 
One can activate an A*-based algorithm that searches a state space that includes all the different ways to permute the k agents into |V | locations. Consequently, the state space that is searched by the A*-based algorithms grow exponentially with the number of agents. Hence, finding the optimal solutions with A*-based algorithms requires significant computational expense. Previous optimal solvers dealt with this large search space in several ways. Ryan (2008; 2010) abstracted the problem into pre-defined structures such as cliques, halls and rings. He then modeled and solved the problem as a CSP problem. Note that the algorithm Ryan proposed does not necessarily returns the optimal solutions. Standley (2010; 2011) partitioned the given problem into smaller independent problems, if possible. Sharon et. al. (2011a; 2011b) suggested the increasing cost search tree (ICTS) a two-level framework where the high-level phase searches a tree with exact path costs for each of the agents and the low-level phase aims to verify whether there is a solution of this cost. In this paper we focus on the new Conflict Based Search algorithm (CBS) (Sharon et al. 2012) which optimally solves MAPF. CBS is a two-level algorithm where the highlevel search is performed on a constraint tree (CT) whose nodes include constraints on time and locations of a single agent. At each node in the constraint tree a low-level search is performed to find individual paths for all agents under the constraints given by the high-level node. Sharon et al. (2011a; 2011b; 2012) showed that the behavior of optimal MAPF algorithms can be very sensitive to characteristics of the given problem instance such as the topology and size of the graph, the number of agents, the branching factor etc. There is no universally dominant algorithm; different algorithms work well in different circumstances. In particular, experimental results have shown that CBS can significantly outperform all existing optimal MAPF algorithms on some domains (Sharon et al. 2012). However, Sharon et al. (2012) also identified cases where the CBS algorithm performs poorly. In such cases, CBS may even perform exponentially worse than A*. In this paper we aim at mitigating the worst-case performance of CBS by generalizing CBS into a new algorithm called Meta-agent CBS (MA-CBS). In MA-CBS the number of conflicts allowed at the high-level phase between any pair of agents is bounded by a predefined parameter B. When the number of conflicts exceed B, the conflicting agents are merged into a meta-agent and then treated as a joint composite agent by the low-level solver. By bounding the number of conflicts between any pair of agents, we prevent the exponential worst-case of basic CBS. This results in an new MAPF solver that significantly outperforms existing algorithms in a variety of domains. We present both theoretical and empirical support for this claim. In the low-level search, MA-CBS can use any complete MAPF solver. Thus, MA-CBS can be viewed as a solving framework and future MAPF algorithms could also be used by MA-CBS to improve its performance. Furthermore, we show that the original CBS algorithm corresponds to the extreme cases where B = ∞ (never merge agents), and the Independence Dependence (ID) framework (Standley 2010) is the other extreme case where B = 0 (always merge agents when conflicts occur). Thus, MA-CBS allows a continuum between CBS and ID, by setting different values of B between these two extremes. 
The Conflict Based Search Algorithm (CBS) The MA-CBS algorithm presented in this paper is based on the CBS algorithm (Sharon et al. 2012). We thus first describe the CBS algorithm in detail. Definitions for CBS We use the term path only in the context of a single agent and use the term solution to denote a set of k paths for the given set of k agents. A constraint for a given agent ai is a tuple (ai, v, t) where agent ai is prohibited from occupying vertex v at time step t.1 During the course of the algorithm, agents are associated with constraints. A consistent path for agent ai is a path that satisfies all its constraints. Likewise, a consistent solution is a solution that is made up from paths, such that the path for agent ai is consistent with the constraints of ai. A conflict is a tuple (ai, aj , v, t) where agent ai and agent aj occupy vertex v at time point t. A solution (of k paths) is valid if all its A conflict (as well as a constraint) may apply also to an edge when two agents traverse the same edge in opposite directions. paths have no conflicts. A consistent solution can be invalid if, despite the fact that the paths are consistent with their individual agent constraints, these paths still have conflicts. The key idea of CBS is to grow a set of constraints for each of the agents and find paths that are consistent with these constraints. If these paths have conflicts, and are thus invalid, the conflicts are resolved by adding new constraints. CBS works in two levels. At the high-level phase conflicts are found and constraints are added. At the low-level phase, the paths of the agents are updated to be consistent with the new constraints. We now describe each part of this process. High-level: Search the Constraint Tree (CT) At the high-level, CBS searches a constraint tree (CT). A CT is a binary tree. Each node N in the CT contains the following fields of data: 1. A set of constraints (N.constraints). The root of the CT contains an empty set of constraints. The child of a node in the CT inherits the constraints of the parent and adds one new constraint for one agent. 2. A solution (N.solution). A set of k paths, one path for each agent. The path for agent ai must be consistent with the constraints of ai. Such paths are found by the lowlevel search algorithm. 3. The total cost (N.cost). The cost of the current solution (summation over all the single-agent path costs). We denote this cost the f -value of the node. Node N in the CT is a goal node when N.solution is valid, i.e., the set of paths for all agents have no conflicts. The high-level phase performs a best-first search on the CT where nodes are ordered by their costs. Processing a node in the CT Given the list of constraints for a node N of the CT, the low-level search is invoked. This search returns one shortest path for each agent, ai, that is consistent with all the constraints associated with ai in node N . Once a consistent path has be",
"title": ""
},
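The passage spells out the two-level structure of CBS in enough detail to condense its high-level loop into code, which the sketch below does under simplifying assumptions: only vertex conflicts are checked (edge conflicts are omitted), the cost is the plain sum of path lengths, and the low-level single-agent planner, e.g. a time-expanded A* that honours the constraints, is passed in as a function rather than implemented here. The meta-agent merging that distinguishes MA-CBS is not shown.

```python
import heapq
from itertools import count

def cbs(agents, low_level):
    """High-level best-first search over the constraint tree (CT).

    `low_level(agent, constraints)` is assumed to return a shortest single-agent
    path (a list of vertices, one per time step) consistent with the constraint
    set {(agent, vertex, t), ...}, or None if no such path exists.
    """
    tie = count()                                  # tie-breaker so the heap never compares dicts
    root_paths = {a: low_level(a, set()) for a in agents}
    open_list = [(sum(len(p) for p in root_paths.values()), next(tie), set(), root_paths)]
    while open_list:
        _, _, constraints, paths = heapq.heappop(open_list)
        conflict = first_conflict(paths)
        if conflict is None:
            return paths                           # valid, conflict-free solution
        ai, aj, v, t = conflict
        for agent in (ai, aj):                     # branch: forbid v at time t for one agent
            child = set(constraints) | {(agent, v, t)}
            new_paths = dict(paths)
            new_paths[agent] = low_level(agent, child)
            if new_paths[agent] is not None:
                cost = sum(len(p) for p in new_paths.values())
                heapq.heappush(open_list, (cost, next(tie), child, new_paths))
    return None

def first_conflict(paths):
    """Earliest vertex conflict as (agent_i, agent_j, vertex, t), or None."""
    horizon = max(len(p) for p in paths.values())
    for t in range(horizon):
        seen = {}
        for a, p in paths.items():
            v = p[min(t, len(p) - 1)]              # agents wait at their goals
            if v in seen:
                return seen[v], a, v, t
            seen[v] = a
    return None
```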
{
"docid": "348c62670a729da42654f0cf685bba53",
"text": "The networks of intelligent building are usually consist of a great number of smart devices. Since many smart devices only support on-site configuration and upgrade, and communication between devices could be observed and even altered by attackers, efficiency and security are two key concerns in maintaining and managing the devices used in intelligent building networks. In this paper, the authors apply the technology of software defined networking to satisfy the requirement for efficiency in intelligent building networks. More specific, a protocol stack in smart devices that support OpenFlow is designed. In addition, the authors designed the lightweight security mechanism with two foundation protocols and a full protocol that uses the foundation protocols as example. Performance and session key establishment for the security mechanism are also discussed.",
"title": ""
},
{
"docid": "f4b06b8993396fc099abf857d5155730",
"text": "The Self-Organizing Map (SOM) forms a nonlinear projection from a high-dimensional data manifold onto a low-dimensional grid. A representative model of some subset of data is associated with each grid point. The SOM algorithm computes an optimal collection of models that approximates the data in the sense of some error criterion and also takes into account the similarity relations of the models. The models then become ordered on the grid according to their similarity. When the SOM is used for the exploration of statistical data, the data vectors can be approximated by models of the same dimensionality. When mapping documents, one can represent them statistically by their word frequency histograms or some reduced representations of the histograms that can be regarded as data vectors. We have made SOMs of collections of over one million documents. Each document is mapped onto some grid point, with a link from this point to the document database. The documents are ordered on the grid according to their contents and neighboring documents can be browsed readily. Keywords or key texts can be used to search for the most relevant documents rst. New eeective coding and computing schemes of the mapping are described.",
"title": ""
},
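As a concrete reference for the model-fitting step described above, here is a plain online SOM training loop in NumPy. In the document-mapping setting, each row of `data` would be a word-frequency histogram (or a reduced representation of one); the grid size, learning-rate schedule and neighbourhood schedule below are illustrative assumptions, not the settings used for the million-document maps.

```python
import numpy as np

def train_som(data, grid_h=10, grid_w=10, iters=5000, seed=0):
    """Online SOM: repeatedly pick a sample, find its best-matching unit (BMU),
    and pull the BMU and its grid neighbours toward the sample."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))            # one model per grid point
    ys, xs = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    for t in range(iters):
        frac = t / iters
        lr = 0.5 * (1.0 - frac)                            # decaying learning rate
        radius = 0.5 * max(grid_h, grid_w) * (1.0 - frac) + 1e-3
        x = data[rng.integers(len(data))]
        dist = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(np.argmin(dist), dist.shape)
        g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2.0 * radius ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

# Toy usage: 1000 random "documents" with 50-dimensional reduced histograms.
models = train_som(np.random.default_rng(1).random((1000, 50)))
```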
{
"docid": "f2334ce1d717a8f6e91771f95a00b46e",
"text": "High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {−1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet doesn’t incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available 1.",
"title": ""
},
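The quantizer at the core of TernGrad is simple enough to write down directly: each layer's gradient is mapped to the three levels {-s, 0, +s}, where s is the layer-wise maximum magnitude and each component keeps its sign with probability |g_i|/s, which leaves the quantized gradient unbiased in expectation. The sketch below also applies layer-wise clipping before quantization; the clipping constant is an assumed illustrative value, and server-side aggregation and parameter-sharing details are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_gradient(grad, c=2.5):
    """Layer-wise gradient clipping to c standard deviations before quantization
    (the value of c here is an assumption for illustration)."""
    sigma = float(np.std(grad))
    return np.clip(grad, -c * sigma, c * sigma)

def ternarize(grad):
    """Stochastic ternarization: component i becomes s*sign(g_i) with probability
    |g_i|/s and 0 otherwise, so the result equals grad in expectation."""
    s = float(np.max(np.abs(grad)))
    if s == 0.0:
        return grad
    keep = rng.random(grad.shape) < (np.abs(grad) / s)
    return s * np.sign(grad) * keep

# Worker side: each layer sends the scalar s plus a {-1, 0, +1} tensor,
# which is what cuts the gradient-synchronization traffic.
g = rng.normal(size=(256, 128)).astype(np.float32)
g_compressed = ternarize(clip_gradient(g))
```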
{
"docid": "33f0a2bbda3f701dab66a8ffb67d5252",
"text": "Microglia, the resident macrophages of the CNS, are exquisitely sensitive to brain injury and disease, altering their morphology and phenotype to adopt a so-called activated state in response to pathophysiological brain insults. Morphologically activated microglia, like other tissue macrophages, exist as many different phenotypes, depending on the nature of the tissue injury. Microglial responsiveness to injury suggests that these cells have the potential to act as diagnostic markers of disease onset or progression, and could contribute to the outcome of neurodegenerative diseases. The persistence of activated microglia long after acute injury and in chronic disease suggests that these cells have an innate immune memory of tissue injury and degeneration. Microglial phenotype is also modified by systemic infection or inflammation. Evidence from some preclinical models shows that systemic manipulations can ameliorate disease progression, although data from other models indicates that systemic inflammation exacerbates disease progression. Systemic inflammation is associated with a decline in function in patients with chronic neurodegenerative disease, both acutely and in the long term. The fact that diseases with a chronic systemic inflammatory component are risk factors for Alzheimer disease implies that crosstalk occurs between systemic inflammation and microglia in the CNS.",
"title": ""
},
{
"docid": "020545bf4a1050c8c45d5df57df2fed5",
"text": "Relational XQuery systems try to re-use mature relational data management infrastructures to create fast and scalable XML database technology. This paper describes the main features, key contributions, and lessons learned while implementing such a system. Its architecture consists of (i) a range-based encoding of XML documents into relational tables, (ii) a compilation technique that translates XQuery into a basic relational algebra, (iii) a restricted (order) property-aware peephole relational query optimization strategy, and (iv) a mapping from XML update statements into relational updates. Thus, this system implements all essential XML database functionalities (rather than a single feature) such that we can learn from the full consequences of our architectural decisions. While implementing this system, we had to extend the state-of-the-art with a number of new technical contributions, such as loop-lifted staircase join and efficient relational query evaluation strategies for XQuery theta-joins with existential semantics. These contributions as well as the architectural lessons learned are also deemed valuable for other relational back-end engines. The performance and scalability of the resulting system is evaluated on the XMark benchmark up to data sizes of 11GB. The performance section also provides an extensive benchmark comparison of all major XMark results published previously, which confirm that the goal of purely relational XQuery processing, namely speed and scalability, was met.",
"title": ""
},
{
"docid": "b13ccc915f81eca45048ffe9d5da5d4f",
"text": "Mobile robots are increasingly being deployed in the real world in response to a heightened demand for applications such as transportation, delivery and inspection. The motion planning systems for these robots are expected to have consistent performance across the wide range of scenarios that they encounter. While state-of-the-art planners, with provable worst-case guarantees, can be employed to solve these planning problems, their finite time performance varies across scenarios. This thesis proposes that the planning module for a robot must adapt its search strategy to the distribution of planning problems encountered to achieve real-time performance. We address three principal challenges of this problem. Firstly, we show that even when the planning problem distribution is fixed, designing a nonadaptive planner can be challenging as the performance of planning strategies fluctuates with small changes in the environment. We characterize the existence of complementary strategies and propose to hedge our bets by executing a diverse ensemble of planners. Secondly, when the distribution is varying, we require a meta-planner that can automatically select such an ensemble from a library of black-box planners. We show that greedily training a list of predictors to focus on failure cases leads to an effective meta-planner. For situations where we have no training data, we show that we can learn an ensemble on-the-fly by adopting algorithms from online paging theory. Thirdly, in the interest of efficiency, we require a white-box planner that directly adapts its search strategy during a planning cycle. We propose an efficient procedure for training adaptive search heuristics in a data-driven imitation learning framework. We also draw a novel connection to Bayesian active learning, and propose algorithms to adaptively evaluate edges of a graph. Our approach leads to the synthesis of a robust real-time planning module that allows a UAV to navigate seamlessly across environments and speed-regimes. We evaluate our framework on a spectrum of planning problems and show closed-loop results on 3 UAV platforms a full-scale autonomous helicopter, a large scale hexarotor and a small quadrotor. While the thesis was motivated by mobile robots, we have shown that the individual algorithms are broadly applicable to other problem domains such as informative path planning and manipulation planning. We also establish novel connections between the disparate fields of motion planning and active learning, imitation learning and online paging which opens doors to several new research problems.",
"title": ""
},
{
"docid": "2d5dba872d7cd78a9e2d57a494a189ea",
"text": "In this chapter, we give an overview of what ontologies are and how they can be used. We discuss the impact of the expressiveness, the number of domain elements, the community size, the conceptual dynamics, and other variables on the feasibility of an ontology project. Then, we break down the general promise of ontologies of facilitating the exchange and usage of knowledge to six distinct technical advancements that ontologies actually provide, and discuss how this should influence design choices in ontology projects. Finally, we summarize the main challenges of ontology management in real-world applications, and explain which expectations from practitioners can be met as",
"title": ""
},
{
"docid": "4ecc1775c64b7ccc2904070d3657948d",
"text": "Expectation Maximization (EM) is among the most popular algorithms for estimating parameters of statistical models. However, EM, which is an iterative algorithm based on the maximum likelihood principle, is generally only guaranteed to find stationary points of the likelihood objective, and these points may be far from any maximizer. This article addresses this disconnect between the statistical principles behind EM and its algorithmic properties. Specifically, it provides a global analysis of EM for specific models in which the observations comprise an i.i.d. sample from a mixture of two Gaussians. This is achieved by (i) studying the sequence of parameters from idealized execution of EM in the infinite sample limit, and fully characterizing the limit points of the sequence in terms of the initial parameters; and then (ii) based on this convergence analysis, establishing statistical consistency (or lack thereof) for the actual sequence of parameters produced by EM.",
"title": ""
},
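The model analysed above is concrete enough to pair with code: the sketch below runs the sample-based EM iteration for a balanced mixture of two unit-variance univariate Gaussians, which mirrors the idealized population update whose limit points the analysis characterizes. The balanced weights, unit variance, and one-dimensional setting are simplifying assumptions made to keep the example short.

```python
import numpy as np

def em_two_gaussians(x, mu0, mu1, iters=100):
    """EM for a 50/50 mixture of N(mu0, 1) and N(mu1, 1); only the means are updated."""
    mu0, mu1 = float(mu0), float(mu1)
    for _ in range(iters):
        # E-step: posterior responsibility of component 0 for each sample
        d0 = np.exp(-0.5 * (x - mu0) ** 2)
        d1 = np.exp(-0.5 * (x - mu1) ** 2)
        r0 = d0 / (d0 + d1)
        # M-step: responsibility-weighted sample means
        mu0 = np.sum(r0 * x) / np.sum(r0)
        mu1 = np.sum((1.0 - r0) * x) / np.sum(1.0 - r0)
    return mu0, mu1

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(2.0, 1.0, 500)])
print(em_two_gaussians(data, mu0=-0.5, mu1=0.5))   # the initializer determines the limit point
```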
{
"docid": "c175910d1809ad6dc073f79e4ca15c0c",
"text": "The Global Positioning System (GPS) double-difference carrier-phase data are biased by an integer number of cycles. In this contribution a new method is introduced that enables very fast integer least-squares estimation of the ambiguities. The method makes use of an ambiguity transformation that allows one to reformulate the original ambiguity estimation problem as a new problem that is much easier to solve. The transformation aims at decorrelating the least-squares ambiguities and is based on an integer approximation of the conditional least-squares transformation. And through a flattening of the typical discontinuity in the GPS-spectrum of conditional variances of the ambiguities, the transformation returns new ambiguities that show a dramatic improvement in precision in comparison with the original double-difference ambiguities.",
"title": ""
}
] |
scidocsrr
|
c15c96d13564b43d356f34dba3b66f10
|
Neural Joking Machine : Humorous image captioning
|
[
{
"docid": "81b242e3c98eaa20e3be0a9777aa3455",
"text": "Humor is an integral part of human lives. Despite being tremendously impactful, it is perhaps surprising that we do not have a detailed understanding of humor yet. As interactions between humans and AI systems increase, it is imperative that these systems are taught to understand subtleties of human expressions such as humor. In this work, we are interested in the question - what content in a scene causes it to be funny? As a first step towards understanding visual humor, we analyze the humor manifested in abstract scenes and design computational models for them. We collect two datasets of abstract scenes that facilitate the study of humor at both the scene-level and the object-level. We analyze the funny scenes and explore the different types of humor depicted in them via human studies. We model two tasks that we believe demonstrate an understanding of some aspects of visual humor. The tasks involve predicting the funniness of a scene and altering the funniness of a scene. We show that our models perform well quantitatively, and qualitatively through human studies. Our datasets are publicly available.",
"title": ""
}
] |
[
{
"docid": "f94ef71233db13830d29ef9a0802f140",
"text": "In deterministic optimization, line searches are a standard tool ensuring stability and efficiency. Where only stochastic gradients are available, no direct equivalent has so far been formulated, because uncertain gradients do not allow for a strict sequence of decisions collapsing the search space. We construct a probabilistic line search by combining the structure of existing deterministic methods with notions from Bayesian optimization. Our method retains a Gaussian process surrogate of the univariate optimization objective, and uses a probabilistic belief over the Wolfe conditions to monitor the descent. The algorithm has very low computational cost, and no user-controlled parameters. Experiments show that it effectively removes the need to define a learning rate for stochastic gradient descent.",
"title": ""
},
{
"docid": "268a86c25f1974630fada777790b162b",
"text": "The paper presents a novel method and system for personalised (individualised) modelling of spatio/spectro-temporal data (SSTD) and prediction of events. A novel evolving spiking neural network reservoir system (eSNNr) is proposed for the purpose. The system consists of: spike-time encoding module of continuous value input information into spike trains; a recurrent 3D SNNr; eSNN as an evolving output classifier. Such system is generated for every new individual, using existing data of similar individuals. Subject to proper training and parameter optimisation, the system is capable of accurate spatiotemporal pattern recognition (STPR) and of early prediction of individual events. The method and the system are generic, applicable to various SSTD and classification and prediction problems. As a case study, the method is applied for early prediction of occurrence of stroke on an individual basis. Preliminary experiments demonstrated a significant improvement in accuracy and time of event prediction when using the proposed method when compared with standard machine learning methods, such as MLR, SVM, MLP. Future development and applications are discussed.",
"title": ""
},
{
"docid": "1521052e24aca6db9d2a03df05089c88",
"text": "In this paper we suggest advanced IEEE 802.11ax TCP-aware scheduling strategies for optimizing the AP operation under transmission of unidirectional TCP traffic. Our scheduling strategies optimize the performance using the capability for Multi User transmissions over the Uplink, first introduced in IEEE 802.11ax, together with Multi User transmissions over the Downlink. They are based on Transmission Opportunities (TXOP) and we suggest three scheduling strategies determining the TXOP formation parameters. In one of the strategies one can control the achieved Goodput vs. the delay. We also assume saturated WiFi transmission queues. We show that with minimal Goodput degradation one can avoid considerable delays.",
"title": ""
},
{
"docid": "7867544be1b36ffab85b02c63cb03922",
"text": "In this paper a general theory of multistage decimators and interpolators for sampling rate reduction and sampling rate increase is presented. A set of curves and the necessary relations for optimally designing multistage decimators is also given. It is shown that the processes of decimation and interpolation are duals and therefore the same set of design curves applies to both problems. Further, it is shown that highly efficient implementations of narrow-band finite impulse response (FIR) fiiters can be obtained by cascading the processes of decimation and interpolation. Examples show that the efficiencies obtained are comparable to those of recursive elliptic filter designs.",
"title": ""
},
{
"docid": "7c4768707a3efd3791520576a8a78e23",
"text": "The aim of this paper is to research the effectiveness of SMS verification by understanding the correlation between notification and verification of flood early warning messages. This study contributes to the design of the dissemination techniques for SMS as an early warning messages. The metrics used in this study are using user perceptions of tasks, which include the ease of use (EOU) perception for using SMS and confidence with SMS skills perception, as well as, the users' positive perceptions, which include users' perception of usefulness and satisfaction perception towards using SMS as an early warning messages for floods. Experiments and surveys were conducted in flood-prone areas in Semarang, Indonesia. The results showed that the correlation is in users' perceptions of tasks for the confidence with skill.",
"title": ""
},
{
"docid": "2d37baab58e7dd5c442b9041d0995134",
"text": "With the growing problem of childhood obesity, recent research has begun to focus on family and social influences on children's eating patterns. Research has demonstrated that children's eating patterns are strongly influenced by characteristics of both the physical and social environment. With regard to the physical environment, children are more likely to eat foods that are available and easily accessible, and they tend to eat greater quantities when larger portions are provided. Additionally, characteristics of the social environment, including various socioeconomic and sociocultural factors such as parents' education, time constraints, and ethnicity influence the types of foods children eat. Mealtime structure is also an important factor related to children's eating patterns. Mealtime structure includes social and physical characteristics of mealtimes including whether families eat together, TV-viewing during meals, and the source of foods (e.g., restaurants, schools). Parents also play a direct role in children's eating patterns through their behaviors, attitudes, and feeding styles. Interventions aimed at improving children's nutrition need to address the variety of social and physical factors that influence children's eating patterns.",
"title": ""
},
{
"docid": "3deced64cd17210f7e807e686c0221af",
"text": "How should we measure metacognitive (\"type 2\") sensitivity, i.e. the efficacy with which observers' confidence ratings discriminate between their own correct and incorrect stimulus classifications? We argue that currently available methods are inadequate because they are influenced by factors such as response bias and type 1 sensitivity (i.e. ability to distinguish stimuli). Extending the signal detection theory (SDT) approach of Galvin, Podd, Drga, and Whitmore (2003), we propose a method of measuring type 2 sensitivity that is free from these confounds. We call our measure meta-d', which reflects how much information, in signal-to-noise units, is available for metacognition. Applying this novel method in a 2-interval forced choice visual task, we found that subjects' metacognitive sensitivity was close to, but significantly below, optimality. We discuss the theoretical implications of these findings, as well as related computational issues of the method. We also provide free Matlab code for implementing the analysis.",
"title": ""
},
{
"docid": "38a10f18aa943c53892ee995173e773d",
"text": "This project aims at studying how recent interactive and interactions technologies would help extend how we play the guitar, thus defining the “multimodal guitar”. Our contributions target three main axes: audio analysis, gestural control and audio synthesis. For this purpose, we designed and developed a freely-available toolbox for augmented guitar performances, compliant with the PureData and Max/MSP environments, gathering tools for: polyphonic pitch estimation, fretboard visualization and grouping, pressure sensing, modal synthesis, infinite sustain, rearranging looping and “smart” harmonizing.",
"title": ""
},
{
"docid": "680fa29fcd41421a2b3b235555f0cb91",
"text": "Brown adipose tissue (BAT) is the main site of adaptive thermogenesis and experimental studies have associated BAT activity with protection against obesity and metabolic diseases, such as type 2 diabetes mellitus and dyslipidaemia. Active BAT is present in adult humans and its activity is impaired in patients with obesity. The ability of BAT to protect against chronic metabolic disease has traditionally been attributed to its capacity to utilize glucose and lipids for thermogenesis. However, BAT might also have a secretory role, which could contribute to the systemic consequences of BAT activity. Several BAT-derived molecules that act in a paracrine or autocrine manner have been identified. Most of these factors promote hypertrophy and hyperplasia of BAT, vascularization, innervation and blood flow, processes that are all associated with BAT recruitment when thermogenic activity is enhanced. Additionally, BAT can release regulatory molecules that act on other tissues and organs. This secretory capacity of BAT is thought to be involved in the beneficial effects of BAT transplantation in rodents. Fibroblast growth factor 21, IL-6 and neuregulin 4 are among the first BAT-derived endocrine factors to be identified. In this Review, we discuss the current understanding of the regulatory molecules (the so-called brown adipokines or batokines) that are released by BAT that influence systemic metabolism and convey the beneficial metabolic effects of BAT activation. The identification of such adipokines might also direct drug discovery approaches for managing obesity and its associated chronic metabolic diseases.",
"title": ""
},
{
"docid": "1ff4d4588826459f1d8d200d658b9907",
"text": "BACKGROUND\nHealth promotion organizations are increasingly embracing social media technologies to engage end users in a more interactive way and to widely disseminate their messages with the aim of improving health outcomes. However, such technologies are still in their early stages of development and, thus, evidence of their efficacy is limited.\n\n\nOBJECTIVE\nThe study aimed to provide a current overview of the evidence surrounding consumer-use social media and mobile software apps for health promotion interventions, with a particular focus on the Australian context and on health promotion targeted toward an Indigenous audience. Specifically, our research questions were: (1) What is the peer-reviewed evidence of benefit for social media and mobile technologies used in health promotion, intervention, self-management, and health service delivery, with regard to smoking cessation, sexual health, and otitis media? and (2) What social media and mobile software have been used in Indigenous-focused health promotion interventions in Australia with respect to smoking cessation, sexual health, or otitis media, and what is the evidence of their effectiveness and benefit?\n\n\nMETHODS\nWe conducted a scoping study of peer-reviewed evidence for the effectiveness of social media and mobile technologies in health promotion (globally) with respect to smoking cessation, sexual health, and otitis media. A scoping review was also conducted for Australian uses of social media to reach Indigenous Australians and mobile apps produced by Australian health bodies, again with respect to these three areas.\n\n\nRESULTS\nThe review identified 17 intervention studies and seven systematic reviews that met inclusion criteria, which showed limited evidence of benefit from these interventions. We also found five Australian projects with significant social media health components targeting the Indigenous Australian population for health promotion purposes, and four mobile software apps that met inclusion criteria. No evidence of benefit was found for these projects.\n\n\nCONCLUSIONS\nAlthough social media technologies have the unique capacity to reach Indigenous Australians as well as other underserved populations because of their wide and instant disseminability, evidence of their capacity to do so is limited. Current interventions are neither evidence-based nor widely adopted. Health promotion organizations need to gain a more thorough understanding of their technologies, who engages with them, why they engage with them, and how, in order to be able to create successful social media projects.",
"title": ""
},
{
"docid": "6f18fbbd62f753807ba77141f21d0cf6",
"text": "[1] The Mw 6.6, 26 December 2003 Bam (Iran) earthquake was one of the first earthquakes for which Envisat advanced synthetic aperture radar (ASAR) data were available. Using interferograms and azimuth offsets from ascending and descending tracks, we construct a three-dimensional displacement field of the deformation due to the earthquake. Elastic dislocation modeling shows that the observed deformation pattern cannot be explained by slip on a single planar fault, which significantly underestimates eastward and upward motions SE of Bam. We find that the deformation pattern observed can be best explained by slip on two subparallel faults. Eighty-five percent of moment release occurred on a previously unknown strike-slip fault running into the center of Bam, with peak slip of over 2 m occurring at a depth of 5 km. The remainder occurred as a combination of strike-slip and thrusting motion on a southward extension of the previously mapped Bam Fault 5 km to the east.",
"title": ""
},
{
"docid": "40479536efec6311cd735f2bd34605d7",
"text": "The vast quantity of information brought by big data as well as the evolving computer hardware encourages success stories in the machine learning community. In the meanwhile, it poses challenges for the Gaussian process (GP), a well-known non-parametric and interpretable Bayesian model, which suffers from cubic complexity to training size. To improve the scalability while retaining the desirable prediction quality, a variety of scalable GPs have been presented. But they have not yet been comprehensively reviewed and discussed in a unifying way in order to be well understood by both academia and industry. To this end, this paper devotes to reviewing state-of-theart scalable GPs involving two main categories: global approximations which distillate the entire data and local approximations which divide the data for subspace learning. Particularly, for global approximations, we mainly focus on sparse approximations comprising prior approximations which modify the prior but perform exact inference, and posterior approximations which retain exact prior but perform approximate inference; for local approximations, we highlight the mixture/product of experts that conducts model averaging from multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and model capability of scalable GPs are reviewed. Finally, the extensions and open issues regarding the implementation of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research avenues.",
"title": ""
},
{
"docid": "95196bd9be49b426217b7d81fc51a04b",
"text": "This paper builds on the idea that private sector logistics can and should be applied to improve the performance of disaster logistics but that before embarking on this the private sector needs to understand the core capabilities of humanitarian logistics. With this in mind, the paper walks us through the complexities of managing supply chains in humanitarian settings. It pinpoints the cross learning potential for both the humanitarian and private sectors in emergency relief operations as well as possibilities of getting involved through corporate social responsibility. It also outlines strategies for better preparedness and the need for supply chains to be agile, adaptable and aligned—a core competency of many humanitarian organizations involved in disaster relief and an area which the private sector could draw on to improve their own competitive edge. Finally, the article states the case for closer collaboration between humanitarians, businesses and academics to achieve better and more effective supply chains to respond to the complexities of today’s logistics be it the private sector or relieving the lives of those blighted by disaster. Journal of the Operational Research Society (2006) 57, 475–489. doi:10.1057/palgrave.jors.2602125 Published online 14 December 2005",
"title": ""
},
{
"docid": "24297f719741f6691e5121f33bafcc09",
"text": "The hypothesis that cancer is driven by tumour-initiating cells (popularly known as cancer stem cells) has recently attracted a great deal of attention, owing to the promise of a novel cellular target for the treatment of haematopoietic and solid malignancies. Furthermore, it seems that tumour-initiating cells might be resistant to many conventional cancer therapies, which might explain the limitations of these agents in curing human malignancies. Although much work is still needed to identify and characterize tumour-initiating cells, efforts are now being directed towards identifying therapeutic strategies that could target these cells. This Review considers recent advances in the cancer stem cell field, focusing on the challenges and opportunities for anticancer drug discovery.",
"title": ""
},
{
"docid": "71bc346237c5f97ac245dd7b7bbb497f",
"text": "Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or property crime based on their sexual orientation; about half had experienced verbal harassment, and more than 1 in 10 reported having experienced employment or housing discrimination. Gay men were significantly more likely than lesbians or bisexuals to experience violence and property crimes. Employment and housing discrimination were significantly more likely among gay men and lesbians than among bisexual men and women. Implications for future research and policy are discussed.",
"title": ""
},
{
"docid": "3b285e3bd36dfeabb80a2ab57470bdc5",
"text": "This paper presents algorithms and a prototype system for hand tracking and hand posture recognition. Hand postures are represented in terms of hierarchies of multi-scale colour image features at different scales, with qualitative inter-relations in terms of scale, position and orientation. In each image, detection of multi-scale colour features is performed. Hand states are then simultaneously detected and tracked using particle filtering, with an extension of layered sampling referred to as hierarchical layered sampling. Experiments are presented showing that the performance of the system is substantially improved by performing feature detection in colour space and including a prior with respect to skin colour. These components have been integrated into a real-time prototype system, applied to a test problem of controlling consumer electronics using hand gestures. In a simplified demo scenario, this system has been successfully tested by participants at two fairs during 2001.",
"title": ""
},
{
"docid": "dcd919590e0b6b52ea3a6be7378d5d25",
"text": "This work, concerning paraphrase identification task, on one hand contributes to expanding deep learning embeddings to include continuous and discontinuous linguistic phrases. On the other hand, it comes up with a new scheme TF-KLD-KNN to learn the discriminative weights of words and phrases specific to paraphrase task, so that a weighted sum of embeddings can represent sentences more effectively. Based on these two innovations we get competitive state-of-the-art performance on paraphrase identification.",
"title": ""
},
{
"docid": "c1b6934a3d18915a466aa69b6fe78bd4",
"text": "The mucous gel maintains a neutral microclimate at the epithelial cell surface, which may play a role in both the prevention of gastroduodenal injury and the provision of an environment essential for epithelial restitution and regeneration after injury. Enhancement of the components of the mucous barrier by sucralfate may explain its therapeutic efficacy for upper gastrointestinal tract protection, repai, and healing. We studied the effect of sucralfate and its major soluble component, sucrose octasulfate (SOS), on the synthesis and release of gastric mucin and surface active phospholipid, utilizing an isolated canine gastric mucous cells in culture. We correlated these results with the effect of the agents on mucin synthesis and secretion utilizing explants of canine fundusin vitro. Sucralfate and SOS significantly stimulated phospholipid secretion by isolated canine mucous cells in culture (123% and 112% of control, respectively.) Indomethacin pretreatment siginificantly inhibited the effect of sucralfate, but not SOS, on the stimulation of phospholipid release. Administration of either sucralfate or SOS to the isolated canine mucous cells had no effect upon mucin synthesis or secretion using a sensitive immunoassay. Sucralfate and SOS did not stimulate mucin release in the canine explants; sucralfate significantly stimulated the synthesis of mucin, but only to 108% of that observed in untreated explants. No increase in PGE2 release was observed after sucralfate or SOS exposure to the isolated canine mucous cells. Our results suggest sucralfate affects the mucus barrier largely in a qualitative manner. No increase in mucin secretion or major effect on synthesis was notd, although a significant increase in surface active phospholipid release was observed. The lack of dose dependency of this effect, along with the results of the PGE2 assay, suggests the drug may act through a non-receptor-mediated mechanism to perturb the cell membrane and release surface active phospholipid. The enhancement of phospholipid release by sucralfate to augment the barrier function of gastric mucus may, in concert with other effects of the drug, strrengthen mucosal barrier function.",
"title": ""
},
{
"docid": "5096194bcbfebd136c74c30b998fb1f3",
"text": "This present study is designed to propose a conceptual framework extended from the previously advanced Theory of Acceptance Model (TAM). The framework makes it possible to examine the effects of social media, and perceived risk as the moderating effects between intention and actual purchase to be able to advance the Theory of Acceptance Model (TAM). 400 samples will be randomly selected among Saudi in Jeddah, Dammam and Riyadh. Data will be collected using questionnaire survey. As the research involves the analysis of numerical data, the assessment is carried out using Structural Equation Model (SEM). The hypothesis will be tested and the result is used to explain the proposed TAM. The findings from the present study will be beneficial for marketers to understand the intrinsic behavioral factors that influence consumers' selection hence avoid trial and errors in their advertising drives.",
"title": ""
},
{
"docid": "92d5ebd49670681a5d43ba90731ae013",
"text": "Prior work has shown that return oriented programming (ROP) can be used to bypass W⊕X, a software defense that stops shellcode, by reusing instructions from large libraries such as libc. Modern operating systems have since enabled address randomization (ASLR), which randomizes the location of libc, making these techniques unusable in practice. However, modern ASLR implementations leave smaller amounts of executable code unrandomized and it has been unclear whether an attacker can use these small code fragments to construct payloads in the general case. In this paper, we show defenses as currently deployed can be bypassed with new techniques for automatically creating ROP payloads from small amounts of unrandomized code. We propose using semantic program verification techniques for identifying the functionality of gadgets, and design a ROP compiler that is resistant to missing gadget types. To demonstrate our techniques, we build Q, an end-to-end system that automatically generates ROP payloads for a given binary. Q can produce payloads for 80% of Linux /usr/bin programs larger than 20KB. We also show that Q can automatically perform exploit hardening: given an exploit that crashes with defenses on, Q outputs an exploit that bypasses both W⊕X and ASLR. We show that Q can harden nine realworld Linux and Windows exploits, enabling an attacker to automatically bypass defenses as deployed by industry for those programs.",
"title": ""
}
] |
scidocsrr
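The multistage decimator passage above argues that factoring a large sample-rate reduction into cascaded stages, each with its own short anti-aliasing filter, is far cheaper than a single-stage design. The short SciPy sketch below illustrates the idea; the sampling rate, tone frequency, and the 10 * 5 factorization of the overall factor 50 are arbitrary illustrative choices, not values from the cited paper.

```python
import numpy as np
from scipy import signal

# A 1 kHz tone sampled at 100 kHz, to be reduced by an overall factor of 50 (to 2 kHz).
fs = 100_000
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 1_000 * t)

# Single-stage decimation by 50 versus the multistage factorization 50 = 10 * 5.
# Each stage designs and applies its own, much shorter, anti-aliasing FIR filter
# before downsampling, which is where the computational savings come from.
y_single = signal.decimate(x, 50, ftype="fir", zero_phase=True)
stage1 = signal.decimate(x, 10, ftype="fir", zero_phase=True)
y_multi = signal.decimate(stage1, 5, ftype="fir", zero_phase=True)

print(y_single.shape, y_multi.shape)  # both outputs end up at the 2 kHz rate
```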
|
6facc49979ae27f41164bba62992f4c6
|
Emotional Human Machine Conversation Generation Based on SeqGAN
|
[
{
"docid": "f7696fca636f8959a1d0fbeba9b2fb67",
"text": "With the rise in popularity of artificial intelligence, the technology of verbal communication between man and machine has received an increasing amount of attention, but generating a good conversation remains a difficult task. The key factor in human-machine conversation is whether the machine can give good responses that are appropriate not only at the content level (relevant and grammatical) but also at the emotion level (consistent emotional expression). In our paper, we propose a new model based on long short-term memory, which is used to achieve an encoder-decoder framework, and we address the emotional factor of conversation generation by changing the model’s input using a series of input transformations: a sequence without an emotional category, a sequence with an emotional category for the input sentence, and a sequence with an emotional category for the output responses. We perform a comparison between our work and related work and find that we can obtain slightly better results with respect to emotion consistency. Although in terms of content coherence our result is lower than those of related work, in the present stage of research, our method can generally generate emotional responses in order to control and improve the user’s emotion. Our experiment shows that through the introduction of emotional intelligence, our model can generate responses appropriate not only in content but also in emotion.",
"title": ""
},
{
"docid": "9b9181c7efd28b3e407b5a50f999840a",
"text": "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines. Introduction Generating sequential synthetic data that mimics the real one is an important problem in unsupervised learning. Recently, recurrent neural networks (RNNs) with long shortterm memory (LSTM) cells (Hochreiter and Schmidhuber 1997) have shown excellent performance ranging from natural language generation to handwriting generation (Wen et al. 2015; Graves 2013). The most common approach to training an RNN is to maximize the log predictive likelihood of each true token in the training sequence given the previous observed tokens (Salakhutdinov 2009). However, as argued in (Bengio et al. 2015), the maximum likelihood approaches suffer from so-called exposure bias in the inference stage: the model generates a sequence iteratively and predicts next token conditioned on its previously predicted ones that may be never observed in the training data. Such a discrepancy between training and inference can incur accumulatively along with the sequence and will become prominent ∗Weinan Zhang is the corresponding author. Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. as the length of sequence increases. To address this problem, (Bengio et al. 2015) proposed a training strategy called scheduled sampling (SS), where the generative model is partially fed with its own synthetic data as prefix (observed tokens) rather than the true data when deciding the next token in the training stage. Nevertheless, (Huszár 2015) showed that SS is an inconsistent training strategy and fails to address the problem fundamentally. Another possible solution of the training/inference discrepancy problem is to build the loss function on the entire generated sequence instead of each transition. For instance, in the application of machine translation, a task specific sequence score/loss, bilingual evaluation understudy (BLEU) (Papineni et al. 2002), can be adopted to guide the sequence generation. However, in many other practical applications, such as poem generation (Zhang and Lapata 2014) and chatbot (Hingston 2009), a task specific loss may not be directly available to score a generated sequence accurately. 
Generative adversarial net (GAN) proposed by (Goodfellow and others 2014) is a promising framework for alleviating the above problem. Specifically, in GAN a discriminative net D learns to distinguish whether a given data instance is real or not, and a generative net G learns to confuse D by generating high quality data. This approach has been successful and has been mostly applied in computer vision tasks of generating samples of natural images (Denton et al. 2015). Unfortunately, applying GAN to generating sequences has two problems. Firstly, GAN is designed for generating real-valued, continuous data but has difficulties in directly generating sequences of discrete tokens, such as texts (Huszár 2015). The reason is that in GANs, the generator starts with random sampling first and then a deterministic transform, governed by the model parameters. As such, the gradient of the loss from D w.r.t. the outputs by G is used to guide the generative model G (parameters) to slightly change the generated value to make it more realistic. If the generated data is based on discrete tokens, the “slight change” guidance from the discriminative net makes little sense because there is probably no corresponding token for such slight change in the limited dictionary space (Goodfellow 2016). Secondly, GAN can only give the score/loss for an entire sequence when it has been generated; for a partially generated sequence, it is non-trivial to balance how good it is now against its future score as an entire sequence. In this paper, to address the above two issues, we follow (Bachman and Precup 2015; Bahdanau et al. 2016) and consider the sequence generation procedure as a sequential decision making process. The generative model is treated as an agent of reinforcement learning (RL); the state is the generated tokens so far and the action is the next token to be generated. Unlike the work in (Bahdanau et al. 2016) that requires a task-specific sequence score, such as BLEU in machine translation, to give the reward, we employ a discriminator to evaluate the sequence and feed back the evaluation to guide the learning of the generative model. To solve the problem that the gradient cannot pass back to the generative model when the output is discrete, we regard the generative model as a stochastic parametrized policy. In our policy gradient, we employ Monte Carlo (MC) search to approximate the state-action value. We directly train the policy (generative model) via policy gradient (Sutton et al. 1999), which naturally avoids the differentiation difficulty for discrete data in a conventional GAN. Extensive experiments based on synthetic and real data are conducted to investigate the efficacy and properties of the proposed SeqGAN. In our synthetic data environment, SeqGAN significantly outperforms the maximum likelihood methods, scheduled sampling and PG-BLEU. In three real-world tasks, i.e. poem generation, speech language generation and music generation, SeqGAN significantly outperforms the compared baselines in various metrics including human expert judgement. Related Work Deep generative models have recently drawn significant attention, and the ability of learning over large (unlabeled) data endows them with more potential and vitality (Salakhutdinov 2009; Bengio et al. 2013). (Hinton, Osindero, and Teh 2006) first proposed to use the contrastive divergence algorithm to efficiently train deep belief nets (DBN). (Bengio et al. 2013) proposed denoising autoencoder (DAE) that learns the data distribution in a supervised learning fashion. Both DBN and DAE learn a low dimensional representation (encoding) for each data instance and generate it from a decoding network. Recently, the variational autoencoder (VAE), which combines deep learning with statistical inference, was proposed to represent a data instance in a latent hidden space (Kingma and Welling 2014), while still utilizing (deep) neural networks for non-linear mapping. The inference is done via variational methods. All these generative models are trained by maximizing (the lower bound of) training data likelihood, which, as mentioned by (Goodfellow and others 2014), suffers from the difficulty of approximating intractable probabilistic computations. (Goodfellow and others 2014) proposed an alternative training methodology to generative models, i.e. GANs, where the training procedure is a minimax game between a generative model and a discriminative model. This framework bypasses the difficulty of maximum likelihood learning and has gained striking successes in natural image generation (Denton et al. 2015). However, little progress has been made in applying GANs to discrete sequence data generation problems, e.g. natural language generation (Huszár 2015). This is because the generator network in GAN is designed to be able to adjust the output continuously, which does not work for discrete data generation (Goodfellow 2016). On the other hand, much effort has been made to generate structured sequences. Recurrent neural networks can be trained to produce sequences of tokens in many applications such as machine translation (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014). The most popular way of training RNNs is to maximize the likelihood of each token in the training data, whereas (Bengio et al. 2015) pointed out that the discrepancy between training and generating makes the maximum likelihood estimation suboptimal and proposed the scheduled sampling strategy (SS). Later (Huszár 2015) theorized that the objective function underneath SS is improper and explained the reason why GANs tend to generate natural-looking samples in theory. Consequently, GANs have great potential but are currently not practically feasible for discrete probabilistic models. As pointed out by (Bachman and Precup 2015), sequence data generation can be formulated as a sequential decision making process, which can potentially be solved by reinforcement learning techniques. Modeling the sequence generator as a policy of picking the next token, policy gradient methods (Sutton et al. 1999) can be adopted to optimize the generator once there is an (implicit) reward function to guide the policy. For most practical sequence generation tasks, e.g. machine translation (Sutskever, Vinyals, and Le 2014), the reward signal is meaningful only for the entire sequence; for instance, in the game of Go (Silver et al. 2016), the reward signal is only set at the end of the game. In",
"title": ""
},
{
"docid": "33468c214408d645651871bd8018ed82",
"text": "In this paper, we carry out two experiments on the TIMIT speech corpus with bidirectional and unidirectional Long Short Term Memory (LSTM) networks. In the first experiment (framewise phoneme classification) we find that bidirectional LSTM outperforms both unidirectional LSTM and conventional Recurrent Neural Networks (RNNs). In the second (phoneme recognition) we find that a hybrid BLSTM-HMM system improves on an equivalent traditional HMM system, as well as unidirectional LSTM-HMM.",
"title": ""
}
] |
[
{
"docid": "d38e5fa4adadc3e979c5de812599c78a",
"text": "The convergence properties of a nearest neighbor rule that uses an editing procedure to reduce the number of preclassified samples and to improve the performance of the rule are developed. Editing of the preclassified samples using the three-nearest neighbor rule followed by classification using the single-nearest neighbor rule with the remaining preclassified samples appears to produce a decision procedure whose risk approaches the Bayes' risk quite closely in many problems with only a few preclassified samples. The asymptotic risk of the nearest neighbor rules and the nearest neighbor rules using edited preclassified samples is calculated for several problems.",
"title": ""
},
{
"docid": "affbc18a3ba30c43959e37504b25dbdc",
"text": "ion for Falsification Thomas Ball , Orna Kupferman , and Greta Yorsh 3 1 Microsoft Research, Redmond, WA, USA. Email: tball@microsoft.com, URL: research.microsoft.com/ ∼tball 2 Hebrew University, School of Eng. and Comp. Sci., Jerusalem 91904, Israel. Email: orna@cs.huji.ac.il, URL: www.cs.huji.ac.il/ ∼orna 3 Tel-Aviv University, School of Comp. Sci., Tel-Aviv 69978, Israel. Email:gretay@post.tau.ac.il, URL: www.math.tau.ac.il/ ∼gretay Microsoft Research Technical Report MSR-TR-2005-50 Abstract. Abstraction is traditionally used in the process of verification. There, an abstraction of a concrete system is sound if properties of the abstract system also hold in the conAbstraction is traditionally used in the process of verification. There, an abstraction of a concrete system is sound if properties of the abstract system also hold in the concrete system. Specifically, if an abstract state satisfies a property ψ thenall the concrete states that correspond to a satisfyψ too. Since the ideal goal of proving a system correct involves many obstacles, the primary use of formal methods nowadays is fal ification. There, as intesting, the goal is to detect errors, rather than to prove correctness. In the falsification setting, we can say that an abstraction is sound if errors of the abstract system exist also in the concrete system. Specifically, if an abstract state a violates a propertyψ, thenthere existsa concrete state that corresponds to a and violatesψ too. An abstraction that is sound for falsification need not be sound for verification. This suggests that existing frameworks for abstraction for verification may be too restrictive when used for falsification, and that a new framework is needed in order to take advantage of the weaker definition of soundness in the falsification setting. We present such a framework, show that it is indeed stronger (than other abstraction frameworks designed for verification), demonstrate that it can be made even stronger by parameterizing its transitions by predicates, and describe how it can be used for falsification of branching-time and linear-time temporal properties, as well as for generating testing goals for a concrete system by reasoning about its abstraction.",
"title": ""
},
{
"docid": "fbecc8c4a8668d403df85b4e52348f6e",
"text": "Honeypots are more and more used to collect data on malicious activities on the Internet and to better understand the strategies and techniques used by attackers to compromise target systems. Analysis and modeling methodologies are needed to support the characterization of attack processes based on the data collected from the honeypots. This paper presents some empirical analyses based on the data collected from the Leurré.com honeypot platforms deployed on the Internet and presents some preliminary modeling studies aimed at fulfilling such objectives.",
"title": ""
},
{
"docid": "f00b9a311fb8b14100465c187c9e4659",
"text": "We propose a framework for solving combinatorial optimization problems of which the output can be represented as a sequence of input elements. As an alternative to the Pointer Network, we parameterize a policy by a model based entirely on (graph) attention layers, and train it efficiently using REINFORCE with a simple and robust baseline based on a deterministic (greedy) rollout of the best policy found during training. We significantly improve over state-of-the-art results for learning algorithms for the 2D Euclidean TSP, reducing the optimality gap for a single tour construction by more than 75% (to 0.33%) and 50% (to 2.28%) for instances with 20 and 50 nodes respectively.",
"title": ""
},
{
"docid": "fba0ff24acbe07e1204b5fe4c492ab72",
"text": "To ensure high quality software, it is crucial that non‐functional requirements (NFRs) are well specified and thoroughly tested in parallel with functional requirements (FRs). Nevertheless, in requirement specification the focus is mainly on FRs, even though NFRs have a critical role in the success of software projects. This study presents a systematic literature review of the NFR specification in order to identify the current state of the art and needs for future research. The systematic review summarizes the 51 relevant papers found and discusses them within seven major sub categories with “combination of other approaches” being the one with most prior results.",
"title": ""
},
{
"docid": "f43ed3feda4e243a1cb77357b435fb52",
"text": "Existing text generation methods tend to produce repeated and “boring” expressions. To tackle this problem, we propose a new text generation model, called Diversity-Promoting Generative Adversarial Network (DP-GAN). The proposed model assigns low reward for repeatedly generated text and high reward for “novel” and fluent text, encouraging the generator to produce diverse and informative text. Moreover, we propose a novel languagemodel based discriminator, which can better distinguish novel text from repeated text without the saturation problem compared with existing classifier-based discriminators. The experimental results on review generation and dialogue generation tasks demonstrate that our model can generate substantially more diverse and informative text than existing baselines.1",
"title": ""
},
{
"docid": "d90a66cf63abdc1d0caed64812de7043",
"text": "BACKGROUND/AIMS\nEnd-stage liver disease accounts for one in forty deaths worldwide. Chronic infections with hepatitis B virus (HBV) and hepatitis C virus (HCV) are well-recognized risk factors for cirrhosis and liver cancer, but estimates of their contributions to worldwide disease burden have been lacking.\n\n\nMETHODS\nThe prevalence of serologic markers of HBV and HCV infections among patients diagnosed with cirrhosis or hepatocellular carcinoma (HCC) was obtained from representative samples of published reports. Attributable fractions of cirrhosis and HCC due to these infections were estimated for 11 WHO-based regions.\n\n\nRESULTS\nGlobally, 57% of cirrhosis was attributable to either HBV (30%) or HCV (27%) and 78% of HCC was attributable to HBV (53%) or HCV (25%). Regionally, these infections usually accounted for >50% of HCC and cirrhosis. Applied to 2002 worldwide mortality estimates, these fractions represent 929,000 deaths due to chronic HBV and HCV infections, including 446,000 cirrhosis deaths (HBV: n=235,000; HCV: n=211,000) and 483,000 liver cancer deaths (HBV: n=328,000; HCV: n=155,000).\n\n\nCONCLUSIONS\nHBV and HCV infections account for the majority of cirrhosis and primary liver cancer throughout most of the world, highlighting the need for programs to prevent new infections and provide medical management and treatment for those already infected.",
"title": ""
},
{
"docid": "955376cf6d04373c407987613d1c2bd1",
"text": "Active learning (AL) is an increasingly popular strategy for mitigating the amount of labeled data required to train classifiers, thereby reducing annotator effort. We describe a real-world, deployed application of AL to the problem of biomedical citation screening for systematic reviews at the Tufts Medical Center's Evidence-based Practice Center. We propose a novel active learning strategy that exploits a priori domain knowledge provided by the expert (specifically, labeled features)and extend this model via a Linear Programming algorithm for situations where the expert can provide ranked labeled features. Our methods outperform existing AL strategies on three real-world systematic review datasets. We argue that evaluation must be specific to the scenario under consideration. To this end, we propose a new evaluation framework for finite-pool scenarios, wherein the primary aim is to label a fixed set of examples rather than to simply induce a good predictive model. We use a method from medical decision theory for eliciting the relative costs of false positives and false negatives from the domain expert, constructing a utility measure of classification performance that integrates the expert preferences. Our findings suggest that the expert can, and should, provide more information than instance labels alone. In addition to achieving strong empirical results on the citation screening problem, this work outlines many important steps for moving away from simulated active learning and toward deploying AL for real-world applications.",
"title": ""
},
{
"docid": "56fa6f96657182ff527e42655bbd0863",
"text": "Nootropics or smart drugs are well-known compounds or supplements that enhance the cognitive performance. They work by increasing the mental function such as memory, creativity, motivation, and attention. Recent researches were focused on establishing a new potential nootropic derived from synthetic and natural products. The influence of nootropic in the brain has been studied widely. The nootropic affects the brain performances through number of mechanisms or pathways, for example, dopaminergic pathway. Previous researches have reported the influence of nootropics on treating memory disorders, such as Alzheimer's, Parkinson's, and Huntington's diseases. Those disorders are observed to impair the same pathways of the nootropics. Thus, recent established nootropics are designed sensitively and effectively towards the pathways. Natural nootropics such as Ginkgo biloba have been widely studied to support the beneficial effects of the compounds. Present review is concentrated on the main pathways, namely, dopaminergic and cholinergic system, and the involvement of amyloid precursor protein and secondary messenger in improving the cognitive performance.",
"title": ""
},
{
"docid": "c26eabb377db5f1033ec6d354d890a6f",
"text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.",
"title": ""
},
{
"docid": "a712b6efb5c869619864cd817c2e27e1",
"text": "We measure the value of promotional activities and referrals by content creators to an online platform of user-generated content. To do so, we develop a modeling approach that explains individual-level choices of visiting the platform, creating, and purchasing content, as a function of consumer characteristics and marketing activities, allowing for the possibility of interdependence of decisions within and across users. Empirically, we apply our model to Hewlett-Packard’s (HP) print-on-demand service of user-created magazines, named MagCloud. We use two distinct data sets to show the applicability of our approach: an aggregate-level data set from Google Analytics, which is a widely available source of data to managers, and an individual-level data set from HP. Our results compare content creator activities, which include referrals and word-ofmouth efforts, with firm-based actions, such as price promotions and public relations. We show that price promotions have strong effects, but limited to the purchase decisions, while content creator referrals and public relations have broader effects which impact all consumer decisions at the platform. We provide recommendations to the level of the firm’s investments when “free” promotional activities by content creators exist. These “free” marketing campaigns are likely to have a substantial presence in most online services of user-generated content.",
"title": ""
},
{
"docid": "1072728cf72fe02d3e1f3c45bfc877b5",
"text": "The annihilating filter-based low-rank Hanel matrix approach (ALOHA) is one of the state-of-the-art compressed sensing approaches that directly interpolates the missing k-space data using low-rank Hankel matrix completion. Inspired by the recent mathematical discovery that links deep neural networks to Hankel matrix decomposition using data-driven framelet basis, here we propose a fully data-driven deep learning algorithm for k-space interpolation. Our network can be also easily applied to non-Cartesian k-space trajectories by simply adding an additional re-gridding layer. Extensive numerical experiments show that the proposed deep learning method significantly outperforms the existing image-domain deep learning approaches.",
"title": ""
},
{
"docid": "dc4a08d2b98f1e099227c4f80d0b84df",
"text": "We address action temporal localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in action temporal localization via multi-stage segment-based 3D ConvNets: (1) a proposal stage identifies candidate segments in a long video that may contain actions; (2) a classification stage learns one-vs-all action classification model to serve as initialization for the localization stage; and (3) a localization stage fine-tunes on the model learnt in the classification stage to localize each action instance. We propose a novel loss function for the localization stage to explicitly consider temporal overlap and therefore achieve high temporal localization accuracy. On two large-scale benchmarks, our approach achieves significantly superior performances compared with other state-of-the-art systems: mAP increases from 1.7% to 7.4% on MEXaction2 and increased from 15.0% to 19.0% on THUMOS 2014, when the overlap threshold for evaluation is set to 0.5.",
"title": ""
},
{
"docid": "c21e999407da672be5bac4eaba950168",
"text": "Software engineers are frequently faced with tasks that can be expressed as optimization problems. To support them with automation, search-based model-driven engineering combines the abstraction power of models with the versatility of meta-heuristic search algorithms. While current approaches in this area use genetic algorithms with xed mutation operators to explore the solution space, the e ciency of these operators may heavily depend on the problem at hand. In this work, we propose FitnessStudio, a technique for generating e cient problem-tailored mutation operators automatically based on a two-tier framework. The lower tier is a regular meta-heuristic search whose mutation operator is trained by an upper-tier search using a higher-order model transformation. We implemented this framework using the Henshin transformation language and evaluated it in a benchmark case, where the generated mutation operators enabled an improvement to the state of the art in terms of result quality, without sacri cing performance.",
"title": ""
},
{
"docid": "5950aadef33caa371f0de304b2b4869d",
"text": "Responding to a 2015 MISQ call for research on service innovation, this study develops a conceptual model of service innovation in higher education academic libraries. Digital technologies have drastically altered the delivery of information services in the past decade, raising questions about critical resources, their interaction with digital technologies, and the value of new services and their measurement. Based on new product development (NPD) and new service development (NSD) processes and the service-dominant logic (SDL) perspective, this research-in-progress presents a conceptual model that theorizes interactions between critical resources and digital technologies in an iterative process for delivery of service innovation in academic libraries. The study also suggests future research paths to confirm, expand, and validate the new service innovation model.",
"title": ""
},
{
"docid": "1b063dfecff31de929383b8ab74f7f6b",
"text": "This paper studies a class of adaptive gradient based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. This class, which we refer to as the “Adam-type”, includes the popular algorithms such as Adam (Kingma & Ba, 2014) , AMSGrad (Reddi et al., 2018) , AdaGrad (Duchi et al., 2011). Despite their popularity in training deep neural networks (DNNs), the convergence of these algorithms for solving non-convex problems remains an open question. In this paper, we develop an analysis framework and a set of mild sufficient conditions that guarantee the convergence of the Adam-type methods, with a convergence rate of order O(log T/ √ T ) for non-convex stochastic optimization. Our convergence analysis applies to a new algorithm called AdaFom (AdaGrad with First Order Momentum). We show that the conditions are essential, by identifying concrete examples in which violating the conditions makes an algorithm diverge. Besides providing one of the first comprehensive analysis for Adam-type methods in the non-convex setting, our results can also help the practitioners to easily monitor the progress of algorithms and determine their convergence behavior.",
"title": ""
},
{
"docid": "8c03df6650b3e400bc5447916d01820a",
"text": "People called night owls habitually have late bedtimes and late times of arising, sometimes suffering a heritable circadian disturbance called delayed sleep phase syndrome (DSPS). Those with DSPS, those with more severe progressively-late non-24-hour sleep-wake cycles, and those with bipolar disorder may share genetic tendencies for slowed or delayed circadian cycles. We searched for polymorphisms associated with DSPS in a case-control study of DSPS research participants and a separate study of Sleep Center patients undergoing polysomnography. In 45 participants, we resequenced portions of 15 circadian genes to identify unknown polymorphisms that might be associated with DSPS, non-24-hour rhythms, or bipolar comorbidities. We then genotyped single nucleotide polymorphisms (SNPs) in both larger samples, using Illumina Golden Gate assays. Associations of SNPs with the DSPS phenotype and with the morningness-eveningness parametric phenotype were computed for both samples, then combined for meta-analyses. Delayed sleep and \"eveningness\" were inversely associated with loci in circadian genes NFIL3 (rs2482705) and RORC (rs3828057). A group of haplotypes overlapping BHLHE40 was associated with non-24-hour sleep-wake cycles, and less robustly, with delayed sleep and bipolar disorder (e.g., rs34883305, rs34870629, rs74439275, and rs3750275 were associated with n=37, p=4.58E-09, Bonferroni p=2.95E-06). Bright light and melatonin can palliate circadian disorders, and genetics may clarify the underlying circadian photoperiodic mechanisms. After further replication and identification of the causal polymorphisms, these findings may point to future treatments for DSPS, non-24-hour rhythms, and possibly bipolar disorder or depression.",
"title": ""
},
{
"docid": "b8dfe30c07f0caf46b3fc59406dbf017",
"text": "We describe an extensible approach to generating questions for the purpose of reading comprehension assessment and practice. Our framework for question generation composes general-purpose rules to transform declarative sentences into questions, is modular in that existing NLP tools can be leveraged, and includes a statistical component for scoring questions based on features of the input, output, and transformations performed. In an evaluation in which humans rated questions according to several criteria, we found that our implementation achieves 43.3% precisionat-10 and generates approximately 6.8 acceptable questions per 250 words of source text.",
"title": ""
},
{
"docid": "139f750d4e53b86bc785785b7129e6ee",
"text": "Enterprise Resource Planning (ERP) systems hold great promise for integrating business processes and have proven their worth in a variety of organizations. Yet the gains that they have enabled in terms of increased productivity and cost savings are often achieved in the face of daunting usability problems. While one frequently hears anecdotes about the difficulties involved in using ERP systems, there is little documentation of the types of problems typically faced by users. The purpose of this study is to begin addressing this gap by categorizing and describing the usability issues encountered by one division of a Fortune 500 company in the first years of its large-scale ERP implementation. This study also demonstrates the promise of using collaboration theory to evaluate usability characteristics of existing systems and to design new systems. Given the impressive results already achieved by some corporations with these systems, imagine how much more would be possible if understanding how to use them weren’t such an",
"title": ""
},
{
"docid": "7b1b0e31384cb99caf0f3d8cf8134a53",
"text": "Toxic epidermal necrolysis (TEN) is one of the most threatening adverse reactions to various drugs. No case of concomitant occurrence TEN and severe granulocytopenia following the treatment with cefuroxime has been reported to date. Herein we present a case of TEN that developed eighteen days of the initiation of cefuroxime axetil therapy for urinary tract infection in a 73-year-old woman with chronic renal failure and no previous history of allergic diathesis. The condition was associated with severe granulocytopenia and followed by gastrointestinal hemorrhage, severe sepsis and multiple organ failure syndrome development. Despite intensive medical treatment the patient died. The present report underlines the potential of cefuroxime to simultaneously induce life threatening adverse effects such as TEN and severe granulocytopenia. Further on, because the patient was also taking furosemide for chronic renal failure, the possible unfavorable interactions between the two drugs could be hypothesized. Therefore, awareness of the possible drug interaction is necessary, especially when given in conditions of their altered pharmacokinetics as in case of chronic renal failure.",
"title": ""
}
] |
scidocsrr
|
fbbf8a4fae9225bb651a3199beed5417
|
Computation offloading and resource allocation for low-power IoT edge devices
|
[
{
"docid": "16fbebf500be1bf69027d3a35d85362b",
"text": "Mobile Edge Computing is an emerging technology that provides cloud and IT services within the close proximity of mobile subscribers. Traditional telecom network operators perform traffic control flow (forwarding and filtering of packets), but in Mobile Edge Computing, cloud servers are also deployed in each base station. Therefore, network operator has a great responsibility in serving mobile subscribers. Mobile Edge Computing platform reduces network latency by enabling computation and storage capacity at the edge network. It also enables application developers and content providers to serve context-aware services (such as collaborative computing) by using real time radio access network information. Mobile and Internet of Things devices perform computation offloading for compute intensive applications, such as image processing, mobile gaming, to leverage the Mobile Edge Computing services. In this paper, some of the promising real time Mobile Edge Computing application scenarios are discussed. Later on, a state-of-the-art research efforts on Mobile Edge Computing domain is presented. The paper also presents taxonomy of Mobile Edge Computing, describing key attributes. Finally, open research challenges in successful deployment of Mobile Edge Computing are identified and discussed.",
"title": ""
},
{
"docid": "2c4babb483ddd52c9f1333cbe71a3c78",
"text": "The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.",
"title": ""
},
{
"docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db",
"text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.",
"title": ""
},
{
"docid": "956799f28356850fda78a223a55169bf",
"text": "Despite increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. In this paper, we provide an extensive survey of mobile cloud computing research, while highlighting the specific concerns in mobile cloud computing. We present a taxonomy based on the key issues in this area, and discuss the different approaches taken to tackle these issues. We conclude the paper with a critical analysis of challenges that have not yet been fully met, and highlight directions for",
"title": ""
},
{
"docid": "bd820eea00766190675cd3e8b89477f2",
"text": "Mobile Edge Computing (MEC), a new concept that emerged about a year ago, integrating the IT and the Telecom worlds will have a great impact on the openness of the Telecom market. Furthermore, the virtualization revolution that has enabled the Cloud computing success will benefit the Telecom domain, which in turn will be able to support the IaaS (Infrastructure as a Service). The main objective of MEC solution is the export of some Cloud capabilities to the user's proximity decreasing the latency, augmenting the available bandwidth and decreasing the load on the core network. On the other hand, the Internet of Things (IoT), the Internet of the future, has benefited from the proliferation in the mobile phones' usage. Many mobile applications have been developed to connect a world of things (wearables, home automation systems, sensors, RFID tags etc.) to the Internet. Even if it is not a complete solution for a scalable IoT architecture but the time sensitive IoT applications (e-healthcare, real time monitoring, etc.) will profit from the MEC architecture. Furthermore, IoT can extend this paradigm to other areas (e.g. Vehicular Ad-hoc NETworks) with the use of Software Defined Network (SDN) orchestration to cope with the challenges hindering the IoT real deployment, as we will illustrate in this paper.",
"title": ""
}
] |
[
{
"docid": "e602ab2a2d93a8912869ae8af0925299",
"text": "Software-based MMU emulation lies at the heart of outof-VM live memory introspection, an important technique in the cloud setting that applications such as live forensics and intrusion detection depend on. Due to the emulation, the software-based approach is much slower compared to native memory access by the guest VM. The slowness not only results in undetected transient malicious behavior, but also inconsistent memory view with the guest; both undermine the effectiveness of introspection. We propose the immersive execution environment (ImEE) with which the guest memory is accessed at native speed without any emulation. Meanwhile, the address mappings used within the ImEE are ensured to be consistent with the guest throughout the introspection session. We have implemented a prototype of the ImEE on Linux KVM. The experiment results show that ImEE-based introspection enjoys a remarkable speed up, performing several hundred times faster than the legacy method. Hence, this design is especially useful for realtime monitoring, incident response and high-intensity introspection.",
"title": ""
},
{
"docid": "34883c8cef40a0e587295b6ece1b796b",
"text": "Instance weighting has been widely applied to phrase-based machine translation domain adaptation. However, it is challenging to be applied to Neural Machine Translation (NMT) directly, because NMT is not a linear model. In this paper, two instance weighting technologies, i.e., sentence weighting and domain weighting with a dynamic weight learning strategy, are proposed for NMT domain adaptation. Empirical results on the IWSLT EnglishGerman/French tasks show that the proposed methods can substantially improve NMT performance by up to 2.7-6.7 BLEU points, outperforming the existing baselines by up to 1.6-3.6 BLEU points.",
"title": ""
},
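The passage above applies sentence- and domain-level instance weighting to NMT training. As a rough illustration of the loss side only (the paper's dynamic weight-learning strategy is not reproduced), the sketch below scales each sentence's negative log-likelihood by a per-sentence weight; the weights, log-probabilities, and normalization scheme are all assumptions for the example.

```python
# Hedged sketch of sentence-level instance weighting: each training sentence contributes to
# the loss in proportion to a weight w_i (e.g., an in-domain similarity score).
import numpy as np

def weighted_nll(token_logprobs, weights):
    """token_logprobs: list of 1-D arrays, log p(y_t | y_<t, x) for each sentence.
    weights: per-sentence instance weights, e.g. from a domain classifier."""
    per_sentence = np.array([-lp.sum() for lp in token_logprobs])  # sentence-level NLL
    w = np.asarray(weights, dtype=float)
    w = w * len(w) / w.sum()                                       # normalize so weights average to 1
    return float((w * per_sentence).mean())

# two in-domain-ish sentences and one out-of-domain sentence (toy log-probabilities)
logps = [np.log([0.7, 0.6, 0.8]), np.log([0.5, 0.9]), np.log([0.2, 0.3, 0.1])]
print(weighted_nll(logps, weights=[1.0, 1.0, 0.2]))
```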
{
"docid": "fec8129b24f30d4dbb93df4dce7885e8",
"text": "We propose a method to improve the translation of pronouns by resolving their coreference to prior mentions. We report results using two different co-reference resolution methods and point to remaining challenges.",
"title": ""
},
{
"docid": "13b9fd37b1cf4f15def39175157e12c5",
"text": "Although motorcycle safety helmets are known for preventing head injuries, in many countries, the use of motorcycle helmets is low due to the lack of police power to enforcing helmet laws. This paper presents a system which automatically detect motorcycle riders and determine that they are wearing safety helmets or not. The system extracts moving objects and classifies them as a motorcycle or other moving objects based on features extracted from their region properties using K-Nearest Neighbor (KNN) classifier. The heads of the riders on the recognized motorcycle are then counted and segmented based on projection profiling. The system classifies the head as wearing a helmet or not using KNN based on features derived from 4 sections of segmented head region. Experiment results show an average correct detection rate for near lane, far lane, and both lanes as 84%, 68%, and 74%, respectively.",
"title": ""
},
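The passage above classifies moving regions and segmented head regions with a K-nearest-neighbour vote over region-property features. The sketch below shows only that voting step; the feature names, toy values, and k are invented, and the detection and projection-profiling stages are omitted.

```python
# Hedged sketch of the KNN classification stage over hand-crafted region features.
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    d = np.linalg.norm(X_train - x, axis=1)          # Euclidean distance to every training region
    nearest = y_train[np.argsort(d)[:k]]             # labels of the k closest regions
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]                 # majority vote

# toy features: [aspect_ratio, normalized_area]
X = np.array([[0.45, 0.30], [0.50, 0.28], [1.80, 0.90], [1.70, 0.85]])
y = np.array(["motorcycle", "motorcycle", "other", "other"])
print(knn_predict(X, y, np.array([0.48, 0.31]), k=3))   # -> "motorcycle"
```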
{
"docid": "f5fd1d6f15c9ef06c343378a6f7038a0",
"text": "Wayfinding is part of everyday life. This study concentrates on the development of a conceptual model of human navigation in the U.S. Interstate Highway Network. It proposes three different levels of conceptual understanding that constitute the cognitive map: the Planning Level, the Instructional Level, and the Driver Level. This paper formally defines these three levels and examines the conceptual objects that comprise them. The problem treated here is a simpler version of the open problem of planning and navigating a multi-mode trip. We expect the methods and preliminary results found here for the Interstate system to apply to other systems such as river transportation networks and railroad networks.",
"title": ""
},
{
"docid": "b22136f00469589c984081742c4605d3",
"text": "Convolutional neural network (CNN), which comprises one or more convolutional and pooling layers followed by one or more fully-connected layers, has gained popularity due to its ability to learn fruitful representations from images or speeches, capturing local dependency and slight-distortion invariance. CNN has recently been applied to the problem of activity recognition, where 1D kernels are applied to capture local dependency over time in a series of observations measured at inertial sensors (3-axis accelerometers and gyroscopes). In this paper we present a multi-modal CNN where we use 2D kernels in both convolutional and pooling layers, to capture local dependency over time as well as spatial dependency over sensors. Experiments on benchmark datasets demonstrate the high performance of our multi-modal CNN, compared to several state of the art methods.",
"title": ""
},
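The key idea in the passage above is that a 2-D kernel can span both the time axis and the sensor axis of inertial data. The sketch below is a bare valid-mode 2-D convolution plus ReLU over a toy (time × channel) matrix, not the paper's full multi-modal network; sizes and values are placeholders.

```python
# Hedged sketch: a 2-D kernel slides over a (time x sensor-channel) matrix, so one filter
# mixes information across nearby time steps *and* across sensor axes.
import numpy as np

def conv2d_valid(x, k):
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

T, C = 8, 6                        # 8 time steps, 6 inertial channels (3-axis accel + 3-axis gyro)
signal = np.random.default_rng(1).standard_normal((T, C))
kernel = np.random.default_rng(2).standard_normal((3, 3))   # spans 3 time steps and 3 channels
feature_map = np.maximum(conv2d_valid(signal, kernel), 0.0)  # ReLU
print(feature_map.shape)           # (6, 4)
```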
{
"docid": "7b5b9990bfef9d2baf28030123359923",
"text": "a r t i c l e i n f o a b s t r a c t This review takes an evolutionary and chronological perspective on the development of strategic human resource management (SHRM) literature. We divide this body of work into seven themes that reflect the directions and trends researchers have taken over approximately thirty years of research. During this time the field took shape, developed rich conceptual foundations, and matured into a domain that has substantial influence on research activities in HR and related management disciplines. We trace how the field has evolved to its current state, articulate many of the major findings and contributions, and discuss how we believe it will evolve in the future. This approach contributes to the field of SHRM by synthesizing work in this domain and by highlighting areas of research focus that have received perhaps enough attention, as well as areas of research focus that, while promising, have remained largely unexamined. 1. Introduction Boxall, Purcell, and Wright (2007) distinguish among three major subfields of human resource management (HRM): micro HRM (MHRM), strategic HRM (SHRM), and international HRM (IHRM). Micro HRM covers the subfunctions of HR policy and practice and consists of two main categories: one with managing individuals and small groups (e.g., recruitment, selection, induction, training and development, performance management, and remuneration) and the other with managing work organization and employee voice systems (including union-management relations). Strategic HRM covers the overall HR strategies adopted by business units and companies and tries to measure their impacts on performance. Within this domain both design and execution issues are examined. International HRM covers HRM in companies operating across national boundaries. Since strategic HRM often covers the international context, we will include those international HRM articles that have a strategic focus. While most of the academic literature on SHRM has been published in the last 30 years, the intellectual roots of the field can be traced back to the 1920s in the U.S. (Kaufman, 2001). The concept of labor as a human resource and the strategic view of HRM policy and practice were described and discussed by labor economists and industrial relations scholars of that period, such as John Commons. Progressive companies in the 1920s intentionally formulated and adopted innovative HR practices that represented a strategic approach to the management of labor. A small, but visibly elite group of employers in this time period …",
"title": ""
},
{
"docid": "4f537c9e63bbd967e52f22124afa4480",
"text": "Computer role playing games engage players through interleaved story and open-ended game play. We present an approach to procedurally generating, rendering, and making playable novel games based on a priori unknown story structures. These stories may be authored by humans or by computational story generation systems. Our approach couples player, designer, and algorithm to generate a novel game using preferences for game play style, general design aesthetics, and a novel story structure. Our approach is implemented in Game Forge, a system that uses search-based optimization to find and render a novel game world configuration that supports a sequence of plot points plus play style preferences. Additionally, Game Forge supports execution of the game through reactive control of game world logic and non-player character behavior.",
"title": ""
},
{
"docid": "0846f7d40f5cbbd4c199dfb58c4a4e7d",
"text": "While active learning has drawn broad attention in recent years, there are relatively few studies on stopping criterion for active learning. We here propose a novel model stability based stopping criterion, which considers the potential of each unlabeled examples to change the model once added to the training set. The underlying motivation is that active learning should terminate when the model does not change much by adding remaining examples. Inspired by the widely used stochastic gradient update rule, we use the gradient of the loss at each candidate example to measure its capability to change the classifier. Under the model change rule, we stop active learning when the changing ability of all remaining unlabeled examples is less than a given threshold. We apply the stability-based stopping criterion to two popular classifiers: logistic regression and support vector machines (SVMs). It can be generalized to a wide spectrum of learning models. Substantial experimental results on various UCI benchmark data sets have demonstrated that the proposed approach outperforms state-of-art methods in most cases.",
"title": ""
},
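The passage above stops active learning once no remaining unlabeled example can change the model much, measured through the gradient of the loss. The sketch below uses a common surrogate for logistic regression (the label-expected gradient norm under the model's own predictions); the exact quantity in the paper may differ, and the threshold, weights, and pool are toy values.

```python
# Hedged sketch: the "potential to change the model" of an unlabeled example x is approximated
# by the expected norm of the gradient its loss would contribute, weighting the unknown label
# by the model's own p(y|x). Stop once this is small for every remaining example.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def expected_gradient_norm(w, x):
    p = sigmoid(w @ x)                       # model's probability of the positive class
    g_pos = (p - 1.0) * x                    # gradient if the true label were +1
    g_neg = p * x                            # gradient if the true label were 0
    return p * np.linalg.norm(g_pos) + (1 - p) * np.linalg.norm(g_neg)

def should_stop(w, unlabeled, threshold=0.05):
    return all(expected_gradient_norm(w, x) < threshold for x in unlabeled)

w = np.array([2.0, -1.0])
pool = [np.array([0.1, 0.1]), np.array([0.05, -0.02])]   # examples the model is already sure about
print(should_stop(w, pool, threshold=0.5))
```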
{
"docid": "d6136f26c7b387693a5f017e6e2e679a",
"text": "Automated seizure detection using clinical electroencephalograms is a challenging machine learning problem because the multichannel signal often has an extremely low signal to noise ratio. Events of interest such as seizures are easily confused with signal artifacts (e.g, eye movements) or benign variants (e.g., slowing). Commercially available systems suffer from unacceptably high false alarm rates. Deep learning algorithms that employ high dimensional models have not previously been effective due to the lack of big data resources. In this paper, we use the TUH EEG Seizure Corpus to evaluate a variety of hybrid deep structures including Convolutional Neural Networks and Long Short-Term Memory Networks. We introduce a novel recurrent convolutional architecture that delivers 30% sensitivity at 7 false alarms per 24 hours. We have also evaluated our system on a held-out evaluation set based on the Duke University Seizure Corpus and demonstrate that performance trends are similar to the TUH EEG Seizure Corpus. This is a significant finding because the Duke corpus was collected with different instrumentation and at different hospitals. Our work shows that deep learning architectures that integrate spatial and temporal contexts are critical to achieving state of the art performance and will enable a new generation of clinically-acceptable technology.",
"title": ""
},
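The passage above combines convolutional and recurrent layers for EEG seizure detection. The sketch below is a generic recurrent-convolutional classifier in PyTorch, not the paper's architecture; the channel count, kernel sizes, hidden size, and the 256-sample window are assumptions.

```python
# Hedged sketch of a generic recurrent-convolutional EEG classifier: 1-D convolutions summarize
# short windows of the multichannel signal, an LSTM models longer temporal context, and a
# linear layer scores seizure vs. background.
import torch
import torch.nn as nn

class ConvLSTMDetector(nn.Module):
    def __init__(self, n_channels=22, hidden=64, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        h = self.conv(x)                  # (batch, 64, time/16)
        h = h.transpose(1, 2)             # (batch, time/16, 64) for the LSTM
        out, _ = self.lstm(h)
        return self.head(out[:, -1, :])   # logits from the last time step

logits = ConvLSTMDetector()(torch.randn(8, 22, 256))   # 8 windows of 22-channel EEG
print(logits.shape)                                    # torch.Size([8, 2])
```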
{
"docid": "d39ada44eb3c1c9b5dfa1abd0f1fbc22",
"text": "The ability to computationally predict whether a compound treats a disease would improve the economy and success rate of drug approval. This study describes Project Rephetio to systematically model drug efficacy based on 755 existing treatments. First, we constructed Hetionet (neo4j.het.io), an integrative network encoding knowledge from millions of biomedical studies. Hetionet v1.0 consists of 47,031 nodes of 11 types and 2,250,197 relationships of 24 types. Data were integrated from 29 public resources to connect compounds, diseases, genes, anatomies, pathways, biological processes, molecular functions, cellular components, pharmacologic classes, side effects, and symptoms. Next, we identified network patterns that distinguish treatments from non-treatments. Then, we predicted the probability of treatment for 209,168 compound-disease pairs (het.io/repurpose). Our predictions validated on two external sets of treatment and provided pharmacological insights on epilepsy, suggesting they will help prioritize drug repurposing candidates. This study was entirely open and received realtime feedback from 40 community members.",
"title": ""
},
{
"docid": "160ab7f4c7be89ae2d56a7094e19d1a3",
"text": "These days, microarray gene expression data are playing an essential role in cancer classifications. However, due to the availability of small number of effective samples compared to the large number of genes in microarray data, many computational methods have failed to identify a small subset of important genes. Therefore, it is a challenging task to identify small number of disease-specific significant genes related for precise diagnosis of cancer sub classes. In this paper, particle swarm optimization (PSO) method along with adaptive K-nearest neighborhood (KNN) based gene selection technique are proposed to distinguish a small subset of useful genes that are sufficient for the desired classification purpose. A proper value of K would help to form the appropriate numbers of neighborhood to be explored and hence to classify the dataset accurately. Thus, a heuristic for selecting the optimal values of K efficiently, guided by the classification accuracy is also proposed. This proposed technique of finding minimum possible meaningful set of genes is applied on three benchmark microarray datasets, namely the small round blue cell tumor (SRBCT) data, the acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML) data and the mixed-lineage leukemia (MLL) data. Results demonstrate the usefulness of the proposed method in terms of classification accuracy on blind test samples, number of informative genes and computing time. Further, the usefulness and universal characteristics of the identified genes are reconfirmed by using different classifiers, such as support vector machine (SVM). 2014 Published by Elsevier Ltd.",
"title": ""
},
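The passage above couples particle swarm optimization with a KNN classifier to pick a small subset of informative genes. The sketch below is a compact continuous PSO whose fitness is leave-one-out 1-NN accuracy on the currently selected features; the dataset, swarm parameters, and the 0.5 selection threshold are all illustrative, and the paper's heuristic for adapting K is not reproduced.

```python
# Hedged sketch of PSO-driven feature (gene) selection: particles carry real-valued scores per
# feature, features scoring above 0.5 are "selected", and fitness is 1-NN leave-one-out accuracy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 10))
y = (X[:, 2] + X[:, 7] > 0).astype(int)          # only features 2 and 7 are informative

def loo_1nn_accuracy(Xs, y):
    correct = 0
    for i in range(len(y)):
        d = np.linalg.norm(Xs - Xs[i], axis=1)
        d[i] = np.inf                             # exclude the sample itself
        correct += y[np.argmin(d)] == y[i]
    return correct / len(y)

def fitness(position):
    mask = position > 0.5
    return loo_1nn_accuracy(X[:, mask], y) if mask.any() else 0.0

n_particles, n_features, iters = 15, X.shape[1], 30
pos = rng.uniform(0, 1, (n_particles, n_features))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print("selected genes:", np.where(gbest > 0.5)[0], "accuracy:", fitness(gbest))
```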
{
"docid": "dda8427a6630411fc11e6d95dbff08b9",
"text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.",
"title": ""
},
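The passage above adapts the skip-gram model to learn concept vectors from text and a concept graph. The sketch below shows only the underlying skip-gram-with-negative-sampling update on toy (concept, context) pairs; the vocabulary, pairs, and hyper-parameters are placeholders, and the Wikipedia/Probase integration is not modeled.

```python
# Hedged sketch of a skip-gram-with-negative-sampling update over (concept, context) pairs,
# which could come from plain text co-occurrences or from concept-graph edges.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["microsoft", "company", "seattle", "city", "banana"]
idx = {w: i for i, w in enumerate(vocab)}
dim, lr, n_neg = 16, 0.05, 2
W_in = 0.01 * rng.standard_normal((len(vocab), dim))    # concept vectors
W_out = 0.01 * rng.standard_normal((len(vocab), dim))   # context vectors

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgns_update(center, context):
    c, o = idx[center], idx[context]
    negatives = rng.choice(len(vocab), size=n_neg)       # crude negative sampling
    targets = np.concatenate(([o], negatives))
    labels = np.array([1.0] + [0.0] * n_neg)
    scores = sigmoid(W_out[targets] @ W_in[c])           # predicted "real pair" probabilities
    err = scores - labels
    grad_in = err @ W_out[targets]                       # use pre-update context vectors
    W_out[targets] -= lr * err[:, None] * W_in[c]
    W_in[c] -= lr * grad_in

pairs = [("microsoft", "company"), ("seattle", "city"), ("microsoft", "seattle")]
for _ in range(200):
    for center, context in pairs:
        sgns_update(center, context)
# score for an observed pair tends toward 1 as training reinforces it
print(sigmoid(W_in[idx["microsoft"]] @ W_out[idx["seattle"]]))
```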
{
"docid": "ec7f5b4596ae6e2c24856d16e4fdc193",
"text": "This prospective, randomized study evaluated continuous-flow cold therapy for postoperative pain in outpatient arthroscopic anterior cruciate ligament (ACL) reconstructions. In group 1, cold therapy was constant for 3 days then as needed in days 4 through 7. Group 2 had no cold therapy. Evaluations and diaries were kept at 1, 2, and 8 hours after surgery, and then daily. Pain was assessed using the VAS and Likert scales. There were 51 cold and 49 noncold patients included. Continuous passive movement (CPM) use averaged 54 hours for cold and 41 hours for noncold groups (P=.003). Prone hangs were done for 192 minutes in the cold group and 151 minutes in the noncold group. Motion at 1 week averaged 5/88 for the cold group and 5/79 the noncold group. The noncold group average visual analog scale (VAS) pain and Likert pain scores were always greater than the cold group. The noncold group average Vicodin use (Knoll, Mt. Olive, NJ) was always greater than the cold group use (P=.001). Continuous-flow cold therapy lowered VAS and Likert scores, reduced Vicodin use, increased prone hangs, CPM, and knee flexion. Continuous-flow cold therapy is safe and effective for outpatient ACL reconstruction reducing pain medication requirements.",
"title": ""
},
{
"docid": "496e57bd6a6d06123ae886e0d6753783",
"text": "With the enormous growth of digital content in internet, various types of online reviews such as product and movie reviews present a wealth of subjective information that can be very helpful for potential users. Sentiment analysis aims to use automated tools to detect subjective information from reviews. Up to now as there are few researches conducted on feature selection in sentiment analysis, there are very rare works for Persian sentiment analysis. This paper considers the problem of sentiment classification using different feature selection methods for online customer reviews in Persian language. Three of the challenges of Persian text are using of a wide variety of declensional suffixes, different word spacing and many informal or colloquial words. In this paper we study these challenges by proposing a model for sentiment classification of Persian review documents. The proposed model is based on stemming and feature selection and is employed Naive Bayes algorithm for classification. We evaluate the performance of the model on a collection of cellphone reviews, where the results show the effectiveness of the proposed approaches.",
"title": ""
},
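The passage above classifies Persian reviews with Naive Bayes after stemming and feature selection. The sketch below implements only the multinomial Naive Bayes step with Laplace smoothing on a toy English bag-of-words corpus; the Persian-specific preprocessing and the paper's feature-selection methods are omitted.

```python
# Hedged sketch of multinomial Naive Bayes over bag-of-words counts with Laplace smoothing.
import numpy as np
from collections import Counter

docs = [("great phone love it", 1), ("excellent battery great screen", 1),
        ("terrible battery bad screen", 0), ("bad phone hate it", 0)]
vocab = sorted({w for text, _ in docs for w in text.split()})
col = {w: j for j, w in enumerate(vocab)}

def vectorize(text):
    v = np.zeros(len(vocab))
    for w, c in Counter(text.split()).items():
        if w in col:
            v[col[w]] = c
    return v

X = np.array([vectorize(t) for t, _ in docs])
y = np.array([label for _, label in docs])

def train_nb(X, y, alpha=1.0):
    priors, likelihoods = {}, {}
    for cls in np.unique(y):
        Xc = X[y == cls]
        priors[cls] = np.log(len(Xc) / len(X))
        counts = Xc.sum(axis=0) + alpha                  # Laplace smoothing
        likelihoods[cls] = np.log(counts / counts.sum())
    return priors, likelihoods

def predict(text, priors, likelihoods):
    v = vectorize(text)
    scores = {c: priors[c] + v @ likelihoods[c] for c in priors}
    return max(scores, key=scores.get)

priors, likelihoods = train_nb(X, y)
print(predict("great screen", priors, likelihoods))      # -> 1 (positive)
```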
{
"docid": "7bac448a5754c168c897125a4f080548",
"text": "BACKGROUND\nOne of the main methods for evaluation of fetal well-being is analysis of Doppler flow velocity waveform of fetal vessels. Evaluation of Doppler wave of the middle cerebral artery can predict most of the at-risk fetuses in high-risk pregnancies. In this study, we tried to determine the normal ranges and their trends during pregnancy of Doppler flow velocity indices (resistive index, pulsatility index, systolic-to-diastolic ratio, and peak systolic velocity) of middle cerebral artery in 20 - 40 weeks normal pregnancies in Iranians.\n\n\nMETHODS\nIn this cross-sectional study, 1037 women with normal pregnancy and gestational age of 20 to 40 weeks were investigated for fetal middle cerebral artery Doppler examination.\n\n\nRESULTS\nResistive index, pulsatility index, and systolic-to-diastolic ratio values of middle cerebral artery decreased in a parabolic pattern while the peak systolic velocity value increased linearly with progression of the gestational age. These changes were statistically significant (P<0.001 for all four variables) and were more characteristic during late weeks of pregnancy. The mean fetal heart rate was also significantly (P<0.001) reduced in correlation with the gestational age.\n\n\nCONCLUSION\nDoppler waveform indices of fetal middle cerebral artery are useful means for determining fetal well-being. Herewith, the normal ranges of Doppler waveform indices for an Iranian population are presented.",
"title": ""
},
{
"docid": "944d467bb6da4991127b76310fec585b",
"text": "One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publically available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow both the evaluation of tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publically.",
"title": ""
},
{
"docid": "3f220d8863302719d3cf69b7d99f8c4e",
"text": "The numerical representation precision required by the computations performed by Deep Neural Networks (DNNs) varies across networks and between layers of a same network. This observation motivates a precision-based approach to acceleration which takes into account both the computational structure and the required numerical precision representation. This work presents <italic>Stripes</italic> (<italic>STR</italic>), a hardware accelerator that uses bit-serial computations to improve energy efficiency and performance. Experimental measurements over a set of state-of-the-art DNNs for image classification show that <italic>STR</italic> improves performance over a state-of-the-art accelerator from 1.35<inline-formula><tex-math notation=\"LaTeX\">$\\times$</tex-math><alternatives> <inline-graphic xlink:href=\"judd-ieq1-2597140.gif\"/></alternatives></inline-formula> to 5.33<inline-formula> <tex-math notation=\"LaTeX\">$\\times$</tex-math><alternatives><inline-graphic xlink:href=\"judd-ieq2-2597140.gif\"/> </alternatives></inline-formula> and by 2.24<inline-formula><tex-math notation=\"LaTeX\">$\\times$</tex-math> <alternatives><inline-graphic xlink:href=\"judd-ieq3-2597140.gif\"/></alternatives></inline-formula> on average. <italic>STR</italic>’s area and power overhead are estimated at 5 percent and 12 percent respectively. <italic> STR</italic> is 2.00<inline-formula><tex-math notation=\"LaTeX\">$\\times$</tex-math><alternatives> <inline-graphic xlink:href=\"judd-ieq4-2597140.gif\"/></alternatives></inline-formula> more energy efficient than the baseline.",
"title": ""
},
{
"docid": "c613a7c8bca5b0c198d2a1885ecb0efb",
"text": "Botnets have traditionally been seen as a threat to personal computers; however, the recent shift to mobile platforms resulted in a wave of new botnets. Due to its popularity, Android mobile Operating System became the most targeted platform. In spite of rising numbers, there is a significant gap in understanding the nature of mobile botnets and their communication characteristics. In this paper, we address this gap and provide a deep analysis of Command and Control (C&C) and built-in URLs of Android botnets detected since the first appearance of the Android platform. By combining both static and dynamic analyses with visualization, we uncover the relationships between the majority of the analyzed botnet families and offer an insight into each malicious infrastructure. As a part of this study we compile and offer to the research community a dataset containing 1929 samples representing 14 Android botnet families.",
"title": ""
},
{
"docid": "a091e8885bd30e58f6de7d14e8170199",
"text": "This paper represents the design and implementation of an indoor based navigation system for visually impaired people using a path finding algorithm and a wearable cap. This development of the navigation system consists of two modules: a Wearable part and a schematic of the area where the navigation system works by guiding the user. The wearable segment consists of a cap designed with IR receivers, an Arduino Nano processor, a headphone and an ultrasonic sensor. The schematic segment plans for the movement directions inside a room by dividing the room area into cells with a predefined matrix containing location information. For navigating the user, sixteen IR transmitters which continuously monitor the user position are placed at equal interval in the XY (8 in X-plane and 8 in Y-plane) directions of the indoor environment. A Braille keypad is used by the user where he gave the cell number for determining destination position. A path finding algorithm has been developed for determining the position of the blind person and guide him/her to his/her destination. The developed algorithm detects the position of the user by receiving continuous data from transmitter and guide the user to his/her destination by voice command. The ultrasonic sensor mounted on the cap detects the obstacles along the pathway of the visually impaired person. This proposed navigation system does not require any complex infrastructure design or the necessity of holding any extra assistive device by the user (i.e. augmented cane, smartphone, cameras). In the proposed design, prerecorded voice command will provide movement guideline to every edge of the indoor environment according to the user's destination choice. This makes this navigation system relatively simple and user friendly for those who are not much familiar with the most advanced technology and people with physical disabilities. Moreover, this proposed navigation system does not need GPS or any telecommunication networks which makes it suitable for use in rural areas where there is no telecommunication network coverage. In conclusion, the proposed system is relatively cheaper to implement in comparison to other existing navigation system, which will contribute to the betterment of the visually impaired people's lifestyle of developing and under developed countries.",
"title": ""
}
] |
scidocsrr
|
98199af516cd71aed3d6f88f3d9e743f
|
Three-Port Series-Resonant DC–DC Converter to Interface Renewable Energy Sources With Bidirectional Load and Energy Storage Ports
|
[
{
"docid": "8b70670fa152dbd5185e80136983ff12",
"text": "This letter proposes a novel converter topology that interfaces three power ports: a source, a bidirectional storage port, and an isolated load port. The proposed converter is based on a modified version of the isolated half-bridge converter topology that utilizes three basic modes of operation within a constant-frequency switching cycle to provide two independent control variables. This allows tight control over two of the converter ports, while the third port provides the power balance in the system. The switching sequence ensures a clamping path for the energy of the leakage inductance of the transformer at all times. This energy is further utilized to achieve zero-voltage switching for all primary switches for a wide range of source and load conditions. Basic steady-state analysis of the proposed converter is included, together with a suggested structure for feedback control. Key experimental results are presented that validate the converter operation and confirm its ability to achieve tight independent control over two power processing paths. This topology promises significant savings in component count and losses for power-harvesting systems. The proposed topology and control is particularly relevant to battery-backed power systems sourced by solar or fuel cells",
"title": ""
},
{
"docid": "3b8033d8d68e5e9889df190d93800f85",
"text": "A three-port triple-half-bridge bidirectional dc-dc converter topology is proposed in this paper. The topology comprises a high-frequency three-winding transformer and three half-bridges, one of which is a boost half-bridge interfacing a power port with a wide operating voltage. The three half-bridges are coupled by the transformer, thereby providing galvanic isolation for all the power ports. The converter is controlled by phase shift, which achieves the primary power flow control, in combination with pulsewidth modulation (PWM). Because of the particular structure of the boost half-bridge, voltage variations at the port can be compensated for by operating the boost half-bridge, together with the other two half-bridges, at an appropriate duty cycle to keep a constant voltage across the half-bridge. The resulting waveforms applied to the transformer windings are asymmetrical due to the automatic volt-seconds balancing of the half-bridges. With the PWM control it is possible to reduce the rms loss and to extend the zero-voltage switching operating range to the entire phase shift region. A fuel cell and supercapacitor generation system is presented as an embodiment of the proposed multiport topology. The theoretical considerations are verified by simulation and with experimental results from a 1 kW prototype.",
"title": ""
},
{
"docid": "149d9a316e4c5df0c9300d26da685bc6",
"text": "Multiport dc-dc converters are particularly interesting for sustainable energy generation systems where diverse sources and storage elements are to be integrated. This paper presents a zero-voltage switching (ZVS) three-port bidirectional dc-dc converter. A simple and effective duty ratio control method is proposed to extend the ZVS operating range when input voltages vary widely. Soft-switching conditions over the full operating range are achievable by adjusting the duty ratio of the voltage applied to the transformer winding in response to the dc voltage variations at the port. Keeping the volt-second product (half-cycle voltage-time integral) equal for all the windings leads to ZVS conditions over the entire operating range. A detailed analysis is provided for both the two-port and the three-port converters. Furthermore, for the three-port converter a dual-PI-loop based control strategy is proposed to achieve constant output voltage, power flow management, and soft-switching. The three-port converter is implemented and tested for a fuel cell and supercapacitor system.",
"title": ""
}
] |
[
{
"docid": "2801a5a26d532fc33543744ea89743f1",
"text": "Microalgae have received much interest as a biofuel feedstock in response to the uprising energy crisis, climate change and depletion of natural sources. Development of microalgal biofuels from microalgae does not satisfy the economic feasibility of overwhelming capital investments and operations. Hence, high-value co-products have been produced through the extraction of a fraction of algae to improve the economics of a microalgae biorefinery. Examples of these high-value products are pigments, proteins, lipids, carbohydrates, vitamins and anti-oxidants, with applications in cosmetics, nutritional and pharmaceuticals industries. To promote the sustainability of this process, an innovative microalgae biorefinery structure is implemented through the production of multiple products in the form of high value products and biofuel. This review presents the current challenges in the extraction of high value products from microalgae and its integration in the biorefinery. The economic potential assessment of microalgae biorefinery was evaluated to highlight the feasibility of the process.",
"title": ""
},
{
"docid": "39debcb0aa41eec73ff63a4e774f36fd",
"text": "Automatically segmenting unstructured text strings into structured records is necessary for importing the information contained in legacy sources and text collections into a data warehouse for subsequent querying, analysis, mining and integration. In this paper, we mine tables present in data warehouses and relational databases to develop an automatic segmentation system. Thus, we overcome limitations of existing supervised text segmentation approaches, which require comprehensive manually labeled training data. Our segmentation system is robust, accurate, and efficient, and requires no additional manual effort. Thorough evaluation on real datasets demonstrates the robustness and accuracy of our system, with segmentation accuracy exceeding state of the art supervised approaches.",
"title": ""
},
{
"docid": "31122e142e02b7e3b99c52c8f257a92e",
"text": "Impervious surface has been recognized as a key indicator in assessing urban environments. However, accurate impervious surface extraction is still a challenge. Effectiveness of impervious surface in urban land-use classification has not been well addressed. This paper explored extraction of impervious surface information from Landsat Enhanced Thematic Mapper data based on the integration of fraction images from linear spectral mixture analysis and land surface temperature. A new approach for urban land-use classification, based on the combined use of impervious surface and population density, was developed. Five urban land-use classes (i.e., low-, medium-, high-, and very-high-intensity residential areas, and commercial/industrial/transportation uses) were developed in the city of Indianapolis, Indiana, USA. Results showed that the integration of fraction images and surface temperature provided substantially improved impervious surface image. Accuracy assessment indicated that the rootmean-square error and system error yielded 9.22% and 5.68%, respectively, for the impervious surface image. The overall classification accuracy of 83.78% for five urban land-use classes was obtained. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "af0cfa757d5e419f4e0d00da20e2db8a",
"text": "Vertebrate CpG islands (CGIs) are short interspersed DNA sequences that deviate significantly from the average genomic pattern by being GC-rich, CpG-rich, and predominantly nonmethylated. Most, perhaps all, CGIs are sites of transcription initiation, including thousands that are remote from currently annotated promoters. Shared DNA sequence features adapt CGIs for promoter function by destabilizing nucleosomes and attracting proteins that create a transcriptionally permissive chromatin state. Silencing of CGI promoters is achieved through dense CpG methylation or polycomb recruitment, again using their distinctive DNA sequence composition. CGIs are therefore generically equipped to influence local chromatin structure and simplify regulation of gene activity.",
"title": ""
},
{
"docid": "e5b7402470ad6198b4c1ddb9d9878ea9",
"text": "Chit-chat models are known to have several problems: they lack specificity, do not display a consistent personality and are often not very captivating. In this work we present the task of making chit-chat more engaging by conditioning on profile information. We collect data and train models to (i) condition on their given profile information; and (ii) information about the person they are talking to, resulting in improved dialogues, as measured by next utterance prediction. Since (ii) is initially unknown, our model is trained to engage its partner with personal topics, and we show the resulting dialogue can be used to predict profile information about the interlocutors.",
"title": ""
},
{
"docid": "462813402246b53bb4af46ca3b407195",
"text": "We present the performance of a patient with acquired dysgraphia, DS, who has intact oral spelling (100% correct) but severely impaired written spelling (7% correct). Her errors consisted entirely of well-formed letter substitutions. This striking dissociation is further characterized by consistent preservation of orthographic, as opposed to phonological, length in her written output. This pattern of performance indicates that DS has intact graphemic representations, and that her errors are due to a deficit in letter shape assignment. We further interpret the occurrence of a small percentage of lexical errors in her written responses and a significant effect of letter frequencies and transitional probabilities on the pattern of letter substitutions as the result of a repair mechanism that locally constrains DS' written output.",
"title": ""
},
{
"docid": "2dfad4f4b0d69085341dfb64d6b37d54",
"text": "Modern applications and progress in deep learning research have created renewed interest for generative models of text and of images. However, even today it is unclear what objective functions one should use to train and evaluate these models. In this paper we present two contributions. Firstly, we present a critique of scheduled sampling, a state-of-the-art training method that contributed to the winning entry to the MSCOCO image captioning benchmark in 2015. Here we show that despite this impressive empirical performance, the objective function underlying scheduled sampling is improper and leads to an inconsistent learning algorithm. Secondly, we revisit the problems that scheduled sampling was meant to address, and present an alternative interpretation. We argue that maximum likelihood is an inappropriate training objective when the end-goal is to generate natural-looking samples. We go on to derive an ideal objective function to use in this situation instead. We introduce a generalisation of adversarial training, and show how such method can interpolate between maximum likelihood training and our ideal training objective. To our knowledge this is the first theoretical analysis that explains why adversarial training tends to produce samples with higher perceived quality.",
"title": ""
},
{
"docid": "c2edf373d60d4165afec75d70117530d",
"text": "In her book Introducing Arguments, Linda Pylkkänen distinguishes between the core and noncore arguments of verbs by means of a detailed discussion of applicative and causative constructions. The term applicative refers to structures that in more general linguistic terms are defined as ditransitive, i.e. when both a direct and an indirect object are associated with the verb, as exemplified in (1) (Pylkkänen, 2008: 13):",
"title": ""
},
{
"docid": "72c164c281e98386a054a25677c21065",
"text": "The rapid digitalisation of the hospitality industry over recent years has brought forth many new points of attack for consideration. The hasty implementation of these systems has created a reality in which businesses are using the technical solutions, but employees have very little awareness when it comes to the threats and implications that they might present. This gap in awareness is further compounded by the existence of preestablished, often rigid, cultures that drive how hospitality businesses operate. Potential attackers are recognising this and the last two years have seen a huge increase in cyber-attacks within the sector.Attempts at addressing the increasing threats have taken the form of technical solutions such as encryption, access control, CCTV, etc. However, a high majority of security breaches can be directly attributed to human error. It is therefore necessary that measures for addressing the rising trend of cyber-attacks go beyond just providing technical solutions and make provision for educating employees about how to address the human elements of security. Inculcating security awareness amongst hospitality employees will provide a foundation upon which a culture of security can be created to promote the seamless and secured interaction of hotel users and technology.One way that the hospitality industry has tried to solve the awareness issue is through their current paper-based training. This is unengaging, expensive and presents limited ways to deploy, monitor and evaluate the impact and effectiveness of the content. This leads to cycles of constant training, making it very hard to initiate awareness, particularly within those on minimum waged, short-term job roles.This paper presents a structured approach for eliciting industry requirement for developing and implementing an immersive Cyber Security Awareness learning platform. It used a series of over 40 interviews and threat analysis of the hospitality industry to identify the requirements for designing and implementing cyber security program which encourage engagement through a cycle of reward and recognition. In particular, the need for the use of gamification elements to provide an engaging but gentle way of educating those with little or no desire to learn was identified and implemented. Also presented is a method for guiding and monitoring the impact of their employee’s progress through the learning management system whilst monitoring the levels of engagement and positive impact the training is having on the business.",
"title": ""
},
{
"docid": "2f793fb05d0dbe43f20f2b73119aa402",
"text": "Dark Web analysis is an important aspect in field of counter terrorism (CT). In the present scenario terrorist attacks are biggest problem for the mankind and whole world is under constant threat from these well-planned, sophisticated and coordinated terrorist operations. Terrorists anonymously set up various web sites embedded in the public Internet, exchanging ideology, spreading propaganda, and recruiting new members. Dark web is a hotspot where terrorists are communicating and spreading their messages. Now every country is focusing for CT. Dark web analysis can be an efficient proactive method for CT by detecting and avoiding terrorist threats/attacks. In this paper we have proposed dark web analysis model that analyzes dark web forums for CT and connecting the dots to prevent the country from terrorist attacks.",
"title": ""
},
{
"docid": "21555c1ab91642c691a711f7b5868cda",
"text": "Do men die young and sick, or do women live long and healthy? By trying to explain the sexual dimorphism in life expectancy, both biological and environmental aspects are presently being addressed. Besides age-related changes, both the immune and the endocrine system exhibit significant sex-specific differences. This review deals with the aging immune system and its interplay with sex steroid hormones. Together, they impact on the etiopathology of many infectious diseases, which are still the major causes of morbidity and mortality in people at old age. Among men, susceptibilities toward many infectious diseases and the corresponding mortality rates are higher. Responses to various types of vaccination are often higher among women thereby also mounting stronger humoral responses. Women appear immune-privileged. The major sex steroid hormones exhibit opposing effects on cells of both the adaptive and the innate immune system: estradiol being mainly enhancing, testosterone by and large suppressive. However, levels of sex hormones change with age. At menopause transition, dropping estradiol potentially enhances immunosenescence effects posing postmenopausal women at additional, yet specific risks. Conclusively during aging, interventions, which distinctively consider the changing level of individual hormones, shall provide potent options in maintaining optimal immune functions.",
"title": ""
},
{
"docid": "d2c8a3fd1049713d478fe27bd8f8598b",
"text": "In this paper, higher-order correlation clustering (HOCC) is used for text line detection in natural images. We treat text line detection as a graph partitioning problem, where each vertex is represented by a Maximally Stable Extremal Region (MSER). First, weak hypothesises are proposed by coarsely grouping MSERs based on their spatial alignment and appearance consistency. Then, higher-order correlation clustering (HOCC) is used to partition the MSERs into text line candidates, using the hypotheses as soft constraints to enforce long range interactions. We further propose a regularization method to solve the Semidefinite Programming problem in the inference. Finally we use a simple texton-based texture classifier to filter out the non-text areas. This framework allows us to naturally handle multiple orientations, languages and fonts. Experiments show that our approach achieves competitive performance compared to the state of the art.",
"title": ""
},
{
"docid": "ff27912cfef17e66266bfcd013a874ee",
"text": "The purpose of this note is to describe a useful lesson we learned on authentication protocol design. In a recent article [9], we presented a simple authentication protocol to illustrate the concept of a trusted server. The protocol has a flaw, which was brought to our attention by Mart~n Abadi of DEC. In what follows, we first describe the protocol and its flaw, and how the flaw-was introduced in the process of deriving the protocol from its correct full information version. We then introduce a principle, called the Principle of Full Information, and explain how its use could have prevented the protocol flaw. We believe the Principle of Full Information is a useful authentication protocol design principle, and advocate its use. Lastly, we present several heuristics for simplifying full information protocols and illustrate their application to a mutual authentication protocol.",
"title": ""
},
{
"docid": "54380a4e0ab433be24d100db52e6bb55",
"text": "Why do some new technologies emerge and quickly supplant incumbent technologies while others take years or decades to take off? We explore this question by presenting a framework that considers both the focal competing technologies as well as the ecosystems in which they are embedded. Within our framework, each episode of technology transition is characterized by the ecosystem emergence challenge that confronts the new technology and the ecosystem extension opportunity that is available to the old technology. We identify four qualitatively distinct regimes with clear predictions for the pace of substitution. Evidence from 10 episodes of technology transitions in the semiconductor lithography equipment industry from 1972 to 2009 offers strong support for our framework. We discuss the implication of our approach for firm strategy. Disciplines Management Sciences and Quantitative Methods This journal article is available at ScholarlyCommons: https://repository.upenn.edu/mgmt_papers/179 Innovation Ecosystems and the Pace of Substitution: Re-examining Technology S-curves Ron Adner Tuck School of Business, Dartmouth College Strategy and Management 100 Tuck Hall Hanover, NH 03755, USA Tel: 1 603 646 9185 Email:\t\r ron.adner@dartmouth.edu Rahul Kapoor The Wharton School University of Pennsylvania Philadelphia, PA-19104 Tel : 1 215 898 6458 Email: kapoorr@wharton.upenn.edu",
"title": ""
},
{
"docid": "709021b1b7b7ddd073cac22abf26cf36",
"text": "A video from a moving camera produces different number of observations of different scene areas. We can construct an attention map of the scene by bringing the frames to a common reference and counting the number of frames that observed each scene point. Different representations can be constructed from this. The base of the attention map gives the scene mosaic. Super-resolved images of parts of the scene can be obtained using a subset of observations or video frames. We can combine mosaicing with super-resolution by using all observations, but the magnification factor will vary across the scene based on the attention received. The height of the attention map indicates the amount of super-resolution for that scene point. We modify the traditional super-resolution framework to generate a varying resolution image for panning cameras in this paper. The varying resolution image uses all useful data available in a video. We introduce the concept of attention-based super-resolution and give the modified framework for it. We also show its applicability on a few indoor and outdoor videos.",
"title": ""
},
{
"docid": "060101cf53a576336e27512431c4c4fc",
"text": "The aim of this chapter is to give an overview of domain adaptation and transfer learning with a specific view to visual applications. After a general motivation, we first position domain adaptation in the more general transfer learning problem. Second, we try to address and analyze briefly the state-of-the-art methods for different types of scenarios, first describing the historical shallow methods, addressing both the homogeneous and heterogeneous domain adaptation methods. Third, we discuss the effect of the success of deep convolutional architectures which led to the new type of domain adaptation methods that integrate the adaptation within the deep architecture. Fourth, we review DA methods that go beyond image categorization, such as object detection, image segmentation, video analyses or learning visual attributes. We conclude the chapter with a section where we relate domain adaptation to other machine learning solutions.",
"title": ""
},
{
"docid": "9c452434ad1c25d0fbe71138b6c39c4b",
"text": "Dual control frameworks for systems subject to uncertainties aim at simultaneously learning the unknown parameters while controlling the system dynamics. We propose a robust dual model predictive control algorithm for systems with bounded uncertainty with application to soft landing control. The algorithm exploits a robust control invariant set to guarantee constraint enforcement in spite of the uncertainty, and a constrained estimation algorithm to guarantee admissible parameter estimates. The impact of the control input on parameter learning is accounted for by including in the cost function a reference input, which is designed online to provide persistent excitation. The reference input design problem is non-convex, and here is solved by a sequence of relaxed convex problems. The results of the proposed method in a soft-landing control application in transportation systems are shown.",
"title": ""
},
{
"docid": "3e80dc7319f1241e96db42033c16f6b4",
"text": "Automatic expert assignment is a common problem encountered in both industry and academia. For example, for conference program chairs and journal editors, in order to collect \"good\" judgments for a paper, it is necessary for them to assign the paper to the most appropriate reviewers. Choosing appropriate reviewers of course includes a number of considerations such as expertise and authority, but also diversity and avoiding conflicts. In this paper, we explore the expert retrieval problem and implement an automatic paper-reviewer recommendation system that considers aspects of expertise, authority, and diversity. In particular, a graph is first constructed on the possible reviewers and the query paper, incorporating expertise and authority information. Then a Random Walk with Restart (RWR) [1] model is employed on the graph with a sparsity constraint, incorporating diversity information. Extensive experiments on two reviewer recommendation benchmark datasets show that the proposed method obtains performance gains over state-of-the-art reviewer recommendation systems in terms of expertise, authority, diversity, and, most importantly, relevance as judged by human experts.",
"title": ""
}
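The reviewer-recommendation record above is built on Random Walk with Restart (RWR) over a reviewer/paper graph. The sketch below shows only the plain RWR iteration on a small adjacency matrix; the sparsity constraint for diversity and the construction of the expertise/authority graph are not reproduced, and the example graph is made up.

```python
import numpy as np

def random_walk_with_restart(adj, restart_idx, c=0.15, tol=1e-9, max_iter=1000):
    """Plain RWR: r = (1 - c) * P^T r + c * e, restarting at `restart_idx`."""
    n = adj.shape[0]
    # Row-normalize so each node's out-links sum to one; P.T @ r propagates scores.
    P = adj / adj.sum(axis=1, keepdims=True)
    e = np.zeros(n)
    e[restart_idx] = 1.0
    r = e.copy()
    for _ in range(max_iter):
        r_new = (1 - c) * P.T @ r + c * e
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Toy graph: node 0 is the query paper, nodes 1-3 are candidate reviewers.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
scores = random_walk_with_restart(adj, restart_idx=0)
print(scores)   # higher score means closer to the query paper in the graph
```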
] |
scidocsrr
|
2b6f51a00468b699236dbf09b625d81a
|
MLC Toolbox: A MATLAB/OCTAVE Library for Multi-Label Classification
|
[
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
},
{
"docid": "d6d55f2f3c29605835305d3cc72a34ad",
"text": "Most classification problems associate a single class to each example or instance. However, there are many classification tasks where each instance can be associated with one or more classes. This group of problems represents an area known as multi-label classification. One typical example of multi-label classification problems is the classification of documents, where each document can be assigned to more than one class. This tutorial presents the most frequently used techniques to deal with these problems in a pedagogical manner, with examples illustrating the main techniques and proposing a taxonomy of multi-label techniques that highlights the similarities and differences between these techniques.",
"title": ""
},
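The tutorial record above surveys standard multi-label techniques; the most common baseline it covers is binary relevance, i.e., one independent binary classifier per label. A minimal scikit-learn sketch of that baseline is shown below; the synthetic data and the choice of logistic regression are illustrative assumptions only.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.datasets import make_multilabel_classification

# Synthetic multi-label data: each sample can carry several of 5 labels.
X, Y = make_multilabel_classification(n_samples=200, n_features=20,
                                       n_classes=5, random_state=0)

# Binary relevance: one logistic-regression classifier per label.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, Y)

Y_pred = clf.predict(X[:3])
print(Y_pred)   # rows are samples, columns are the 5 binary labels
```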
{
"docid": "fbf57d773bcdd8096e77246b3f785a96",
"text": "The explosion of online content has made the management of such content non-trivial. Web-related tasks such as web page categorization, news filtering, query categorization, tag recommendation, etc. often involve the construction of multi-label categorization systems on a large scale. Existing multi-label classification methods either do not scale or have unsatisfactory performance. In this work, we propose MetaLabeler to automatically determine the relevant set of labels for each instance without intensive human involvement or expensive cross-validation. Extensive experiments conducted on benchmark data show that the MetaLabeler tends to outperform existing methods. Moreover, MetaLabeler scales to millions of multi-labeled instances and can be deployed easily. This enables us to apply the MetaLabeler to a large scale query categorization problem in Yahoo!, yielding a significant improvement in performance.",
"title": ""
}
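MetaLabeler, as described above, learns a meta-model that predicts how many labels each instance should receive and then keeps the top-scoring labels. The sketch below is one plausible reading of that idea assembled from scikit-learn pieces; the exact feature construction and models used in the paper are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.datasets import make_multilabel_classification

X, Y = make_multilabel_classification(n_samples=300, n_features=25,
                                       n_classes=6, random_state=1)

# 1) Base scorer: one-vs-rest model giving a score per label.
scorer = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

# 2) Meta-model: predict the number of relevant labels for each instance.
label_counts = Y.sum(axis=1)
meta = LinearRegression().fit(X, label_counts)

# 3) Prediction: keep the top-k labels, where k comes from the meta-model.
def predict_metalabeler(x_row):
    k = int(np.clip(np.round(meta.predict(x_row.reshape(1, -1))[0]), 1, Y.shape[1]))
    s = scorer.decision_function(x_row.reshape(1, -1)).ravel()
    top = np.argsort(s)[::-1][:k]
    y_hat = np.zeros(Y.shape[1], dtype=int)
    y_hat[top] = 1
    return y_hat

print(predict_metalabeler(X[0]), "true:", Y[0])
```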
] |
[
{
"docid": "85cabd8a0c19f5db993edd34ded95d06",
"text": "We study the problem of generating source code in a strongly typed, Java-like programming language, given a label (for example a set of API calls or types) carrying a small amount of information about the code that is desired. The generated programs are expected to respect a “realistic” relationship between programs and labels, as exemplified by a corpus of labeled programs available during training. Two challenges in such conditional program generation are that the generated programs must satisfy a rich set of syntactic and semantic constraints, and that source code contains many low-level features that impede learning. We address these problems by training a neural generator not on code but on program sketches, or models of program syntax that abstract out names and operations that do not generalize across programs. During generation, we infer a posterior distribution over sketches, then concretize samples from this distribution into type-safe programs using combinatorial techniques. We implement our ideas in a system for generating API-heavy Java code, and show that it can often predict the entire body of a method given just a few API calls or data types that appear in the method.",
"title": ""
},
{
"docid": "470ecc2bc4299d913125d307c20dd48d",
"text": "The task of end-to-end relation extraction consists of two sub-tasks: i) identifying entity mentions along with their types and ii) recognizing semantic relations among the entity mention pairs. It has been shown that for better performance, it is necessary to address these two sub-tasks jointly [22,13]. We propose an approach for simultaneous extraction of entity mentions and relations in a sentence, by using inference in Markov Logic Networks (MLN) [21]. We learn three different classifiers : i) local entity classifier, ii) local relation classifier and iii) “pipeline” relation classifier which uses predictions of the local entity classifier. Predictions of these classifiers may be inconsistent with each other. We represent these predictions along with some domain knowledge using weighted first-order logic rules in an MLN and perform joint inference over the MLN to obtain a global output with minimum inconsistencies. Experiments on the ACE (Automatic Content Extraction) 2004 dataset demonstrate that our approach of joint extraction using MLNs outperforms the baselines of individual classifiers. Our end-to-end relation extraction performance is better than 2 out of 3 previous results reported on the ACE 2004 dataset.",
"title": ""
},
{
"docid": "76e75c4549cbaf89796355b299bedfdc",
"text": "Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixellevel brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, stateof-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the latest event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of processing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20.",
"title": ""
},
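The event-camera record above works on the Surface of Active Events (SAE), a per-pixel map holding the timestamp of the latest event, and classifies events using only comparison operations on that map. The sketch below keeps just those two ingredients in a deliberately simplified form (a plain "newer than most ring neighbours" test on a square ring rather than the detector's actual circle tests), so it illustrates the data structure, not the published detector.

```python
import numpy as np

H, W = 180, 240
sae = np.full((H, W), -np.inf)   # Surface of Active Events: latest timestamp per pixel

def process_event(x, y, t, ring=3, newer_fraction=0.75):
    """Update the SAE and return True if the event looks corner-like.

    Simplified test: keep the event if its timestamp is newer than most
    pixels on a square ring around it (comparison operations only).
    """
    sae[y, x] = t
    y0, y1 = max(0, y - ring), min(H, y + ring + 1)
    x0, x1 = max(0, x - ring), min(W, x + ring + 1)
    patch = sae[y0:y1, x0:x1]
    ring_vals = np.concatenate([patch[0, :], patch[-1, :], patch[:, 0], patch[:, -1]])
    return np.mean(t >= ring_vals) >= newer_fraction

# Feed a few synthetic events (x, y, timestamp in seconds).
for ev in [(50, 60, 0.001), (51, 60, 0.002), (50, 61, 0.003), (80, 90, 0.004)]:
    print(ev, "corner-like:", process_event(*ev))
```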
{
"docid": "7b0d52753e359a6dff3847ff57c321ac",
"text": "Neural network based methods have obtained great progress on a variety of natural language processing tasks. However, it is still a challenge task to model long texts, such as sentences and documents. In this paper, we propose a multi-timescale long short-term memory (MT-LSTM) neural network to model long texts. MTLSTM partitions the hidden states of the standard LSTM into several groups. Each group is activated at different time periods. Thus, MT-LSTM can model very long documents as well as short sentences. Experiments on four benchmark datasets show that our model outperforms the other neural models in text classification task.",
"title": ""
},
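The MT-LSTM record above partitions the recurrent hidden state into groups that are updated at different time periods. The sketch below shows only that update-scheduling idea on a plain recurrent cell (not the full LSTM gating equations), with made-up sizes and periods.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_size, input_size = 12, 8
n_groups = 3
periods = [1, 2, 4]                       # group g is updated every periods[g] steps
groups = np.array_split(np.arange(hidden_size), n_groups)

W_x = rng.normal(0, 0.1, (hidden_size, input_size))
W_h = rng.normal(0, 0.1, (hidden_size, hidden_size))

def step(h, x, t):
    """Update only the groups whose period divides the current time step t."""
    h_new = np.tanh(W_x @ x + W_h @ h)
    out = h.copy()
    for g, idx in enumerate(groups):
        if t % periods[g] == 0:
            out[idx] = h_new[idx]         # slow groups keep their old state otherwise
    return out

h = np.zeros(hidden_size)
for t, x in enumerate(rng.normal(size=(10, input_size)), start=1):
    h = step(h, x, t)
print(h.round(3))
```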
{
"docid": "01b05ea8fcca216e64905da7b5508dea",
"text": "Generative Adversarial Networks (GANs) have recently emerged as powerful generative models. GANs are trained by an adversarial process between a generative network and a discriminative network. It is theoretically guaranteed that, in the nonparametric regime, by arriving at the unique saddle point of a minimax objective function, the generative network generates samples from the data distribution. However, in practice, getting close to this saddle point has proven to be difficult, resulting in the ubiquitous problem of “mode collapse”. The root of the problems in training GANs lies on the unbalanced nature of the game being played. Here, we propose to level the playing field and make the minimax game balanced by “heating” the data distribution. The empirical distribution is frozen at temperature zero; GANs are instead initialized at infinite temperature, where learning is stable. By annealing the heated data distribution, we initialized the network at each temperature with the learnt parameters of the previous higher temperature. We posited a conjecture that learning under continuous annealing in the nonparametric regime is stable, and proposed an algorithm in corollary. In our experiments, the annealed GAN algorithm, dubbed β-GAN, trained with unmodified objective function was stable and did not suffer from mode collapse.",
"title": ""
},
{
"docid": "e1b050e8dc79f363c4a2b956f384c8d5",
"text": "Keyphrase extraction is a fundamental technique in natural language processing. It enables documents to be mapped to a concise set of phrases that can be used for indexing, clustering, ontology building, auto-tagging and other information organization schemes. Two major families of unsupervised keyphrase extraction algorithms may be characterized as statistical and graph-based. We present a hybrid statistical-graphical algorithm that capitalizes on the heuristics of both families of algorithms and is able to outperform the state of the art in unsupervised keyphrase extraction on several datasets.",
"title": ""
},
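The keyphrase record above combines statistical and graph-based heuristics. The sketch below shows one common way such hybrids are built: a PageRank-style score over a word co-occurrence graph, mixed with a simple frequency statistic. It is a generic illustration, not the paper's actual algorithm, and the window size and mixing weight are arbitrary.

```python
import numpy as np
from collections import Counter
from itertools import combinations

def hybrid_keywords(tokens, window=3, alpha=0.7, top_k=5):
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)

    # Graph part: co-occurrence counts within a sliding window.
    A = np.zeros((n, n))
    for start in range(len(tokens)):
        for w1, w2 in combinations(tokens[start:start + window], 2):
            if w1 != w2:
                A[index[w1], index[w2]] += 1
                A[index[w2], index[w1]] += 1
    deg = A.sum(axis=1)
    deg[deg == 0] = 1
    P = A / deg[:, None]

    # PageRank-style power iteration over the co-occurrence graph.
    r = np.ones(n) / n
    for _ in range(50):
        r = 0.15 / n + 0.85 * P.T @ r

    # Statistical part: normalized term frequency, mixed with the graph score.
    tf = np.array([Counter(tokens)[w] for w in vocab], dtype=float)
    score = alpha * r / r.max() + (1 - alpha) * tf / tf.max()
    return [vocab[i] for i in np.argsort(score)[::-1][:top_k]]

text = ("graph based keyphrase extraction ranks words in a word graph "
        "while statistical keyphrase extraction ranks words by frequency").split()
print(hybrid_keywords(text))
```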
{
"docid": "90f188c1f021c16ad7c8515f1244c08a",
"text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion. The maxillary anterior teeth of a patient, class ACE IV, has been treated following the most conservatory approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (eg, crowns) would have required elective endodontic therapy and crown lengthening. To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.",
"title": ""
},
{
"docid": "0fff38933ebaa8ecc2d891b0e742c567",
"text": "The rates of different ATP-consuming reactions were measured in concanavalin A-stimulated thymocytes, a model system in which more than 80% of the ATP consumption can be accounted for. There was a clear hierarchy of the responses of different energy-consuming reactions to changes in energy supply: pathways of macromolecule biosynthesis (protein synthesis and RNA/DNA synthesis) were most sensitive to energy supply, followed by sodium cycling and then calcium cycling across the plasma membrane. Mitochondrial proton leak was the least sensitive to energy supply. Control analysis was used to quantify the relative control over ATP production exerted by the individual groups of ATP-consuming reactions. Control was widely shared; no block of reactions had more than one-third of the control. A fuller control analysis showed that there appeared to be a hierarchy of control over the flux through ATP: protein synthesis > RNA/DNA synthesis and substrate oxidation > Na+ cycling and Ca2+ cycling > other ATP consumers and mitochondrial proton leak. Control analysis also indicated that there was significant control over the rates of individual ATP consumers by energy supply. Each ATP consumer had strong control over its own rate but very little control over the rates of the other ATP consumers.",
"title": ""
},
{
"docid": "ac76a4fe36e95d87f844c6876735b82f",
"text": "Theoretical estimates indicate that graphene thin films can be used as transparent electrodes for thin-film devices such as solar cells and organic light-emitting diodes, with an unmatched combination of sheet resistance and transparency. We demonstrate organic light-emitting diodes with solution-processed graphene thin film transparent conductive anodes. The graphene electrodes were deposited on quartz substrates by spin-coating of an aqueous dispersion of functionalized graphene, followed by a vacuum anneal step to reduce the sheet resistance. Small molecular weight organic materials and a metal cathode were directly deposited on the graphene anodes, resulting in devices with a performance comparable to control devices on indium-tin-oxide transparent anodes. The outcoupling efficiency of devices on graphene and indium-tin-oxide is nearly identical, in agreement with model predictions.",
"title": ""
},
{
"docid": "de40fc5103b26520e0a8c981019982b5",
"text": "Return-Oriented Programming (ROP) is the cornerstone of today’s exploits. Yet, building ROP chains is predominantly a manual task, enjoying limited tool support. Many of the available tools contain bugs, are not tailored to the needs of exploit development in the real world and do not offer practical support to analysts, which is why they are seldom used for any tasks beyond gadget discovery. We present PSHAPE (P ractical Support for Half-Automated P rogram Exploitation), a tool which assists analysts in exploit development. It discovers gadgets, chains gadgets together, and ensures that side effects such as register dereferences do not crash the program. Furthermore, we introduce the notion of gadget summaries, a compact representation of the effects a gadget or a chain of gadgets has on memory and registers. These semantic summaries enable analysts to quickly determine the usefulness of long, complex gadgets that use a lot of aliasing or involve memory accesses. Case studies on nine real binaries representing 147 MiB of code show PSHAPE’s usefulness: it automatically builds usable ROP chains for nine out of eleven scenarios.",
"title": ""
},
{
"docid": "c67fbc6e0a2a66e0855dcfc7a70cfb86",
"text": "We present an optimistic primary-backup (so-called passive replication) mechanism for highly available Internet services on intercloud platforms. Our proposed method aims at providing Internet services despite the occurrence of a large-scale disaster. To this end, each service in our method creates replicas in different data centers and coordinates them with an optimistic consensus algorithm instead of a majority-based consensus algorithm such as Paxos. Although our method allows temporary inconsistencies among replicas, it eventually converges on the desired state without an interruption in services. In particular, the method tolerates simultaneous failure of the majority of nodes and a partitioning of the network. Moreover, through interservice communications, members of the service groups are autonomously reorganized according to the type of failure and changes in system load. This enables both load balancing and power savings, as well as provisioning for the next disaster. We demonstrate the service availability provided by our approach for simulated failure patterns and its adaptation to changes in workload for load balancing and power savings by experiments with a prototype implementation.",
"title": ""
},
{
"docid": "7c106fc6fc05ec2d35b89a1dec8e2ca2",
"text": "OBJECTIVE\nCurrent estimates of the prevalence of depression during pregnancy vary widely. A more precise estimate is required to identify the level of disease burden and develop strategies for managing depressive disorders. The objective of this study was to estimate the prevalence of depression during pregnancy by trimester, as detected by validated screening instruments (ie, Beck Depression Inventory, Edinburgh Postnatal Depression Score) and structured interviews, and to compare the rates among instruments.\n\n\nDATA SOURCES\nObservational studies and surveys were searched in MEDLINE from 1966, CINAHL from 1982, EMBASE from 1980, and HealthSTAR from 1975.\n\n\nMETHODS OF STUDY SELECTION\nA validated study selection/data extraction form detailed acceptance criteria. Numbers and percentages of depressed patients, by weeks of gestation or trimester, were reported.\n\n\nTABULATION, INTEGRATION, AND RESULTS\nTwo reviewers independently extracted data; a third party resolved disagreement. Two raters assessed quality by using a 12-point checklist. A random effects meta-analytic model produced point estimates and 95% confidence intervals (CIs). Heterogeneity was examined with the chi(2) test (no systematic bias detected). Funnel plots and Begg-Mazumdar test were used to assess publication bias (none found). Of 714 articles identified, 21 (19,284 patients) met the study criteria. Quality scores averaged 62%. Prevalence rates (95% CIs) were 7.4% (2.2, 12.6), 12.8% (10.7, 14.8), and 12.0% (7.4, 16.7) for the first, second, and third trimesters, respectively. Structured interviews found lower rates than the Beck Depression Inventory but not the Edinburgh Postnatal Depression Scale.\n\n\nCONCLUSION\nRates of depression, especially during the second and third trimesters of pregnancy, are substantial. Clinical and economic studies to estimate maternal and fetal consequences are needed.",
"title": ""
},
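The meta-analysis record above pools prevalence estimates with a random-effects model and reports 95% confidence intervals. The sketch below shows a standard DerSimonian-Laird pooling of proportions on the logit scale; the input numbers are invented and the original paper's exact computational details are not reproduced.

```python
import numpy as np

def pooled_prevalence(events, totals):
    """Random-effects (DerSimonian-Laird) pooling of proportions on the logit scale."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    theta = np.log(p / (1 - p))                  # logit-transformed prevalence
    var = 1 / (totals * p * (1 - p))             # approximate within-study variance
    w = 1 / var
    theta_fixed = np.sum(w * theta) / np.sum(w)
    Q = np.sum(w * (theta - theta_fixed) ** 2)
    k = len(p)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (var + tau2)                    # random-effects weights
    mu = np.sum(w_star * theta) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    lo, hi = mu - 1.96 * se, mu + 1.96 * se
    back = lambda x: 1 / (1 + np.exp(-x))        # back-transform to a proportion
    return back(mu), back(lo), back(hi)

# Hypothetical studies: number of depressed students / sample size.
print(pooled_prevalence([30, 55, 12, 80], [240, 400, 150, 600]))
```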
{
"docid": "0db1e1304ec2b5d40790677c9ce07394",
"text": "Neural sequence-to-sequence model has achieved great success in abstractive summarization task. However, due to the limit of input length, most of previous works can only utilize lead sentences as the input to generate the abstractive summarization, which ignores crucial information of the document. To alleviate this problem, we propose a novel approach to improve neural sentence summarization by using extractive summarization, which aims at taking full advantage of the document information as much as possible. Furthermore, we present both of streamline strategy and system combination strategy to achieve the fusion of the contents in different views, which can be easily adapted to other domains. Experimental results on CNN/Daily Mail dataset demonstrate both our proposed strategies can significantly improve the performance of neural sentence summarization.",
"title": ""
},
{
"docid": "6c4c235c779d9e6a78ea36d7fc636df4",
"text": "Digital archiving creates a vast store of knowledge that can be accessed only through digital tools. Users of this information will need fluency in the tools of digital access, exploration, visualization, analysis, and collaboration. This paper proposes that this fluency represents a new form of literacy, which must become fundamental for humanities scholars. Tools influence both the creation and the analysis of information. Whether using pen and paper, Microsoft Office, or Web 2.0, scholars base their process, production, and questions on the capabilities their tools offer them. Digital archiving and the interconnectivity of the Web provide new challenges in terms of quantity and quality of information. They create a new medium for presentation as well as a foundation for collaboration that is independent of physical location. Challenges for digital humanities include: • developing new genres for complex information presentation that can be shared, analyzed, and compared; • creating a literacy in information analysis and visualization that has the same rigor and richness as current scholarship; and • expanding classically text-based pedagogy to include simulation, animation, and spatial and geographic representation.",
"title": ""
},
{
"docid": "4d6d315aed4535c15714f78c183ac196",
"text": "Is narcissism related to observer-rated attractiveness? Two views imply that narcissism is unrelated to attractiveness: positive illusions theory and Feingold’s (1992) attractiveness theory (i.e., attractiveness is unrelated to personality in general). In contrast, two other views imply that narcissism is positively related to attractiveness: an evolutionary perspective on narcissism (i.e., selection pressures in shortterm mating contexts shaped the evolution of narcissism, including greater selection for attractiveness in short-term versus long-term mating contexts) and, secondly, the self-regulatory processing model of narcissism (narcissists groom themselves to bolster grandiose self-images). A meta-analysis (N > 1000) reveals a small but reliable positive narcissism–attractiveness correlation that approaches the largest known personality–attractiveness correlations. The finding supports the evolutionary and self-regulatory views of narcissism. 2009 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "99476690b32f04c8a1ec04dcd779f8f7",
"text": "This paper discusses the conception and development of a ball-on-plate balancing system based on mechatronic design principles. Realization of the design is achieved with the simultaneous consideration towards constraints like cost, performance, functionality, extendibility, and educational merit. A complete dynamic system investigation for the ball-on-plate system is presented in this paper. This includes hardware design, sensor and actuator selection, system modeling, parameter identification, controller design and experimental testing. The system was designed and built by students as part of the course Mechatronics System Design at Rensselaer. 1. MECHATRONICS AT RENSSELAER Mechatronics is the synergistic combination of mechanical engineering, electronics, control systems and computers. The key element in mechatronics is the integration of these areas through the design process. The essential characteristic of a mechatronics engineer and the key to success in mechatronics is a balance between two sets of skills: modeling / analysis skills and experimentation / hardware implementation skills. Synergism and integration in design set a mechatronic system apart from a traditional, multidisciplinary system. Mechanical engineers are expected to design with synergy and integration and professors must now teach design accordingly. In the Department of Mechanical Engineering, Aeronautical Engineering & Mechanics (ME, AE & M) at Rensselaer there are presently two seniorelective courses in the field of mechatronics, which are also open to graduate students: Mechatronics, offered in the fall semester, and Mechatronic System Design, offered in the spring semester. In both courses, emphasis is placed on a balance between physical understanding and mathematical formalities. The key areas of study covered in both courses are: 1. Mechatronic system design principles 2. Modeling, analysis, and control of dynamic physical systems 3. Selection and interfacing of sensors, actuators, and microcontrollers 4. Analog and digital control electronics 5. Real-time programming for control Mechatronics covers the fundamentals in these areas through integrated lectures and laboratory exercises, while Mechatronic System Design focuses on the application and extension of the fundamentals through a design, build, and test experience. Throughout the coverage, the focus is kept on the role of the key mechatronic areas of study in the overall design process and how these key areas are integrated into a successful mechatronic system design. In mechatronics, balance is paramount. The essential characteristic of a mechatronics engineer and the key to success in mechatronics is a balance between two skill sets: 1. Modeling (physical and mathematical), analysis (closed-form and numerical simulation), and control design (analog and digital) of dynamic physical systems; and 2. Experimental validation of models and analysis (for computer simulation without experimental verification is at best questionable, and at worst useless), and an understanding of the key issues in hardware implementation of designs. Figure 1 shows a diagram of the procedure for a dynamic system investigation which emphasizes this balance. This diagram serves as a guide for the study of the various mechatronic hardware systems in the courses taught at Rensselaer. 
When students perform a complete dynamic system investigation of a mechatronic system, they develop modeling / analysis skills and obtain knowledge of and experience with a wide variety of analog and digital sensors and actuators that will be indispensable as mechatronic design engineers in future years. This fundamental process of dynamic system investigation shall be followed in this paper. 2. INTRODUCTION: BALL ON PLATE SYSTEM The ball-on-plate balancing system, due to its inherent complexity, presents a challenging design problem. In the context of such an unconventional problem, the relevance of mechatronics design methodology becomes apparent. This paper describes the design and development of a ball-on-plate balancing system that was built from an initial design concept by a team of primarily undergraduate students as part of the course Mechatronics System Design at Rensselaer. Other ball-on-plate balancing systems have been designed in the past and some are also commercially available (TecQuipment). The existing systems are, to some extent, bulky and non-portable, and prohibitively expensive for educational purposes. The objective of this design exercise, as is typical of mechatronics design, was to make the ball-on-plate balancing system ‘better, cheaper, quicker’, i.e., to build a compact and affordable ball-on-plate system within a single semester. These objectives were met extremely well by the design that will be presented in this paper. The system described here is unique for its innovativeness in terms of the sensing and actuation schemes, which are the two most critical issues in this design. The first major challenge was to sense the ball position, accurately, reliably, and in a noncumbersome, yet inexpensive way. The various options that were considered are listed below. The relative merits and demerits are also indicated. 1. Some sort of touch sensing scheme: not enough information available, maybe hard to implement. 2. Overhead digital camera with image grabbing and processing software: expensive, requires the use of additional software, requires the use of a super-structure to mount the camera. 3. Resistive grid on the plate (a two dimensional potentiometer): limited resolution, excessive and cumbersome wiring needed. 4. Grid of infrared sensors: inexpensive, limited resolution, cumbersome, excessive wiring needed. Physical System Physical Model Mathematical Model Model Parameter Identification Actual Dynamic Behavior Compare Predicted Dynamic Behavior Make Design Decisions Design Complete Measurements, Calculations, Manufacturer's Specifications Assumptions and Engineering Judgement Physical Laws Experimental Analysis Equation Solution: Analytical and Numerical Solution Model Adequate, Performance Adequate Model Adequate, Performance Inadequate Modify or Augment Model Inadequate: Modify Which Parameters to Identify? What Tests to Perform? Figure 1.Dynamic System Investigation chart 5. 3D-motion tracking of the ball by means of an infrared-ultrasonic transponder attached to the ball, which exchanges signals with 3 remotely located towers (V-scope by Lipman Electronic Engineering Ltd.): very accurate and clean measurements, requires an additional apparatus altogether, very expensive, special attachment to the ball has to be made Based on the above listed merits and demerits associated with each choice, it was decided to pursue the option of using a touch-screen. It offered the most compact, reliable, and affordable solution. 
This decision was followed by extensive research pertaining to the selection and implementation of an appropriate touch-sensor. The next major challenge was to design an actuation mechanism for the plate. The plate has to rotate about its two planer body axes, to be able to balance the ball. For this design, the following options were considered: 1. Two linear actuators connected to two corners on the base of the plate that is supported by a ball and socket joint in the center, thus providing the two necessary degrees of motion: very expensive 2. Mount the plate on a gimbal ring. One motor turns the gimbal providing one degree of rotation; the other motor turns the plate relative to the ring thus providing a second degree of rotation: a non-symmetric set-up because one motor has to move the entire gimbal along with the plate thus experiencing a much higher load inertia as compared to the other motor. 3. Use of cable and pulley arrangement to turn the plate using two motors (DC or Stepper): good idea, has been used earlier 4. Use a spatial linkage mechanism to turn the plate using two motors (DC or Stepper): This comprises two four-bar parallelogram linkages, each driving one axis of rotation of the plate: an innovative method never tried before, design has to verified. Figure 2 Ball-on-plate System Assembly In this case, the final choice was selected for its uniqueness as a design never tried before. Figure 2 shows an assembly view of the entire system including the spatial linkage mechanism and the touch-screen mounted on the plate. 3. PHYSICAL SYSTEM DESCRIPTION The physical system consists of an acrylic plate, an actuation mechanism for tilting the plate about two axes, a ball position sensor, instrumentation for signal processing, and real-time control software/hardware. The entire system is mounted on an aluminium base plate and is supported by four vertical aluminium beams. The beams provide shape and support to the system and also provide mountings for the two motors. 3.1 Actuation mechanism Figure 3. The spatial linkage mechanism used for actuating the plate. Each motor (O 1 and O2) drives one axis of the plate-rotation angle and is connected to the plate by a spatial linkage mechanism (Figure 3). Referring to the schematic in Figure 5, each side of the spatial linkage mechanism (O 1-P1-A-O and O2-P2-B-O) is a four-bar parallelogram linkage. This ensures that for small motions around the equilibrium, the plate angles (q1 and q2, defined later) are equal to the corresponding motor angles (θm1 and θm2). The plate is connected to ground by means of a U-joint at O. Ball joints (at points P1, P2, A and B) connecting linkages and rods provide enough freedom of motion to ensure that the system does not bind. The motor angles are measured by highresolution optical encoders mounted on the motor shafts. A dual-axis inclinometer is mounted on the plate to measure the plate angles directly. As shall be shown later, for small motions, the motor angles correspond to the plate angles due to the kinematic constraints imposed by the parallelogram linkages. The motors used for driving the l",
"title": ""
},
{
"docid": "2f83ca2bdd8401334877ae4406a4491c",
"text": "Mobile IP is the current standard for supporting macromobility of mobile hosts. However, in the case of micromobility support, there are several competing proposals. In this paper, we present the design, implementation, and performance evaluation of HAWAII, a domain-based approach for supporting mobility. HAWAII uses specialized path setup schemes which install host-based forwarding entries in specific routers to support intra-domain micromobility. These path setup schemes deliver excellent performance by reducing mobility related disruption to user applications. Also, mobile hosts retain their network address while moving within the domain, simplifying quality-of-service (QoS) support. Furthermore, reliability is achieved through maintaining soft-state forwarding entries for the mobile hosts and leveraging fault detection mechanisms built in existing intra-domain routing protocols. HAWAII defaults to using Mobile IP for macromobility, thus providing a comprehensive solution for mobility support in wide-area wireless networks.",
"title": ""
},
{
"docid": "0793d82c1246c777dce673d8f3146534",
"text": "CONTEXT\nMedical schools are known to be stressful environments for students and hence medical students have been believed to experience greater incidences of depression than others. We evaluated the global prevalence of depression amongst medical students, as well as epidemiological, psychological, educational and social factors in order to identify high-risk groups that may require targeted interventions.\n\n\nMETHODS\nA systematic search was conducted in online databases for cross-sectional studies examining prevalences of depression among medical students. Studies were included only if they had used standardised and validated questionnaires to evaluate the prevalence of depression in a group of medical students. Random-effects models were used to calculate the aggregate prevalence and pooled odds ratios (ORs). Meta-regression was carried out when heterogeneity was high.\n\n\nRESULTS\nFindings for a total of 62 728 medical students and 1845 non-medical students were pooled across 77 studies and examined. Our analyses demonstrated a global prevalence of depression amongst medical students of 28.0% (95% confidence interval [CI] 24.2-32.1%). Female, Year 1, postgraduate and Middle Eastern medical students were more likely to be depressed, but the differences were not statistically significant. By year of study, Year 1 students had the highest rates of depression at 33.5% (95% CI 25.2-43.1%); rates of depression then gradually decreased to reach 20.5% (95% CI 13.2-30.5%) at Year 5. This trend represented a significant decline (B = - 0.324, p = 0.005). There was no significant difference in prevalences of depression between medical and non-medical students. The overall mean frequency of suicide ideation was 5.8% (95% CI 4.0-8.3%), but the mean proportion of depressed medical students who sought treatment was only 12.9% (95% CI 8.1-19.8%).\n\n\nCONCLUSIONS\nDepression affects almost one-third of medical students globally but treatment rates are relatively low. The current findings suggest that medical schools and health authorities should offer early detection and prevention programmes, and interventions for depression amongst medical students before graduation.",
"title": ""
}
] |
scidocsrr
|
32f88afee3e76030d362e32d0a300e56
|
A Distributed Sensor Data Search Platform for Internet of Things Environments
|
[
{
"docid": "1a9e2481abf23501274e67575b1c9be6",
"text": "The multiple criteria decision making (MCDM) methods VIKOR and TOPSIS are based on an aggregating function representing “closeness to the idealâ€, which originated in the compromise programming method. In VIKOR linear normalization and in TOPSIS vector normalization is used to eliminate the units of criterion functions. The VIKOR method of compromise ranking determines a compromise solution, providing a maximum “group utility†for the “majority†and a minimum of an individual regret for the “opponentâ€. The TOPSIS method determines a solution with the shortest distance to the ideal solution and the greatest distance from the negative-ideal solution, but it does not consider the relative importance of these distances. A comparative analysis of these two methods is illustrated with a numerical example, showing their similarity and some differences. a, 1 b Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.",
"title": ""
}
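Since the record above contrasts VIKOR with TOPSIS, a compact TOPSIS implementation is sketched below: vector normalization, weighting, distances to the ideal and negative-ideal solutions, and the closeness coefficient. The decision matrix, weights, and criterion directions are invented for illustration.

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) with TOPSIS.

    weights : importance of each criterion (sums to 1)
    benefit : True for criteria to maximize, False for criteria to minimize
    """
    X = np.asarray(X, dtype=float)
    R = X / np.sqrt((X ** 2).sum(axis=0))             # vector normalization
    V = R * np.asarray(weights)
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))   # distance to ideal solution
    d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))   # distance to negative-ideal
    return d_minus / (d_plus + d_minus)                # closeness coefficient

# 3 sensor platforms scored on cost (minimize), coverage and reliability (maximize).
X = [[200, 0.8, 0.90],
     [350, 0.9, 0.95],
     [300, 0.7, 0.99]]
c = topsis(X, weights=[0.3, 0.4, 0.3], benefit=[False, True, True])
print(c, "best:", int(np.argmax(c)))
```

As the abstract notes, VIKOR differs mainly in using linear normalization and an aggregate of group utility and individual regret instead of this closeness coefficient.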
] |
[
{
"docid": "46a4e4dbcb9b6656414420a908b51cc5",
"text": "We review Bacry and Lévy-Leblond’s work on possible kinematics as applied to 2-dimensional spacetimes, as well as the nine types of 2-dimensional Cayley–Klein geometries, illustrating how the Cayley–Klein geometries give homogeneous spacetimes for all but one of the kinematical groups. We then construct a two-parameter family of Clifford algebras that give a unified framework for representing both the Lie algebras as well as the kinematical groups, showing that these groups are true rotation groups. In addition we give conformal models for these spacetimes.",
"title": ""
},
{
"docid": "0e4334595aeec579e8eb35b0e805282d",
"text": "In this paper, we present madmom, an open-source audio processing and music information retrieval (MIR) library written in Python. madmom features a concise, NumPy-compatible, object oriented design with simple calling conventions and sensible default values for all parameters, which facilitates fast prototyping of MIR applications. Prototypes can be seamlessly converted into callable processing pipelines through madmom's concept of Processors, callable objects that run transparently on multiple cores. Processors can also be serialised, saved, and re-run to allow results to be easily reproduced anywhere. Apart from low-level audio processing, madmom puts emphasis on musically meaningful high-level features. Many of these incorporate machine learning techniques and madmom provides a module that implements some methods commonly used in MIR such as hidden Markov models and neural networks. Additionally, madmom comes with several state-of-the-art MIR algorithms for onset detection, beat, downbeat and meter tracking, tempo estimation, and chord recognition. These can easily be incorporated into bigger MIR systems or run as stand-alone programs.",
"title": ""
},
{
"docid": "df54716e3bed98a8bf510587cfcdb6cb",
"text": "We propose a method to procedurally generate a familiar yet complex human artifact: the city. We are not trying to reproduce existing cities, but to generate artificial cities that are convincing and plausible by capturing developmental behavior. In addition, our results are meant to build upon themselves, such that they ought to look compelling at any point along the transition from village to metropolis. Our approach largely focuses upon land usage and building distribution for creating realistic city environments, whereas previous attempts at city modeling have mainly focused on populating road networks. Finally, we want our model to be self automated to the point that the only necessary input is a terrain description, but other high-level and low-level parameters can be specified to support artistic contributions. With the aid of agent based simulation we are generating a system of agents and behaviors that interact with one another through their effects upon a simulated environment. Our philosophy is that as each agent follows a simple behavioral rule set, a more complex behavior will tend to emerge out of the interactions between the agents and their differing rule sets. By confining our model to a set of simple rules for each class of agents, we hope to make our model extendible not only in regard to the types of structures that are produced, but also in describing the social and cultural influences prevalent in all cities.",
"title": ""
},
{
"docid": "6cf7fb67afbbc7d396649bb3f05dd0ca",
"text": "This paper details a methodology for using structured light laser imaging to create high resolution bathymetric maps of the sea floor. The system includes a pair of stereo cameras and an inclined 532nm sheet laser mounted to a remotely operated vehicle (ROV). While a structured light system generally requires a single camera, a stereo vision set up is used here for in-situ calibration of the laser system geometry by triangulating points on the laser line. This allows for quick calibration at the survey site and does not require precise jigs or a controlled environment. A batch procedure to extract the laser line from the images to sub-pixel accuracy is also presented. The method is robust to variations in image quality and moderate amounts of water column turbidity. The final maps are constructed using a reformulation of a previous bathymetric Simultaneous Localization and Mapping (SLAM) algorithm called incremental Smoothing and Mapping (iSAM). The iSAM framework is adapted from previous applications to perform sub-mapping, where segments of previously visited terrain are registered to create relative pose constraints. The resulting maps can be gridded at one centimeter and have significantly higher sample density than similar surveys using high frequency multibeam sonar or stereo vision. Results are presented for sample surveys at a submerged archaeological site and sea floor rock outcrop.",
"title": ""
},
{
"docid": "4b9ccf92713405e7c45e8a21bb09e150",
"text": "The present article describes VAS Generator (www.vasgenerator.net), a free Web service for creating a wide range of visual analogue scales that can be used as measurement devices in Web surveys and Web experimentation, as well as for local computerized assessment. A step-by-step example for creating and implementing a visual analogue scale with visual feedback is given. VAS Generator and the scales it generates work independently of platforms and use the underlying languages HTML and JavaScript. Results from a validation study with 355 participants are reported and show that the scales generated with VAS Generator approximate an interval-scale level. In light of previous research on visual analogue versus categorical (e.g., radio button) scales in Internet-based research, we conclude that categorical scales only reach ordinal-scale level, and thus visual analogue scales are to be preferred whenever possible.",
"title": ""
},
{
"docid": "4159f4f92adea44577319e897f10d765",
"text": "While our knowledge about ancient civilizations comes mostly from studies in archaeology and history books, much can also be learned or confirmed from literary texts . Using natural language processing techniques, we present aspects of ancient China as revealed by statistical textual analysis on the Complete Tang Poems , a 2.6-million-character corpus of all surviving poems from the Tang Dynasty (AD 618 —907). Using an automatically created treebank of this corpus , we outline the semantic profiles of various poets, and discuss the role of s easons, geography, history, architecture, and colours , as observed through word selection and dependencies.",
"title": ""
},
{
"docid": "554b82dc9820bae817bac59e81bf798a",
"text": "This paper proposed a 4-channel parallel 40 Gb/s front-end amplifier (FEA) in optical receiver for parallel optical transmission system. A novel enhancement type regulated cascade (ETRGC) configuration with an active inductor is originated in this paper for the transimpedance amplifier to significantly increase the bandwidth. The technique of three-order interleaving active feedback expands the bandwidth of the gain stage of transimpedance amplifier and limiting amplifier. Experimental results show that the output swing is 210 mV (Vpp) when the input voltage varies from 5 mV to 500 mV. The power consumption of the 4-channel parallel 40 Gb/s front-end amplifier (FEA) is 370 mW with 1.8 V power supply and the chip area is 650 μm×1300 μm.",
"title": ""
},
{
"docid": "734eb2576affeb2e34f07b5222933f12",
"text": "In this paper, a novel chemical sensor system utilizing an Ion-Sensitive Field Effect Transistor (ISFET) for pH measurement is presented. Compared to other interface circuits, this system uses auto-zero amplifiers with a pingpong control scheme and array of Programmable-Gate Ion-Sensitive Field Effect Transistor (PG-ISFET). By feedback controlling the programable gates of ISFETs, the intrinsic sensor offset can be compensated for uniformly. Furthermore the chemical signal sensitivity can be enhanced due to the feedback system on the sensing node. A pingpong structure and operation protocol has been developed to realize the circuit, reducing the error and achieve continuous measurement. This system has been designed and fabricated in AMS 0.35µm, to compensate for a threshold voltage variation of ±5V and enhance the pH sensitivity to 100mV/pH.",
"title": ""
},
{
"docid": "d48529ec9487fab939bc8120c44499d0",
"text": "A new wideband circularly polarized antenna using metasurface superstrate for C-band satellite communication application is proposed in this letter. The proposed antenna consists of a planar slot coupling antenna with an array of metallic rectangular patches that can be viewed as a polarization-dependent metasurface superstrate. The metasurface is utilized to adjust axial ratio (AR) for wideband circular polarization. Furthermore, the proposed antenna has a compact structure with a low profile of 0.07λ0 ( λ0 stands for the free-space wavelength at 5.25 GHz) and ground size of 34.5×28 mm2. Measured results show that the -10-dB impedance bandwidth for the proposed antenna is 33.7% from 4.2 to 5.9 GHz, and 3-dB AR bandwidth is 16.5% from 4.9 to 5.9 GHz with an average gain of 5.8 dBi. The simulated and measured results are in good agreement to verify the good performance of the proposed antenna.",
"title": ""
},
{
"docid": "9f5e4d52df5f13a80ccdb917a899bb9e",
"text": "This paper proposes a robust background model-based dense-visual-odometry (BaMVO) algorithm that uses an RGB-D sensor in a dynamic environment. The proposed algorithm estimates the background model represented by the nonparametric model from depth scenes and then estimates the ego-motion of the sensor using the energy-based dense-visual-odometry approach based on the estimated background model in order to consider moving objects. Experimental results demonstrate that the ego-motion is robustly obtained by BaMVO in a dynamic environment.",
"title": ""
},
{
"docid": "9d316fae0354f3eb28540ea013b4f8a4",
"text": "Natural language makes considerable use of recurrent formulaic patterns of words. This article triangulates the construct of formula from corpus linguistic, psycholinguistic, and educational perspectives. It describes the corpus linguistic extraction of pedagogically useful formulaic sequences for academic speech and writing. It determines English as a second language (ESL) and English for academic purposes (EAP) instructors’ evaluations of their pedagogical importance. It summarizes three experiments which show that different aspects of formulaicity affect the accuracy and fluency of processing of these formulas in native speakers and in advanced L2 learners of English. The language processing tasks were selected to sample an ecologically valid range of language processing skills: spoken and written, production and comprehension. Processing in all experiments was affected by various corpus-derived metrics: length, frequency, and mutual information (MI), but to different degrees in the different populations. For native speakers, it is predominantly the MI of the formula which determines processability; for nonnative learners of the language, it is predominantly the frequency of the formula. The implications of these findings are discussed for (a) the psycholinguistic validity of corpus-derived formulas, (b) a model of their acquisition, (c) ESL and EAP instruction and the prioritization of which formulas to teach.",
"title": ""
},
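The record above scores formulaic sequences by corpus frequency and mutual information (MI). The sketch below computes frequency and pointwise MI for two-word sequences from a toy corpus; real work would use a large corpus and longer n-grams.

```python
import math
from collections import Counter

tokens = ("on the other hand the results on the other hand suggest "
          "that on balance the other option holds").split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
N = len(tokens)

def pointwise_mi(w1, w2):
    """PMI = log2( P(w1, w2) / (P(w1) * P(w2)) ), using simple MLE probabilities."""
    p_joint = bigrams[(w1, w2)] / (N - 1)
    p1, p2 = unigrams[w1] / N, unigrams[w2] / N
    return math.log2(p_joint / (p1 * p2))

for pair, freq in bigrams.most_common(3):
    print(pair, "freq:", freq, "PMI:", round(pointwise_mi(*pair), 2))
```

High-frequency, high-MI sequences are the ones the study found easiest for native speakers to process, which is the link between the corpus metrics and the psycholinguistic results.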
{
"docid": "769ba1ac260f54ea64b83d34b97fc868",
"text": "Truck platooning for which multiple trucks follow at a short distance is considered a near-term truck automation opportunity, with the potential to reduce fuel consumption. Short following distances and increasing automation make it hard for a driver to be the backup if the system fails. The EcoTwin consortium successfully demonstrated a two truck platooning system with trucks following at 20 meters distance at the public road, in which the driver is the backup. The ambition of the consortium is to increase the truck automation and to reduce the following distance, which requires a new fail-operational truck platooning architecture. This paper presents a level 2+ platooning system architecture, which is fail-operational for a single failure, and the corresponding process to obtain it. First insights in the existing two truck platooning system are obtained by analyzing its key aspects, being utilization, latency, reliability, and safety. Using these insights, candidate level 2+ platooning system architectures are defined from which the most suitable truck platooning architecture is selected. Future work is the design and implementation of a prototype, based on the presented level 2+ platooning system architecture.",
"title": ""
},
{
"docid": "b5a8577b02f7f44e9fc5abd706e096d4",
"text": "Automotive Safety Integrity Level (ASIL) decomposition is a technique presented in the ISO 26262: Road Vehicles Functional Safety standard. Its purpose is to satisfy safety-critical requirements by decomposing them into less critical ones. This procedure requires a system-level validation, and the elements of the architecture to which the decomposed requirements are allocated must be analyzed in terms of Common-Cause Faults (CCF). In this work, we present a generic method for a bottomup ASIL decomposition, which can be used during the development of a new product. The system architecture is described in a three-layer model, from which fault trees are generated, formed by the application, resource, and physical layers and their mappings. A CCF analysis is performed on the fault trees to verify the absence of possible common faults between the redundant elements and to validate the ASIL decomposition.",
"title": ""
},
{
"docid": "d9e4a4303a7949b51510cf95098e4248",
"text": "Recent increased regulatory scrutiny concerning subvisible particulates (SbVPs) in parenteral formulations of biologics has led to the publication of numerous articles about the sources, characteristics, implications, and approaches to monitoring and detecting SbVPs. Despite varying opinions on the level of associated risks and method of regulation, nearly all industry scientists and regulators agree on the need for monitoring and reporting visible and subvisible particles. As prefillable drug delivery systems have become a prominent packaging option, silicone oil, a common primary packaging lubricant, may play a role in the appearance of particles. The goal of this article is to complement the current SbVP knowledge base with new insights into the evolution of silicone-oil-related particulates and their interactions with components in prefillable systems. We propose a \"toolbox\" for improved silicone-oil-related particulate detection and enumeration, and discuss the benefits and limitations of approaches for lowering and controlling silicone oil release in parenterals. Finally, we present surface cross-linking of silicone as the recommended solution for achieving significant SbVP reduction without negatively affecting functional performance.",
"title": ""
},
{
"docid": "becd45d50ead03dd5af399d5618f1ea3",
"text": "This paper presents a new paradigm of cryptography, quantum public-key cryptosystems. In quantum public-key cryptosystems, all parties including senders, receivers and adversaries are modeled as quantum (probabilistic) poly-time Turing (QPT) machines and only classical channels (i.e., no quantum channels) are employed. A quantum trapdoor one-way function, f , plays an essential role in our system, in which a QPT machine can compute f with high probability, any QPT machine can invert f with negligible probability, and a QPT machine with trapdoor data can invert f . This paper proposes a concrete scheme for quantum public-key cryptosystems: a quantum public-key encryption scheme or quantum trapdoor one-way function. The security of our schemes is based on the computational assumption (over QPT machines) that a class of subset-sum problems is intractable against any QPT machine. Our scheme is very efficient and practical if Shor’s discrete logarithm algorithm is efficiently realized on a quantum machine.",
"title": ""
},
{
"docid": "1b394e01c8e2ea7957c62e3e0b15fbd7",
"text": "In this paper, we present results on the implementation of a hierarchical quaternion based attitude and trajectory controller for manual and autonomous flights of quadrotors. Unlike previous papers on using quaternion representation, we use the nonlinear complementary filter that estimates the attitude in quaternions and as such does not involve Euler angles or rotation matrices. We show that for precise trajectory tracking, the resulting attitude error dynamics of the system is non-autonomous and is almost globally asymptotically and locally exponentially stable under the proposed control law. We also show local exponential stability of the translational dynamics under the proposed trajectory tracking controller which sits at the highest level of the hierarchy. Thus by input-to-state stability, the entire system is locally exponentially stable. The quaternion based observer and controllers are available as open-source.",
"title": ""
},
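The quadrotor record above works directly with quaternions rather than Euler angles or rotation matrices. The sketch below shows the usual quaternion-error computation and a simple proportional-derivative attitude law built on it; the gains and states are made up, and this is not the paper's exact hierarchical controller.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def attitude_torque(q, q_des, omega, kp=4.0, kd=0.8):
    """PD law on the vector part of the error quaternion q_err = conj(q_des) * q."""
    q_err = quat_mul(quat_conj(q_des), q)
    sign = 1.0 if q_err[0] >= 0 else -1.0    # pick the shorter rotation
    return -kp * sign * q_err[1:] - kd * omega

q_des = np.array([1.0, 0.0, 0.0, 0.0])       # level hover
q = np.array([0.995, 0.07, -0.05, 0.02])     # small tilt (roughly unit norm)
omega = np.array([0.1, -0.05, 0.0])          # body angular rate (rad/s)
print(attitude_torque(q, q_des, omega))
```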
{
"docid": "4437a0241b825fddd280517b9ae3565a",
"text": "The levels of pregnenolone, dehydroepiandrosterone (DHA), androstenedione, testosterone, dihydrotestosterone (DHT), oestrone, oestradiol, cortisol and luteinizing hormone (LH) were measured in the peripheral plasma of a group of young, apparently healthy males before and after masturbation. The same steroids were also determined in a control study, in which the psychological antipation of masturbation was encouraged, but the physical act was not carried out. The plasma levels of all steroids were significantly increased after masturbation, whereas steroid levels remained unchanged in the control study. The most marked changes after masturbation were observed in pregnenolone and DHA levels. No alterations were observed in the plasma levels of LH. Both before and after masturbation plasma levels of testosterone were significantly correlated to those of DHT and oestradiol, but not to those of the other steroids studied. On the other hand, cortisol levels were significantly correlated to those of pregnenolone, DHA, androstenedione and oestrone. In the same subjects, the levels of pregnenolone, DHA, androstenedione, testosterone and DHT, androstenedione and oestrone. In the same subjects, the levels of pregnenolone, DHA, androstenedione, testosterone and DHT in seminal plasma were also estimated; they were all significantly correlated to the levels of the corresponding steroid in the systemic blood withdrawn both before and after masturbation. As a practical consequence, the results indicate that whenever both blood and semen are analysed, blood sampling must precede semen collection.",
"title": ""
},
{
"docid": "33cab0ec47af5e40d64e34f8ffc7dd6f",
"text": "This inaugural article has a twofold purpose: (i) to present a simpler and more general justification of the fundamental scaling laws of quasibrittle fracture, bridging the asymptotic behaviors of plasticity, linear elastic fracture mechanics, and Weibull statistical theory of brittle failure, and (ii) to give a broad but succinct overview of various applications and ramifications covering many fields, many kinds of quasibrittle materials, and many scales (from 10(-8) to 10(6) m). The justification rests on developing a method to combine dimensional analysis of cohesive fracture with second-order accurate asymptotic matching. This method exploits the recently established general asymptotic properties of the cohesive crack model and nonlocal Weibull statistical model. The key idea is to select the dimensionless variables in such a way that, in each asymptotic case, all of them vanish except one. The minimal nature of the hypotheses made explains the surprisingly broad applicability of the scaling laws.",
"title": ""
},
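The scaling-law record above bridges the plasticity and LEFM asymptotes by second-order asymptotic matching. As a hedged illustration (the classical form this line of work is known for, not necessarily the exact expression derived in the article), the deterministic size effect law can be written as:

```latex
% Classical energetic (Type 2) size effect law for quasibrittle failure:
% nominal strength \sigma_N versus structure size D, with constants B, f_t', D_0.
\sigma_N = \frac{B f_t'}{\sqrt{1 + D/D_0}}
% Asymptotes: \sigma_N \to B f_t' (strength/plasticity limit) as D \to 0,
% and \sigma_N \propto D^{-1/2} (LEFM scaling) as D \to \infty.
```

The two limits recover exactly the asymptotic behaviors the abstract says the scaling laws must bridge.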
{
"docid": "6c72b38246e35d1f49d7f55e89b42f21",
"text": "The success of IT project related to numerous factors. It had an important significance to find the critical factors for the success of project. Based on the general analysis of IT project management, this paper analyzed some factors of project management for successful IT project from the angle of modern project management. These factors include project participators, project communication, collaboration, and information sharing mechanism as well as project management process. In the end, it analyzed the function of each factor for a successful IT project. On behalf of the collective goal, by the use of the favorable project communication and collaboration, the project participants carry out successfully to the management of the process, which is significant to the project, and make project achieve success eventually.",
"title": ""
},
{
"docid": "95db5921ba31588e962ffcd8eb6469b0",
"text": "The purpose of text clustering in information retrieval is to discover groups of semantically related documents. Accurate and comprehensible cluster descriptions (labels) let the user comprehend the collection’s content faster and are essential for various document browsing interfaces. The task of creating descriptive, sensible cluster labels is difficult—typical text clustering algorithms focus on optimizing proximity between documents inside a cluster and rely on keyword representation for describing discovered clusters. In the approach called Description Comes First (DCF) cluster labels are as important as document groups—DCF promotes machine discovery of comprehensible candidate cluster labels later used to discover related document groups. In this paper we describe an application of DCF to the k-Means algorithm, including results of experiments performed on the 20-newsgroups document collection. Experimental evaluation showed that DCF does not decrease the metrics used to assess the quality of document assignment and offers good cluster labels in return. The algorithm utilizes search engine’s data structures directly to scale to large document collections. Introduction Organizing unstructured collections of textual content into semantically related groups, from now on referred to as text clustering or clustering, provides unique ways of digesting large amounts of information. In the context of information retrieval and text mining, a general definition of clustering is the following: given a large set of documents, automatically discover diverse subsets of documents that share a similar topic. In typical applications input documents are first transformed into a mathematical model where each document is described by certain features. The most popular representation for text is the vector space model [Salton, 1989]. In the VSM, documents are expressed as rows in a matrix, where columns represent unique terms (features) and the intersection of a column and a row indicates the importance of a given word to the document. A model such as the VSM helps in calculation of similarity between documents (angle between document vectors) and thus facilitates application of various known (or modified) numerical clustering algorithms. While this is sufficient for many applications, problems arise when one needs to construct some representation of the discovered groups of documents—a label, a symbolic description for each cluster, something to represent the information that makes documents inside a cluster similar to each other and that would convey this information to the user. Cluster labeling problems are often present in modern text and Web mining applications with document browsing interfaces. The process of returning from the mathematical model of clusters to comprehensible, explanatory labels is difficult because text representation used for clustering rarely preserves the inflection and syntax of the original text. Clustering algorithms presented in literature usually fall back to the simplest form of cluster representation—a list of cluster’s keywords (most “central” terms in the cluster). Unfortunately, keywords are stripped from syntactical information and force the user to manually find the underlying concept which is often confusing. Motivation and Related Works The user of a retrieval system judges the clustering algorithm by what he sees in the output— clusters’ descriptions, not the final model which is usually incomprehensible for humans. 
The experiences with the text clustering framework Carrot (www.carrot2.org) resulted in posing a slightly different research problem (aligned with clustering but not exactly the same). We shifted the emphasis of a clustering method to providing comprehensible and accurate cluster labels in addition to discovery of document groups. We call this problem descriptive clustering: discovery of diverse groups of semantically related documents associated with a meaningful, comprehensible and compact text labels. This definition obviously leaves a great deal of freedom for interpretation because terms such as meaningful or accurate are very vague. We narrowed the set of requirements of descriptive clustering to the following ones: — comprehensibility understood as grammatical correctness (word order, inflection, agreement between words if applicable); — conciseness of labels. Phrases selected for a cluster label should minimize its total length (without sacrificing its comprehensibility); — transparency of the relationship between cluster label and cluster content, best explained by ability to answer questions as: “Why was this label selected for these documents?” and “Why is this document in a cluster labeled X?”. Little research has been done to address the requirements above. In the STC algorithm authors employed frequently recurring phrases as both document similarity feature and final cluster description [Zamir and Etzioni, 1999]. A follow-up work [Ferragina and Gulli, 2004] showed how to avoid certain STC limitations and use non-contiguous phrases (so-called approximate sentences). A different idea of ‘label-driven’ clustering appeared in clustering with committees algorithm [Pantel and Lin, 2002], where strongly associated terms related to unambiguous concepts were evaluated using semantic relationships from WordNet. We introduced the DCF approach in our previous work [Osiński and Weiss, 2005] and showed its feasibility using an algorithm called Lingo. Lingo used singular value decomposition of the term-document matrix to select good cluster labels among candidates extracted from the text (frequent phrases). The algorithm was designed to cluster results from Web search engines (short snippets and fragmented descriptions of original documents) and proved to provide diverse meaningful cluster labels. Lingo’s weak point is its limited scalability to full or even medium sized documents. In this",
"title": ""
}
] |
scidocsrr
|
d8651538ed422dc108590164f96bf59f
|
Towards a Semantic Driven Framework for Smart Grid Applications: Model-Driven Development Using CIM, IEC 61850 and IEC 61499
|
[
{
"docid": "e4bc807f5d5a9f81fdfadd3632ffa5d9",
"text": "openview network node manager designing and implementing an enterprise solution PDF high availability in websphere messaging solutions PDF designing web interfaces principles and patterns for rich interactions PDF pivotal certified spring enterprise integration specialist exam a study guide PDF active directory designing deploying and running active directory PDF application architecture for net designing applications and services patterns & practices PDF big data analytics from strategic planning to enterprise integration with tools techniques nosql and graph PDF designing and building security operations center PDF patterns of enterprise application architecture PDF java ee and net interoperability integration strategies patterns and best practices PDF making healthy places designing and building for health well-being and sustainability PDF architectural ceramics for the studio potter designing building installing PDF xml for data architects designing for reuse and integration the morgan kaufmann series in data management systems PDF",
"title": ""
},
{
"docid": "3be99b1ef554fde94742021e4782a2aa",
"text": "This is the second part of a two-part paper that has arisen from the work of the IEEE Power Engineering Society's Multi-Agent Systems (MAS) Working Group. Part I of this paper examined the potential value of MAS technology to the power industry, described fundamental concepts and approaches within the field of multi-agent systems that are appropriate to power engineering applications, and presented a comprehensive review of the power engineering applications for which MAS are being investigated. It also defined the technical issues which must be addressed in order to accelerate and facilitate the uptake of the technology within the power and energy sector. Part II of this paper explores the decisions inherent in engineering multi-agent systems for applications in the power and energy sector and offers guidance and recommendations on how MAS can be designed and implemented. Given the significant and growing interest in this field, it is imperative that the power engineering community considers the standards, tools, supporting technologies, and design methodologies available to those wishing to implement a MAS solution for a power engineering problem. This paper describes the various options available and makes recommendations on best practice. It also describes the problem of interoperability between different multi-agent systems and proposes how this may be tackled.",
"title": ""
},
{
"docid": "7e6bbd25c49b91fd5dc4248f3af918a7",
"text": "Model-driven engineering technologies offer a promising approach to address the inability of third-generation languages to alleviate the complexity of platforms and express domain concepts effectively.",
"title": ""
},
{
"docid": "152c11ef8449d53072bbdb28432641fa",
"text": "Flexible intelligent electronic devices (IEDs) are highly desirable to support free allocation of function to IED by means of software reconfiguration without any change of hardware. The application of generic hardware platforms and component-based software technology seems to be a good solution. Due to the advent of IEC 61850, generic hardware platforms with a standard communication interface can be used to implement different kinds of functions with high flexibility. The remaining challenge is the unified function model that specifies various software components with appropriate granularity and provides a framework to integrate them efficiently. This paper proposes the function-block (FB)-based function model for flexible IEDs. The standard FBs are established by combining the IEC 61850 model and the IEC 61499 model. The design of a simplified distance protection IED using standard FBs is described and investigated. The testing results of the prototype system in MATLAB/Simulink demonstrate the feasibility and flexibility of FB-based IEDs.",
"title": ""
},
{
"docid": "e3b1e52066d20e7c92e936cdb72cc32b",
"text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control. The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining of the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.",
"title": ""
}
] |
[
{
"docid": "7e848e98909c69378f624ce7db31dbfa",
"text": "Phenotypically identical cells can dramatically vary with respect to behavior during their lifespan and this variation is reflected in their molecular composition such as the transcriptomic landscape. Single-cell transcriptomics using next-generation transcript sequencing (RNA-seq) is now emerging as a powerful tool to profile cell-to-cell variability on a genomic scale. Its application has already greatly impacted our conceptual understanding of diverse biological processes with broad implications for both basic and clinical research. Different single-cell RNA-seq protocols have been introduced and are reviewed here-each one with its own strengths and current limitations. We further provide an overview of the biological questions single-cell RNA-seq has been used to address, the major findings obtained from such studies, and current challenges and expected future developments in this booming field.",
"title": ""
},
{
"docid": "8e88621c949e0df5a7eda810bfac113d",
"text": "About one fourth of patients with bipolar disorders (BD) have depressive episodes with a seasonal pattern (SP) coupled to a more severe disease. However, the underlying genetic influence on a SP in BD remains to be identified. We studied 269 BD Caucasian patients, with and without SP, recruited from university-affiliated psychiatric departments in France and performed a genetic single-marker analysis followed by a gene-based analysis on 349 single nucleotide polymorphisms (SNPs) spanning 21 circadian genes and 3 melatonin pathway genes. A SP in BD was nominally associated with 14 SNPs identified in 6 circadian genes: NPAS2, CRY2, ARNTL, ARNTL2, RORA and RORB. After correcting for multiple testing, using a false discovery rate approach, the associations remained significant for 5 SNPs in NPAS2 (chromosome 2:100793045-100989719): rs6738097 (pc = 0.006), rs12622050 (pc = 0.006), rs2305159 (pc = 0.01), rs1542179 (pc = 0.01), and rs1562313 (pc = 0.02). The gene-based analysis of the 349 SNPs showed that rs6738097 (NPAS2) and rs1554338 (CRY2) were significantly associated with the SP phenotype (respective Empirical p-values of 0.0003 and 0.005). The associations remained significant for rs6738097 (NPAS2) after Bonferroni correction. The epistasis analysis between rs6738097 (NPAS2) and rs1554338 (CRY2) suggested an additive effect. Genetic variations in NPAS2 might be a biomarker for a seasonal pattern in BD.",
"title": ""
},
{
"docid": "51f90bbb8519a82983eec915dd643d34",
"text": "The growth of vehicles in Yogyakarta Province, Indonesia is not proportional to the growth of roads. This problem causes severe traffic jam in many main roads. Common traffic anomalies detection using surveillance camera requires manpower and costly, while traffic anomalies detection with crowdsourcing mobile applications are mostly owned by private. This research aims to develop a real-time traffic classification by harnessing the power of social network data, Twitter. In this study, Twitter data are processed to the stages of preprocessing, feature extraction, and tweet classification. This study compares classification performance of three machine learning algorithms, namely Naive Bayes (NB), Support Vector Machine (SVM), and Decision Tree (DT). Experimental results show that SVM algorithm produced the best performance among the other algorithms with 99.77% and 99.87% of classification accuracy in balanced and imbalanced data, respectively. This research implies that social network service may be used as an alternative source for traffic anomalies detection by providing information of traffic flow condition in real-time.",
"title": ""
},
{
"docid": "2d7de6d43997449f4ad922bc71e385ad",
"text": "A microwave duplexer with high isolation is presented in this paper. The device is based on triple-mode filters that are built using silver-plated ceramic cuboids. To create a six-pole, six-transmission-zero filter in the DCS-1800 band, which is utilized in mobile communications, two cuboids are cascaded. To shift spurious harmonics, low dielectric caps are placed on the cuboid faces. These caps push the first cuboid spurious up in frequency by around 340 MHz compared to the uncapped cuboid, allowing a 700-MHz spurious free window. To verify the design, a DCS-1800 duplexer with 75-MHz widebands is built. It achieves around 1 dB of insertion loss for both the receive and transmit ports with around 70 dB of mutual isolation within only 20-MHz band separation, using a volume of only 30 cm3 .",
"title": ""
},
{
"docid": "78cae00cd81dc1f519d25ff6cb8f41c8",
"text": "We present a technique for efficiently synthesizing images of atmospheric clouds using a combination of Monte Carlo integration and neural networks. The intricacies of Lorenz-Mie scattering and the high albedo of cloud-forming aerosols make rendering of clouds---e.g. the characteristic silverlining and the \"whiteness\" of the inner body---challenging for methods based solely on Monte Carlo integration or diffusion theory. We approach the problem differently. Instead of simulating all light transport during rendering, we pre-learn the spatial and directional distribution of radiant flux from tens of cloud exemplars. To render a new scene, we sample visible points of the cloud and, for each, extract a hierarchical 3D descriptor of the cloud geometry with respect to the shading location and the light source. The descriptor is input to a deep neural network that predicts the radiance function for each shading configuration. We make the key observation that progressively feeding the hierarchical descriptor into the network enhances the network's ability to learn faster and predict with higher accuracy while using fewer coefficients. We also employ a block design with residual connections to further improve performance. A GPU implementation of our method synthesizes images of clouds that are nearly indistinguishable from the reference solution within seconds to minutes. Our method thus represents a viable solution for applications such as cloud design and, thanks to its temporal stability, for high-quality production of animated content.",
"title": ""
},
{
"docid": "e89123df2d60f011a3c6057030c42167",
"text": "Twitter enables large populations of end-users of software to publicly share their experiences and concerns about software systems in the form of micro-blogs. Such data can be collected and classified to help software developers infer users' needs, detect bugs in their code, and plan for future releases of their systems. However, automatically capturing, classifying, and presenting useful tweets is not a trivial task. Challenges stem from the scale of the data available, its unique format, diverse nature, and high percentage of irrelevant information and spam. Motivated by these challenges, this paper reports on a three-fold study that is aimed at leveraging Twitter as a main source of software user requirements. The main objective is to enable a responsive, interactive, and adaptive data-driven requirements engineering process. Our analysis is conducted using 4,000 tweets collected from the Twitter feeds of 10 software systems sampled from a broad range of application domains. The results reveal that around 50% of collected tweets contain useful technical information. The results also show that text classifiers such as Support Vector Machines and Naive Bayes can be very effective in capturing and categorizing technically informative tweets. Additionally, the paper describes and evaluates multiple summarization strategies for generating meaningful summaries of informative software-relevant tweets.",
"title": ""
},
{
"docid": "7b94828573579b393a371d64d5125f64",
"text": "This paper presents an artificial neural network(ANN) approach to electric load forecasting. The ANN is used to learn the relationship among past, current and future temperatures and loads. In order to provide the forecasted load, the ANN interpolates among the load and temperature data in a training data set. The average absolute errors of the one-hour and 24-hour ahead forecasts in our test on actual utility data are shown to be 1.40% and 2.06%, respectively. This compares with an average error of 4.22% for 24hour ahead forecasts with a currently used forecasting technique applied to the same data.",
"title": ""
},
{
"docid": "62c96348c818cbe9f1aa72df5ca717e6",
"text": "BACKGROUND\nChocolate consumption has long been associated with enjoyment and pleasure. Popular claims confer on chocolate the properties of being a stimulant, relaxant, euphoriant, aphrodisiac, tonic and antidepressant. The last claim stimulated this review.\n\n\nMETHOD\nWe review chocolate's properties and the principal hypotheses addressing its claimed mood altering propensities. We distinguish between food craving and emotional eating, consider their psycho-physiological underpinnings, and examine the likely 'positioning' of any effect of chocolate to each concept.\n\n\nRESULTS\nChocolate can provide its own hedonistic reward by satisfying cravings but, when consumed as a comfort eating or emotional eating strategy, is more likely to be associated with prolongation rather than cessation of a dysphoric mood.\n\n\nLIMITATIONS\nThis review focuses primarily on clarifying the possibility that, for some people, chocolate consumption may act as an antidepressant self-medication strategy and the processes by which this may occur.\n\n\nCONCLUSIONS\nAny mood benefits of chocolate consumption are ephemeral.",
"title": ""
},
{
"docid": "899349ba5a7adb31f5c7d24db6850a82",
"text": "Sampling is a core process for a variety of graphics applications. Among existing sampling methods, blue noise sampling remains popular thanks to its spatial uniformity and absence of aliasing artifacts. However, research so far has been mainly focused on blue noise sampling with a single class of samples. This could be insufficient for common natural as well as man-made phenomena requiring multiple classes of samples, such as object placement, imaging sensors, and stippling patterns.\n We extend blue noise sampling to multiple classes where each individual class as well as their unions exhibit blue noise characteristics. We propose two flavors of algorithms to generate such multi-class blue noise samples, one extended from traditional Poisson hard disk sampling for explicit control of sample spacing, and another based on our soft disk sampling for explicit control of sample count. Our algorithms support uniform and adaptive sampling, and are applicable to both discrete and continuous sample space in arbitrary dimensions. We study characteristics of samples generated by our methods, and demonstrate applications in object placement, sensor layout, and color stippling.",
"title": ""
},
{
"docid": "3a651ab1f8c05cfae51da6a14f6afef8",
"text": "The taxonomical relationship of Cylindrospermopsis raciborskii and Raphidiopsis mediterranea was studied by morphological and 16S rRNA gene diversity analyses of natural populations from Lake Kastoria, Greece. Samples were obtained during a bloom (23,830 trichomes mL ) in August 2003. A high diversity of apical cell, trichome, heterocyte and akinete morphology, trichome fragmentation and reproduction was observed. Trichomes were grouped into three dominant morphotypes: the typical and the non-heterocytous morphotype of C. raciborskii and the typical morphotype of R. mediterranea. A morphometric comparison of the dominant morphotypes showed significant differences in mean values of cell and trichome sizes despite the high overlap in the range of the respective size values. Additionally, two new morphotypes representing developmental stages of the species are described while a new mode of reproduction involving a structurally distinct reproductive cell is described for the first time in planktic Nostocales. A putative life-cycle, common for C. raciborskii and R. mediterranea is proposed revealing that trichome reproduction of R. mediterranea gives rise both to R. mediterranea and C. raciborskii non-heterocytous morphotypes. The phylogenetic analysis of partial 16S rRNA gene (ca. 920 bp) of the co-existing Cylindrospermopsis and Raphidiopsis morphotypes revealed only one phylotype which showed 99.54% similarity to R. mediterranea HB2 (China) and 99.19% similarity to C. raciborskii form 1 (Australia). We propose that all morphotypes comprised stages of the life cycle of C. raciborkii whereas R. mediterranea from Lake Kastoria (its type locality) represents non-heterocytous stages of Cylindrospermopsis complex life cycle. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d2a94d4dc8d8d5d71fc5f838f692544f",
"text": "This introductory chapter reviews the emergence, classification, and contemporary examples of cultural robots: social robots that are shaped by, producers of, or participants in culture. We review the emergence of social robotics as a field, and then track early references to the terminology and key lines of inquiry of Cultural Robotics. Four categories of the integration of culture with robotics are outlined; and the content of the contributing chapters following this introductory chapter are summarised within these categories.",
"title": ""
},
{
"docid": "7c427a383fe1e95f33049335371a84e4",
"text": "Gene set analysis is moving towards considering pathway topology as a crucial feature. Pathway elements are complex entities such as protein complexes, gene family members and chemical compounds. The conversion of pathway topology to a gene/protein networks (where nodes are a simple element like a gene/protein) is a critical and challenging task that enables topology-based gene set analyses. Unfortunately, currently available R/Bioconductor packages provide pathway networks only from single databases. They do not propagate signals through chemical compounds and do not differentiate between complexes and gene families. Here we present graphite, a Bioconductor package addressing these issues. Pathway information from four different databases is interpreted following specific biologically-driven rules that allow the reconstruction of gene-gene networks taking into account protein complexes, gene families and sensibly removing chemical compounds from the final graphs. The resulting networks represent a uniform resource for pathway analyses. Indeed, graphite provides easy access to three recently proposed topological methods. The graphite package is available as part of the Bioconductor software suite. graphite is an innovative package able to gather and make easily available the contents of the four major pathway databases. In the field of topological analysis graphite acts as a provider of biological information by reducing the pathway complexity considering the biological meaning of the pathway elements.",
"title": ""
},
{
"docid": "c1cdb2ab2a594e7fbb1dfdb261f0910c",
"text": "Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we avoid the need for an intermediate classification step. Our method uses a kernelised structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow our tracker to run at high frame rates, we (a) introduce a budgeting mechanism that prevents the unbounded growth in the number of support vectors that would otherwise occur during tracking, and (b) show how to implement tracking on the GPU. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased tracking performance.",
"title": ""
},
{
"docid": "ab3fb8980fa8d88e348f431da3d21ed4",
"text": "PIECE (Plant Intron Exon Comparison and Evolution) is a web-accessible database that houses intron and exon information of plant genes. PIECE serves as a resource for biologists interested in comparing intron-exon organization and provides valuable insights into the evolution of gene structure in plant genomes. Recently, we updated PIECE to a new version, PIECE 2.0 (http://probes.pw.usda.gov/piece or http://aegilops.wheat.ucdavis.edu/piece). PIECE 2.0 contains annotated genes from 49 sequenced plant species as compared to 25 species in the previous version. In the current version, we also added several new features: (i) a new viewer was developed to show phylogenetic trees displayed along with the structure of individual genes; (ii) genes in the phylogenetic tree can now be also grouped according to KOG (The annotation of Eukaryotic Orthologous Groups) and KO (KEGG Orthology) in addition to Pfam domains; (iii) information on intronless genes are now included in the database; (iv) a statistical summary of global gene structure information for each species and its comparison with other species was added; and (v) an improved GSDraw tool was implemented in the web server to enhance the analysis and display of gene structure. The updated PIECE 2.0 database will be a valuable resource for the plant research community for the study of gene structure and evolution.",
"title": ""
},
{
"docid": "433340f3392257a8ac830215bf5e3ef2",
"text": "A compact Substrate Integrated Waveguide (SIW) Leaky-Wave Antenna (LWA) is proposed. Internal vias are inserted in the SIW in order to have narrow walls, and so reducing the size of the SIW-LWA, the new structure is called Slow Wave - Substrate Integrated Waveguide - Leaky Wave Antenna (SW-SIW-LWA), since inserting the vias induce the SW effect. After designing the antenna and simulating with HFSS a reduction of 30% of the transverse side of the antenna is attained while maintaining an acceptable gain. Other parameters like the radiation efficiency, Gain, directivity, and radiation pattern are analyzed. Finally a Comparison of our miniaturization technique with Half-Mode Substrate Integrated Waveguide (HMSIW) technique realized in recent articles is done, shows that SW-SIW-LWA technique could be a good candidate for SIW miniaturization.",
"title": ""
},
{
"docid": "1b24b5d1936377c3659273a68aafeb35",
"text": "In this paper, hand dorsal images acquired under infrared light are used to design an accurate personal authentication system. Each of the image is segmented into palm dorsal and fingers which are subsequently used to extract palm dorsal veins and infrared hand geometry features respectively. A new quality estimation algorithm is proposed to estimate the quality of palm dorsal which assigns low values to the pixels containing hair or skin texture. Palm dorsal is enhanced using filtering. For vein extraction, information provided by the enhanced image and the vein quality is consolidated using a variational approach. The proposed vein extraction can handle the issues of hair, skin texture and variable width veins so as to extract the genuine veins accurately. Several post processing techniques are introduced in this paper for accurate feature extraction of infrared hand geometry features. Matching scores are obtained by matching palm dorsal veins and infrared hand geometry features. These are eventually fused for authentication. For performance evaluation, a database of 1500 hand images acquired from 300 different hands is created. Experimental results demonstrate the superiority of the proposed system over existing",
"title": ""
},
{
"docid": "49cf26b6c6dde96df9009a68758ee506",
"text": "Dynamic imaging is a recently proposed action description paradigm for simultaneously capturing motion and temporal evolution information, particularly in the context of deep convolutional neural networks (CNNs). Compared with optical flow for motion characterization, dynamic imaging exhibits superior efficiency and compactness. Inspired by the success of dynamic imaging in RGB video, this study extends it to the depth domain. To better exploit three-dimensional (3D) characteristics, multi-view dynamic images are proposed. In particular, the raw depth video is densely projected with ∗Corresponding author. Tel.: +86 27 87558918 Email addresses: Yang Xiao@hust.edu.cn (Yang Xiao), chenjun2015@hust.edu.cn (Jun Chen), yancheng wang@hust.edu.cn (Yancheng Wang), zgcao@hust.edu.cn (Zhiguo Cao), zhouty@ihpc.a-star.edu.sg (Joey Tianyi Zhou), xbai@hust.edu.cn (Xiang Bai) Preprint submitted to Information Sciences December 31, 2018 ar X iv :1 80 6. 11 26 9v 3 [ cs .C V ] 2 7 D ec 2 01 8 respect to different virtual imaging viewpoints by rotating the virtual camera within the 3D space. Subsequently, dynamic images are extracted from the obtained multi-view depth videos and multi-view dynamic images are thus constructed from these images. Accordingly, more view-tolerant visual cues can be involved. A novel CNN model is then proposed to perform feature learning on multi-view dynamic images. Particularly, the dynamic images from different views share the same convolutional layers but correspond to different fully connected layers. This is aimed at enhancing the tuning effectiveness on shallow convolutional layers by alleviating the gradient vanishing problem. Moreover, as the spatial occurrence variation of the actions may impair the CNN, an action proposal approach is also put forth. In experiments, the proposed approach can achieve state-of-the-art performance on three challenging datasets.",
"title": ""
},
{
"docid": "cf02d97cdcc1a4be51ed0af2af771b7d",
"text": "Bowen's disease is a squamous cell carcinoma in situ and has the potential to progress to a squamous cell carcinoma. The authors treated two female patients (a 39-year-old and a 41-year-old) with Bowen's disease in the vulva area using topical photodynamic therapy (PDT), involving the use of 5-aminolaevulinic acid and a light-emitting diode device. The light was administered at an intensity of 80 mW/cm(2) for a dose of 120 J/cm(2) biweekly for 6 cycles. The 39-year-old patient showed excellent clinical improvement, but the other patient achieved only a partial response. Even though one patient underwent a total excision 1 year later due to recurrence, both patients were satisfied with the cosmetic outcomes of this therapy and the partial improvement over time. The common side effect of PDT was a stinging sensation. PDT provides a relatively effective and useful alternative treatment for Bowen's disease in the vulva area.",
"title": ""
},
{
"docid": "e33dd9c497488747f93cfcc1aa6fee36",
"text": "The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges and future research areas in the domain of IoT. IoT definitions from different perspective in academic and industry communities are also discussed and compared. Finally some major issues of future research in IoT are identified and discussed briefly.",
"title": ""
},
{
"docid": "33b129cb569c979c81c0cb1c0a5b9594",
"text": "During animal development, accurate control of tissue specification and growth are critical to generate organisms of reproducible shape and size. The eye-antennal disc epithelium of Drosophila is a powerful model system to identify the signaling pathway and transcription factors that mediate and coordinate these processes. We show here that the Yorkie (Yki) pathway plays a major role in tissue specification within the developing fly eye disc epithelium at a time when organ primordia and regional identity domains are specified. RNAi-mediated inactivation of Yki, or its partner Scalloped (Sd), or increased activity of the upstream negative regulators of Yki cause a dramatic reorganization of the eye disc fate map leading to specification of the entire disc epithelium into retina. On the contrary, constitutive expression of Yki suppresses eye formation in a Sd-dependent fashion. We also show that knockdown of the transcription factor Homothorax (Hth), known to partner Yki in some developmental contexts, also induces an ectopic retina domain, that Yki and Scalloped regulate Hth expression, and that the gain-of-function activity of Yki is partially dependent on Hth. Our results support a critical role for Yki- and its partners Sd and Hth--in shaping the fate map of the eye epithelium independently of its universal role as a regulator of proliferation and survival.",
"title": ""
}
] |
scidocsrr
|
ddbde03fe2445a7daad4ba7f9c09aec8
|
LBANN: livermore big artificial neural network HPC toolkit
|
[
{
"docid": "091279f6b95594f9418591264d0d7e3c",
"text": "A great deal of research has focused on algorithms for learning features from unlabeled data. Indeed, much progress has been made on benchmark datasets like NORB and CIFAR by employing increasingly complex unsupervised learning algorithms and deep models. In this paper, however, we show that several simple factors, such as the number of hidden nodes in the model, may be more important to achieving high performance than the learning algorithm or the depth of the model. Specifically, we will apply several offthe-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to CIFAR, NORB, and STL datasets using only singlelayer networks. We then present a detailed analysis of the effect of changes in the model setup: the receptive field size, number of hidden nodes (features), the step-size (“stride”) between extracted features, and the effect of whitening. Our results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, we achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features. More surprisingly, our best performance is based on K-means clustering, which is extremely fast, has no hyperparameters to tune beyond the model structure itself, and is very easy to implement. Despite the simplicity of our system, we achieve accuracy beyond all previously published results on the CIFAR-10 and NORB datasets (79.6% and 97.2% respectively). Appearing in Proceedings of the 14 International Conference on Artificial Intelligence and Statistics (AISTATS) 2011, Fort Lauderdale, FL, USA. Volume 15 of JMLR: W&CP 15. Copyright 2011 by the authors.",
"title": ""
}
] |
[
{
"docid": "1d7035cc5b85e13be6ff932d39740904",
"text": "This paper investigates an application of mobile sensing: detection of potholes on roads. We describe a system and an associated algorithm to monitor the pothole conditions on the road. This system, that we call the Pothole Detection System, uses Accelerometer Sensor of Android smartphone for detection of potholes and GPS for plotting the location of potholes on Google Maps. Using a simple machine-learning approach, we show that we are able to identify the potholes from accelerometer data. The pothole detection algorithm detects the potholes in real-time. A runtime graph has been shown with the help of a charting software library ‘AChartEngine’. Accelerometer data and pothole data can be mailed to any email address in the form of a ‘.csv’ file. While designing the pothole detection algorithm we have assumed some threshold values on x-axis and z-axis. These threshold values are justified using a neural network technique which confirms an accuracy of 90%-95%. The neural network has been implemented using a machine learning framework available for Android called ‘Encog’. We evaluate our system on the outputs obtained using two, three and four wheelers. Keywords— Machine Learning, Context, Android, Neural Networks, Pothole, Sensor",
"title": ""
},
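The pothole-detection record above describes a simple threshold rule on the smartphone accelerometer's x- and z-axes combined with GPS tagging. As an illustration only (not the authors' code), a minimal Python sketch of such a rule follows; the threshold values, field names and sample data are hypothetical assumptions.

```python
# Illustrative sketch of a threshold-based pothole detector, assuming
# accelerometer samples with x/z axes and a GPS fix per sample.
# Threshold values below are hypothetical, not taken from the record above.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Sample:
    x: float    # lateral acceleration (m/s^2)
    z: float    # vertical acceleration (m/s^2)
    lat: float  # GPS latitude
    lon: float  # GPS longitude

Z_THRESHOLD = 14.0  # assumed spike threshold on the vertical axis
X_THRESHOLD = 4.0   # assumed threshold on the lateral axis

def detect_potholes(samples: List[Sample]) -> List[Tuple[float, float]]:
    """Return (lat, lon) of samples whose readings exceed both thresholds."""
    return [(s.lat, s.lon)
            for s in samples
            if abs(s.z) > Z_THRESHOLD and abs(s.x) > X_THRESHOLD]

if __name__ == "__main__":
    rides = [Sample(1.2, 9.8, 28.61, 77.20), Sample(5.1, 16.3, 28.62, 77.21)]
    print(detect_potholes(rides))  # -> [(28.62, 77.21)]
```

In practice the thresholds would be tuned against labelled rides, much as the record describes validating its chosen values with a neural network.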
{
"docid": "1dbaa72cd95c32d1894750357e300529",
"text": "In recognizing the importance of educating aspiring scientists in the responsible conduct of research (RCR), the Office of Research Integrity (ORI) began sponsoring the creation of instructional resources to address this pressing need in 2002. The present guide on avoiding plagiarism and other inappropriate writing practices was created to help students, as well as professionals, identify and prevent such malpractices and to develop an awareness of ethical writing and authorship. This guide is one of the many products stemming from ORI’s effort to promote the RCR.",
"title": ""
},
{
"docid": "738555e605ee2b90ff99bef6d434162d",
"text": "In this paper we present two deep-learning systems that competed at SemEval-2017 Task 4 “Sentiment Analysis in Twitter”. We participated in all subtasks for English tweets, involving message-level and topic-based sentiment polarity classification and quantification. We use Long Short-Term Memory (LSTM) networks augmented with two kinds of attention mechanisms, on top of word embeddings pre-trained on a big collection of Twitter messages. Also, we present a text processing tool suitable for social network messages, which performs tokenization, word normalization, segmentation and spell correction. Moreover, our approach uses no hand-crafted features or sentiment lexicons. We ranked 1st (tie) in Subtask A, and achieved very competitive results in the rest of the Subtasks. Both the word embeddings and our text processing tool1 are available to the research community.",
"title": ""
},
{
"docid": "3a6f2d4fa9531d9bc8c2dbf2110990f3",
"text": "In a Grid Connected Photo-voltaic System (GCPVS) maximum power is to be drawn from the PV array and has to be injected into the Grid, using suitable maximum power point tracking algorithms, converter topologies and control algorithms. Usually converter topologies such as buck, boost, buck-boost, sepic, flyback, push pull etc. are used. Loss factors such as irradiance, temperature, shading effects etc. have zero loss in a two stage system, but additional converter used will lead to an extra loss which makes the single stage system more efficient when compared to a two stage systems, in applications like standalone and grid connected renewable energy systems. In Cuk converter the source and load side are separated via a capacitor thus energy transfer from the source side to load side occurs through this capacitor which leads to less current ripples at the load side. Thus in this paper, a Simulink model of two stage GCPVS using Cuk converter is being designed, simulated and is compared with a GCPVS using Boost Converter. For tracking the maximum power point the most common and accurate method called incremental conductance algorithm is used. And the inverter control is done using the dc bus voltage algorithm.",
"title": ""
},
{
"docid": "1e7f14531caad40797594f9e4c188697",
"text": "The Drosophila melanogaster germ plasm has become the paradigm for understanding both the assembly of a specific cytoplasmic localization during oogenesis and its function. The posterior ooplasm is necessary and sufficient for the induction of germ cells. For its assembly, localization of gurken mRNA and its translation at the posterior pole of early oogenic stages is essential for establishing the posterior pole of the oocyte. Subsequently, oskar mRNA becomes localized to the posterior pole where its translation leads to the assembly of a functional germ plasm. Many gene products are required for producing the posterior polar plasm, but only oskar, tudor, valois, germcell-less and some noncoding RNAs are required for germ cell formation. A key feature of germ cell formation is the precocious segregation of germ cells, which isolates the primordial germ cells from mRNA turnover, new transcription, and continued cell division. nanos is critical for maintaining the transcription quiescent state and it is required to prevent transcription of Sex-lethal in pole cells. In spite of the large body of information about the formation and function of the Drosophila germ plasm, we still do not know what specifically is required to cause the pole cells to be germ cells. A series of unanswered problems is discussed in this chapter.",
"title": ""
},
{
"docid": "fc172716fe01852d53d0ae5d477f3afc",
"text": "Distantly supervised relation extraction greatly reduces human efforts in extracting relational facts from unstructured texts. However, it suffers from noisy labeling problem, which can degrade its performance. Meanwhile, the useful information expressed in knowledge graph is still underutilized in the state-of-the-art methods for distantly supervised relation extraction. In the light of these challenges, we propose CORD, a novel COopeRative Denoising framework, which consists two base networks leveraging text corpus and knowledge graph respectively, and a cooperative module involving their mutual learning by the adaptive bi-directional knowledge distillation and dynamic ensemble with noisy-varying instances. Experimental results on a real-world dataset demonstrate that the proposed method reduces the noisy labels and achieves substantial improvement over the state-of-the-art methods.",
"title": ""
},
{
"docid": "2da528dcbf7a97875e0a5a1a79cbaa21",
"text": "Convolutional neural net-like structures arise from training an unstructured deep belief network (DBN) using structured simulation data of 2-D Ising Models at criticality. The convolutional structure arises not just because such a structure is optimal for the task, but also because the belief network automatically engages in block renormalization procedures to “rescale” or “encode” the input, a fundamental approach in statistical mechanics. This work primarily reviews the work of Mehta et al. [1], the group that first made the discovery that such a phenomenon occurs, and replicates their results training a DBN on Ising models, confirming that weights in the DBN become spatially concentrated during training on critical Ising samples.",
"title": ""
},
{
"docid": "6f9bca88fbb59e204dd8d4ae2548bd2d",
"text": "As the biomechanical literature concerning softball pitching is evolving, there are no data to support the mechanics of softball position players. Pitching literature supports the whole kinetic chain approach including the lower extremity in proper throwing mechanics. The purpose of this project was to examine the gluteal muscle group activation patterns and their relationship with shoulder and elbow kinematics and kinetics during the overhead throwing motion of softball position players. Eighteen Division I National Collegiate Athletic Association softball players (19.2 ± 1.0 years; 68.9 ± 8.7 kg; 168.6 ± 6.6 cm) who were listed on the active playing roster volunteered. Electromyographic, kinematic, and kinetic data were collected while players caught a simulated hit or pitched ball and perform their position throw. Pearson correlation revealed a significant negative correlation between non-throwing gluteus maximus during the phase of maximum external rotation to maximum internal rotation (MIR) and elbow moments at ball release (r = −0.52). While at ball release, trunk flexion and rotation both had a positive relationship with shoulder moments at MIR (r = 0.69, r = 0.82, respectively) suggesting that the kinematic actions of the pelvis and trunk are strongly related to the actions of the shoulder during throwing.",
"title": ""
},
{
"docid": "a6b65ee65eea7708b4d25fb30444c8e6",
"text": "The Intelligent vehicle is experiencing revolutionary growth in research and industry, but it still suffers from a lot of security vulnerabilities. Traditional security methods are incapable of providing secure IV, mainly in terms of communication. In IV communication, major issues are trust and data accuracy of received and broadcasted reliable data in the communication channel. Blockchain technology works for the cryptocurrency, Bitcoin which has been recently used to build trust and reliability in peer-to-peer networks with similar topologies to IV Communication world. IV to IV, communicate in a decentralized manner within communication networks. In this paper, we have proposed, Trust Bit (TB) for IV communication among IVs using Blockchain technology. Our proposed trust bit provides surety for each IVs broadcasted data, to be secure and reliable in every particular networks. Our Trust Bit is a symbol of trustworthiness of vehicles behavior, and vehicles legal and illegal action. Our proposal also includes a reward system, which can exchange some TB among IVs, during successful communication. For the data management of this trust bit, we have used blockchain technology in the vehicular cloud, which can store all Trust bit details and can be accessed by IV anywhere and anytime. Our proposal provides secure and reliable information. We evaluate our proposal with the help of IV communication on intersection use case which analyzes a variety of trustworthiness between IVs during communication.",
"title": ""
},
{
"docid": "b1b56020802d11d1f5b2badb177b06b9",
"text": "The explosive growth of the world-wide-web and the emergence of e-commerce has led to the development of recommender systems--a personalized information filtering technology used to identify a set of N items that will be of interest to a certain user. User-based and model-based collaborative filtering are the most successful technology for building recommender systems to date and is extensively used in many commercial recommender systems. The basic assumption in these algorithms is that there are sufficient historical data for measuring similarity between products or users. However, this assumption does not hold in various application domains such as electronics retail, home shopping network, on-line retail where new products are introduced and existing products disappear from the catalog. Another such application domains is home improvement retail industry where a lot of products (such as window treatments, bathroom, kitchen or deck) are custom made. Each product is unique and there are very little duplicate products. In this domain, the probability of the same exact two products bought together is close to zero. In this paper, we discuss the challenges of providing recommendation in the domains where no sufficient historical data exist for measuring similarity between products or users. We present feature-based recommendation algorithms that overcome the limitations of the existing top-n recommendation algorithms. The experimental evaluation of the proposed algorithms in the real life data sets shows a great promise. The pilot project deploying the proposed feature-based recommendation algorithms in the on-line retail web site shows 75% increase in the recommendation revenue for the first 2 month period.",
"title": ""
},
{
"docid": "cdfec1296a168318f773bb7ef0bfb307",
"text": "Today service markets are becoming business reality as for example Amazon's EC2 spot market. However, current research focusses on simplified consumer-provider service markets only. Taxes are an important market element which has not been considered yet for service markets. This paper introduces and evaluates the effects of tax systems for IaaS markets which trade virtual machines. As a digital good with well defined characteristics like storage or processing power a virtual machine can be taxed by the tax authority using different tax systems. Currently the value added tax is widely used for taxing virtual machines only. The main contribution of the paper is the so called CloudTax component, a framework to simulate and evaluate different tax systems on service markets. It allows to introduce economical principles and phenomenons like the Laffer Curve or tax incidences. The CloudTax component is based on the CloudSim simulation framework using the Bazaar-Extension for comprehensive economic simulations. We show that tax mechanisms strongly influence the efficiency of negotiation processes in the Cloud market.",
"title": ""
},
{
"docid": "73f8a5e5e162cc9b1ed45e13a06e78a5",
"text": "Two major projects in the U.S. and Europe have joined in a collaboration to work toward achieving interoperability among language resources. In the U.S., the project, Sustainable Interoperability for Language Technology (SILT) has been funded by the National Science Foundation under the INTEROP program, and in Europe, FLaReNet, Fostering Language Resources Network, has been funded by the European Commission under the eContentPlus framework. This international collaborative effort involves members of the language processing community and others working in related areas to build consensus regarding the sharing of data and technologies for language resources and applications, to work towards interoperability of existing data, and, where possible, to promote standards for annotation and resource building. This paper focuses on the results of a recent workshop whose goal was to arrive at operational definitions for interoperability over four thematic areas, including metadata for describing language resources, data categories and their semantics, resource publication requirements, and software sharing.",
"title": ""
},
{
"docid": "70593bbda6c88f0ac10e26768d74b3cd",
"text": "Type 2 diabetes mellitus (T2DM) is a chronic disease that oen results in multiple complications. Risk prediction and proling of T2DM complications is critical for healthcare professionals to design personalized treatment plans for patients in diabetes care for improved outcomes. In this paper, we study the risk of developing complications aer the initial T2DM diagnosis from longitudinal patient records. We propose a novel multi-task learning approach to simultaneously model multiple complications where each task corresponds to the risk modeling of one complication. Specically, the proposed method strategically captures the relationships (1) between the risks of multiple T2DM complications, (2) between the dierent risk factors, and (3) between the risk factor selection paerns. e method uses coecient shrinkage to identify an informative subset of risk factors from high-dimensional data, and uses a hierarchical Bayesian framework to allow domain knowledge to be incorporated as priors. e proposed method is favorable for healthcare applications because in additional to improved prediction performance, relationships among the dierent risks and risk factors are also identied. Extensive experimental results on a large electronic medical claims database show that the proposed method outperforms state-of-the-art models by a signicant margin. Furthermore, we show that the risk associations learned and the risk factors identied lead to meaningful clinical insights. CCS CONCEPTS •Information systems→ Data mining; •Applied computing → Health informatics;",
"title": ""
},
{
"docid": "c3ef6598f869e40fc399c89baf0dffd8",
"text": "In this article, a novel hybrid genetic algorithm is proposed. The selection operator, crossover operator and mutation operator of the genetic algorithm have effectively been improved according to features of Sudoku puzzles. The improved selection operator has impaired the similarity of the selected chromosome and optimal chromosome in the current population such that the chromosome with more abundant genes is more likely to participate in crossover; such a designed crossover operator has possessed dual effects of self-experience and population experience based on the concept of tactfully combining PSO, thereby making the whole iterative process highly directional; crossover probability is a random number and mutation probability changes along with the fitness value of the optimal solution in the current population such that more possibilities of crossover and mutation could then be considered during the algorithm iteration. The simulation results show that the convergence rate and stability of the novel algorithm has significantly been improved.",
"title": ""
},
{
"docid": "8222f8eae81c954e8e923cbd883f8322",
"text": "Work stealing is a promising approach to constructing multithreaded program runtimes of parallel programming languages. This paper presents HERMES, an energy-efficient work-stealing language runtime. The key insight is that threads in a work-stealing environment -- thieves and victims - have varying impacts on the overall program running time, and a coordination of their execution \"tempo\" can lead to energy efficiency with minimal performance loss. The centerpiece of HERMES is two complementary algorithms to coordinate thread tempo: the workpath-sensitive algorithm determines tempo for each thread based on thief-victim relationships on the execution path, whereas the workload-sensitive algorithm selects appropriate tempo based on the size of work-stealing deques. We construct HERMES on top of Intel Cilk Plus's runtime, and implement tempo adjustment through standard Dynamic Voltage and Frequency Scaling (DVFS). Benchmarks running on HERMES demonstrate an average of 11-12% energy savings with an average of 3-4% performance loss through meter-based measurements over commercial CPUs.",
"title": ""
},
{
"docid": "26886ff5cb6301dd960e79d8fb3f9362",
"text": "We propose a preprocessing method to improve the performance of Principal Component Analysis (PCA) for classification problems composed of two steps; in the first step, the weight of each feature is calculated by using a feature weighting method. Then the features with weights larger than a predefined threshold are selected. The selected relevant features are then subject to the second step. In the second step, variances of features are changed until the variances of the features are corresponded to their importance. By taking the advantage of step 2 to reveal the class structure, we expect that the performance of PCA increases in classification problems. Results confirm the effectiveness of our proposed methods.",
"title": ""
},
{
"docid": "21a45086509bd0edb1b578a8a904bf50",
"text": "Distributions are often used to model uncertainty in many scientific datasets. To preserve the correlation among the spatially sampled grid locations in the dataset, various standard multivariate distribution models have been proposed in visualization literature. These models treat each grid location as a univariate random variable which models the uncertainty at that location. Standard multivariate distributions (both parametric and nonparametric) assume that all the univariate marginals are of the same type/family of distribution. But in reality, different grid locations show different statistical behavior which may not be modeled best by the same type of distribution. In this paper, we propose a new multivariate uncertainty modeling strategy to address the needs of uncertainty modeling in scientific datasets. Our proposed method is based on a statistically sound multivariate technique called Copula, which makes it possible to separate the process of estimating the univariate marginals and the process of modeling dependency, unlike the standard multivariate distributions. The modeling flexibility offered by our proposed method makes it possible to design distribution fields which can have different types of distribution (Gaussian, Histogram, KDE etc.) at the grid locations, while maintaining the correlation structure at the same time. Depending on the results of various standard statistical tests, we can choose an optimal distribution representation at each location, resulting in a more cost efficient modeling without significantly sacrificing on the analysis quality. To demonstrate the efficacy of our proposed modeling strategy, we extract and visualize uncertain features like isocontours and vortices in various real world datasets. We also study various modeling criterion to help users in the task of univariate model selection.",
"title": ""
},
{
"docid": "063389c654f44f34418292818fc781e7",
"text": "In a cross-disciplinary study, we carried out an extensive literature review to increase understanding of vulnerability indicators used in the disciplines of earthquakeand flood vulnerability assessments. We provide insights into potential improvements in both fields by identifying and comparing quantitative vulnerability indicators grouped into physical and social categories. Next, a selection of indexand curve-based vulnerability models that use these indicators are described, comparing several characteristics such as temporal and spatial aspects. Earthquake vulnerability methods traditionally have a strong focus on object-based physical attributes used in vulnerability curve-based models, while flood vulnerability studies focus more on indicators applied to aggregated land-use classes in curve-based models. In assessing the differences and similarities between indicators used in earthquake and flood vulnerability models, we only include models that separately assess either of the two hazard types. Flood vulnerability studies could be improved using approaches from earthquake studies, such as developing object-based physical vulnerability curve assessments and incorporating time-of-the-day-based building occupation patterns. Likewise, earthquake assessments could learn from flood studies by refining their selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend future studies for exploring risk assessment methodologies across different hazard types.",
"title": ""
},
{
"docid": "c760e6db820733dc3f57306eef81e5c9",
"text": "Recently, applying the novel data mining techniques for financial time-series forecasting has received much research attention. However, most researches are for the US and European markets, with only a few for Asian markets. This research applies Support-Vector Machines (SVMs) and Back Propagation (BP) neural networks for six Asian stock markets and our experimental results showed the superiority of both models, compared to the early researches.",
"title": ""
},
{
"docid": "2c0770b42050c4d67bfc7e723777baa6",
"text": "We describe a framework for understanding how age-related changes in adult development affect work motivation, and, building on recent life-span theories and research on cognitive abilities, personality, affect, vocational interests, values, and self-concept, identify four intraindividual change trajectories (loss, gain, reorganization, and exchange). We discuss implications of the integrative framework for the use and effectiveness of different motivational strategies with midlife and older workers in a variety of jobs, as well as abiding issues and future research directions.",
"title": ""
}
] |
scidocsrr
|
8c2975ba60444927e58c923e7e5a9a71
|
Empirical evidence for resource-rational anchoring and adjustment.
|
[
{
"docid": "637a7d7e0c33b6f63f17f9ec77add5a6",
"text": "In spite of its familiar phenomenology, the mechanistic basis for mental effort remains poorly understood. Although most researchers agree that mental effort is aversive and stems from limitations in our capacity to exercise cognitive control, it is unclear what gives rise to those limitations and why they result in an experience of control as costly. The presence of these control costs also raises further questions regarding how best to allocate mental effort to minimize those costs and maximize the attendant benefits. This review explores recent advances in computational modeling and empirical research aimed at addressing these questions at the level of psychological process and neural mechanism, examining both the limitations to mental effort exertion and how we manage those limited cognitive resources. We conclude by identifying remaining challenges for theoretical accounts of mental effort as well as possible applications of the available findings to understanding the causes of and potential solutions for apparent failures to exert the mental effort required of us.",
"title": ""
},
{
"docid": "68477e8a53020dd0b98014a6eab96255",
"text": "This article reviews a diverse set of proposals for dual processing in higher cognition within largely disconnected literatures in cognitive and social psychology. All these theories have in common the distinction between cognitive processes that are fast, automatic, and unconscious and those that are slow, deliberative, and conscious. A number of authors have recently suggested that there may be two architecturally (and evolutionarily) distinct cognitive systems underlying these dual-process accounts. However, it emerges that (a) there are multiple kinds of implicit processes described by different theorists and (b) not all of the proposed attributes of the two kinds of processing can be sensibly mapped on to two systems as currently conceived. It is suggested that while some dual-process theories are concerned with parallel competing processes involving explicit and implicit knowledge systems, others are concerned with the influence of preconscious processes that contextualize and shape deliberative reasoning and decision-making.",
"title": ""
}
] |
[
{
"docid": "6b3cdd024b6232e5226cae2c15463509",
"text": "Blended learning involves the combination of two fields of concern: education and educational technology. To gain the scholarly recognition from educationists, it is necessary to revisit its models and educational theory underpinned. This paper respond to this issue by reviewing models related to blended learning based on two prominent educational theorists, Maslow’s and Vygotsky’s view. Four models were chosen due to their holistic ideas or vast citations related to blended learning: (1) E-Moderation Model emerging from Open University of UK; (2) Learning Ecology Model by Sun Microsoft System; (3) Blended Learning Continuum in University of Glamorgan; and (4) Inquirybased Framework by Garrison and Vaughan. The discussion of each model concerning pedagogical impact to learning and teaching are made. Critical review of the models in accordance to Maslow or Vygotsky is argued. Such review is concluded with several key principles for the design and practice in",
"title": ""
},
{
"docid": "65840e476736336c9cb0fa18f8321492",
"text": "Saliency methods aim to explain the predictions of deep neural networks. These methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction. We use a simple and common pre-processing step —adding a constant shift to the input data— to show that a transformation with no effect on the model can cause numerous methods to incorrectly attribute. In order to guarantee reliability, we posit that methods should fulfill input invariance, the requirement that a saliency method mirror the sensitivity of the model with respect to transformations of the input. We show, through several examples, that saliency methods that do not satisfy input invariance result in misleading attribution.",
"title": ""
},
{
"docid": "f82972fcda26b339eb078bbcaad26cdc",
"text": "Colorectal cancer (CRC) shows variable underlying molecular changes with two major mechanisms of genetic instability: chromosomal instability and microsatellite instability. This review aims to delineate the different pathways of colorectal carcinogenesis and provide an overview of the most recent advances in molecular pathological classification systems for colorectal cancer. Two molecular pathological classification systems for CRC have recently been proposed. Integrated molecular analysis by The Cancer Genome Atlas project is based on a wide-ranging genomic and transcriptomic characterisation study of CRC using array-based and sequencing technologies. This approach classified CRC into two major groups consistent with previous classification systems: (1) ∼16 % hypermutated cancers with either microsatellite instability (MSI) due to defective mismatch repair (∼13 %) or ultramutated cancers with DNA polymerase epsilon proofreading mutations (∼3 %); and (2) ∼84 % non-hypermutated, microsatellite stable (MSS) cancers with a high frequency of DNA somatic copy number alterations, which showed common mutations in APC, TP53, KRAS, SMAD4, and PIK3CA. The recent Consensus Molecular Subtypes (CMS) Consortium analysing CRC expression profiling data from multiple studies described four CMS groups: almost all hypermutated MSI cancers fell into the first category CMS1 (MSI-immune, 14 %) with the remaining MSS cancers subcategorised into three groups of CMS2 (canonical, 37 %), CMS3 (metabolic, 13 %) and CMS4 (mesenchymal, 23 %), with a residual unclassified group (mixed features, 13 %). Although further research is required to validate these two systems, they may be useful for clinical trial designs and future post-surgical adjuvant treatment decisions, particularly for tumours with aggressive features or predicted responsiveness to immune checkpoint blockade.",
"title": ""
},
{
"docid": "6daa93f2a7cfaaa047ecdc04fb802479",
"text": "Facial landmark localization is important to many facial recognition and analysis tasks, such as face attributes analysis, head pose estimation, 3D face modelling, and facial expression analysis. In this paper, we propose a new approach to localizing landmarks in facial image by deep convolutional neural network (DCNN). We make two enhancements on the CNN to adapt it to the feature localization task as follows. Firstly, we replace the commonly used max pooling by depth-wise convolution to obtain better localization performance. Secondly, we define a response map for each facial points as a 2D probability map indicating the presence likelihood, and train our model with a KL divergence loss. To obtain robust localization results, our approach first takes the expectations of the response maps of Enhanced CNN and then applies auto-encoder model to the global shape vector, which is effective to rectify the outlier points by the prior global landmark configurations. The proposed ECNN method achieves 5.32% mean error on the experiments on the 300-W dataset, which is comparable to the state-of-the-art performance on this standard benchmark, showing the effectiveness of our methods.",
"title": ""
},
{
"docid": "e9046bfaf5488138ca5c2ff0067646a8",
"text": "In this paper we consider several new versions of approximate string matching with gaps. The main characteristic of these new versions is the existence of gaps in the matching of a given pattern in a text. Algorithms are devised for each version and their time and space complexities are stated. These specific versions of approximate string matching have various applications in computerized music analysis. CR Classification: F.2.2",
"title": ""
},
{
"docid": "ab07e92f052a03aac253fabadaea4ab3",
"text": "As news is increasingly accessed on smartphones and tablets, the need for personalising news app interactions is apparent. We report a series of three studies addressing key issues in the development of adaptive news app interfaces. We first surveyed users' news reading preferences and behaviours; analysis revealed three primary types of reader. We then implemented and deployed an Android news app that logs users' interactions with the app. We used the logs to train a classifier and showed that it is able to reliably recognise a user according to their reader type. Finally we evaluated alternative, adaptive user interfaces for each reader type. The evaluation demonstrates the differential benefit of the adaptation for different users of the news app and the feasibility of adaptive interfaces for news apps.",
"title": ""
},
{
"docid": "3a32bb2494edefe8ea28a83dad1dc4c4",
"text": "Objective: The challenging task of heart rate (HR) estimation from the photoplethysmographic (PPG) signal, during intensive physical exercises, is tackled in this paper. Methods: The study presents a detailed analysis of a novel algorithm (WFPV) that exploits a Wiener filter to attenuate the motion artifacts, a phase vocoder to refine the HR estimate and user-adaptive post-processing to track the subject physiology. Additionally, an offline version of the HR estimation algorithm that uses Viterbi decoding is designed for scenarios that do not require online HR monitoring (WFPV+VD). The performance of the HR estimation systems is rigorously compared with existing algorithms on the publically available database of 23 PPG recordings. Results: On the whole dataset of 23 PPG recordings, the algorithms result in average absolute errors of 1.97 and 1.37 BPM in the online and offline modes, respectively. On the test dataset of 10 PPG recordings which were most corrupted with motion artifacts, WFPV has an error of 2.95 BPM on its own and 2.32 BPM in an ensemble with two existing algorithms. Conclusion: The error rate is significantly reduced when compared with the state-of-the art PPG-based HR estimation methods. Significance: The proposed system is shown to be accurate in the presence of strong motion artifacts and in contrast to existing alternatives has very few free parameters to tune. The algorithm has a low computational cost and can be used for fitness tracking and health monitoring in wearable devices. The MATLAB implementation of the algorithm is provided online.",
"title": ""
},
{
"docid": "59639429e45dc75e0b8db773d112f994",
"text": "Vector modulators are a key component in phased array antennas and communications systems. The paper describes a novel design methodology for a bi-directional, reflection-type balanced vector modulator using metal-oxide-semiconductor field-effect (MOS) transistors as active loads, which provides an improved constellation quality. The fabricated IC occupies 787 × 1325 μm2 and exhibits a minimum transmission loss of 9 dB and return losses better than 14 dB. As an application example, its use in a 16-QAM modulator is verified.",
"title": ""
},
{
"docid": "b2444538456800e84df8288f4a482775",
"text": "Thermoelectric generators (TEGs) provide a unique way for harvesting thermal energy. These devices are compact, durable, inexpensive, and scalable. Unfortunately, the conversion efficiency of TEGs is low. This requires careful design of energy harvesting systems including the interface circuitry between the TEG module and the load, with the purpose of minimizing power losses. In this paper, it is analytically shown that the traditional approach for estimating the internal resistance of TEGs may result in a significant loss of harvested power. This drawback comes from ignoring the dependence of the electrical behavior of TEGs on their thermal behavior. Accordingly, a systematic method for accurately determining the TEG input resistance is presented. Next, through a case study on automotive TEGs, it is shown that compared to prior art, more than 11% of power losses in the interface circuitry that lies between the TEG and the electrical load can be saved by the proposed modeling technique. In addition, it is demonstrated that the traditional approach would have resulted in a deviation from the target regulated voltage by as much as 59%.",
"title": ""
},
{
"docid": "cb85db604bf21751766daf3751dd73bd",
"text": "The heterogeneous cloud radio access network (H-CRAN) is a promising paradigm that incorporates cloud computing into heterogeneous networks (HetNets), thereby taking full advantage of cloud radio access networks (C-RANs) and HetNets. Characterizing cooperative beamforming with fronthaul capacity and queue stability constraints is critical for multimedia applications to improve the energy efficiency (EE) in H-CRANs. An energy-efficient optimization objective function with individual fronthaul capacity and intertier interference constraints is presented in this paper for queue-aware multimedia H-CRANs. To solve this nonconvex objective function, a stochastic optimization problem is reformulated by introducing the general Lyapunov optimization framework. Under the Lyapunov framework, this optimization problem is equivalent to an optimal network-wide cooperative beamformer design algorithm with instantaneous power, average power, and intertier interference constraints, which can be regarded as a weighted sum EE maximization problem and solved by a generalized weighted minimum mean-square error approach. The mathematical analysis and simulation results demonstrate that a tradeoff between EE and queuing delay can be achieved, and this tradeoff strictly depends on the fronthaul constraint.",
"title": ""
},
{
"docid": "c48d0c94d3e97661cc2c944cc4b61813",
"text": "CIPO is the very “tip of the iceberg” of functional gastrointestinal disorders, being a rare and frequently misdiagnosed condition characterized by an overall poor outcome. Diagnosis should be based on clinical features, natural history and radiologic findings. There is no cure for CIPO and management strategies include a wide array of nutritional, pharmacologic, and surgical options which are directed to minimize malnutrition, promote gut motility and reduce complications of stasis (ie, bacterial overgrowth). Pain may become so severe to necessitate major analgesic drugs. Underlying causes of secondary CIPO should be thoroughly investigated and, if detected, treated accordingly. Surgery should be indicated only in a highly selected, well characterized subset of patients, while isolated intestinal or multivisceral transplantation is a rescue therapy only in those patients with intestinal failure unsuitable for or unable to continue with TPN/HPN. Future perspectives in CIPO will be directed toward an accurate genomic/proteomic phenotying of these rare, challenging patients. Unveiling causative mechanisms of neuro-ICC-muscular abnormalities will pave the way for targeted therapeutic options for patients with CIPO.",
"title": ""
},
{
"docid": "f395e3d72341bd20e1a16b97259bad7d",
"text": "Malicious software in form of Internet worms, computer viru ses, and Trojan horses poses a major threat to the security of network ed systems. The diversity and amount of its variants severely undermine the effectiveness of classical signature-based detection. Yet variants of malware f milies share typical behavioral patternsreflecting its origin and purpose. We aim to exploit these shared patterns for classification of malware and propose a m thod for learning and discrimination of malware behavior. Our method proceed s in three stages: (a) behavior of collected malware is monitored in a sandbox envi ro ment, (b) based on a corpus of malware labeled by an anti-virus scanner a malware behavior classifieris trained using learning techniques and (c) discriminativ e features of the behavior models are ranked for explanation of classifica tion decisions. Experiments with di fferent heterogeneous test data collected over several month s using honeypots demonstrate the e ffectiveness of our method, especially in detecting novel instances of malware families previously not recognized by commercial anti-virus software.",
"title": ""
},
{
"docid": "30d0ff3258decd5766d121bf97ae06d4",
"text": "In this paper, we present a new image forgery detection method based on deep learning technique, which utilizes a convolutional neural network (CNN) to automatically learn hierarchical representations from the input RGB color images. The proposed CNN is specifically designed for image splicing and copy-move detection applications. Rather than a random strategy, the weights at the first layer of our network are initialized with the basic high-pass filter set used in calculation of residual maps in spatial rich model (SRM), which serves as a regularizer to efficiently suppress the effect of image contents and capture the subtle artifacts introduced by the tampering operations. The pre-trained CNN is used as patch descriptor to extract dense features from the test images, and a feature fusion technique is then explored to obtain the final discriminative features for SVM classification. The experimental results on several public datasets show that the proposed CNN based model outperforms some state-of-the-art methods.",
"title": ""
},
{
"docid": "5e6f9014a07e7b2bdfd255410a73b25f",
"text": "Context: Offshore software development outsourcing is a modern business strategy for developing high quality software at low cost. Objective: The objective of this research paper is to identify and analyse factors that are important in terms of the competitiveness of vendor organisations in attracting outsourcing projects. Method: We performed a systematic literature review (SLR) by applying our customised search strings which were derived from our research questions. We performed all the SLR steps, such as the protocol development, initial selection, final selection, quality assessment, data extraction and data synthesis. Results: We have identified factors such as cost-saving, skilled human resource, appropriate infrastructure, quality of product and services, efficient outsourcing relationships management, and an organisation’s track record of successful projects which are generally considered important by the outsourcing clients. Our results indicate that appropriate infrastructure, cost-saving, and skilled human resource are common in three continents, namely Asia, North America and Europe. We identified appropriate infrastructure, cost-saving, and quality of products and services as being common in three types of organisations (small, medium and large). We have also identified four factors-appropriate infrastructure, cost-saving, quality of products and services, and skilled human resource as being common in the two decades (1990–1999 and 2000–mid 2008). Conclusions: Cost-saving should not be considered as the driving factor in the selection process of software development outsourcing vendors. Vendors should rather address other factors in order to compete in as sk the OSDO business, such and services.",
"title": ""
},
{
"docid": "3b7ac492add26938636ae694ebb14b65",
"text": "This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber&Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li&Henry, 1993] where the same suite of metrics had been used to assess frequencies of maintenance changes to clas es. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber&Kemerer’s OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than “traditional” code metrics, which can only be collected at a later phase of the software development processes. Key-words: Object-Oriented Design Metrics; Error Prediction Model; Object-Oriented Software Development; C++ Programming Language. * V. Basili and W. Melo are with the University of Maryland, Institute for Advanced Computer Studies and Computer Science Dept., A. V. Williams Bldg., College Park, MD 20742 USA. {basili | melo}@cs.umd.edu L. Briand is with the CRIM, 1801 McGill College Av., Montréal (Québec), H3A 2N4, Canada. lbriand@crim.ca Technical Report, Univ. of Maryland, Dep. of Computer Science, College Park, MD, 20742 USA. April 1995. CS-TR-3443 2 UMIACS-TR-95-40 1 . Introduction",
"title": ""
},
{
"docid": "f0c4c1a82eee97d19012421614ee5d5f",
"text": "Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to that of the state-of-the-art in entertainment games technology. As a result the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.",
"title": ""
},
{
"docid": "58d7e76a4b960e33fc7b541d04825dc9",
"text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. This work focuses also in IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published for the IoT at the time and relates it to the security conjuncture of the field and its projection to the future.",
"title": ""
},
{
"docid": "f53dc3977a9e8c960e0232ef59c0e7fd",
"text": "The interest in action and gesture recognition has grown considerably in the last years. In this paper, we present a survey on current deep learning methodologies for action and gesture recognition in image sequences. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. We review the details of the proposed architectures, fusion strategies, main datasets, and competitions. We summarize and discuss the main works proposed so far with particular interest on how they treat the temporal dimension of data, discussing their main features and identify opportunities and challenges for future research.",
"title": ""
},
{
"docid": "69f413d247e88022c3018b2dee1b53e2",
"text": "Research and development (R&D) project selection is an important task for organizations with R&D project management. It is a complicated multi-stage decision-making process, which involves groups of decision makers. Current research on R&D project selection mainly focuses on mathematical decision models and their applications, but ignores the organizational aspect of the decision-making process. This paper proposes an organizational decision support system (ODSS) for R&D project selection. Object-oriented method is used to design the architecture of the ODSS. An organizational decision support system has also been developed and used to facilitate the selection of project proposals in the National Natural Science Foundation of China (NSFC). The proposed system supports the R&D project selection process at the organizational level. It provides useful information for decision-making tasks in the R&D project selection process. D 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9409922d01a00695745939b47e6446a0",
"text": "The Suricata intrusion-detection system for computer-network monitoring has been advanced as an open-source improvement on the popular Snort system that has been available for over a decade. Suricata includes multi-threading to improve processing speed beyond Snort. Previous work comparing the two products has not used a real-world setting. We did this and evaluated the speed, memory requirements, and accuracy of the detection engines in three kinds of experiments: (1) on the full traffic of our school as observed on its \" backbone\" in real time, (2) on a supercomputer with packets recorded from the backbone, and (3) in response to malicious packets sent by a red-teaming product. We used the same set of rules for both products with a few small exceptions where capabilities were missing. We conclude that Suricata can handle larger volumes of traffic than Snort with similar accuracy, and that its performance scaled roughly linearly with the number of processors up to 48. We observed no significant speed or accuracy advantage of Suricata over Snort in its current state, but it is still being developed. Our methodology should be useful for comparing other intrusion-detection products.",
"title": ""
}
] |
scidocsrr
|
8820236a0f3281d41e9c0098bfb27062
|
Taxonomy Construction Using Syntactic Contextual Evidence
|
[
{
"docid": "57457909ea5fbee78eccc36c02464942",
"text": "Knowledge is indispensable to understanding. The ongoing information explosion highlights the need to enable machines to better understand electronic text in human language. Much work has been devoted to creating universal ontologies or taxonomies for this purpose. However, none of the existing ontologies has the needed depth and breadth for universal understanding. In this paper, we present a universal, probabilistic taxonomy that is more comprehensive than any existing ones. It contains 2.7 million concepts harnessed automatically from a corpus of 1.68 billion web pages. Unlike traditional taxonomies that treat knowledge as black and white, it uses probabilities to model inconsistent, ambiguous and uncertain information it contains. We present details of how the taxonomy is constructed, its probabilistic modeling, and its potential applications in text understanding.",
"title": ""
},
{
"docid": "074011796235a8ab0470ba0fe967918f",
"text": "We present a novel approach to weakly supervised semantic class learning from the web, using a single powerful hyponym pattern combined with graph structures, which capture two properties associated with pattern-based extractions:popularity and productivity. Intuitively, a candidate ispopular if it was discovered many times by other instances in the hyponym pattern. A candidate is productive if it frequently leads to the discovery of other instances. Together, these two measures capture not only frequency of occurrence, but also cross-checking that the candidate occurs both near the class name and near other class members. We developed two algorithms that begin with just a class name and one seed instance and then automatically generate a ranked list of new class instances. We conducted experiments on four semantic classes and consistently achieved high accuracies.",
"title": ""
}
] |
[
{
"docid": "df22aa6321c86b0aec44778c7293daca",
"text": "BACKGROUND\nAtopic dermatitis (AD) is characterized by dry skin and a hyperactive immune response to allergens, 2 cardinal features that are caused in part by epidermal barrier defects. Tight junctions (TJs) reside immediately below the stratum corneum and regulate the selective permeability of the paracellular pathway.\n\n\nOBJECTIVE\nWe evaluated the expression/function of the TJ protein claudin-1 in epithelium from AD and nonatopic subjects and screened 2 American populations for single nucleotide polymorphisms in the claudin-1 gene (CLDN1).\n\n\nMETHODS\nExpression profiles of nonlesional epithelium from patients with extrinsic AD, nonatopic subjects, and patients with psoriasis were generated using Illumina's BeadChips. Dysregulated intercellular proteins were validated by means of tissue staining and quantitative PCR. Bioelectric properties of epithelium were measured in Ussing chambers. Functional relevance of claudin-1 was assessed by using a knockdown approach in primary human keratinocytes. Twenty-seven haplotype-tagging SNPs in CLDN1 were screened in 2 independent populations with AD.\n\n\nRESULTS\nWe observed strikingly reduced expression of the TJ proteins claudin-1 and claudin-23 only in patients with AD, which were validated at the mRNA and protein levels. Claudin-1 expression inversely correlated with T(H)2 biomarkers. We observed a remarkable impairment of the bioelectric barrier function in AD epidermis. In vitro we confirmed that silencing claudin-1 expression in human keratinocytes diminishes TJ function while enhancing keratinocyte proliferation. Finally, CLDN1 haplotype-tagging SNPs revealed associations with AD in 2 North American populations.\n\n\nCONCLUSION\nCollectively, these data suggest that an impairment in tight junctions contributes to the barrier dysfunction and immune dysregulation observed in AD subjects and that this may be mediated in part by reductions in claudin-1.",
"title": ""
},
{
"docid": "617bb88fdb8b76a860c58fc887ab2bc4",
"text": "Although space syntax has been successfully applied to many urban GIS studies, there is still a need to develop robust algorithms that support the automated derivation of graph representations. These graph structures are needed to apply the computational principles of space syntax and derive the morphological view of an urban structure. So far the application of space syntax principles to the study of urban structures has been a partially empirical and non-deterministic task, mainly due to the fact that an urban structure is modeled as a set of axial lines whose derivation is a non-computable process. This paper proposes an alternative model of space for the application of space syntax principles, based on the concepts of characteristic points defined as the nodes of an urban structure schematised as a graph. This method has several advantages over the axial line representation: it is computable and cognitively meaningful. Our proposal is illustrated by a case study applied to the city of GaÈ vle in Sweden. We will also show that this method has several nice properties that surpass the axial line technique.",
"title": ""
},
{
"docid": "0c4ca5a63c7001e6275b05da7771a7a6",
"text": "We present a new data structure for the c-approximate near neighbor problem (ANN) in the Euclidean space. For n points in R, our algorithm achieves Oc(n + d log n) query time and Oc(n + d log n) space, where ρ ≤ 0.73/c + O(1/c) + oc(1). This is the first improvement over the result by Andoni and Indyk (FOCS 2006) and the first data structure that bypasses a locality-sensitive hashing lower bound proved by O’Donnell, Wu and Zhou (ICS 2011). By known reductions we obtain a data structure for the Hamming space and l1 norm with ρ ≤ 0.73/c+O(1/c) + oc(1), which is the first improvement over the result of Indyk and Motwani (STOC 1998). Thesis Supervisor: Piotr Indyk Title: Professor of Electrical Engineering and Computer Science",
"title": ""
},
{
"docid": "9acb65952ca0ceb489a97794b6f380ce",
"text": "Conventional railway track, of the type seen throughout the majority of the UK rail network, is made up of rails that are fixed to sleepers (ties), which, in turn, are supported by ballast. The ballast comprises crushed, hard stone and its main purpose is to distribute loads from the sleepers as rail traffic passes along the track. Over time, the stones in the ballast deteriorate, leading the track to settle and the geometry of the rails to change. Changes in geometry must be addressed in order that the track remains in a safe condition. Track inspections are carried out by measurement trains, which use sensors to precisely measure the track geometry. Network operators aim to carry out maintenance before the track geometry degrades to such an extent that speed restrictions or line closures are required. However, despite the fact that it restores the track geometry, the maintenance also worsens the general condition of the ballast, meaning that the rate of track geometry deterioration tends to increase as the amount of maintenance performed to the ballast increases. This paper considers the degradation, inspection and maintenance of a single one eighth of a mile section of railway track. A Markov model of such a section is produced. Track degradation data from the UK rail network has been analysed to produce degradation distributions which are used to define transition rates within the Markov model. The model considers the changing deterioration rate of the track section following maintenance and is used to analyse the effects of changing the level of track geometry degradation at which maintenance is requested for the section. The results are also used to show the effects of unrevealed levels of degradation. A model such as the one presented can be used to form an integral part of an asset management strategy and maintenance decision making process for railway track.",
"title": ""
},
{
"docid": "c6daad10814bafb3453b12cfac30b788",
"text": "In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows to capture fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture limited number of semantic alignments which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments using both image regions and words in a sentence as context and infer image-text similarity. Our approach achieves the state-of-the-art results on the MSCOCO and Flickr30K datasets. On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from image query, and 18.2% relatively in image retrieval with text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https: //github.com/kuanghuei/SCAN.",
"title": ""
},
{
"docid": "8a8edb63c041a01cbb887cd526b97eb0",
"text": "We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain.",
"title": ""
},
{
"docid": "912c4601f8c6e31107b21233ee871a6b",
"text": "The physiological mechanisms that control energy balance are reciprocally linked to those that control reproduction, and together, these mechanisms optimize reproductive success under fluctuating metabolic conditions. Thus, it is difficult to understand the physiology of energy balance without understanding its link to reproductive success. The metabolic sensory stimuli, hormonal mediators and modulators, and central neuropeptides that control reproduction also influence energy balance. In general, those that increase ingestive behavior inhibit reproductive processes, with a few exceptions. Reproductive processes, including the hypothalamic-pituitary-gonadal (HPG) system and the mechanisms that control sex behavior are most proximally sensitive to the availability of oxidizable metabolic fuels. The role of hormones, such as insulin and leptin, are not understood, but there are two possible ways they might control food intake and reproduction. They either mediate the effects of energy metabolism on reproduction or they modulate the availability of metabolic fuels in the brain or periphery. This review examines the neural pathways from fuel detectors to the central effector system emphasizing the following points: first, metabolic stimuli can directly influence the effector systems independently from the hormones that bind to these central effector systems. For example, in some cases, excess energy storage in adipose tissue causes deficits in the pool of oxidizable fuels available for the reproductive system. Thus, in such cases, reproduction is inhibited despite a high body fat content and high plasma concentrations of hormones that are thought to stimulate reproductive processes. The deficit in fuels creates a primary sensory stimulus that is inhibitory to the reproductive system, despite high concentrations of hormones, such as insulin and leptin. Second, hormones might influence the central effector systems [including gonadotropin-releasing hormone (GnRH) secretion and sex behavior] indirectly by modulating the metabolic stimulus. Third, the critical neural circuitry involves extrahypothalamic sites, such as the caudal brain stem, and projections from the brain stem to the forebrain. Catecholamines, neuropeptide Y (NPY) and corticotropin-releasing hormone (CRH) are probably involved. Fourth, the metabolic stimuli and chemical messengers affect the motivation to engage in ingestive and sex behaviors instead of, or in addition to, affecting the ability to perform these behaviors. Finally, it is important to study these metabolic events and chemical messengers in a wider variety of species under natural or seminatural circumstances.",
"title": ""
},
{
"docid": "017055a324d781774f05e35d07eff8f6",
"text": "We propose a lattice Boltzmann method to treat moving boundary problems for solid objects moving in a fluid. The method is based on the simple bounce-back boundary scheme and interpolations. The proposed method is tested in two flows past an impulsively started cylinder moving in a channel in two dimensions: (a) the flow past an impulsively started cylinder moving in a transient Couette flow; and (b) the flow past an impulsively started cylinder moving in a channel flow at rest. We obtain satisfactory results and also verify the Galilean invariance of the lattice Boltzmann method. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "115ebc84b27fbf2195dbf6a5b0eebac5",
"text": "This paper presents an automatic system for fire detection in video sequences. There are several previous methods to detect fire, however, all except two use spectroscopy or particle sensors. The two that use visual information suffer from the inability to cope with a moving camera or a moving scene. One of these is not able to work on general data, such as movie sequences. The other is too simplistic and unrestrictive in determining what is considered fire; so that it can be used reliably only in aircraft dry bays. We propose a system that uses color and motion information computed from video sequences to locate fire. This is done by first using an approach that is based upon creating a Gaussian-smoothed color histogram to detect the fire-colored pixels, and then using a temporal variation of pixels to determine which of these pixels are actually fire pixels. Next, some spurious fire pixels are automatically removed using an erode operation, and some missing fire pixels are found using region growing method. Unlike the two previous vision-based methods for fire detection, our method is applicable to more areas because of its insensitivity to camera motion. Two specific applications not possible with previous algorithms are the recognition of fire in the presence of global camera motion or scene motion and the recognition of fire in movies for possible use in an automatic rating system. We show that our method works in a variety of conditions, and that it can automatically determine when it has insufficient information.",
"title": ""
},
{
"docid": "30bc7923529eec5ac7d62f91de804f8e",
"text": "In this paper, we consider the scene parsing problem and propose a novel MultiPath Feedback recurrent neural network (MPF-RNN) for parsing scene images. MPF-RNN can enhance the capability of RNNs in modeling long-range context information at multiple levels and better distinguish pixels that are easy to confuse. Different from feedforward CNNs and RNNs with only single feedback, MPFRNN propagates the contextual features learned at top layer through weighted recurrent connections to multiple bottom layers to help them learn better features with such “hindsight”. For better training MPF-RNN, we propose a new strategy that considers accumulative loss at multiple recurrent steps to improve performance of the MPF-RNN on parsing small objects. With these two novel components, MPF-RNN has achieved significant improvement over strong baselines (VGG16 and Res101) on five challenging scene parsing benchmarks, including traditional SiftFlow, Barcelona, CamVid, Stanford Background as well as the recently released large-scale ADE20K.",
"title": ""
},
{
"docid": "52050687ccc2844863197f9cba11a3d2",
"text": "Classical mechanics was first envisaged by Newton, formed into a powerful tool by Euler, and brought to perfection by Lagrange and Laplace. It has served as the paradigm of science ever since. Even the great revolutions of 19th century phys icsnamely, the FaradayMaxwell electro-magnetic theory and the kinetic t h e o r y w e r e viewed as further support for the complete adequacy of the mechanistic world view. The physicist at the end of the 19th century had a coherent conceptual scheme which, in principle at least, answered all his questions about the world. The only work left to be done was the computing of the next decimal. This consensus began to unravel at the beginning of the 20th century. The work of Planck, Einstein, and Bohr simply could not be made to fit. The series of ad hoc moves by Bohr, Eherenfest, et al., now called the old quantum theory, was viewed by all as, at best, a stopgap. In the period 1925-27 a new synthesis was formed by Heisenberg, Schr6dinger, Dirac and others. This new synthesis was so successful that even today, fifty years later, physicists still teach quantum mechanics as it was formulated by these men. Nevertheless, two foundational tasks remained: that of providing a rigorous mathematical formulation of the theory, and that of providing a systematic comparison with classical mechanics so that the full ramifications of the quantum revolution could be clearly revealed. These tasks are, of course, related, and a possible fringe benefit of the second task might be the pointing of the way 'beyond quantum theory'. These tasks were taken up by von Neumann as a consequence of a seminar on the foundations of quantum mechanics conducted by Hilbert in the fall of 1926. In papers published in 1927 and in his book, The Mathemat ical Foundations of Quantum Mechanics, von Neumann provided the first completely rigorous",
"title": ""
},
{
"docid": "707947e404b363963d08a9b7d93c87fb",
"text": "The Lexical Substitution task involves selecting and ranking lexical paraphrases for a target word in a given sentential context. We present PIC, a simple measure for estimating the appropriateness of substitutes in a given context. PIC outperforms another simple, comparable model proposed in recent work, especially when selecting substitutes from the entire vocabulary. Analysis shows that PIC improves over baselines by incorporating frequency biases into predictions.",
"title": ""
},
{
"docid": "85bdac91c8c7d456a7e76ce5927cc994",
"text": "Current CNN-based solutions to salient object detection (SOD) mainly rely on the optimization of cross-entropy loss (CELoss). Then the quality of detected saliency maps is often evaluated in terms of F-measure. In this paper, we investigate an interesting issue: can we consistently use the F-measure formulation in both training and evaluation for SOD? By reformulating the standard F-measure we propose the relaxed F-measure which is differentiable w.r.t the posterior and can be easily appended to the back of CNNs as the loss function. Compared to the conventional cross-entropy loss of which the gradients decrease dramatically in the saturated area, our loss function, named FLoss, holds considerable gradients even when the activation approaches the target. Consequently, the FLoss can continuously force the network to produce polarized activations. Comprehensive benchmarks on several popular datasets show that FLoss outperforms the stateof-the-arts with a considerable margin. More specifically, due to the polarized predictions, our method is able to obtain high quality saliency maps without carefully tuning the optimal threshold, showing significant advantages in real world applications.",
"title": ""
},
{
"docid": "26d347d66524f1d57262e35041d3ca67",
"text": "Many Network Representation Learning (NRL) methods have been proposed to learn vector representations for vertices in a network recently. In this paper, we summarize most existing NRL methods into a unified two-step framework, including proximity matrix construction and dimension reduction. We focus on the analysis of proximity matrix construction step and conclude that an NRL method can be improved by exploring higher order proximities when building the proximity matrix. We propose Network Embedding Update (NEU) algorithm which implicitly approximates higher order proximities with theoretical approximation bound and can be applied on any NRL methods to enhance their performances. We conduct experiments on multi-label classification and link prediction tasks. Experimental results show that NEU can make a consistent and significant improvement over a number of NRL methods with almost negligible running time on all three publicly available datasets. The source code of this paper can be obtained from https://github.com/thunlp/NEU.",
"title": ""
},
{
"docid": "2de213f62e6b5fbf89d9b43a3ad78a34",
"text": "To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits whose high number of commuting gates allow great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem, and generated a test suite of compilation problems for QAOA circuits of various sizes to a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.",
"title": ""
},
{
"docid": "f36ef9dd6b78605683f67b382b9639ac",
"text": "Stable clones of neural stem cells (NSCs) have been isolated from the human fetal telencephalon. These self-renewing clones give rise to all fundamental neural lineages in vitro. Following transplantation into germinal zones of the newborn mouse brain they participate in aspects of normal development, including migration along established migratory pathways to disseminated central nervous system regions, differentiation into multiple developmentally and regionally appropriate cell types, and nondisruptive interspersion with host progenitors and their progeny. These human NSCs can be genetically engineered and are capable of expressing foreign transgenes in vivo. Supporting their gene therapy potential, secretory products from NSCs can correct a prototypical genetic metabolic defect in neurons and glia in vitro. The human NSCs can also replace specific deficient neuronal populations. Cryopreservable human NSCs may be propagated by both epigenetic and genetic means that are comparably safe and effective. By analogy to rodent NSCs, these observations may allow the development of NSC transplantation for a range of disorders.",
"title": ""
},
{
"docid": "63429f5eebc2434660b0073b802127c2",
"text": "Body Area Networks are unique in that the large-scale mobility of users allows the network itself to travel across a diverse range of operating domains or even to enter new and unknown environments. This network mobility is unlike node mobility in that sensed changes in inter-network interference level may be used to identify opportunities for intelligent inter-networking, for example, by merging or splitting from other networks, thus providing an extra degree of freedom. This paper introduces the concept of context-aware bodynets for interactive environments using inter-network interference sensing. New ideas are explored at both the physical and link layers with an investigation based on a 'smart' office environment. A series of carefully controlled measurements of the mesh interconnectivity both within and between an ambulatory body area network and a stationary desk-based network were performed using 2.45 GHz nodes. Received signal strength and carrier to interference ratio time series for selected node to node links are presented. The results provide an insight into the potential interference between the mobile and static networks and highlight the possibility for automatic identification of network merging and splitting opportunities.",
"title": ""
},
{
"docid": "7b54a56e4ad51210bc56bd768a6f4c22",
"text": "Research on the predictive bias of cognitive tests has generally shown (a) no slope effects and (b) small intercept effects, typically favoring the minority group. Aguinis, Culpepper, and Pierce (2010) simulated data and demonstrated that statistical artifacts may have led to a lack of power to detect slope differences and an overestimate of the size of the intercept effect. In response to Aguinis et al.'s (2010) call for a revival of predictive bias research, we used data on over 475,000 students entering college between 2006 and 2008 to estimate slope and intercept differences in the college admissions context. Corrections for statistical artifacts were applied. Furthermore, plotting of regression lines supplemented traditional analyses of predictive bias to offer additional evidence of the form and extent to which predictive bias exists. Congruent with previous research on bias of cognitive tests, using SAT scores in conjunction with high school grade-point average to predict first-year grade-point average revealed minimal differential prediction (ΔR²intercept ranged from .004 to .032 and ΔR²slope ranged from .001 to .013 depending on the corrections applied and comparison groups examined). We found, on the basis of regression plots, that college grades were consistently overpredicted for Black and Hispanic students and underpredicted for female students.",
"title": ""
},
{
"docid": "1edb5f3179ebfc33922e12a0c2eea294",
"text": "PURPOSE OF REVIEW\nThis review discusses the rational development of guidelines for the management of neonatal sepsis in developing countries.\n\n\nRECENT FINDINGS\nDiagnosis of neonatal sepsis with high specificity remains challenging in developing countries. Aetiology data, particularly from rural, community-based studies, are very limited, but molecular tests to improve diagnostics are being tested in a community-based study in South Asia. Antibiotic susceptibility data are limited, but suggest reducing susceptibility to first-and second-line antibiotics in both hospital and community-acquired neonatal sepsis. Results of clinical trials in South Asia and sub-Saharan Africa assessing feasibility of simplified antibiotic regimens are awaited.\n\n\nSUMMARY\nEffective management of neonatal sepsis in developing countries is essential to reduce neonatal mortality and morbidity. Simplified antibiotic regimens are currently being examined in clinical trials, but reduced antimicrobial susceptibility threatens current empiric treatment strategies. Improved clinical and microbiological surveillance is essential, to inform current practice, treatment guidelines, and monitor implementation of policy changes.",
"title": ""
},
{
"docid": "c235af1fbd499c1c3c10ea850d01bffd",
"text": "Cloud computing, as a concept, promises cost savings to end-users by letting them outsource their non-critical business functions to a third party in pay-as-you-go style. However, to enable economic pay-as-you-go services, we need Cloud middleware that maximizes sharing and support near zero costs for unused applications. Multi-tenancy, which let multiple tenants (user) to share a single application instance securely, is a key enabler for building such a middleware. On the other hand, Business processes capture Business logic of organizations in an abstract and reusable manner, and hence play a key role in most organizations. This paper presents the design and architecture of a Multi-tenant Workflow engine while discussing in detail potential use cases of such architecture. Primary contributions of this paper are motivating workflow multi-tenancy, and the design and implementation of multi-tenant workflow engine that enables multiple tenants to run their workflows securely within the same workflow engine instance without modifications to the workflows.",
"title": ""
}
] |
scidocsrr
|
cd400e4383dff77cd6958cea9159cf57
|
How to Build a CC System
|
[
{
"docid": "178cf363aaef9b888e881bf67955d1aa",
"text": "The computational creativity community (rightfully) takes a dim view of supposedly creative systems that operate by mere generation. However, what exactly this means has never been adequately defined, and therefore the idea of requiring systems to exceed this standard is problematic. Here, we revisit the question of mere generation and attempt to qualitatively identify what constitutes exceeding this threshold. This exercise leads to the conclusion that the question is likely no longer relevant for the field and that a failure to recognize this is likely detrimental to its future health.",
"title": ""
}
] |
[
{
"docid": "f169f42bcdbaf79e7efa9b1066b86523",
"text": "Logic and Philosophy of Science Research Group, Hokkaido University, Japan Jan 7, 2015 Abstract In this paper we provide an analysis and overview of some notable definitions, works and thoughts concerning discrete physics (digital philosophy) that mainly suggest a finite and discrete characteristic for the physical world, as well as, of the cellular automaton, which could serve as the basis of a (or the only) perfect mathematical deterministic model for the physical reality.",
"title": ""
},
{
"docid": "a9de29e1d8062b4950e5ab3af6bea8df",
"text": "Asserts have long been a strongly recommended (if non-functional) adjunct to programs. They certainly don't add any user-evident feature value; and it can take quite some skill and effort to devise and add useful asserts. However, they are believed to add considerable value to the developer. Certainly, they can help with automated verification; but even in the absence of that, claimed advantages include improved understandability, maintainability, easier fault localization and diagnosis, all eventually leading to better software quality. We focus on this latter claim, and use a large dataset of asserts in C and C++ programs to explore the connection between asserts and defect occurrence. Our data suggests a connection: functions with asserts do have significantly fewer defects. This indicates that asserts do play an important role in software quality; we therefore explored further the factors that play a role in assertion placement: specifically, process factors (such as developer experience and ownership) and product factors, particularly interprocedural factors, exploring how the placement of assertions in functions are influenced by local and global network properties of the callgraph. Finally, we also conduct a differential analysis of assertion use across different application domains.",
"title": ""
},
{
"docid": "9077dede1c2c4bc4b696a93e01c84f52",
"text": "Reliable continuous core temperature measurement is of major importance for monitoring patients. The zero heat flux method (ZHF) can potentially fulfil the requirements of non-invasiveness, reliability and short delay time that current measurement methods lack. The purpose of this study was to determine the performance of a new ZHF device on the forehead regarding these issues. Seven healthy subjects performed a protocol of 10 min rest, 30 min submaximal exercise (average temperature increase about 1.5 °C) and 10 min passive recovery in ambient conditions of 35 °C and 50% relative humidity. ZHF temperature (T(zhf)) was compared to oesophageal (T(es)) and rectal (T(re)) temperature. ΔT(zhf)-T(es) had an average bias ± standard deviation of 0.17 ± 0.19 °C in rest, -0.05 ± 0.18 °C during exercise and -0.01 ± 0.20 °C during recovery, the latter two being not significant. The 95% limits of agreement ranged from -0.40 to 0.40 °C and T(zhf) had hardly any delay compared to T(es). T(re) showed a substantial delay and deviation from T(es) when core temperature changed rapidly. Results indicate that the studied ZHF sensor tracks T(es) very well in hot and stable ambient conditions and may be a promising alternative for reliable non-invasive continuous core temperature measurement in hospital.",
"title": ""
},
{
"docid": "d292d1334594bec8531e6011fabaafd2",
"text": "Insight into the growth (or shrinkage) of “knowledge communities” of authors that build on each other's work can be gained by studying the evolution over time of clusters of documents. We cluster documents based on the documents they cite in common using the Streemer clustering method, which finds cohesive foreground clusters (the knowledge communities) embedded in a diffuse background. We build predictive models with features based on the citation structure, the vocabulary of the papers, and the affiliations and prestige of the authors and use these models to study the drivers of community growth and the predictors of how widely a paper will be cited. We find that scientific knowledge communities tend to grow more rapidly if their publications build on diverse information and use narrow vocabulary and that papers that lie on the periphery of a community have the highest impact, while those not in any community have the lowest impact.",
"title": ""
},
{
"docid": "58fc801888515e773a174e50e05f69fa",
"text": "Anopheles mosquitoes, sp is the main vector of malaria disease that is widespread in many parts of the world including in Papua Province. There are four speciesof Anopheles mosquitoes, sp, in Papua namely: An.farauti, An.koliensis, An. subpictus, and An.punctulatus. Larviciding synthetic cause resistance. This study aims to analyze the potential of papaya leaf and seeds extracts (Carica papaya) as larvicides against the mosquitoes Anopheles sp. The experiment was conducted at the Laboratory of Health Research and Development in Jayapura Papua province. The method used is an experimental post only control group design. Sampling was done randomly on the larvae of Anopheles sp of breeding places in Kampung Kehiran Jayapura Sentani District, 1,500 larvae. Analysis of data using statistical analysis to test the log probit mortality regression dosage, Kruskall Wallis and Mann Whitney. The results showed that papaya leaf extract effective in killing larvae of Anopheles sp, value Lethal Concentration (LC50) were 422.311 ppm, 1399.577 ppm LC90, Lethal Time (LT50) 13.579 hours, LT90 23.478 hours. Papaya seed extract is effective in killing mosquito larvae Anopheles sp, with 21.983 ppm LC50, LC90 ppm 137.862, 13.269 hours LT50, LT90 26.885 hours. Papaya seed extract is more effective in killing larvae of Anopheles sp. The mixture of papaya leaf extract and seeds are effective in killing mosquito larvae Anopheles sp, indicated by the percentage of larval mortality, the observation hours to 12, the highest larval mortality in comparison 0,05:0,1 extract, 52%, ratio 0.1 : 0.1 by 48 %, on a 24 hour observation, larval mortality in both groups reached 100 %.",
"title": ""
},
{
"docid": "4ef797ee3961528ec3bed66b2ddac452",
"text": "WiFi offloading is envisioned as a promising solution to the mobile data explosion problem in cellular networks. WiFi offloading for moving vehicles, however, poses unique characteristics and challenges, due to high mobility, fluctuating mobile channels, etc. In this paper, we focus on the problem of WiFi offloading in vehicular communication environments. Specifically, we discuss the challenges and identify the research issues related to drive-thru Internet access and effectiveness of vehicular WiFi offloading. Moreover, we review the state-of-the-art offloading solutions, in which advanced vehicular communications can be employed. We also shed some lights on the path for future research on this topic.",
"title": ""
},
{
"docid": "100c8fbe79112e2f7e12e85d7a1335f8",
"text": "Staging and response criteria were initially developed for Hodgkin lymphoma (HL) over 60 years ago, but not until 1999 were response criteria published for non-HL (NHL). Revisions to these criteria for both NHL and HL were published in 2007 by an international working group, incorporating PET for response assessment, and were widely adopted. After years of experience with these criteria, a workshop including representatives of most major international lymphoma cooperative groups and cancer centers was held at the 11(th) International Conference on Malignant Lymphoma (ICML) in June, 2011 to determine what changes were needed. An Imaging Task Force was created to update the relevance of existing imaging for staging, reassess the role of interim PET-CT, standardize PET-CT reporting, and to evaluate the potential prognostic value of quantitative analyses using PET and CT. A clinical task force was charged with assessing the potential of PET-CT to modify initial staging. A subsequent workshop was help at ICML-12, June 2013. Conclusions included: PET-CT should now be used to stage FDG-avid lymphomas; for others, CT will define stage. Whereas Ann Arbor classification will still be used for disease localization, patients should be treated as limited disease [I (E), II (E)], or extensive disease [III-IV (E)], directed by prognostic and risk factors. Since symptom designation A and B are frequently neither recorded nor accurate, and are not prognostic in most widely used prognostic indices for HL or the various types of NHL, these designations need only be applied to the limited clinical situations where they impact treatment decisions (e.g., stage II HL). PET-CT can replace the bone marrow biopsy (BMBx) for HL. A positive PET of bone or bone marrow is adequate to designate advanced stage in DLBCL. However, BMBx can be considered in DLBCL with no PET evidence of BM involvement, if identification of discordant histology is relevant for patient management, or if the results would alter treatment. BMBx remains recommended for staging of other histologies, primarily if it will impact therapy. PET-CT will be used to assess response in FDG-avid histologies using the 5-point scale, and included in new PET-based response criteria, but CT should be used in non-avid histologies. The definition of PD can be based on a single node, but must consider the potential for flare reactions seen early in treatment with newer targeted agents which can mimic disease progression. Routine surveillance scans are strongly discouraged, and the number of scans should be minimized in practice and in clinical trials, when not a direct study question. Hopefully, these recommendations will improve the conduct of clinical trials and patient management.",
"title": ""
},
{
"docid": "4bcc299aaaea50bfbf11960b66d6d5d3",
"text": "The multigram model assumes that language can be described as the output of a memoryless source that emits variable-length sequences of words. The estimation of the model parameters can be formulated as a Maximum Likelihood estimation problem from incomplete data. We show that estimates of the model parameters can be computed through an iterative Expectation-Maximization algorithm and we describe a forward-backward procedure for its implementation. We report the results of a systematical evaluation of multi-grams for language modeling on the ATIS database. The objective performance measure is the test set perplexity. Our results show that multigrams outperform conventional n-grams for this task.",
"title": ""
},
{
"docid": "3cfdf87f53d4340287fa92194afe355e",
"text": "With the rise of e-commerce, people are accustomed to writing their reviews after receiving the goods. These comments are so important that a bad review can have a direct impact on others buying. Besides, the abundant information within user reviews is very useful for extracting user preferences and item properties. In this paper, we investigate the approach to effectively utilize review information for recommender systems. The proposed model is named LSTM-Topic matrix factorization (LTMF) which integrates both LSTM and Topic Modeling for review understanding. In the experiments on popular review dataset Amazon , our LTMF model outperforms previous proposed HFT model and ConvMF model in rating prediction. Furthermore, LTMF shows the better ability on making topic clustering than traditional topic model based method, which implies integrating the information from deep learning and topic modeling is a meaningful approach to make a better understanding of reviews.",
"title": ""
},
{
"docid": "a583b48a8eb40a9e88a5137211f15bce",
"text": "The deterioration of cancellous bone structure due to aging and disease is characterized by a conversion from plate elements to rod elements. Consequently the terms \"rod-like\" and \"plate-like\" are frequently used for a subjective classification of cancellous bone. In this work a new morphometric parameter called Structure Model Index (SMI) is introduced, which makes it possible to quantify the characteristic form of a three-dimensionally described structure in terms of the amount of plates and rod composing the structure. The SMI is calculated by means of three-dimensional image analysis based on a differential analysis of the triangulated bone surface. For an ideal plate and rod structure the SMI value is 0 and 3, respectively, independent of the physical dimensions. For a structure with both plates and rods of equal thickness the value lies between 0 and 3, depending on the volume ratio of rods and plates. The SMI parameter is evaluated by examining bone biopsies from different skeletal sites. The bone samples were measured three-dimensionally with a micro-CT system. Samples with the same volume density but varying trabecular architecture can uniquely be characterized with the SMI. Furthermore the SMI values were found to correspond well with the perceived structure type.",
"title": ""
},
{
"docid": "32a97a3d9f010c7cdd542c34f02afb46",
"text": "Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. In this paper, we delve into the logical design of ETL scenarios and provide a generic and customizable framework in order to support the DW designer in his task. First, we present a metamodel particularly customized for the definition of ETL activities. We follow a workflow-like approach, where the output of a certain activity can either be stored persistently or passed to a subsequent activity. Also, we employ a declarative database programming language, LDL, to define the semantics of each activity. The metamodel is generic enough to capture any possible ETL activity. Nevertheless, in the pursuit of higher reusability and flexibility, we specialize the set of our generic metamodel constructs with a palette of frequently-used ETL activities, which we call templates. Moreover, in order to achieve a uniform extensibility mechanism for this library of built-ins, we have to deal with specific language issues. Therefore, we also discuss the mechanics of template instantiation to concrete activities. The design concepts that we introduce have been implemented in a tool, ARKTOS II, which is also presented.",
"title": ""
},
{
"docid": "bacd81a1074a877e0c943a6755290d34",
"text": "This thesis addresses the problem of scheduling multiple, concurrent, adaptively parallel jobs on a multiprogrammed shared-memory multiprocessor. Adaptively parallel jobs are jobs for which the number of processors that can be used without waste varies during execution. We focus on the specific case of parallel jobs that are scheduled using a randomized work-stealing algorithm, as is used in the Cilk multithreaded language. We begin by developing a theoretical model for two-level scheduling systems, or those in which the operating system allocates processors to jobs, and the jobs schedule their threads on the processors. To analyze the performance of a job scheduling algorithm, we model the operating system as an adversary. We show that a greedy scheduler achieves an execution time that is within a factor of 2 of optimal under these conditions. Guided by our model, we present a randomized work-stealing algorithm for adaptively parallel jobs, algorithm WSAP, which takes a unique approach to estimating the processor desire of a job. We show that attempts to directly measure a job’s instantaneous parallelism are inherently misleading. We also describe a dynamic processor-allocation algorithm, algorithm DP, that allocates processors to jobs in a fair and efficient way. Using these two algorithms, we present the design and implementation of Cilk-AP, a two-level scheduling system for adaptively parallel workstealing jobs. Cilk-AP is implemented by extending the runtime system of Cilk. We tested the Cilk-AP system on a shared-memory symmetric multiprocessor (SMP) with 16 processors. Our experiments show that, relative to the original Cilk system, Cilk-AP incurs negligible overhead and provides up to 37% improvement in throughput and 30% improvement in response time in typical multiprogramming scenarios. This thesis represents joint work with Charles Leiserson and Kunal Agrawal of the Supercomputing Technologies Group at MIT’s Computer Science and Artificial Intelligence Laboratory. Thesis Supervisor: Charles E. Leiserson Title: Professor",
"title": ""
},
{
"docid": "c4b4c647e13d0300845bed2b85c13a3c",
"text": "Several end-to-end deep learning approaches have been recently presented which extract either audio or visual features from the input images or audio signals and perform speech recognition. However, research on end-to-end audiovisual models is very limited. In this work, we present an end-to-end audiovisual model based on residual networks and Bidirectional Gated Recurrent Units (BGRUs). To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the image pixels and audio waveforms and performs within-context word recognition on a large publicly available dataset (LRW). The model consists of two streams, one for each modality, which extract features directly from mouth regions and raw waveforms. The temporal dynamics in each stream/modality are modeled by a 2-layer BGRU and the fusion of multiple streams/modalities takes place via another 2-layer BGRU. A slight improvement in the classification rate over an end-to-end audio-only and MFCC-based model is reported in clean audio conditions and low levels of noise. In presence of high levels of noise, the end-to-end audiovisual model significantly outperforms both audio-only models.",
"title": ""
},
{
"docid": "5769af5ff99595032653dbda724f5a9d",
"text": "JULY 2005, GSA TODAY ABSTRACT The subduction factory processes raw materials such as oceanic sediments and oceanic crust and manufactures magmas and continental crust as products. Aqueous fluids, which are extracted from oceanic raw materials via dehydration reactions during subduction, dissolve particular elements and overprint such elements onto the mantle wedge to generate chemically distinct arc basalt magmas. The production of calc-alkalic andesites typifies magmatism in subduction zones. One of the principal mechanisms of modern-day, calc-alkalic andesite production is thought to be mixing of two endmember magmas, a mantle-derived basaltic magma and an arc crust-derived felsic magma. This process may also have contributed greatly to continental crust formation, as the bulk continental crust possesses compositions similar to calc-alkalic andesites. If so, then the mafic melting residue after extraction of felsic melts should be removed and delaminated from the initial basaltic arc crust in order to form “andesitic” crust compositions. The waste materials from the factory, such as chemically modified oceanic materials and delaminated mafic lower crust materials, are transported down to the deep mantle and recycled as mantle plumes. The subduction factory has played a central role in the evolution of the solid Earth through creating continental crust and deep mantle geochemical reservoirs.",
"title": ""
},
{
"docid": "79d22f397503ea852549b9b55dbb6ac6",
"text": "This study examines the effects of body shape (women’s waist-to-hip ratio and men’s waist-to-shoulder ratio) on desirability of a potential romantic partner. In judging desirability, we expected male participants to place more emphasis on female body shape, whereas females would focus more on personality characteristics. Further, we expected that relationship type would moderate the extent to which physical characteristics were valued over personality. Specifically, physical characteristics were expected to be most valued in short-term sexual encounters when compared with long-term relationships. Two hundred and thirty-nine participants (134 females, 105 males; 86% Caucasian) rated the desirability of an opposite-sex target for a date, a one-time sexual encounter, and a serious relationship. All key hypotheses were supported by the data.",
"title": ""
},
{
"docid": "f1f08c43fdf29222a61f343390291000",
"text": "This paper describes the way of Market Basket Analysis implementation to Six Sigma methodology. Data Mining methods provide a lot of opportunities in the market sector. Basket Market Analysis is one of them. Six Sigma methodology uses several statistical methods. With implementation of Market Basket Analysis (as a part of Data Mining) to Six Sigma (to one of its phase), we can improve the results and change the Sigma performance level of the process. In our research we used GRI (General Rule Induction) algorithm to produce association rules between products in the market basket. These associations show a variety between the products. To show the dependence between the products we used a Web plot. The last algorithm in analysis was C5.0. This algorithm was used to build rule-based profiles.",
"title": ""
},
{
"docid": "804b320c6f5b07f7f4d7c5be29c572e9",
"text": "Softmax is the most commonly used output function for multiclass problems and is widely used in areas such as vision, natural language processing, and recommendation. A softmax model has linear costs in the number of classes which makes it too expensive for many real-world problems. A common approach to speed up training involves sampling only some of the classes at each training step. It is known that this method is biased and that the bias increases the more the sampling distribution deviates from the output distribution. Nevertheless, almost all recent work uses simple sampling distributions that require a large sample size to mitigate the bias. In this work, we propose a new class of kernel based sampling methods and develop an efficient sampling algorithm. Kernel based sampling adapts to the model as it is trained, thus resulting in low bias. It can also be easily applied to many models because it relies only on the model’s last hidden layer. We empirically study the trade-off of bias, sampling distribution and sample size and show that kernel based sampling results in low bias with few samples.",
"title": ""
},
{
"docid": "5a3ffb6a6c15420569ea3c2b064b1c33",
"text": "In this paper, we propose a novel tensor graph convolutional neural network (TGCNN) to conduct convolution on factorizable graphs, for which here two types of problems are focused, one is sequential dynamic graphs and the other is cross-attribute graphs. Especially, we propose a graph preserving layer to memorize salient nodes of those factorized subgraphs, i.e. cross graph convolution and graph pooling. For cross graph convolution, a parameterized Kronecker sum operation is proposed to generate a conjunctive adjacency matrix characterizing the relationship between every pair of nodes across two subgraphs. Taking this operation, then general graph convolution may be efficiently performed followed by the composition of small matrices, which thus reduces high memory and computational burden. Encapsuling sequence graphs into a recursive learning, the dynamics of graphs can be efficiently encoded as well as the spatial layout of graphs. To validate the proposed TGCNN, experiments are conducted on skeleton action datasets as well as matrix completion dataset. The experiment results demonstrate that our method can achieve more competitive performance with the state-of-the-art methods.",
"title": ""
},
{
"docid": "e1b72aba65e515e7d85cd1703bded445",
"text": "BACKGROUND AND OBJECTIVES\nTo assess the influence of risk factors on the rates and kinetics of peripheral vein phlebitis (PVP) development and its theoretical influence in absolute PVP reduction after catheter replacement.\n\n\nMETHODS\nAll peripheral short intravenous catheters inserted during one month were included (1201 catheters and 967 patients). PVP risk factors were assessed by a Cox proportional hazard model. Cumulative probability, conditional failure of PVP and theoretical estimation of the benefit from replacement at different intervals were performed.\n\n\nRESULTS\nFemale gender, catheter insertion at the emergency or medical-surgical wards, forearm site, amoxicillin-clavulamate or aminoglycosides were independent predictors of PVP with hazard ratios (95 confidence interval) of 1.46 (1.09-2.15), 1.94 (1.01-3.73), 2.51 (1.29-4.88), 1.93 (1.20-3.01), 2.15 (1.45-3.20) and 2.10 (1.01-4.63), respectively. Maximum phlebitis incidence was reached sooner in patients with ≥2 risk factors (days 3-4) than in those with <2 (days 4-5). Conditional failure increased from 0.08 phlebitis/one catheter-day for devices with ≤1 risk factors to 0.26 for those with ≥3. The greatest benefit of routine catheter exchange was obtained by replacement every 60h. However, this benefit differed according to the number of risk factors: 24.8% reduction with ≥3, 13.1% with 2, and 9.2% with ≤1.\n\n\nCONCLUSIONS\nPVP dynamics is highly influenced by identifiable risk factors which may be used to refine the strategy of catheter management. Routine replacement every 72h seems to be strictly necessary only in high-risk catheters.",
"title": ""
},
{
"docid": "b475a47a9c8e8aca82c236250bbbfc33",
"text": "OBJECTIVE\nTo issue a recommendation on the types and amounts of physical activity needed to improve and maintain health in older adults.\n\n\nPARTICIPANTS\nA panel of scientists with expertise in public health, behavioral science, epidemiology, exercise science, medicine, and gerontology.\n\n\nEVIDENCE\nThe expert panel reviewed existing consensus statements and relevant evidence from primary research articles and reviews of the literature.\n\n\nPROCESS\nAfter drafting a recommendation for the older adult population and reviewing drafts of the Updated Recommendation from the American College of Sports Medicine (ACSM) and the American Heart Association (AHA) for Adults, the panel issued a final recommendation on physical activity for older adults.\n\n\nSUMMARY\nThe recommendation for older adults is similar to the updated ACSM/AHA recommendation for adults, but has several important differences including: the recommended intensity of aerobic activity takes into account the older adult's aerobic fitness; activities that maintain or increase flexibility are recommended; and balance exercises are recommended for older adults at risk of falls. In addition, older adults should have an activity plan for achieving recommended physical activity that integrates preventive and therapeutic recommendations. The promotion of physical activity in older adults should emphasize moderate-intensity aerobic activity, muscle-strengthening activity, reducing sedentary behavior, and risk management.",
"title": ""
}
] |
scidocsrr
|
785b2bddce513baa7977fa400de3e3e9
|
Hedging Deep Features for Visual Tracking.
|
[
{
"docid": "e14d1f7f7e4f7eaf0795711fb6260264",
"text": "In this paper, we treat tracking as a learning problem of estimating the location and the scale of an object given its previous location, scale, as well as current and previous image frames. Given a set of examples, we train convolutional neural networks (CNNs) to perform the above estimation task. Different from other learning methods, the CNNs learn both spatial and temporal features jointly from image pairs of two adjacent frames. We introduce multiple path ways in CNN to better fuse local and global information. A creative shift-variant CNN architecture is designed so as to alleviate the drift problem when the distracting objects are similar to the target in cluttered environment. Furthermore, we employ CNNs to estimate the scale through the accurate localization of some key points. These techniques are object-independent so that the proposed method can be applied to track other types of object. The capability of the tracker of handling complex situations is demonstrated in many testing sequences.",
"title": ""
},
{
"docid": "001104ca832b10553b28bbd713e6cbd5",
"text": "In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.",
"title": ""
},
{
"docid": "d349cf385434027b4532080819d5745f",
"text": "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.",
"title": ""
}
] |
[
{
"docid": "3e93f1c35e42fa7abc245677f5be16ba",
"text": "In this paper, an unequal 1:N Wilkinson power divider with variable power dividing ratio is proposed. The proposed unequal power divider is composed of the conventional Wilkinson divider structure, rectangular-shaped defected ground structure (DGS), island in DGS, and varactor diodes of which capacitance is adjustable according to bias voltage. The high impedance value of microstrip line having DGS is going up and down by adjusting the bias voltage for varactor diodes. Output power dividing ratio (N) is adjusted from 2.59 to 10.4 for the unequal power divider with 2 diodes.",
"title": ""
},
{
"docid": "be398b849ba0caf2e714ea9cc8468d78",
"text": "Gadolinium based contrast agents (GBCAs) play an important role in the diagnostic evaluation of many patients. The safety of these agents has been once again questioned after gadolinium deposits were observed and measured in brain and bone of patients with normal renal function. This retention of gadolinium in the human body has been termed \"gadolinium storage condition\". The long-term and cumulative effects of retained gadolinium in the brain and elsewhere are not as yet understood. Recently, patients who report that they suffer from chronic symptoms secondary to gadolinium exposure and retention created gadolinium-toxicity on-line support groups. Their self-reported symptoms have recently been published. Bone and joint complaints, and skin changes were two of the most common complaints. This condition has been termed \"gadolinium deposition disease\". In this review we will address gadolinium toxicity disorders, from acute adverse reactions to GBCAs to gadolinium deposition disease, with special emphasis on the latter, as it is the most recently described and least known.",
"title": ""
},
{
"docid": "426a25d6536a3a388313aadbdb66bbe7",
"text": "In this review, we present the recent developments and future prospects of improving nitrogen use efficiency (NUE) in crops using various complementary approaches. These include conventional breeding and molecular genetics, in addition to alternative farming techniques based on no-till continuous cover cropping cultures and/or organic nitrogen (N) nutrition. Whatever the mode of N fertilization, an increased knowledge of the mechanisms controlling plant N economy is essential for improving NUE and for reducing excessive input of fertilizers, while maintaining an acceptable yield and sufficient profit margin for the farmers. Using plants grown under agronomic conditions, with different tillage conditions, in pure or associated cultures, at low and high N mineral fertilizer input, or using organic fertilization, it is now possible to develop further whole plant agronomic and physiological studies. These can be combined with gene, protein and metabolite profiling to build up a comprehensive picture depicting the different steps of N uptake, assimilation and recycling to produce either biomass in vegetative organs or proteins in storage organs. We provide a critical overview as to how our understanding of the agro-ecophysiological, physiological and molecular controls of N assimilation in crops, under varying environmental conditions, has been improved. We OPEN ACCESS Sustainability 2011, 3 1453 have used combined approaches, based on agronomic studies, whole plant physiology, quantitative genetics, forward and reverse genetics and the emerging systems biology. Long-term sustainability may require a gradual transition from synthetic N inputs to legume-based crop rotation, including continuous cover cropping systems, where these may be possible in certain areas of the world, depending on climatic conditions. Current knowledge and prospects for future agronomic development and application for breeding crops adapted to lower mineral fertilizer input and to alternative farming techniques are explored, whilst taking into account the constraints of both the current world economic situation and the environment.",
"title": ""
},
{
"docid": "bba21c774160b38eb64bf06b2e8b9ab7",
"text": "Open data marketplaces have emerged as a mode of addressing open data adoption barriers. However, knowledge of how such marketplaces affect digital service innovation in open data ecosystems is limited. This paper explores their value proposition for open data users based on an exploratory case study. Five prominent perceived values are identified: lower task complexity, higher access to knowledge, increased possibilities to influence, lower risk and higher visibility. The impact on open data adoption barriers is analyzed and the consequences for ecosystem sustainability is discussed. The paper concludes that open data marketplaces can lower the threshold of using open data by providing better access to open data and associated support services, and by increasing knowledge transfer within the ecosystem.",
"title": ""
},
{
"docid": "69548f662a286c0b3aca5374f36ce2c7",
"text": "A hallmark of glaucomatous optic nerve damage is retinal ganglion cell (RGC) death. RGCs, like other central nervous system neurons, have a limited capacity to survive or regenerate an axon after injury. Strategies that prevent or slow down RGC degeneration, in combination with intraocular pressure management, may be beneficial to preserve vision in glaucoma. Recent progress in neurobiological research has led to a better understanding of the molecular pathways that regulate the survival of injured RGCs. Here we discuss a variety of experimental strategies including intraocular delivery of neuroprotective molecules, viral-mediated gene transfer, cell implants and stem cell therapies, which share the ultimate goal of promoting RGC survival after optic nerve damage. The challenge now is to assess how this wealth of knowledge can be translated into viable therapies for the treatment of glaucoma and other optic neuropathies.",
"title": ""
},
{
"docid": "86502e1c68f309bb7676d5b1e9013172",
"text": "In this article, we present the Menpo 2D and Menpo 3D benchmarks, two new datasets for multi-pose 2D and 3D facial landmark localisation and tracking. In contrast to the previous benchmarks such as 300W and 300VW, the proposed benchmarks contain facial images in both semi-frontal and profile pose. We introduce an elaborate semi-automatic methodology for providing high-quality annotations for both the Menpo 2D and Menpo 3D benchmarks. In Menpo 2D benchmark, different visible landmark configurations are designed for semi-frontal and profile faces, thus making the 2D face alignment full-pose. In Menpo 3D benchmark, a united landmark configuration is designed for both semi-frontal and profile faces based on the correspondence with a 3D face model, thus making face alignment not only full-pose but also corresponding to the real-world 3D space. Based on the considerable number of annotated images, we organised Menpo 2D Challenge and Menpo 3D Challenge for face alignment under large pose variations in conjunction with CVPR 2017 and ICCV 2017, respectively. The results of these challenges demonstrate that recent deep learning architectures, when trained with the abundant data, lead to excellent results. We also provide a very simple, yet effective solution, named Cascade Multi-view Hourglass Model, to 2D and 3D face alignment. In our method, we take advantage of all 2D and 3D facial landmark annotations in a joint way. We not only capitalise on the correspondences between the semi-frontal and profile 2D facial landmarks but also employ joint supervision from both 2D and 3D facial landmarks. Finally, we discuss future directions on the topic of face alignment.",
"title": ""
},
{
"docid": "0831efef8bd60441b0aa2b0a917d04c2",
"text": "Light-weight antenna arrays require utilizing the same antenna aperture to provide multiple functions (e.g., communications and radar) in separate frequency bands. In this paper, we present a novel antenna element design for a dual-band array, comprising interleaved printed dipoles spaced to avoid grating lobes in each band. The folded dipoles are designed to be resonant at octave-separated frequency bands (1 and 2 GHz), and inkjet-printed on photographic paper. Each dipole is gap-fed by voltage induced electromagnetically from a microstrip line on the other side of the substrate. This nested element configuration shows excellent corroboration between simulated and measured data, with 10-dB return loss bandwidth of at least 5% for each band and interchannel isolation better than 15 dB. The measured element gain is 5.3 to 7 dBi in the two bands, with cross-polarization less than -25 dBi. A large array containing 39 printed dipoles has been fabricated on paper, with each dipole individually fed to facilitate independent beam control. Measurements on the array reveal broadside gain of 12 to 17 dBi in each band with low cross-polarization.",
"title": ""
},
{
"docid": "3a00a29587af4f7c5ce974a8e6970413",
"text": "After reviewing six senses of abstraction, this article focuses on abstractions that take the form of summary representations. Three central properties of these abstractions are established: ( i ) type-token interpretation; (ii) structured representation; and (iii) dynamic realization. Traditional theories of representation handle interpretation and structure well but are not sufficiently dynamical. Conversely, connectionist theories are exquisitely dynamic but have problems with structure. Perceptual symbol systems offer an approach that implements all three properties naturally. Within this framework, a loose collection of property and relation simulators develops to represent abstractions. Type-token interpretation results from binding a property simulator to a region of a perceived or simulated category member. Structured representation results from binding a configuration of property and relation simulators to multiple regions in an integrated manner. Dynamic realization results from applying different subsets of property and relation simulators to category members on different occasions. From this standpoint, there are no permanent or complete abstractions of a category in memory. Instead, abstraction is the skill to construct temporary online interpretations of a category's members. Although an infinite number of abstractions are possible, attractors develop for habitual approaches to interpretation. This approach provides new ways of thinking about abstraction phenomena in categorization, inference, background knowledge and learning.",
"title": ""
},
{
"docid": "4097fe8240f8399de8c0f7f6bdcbc72f",
"text": "Feature extraction of EEG signals is core issues on EEG based brain mapping analysis. The classification of EEG signals has been performed using features extracted from EEG signals. Many features have proved to be unique enough to use in all brain related medical application. EEG signals can be classified using a set of features like Autoregression, Energy Spectrum Density, Energy Entropy, and Linear Complexity. However, different features show different discriminative power for different subjects or different trials. In this research, two-features are used to improve the performance of EEG signals. Neural Network based techniques are applied to feature extraction of EEG signal. This paper discuss on extracting features based on Average method and Max & Min method of the data set. The Extracted Features are classified using Neural Network Temporal Pattern Recognition Technique. The two methods are compared and performance is analyzed based on the results obtained from the Neural Network classifier.",
"title": ""
},
{
"docid": "049a7164a973fb515ed033ba216ec344",
"text": "Modern vehicle fleets, e.g., for ridesharing platforms and taxi companies, can reduce passengers' waiting times by proactively dispatching vehicles to locations where pickup requests are anticipated in the future. Yet it is unclear how to best do this: optimal dispatching requires optimizing over several sources of uncertainty, including vehicles' travel times to their dispatched locations, as well as coordinating between vehicles so that they do not attempt to pick up the same passenger. While prior works have developed models for this uncertainty and used them to optimize dispatch policies, in this work we introduce a model-free approach. Specifically, we propose MOVI, a Deep Q-network (DQN)-based framework that directly learns the optimal vehicle dispatch policy. Since DQNs scale poorly with a large number of possible dispatches, we streamline our DQN training and suppose that each individual vehicle independently learns its own optimal policy, ensuring scalability at the cost of less coordination between vehicles. We then formulate a centralized receding-horizon control (RHC) policy to compare with our DQN policies. To compare these policies, we design and build MOVI as a large-scale realistic simulator based on 15 million taxi trip records that simulates policy-agnostic responses to dispatch decisions. We show that the DQN dispatch policy reduces the number of unserviced requests by 76% compared to without dispatch and 20% compared to the RHC approach, emphasizing the benefits of a model-free approach and suggesting that there is limited value to coordinating vehicle actions. This finding may help to explain the success of ridesharing platforms, for which drivers make individual decisions.",
"title": ""
},
{
"docid": "9a9bc57a279c4b88279bb1078e1e8a45",
"text": "We study the problem of visualizing large-scale and highdimensional data in a low-dimensional (typically 2D or 3D) space. Much success has been reported recently by techniques that first compute a similarity structure of the data points and then project them into a low-dimensional space with the structure preserved. These two steps suffer from considerable computational costs, preventing the state-ofthe-art methods such as the t-SNE from scaling to largescale and high-dimensional data (e.g., millions of data points and hundreds of dimensions). We propose the LargeVis, a technique that first constructs an accurately approximated K-nearest neighbor graph from the data and then layouts the graph in the low-dimensional space. Comparing to tSNE, LargeVis significantly reduces the computational cost of the graph construction step and employs a principled probabilistic model for the visualization step, the objective of which can be effectively optimized through asynchronous stochastic gradient descent with a linear time complexity. The whole procedure thus easily scales to millions of highdimensional data points. Experimental results on real-world data sets demonstrate that the LargeVis outperforms the state-of-the-art methods in both efficiency and effectiveness. The hyper-parameters of LargeVis are also much more stable over different data sets.",
"title": ""
},
{
"docid": "64306a76b61bbc754e124da7f61a4fbe",
"text": "For over 50 years, electron beams have been an important modality for providing an accurate dose of radiation to superficial cancers and disease and for limiting the dose to underlying normal tissues and structures. This review looks at many of the important contributions of physics and dosimetry to the development and utilization of electron beam therapy, including electron treatment machines, dose specification and calibration, dose measurement, electron transport calculations, treatment and treatment-planning tools, and clinical utilization, including special procedures. Also, future changes in the practice of electron therapy resulting from challenges to its utilization and from potential future technology are discussed.",
"title": ""
},
{
"docid": "8a679c93185332398c5261ddcfe81e84",
"text": "We discuss the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of an infinite-horizon discounted Markov chain, using a function approximator involving linear combinations of fixed basis functions. The algorithm we analyze performs on-line updating of a parameter vector during a single endless trajectory of an ergodic Markov chain with a finite or infinite state space. We present a proof of convergence (with probability 1), a characterization of the limit of convergence, and a bound on the resulting approximation error. In addition to proving new and stronger results than those previously available, our analysis is based on a new line of reasoning that provides new intuition about the dynamics of temporal-difference learning. Finally, we prove that on-line updates, based on entire trajectories of the Markov chain, are in a certain sense necessary for convergence. This fact reconciles positive and negative results that have been discussed in the literature, regarding the soundness of temporal-difference learning.",
"title": ""
},
{
"docid": "41df403d437a17cb65915b755060ef8a",
"text": "User verification systems that use a single biometric indicator often have to contend with noisy sensor data, restricted degrees of freedom, non-universality of the biometric trait and unacceptable error rates. Attempting to improve the performance of individual matchers in such situations may not prove to be effective because of these inherent problems. Multibiometric systems seek to alleviate some of these drawbacks by providing multiple evidences of the same identity. These systems help achieve an increase in performance that may not be possible using a single biometric indicator. Further, multibiometric systems provide anti-spoofing measures by making it difficult for an intruder to spoof multiple biometric traits simultaneously. However, an effective fusion scheme is necessary to combine the information presented by multiple domain experts. This paper addresses the problem of information fusion in biometric verification systems by combining information at the matching score level. Experimental results on combining three biometric modalities (face, fingerprint and hand geometry) are presented.",
"title": ""
},
{
"docid": "08eac8e69ef59d9149f071472fb55670",
"text": "This paper describes the issues and tradeoffs in the design and monolithic implementation of direct-conversion receivers and proposes circuit techniques that can alleviate the drawbacks of this architecture. Following a brief study of heterodyne and image-reject topologies, the direct-conversion architecture is introduced and effects such as dc offset, I=Q mismatch, even-order distortion, flicker noise, and oscillator leakage are analyzed. Related design techniques for amplification and mixing, quadrature phase calibration, and baseband processing are also described.",
"title": ""
},
{
"docid": "db622838ba5f6c76f66125cf76c47b40",
"text": "In recent years, the study of lightweight symmetric ciphers has gained interest due to the increasing demand for security services in constrained computing environments, such as in the Internet of Things. However, when there are several algorithms to choose from and different implementation criteria and conditions, it becomes hard to select the most adequate security primitive for a specific application. This paper discusses the hardware implementations of Present, a standardized lightweight cipher called to overcome part of the security issues in extremely constrained environments. The most representative realizations of this cipher are reviewed and two novel designs are presented. Using the same implementation conditions, the two new proposals and three state-of-the-art designs are evaluated and compared, using area, performance, energy, and efficiency as metrics. From this wide experimental evaluation, to the best of our knowledge, new records are obtained in terms of implementation size and energy consumption. In particular, our designs result to be adequate in regards to energy-per-bit and throughput-per-slice.",
"title": ""
},
{
"docid": "d4cd6414a9edbd6f07b4a0b5f298e2ba",
"text": "Measuring Semantic Textual Similarity (STS), between words/ terms, sentences, paragraph and document plays an important role in computer science and computational linguistic. It also has many applications over several fields such as Biomedical Informatics and Geoinformation. In this paper, we present a survey on different methods of textual similarity and we also reported about the availability of different software and tools those are useful for STS. In natural language processing (NLP), STS is a important component for many tasks such as document summarization, word sense disambiguation, short answer grading, information retrieval and extraction. We split out the measures for semantic similarity into three broad categories such as (i) Topological/Knowledge-based (ii) Statistical/ Corpus Based (iii) String based. More emphasis is given to the methods related to the WordNet taxonomy. Because topological methods, plays an important role to understand intended meaning of an ambiguous word, which is very difficult to process computationally. We also propose a new method for measuring semantic similarity between sentences. This proposed method, uses the advantages of taxonomy methods and merge these information to a language model. It considers the WordNet synsets for lexical relationships between nodes/words and a uni-gram language model is implemented over a large corpus to assign the information content value between the two nodes of different classes.",
"title": ""
},
{
"docid": "603c82380d4896b324f4511c301972e5",
"text": "Pseudolymphomatous folliculitis (PLF), which clinically mimicks cutaneous lymphoma, is a rare manifestation of cutaneous pseudolymphoma and cutaneous lymphoid hyperplasia. Here, we report on a 45-year-old Japanese woman with PLF. Dermoscopy findings revealed prominent arborizing vessels with small perifollicular and follicular yellowish spots and follicular red dots. A biopsy specimen also revealed dense lymphocytes, especially CD1a+ cells, infiltrated around the hair follicles. Without any additional treatment, the patient's nodule rapidly decreased. The presented case suggests that typical dermoscopy findings could be a possible supportive tool for the diagnosis of PLF.",
"title": ""
},
{
"docid": "edf548598375ea1e36abd57dd3bad9c7",
"text": "processes associated with social identity. Group identification, as self-categorization, constructs an intragroup prototypicality gradient that invests the most prototypical member with the appearance of having influence; the appearance arises because members cognitively and behaviorally conform to the prototype. The appearance of influence becomes a reality through depersonalized social attraction processes that makefollowers agree and comply with the leader's ideas and suggestions. Consensual social attraction also imbues the leader with apparent status and creates a status-based structural differentiation within the group into leader(s) and followers, which has characteristics ofunequal status intergroup relations. In addition, afundamental attribution process constructs a charismatic leadership personality for the leader, which further empowers the leader and sharpens the leader-follower status differential. Empirical supportfor the theory is reviewed and a range of implications discussed, including intergroup dimensions, uncertainty reduction and extremism, power, and pitfalls ofprototype-based leadership.",
"title": ""
},
{
"docid": "85b95ad66c0492661455281177004b9e",
"text": "Although relatively small in size and power output, automotive accessory motors play a vital role in improving such critical vehicle characteristics as drivability, comfort, and, most importantly, fuel economy. This paper describes a design method and experimental verification of a novel technique for torque ripple reduction in stator claw-pole permanent-magnet (PM) machines, which are a promising technology prospect for automotive accessory motors.",
"title": ""
}
] |
scidocsrr
|
51883090b3992ff102603f118f991367
|
Crowd Map: Accurate Reconstruction of Indoor Floor Plans from Crowdsourced Sensor-Rich Videos
|
[
{
"docid": "f085832faf1a2921eedd3d00e8e592db",
"text": "There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like “Notre Dame” or “Trevi Fountain.” This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world’s well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.",
"title": ""
},
{
"docid": "9ad145cd939284ed77919b73452236c0",
"text": "While WiFi-based indoor localization is attractive, the need for a significant degree of pre-deployment effort is a key challenge. In this paper, we ask the question: can we perform indoor localization with no pre-deployment effort? Our setting is an indoor space, such as an office building or a mall, with WiFi coverage but where we do not assume knowledge of the physical layout, including the placement of the APs. Users carrying WiFi-enabled devices such as smartphones traverse this space in normal course. The mobile devices record Received Signal Strength (RSS) measurements corresponding to APs in their view at various (unknown) locations and report these to a localization server. Occasionally, a mobile device will also obtain and report a location fix, say by obtaining a GPS lock at the entrance or near a window. The centerpiece of our work is the EZ Localization algorithm, which runs on the localization server. The key intuition is that all of the observations reported to the server, even the many from unknown locations, are constrained by the physics of wireless propagation. EZ models these constraints and then uses a genetic algorithm to solve them. The results from our deployment in two different buildings are promising. Despite the absence of any explicit pre-deployment calibration, EZ yields a median localization error of 2m and 7m, respectively, in a small building and a large building, which is only somewhat worse than the 0.7m and 4m yielded by the best-performing but calibration-intensive Horus scheme [29] from prior work.",
"title": ""
}
] |
[
{
"docid": "bfc349d95143237cc1cf55f77cb2044f",
"text": "Additive manufacturing, commonly referred to as 3D printing, is a technology that builds three-dimensional structures and components layer by layer. Bioprinting is the use of 3D printing technology to fabricate tissue constructs for regenerative medicine from cell-laden bio-inks. 3D printing and bioprinting have huge potential in revolutionizing the field of tissue engineering and regenerative medicine. This paper reviews the application of 3D printing and bioprinting in the field of pediatrics.",
"title": ""
},
{
"docid": "fe6f81141e58bf5cf13bec80e033e197",
"text": "Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, system that combines content-based recommendation and collaborative filtering to recommend restaurants.",
"title": ""
},
{
"docid": "48e26039d9b2e4ed3cfdbc0d3ba3f1d0",
"text": "This paper presents a trajectory generator and an active compliance control scheme, unified in a framework to synthesize dynamic, feasible and compliant trot-walking locomotion cycles for a stiff-by-nature hydraulically actuated quadruped robot. At the outset, a CoP-based trajectory generator that is constructed using an analytical solution is implemented to obtain feasible and dynamically balanced motion references in a systematic manner. Initial conditions are uniquely determined for symmetrical motion patterns, enforcing that trajectories are seamlessly connected both in position, velocity and acceleration levels, regardless of the given support phase. The active compliance controller, used simultaneously, is responsible for sufficient joint position/force regulation. An admittance block is utilized to compute joint displacements that correspond to joint force errors. In addition to position feedback, these joint displacements are inserted to the position control loop as a secondary feedback term. In doing so, active compliance control is achieved, while the position/force trade-off is modulated via the virtual admittance parameters. Various trot-walking experiments are conducted with the proposed framework using HyQ, a ~ 75kg hydraulically actuated quadruped robot. We present results of repetitive, continuous, and dynamically equilibrated trot-walking locomotion cycles, both on level surface and uneven surface walking experiments.",
"title": ""
},
{
"docid": "7c291acaf26a61dc5155af21d12c2aaf",
"text": "Recently, deep learning and deep neural networks have attracted considerable attention and emerged as one predominant field of research in the artificial intelligence community. The developed techniques have also gained widespread use in various domains with good success, such as automatic speech recognition, information retrieval and text classification, etc. Among them, long short-term memory (LSTM) networks are well suited to such tasks, which can capture long-range dependencies among words efficiently, meanwhile alleviating the gradient vanishing or exploding problem during training effectively. Following this line of research, in this paper we explore a novel use of a Siamese LSTM based method to learn more accurate document representation for text categorization. Such a network architecture takes a pair of documents with variable lengths as the input and utilizes pairwise learning to generate distributed representations of documents that can more precisely render the semantic distance between any pair of documents. In doing so, documents associated with the same semantic or topic label could be mapped to similar representations having a relatively higher semantic similarity. Experiments conducted on two benchmark text categorization tasks, viz. IMDB and 20Newsgroups, show that using a three-layer deep neural network based classifier that takes a document representation learned from the Siamese LSTM sub-networks as the input can achieve competitive performance in relation to several state-of-the-art methods.",
"title": ""
},
{
"docid": "c2f338aef785f0d6fee503bf0501a558",
"text": "Recognizing 3-D objects in cluttered scenes is a challenging task. Common approaches find potential feature correspondences between a scene and candidate models by matching sampled local shape descriptors and select a few correspondences with the highest descriptor similarity to identify models that appear in the scene. However, real scans contain various nuisances, such as noise, occlusion, and featureless object regions. This makes selected correspondences have a certain portion of false positives, requiring adopting the time-consuming model verification many times to ensure accurate recognition. This paper proposes a 3-D object recognition approach with three key components. First, we construct a Signature of Geometric Centroids descriptor that is descriptive and robust, and apply it to find high-quality potential feature correspondences. Second, we measure geometric compatibility between a pair of potential correspondences based on isometry and three angle-preserving components. Third, we perform effective correspondence selection by using both descriptor similarity and compatibility with an auxiliary set of “less” potential correspondences. Experiments on publicly available data sets demonstrate the robustness and/or efficiency of the descriptor, selection approach, and recognition framework. Comparisons with the state-of-the-arts validate the superiority of our recognition approach, especially under challenging scenarios.",
"title": ""
},
{
"docid": "ee6461f83cee5fdf409a130d2cfb1839",
"text": "This paper introduces a novel three-phase buck-type unity power factor rectifier appropriate for high power Electric Vehicle battery charging mains interfaces. The characteristics of the converter, named the Swiss Rectifier, including the principle of operation, modulation strategy, suitable control structure, and dimensioning equations are described in detail. Additionally, the proposed rectifier is compared to a conventional 6-switch buck-type ac-dc power conversion. According to the results, the Swiss Rectifier is the topology of choice for a buck-type PFC. Finally, the feasibility of the Swiss Rectifier concept for buck-type rectifier applications is demonstrated by means of a hardware prototype.",
"title": ""
},
{
"docid": "41539545b3d1f6a90607cc75d1dccf2b",
"text": "Object selection and manipulation in world-fixed displays such as CAVE-type systems are typically achieved with tracked input devices, which lack the tangibility of real-world interactions. Conversely, due to the visual blockage of the real world, head-mounted displays allow the use of many types of real world objects that can convey realistic haptic feedback. To bridge this gap, we propose Specimen Box, an interaction technique that allows users to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box in relation to the user's point of view. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We hypothesize that the effect of holding a physical box, which is a valid part of the overall scenario, would improve user performance and experience. To verify this hypothesis, we conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that in our specific task, performance was significantly faster and rotation rate was significantly lower with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box.",
"title": ""
},
{
"docid": "faf53f190fe226ce14f32f9d44d551b5",
"text": "We present a study of how Linux kernel developers respond to bug reports issued by a static analysis tool. We found that developers prefer to triage reports in younger, smaller, and more actively-maintained files ( §2), first address easy-to-fix bugs and defer difficult (but possibly critical) bugs ( §3), and triage bugs in batches rather than individually (§4). Also, although automated tools cannot find many types of bugs, they can be effective at directing developers’ attentions towards parts of the codebase that contain up to 3X more user-reported bugs ( §5). Our insights into developer attitudes towards static analysis tools allow us to make suggestions for improving their usability and effectiveness. We feel that it could be effective to run static analysis tools continuously while programming and before committing code, to rank reports so that those most likely to be triaged are shown to developers first, to show the easiest reports to new developers, to perform deeper analysis on more actively-maintained code, and to use reports as indirect indicators of code quality and importance.",
"title": ""
},
{
"docid": "97cc6d9ed4c1aba0dc09635350a401ee",
"text": "The Public Key Infrastructure (PKI) in use today on the Internet to secure communications has several drawbacks arising from its centralised and non-transparent design. In the past there has been instances of certificate authorities publishing rogue certificates for targeted attacks, and this has been difficult to immediately detect as certificate authorities are not transparent about the certificates they issue. Furthermore, the centralised selection of trusted certificate authorities by operating system and browser vendors means that it is not practical to untrust certificate authorities that have issued rogue certificates, as this would disrupt the TLS process for many other hosts.\n SCPKI is an alternative PKI system based on a decentralised and transparent design using a web-of-trust model and a smart contract on the Ethereum blockchain, to make it easily possible for rogue certificates to be detected when they are published. The web-of-trust model is designed such that an entity or authority in the system can verify (or vouch for) fine-grained attributes of another entity's identity (such as company name or domain name), as an alternative to the centralised certificate authority identity verification model.",
"title": ""
},
{
"docid": "41dc9d6fd6a0550cccac1bc5ba27b11d",
"text": "A low-power forwarded-clock I/O transceiver architecture is presented that employs a high degree of output/input multiplexing, supply-voltage scaling with data rate, and low-voltage circuit techniques to enable low-power operation. The transmitter utilizes a 4:1 output multiplexing voltage-mode driver along with 4-phase clocking that is efficiently generated from a passive poly-phase filter. The output driver voltage swing is accurately controlled from 100–200 <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm mV}_{\\rm ppd}$</tex></formula> using a low-voltage pseudo-differential regulator that employs a partial negative-resistance load for improved low frequency gain. 1:8 input de-multiplexing is performed at the receiver equalizer output with 8 parallel input samplers clocked from an 8-phase injection-locked oscillator that provides more than 1UI de-skew range. In the transmitter clocking circuitry, per-phase duty-cycle and phase-spacing adjustment is implemented to allow adequate timing margins at low operating voltages. Fabricated in a general purpose 65 nm CMOS process, the transceiver achieves 4.8–8 Gb/s at 0.47–0.66 pJ/b energy efficiency for <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm V}_{\\rm DD}=0.6$</tex></formula>–0.8 V.",
"title": ""
},
{
"docid": "e6665cc046733c66103506cdbb4814d2",
"text": "....................................................................... 2 Table of",
"title": ""
},
{
"docid": "6101f4582b1ad0b0306fe3d513940fab",
"text": "Although a great deal of media attention has been given to the negative effects of playing video games, relatively less attention has been paid to the positive effects of engaging in this activity. Video games in health care provide ample examples of innovative ways to use existing commercial games for health improvement or surgical training. Tailor-made games help patients be more adherent to treatment regimens and train doctors how to manage patients in different clinical situations. In this review, examples in the scientific literature of commercially available and tailor-made games used for education and training with patients and medical students and doctors are summarized. There is a history of using video games with patients from the early days of gaming in the 1980s, and this has evolved into a focus on making tailor-made games for different disease groups, which have been evaluated in scientific trials more recently. Commercial video games have been of interest regarding their impact on surgical skill. More recently, some basic computer games have been developed and evaluated that train doctors in clinical skills. The studies presented in this article represent a body of work outlining positive effects of playing video games in the area of health care.",
"title": ""
},
{
"docid": "7cf7b6d0ad251b98956a29ad9192cb63",
"text": "A method for two dimensional position finding of stationary targets whose bearing measurements suffers from indeterminable bias and random noise has been proposed. The algorithm uses convex optimization to minimize an error function which has been calculated based on circular as well as linear loci of error. Taking into account a number of observations, certain modifications have been applied to the initial crude method so as to arrive at a faster, more accurate method. Simulation results of the method illustrate up to 30% increase in accuracy compared with the well-known least square filter.",
"title": ""
},
{
"docid": "cb702c48a242c463dfe1ac1f208acaa2",
"text": "In 2011, Lake Erie experienced the largest harmful algal bloom in its recorded history, with a peak intensity over three times greater than any previously observed bloom. Here we show that long-term trends in agricultural practices are consistent with increasing phosphorus loading to the western basin of the lake, and that these trends, coupled with meteorological conditions in spring 2011, produced record-breaking nutrient loads. An extended period of weak lake circulation then led to abnormally long residence times that incubated the bloom, and warm and quiescent conditions after bloom onset allowed algae to remain near the top of the water column and prevented flushing of nutrients from the system. We further find that all of these factors are consistent with expected future conditions. If a scientifically guided management plan to mitigate these impacts is not implemented, we can therefore expect this bloom to be a harbinger of future blooms in Lake Erie.",
"title": ""
},
{
"docid": "afc9fbf2db89a5220c897afcbabe028f",
"text": "Evidence for viewpoint-specific image-based object representations have been collected almost entirely using exemplar-specific recognition tasks. Recent results, however, implicate image-based processes in more categorical tasks, for instance when objects contain qualitatively different 3D parts. Although such discriminations approximate class-level recognition. they do not establish whether image-based representations can support generalization across members of an object class. This issue is critical to any theory of recognition, in that one hallmark of human visual competence is the ability to recognize unfamiliar instances of a familiar class. The present study addresses this questions by testing whether viewpoint-specific representations for some members of a class facilitate the recognition of other members of that class. Experiment 1 demonstrates that familiarity with several members of a class of novel 3D objects generalizes in a viewpoint-dependent manner to cohort objects from the same class. Experiment 2 demonstrates that this generalization is based on the degree of familiarity and the degree of geometrical distinctiveness for particular viewpoints. Experiment 3 demonstrates that this generalization is restricted to visually-similar objects rather than all objects learned in a given context. These results support the hypothesis that image-based representations are viewpoint dependent, but that these representations generalize across members of perceptually-defined classes. More generally, these results provide evidence for a new approach to image-based recognition in which object classes are represented as cluster of visually-similar viewpoint-specific representations.",
"title": ""
},
{
"docid": "b4b6b51c8f8a0da586fe66b61711222c",
"text": "Although game-tree search works well in perfect-information games, it is less suitable for imperfect-information games such as contract bridge. The lack of knowledge about the opponents' possible moves gives the game tree a very large branching factor, making it impossible to search a signiicant portion of this tree in a reasonable amount of time. This paper describes our approach for overcoming this problem. We represent information about bridge in a task network that is extended to represent multi-agency and uncertainty. Our game-playing procedure uses this task network to generate game trees in which the set of alternative choices is determined not by the set of possible actions, but by the set of available tactical and strategic schemes. We have tested this approach on declarer play in the game of bridge, in an implementation called Tignum 2. On 5000 randomly generated notrump deals, Tignum 2 beat the strongest commercially available program by 1394 to 1302, with 2304 ties. These results are statistically signiicant at the = 0:05 level. Tignum 2 searched an average of only 8745.6 moves per deal in an average time of only 27.5 seconds per deal on a Sun SPARCstation 10. Further enhancements to Tignum 2 are currently underway.",
"title": ""
},
{
"docid": "03cd6ef0cc0dab9f33b88dd7ae4227c2",
"text": "The dopaminergic system plays a pivotal role in the central nervous system via its five diverse receptors (D1–D5). Dysfunction of dopaminergic system is implicated in many neuropsychological diseases, including attention deficit hyperactivity disorder (ADHD), a common mental disorder that prevalent in childhood. Understanding the relationship of five different dopamine (DA) receptors with ADHD will help us to elucidate different roles of these receptors and to develop therapeutic approaches of ADHD. This review summarized the ongoing research of DA receptor genes in ADHD pathogenesis and gathered the past published data with meta-analysis and revealed the high risk of DRD5, DRD2, and DRD4 polymorphisms in ADHD.",
"title": ""
},
{
"docid": "97adb3a003347f579706cd01a762bdc9",
"text": "The Universal Serial Bus (USB) is an extremely popular interface standard for computer peripheral connections and is widely used in consumer Mass Storage Devices (MSDs). While current consumer USB MSDs provide relatively high transmission speed and are convenient to carry, the use of USB MSDs has been prohibited in many commercial and everyday environments primarily due to security concerns. Security protocols have been previously proposed and a recent approach for the USB MSDs is to utilize multi-factor authentication. This paper proposes significant enhancements to the three-factor control protocol that now makes it secure under many types of attacks including the password guessing attack, the denial-of-service attack, and the replay attack. The proposed solution is presented with a rigorous security analysis and practical computational cost analysis to demonstrate the usefulness of this new security protocol for consumer USB MSDs.",
"title": ""
},
{
"docid": "e29596a39ef50454de3035c5bd80e68a",
"text": "A microfluidic device designed to generate monodispersed picoliter to femtoliter sized droplet emulsions at controlled rates is presented. This PDMS microfabricated device utilizes the geometry of the channel junctions in addition to the flow rates to control the droplet sizes. An expanding nozzle is used to control the breakup location of the droplet generation process. The droplet breakup occurs at a fixed point d droplets with s 100 nm can b generation r ©",
"title": ""
},
{
"docid": "dd4a95a6ffdb1a1c5c242b7a5d969d29",
"text": "A microstrip antenna with frequency agility and polarization diversity is presented. Commercially available packaged RF microelectrical-mechanical (MEMS) single-pole double-throw (SPDT) devices are used with a novel feed network to provide four states of polarization control; linear-vertical, linear-horizontal, left-hand circular and right-handed circular. Also, hyper-abrupt silicon junction tuning diodes are used to tune the antenna center frequency from 0.9-1.5 GHz. The microstrip antenna is 1 in x 1 in, and is fabricated on a 4 in x 4 in commercial-grade dielectric laminate. To the authors' knowledge, this is the first demonstration of an antenna element with four polarization states across a tunable bandwidth of 1.4:1.",
"title": ""
}
] |
scidocsrr
|
08e952323708557df37939ab80bf692e
|
Continuum regression for cross-modal multimedia retrieval
|
[
{
"docid": "6508fc8732fd22fde8c8ac180a2e19e3",
"text": "The problem of joint modeling the text and image components of multimedia documents is studied. The text component is represented as a sample from a hidden topic model, learned with latent Dirichlet allocation, and images are represented as bags of visual (SIFT) features. Two hypotheses are investigated: that 1) there is a benefit to explicitly modeling correlations between the two components, and 2) this modeling is more effective in feature spaces with higher levels of abstraction. Correlations between the two components are learned with canonical correlation analysis. Abstraction is achieved by representing text and images at a more general, semantic level. The two hypotheses are studied in the context of the task of cross-modal document retrieval. This includes retrieving the text that most closely matches a query image, or retrieving the images that most closely match a query text. It is shown that accounting for cross-modal correlations and semantic abstraction both improve retrieval accuracy. The cross-modal model is also shown to outperform state-of-the-art image retrieval systems on a unimodal retrieval task.",
"title": ""
},
{
"docid": "0d292d5c1875845408c2582c182a6eb9",
"text": "Partial Least Squares (PLS) is a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises of regression and classification tasks as well as dimension reduction techniques and modeling tools. The underlying assumption of all PLS methods is that the observed data is generated by a system or process which is driven by a small number of latent (not directly observed or measured) variables. Projections of the observed data to its latent structure by means of PLS was developed by Herman Wold and coworkers [48, 49, 52]. PLS has received a great amount of attention in the field of chemometrics. The algorithm has become a standard tool for processing a wide spectrum of chemical data problems. The success of PLS in chemometrics resulted in a lot of applications in other scientific areas including bioinformatics, food research, medicine, pharmacology, social sciences, physiology–to name but a few [28, 25, 53, 29, 18, 22]. This chapter introduces the main concepts of PLS and provides an overview of its application to different data analysis problems. Our aim is to present a concise introduction, that is, a valuable guide for anyone who is concerned with data analysis. In its general form PLS creates orthogonal score vectors (also called latent vectors or components) by maximising the covariance between different sets of variables. PLS dealing with two blocks of variables is considered in this chapter, although the PLS extensions to model relations among a higher number of sets exist [44, 46, 47, 48, 39]. PLS is similar to Canonical Correlation Analysis (CCA) where latent vectors with maximal correlation are extracted [24]. There are different PLS techniques to extract latent vectors, and each of them gives rise to a variant of PLS. PLS can be naturally extended to regression problems. The predictor and predicted (response) variables are each considered as a block of variables. PLS then extracts the score vectors which serve as a new predictor representation",
"title": ""
}
] |
[
{
"docid": "5a74a585fb58ff09c05d807094523fb9",
"text": "Deep learning techniques are famous due to Its capability to cope with large-scale data these days. They have been investigated within various of applications e.g., language, graphical modeling, speech, audio, image recognition, video, natural language and signal processing areas. In addition, extensive researches applying machine-learning methods in Intrusion Detection System (IDS) have been done in both academia and industry. However, huge data and difficulties to obtain data instances are hot challenges to machine-learning-based IDS. We show some limitations of previous IDSs which uses classic machine learners and introduce feature learning including feature construction, extraction and selection to overcome the challenges. We discuss some distinguished deep learning techniques and its application for IDS purposes. Future research directions using deep learning techniques for IDS purposes are briefly summarized.",
"title": ""
},
{
"docid": "e08990fec382e1ba5c089d8bc1629bc5",
"text": "Goal-oriented spoken dialogue systems have been the most prominent component in todays virtual personal assistants, which allow users to speak naturally in order to finish tasks more efficiently. The advancement of deep learning technologies has recently risen the applications of neural models to dialogue modeling. However, applying deep learning technologies for building robust and scalable dialogue systems is still a challenging task and an open research area as it requires deeper understanding of the classic pipelines as well as detailed knowledge of the prior work and the recent state-of-the-art work. Therefore, this tutorial is designed to focus on an overview of dialogue system development while describing most recent research for building dialogue systems, and summarizing the challenges, in order to allow researchers to study the potential improvements of the state-of-the-art dialogue systems. The tutorial material is available at http://deepdialogue.miulab.tw. 1 Tutorial Overview With the rising trend of artificial intelligence, more and more devices have incorporated goal-oriented spoken dialogue systems. Among popular virtual personal assistants, Microsoft’s Cortana, Apple’s Siri, Amazon Alexa, and Google Assistant have incorporated dialogue system modules in various devices, which allow users to speak naturally in order to finish tasks more efficiently. Traditional conversational systems have rather complex and/or modular pipelines. The advancement of deep learning technologies has recently risen the applications of neural models to dialogue modeling. Nevertheless, applying deep learning technologies for building robust and scalable dialogue systems is still a challenging task and an open research area as it requires deeper understanding of the classic pipelines as well as detailed knowledge on the benchmark of the models of the prior work and the recent state-of-the-art work. The goal of this tutorial is to provide the audience with the developing trend of dialogue systems, and a roadmap to get them started with the related work. The first section motivates the work on conversationbased intelligent agents, in which the core underlying system is task-oriented dialogue systems. The following section describes different approaches using deep learning for each component in the dialogue system and how it is evaluated. The last two sections focus on discussing the recent trends and current challenges on dialogue system technology and summarize the challenges and conclusions. The detailed content is described as follows. 2 Dialogue System Basics This section will motivate the work on conversation-based intelligent agents, in which the core underlying system is task-oriented spoken dialogue systems. The section starts with an overview of the standard pipeline framework for dialogue system illustrated in Figure 1 (Tur and De Mori, 2011). Basic components of a dialog system are automatic speech recognition (ASR), language understanding (LU), dialogue management (DM), and natural language generation (NLG) (Rudnicky et al., 1999; Zue et al., 2000; Zue and Glass, 2000). This tutorial will mainly focus on LU, DM, and NLG parts.",
"title": ""
},
{
"docid": "28531c596a9df30b91d9d1e44d5a7081",
"text": "The academic community has published millions of research papers to date, and the number of new papers has been increasing with time. To discover new research, researchers typically rely on manual methods such as keyword-based search, reading proceedings of conferences, browsing publication lists of known experts, or checking the references of the papers they are interested. Existing tools for the literature search are suitable for a first-level bibliographic search. However, they do not allow complex second-level searches. In this paper, we present a web service called TheAdvisor (http://theadvisor.osu.edu) which helps the users to build a strong bibliography by extending the document set obtained after a first-level search. The service makes use of the citation graph for recommendation. It also features diversification, relevance feedback, graphical visualization, venue and reviewer recommendation. In this work, we explain the design criteria and rationale we employed to make the TheAdvisor a useful and scalable web service along with a thorough experimental evaluation.",
"title": ""
},
{
"docid": "7d820e831096dac701e7f0526a8a11da",
"text": "We propose a system for easily preparing arbitrary wide-area environments for subsequent real-time tracking with a handheld device. Our system evaluation shows that minimal user effort is required to initialize a camera tracking session in an unprepared environment. We combine panoramas captured using a handheld omnidirectional camera from several viewpoints to create a point cloud model. After the offline modeling step, live camera pose tracking is initialized by feature point matching, and continuously updated by aligning the point cloud model to the camera image. Given a reconstruction made with less than five minutes of video, we achieve below 25 cm translational error and 0.5 degrees rotational error for over 80% of images tested. In contrast to camera-based simultaneous localization and mapping (SLAM) systems, our methods are suitable for handheld use in large outdoor spaces.",
"title": ""
},
{
"docid": "05e754e0567bf6859d7a68446fc81bad",
"text": "Bad presentation of medical statistics such as the risks associated with a particular intervention can lead to patients making poor decisions on treatment. Particularly confusing are single event probabilities, conditional probabilities (such as sensitivity and specificity), and relative risks. How can doctors improve the presentation of statistical information so that patients can make well informed decisions?",
"title": ""
},
{
"docid": "dd1fd4f509e385ea8086a45a4379a8b5",
"text": "As we move towards large-scale object detection, it is unrealistic to expect annotated training data for all object classes at sufficient scale, and so methods capable of unseen object detection are required. We propose a novel zero-shot method based on training an end-to-end model that fuses semantic attribute prediction with visual features to propose object bounding boxes for seen and unseen classes. While we utilize semantic features during training, our method is agnostic to semantic information for unseen classes at test-time. Our method retains the efficiency and effectiveness of YOLO [1] for objects seen during training, while improving its performance for novel and unseen objects. The ability of state-of-art detection methods to learn discriminative object features to reject background proposals also limits their performance for unseen objects. We posit that, to detect unseen objects, we must incorporate semantic information into the visual domain so that the learned visual features reflect this information and leads to improved recall rates for unseen objects. We test our method on PASCAL VOC and MS COCO dataset and observed significant improvements on the average precision of unseen classes.",
"title": ""
},
{
"docid": "1ed93d114804da5714b7b612f40e8486",
"text": "Volleyball players are at high risk of overuse shoulder injuries, with spike biomechanics a perceived risk factor. This study compared spike kinematics between elite male volleyball players with and without a history of shoulder injuries. Height, mass, maximum jump height, passive shoulder rotation range of motion (ROM), and active trunk ROM were collected on elite players with (13) and without (11) shoulder injury history and were compared using independent samples t tests (P < .05). The average of spike kinematics at impact and range 0.1 s before and after impact during down-the-line and cross-court spike types were compared using linear mixed models in SPSS (P < .01). No differences were detected between the injured and uninjured groups. Thoracic rotation and shoulder abduction at impact and range of shoulder rotation velocity differed between spike types. The ability to tolerate the differing demands of the spike types could be used as return-to-play criteria for injured athletes.",
"title": ""
},
{
"docid": "d18c77b3d741e1a7ed10588f6a3e75c0",
"text": "Given only a few image-text pairs, humans can learn to detect semantic concepts and describe the content. For machine learning algorithms, they usually require a lot of data to train a deep neural network to solve the problem. However, it is challenging for the existing systems to generalize well to the few-shot multi-modal scenario, because the learner should understand not only images and texts but also their relationships from only a few examples. In this paper, we tackle two multi-modal problems, i.e., image captioning and visual question answering (VQA), in the few-shot setting.\n We propose Fast Parameter Adaptation for Image-Text Modeling (FPAIT) that learns to learn jointly understanding image and text data by a few examples. In practice, FPAIT has two benefits. (1) Fast learning ability. FPAIT learns proper initial parameters for the joint image-text learner from a large number of different tasks. When a new task comes, FPAIT can use a small number of gradient steps to achieve a good performance. (2) Robust to few examples. In few-shot tasks, the small training data will introduce large biases in Convolutional Neural Networks (CNN) and damage the learner's performance. FPAIT leverages dynamic linear transformations to alleviate the side effects of the small training set. In this way, FPAIT flexibly normalizes the features and thus reduces the biases during training. Quantitatively, FPAIT achieves superior performance on both few-shot image captioning and VQA benchmarks.",
"title": ""
},
{
"docid": "fd6eea8007c3e58664ded211bfbc52f7",
"text": "We present our overall third ranking solution for the KDD Cup 2010 on educational data mining. The goal of the competition was to predict a student’s ability to answer questions correctly, based on historic results. In our approach we use an ensemble of collaborative filtering techniques, as used in the field of recommender systems and adopt them to fit the needs of the competition. The ensemble of predictions is finally blended, using a neural network.",
"title": ""
},
{
"docid": "d1c2c0b74caf85f25d761128ed708e6c",
"text": "Nearly all our buildings and workspaces are protected against fire breaks, which may occur due to some fault in the electric circuitries and power sources. The immediate alarming and aid to extinguish the fire in such situations of fire breaks are provided using embedded systems installed in the buildings. But as the area being monitored against such fire threats becomes vast, these systems do not provide a centralized solution. For the protection of such a huge area, like a college campus or an industrial park, a centralized wireless fire control system using Wireless sensor network technology is developed. The system developed connects the five dangers prone zones of the campus with a central control room through a ZigBee communication interface such that in case of any fire break in any of the building, a direct communication channel is developed that will send an immediate signal to the control room. In case if any of the emergency zone lies out of reach of the central node, multi hoping technique is adopted for the effective transmitting of the signal. The five nodes maintains a wireless interlink among themselves as well as with the central node for this purpose. Moreover a hooter is attached along with these nodes to notify the occurrence of any fire break such that the persons can leave the building immediately and with the help of the signal received in the control room, the exact building where the fire break occurred is identified and fire extinguishing is done. The real time system developed is implemented in Atmega32 with temperature, fire and humidity sensors and ZigBee module.",
"title": ""
},
{
"docid": "2ff3d496f0174ffc0e3bd21952c8f0ae",
"text": "Each time a latency in responding to a stimulus is measured, we owe a debt to F. C. Donders, who in the mid-19th century made the fundamental discovery that the time required to perform a mental computation reveals something fundamental about how the mind works. Donders expressed the idea in the following simple and optimistic statement about the feasibility of measuring the mind: “Will all quantitative treatment of mental processes be out of the question then? By no means! An important factor seemed to be susceptible to measurement: I refer to the time required for simple mental processes” (Donders, 1868/1969, pp. 413–414). With particular variations of simple stimuli and subjects’ choices, Donders demonstrated that it is possible to bring order to understanding invisible thought processes by computing the time that elapses between stimulus presentation and response production. A more specific observation he offered lies at the center of our own modern understanding of mental operations:",
"title": ""
},
{
"docid": "f64e65df9db7219336eafb20d38bf8cf",
"text": "With predictions that this nursing shortage will be more severe and have a longer duration than has been previously experienced, traditional strategies implemented by employers will have limited success. The aging nursing workforce, low unemployment, and the global nature of this shortage compound the usual factors that contribute to nursing shortages. For sustained change and assurance of an adequate supply of nurses, solutions must be developed in several areas: education, healthcare deliver systems, policy and regulations, and image. This shortage is not solely nursing's issue and requires a collaborative effort among nursing leaders in practice and education, health care executives, government, and the media. This paper poses several ideas of solutions, some already underway in the United States, as a catalyst for readers to initiate local programs.",
"title": ""
},
{
"docid": "a120d11f432017c3080bb4107dd7ea71",
"text": "Over the last decade, the zebrafish has entered the field of cardiovascular research as a new model organism. This is largely due to a number of highly successful small- and large-scale forward genetic screens, which have led to the identification of zebrafish mutants with cardiovascular defects. Genetic mapping and identification of the affected genes have resulted in novel insights into the molecular regulation of vertebrate cardiac development. More recently, the zebrafish has become an attractive model to study the effect of genetic variations identified in patients with cardiovascular defects by candidate gene or whole-genome-association studies. Thanks to an almost entirely sequenced genome and high conservation of gene function compared with humans, the zebrafish has proved highly informative to express and study human disease-related gene variants, providing novel insights into human cardiovascular disease mechanisms, and highlighting the suitability of the zebrafish as an excellent model to study human cardiovascular diseases. In this review, I discuss recent discoveries in the field of cardiac development and specific cases in which the zebrafish has been used to model human congenital and acquired cardiac diseases.",
"title": ""
},
{
"docid": "581efb9277c3079a0f2bf59949600739",
"text": "Artificial Intelligence methods are becoming very popular in medical applications due to high reliability and ease. From the past decades, Artificial Intelligence techniques such as Artificial Neural Networks, Fuzzy Expert Systems, Robotics etc have found an increased usage in disease diagnosis, patient monitoring, disease risk evaluation, predicting effect of new medicines and robotic handling of surgeries. This paper presents an introduction and survey on different artificial intelligence methods used by researchers for the application of diagnosing or predicting Hypertension. Keywords-Hypertension, Artificial Neural Networks, Fuzzy Systems.",
"title": ""
},
{
"docid": "b236003ad282e973b3ebf270894c2c07",
"text": "Darier's disease is characterized by dense keratotic lesions in the seborrheic areas of the body such as scalp, forehead, nasolabial folds, trunk and inguinal region. It is a rare genodermatosis, an autosomal dominant inherited disease that may be associated with neuropsichiatric disorders. It is caused by ATPA2 gene mutation, presenting cutaneous and dermatologic expressions. Psychiatric symptoms are depression, suicidal attempts, and bipolar affective disorder. We report a case of Darier's disease in a 48-year-old female patient presenting severe cutaneous and psychiatric manifestations.",
"title": ""
},
{
"docid": "1ad08b9ecc0a08f5e0847547c55ea90d",
"text": "Text summarization is the process of creating a shorter version of one or more text documents. Automatic text summarization has become an important way of finding relevant information in large text libraries or in the Internet. Extractive text summarization techniques select entire sentences from documents according to some criteria to form a summary. Sentence scoring is the technique most used for extractive text summarization, today. Depending on the context, however, some techniques may yield better results than some others. This paper advocates the thesis that the quality of the summary obtained with combinations of sentence scoring methods depend on text subject. Such hypothesis is evaluated using three different contexts: news, blogs and articles. The results obtained show the validity of the hypothesis formulated and point at which techniques are more effective in each of those contexts studied.",
"title": ""
},
{
"docid": "acd95dfc27228f107fa44b0dc5039b72",
"text": "How to efficiently train recurrent networks remains a challenging and active research topic. Most of the proposed training approaches are based on computational ways to efficiently obtain the gradient of the error function, and can be generally grouped into five major groups. In this study we present a derivation that unifies these approaches. We demonstrate that the approaches are only five different ways of solving a particular matrix equation. The second goal of this paper is develop a new algorithm based on the insights gained from the novel formulation. The new algorithm, which is based on approximating the error gradient, has lower computational complexity in computing the weight update than the competing techniques for most typical problems. In addition, it reaches the error minimum in a much smaller number of iterations. A desirable characteristic of recurrent network training algorithms is to be able to update the weights in an on-line fashion. We have also developed an on-line version of the proposed algorithm, that is based on updating the error gradient approximation in a recursive manner.",
"title": ""
},
{
"docid": "87eed35ce26bf0194573f3ed2e6be7ca",
"text": "Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem, because such visualization can reveal deep insights of complex data. However, most of the existing embedding approaches run on an excessively high precision, even when users want to obtain a brief insight from a visualization of large-scale datasets, ignoring the fact that in the end, the outputs are embedded onto a fixed-range pixel-based screen space. Motivated by this observation and directly considering the properties of screen space in an embedding algorithm, we propose Pixel-Aligned Stochastic Neighbor Embedding (PixelSNE), a highly efficient screen resolution-driven 2D embedding method which accelerates Barnes-Hut treebased t-distributed stochastic neighbor embedding (BH-SNE), which is known to be a state-of-the-art 2D embedding method. Our experimental results show a significantly faster running time for PixelSNE compared to BH-SNE for various datasets while maintaining comparable embedding quality.",
"title": ""
},
{
"docid": "9f786e59441784d821da00d07d2fc42e",
"text": "Employees are the most important asset of the organization. It’s a major challenge for the organization to retain its workforce as a lot of cost is incurred on them directly or indirectly. In order to have competitive advantage over the other organizations, the focus has to be on the employees. As ultimately the employees are the face of the organization as they are the building blocks of the organization. Thus their retention is a major area of concern. So attempt has been made to reduce the turnover rate of the organization. Therefore this paper attempts to review the various antecedents of turnover which affect turnover intentions of the employees.",
"title": ""
}
] |
scidocsrr
|
0a9d4d03ae5a56ee88bcb855ccb97ef2
|
Supervised Attentions for Neural Machine Translation
|
[
{
"docid": "34964b0f46c09c5eeb962f26465c3ee1",
"text": "Attention mechanism advanced state-of-the-art neural machine translation (NMT) by jointly learning to align and translate. However, attentional NMT ignores past alignment information, which leads to over-translation and undertranslation problems. In response to this problem, we maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust the future attention, which guides NMT to pay more attention to the untranslated source words. Experiments show that coverage-based NMT significantly improves both alignment and translation quality over NMT without coverage.",
"title": ""
},
{
"docid": "6dce88afec3456be343c6a477350aa49",
"text": "In order to capture rich language phenomena, neural machine translation models have to use a large vocabulary size, which requires high computing time and large memory usage. In this paper, we alleviate this issue by introducing a sentence-level or batch-level vocabulary, which is only a very small sub-set of the full output vocabulary. For each sentence or batch, we only predict the target words in its sentencelevel or batch-level vocabulary. Thus, we reduce both the computing time and the memory usage. Our method simply takes into account the translation options of each word or phrase in the source sentence, and picks a very small target vocabulary for each sentence based on a wordto-word translation model or a bilingual phrase library learned from a traditional machine translation model. Experimental results on the large-scale English-toFrench task show that our method achieves better translation performance by 1 BLEU point over the large vocabulary neural machine translation system of Jean et al. (2015).",
"title": ""
},
{
"docid": "8acd410ff0757423d09928093e7e8f63",
"text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast align .",
"title": ""
}
] |
[
{
"docid": "f6774efff6e22c96a43e377deb630e16",
"text": "The emergence of various and disparate social media platforms has opened opportunities for the research on cross-platform media analysis. This provides huge potentials to solve many challenging problems which cannot be well explored in one single platform. In this paper, we investigate into cross-platform social relation and behavior information to address the cold-start friend recommendation problem. In particular, we conduct an in-depth data analysis to examine what information can better transfer from one platform to another and the result demonstrates a strong correlation for the bidirectional relation and common contact behavior between our test platforms. Inspired by the observations, we design a random walk-based method to employ and integrate these convinced social information to boost friend recommendation performance. To validate the effectiveness of our cross-platform social transfer learning, we have collected a cross-platform dataset including 3,000 users with recognized accounts in both Flickr and Twitter. We demonstrate the effectiveness of the proposed friend transfer methods by promising results.",
"title": ""
},
{
"docid": "5a7324f328a7b5db8c3cb1cc9b606cbc",
"text": "We consider a multiple-block separable convex programming problem, where the objective function is the sum of m individual convex functions without overlapping variables, and the constraints are linear, aside from side constraints. Based on the combination of the classical Gauss–Seidel and the Jacobian decompositions of the augmented Lagrangian function, we propose a partially parallel splitting method, which differs from existing augmented Lagrangian based splitting methods in the sense that such an approach simplifies the iterative scheme significantly by removing the potentially expensive correction step. Furthermore, a relaxation step, whose computational cost is negligible, can be incorporated into the proposed method to improve its practical performance. Theoretically, we establish global convergence of the new method in the framework of proximal point algorithm and worst-case nonasymptotic O(1/t) convergence rate results in both ergodic and nonergodic senses, where t counts the iteration. The efficiency of the proposed method is further demonstrated through numerical results on robust PCA, i.e., factorizing from incomplete information of an B Junfeng Yang jfyang@nju.edu.cn Liusheng Hou houlsheng@163.com Hongjin He hehjmath@hdu.edu.cn 1 School of Mathematics and Information Technology, Key Laboratory of Trust Cloud Computing and Big Data Analysis, Nanjing Xiaozhuang University, Nanjing 211171, China 2 Department of Mathematics, School of Science, Hangzhou Dianzi University, Hangzhou 310018, China 3 Department of Mathematics, Nanjing University, Nanjing 210093, China",
"title": ""
},
{
"docid": "27e25565910119837ff0ddf852c8372a",
"text": "Controlled hovering of motor driven flapping wing micro aerial vehicles (FWMAVs) is challenging due to its limited control authority, large inertia, vibration produced by wing strokes, and limited components accuracy due to fabrication methods. In this work, we present a hummingbird inspired FWMAV with 12 grams of weight and 20 grams of maximum lift. We present its full non-linear dynamic model including the full inertia tensor, non-linear input mapping, and damping effect from flapping counter torques (FCTs) and flapping counter forces (FCFs). We also present a geometric flight controller to ensure exponentially stable and globally exponential attractive properties. We experimentally demonstrated the vehicle lifting off and hover with attitude stabilization.",
"title": ""
},
{
"docid": "6e848928859248e0597124cee0560e43",
"text": "The scaling of microchip technologies has enabled large scale systems-on-chip (SoC). Network-on-chip (NoC) research addresses global communication in SoC, involving (i) a move from computation-centric to communication-centric design and (ii) the implementation of scalable communication structures. This survey presents a perspective on existing NoC research. We define the following abstractions: system, network adapter, network, and link to explain and structure the fundamental concepts. First, research relating to the actual network design is reviewed. Then system level design and modeling are discussed. We also evaluate performance analysis techniques. The research shows that NoC constitutes a unification of current trends of intrachip communication rather than an explicit new alternative.",
"title": ""
},
{
"docid": "f702a8c28184a6d49cd2f29a1e4e7ea4",
"text": "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.",
"title": ""
},
{
"docid": "61da3c6eaa2e140bcd218e1d81a7c803",
"text": "Sub-Resolution Assist Feature (SRAF) generation is a very important resolution enhancement technique to improve yield in modern semiconductor manufacturing process. Model- based SRAF generation has been widely used to achieve high accuracy but it is known to be time consuming and it is hard to obtain consistent SRAFs on the same layout pattern configurations. This paper proposes the first ma- chine learning based framework for fast yet consistent SRAF generation with high quality of results. Our technical con- tributions include robust feature extraction, novel feature compaction, model training for SRAF classification and pre- diction, and the final SRAF generation with consideration of practical mask manufacturing constraints. Experimental re- sults demonstrate that, compared with commercial Calibre tool, our machine learning based SRAF generation obtains 10X speed up and comparable performance in terms of edge placement error (EPE) and process variation (PV) band.",
"title": ""
},
{
"docid": "6e678ccfefa93d1d27a36b28ac5737c4",
"text": "BACKGROUND\nBiofilm formation is a major virulence factor in different bacteria. Biofilms allow bacteria to resist treatment with antibacterial agents. The biofilm formation on glass and steel surfaces, which are extremely useful surfaces in food industries and medical devices, has always had an important role in the distribution and transmission of infectious diseases.\n\n\nOBJECTIVES\nIn this study, the effect of coating glass and steel surfaces by copper nanoparticles (CuNPs) in inhibiting the biofilm formation by Listeria monocytogenes and Pseudomonas aeruginosa was examined.\n\n\nMATERIALS AND METHODS\nThe minimal inhibitory concentrations (MICs) of synthesized CuNPs were measured against L. monocytogenes and P. aeruginosa by using the broth-dilution method. The cell-surface hydrophobicity of the selected bacteria was assessed using the bacterial adhesion to hydrocarbon (BATH) method. Also, the effect of the CuNP-coated surfaces on the biofilm formation of the selected bacteria was calculated via the surface assay.\n\n\nRESULTS\nThe MICs for the CuNPs according to the broth-dilution method were ≤ 16 mg/L for L. monocytogenes and ≤ 32 mg/L for P. aeruginosa. The hydrophobicity of P. aeruginosa and L. monocytogenes was calculated as 74% and 67%, respectively. The results for the surface assay showed a significant decrease in bacterial attachment and colonization on the CuNP-covered surfaces.\n\n\nCONCLUSIONS\nOur data demonstrated that the CuNPs inhibited bacterial growth and that the CuNP-coated surfaces decreased the microbial count and the microbial biofilm formation. Such CuNP-coated surfaces can be used in medical devices and food industries, although further studies in order to measure their level of toxicity would be necessary.",
"title": ""
},
{
"docid": "beea84b0d96da0f4b29eabf3b242a55c",
"text": "Recent years have seen a growing interest in creating virtual agents to populate the cast of characters for interactive narrative. A key challenge posed by interactive characters for narrative environments is devising expressive dialogue generators. To be effective, character dialogue generators must be able to simultaneously take into account multiple sources of information that bear on dialogue, including character attributes, plot development, and communicative goals. Building on the narrative theory of character archetypes, we propose an archetype-driven character dialogue generator that uses a probabilistic unification framework to generate dialogue motivated by character personality and narrative history to achieve communicative goals. The generator’s behavior is illustrated with character dialogue generation in a narrative-centered learning environment, CRYSTAL ISLAND.",
"title": ""
},
{
"docid": "3c3d8cc7e6a616d46cab7b603f06198c",
"text": "PURPOSE\nTo investigate the impact of human papillomavirus (HPV) on the epidemiology of oral squamous cell carcinomas (OSCCs) in the United States, we assessed differences in patient characteristics, incidence, and survival between potentially HPV-related and HPV-unrelated OSCC sites.\n\n\nPATIENTS AND METHODS\nData from nine Surveillance, Epidemiology, and End Results program registries (1973 to 2004) were used to classify OSCCs by anatomic site as potentially HPV-related (n = 17,625) or HPV-unrelated (n = 28,144). Joinpoint regression and age-period-cohort models were used to assess incidence trends. Life-table analyses were used to compare 2-year overall survival for HPV-related and HPV-unrelated OSCCs.\n\n\nRESULTS\nHPV-related OSCCs were diagnosed at younger ages than HPV-unrelated OSCCs (mean ages at diagnosis, 61.0 and 63.8 years, respectively; P < .001). Incidence increased significantly for HPV-related OSCC from 1973 to 2004 (annual percentage change [APC] = 0.80; P < .001), particularly among white men and at younger ages. By contrast, incidence for HPV-unrelated OSCC was stable through 1982 (APC = 0.82; P = .186) and declined significantly during 1983 to 2004 (APC = -1.85; P < .001). When treated with radiation, improvements in 2-year survival across calendar periods were more pronounced for HPV-related OSCCs (absolute increase in survival from 1973 through 1982 to 1993 through 2004 for localized, regional, and distant stages = 9.9%, 23.1%, and 18.6%, respectively) than HPV-unrelated OSCCs (5.6%, 3.1%, and 9.9%, respectively). During 1993 to 2004, for all stages treated with radiation, patients with HPV-related OSCCs had significantly higher survival rates than those with HPV-unrelated OSCCs.\n\n\nCONCLUSION\nThe proportion of OSCCs that are potentially HPV-related increased in the United States from 1973 to 2004, perhaps as a result of changing sexual behaviors. Recent improvements in survival with radiotherapy may be due in part to a shift in the etiology of OSCCs.",
"title": ""
},
{
"docid": "0cc61499ca4eaba9d23214fc7985f71c",
"text": "We review the recent progress of the latest 100G to 1T class coherent PON technology using a simplified DSP suitable for forthcoming 5G era optical access systems. The highlight is the presentation of the first demonstration of 100 Gb/s/λ × 8 (800 Gb/s) based PON.",
"title": ""
},
{
"docid": "d22c8390e6ea9ea8c7a84e188cd10ba5",
"text": "BACKGROUND\nNutrition interventions targeted to individuals are unlikely to significantly shift US dietary patterns as a whole. Environmental and policy interventions are more promising for shifting these patterns. We review interventions that influenced the environment through food availability, access, pricing, or information at the point-of-purchase in worksites, universities, grocery stores, and restaurants.\n\n\nMETHODS\nThirty-eight nutrition environmental intervention studies in adult populations, published between 1970 and June 2003, were reviewed and evaluated on quality of intervention design, methods, and description (e.g., sample size, randomization). No policy interventions that met inclusion criteria were found.\n\n\nRESULTS\nMany interventions were not thoroughly evaluated or lacked important evaluation information. Direct comparison of studies across settings was not possible, but available data suggest that worksite and university interventions have the most potential for success. Interventions in grocery stores appear to be the least effective. The dual concerns of health and taste of foods promoted were rarely considered. Sustainability of environmental change was never addressed.\n\n\nCONCLUSIONS\nInterventions in \"limited access\" sites (i.e., where few other choices were available) had the greatest effect on food choices. Research is needed using consistent methods, better assessment tools, and longer durations; targeting diverse populations; and examining sustainability. Future interventions should influence access and availability, policies, and macroenvironments.",
"title": ""
},
{
"docid": "774f1a2403acf459a4eb594c5772a362",
"text": "motion selection DTU Orbit (12/12/2018) ISSARS: An integrated software environment for structure-specific earthquake ground motion selection Current practice enables the design and assessment of structures in earthquake prone areas by performing time history analysis with the use of appropriately selected strong ground motions. This study presents a Matlab-based software environment, which is integrated with a finite element analysis package, and aims to improve the efficiency of earthquake ground motion selection by accounting for the variability of critical structural response quantities. This additional selection criterion, which is tailored to the specific structure studied, leads to more reliable estimates of the mean structural response quantities used in design, while fulfils the criteria already prescribed by the European and US seismic codes and guidelines. To demonstrate the applicability of the software environment developed, an existing irregular, multi-storey, reinforced concrete building is studied for a wide range of seismic scenarios. The results highlight the applicability of the software developed and the benefits of applying a structure-specific criterion in the process of selecting suites of earthquake motions for the seismic design and assessment. (C) 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4513872c2240390dca8f4b704e606157",
"text": "We apply game theory to a vehicular traffic model to study the effect of driver strategies on traffic flow. The resulting model inherits the realistic dynamics achieved by a two-lane traffic model and aims to incorporate phenomena caused by driver-driver interactions. To achieve this goal, a game-theoretic description of driver interaction was developed. This game-theoretic formalization allows one to model different lane-changing behaviors and to keep track of mobility performance. We simulate the evolution of cooperation, traffic flow, and mobility performance for different modeled behaviors. The analysis of these results indicates a mobility optimization process achieved by drivers' interactions.",
"title": ""
},
{
"docid": "534809d7f65a645c7f7d7ab1089c080a",
"text": "In this paper, we study the implications of the commonplace assumption that most social media studies make with respect to the nature of message shares (such as retweets) as a predominantly positive interaction. By analyzing two large longitudinal Brazilian Twitter datasets containing 5 years of conversations on two polarizing topics – Politics and Sports, we empirically demonstrate that groups holding antagonistic views can actually retweet each other more often than they retweet other groups. We show that assuming retweets as endorsement interactions can lead to misleading conclusions with respect to the level of antagonism among social communities, and that this apparent paradox is explained in part by the use of retweets to quote the original content creator out of the message’s original temporal context, for humor and criticism purposes. As a consequence, messages diffused on online media can have their polarity reversed over time, what poses challenges for social and computer scientists aiming to classify and track opinion groups on online media. On the other hand, we found that the time users take to retweet a message after it has been originally posted can be a useful signal to infer antagonism in social platforms, and that surges of out-of-context retweets correlate with sentiment drifts triggered by real-world events. We also discuss how such evidences can be embedded in sentiment analysis models.",
"title": ""
},
{
"docid": "b46a967ad85c5b64c0f14f703d385b24",
"text": "Bitcoin has shown great utility around the world with the drastic increase in its value and global consensus method of proof-of-work (POW). Over the years after the revolution in the digital transaction space, we are looking at major scalability issue with old POW consensus method and bitcoin peak limit of processing only 7 transactions per second. With more companies trying to adopt blockchain to modify their existing systems, blockchain working on old consensus methods and with scalability issues can't deliver the optimal solution. Specifically, with new trends like smart contracts and DAPPs, much better performance is needed to support any actual business applications. Such requirements are pushing the new platforms away from old methods of consensus and adoption of off-chain solutions. In this paper, we discuss various scalability issues with the Bitcoin and Ethereum blockchain and recent proposals like the lighting protocol, sharding, super quadratic sharding, DPoS to solve these issues. We also draw the comparison between these proposals on their ability to overcome scalability limits and highlighting major problems in these approaches. In the end, we propose our solution to suffice the scalability issue and conclude with the fact that with better scalability, blockchain has the potential to outrageously support varied domains of the industry.",
"title": ""
},
{
"docid": "9cfa58c71360b596694a27eea19f3f66",
"text": "Introduction. The use of social media is prevalent among college students, and it is important to understand how social media use may impact students' attitudes and behaviour. Prior studies have shown negative outcomes of social media use, but researchers have not fully discovered or fully understand the processes and implications of these negative effects. This research provides additional scientific knowledge by focussing on mediators of social media use and controlling for key confounding variables. Method. Surveys that captured social media use, various attitudes about academics and life, and personal characteristics were completed by 234 undergraduate students at a large U.S. university. Analysis. We used covariance-based structural equation modelling to analyse the response data. Results. Results indicated that after controlling for self-regulation, social media use was negatively associated with academic self-efficacy and academic performance. Additionally, academic self-efficacy mediated the negative relationship between social media use and satisfaction with life. Conclusion. There are negative relationships between social media use and academic performance, as well as with academic self-efficacy beliefs. Academic self-efficacy beliefs mediate the negative relationship between social media use and satisfaction with life. These relationships are present even when controlling for individuals' levels of self-regulation.",
"title": ""
},
{
"docid": "023285cbd5d356266831fc0e8c176d4f",
"text": "The two authorsLakoff, a linguist and Nunez, a psychologistpurport to introduce a new field of study, i.e. \"mathematical idea analysis\", with this book. By \"mathematical idea analysis\", they mean to give a scientifically plausible account of mathematical concepts using the apparatus of cognitive science. This approach is meant to be a contribution to academics and possibly education as it helps to illuminate how we cognitise mathematical concepts, which are supposedly undecipherable and abstruse to laymen. The analysis of mathematical ideas, the authors claim, cannot be done within mathematics, for even metamathematicsrecursive theory, model theory, set theory, higherorder logic still requires mathematical idea analysis in itself! Formalism, by its very nature, voids symbols of their meanings and thus cognition is required to imbue meaning. Thus, there is a need for this new field, in which the authors, if successful, would become pioneers.",
"title": ""
},
{
"docid": "d22e8f2029e114b0c648a2cdfba4978a",
"text": "This paper considers innovative marketing within the context of a micro firm, exploring how such firm’s marketing practices can take advantage of digital media. Factors that influence a micro firm’s innovative activities are examined and the development and implementation of digital media in the firm’s marketing practice is explored. Despite the significance of marketing and innovation to SMEs, a lack of literature and theory on innovation in marketing theory exists. Research suggests that small firms’ marketing practitioners and entrepreneurs have identified their marketing focus on the 4Is. This paper builds on knowledge in innovation and marketing and examines the process in a micro firm. A qualitative approach is applied using action research and case study approach. The relevant literature is reviewed as the starting point to diagnose problems and issues anticipated by business practitioners. A longitudinal study is used to illustrate the process of actions taken with evaluations and reflections presented. The exploration illustrates that in practice much of the marketing activities within micro firms are driven by incremental innovation. This research emphasises that integrating Information Communication Technologies (ICTs) successfully in marketing requires marketers to take an active managerial role far beyond their traditional areas of competence and authority.",
"title": ""
},
{
"docid": "87ea9ac29f561c26e4e6e411f5bb538c",
"text": "Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, recorded in electronic medical records, are episodic and irregular in time. We introduce DeepCare, an end-toend deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors in space, models patient health state trajectories through explicit memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces time parameterizations to handle irregular timed events by moderating the forgetting and consolidation of memory cells. DeepCare also incorporates medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden – diabetes and mental health – the results show improved modeling and risk prediction accuracy.",
"title": ""
},
{
"docid": "e795381a345bf3cab74ddfd4d4763c1e",
"text": "Context: Recent research discusses the use of ontologies, dictionaries and thesaurus as a means to improve activity labels of process models. However, the trade-off between quality improvement and extra effort is still an open question. It is suspected that ontology-based support could require additional effort for the modeler. Objective: In this paper, we investigate to which degree ontology-based support potentially increases the effort of modeling. We develop a theoretical perspective grounded in cognitive psychology, which leads us to the definition of three design principles for appropriate ontology-based support. The objective is to evaluate the design principles through empirical experimentation. Method: We tested the effect of presenting relevant content from the ontology to the modeler by means of a quantitative analysis. We performed controlled experiments using a prototype, which generates a simplified and context-aware visual representation of the ontology. It logs every action of the process modeler for analysis. The experiment refers to novice modelers and was performed as between-subject design with vs. without ontology-based support. It was carried out with two different samples. Results: Part of the effort-related variables we measured showed significant statistical difference between the group with and without ontology-based support. Overall, for the collected data, the ontology support achieved good results. Conclusion: We conclude that it is feasible to provide ontology-based support to the modeler in order to improve process modeling without strongly compromising time consumption and cognitive effort.",
"title": ""
}
] |
scidocsrr
|
a878e2419a221c2d3ea14f442da19ba2
|
Effects of Website Interactivity on Online Retail Shopping Behavior
|
[
{
"docid": "57b945df75d8cd446caa82ae02074c3a",
"text": "A key issue facing information systems researchers and practitioners has been the difficulty in creating favorable user reactions to new technologies. Insufficient or ineffective training has been identified as one of the key factors underlying this disappointing reality. Among the various enhancements to training being examined in research, the role of intrinsic motivation as a lever to create favorable user perceptions has not been sufficiently exploited. In this research, two studies were conducted to compare a traditional training method with a training method that included a component aimed at enhancing intrinsic motivation. The results strongly favored the use of an intrinsic motivator during training. Key implications for theory and practice are discussed. 1Allen Lee was the accepting senior editor for this paper. Sometimes when I am at my computer, I say to my wife, \"1'11 be done in just a minute\" and the next thing I know she's standing over me saying, \"It's been an hour!\" (Collins 1989, p. 11). Investment in emerging information technology applications can lead to productivity gains only if they are accepted and used. Several theoretical perspectives have emphasized the importance of user perceptions of ease of use as a key factor affecting acceptance of information technology. Favorable ease of use perceptions are necessary for initial acceptance (Davis et al. 1989), which of course is essential for adoption and continued use. During the early stages of learning and use, ease of use perceptions are significantly affected by training (e.g., Venkatesh and Davis 1996). Investments in training by organizations have been very high and have continued to grow rapidly. Kelly (1982) reported a figure of $100B, which doubled in about a decade (McKenna 1990). In spite of such large investments in training , only 10% of training leads to a change in behavior On trainees' jobs (Georgenson 1982). Therefore, it is important to understand the most effective training methods (e.g., Facteau et al. 1995) and to improve existing training methods in order to foster favorable perceptions among users about the ease of use of a technology, which in turn should lead to acceptance and usage. Prior research in psychology (e.g., Deci 1975) suggests that intrinsic motivation during training leads to beneficial outcomes. However, traditional training methods in information systems research have tended to emphasize imparting knowledge to potential users (e.g., Nelson and Cheney 1987) while not paying Sufficient attention to intrinsic motivation during training. The two field …",
"title": ""
},
{
"docid": "205ef76e947feb4bddbe86b0835e20b3",
"text": "Received: 12 July 2000 Revised: 20 August 2001 : 30 July 2002 Accepted: 15 October 2002 Abstract This paper explores factors that influence consumer’s intentions to purchase online at an electronic commerce website. Specifically, we investigate online purchase intention using two different perspectives: a technology-oriented perspective and a trust-oriented perspective. We summarise and review the antecedents of online purchase intention that have been developed within these two perspectives. An empirical study in which the contributions of both perspectives are investigated is reported. We study the perceptions of 228 potential online shoppers regarding trust and technology and their attitudes and intentions to shop online at particular websites. In terms of relative contributions, we found that the trust-antecedent ‘perceived risk’ and the technology-antecedent ‘perceived ease-of-use’ directly influenced the attitude towards purchasing online. European Journal of Information Systems (2003) 12, 41–48. doi:10.1057/ palgrave.ejis.3000445",
"title": ""
}
] |
[
{
"docid": "617bb88fdb8b76a860c58fc887ab2bc4",
"text": "Although space syntax has been successfully applied to many urban GIS studies, there is still a need to develop robust algorithms that support the automated derivation of graph representations. These graph structures are needed to apply the computational principles of space syntax and derive the morphological view of an urban structure. So far the application of space syntax principles to the study of urban structures has been a partially empirical and non-deterministic task, mainly due to the fact that an urban structure is modeled as a set of axial lines whose derivation is a non-computable process. This paper proposes an alternative model of space for the application of space syntax principles, based on the concepts of characteristic points defined as the nodes of an urban structure schematised as a graph. This method has several advantages over the axial line representation: it is computable and cognitively meaningful. Our proposal is illustrated by a case study applied to the city of GaÈ vle in Sweden. We will also show that this method has several nice properties that surpass the axial line technique.",
"title": ""
},
{
"docid": "8666fe5a01f032d744a3e798241a30f6",
"text": "Emojis have gone viral on the Internet across platforms and devices. Interwoven into our daily communications, they have become a ubiquitous new language. However, little has been done to analyze the usage of emojis at scale and in depth. Why do some emojis become especially popular while others don’t? How are people using them among the words? In this work, we take the initiative to study the collective usage and behavior of emojis, and specifically, how emojis interact with their context. We base our analysis on a very large corpus collected from a popular emoji keyboard, which contains a full month of inputs from millions of users. Our analysis is empowered by a state-of-the-art machine learning tool that computes the embeddings of emojis and words in a semantic space. We find that emojis with clear semantic meanings are more likely to be adopted. While entity-related emojis are more likely to be used as alternatives to words, sentimentrelated emojis often play a complementary role in a message. Overall, emojis are significantly more prevalent in a senti-",
"title": ""
},
{
"docid": "5a5ae4ab9b802fe6d5481f90a4aa07b7",
"text": "High-dimensional pattern classification was applied to baseline and multiple follow-up MRI scans of the Alzheimer's Disease Neuroimaging Initiative (ADNI) participants with mild cognitive impairment (MCI), in order to investigate the potential of predicting short-term conversion to Alzheimer's Disease (AD) on an individual basis. MCI participants that converted to AD (average follow-up 15 months) displayed significantly lower volumes in a number of grey matter (GM) regions, as well as in the white matter (WM). They also displayed more pronounced periventricular small-vessel pathology, as well as an increased rate of increase of such pathology. Individual person analysis was performed using a pattern classifier previously constructed from AD patients and cognitively normal (CN) individuals to yield an abnormality score that is positive for AD-like brains and negative otherwise. The abnormality scores measured from MCI non-converters (MCI-NC) followed a bimodal distribution, reflecting the heterogeneity of this group, whereas they were positive in almost all MCI converters (MCI-C), indicating extensive patterns of AD-like brain atrophy in almost all MCI-C. Both MCI subgroups had similar MMSE scores at baseline. A more specialized classifier constructed to differentiate converters from non-converters based on their baseline scans provided good classification accuracy reaching 81.5%, evaluated via cross-validation. These pattern classification schemes, which distill spatial patterns of atrophy to a single abnormality score, offer promise as biomarkers of AD and as predictors of subsequent clinical progression, on an individual patient basis.",
"title": ""
},
{
"docid": "20cfcfde25db033db8d54fe7ae6fcca1",
"text": "We present the first study that evaluates both speaker and listener identification for direct speech in literary texts. Our approach consists of two steps: identification of speakers and listeners near the quotes, and dialogue chain segmentation. Evaluation results show that this approach outperforms a rule-based approach that is stateof-the-art on a corpus of literary texts.",
"title": ""
},
{
"docid": "313c68843b2521d553772dd024eec202",
"text": "In this work we perform an analysis of probabilistic approaches to recommendation upon a different validation perspective, which focuses on accuracy metrics such as recall and precision of the recommendation list. Traditionally, state-of-art approches to recommendations consider the recommendation process from a “missing value prediction” perspective. This approach simplifies the model validation phase that is based on the minimization of standard error metrics such as RMSE. However, recent studies have pointed several limitations of this approach, showing that a lower RMSE does not necessarily imply improvements in terms of specific recommendations. We demonstrate that the underlying probabilistic framework offers several advantages over traditional methods, in terms of flexibility in the generation of the recommendation list and consequently in the accuracy of recommendation.",
"title": ""
},
{
"docid": "972abdbc8667c24ae080eb2ffb7835e9",
"text": "Two important cues to female physical attractiveness are body mass index (BMI) and shape. In front view, it seems that BMI may be more important than shape; however, is it true in profile where shape cues may be stronger? There is also the question of whether men and women have the same perception of female physical attractiveness. Some studies have suggested that they do not, but this runs contrary to mate selection theory. This predicts that women will have the same perception of female attractiveness as men do. This allows them to judge their own relative value, with respect to their peer group, and match this value with the value of a prospective mate. To clarify these issues we asked 40 male and 40 female undergraduates to rate a set of pictures of real women (50 in front-view and 50 in profile) for attractiveness. BMI was the primary predictor of attractiveness in both front and profile, and the putative visual cues to BMI showed a higher degree of view-invariance than shape cues such as the waist-hip ratio (WHR). Consistent with mate selection theory, there were no significant differences in the rating of attractiveness by male and female raters.",
"title": ""
},
{
"docid": "b262ea4a0a8880d044c77acc84b0c859",
"text": "Online social networks may be important avenues for building and maintaining social capital as adult’s age. However, few studies have explicitly examined the role online communities play in the lives of seniors. In this exploratory study, U.S. seniors were interviewed to assess the impact of Facebook on social capital. Interpretive thematic analysis reveals Facebook facilitates connections to loved ones and may indirectly facilitate bonding social capital. Awareness generated via Facebook often lead to the sharing and receipt of emotional support via other channels. As such, Facebook acted as a catalyst for increasing social capital. The implication of “awareness” as a new dimension of social capital theory is discussed. Additionally, Facebook was found to have potential negative impacts on seniors’ current relationships due to open access to personal information. Finally, common concerns related to privacy, comfort with technology, and inappropriate content were revealed.",
"title": ""
},
{
"docid": "7ea3d3002506e0ea6f91f4bdab09c2d5",
"text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.",
"title": ""
},
{
"docid": "1527c70d0b78a3d2aa6886282425c744",
"text": "Spatial and temporal contextual information plays a key role for analyzing user behaviors, and is helpful for predicting where he or she will go next. With the growing ability of collecting information, more and more temporal and spatial contextual information is collected in systems, and the location prediction problem becomes crucial and feasible. Some works have been proposed to address this problem, but they all have their limitations. Factorizing Personalized Markov Chain (FPMC) is constructed based on a strong independence assumption among different factors, which limits its performance. Tensor Factorization (TF) faces the cold start problem in predicting future actions. Recurrent Neural Networks (RNN) model shows promising performance comparing with PFMC and TF, but all these methods have problem in modeling continuous time interval and geographical distance. In this paper, we extend RNN and propose a novel method called Spatial Temporal Recurrent Neural Networks (ST-RNN). ST-RNN can model local temporal and spatial contexts in each layer with time-specific transition matrices for different time intervals and distance-specific transition matrices for different geographical distances. Experimental results show that the proposed ST-RNN model yields significant improvements over the competitive compared methods on two typical datasets, i.e., Global Terrorism Database (GTD) and Gowalla dataset.",
"title": ""
},
{
"docid": "5601a0da8cfaf42d30b139c535ae37db",
"text": "This article presents some key achievements and recommendations from the IoT6 European research project on IPv6 exploitation for the Internet of Things (IoT). It highlights the potential of IPv6 to support the integration of a global IoT deployment including legacy systems by overcoming horizontal fragmentation as well as more direct vertical integration between communicating devices and the cloud.",
"title": ""
},
{
"docid": "8758425824753fea372eeeeb18ee5856",
"text": "By adopting the distributed problem-solving strategy, swarm intelligence algorithms have been successfully applied to many optimization problems that are difficult to deal with using traditional methods. At present, there are many well-implemented algorithms, such as particle swarm optimization, genetic algorithm, artificial bee colony algorithm, and ant colony optimization. These algorithms have already shown favorable performances. However, with the objects becoming increasingly complex, it is becoming gradually more difficult for these algorithms to meet human’s demand in terms of accuracy and time. Designing a new algorithm to seek better solutions for optimization problems is becoming increasingly essential. Dolphins have many noteworthy biological characteristics and living habits such as echolocation, information exchanges, cooperation, and division of labor. Combining these biological characteristics and living habits with swarm intelligence and bringing them into optimization problems, we propose a brand new algorithm named the ‘dolphin swarm algorithm’ in this paper. We also provide the definitions of the algorithm and specific descriptions of the four pivotal phases in the algorithm, which are the search phase, call phase, reception phase, and predation phase. Ten benchmark functions with different properties are tested using the dolphin swarm algorithm, particle swarm optimization, genetic algorithm, and artificial bee colony algorithm. The convergence rates and benchmark function results of these four algorithms are compared to testify the effect of the dolphin swarm algorithm. The results show that in most cases, the dolphin swarm algorithm performs better. The dolphin swarm algorithm possesses some great features, such as first-slow-then-fast convergence, periodic convergence, local-optimum-free, and no specific demand on benchmark functions. Moreover, the dolphin swarm algorithm is particularly appropriate to optimization problems, with more calls of fitness functions and fewer individuals.",
"title": ""
},
{
"docid": "e7e9d6054a61a1f4a3ab7387be28538a",
"text": "Next generation deep neural networks for classification hosted on embedded platforms will rely on fast, efficient, and accurate learning algorithms. Initialization of weights in learning networks has a great impact on the classification accuracy. In this paper we focus on deriving good initial weights by modeling the error function of a deep neural network as a high-dimensional landscape. We observe that due to the inherent complexity in its algebraic structure, such an error function may conform to general results of the statistics of large systems. To this end we apply some results from Random Matrix Theory to analyse these functions. We model the error function in terms of a Hamiltonian in N-dimensions and derive some theoretical results about its general behavior. These results are further used to make better initial guesses of weights for the learning algorithm.",
"title": ""
},
{
"docid": "2c7fe5484b2184564d71a03f19188251",
"text": "This paper focuses on running scans in a main memory data processing system at \"bare metal\" speed. Essentially, this means that the system must aim to process data at or near the speed of the processor (the fastest component in most system configurations). Scans are common in main memory data processing environments, and with the state-of-the-art techniques it still takes many cycles per input tuple to apply simple predicates on a single column of a table. In this paper, we propose a technique called BitWeaving that exploits the parallelism available at the bit level in modern processors. BitWeaving operates on multiple bits of data in a single cycle, processing bits from different columns in each cycle. Thus, bits from a batch of tuples are processed in each cycle, allowing BitWeaving to drop the cycles per column to below one in some case. BitWeaving comes in two flavors: BitWeaving/V which looks like a columnar organization but at the bit level, and BitWeaving/H which packs bits horizontally. In this paper we also develop the arithmetic framework that is needed to evaluate predicates using these BitWeaving organizations. Our experimental results show that both these methods produce significant performance benefits over the existing state-of-the-art methods, and in some cases produce over an order of magnitude in performance improvement.",
"title": ""
},
{
"docid": "3aca00d6a5038876340b1fbe08e5ddb6",
"text": "People who design, use, and are affected by autonomous artificially intelligent agents want to be able to trust such agents—that is, to know that these agents will perform correctly, to understand the reasoning behind their actions, and to know how to use them appropriately. Many techniques have been devised to assess and influence human trust in artificially intelligent agents. However, these approaches are typically ad hoc and have not been formally related to each other or to formal trust models. This article presents a survey of algorithmic assurances, i.e., programmed components of agent operation that are expressly designed to calibrate user trust in artificially intelligent agents. Algorithmic assurances are first formally defined and classified from the perspective of formally modeled human-artificially intelligent agent trust relationships. Building on these definitions, a synthesis of research across communities such as machine learning, human-computer interaction, robotics, e-commerce, and others reveals that assurance algorithms naturally fall along a spectrum in terms of their impact on an agent’s core functionality, with seven notable classes ranging from integral assurances (which impact an agent’s core functionality) to supplemental assurances (which have no direct effect on agent performance). Common approaches within each of these classes are identified and discussed; benefits and drawbacks of different approaches are also investigated.",
"title": ""
},
{
"docid": "08d1a9f3edc449ff08b45caaaf56f6ad",
"text": "Despite the theoretical and demonstrated empirical significance of parental coping strategies for the wellbeing of families of children with disabilities, relatively little research has focused explicitly on coping in mothers and fathers of children with autism. In the present study, 89 parents of preschool children and 46 parents of school-age children completed a measure of the strategies they used to cope with the stresses of raising their child with autism. Factor analysis revealed four reliable coping dimensions: active avoidance coping, problem-focused coping, positive coping, and religious/denial coping. Further data analysis suggested gender differences on the first two of these dimensions but no reliable evidence that parental coping varied with the age of the child with autism. Associations were also found between coping strategies and parental stress and mental health. Practical implications are considered including reducing reliance on avoidance coping and increasing the use of positive coping strategies.",
"title": ""
},
{
"docid": "c19bc89db255ecf88bc1514d8bd7d018",
"text": "Fulfilling the requirements of point-of-care testing (POCT) training regarding proper execution of measurements and compliance with internal and external quality control specifications is a great challenge. Our aim was to compare the values of the highly critical parameter hemoglobin (Hb) determined with POCT devices and central laboratory analyzer in the highly vulnerable setting of an emergency department in a supra maximal care hospital to assess the quality of POCT performance. In 2548 patients, Hb measurements using POCT devices (POCT-Hb) were compared with Hb measurements performed at the central laboratory (Hb-ZL). Additionally, sub collectives (WHO anemia classification, patients with Hb <8 g/dl and suprageriatric patients (age >85y.) were analyzed. Overall, the correlation between POCT-Hb and Hb-ZL was highly significant (r = 0.96, p<0.001). Mean difference was -0.44g/dl. POCT-Hb values tended to be higher than Hb-ZL values (t(2547) = 36.1, p<0.001). Standard deviation of the differences was 0.62 g/dl. Only in 26 patients (1%), absolute differences >2.5g/dl occurred. McNemar´s test revealed significant differences regarding anemia diagnosis according to WHO definition for male, female and total patients (♂ p<0.001; ♀ p<0.001, total p<0.001). Hb-ZL resulted significantly more often in anemia diagnosis. In samples with Hb<8g/dl, McNemar´s test yielded no significant difference (p = 0.169). In suprageriatric patients, McNemar´s test revealed significant differences regarding anemia diagnosis according to WHO definition in male, female and total patients (♂ p<0.01; ♀ p = 0.002, total p<0.001). The difference between Hb-ZL and POCT-Hb with Hb<8g/dl was not statistically significant (<8g/dl, p = 1.000). Overall, we found a highly significant correlation between the analyzed hemoglobin concentration measurement methods, i.e. POCT devices and at the central laboratory. The results confirm the successful implementation of the presented POCT concept. Nevertheless some limitations could be identified in anemic patients stressing the importance of carefully examining clinically implausible results.",
"title": ""
},
{
"docid": "7f067f869481f06e865880e1d529adc8",
"text": "Distributed Denial of Service (DDoS) is defined as an attack in which mutiple compromised systems are made to attack a single target to make the services unavailable foe legitimate users.It is an attack designed to render a computer or network incapable of providing normal services. DDoS attack uses many compromised intermediate systems, known as botnets which are remotely controlled by an attacker to launch these attacks. DDOS attack basically results in the situation where an entity cannot perform an action for which it is authenticated. This usually means that a legitimate node on the network is unable to reach another node or their performance is degraded. The high interruption and severance caused by DDoS is really posing an immense threat to entire internet world today. Any compromiseto computing, communication and server resources such as sockets, CPU, memory, disk/database bandwidth, I/O bandwidth, router processing etc. for collaborative environment would surely endanger the entire application. It becomes necessary for researchers and developers to understand behaviour of DDoSattack because it affects the target network with little or no advance warning. Hence developing advanced intrusion detection and prevention systems for preventing, detecting, and responding to DDOS attack is a critical need for cyber space. Our rigorous survey study presented in this paper describes a platform for the study of evolution of DDoS attacks and their defense mechanisms.",
"title": ""
},
{
"docid": "fc9babe40365e5dc943fccf088f7a44f",
"text": "The network performance of virtual machines plays a critical role in Network Functions Virtualization (NFV), and several technologies have been developed to address hardware-level virtualization shortcomings. Recent advances in operating system level virtualization and deployment platforms such as Docker have made containers an ideal candidate for high performance application encapsulation and deployment. However, Docker and other solutions typically use lower-performing networking mechanisms. In this paper, we explore the feasibility of using technologies designed to accelerate virtual machine networking with containers, in addition to quantifying the network performance of container-based VNFs compared to the state-of-the-art virtual machine solutions. Our results show that containerized applications can provide lower latency and delay variation, and can take advantage of high performance networking technologies previously only used for hardware virtualization.",
"title": ""
},
{
"docid": "a53f26ef068d11ea21b9ba8609db6ddf",
"text": "This paper presents a novel approach based on enhanced local directional patterns (ELDP) to face recognition, which adopts local edge gradient information to represent face images. Specially, each pixel of every facial image sub-block gains eight edge response values by convolving the local 3 3 neighborhood with eight Kirsch masks, respectively. ELDP just utilizes the directions of the most encoded into a double-digit octal number to produce the ELDP codes. The ELDP dominant patterns (ELDP) are generated by statistical analysis according to the occurrence rates of the ELDP codes in a mass of facial images. Finally, the face descriptor is represented by using the global concatenated histogram based on ELDP or ELDP extracted from the face image which is divided into several sub-regions. The performances of several single face descriptors not integrated schemes are evaluated in face recognition under different challenges via several experiments. The experimental results demonstrate that the proposed method is more robust to non-monotonic illumination changes and slight noise without any filter. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fd59754c40f05710496d3b9738f97e47",
"text": "The extent to which mental health consumers encounter stigma in their daily lives is a matter of substantial importance for their recovery and quality of life. This article summarizes the results of a nationwide survey of 1,301 mental health consumers concerning their experience of stigma and discrimination. Survey results and followup interviews with 100 respondents revealed experience of stigma from a variety of sources, including communities, families, churches, coworkers, and mental health caregivers. The majority of respondents tended to try to conceal their disorders and worried a great deal that others would find out about their psychiatric status and treat them unfavorably. They reported discouragement, hurt, anger, and lowered self-esteem as results of their experiences, and they urged public education as a means for reducing stigma. Some reported that involvement in advocacy and speaking out when stigma and discrimination were encountered helped them to cope with stigma. Limitations to generalization of results include the self-selection, relatively high functioning of participants, and respondent connections to a specific advocacy organization-the National Alliance for the Mentally Ill.",
"title": ""
}
] |
scidocsrr
|
7727c17c6bb7423759ec4ff377681fb4
|
Facial Expression Recognition using Convolutional Neural Networks: State of the Art
|
[
{
"docid": "dfacd79df58a78433672f06fdb10e5a2",
"text": "“Frontalization” is the process of synthesizing frontal facing views of faces appearing in single unconstrained photos. Recent reports have suggested that this process may substantially boost the performance of face recognition systems. This, by transforming the challenging problem of recognizing faces viewed from unconstrained viewpoints to the easier problem of recognizing faces in constrained, forward facing poses. Previous frontalization methods did this by attempting to approximate 3D facial shapes for each query image. We observe that 3D face shape estimation from unconstrained photos may be a harder problem than frontalization and can potentially introduce facial misalignments. Instead, we explore the simpler approach of using a single, unmodified, 3D surface as an approximation to the shape of all input faces. We show that this leads to a straightforward, efficient and easy to implement method for frontalization. More importantly, it produces aesthetic new frontal views and is surprisingly effective when used for face recognition and gender estimation.",
"title": ""
}
] |
[
{
"docid": "e510140bfc93089e69cb762b968de5e9",
"text": "Owing to the popularity of the PDF format and the continued exploitation of Adobe Reader, the detection of malicious PDFs remains a concern. All existing detection techniques rely on the PDF parser to a certain extent, while the complexity of the PDF format leaves an abundant space for parser confusion. To quantify the difference between these parsers and Adobe Reader, we create a reference JavaScript extractor by directly tapping into Adobe Reader at locations identified through a mostly automatic binary analysis technique. By comparing the output of this reference extractor against that of several opensource JavaScript extractors on a large data set obtained from VirusTotal, we are able to identify hundreds of samples which existing extractors fail to extract JavaScript from. By analyzing these samples we are able to identify several weaknesses in each of these extractors. Based on these lessons, we apply several obfuscations on a malicious PDF sample, which can successfully evade all the malware detectors tested. We call this evasion technique a PDF parser confusion attack. Lastly, we demonstrate that the reference JavaScript extractor improves the accuracy of existing JavaScript-based classifiers and how it can be used to mitigate these parser limitations in a real-world setting.",
"title": ""
},
{
"docid": "751843f6085ba854dc75d9a6828bed13",
"text": "With the developments in information technology and improvements in communication channels, fraud is spreading all over the world, resulting in huge financial losses. Though fraud prevention mechanisms such as CHIP&PIN are developed, these mechanisms do not prevent the most common fraud types such as fraudulent credit card usages over virtual POS terminals through Internet or mail orders. As a result, fraud detection is the essential tool and probably the best way to stop such fraud types. In this study, classification models based on Artificial Neural Networks (ANN) and Logistic Regression (LR) are developed and applied on credit card fraud detection problem. This study is one of the firsts to compare the performance of ANN and LR methods in credit card fraud detection with a real data set.",
"title": ""
},
{
"docid": "8f660dd12e7936a556322f248a9e2a2a",
"text": "We develop and apply statistical topic models to software as a means of extracting concepts from source code. The effectiveness of the technique is demonstrated on 1,555 projects from SourceForge and Apache consisting of 113,000 files and 19 million lines of code. In addition to providing an automated, unsupervised, solution to the problem of summarizing program functionality, the approach provides a probabilistic framework with which to analyze and visualize source file similarity. Finally, we introduce an information-theoretic approach for computing tangling and scattering of extracted concepts, and present preliminary results",
"title": ""
},
{
"docid": "d2694577861e75535e59e316bd6a9015",
"text": "Despite being a new term, ‘fake news’ has evolved rapidly. This paper argues that it should be reserved for cases of deliberate presentation of (typically) false or misleading claims as news, where these are misleading by design. The phrase ‘by design’ here refers to systemic features of the design of the sources and channels by which fake news propagates and, thereby, manipulates the audience’s cognitive processes. This prospective definition is then tested: first, by contrasting fake news with other forms of public disinformation; second, by considering whether it helps pinpoint conditions for the (recent) proliferation of fake news. Résumé: En dépit de son utilisation récente, l’expression «fausses nouvelles» a évolué rapidement. Cet article soutient qu'elle devrait être réservée aux présentations intentionnelles d’allégations (typiquement) fausses ou trompeuses comme si elles étaient des nouvelles véridiques et où elles sont faussées à dessein. L'expression «à dessein» fait ici référence à des caractéristiques systémiques de la conception des sources et des canaux par lesquels les fausses nouvelles se propagent et par conséquent, manipulent les processus cognitifs du public. Cette définition prospective est ensuite mise à l’épreuve: d'abord, en opposant les fausses nouvelles à d'autres formes de désinformation publique; deuxièmement, en examinant si elle aide à cerner les conditions de la prolifération (récente) de fausses nou-",
"title": ""
},
{
"docid": "627587e2503a2555846efb5f0bca833b",
"text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.",
"title": ""
},
{
"docid": "6db737f9042631ddda9bae7c89b00701",
"text": "A self-assessment of time management is developed for middle-school students. A sample of entering seventh-graders (N = 814) from five states across the USA completed this instrument, with 340 students retested 6 months later. Exploratory and confirmatory factor analysis suggested two factors (i.e., Meeting Deadlines and Planning) that adequately explain the variance in time management for this age group. Scales show evidence of reliability and validity; with high internal consistency, reasonable consistency of factor structure over time, moderate to high correlations with Conscientiousness, low correlations with the remaining four personality dimensions of the Big Five, and reasonable prediction of students’ grades. Females score significantly higher on both factors of time management, with gender differences in Meeting Deadlines (but not Planning) mediated by Conscientiousness. Potential applications of the instrument for evaluation, diagnosis, and remediation in educational settings are discussed. 2009 Elsevier Ltd. All rights reserved. 1. The assessment of time management in middle-school students In our technologically enriched society, individuals are constantly required to multitask, prioritize, and work against deadlines in a timely fashion (Orlikowsky & Yates, 2002). Time management has caught the attention of educational researchers, industrial organizational psychologists, and entrepreneurs, for its possible impact on academic achievement, job performance, and quality of life (Macan, 1994). However, research on time management has not kept pace with this enthusiasm, with extant investigations suffering from a number of problems. Claessens, Van Eerde, Rutte, and Roe’s (2007) review of the literature suggest that there are three major limitations to research on time management. First, many measures of time management have limited validity evidence. Second, many studies rely solely on one-shot self-report assessment, such that evidence for a scale’s generalizability over time cannot be collected. Third, school (i.e., K-12) populations have largely been ignored. For example, all studies in the Claessens et al. (2007) review focus on adult workplace samples (e.g., teachers, engineers) or university students, rather than students in K-12. The current study involves the development of a time management assessment tailored specifically to middle-school students (i.e., adolescents in the sixth to eighth grade of schooling). Time management may be particularly important at the onset of adolescence for three reasons. First, the possibility of early identification and remediation of poor time management practices. Second, the transition into secondary education, from a learning environment involving one teacher to one of time-tabled classes for different subjects with different teachers setting assignments and tests that may occur contiguously. Successfully navigating this new learning environment requires the development of time management skills. Third, adolescents use large amounts of their discretionary time on television, computer gaming, internet use, and sports: Average estimates are 3=4 and 2=4 h per day for seventh-grade boys and girls, respectively (Van den Bulck, 2004). With less time left to do more administratively complex schoolwork, adolescents clearly require time management skills to succeed academically. 1.1. 
Definitions and assessments of time management Time management has been defined and operationalized in several different ways: As a means for monitoring and controlling time, as setting goals in life and keeping track of time use, as prioritizing goals and generating tasks from the goals, and as the perception of a more structured and purposive life (e.g., Bond & Feather, 1988; Britton & Tesser, 1991; Burt & Kemp, 1994; Eilam & Aharon, 2003). The various definitions all converge on the same essential element: The completion of tasks within an expected timeframe while maintaining outcome quality, through mechanisms such as planning, organizing, prioritizing, or multitasking. To the same effect, Claessens et al. (2007) defined time management as ‘‘behaviors that aim at achieving an effective use of time while performing certain goal-directed activities” (p. 36). Four instruments have been used to assess time management in adults: The Time Management Behavior Scale (TMBS; 0191-8869/$ see front matter 2009 Elsevier Ltd. All rights reserved. doi:10.1016/j.paid.2009.02.018 * Corresponding author. Tel.: +1 609 734 1049. E-mail address: lliu@ets.org (O.L. Liu). Personality and Individual Differences 47 (2009) 174–179",
"title": ""
},
{
"docid": "9f7099655f70ff203c16802903e6acdc",
"text": "Segmentation of the liver from abdominal computed tomography (CT) images is an essential step in some computer-assisted clinical interventions, such as surgery planning for living donor liver transplant, radiotherapy and volume measurement. In this work, we develop a deep learning algorithm with graph cut refinement to automatically segment the liver in CT scans. The proposed method consists of two main steps: (i) simultaneously liver detection and probabilistic segmentation using 3D convolutional neural network; (ii) accuracy refinement of the initial segmentation with graph cut and the previously learned probability map. The proposed approach was validated on forty CT volumes taken from two public databases MICCAI-Sliver07 and 3Dircadb1. For the MICCAI-Sliver07 test dataset, the calculated mean ratios of volumetric overlap error (VOE), relative volume difference (RVD), average symmetric surface distance (ASD), root-mean-square symmetric surface distance (RMSD) and maximum symmetric surface distance (MSD) are 5.9, 2.7 %, 0.91, 1.88 and 18.94 mm, respectively. For the 3Dircadb1 dataset, the calculated mean ratios of VOE, RVD, ASD, RMSD and MSD are 9.36, 0.97 %, 1.89, 4.15 and 33.14 mm, respectively. The proposed method is fully automatic without any user interaction. Quantitative results reveal that the proposed approach is efficient and accurate for hepatic volume estimation in a clinical setup. The high correlation between the automatic and manual references shows that the proposed method can be good enough to replace the time-consuming and nonreproducible manual segmentation method.",
"title": ""
},
{
"docid": "c548cbfc3b1630392acd504b6e854c03",
"text": "Much of capital market research in accounting over the past 20 years has assumed that the price adjustment process to information is instantaneous and/or trivial. This assumption has had an enormous influence on the way we select research topics, design empirical tests, and interpret research findings. In this discussion, I argue that price discovery is a complex process, deserving of more attention. I highlight significant problems associated with a na.ıve view of market efficiency, and advocate a more general model involving noise traders. Finally, I discuss the implications of recent evidence against market efficiency for future research. r 2001 Elsevier Science B.V. All rights reserved. JEL classification: M4; G0; B2; D8",
"title": ""
},
{
"docid": "6639c05f14e220f4555c664b0c7b0466",
"text": "Previous attempts for data augmentation are designed manually, and the augmentation policies are dataset-specific. Recently, an automatic data augmentation approach, named AutoAugment, is proposed using reinforcement learning. AutoAugment searches for the augmentation polices in the discrete search space, which may lead to a sub-optimal solution. In this paper, we employ the Augmented Random Search method (ARS) to improve the performance of AutoAugment. Our key contribution is to change the discrete search space to continuous space, which will improve the searching performance and maintain the diversities between sub-policies. With the proposed method, state-of-the-art accuracies are achieved on CIFAR-10, CIFAR-100, and ImageNet (without additional data). Our code is available at https://github.com/gmy2013/ARS-Aug.",
"title": ""
},
{
"docid": "2781df07db142da8eefbe714631a59b2",
"text": "Snapchat is a social media platform that allows users to send images, videos, and text with a specified amount of time for the receiver(s) to view the content before it becomes permanently inaccessible to the receiver. Using focus group methodology and in-depth interviews, the current study sought to understand young adult (18e23 years old; n 1⁄4 34) perceptions of how Snapchat behaviors influenced their interpersonal relationships (family, friends, and romantic). Young adults indicated that Snapchat served as a double-edged swordda communication modality that could lead to relational challenges, but also facilitate more congruent communication within young adult interpersonal relationships. © 2016 Elsevier Ltd. All rights reserved. Technology is now a regular part of contemporary young adult (18e25 years old) life (Coyne, Padilla-Walker, & Howard, 2013; Vaterlaus, Jones, Patten, & Cook, 2015). With technological convergence (i.e. accessibility of multiple media on one device; Brown & Bobkowski, 2011) young adults can access both entertainment media (e.g., television, music) and social media (e.g., social networking, text messaging) on a single device. Among adults, smartphone ownership is highest among young adults (85% of 18e29 year olds; Smith, 2015). Perrin (2015) reported that 90% of young adults (ages 18e29) use social media. Facebook remains the most popular social networking platform, but several new social media apps (i.e., applications) have begun to gain popularity among young adults (e.g., Twitter, Instagram, Pinterest; Duggan, Ellison, Lampe, Lenhart, & Madden, 2015). Considering the high frequency of social media use, Subrahmanyam and Greenfield (2008) have advocated for more research on how these technologies influence interpersonal relationships. The current exploratory study aterlaus), Kathryn_barnett@ (C. Roche), youngja2@unk. was designed to understand the perceived role of Snapchat (see www.snapchat.com) in young adults' interpersonal relationships (i.e. family, social, and romantic). 1. Theoretical framework Uses and Gratifications Theory (U&G) purports that media and technology users are active, self-aware, and goal directed (Katz, Blumler, & Gurevitch, 1973). Technology consumers link their need gratification with specific technology options, which puts different technology sources in competition with one another to satisfy a consumer's needs. Since the emergence of U&G nearly 80 years ago, there have been significant advances in media and technology, which have resulted in many more media and technology options for consumers (Ruggiero, 2000). Writing about the internet and U&G in 2000, Roggiero forecasted: “If the internet is a technology that many predict will be genuinely transformative, it will lead to profound changes in media users' personal and social habits and roles” (p.28). Advances in accessibility to the internet and the development of social media, including Snapchat, provide support for the validity of this prediction. Despite the advances in technology, the needs users seek to gratify are likely more consistent over time. Supporting this point Katz, Gurevitch, and Haas J.M. Vaterlaus et al. / Computers in Human Behavior 62 (2016) 594e601 595",
"title": ""
},
{
"docid": "88abea475884eeec1049a573d107c6c9",
"text": "This paper extends the traditional pinhole camera projection geometry used in computer graphics to a more realistic camera model which approximates the effects of a lens and an aperture function of an actual camera. This model allows the generation of synthetic images which have a depth of field and can be focused on an arbitrary plane; it also permits selective modeling of certain optical characteristics of a lens. The model can be expanded to include motion blur and special-effect filters. These capabilities provide additional tools for highlighting important areas of a scene and for portraying certain physical characteristics of an object in an image.",
"title": ""
},
{
"docid": "14c278147defd19feb4e18d31a3fdcfb",
"text": "Efficient provisioning of resources is a challenging problem in cloud computing environments due to its dynamic nature and the need for supporting heterogeneous applications with different performance requirements. Currently, cloud datacenter providers either do not offer any performance guarantee or prefer static VM allocation over dynamic, which lead to inefficient utilization of resources. Earlier solutions, concentrating on a single type of SLAs (Service Level Agreements) or resource usage patterns of applications, are not suitable for cloud computing environments. In this paper, we tackle the resource allocation problem within a datacenter that runs different type of application workloads, particularly non-interactive and transactional applications. We propose admission control and scheduling mechanism which not only maximizes the resource utilization and profit, but also ensures the SLA requirements of users. In our experimental study, the proposed mechanism has shown to provide substantial improvement over static server consolidation and reduces SLA Violations.",
"title": ""
},
{
"docid": "a9baecb9470242c305942f7bc98494ab",
"text": "This paper summaries the state-of-the-art of image quality assessment (IQA) and human visual system (HVS). IQA provides an objective index or real value to measure the quality of the specified image. Since human beings are the ultimate receivers of visual information in practical applications, the most reliable IQA is to build a computational model to mimic the HVS. According to the properties and cognitive mechanism of the HVS, the available HVS-based IQA methods can be divided into two categories, i.e., bionics methods and engineering methods. This paper briefly introduces the basic theories and development histories of the above two kinds of HVS-based IQA methods. Finally, some promising research issues are pointed out in the end of the paper.",
"title": ""
},
{
"docid": "1a5c009f059ea28fd2d692d1de4eb913",
"text": "We present CROSSGRAD, a method to use multi-domain training data to learn a classifier that generalizes to new domains. CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training. In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains. We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances. CROSSGRAD parallelly trains a label and a domain classifier on examples perturbed by loss gradients of each other’s objectives. This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions. Empirical evaluation on three different applications where this setting is natural establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.",
"title": ""
},
{
"docid": "39bf7e3a8e75353a3025e2c0f18768f9",
"text": "Ligament reconstruction is the current standard of care for active patients with an anterior cruciate ligament (ACL) rupture. Although the majority of ACL reconstruction (ACLR) surgeries successfully restore the mechanical stability of the injured knee, postsurgical outcomes remain widely varied. Less than half of athletes who undergo ACLR return to sport within the first year after surgery, and it is estimated that approximately 1 in 4 to 1 in 5 young, active athletes who undergo ACLR will go on to a second knee injury. The outcomes after a second knee injury and surgery are significantly less favorable than outcomes after primary injuries. As advances in graft reconstruction and fixation techniques have improved to consistently restore passive joint stability to the preinjury level, successful return to sport after ACLR appears to be predicated on numerous postsurgical factors. Importantly, a secondary ACL injury is most strongly related to modifiable postsurgical risk factors. Biomechanical abnormalities and movement asymmetries, which are more prevalent in this cohort than previously hypothesized, can persist despite high levels of functional performance, and also represent biomechanical and neuromuscular control deficits and imbalances that are strongly associated with secondary injury incidence. Decreased neuromuscular control and high-risk movement biomechanics, which appear to be heavily influenced by abnormal trunk and lower extremity movement patterns, not only predict first knee injury risk but also reinjury risk. These seminal findings indicate that abnormal movement biomechanics and neuromuscular control profiles are likely both residual to, and exacerbated by, the initial injury. Evidence-based medicine (EBM) strategies should be used to develop effective, efficacious interventions targeted to these impairments to optimize the safe return to high-risk activity. In this Current Concepts article, the authors present the latest evidence related to risk factors associated with ligament failure or a secondary (contralateral) injury in athletes who return to sport after ACLR. From these data, they propose an EBM paradigm shift in postoperative rehabilitation and return-to-sport training after ACLR that is focused on the resolution of neuromuscular deficits that commonly persist after surgical reconstruction and standard rehabilitation of athletes.",
"title": ""
},
{
"docid": "d0f187a8f7f6d4a6f8061a486f89c6bd",
"text": "The science of ecology was born from the expansive curiosity of the biologists of the late 19th century, who wished to understand the distribution, abundance and interactions of the earth's organisms. Why do we have so many species, and why not more, they asked--and what causes them to be distributed as they are? What are the characteristics of a biological community that cause it to recover in a particular way after a disturbance?",
"title": ""
},
{
"docid": "edb0442d3e3216a5e1add3a03b05858a",
"text": "The resilience perspective is increasingly used as an approach for understanding the dynamics of social–ecological systems. This article presents the origin of the resilience perspective and provides an overview of its development to date. With roots in one branch of ecology and the discovery of multiple basins of attraction in ecosystems in the 1960–1970s, it inspired social and environmental scientists to challenge the dominant stable equilibrium view. The resilience approach emphasizes non-linear dynamics, thresholds, uncertainty and surprise, how periods of gradual change interplay with periods of rapid change and how such dynamics interact across temporal and spatial scales. The history was dominated by empirical observations of ecosystem dynamics interpreted in mathematical models, developing into the adaptive management approach for responding to ecosystem change. Serious attempts to integrate the social dimension is currently taking place in resilience work reflected in the large numbers of sciences involved in explorative studies and new discoveries of linked social–ecological systems. Recent advances include understanding of social processes like, social learning and social memory, mental models and knowledge–system integration, visioning and scenario building, leadership, agents and actor groups, social networks, institutional and organizational inertia and change, adaptive capacity, transformability and systems of adaptive governance that allow for management of essential ecosystem services. r 2006 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "f0fa6c2b9216192ed0cf419e9f3c9666",
"text": "Primary task of a recommender system is to improve user’s experience by recommending relevant and interesting items to the users. To this effect, diversity in item suggestion is as important as the accuracy of recommendations. Existing literature aimed at improving diversity primarily suggests a 2-stage mechanism – an existing CF scheme for rating prediction, followed by a modified ranking strategy. This approach requires heuristic selection of parameters and ranking strategies. Also most works focus on diversity from either the user or system’s perspective. In this work, we propose a single stage optimization based solution to achieve high diversity while maintaining requisite levels of accuracy. We propose to incorporate additional diversity enhancing constraints, in the matrix factorization model for collaborative filtering. However, unlike traditional MF scheme generating dense user and item latent factor matrices, our base MF model recovers a dense user and a sparse item latent factor matrix; based on a recent work. The idea is motivated by the fact that although a user will demonstrate some affinity towards all latent factors, an item will never possess all features; thereby yielding a sparse structure. We also propose an algorithm for our formulation. The superiority of our model over existing state of the art techniques is demonstrated by the results of experiments conducted on real world movie database. © 2016 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "dd97e87fee154f610e406c1cf9170abe",
"text": "Magnetically-driven micrometer to millimeter-scale robotic devices have recently shown great capabilities for remote applications in medical procedures, in microfluidic tools and in microfactories. Significant effort recently has been on the creation of mobile or stationary devices with multiple independently-controllable degrees of freedom (DOF) for multiagent or complex mechanism motions. In most applications of magnetic microrobots, however, the relatively large distance from the field generation source and the microscale devices results in controlling magnetic field signals which are applied homogeneously over all agents. While some progress has been made in this area allowing up to six independent DOF to be individually commanded, there has been no rigorous effort in determining the maximum achievable number of DOF for systems with homogeneous magnetic field input. In this work, we show that this maximum is eight and we introduce the theoretical basis for this conclusion, relying on the number of independent usable components in a magnetic field at a point. In order to verify the claim experimentally, we develop a simple demonstration mechanism with 8 DOF designed specifically to show independent actuation. Using this mechanism with $500 \\mu \\mathrm{m}$ magnetic elements, we demonstrate eight independent motions of 0.6 mm with 8.6 % coupling using an eight coil system. These results will enable the creation of richer outputs in future microrobotic devices.",
"title": ""
},
{
"docid": "b1272039194d07ff9b7568b7f295fbfb",
"text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.",
"title": ""
}
] |
scidocsrr
|
2dd3f5c65f29db879483195fa0d87466
|
A Robot-Partner for Preschool Children Learning English Using Socio-Cognitive Conflict
|
[
{
"docid": "8cffd66433d70a04b79f421233f2dcf2",
"text": "By engaging in construction-based robotics activities, children as young as four can play to learn a range of concepts. The TangibleK Robotics Program paired developmentally appropriate computer programming and robotics tools with a constructionist curriculum designed to engage kindergarten children in learning computational thinking, robotics, programming, and problem-solving. This paper documents three kindergarten classrooms’ exposure to computer programming concepts and explores learning outcomes. Results point to strengths of the curriculum and areas where further redesign of the curriculum and technologies would be appropriate. Overall, the study demonstrates that kindergartners were both interested in and able to learn many aspects of robotics, programming, and computational thinking with the TangibleK curriculum design. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "211b858db72c962efaedf66f2ed9479d",
"text": "Along with the rapid development of information and communication technologies, educators are trying to keep up with the dramatic changes in our electronic environment. These days mobile technology, with popular devices such as iPhones, Android phones, and iPads, is steering our learning environment towards increasingly focusing on mobile learning or m-Learning. Currently, most interfaces employ keyboards, mouse or touch technology, but some emerging input-interfaces use voiceor marker-based gesture recognition. In the future, one of the cutting-edge technologies likely to be used is robotics. Robots are already being used in some classrooms and are receiving an increasing level of attention. Robots today are developed for special purposes, quite similar to personal computers in their early days. However, in the future, when mass production lowers prices, robots will bring about big changes in our society. In this column, the author focuses on educational service robots. Educational service robots for language learning and robot-assisted language learning (RALL) will be introduced, and the hardware and software platforms for RALL will be explored, as well as implications for future research.",
"title": ""
}
] |
[
{
"docid": "3c4f19544e9cc51d307c6cc9aea63597",
"text": "Math anxiety is a negative affective reaction to situations involving math. Previous work demonstrates that math anxiety can negatively impact math problem solving by creating performance-related worries that disrupt the working memory needed for the task at hand. By leveraging knowledge about the mechanism underlying the math anxiety-performance relationship, we tested the effectiveness of a short expressive writing intervention that has been shown to reduce intrusive thoughts and improve working memory availability. Students (N = 80) varying in math anxiety were asked to sit quietly (control group) prior to completing difficulty-matched math and word problems or to write about their thoughts and feelings regarding the exam they were about to take (expressive writing group). For the control group, high math-anxious individuals (HMAs) performed significantly worse on the math problems than low math-anxious students (LMAs). In the expressive writing group, however, this difference in math performance across HMAs and LMAs was significantly reduced. Among HMAs, the use of words related to anxiety, cause, and insight in their writing was positively related to math performance. Expressive writing boosts the performance of anxious students in math-testing situations.",
"title": ""
},
{
"docid": "3a2168e93c1f8025e93de1a7594e17d5",
"text": "1 Multisensor Data Fusion for Next Generation Distributed Intrusion Detection Systems Tim Bass ERIM International & Silk Road Ann Arbor, MI 48113 Abstract| Next generation cyberspace intrusion detection systems will fuse data from heterogeneous distributed network sensors to create cyberspace situational awareness. This paper provides a few rst steps toward developing the engineering requirements using the art and science of multisensor data fusion as the underlying model. Current generation internet-based intrusion detection systems and basic multisensor data fusion constructs are summarized. The TCP/IP model is used to develop framework sensor and database models. The SNMP ASN.1 MIB construct is recommended for the representation of context-dependent threat & vulnerabilities databases.",
"title": ""
},
{
"docid": "865da040d64e56774f20d1f856aa8845",
"text": "on Walden Pond (Massachusetts, USA) using diatoms and stable isotopes Dörte Köster,1∗ Reinhard Pienitz,1∗ Brent B. Wolfe,2 Sylvia Barry,3 David R. Foster,3 and Sushil S. Dixit4 Paleolimnology-Paleoecology Laboratory, Centre d’études nordiques, Department of Geography, Université Laval, Québec, Québec, G1K 7P4, Canada Department of Geography and Environmentals Studies, Wilfrid Laurier University, Waterloo, Ontario, N2L 3C5, Canada Harvard University, Harvard Forest, Post Office Box 68, Petersham, Massachusetts, 01366-0068, USA Environment Canada, National Guidelines & Standards Office, 351 St. Joseph Blvd., 8th Floor, Ottawa, Ontario, K1A 0H3, Canada ∗Corresponding authors: E-mail: doerte.koster.1@ulaval.ca, reinhard.pienitz@cen.ulaval.ca",
"title": ""
},
{
"docid": "0cd863fc634b75f1b93137698d42080d",
"text": "Prior research has established that peer tutors can benefit academically from their tutoring experiences. However, although tutor learning has been observed across diverse settings, the magnitude of these gains is often underwhelming. In this review, the authors consider how analyses of tutors’ actual behaviors may help to account for variation in learning outcomes and how typical tutoring behaviors may create or undermine opportunities for learning. The authors examine two tutoring activities that are commonly hypothesized to support tutor learning: explaining and questioning. These activities are hypothesized to support peer tutors’ learning via reflective knowledge-building, which includes self-monitoring of comprehension, integration of new and prior knowledge, and elaboration and construction of knowledge. The review supports these hypotheses but also finds that peer tutors tend to exhibit a pervasive knowledge-telling bias. Peer tutors, even when trained, focus more on delivering knowledge rather than developing it. As a result, the true potential for tutor learning may rarely be achieved. The review concludes by offering recommendations for how future research can utilize tutoring process data to understand how tutors learn and perhaps develop new training methods.",
"title": ""
},
{
"docid": "bd6375ea90153d2e5b2846930922fc6e",
"text": "OBJECTIVE\nBrain-computer interfaces (BCIs) have the potential to be valuable clinical tools. However, the varied nature of BCIs, combined with the large number of laboratories participating in BCI research, makes uniform performance reporting difficult. To address this situation, we present a tutorial on performance measurement in BCI research.\n\n\nAPPROACH\nA workshop on this topic was held at the 2013 International BCI Meeting at Asilomar Conference Center in Pacific Grove, California. This paper contains the consensus opinion of the workshop members, refined through discussion in the following months and the input of authors who were unable to attend the workshop.\n\n\nMAIN RESULTS\nChecklists for methods reporting were developed for both discrete and continuous BCIs. Relevant metrics are reviewed for different types of BCI research, with notes on their use to encourage uniform application between laboratories.\n\n\nSIGNIFICANCE\nGraduate students and other researchers new to BCI research may find this tutorial a helpful introduction to performance measurement in the field.",
"title": ""
},
{
"docid": "cb1048d4bffb141074a4011279054724",
"text": "Question Generation (QG) is the task of generating reasonable questions from a text. It is a relatively new research topic and has its potential usage in intelligent tutoring systems and closed-domain question answering systems. Current approaches include template or syntax based methods. This thesis proposes a novel approach based entirely on semantics. Minimal Recursion Semantics (MRS) is a meta-level semantic representation with emphasis on scope underspecification. With the English Resource Grammar and various tools from the DELPH-IN community, a natural language sentence can be interpreted as an MRS structure by parsing, and an MRS structure can be realized as a natural language sentence through generation. There are three issues emerging from semantics-based QG: (1) sentence simplification for complex sentences, (2) question transformation for declarative sentences, and (3) generation ranking. Three solutions are also proposed: (1) MRS decomposition through a Connected Dependency MRS Graph, (2) MRS transformation from declarative sentences to interrogative sentences, and (3) question ranking by simple language models atop a MaxEnt-based model. The evaluation is conducted in the context of the Question Generation Shared Task and Generation Challenge 2010. The performance of proposed method is compared against other syntax and rule based systems. The result also reveals the challenges of current research on question generation and indicates direction for future work.",
"title": ""
},
{
"docid": "6ae9bfc681e2a9454196f4aa0c49a4da",
"text": "Previous research has indicated that exposure to traditional media (i.e., television, film, and print) predicts the likelihood of internalization of a thin ideal; however, the relationship between exposure to internet-based social media on internalization of this ideal remains less understood. Social media differ from traditional forms of media by allowing users to create and upload their own content that is then subject to feedback from other users. This meta-analysis examined the association linking the use of social networking sites (SNSs) and the internalization of a thin ideal in females. Systematic searches were performed in the databases: PsychINFO, PubMed, Web of Science, Communication and Mass Media Complete, and ProQuest Dissertations and Theses Global. Six studies were included in the meta-analysis that yielded 10 independent effect sizes and a total of 1,829 female participants ranging in age from 10 to 46 years. We found a positive association between extent of use of SNSs and extent of internalization of a thin ideal with a small to moderate effect size (r = 0.18). The positive effect indicated that more use of SNSs was associated with significantly higher internalization of a thin ideal. A comparison was also made between study outcomes measuring broad use of SNSs and outcomes measuring SNS use solely as a function of specific appearance-related features (e.g., posting or viewing photographs). The use of appearance-related features had a stronger relationship with the internalization of a thin ideal than broad use of SNSs. The finding suggests that the ability to interact with appearance-related features online and be an active participant in media creation is associated with body image disturbance. Future research should aim to explore the way SNS users interact with the media posted online and the relationship linking the use of specific appearance features and body image disturbance.",
"title": ""
},
{
"docid": "314ffaaf39e2345f90e85fc5c5fdf354",
"text": "With the fast development pace of deep submicron technology, the size and density of semiconductor memory grows rapidly. However, keeping a high level of yield and reliability for memory products is more and more difficult. Both the redundancy repair and ECC techniques have been widely used for enhancing the yield and reliability of memory chips. Specifically, the redundancy repair and ECC techniques are conventionally used to repair or correct the hard faults and soft errors, respectively. In this paper, we propose an integrated ECC and redundancy repair scheme for memory reliability enhancement. Our approach can identify the hard faults and soft errors during the memory normal operation mode, and repair the hard faults during the memory idle time as long as there are unused redundant elements. We also develop a method for evaluating the memory reliability. Experimental results show that the proposed approach is effective, e.g., the MTTF of a 32K /spl times/ 64 memory is improved by 1.412 hours (7.1%) with our integrated ECC and repair scheme.",
"title": ""
},
{
"docid": "0b8f4d14483d8fca51f882759f3194ad",
"text": "Verbs play a critical role in the meaning of sentences, but these ubiquitous words have received little attention in recent distributional semantics research. We introduce SimVerb-3500, an evaluation resource that provides human ratings for the similarity of 3,500 verb pairs. SimVerb-3500 covers all normed verb types from the USF free-association database, providing at least three examples for every VerbNet class. This broad coverage facilitates detailed analyses of how syntactic and semantic phenomena together influence human understanding of verb meaning. Further, with significantly larger development and test sets than existing benchmarks, SimVerb-3500 enables more robust evaluation of representation learning architectures and promotes the development of methods tailored to verbs. We hope that SimVerb-3500 will enable a richer understanding of the diversity and complexity of verb semantics and guide the development of systems that can effectively represent and interpret this meaning.",
"title": ""
},
{
"docid": "3f7684d107f22cb6e8a3006249d8582f",
"text": "Substrate Integrated Waveguide has been an emerging technology for the realization of microwave and millimeter wave regions. It is the planar form of the conventional rectangular waveguide. It has profound applications at higher frequencies, since prevalent platforms like microstrip and coplanar waveguide have loss related issues. This paper discusses basic concepts of SIW, design aspects and their applications to leaky wave antennas. A brief overview of recent works on Substrate integrated Waveguide based Leaky Wave Antennas has been provided.",
"title": ""
},
{
"docid": "975d1b5edfc68e8041794db9cc50d0d2",
"text": "I’ve taken to writing this series of posts on a statistical view of deep learning with two principal motivations in mind. The first was as a personal exercise to make concrete and to test the limits of the way that I think about and use deep learning in my every day work. The second, was to highlight important statistical connections and implications of deep learning that I have not seen made in the popular courses, reviews and books on deep learning, but which are extremely important to keep in mind. This document forms a collection of these essays originally posted at blog.shakirm.com.",
"title": ""
},
{
"docid": "4cfcbac8ec942252b79f2796fa7490f0",
"text": "Over the next few years the amount of biometric data being at the disposal of various agencies and authentication service providers is expected to grow significantly. Such quantities of data require not only enormous amounts of storage but unprecedented processing power as well. To be able to face this future challenges more and more people are looking towards cloud computing, which can address these challenges quite effectively with its seemingly unlimited storage capacity, rapid data distribution and parallel processing capabilities. Since the available literature on how to implement cloud-based biometric services is extremely scarce, this paper capitalizes on the most important challenges encountered during the development work on biometric services, presents the most important standards and recommendations pertaining to biometric services in the cloud and ultimately, elaborates on the potential value of cloud-based biometric solutions by presenting a few existing (commercial) examples. In the final part of the paper, a case study on fingerprint recognition in the cloud and its integration into the e-learning environment Moodle is presented.",
"title": ""
},
{
"docid": "0965f4f7b820f9561710837c7bb7b4c1",
"text": "With the success of image classification problems, deep learning is expanding its application areas. In this paper, we apply deep learning to decode a polar code. As an initial step for memoryless additive Gaussian noise channel, we consider a deep feed-forward neural network and investigate its decoding performances with respect to numerous configurations: the number of hidden layers, the number of nodes for each layer, and activation functions. Generally, the higher complex network yields a better performance. Comparing the performances of regular list decoding, we provide a guideline for the configuration parameters. Although the training of deep learning may require high computational complexity, it should be noted that the field application of trained networks can be accomplished at a low level complexity. Considering the level of performance and complexity, we believe that deep learning is a competitive decoding tool.",
"title": ""
},
{
"docid": "b045350bfb820634046bff907419d1bf",
"text": "Action recognition and human pose estimation are closely related but both problems are generally handled as distinct tasks in the literature. In this work, we propose a multitask framework for jointly 2D and 3D pose estimation from still images and human action recognition from video sequences. We show that a single architecture can be used to solve the two problems in an efficient way and still achieves state-of-the-art results. Additionally, we demonstrate that optimization from end-to-end leads to significantly higher accuracy than separated learning. The proposed architecture can be trained with data from different categories simultaneously in a seamlessly way. The reported results on four datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the effectiveness of our method on the targeted tasks.",
"title": ""
},
{
"docid": "1ca92ec69901cda036fce2bb75512019",
"text": "Information Retrieval deals with searching and retrieving information within the documents and it also searches the online databases and internet. Web crawler is defined as a program or software which traverses the Web and downloads web documents in a methodical, automated manner. Based on the type of knowledge, web crawler is usually divided in three types of crawling techniques: General Purpose Crawling, Focused crawling and Distributed Crawling. In this paper, the applicability of Web Crawler in the field of web search and a review on Web Crawler to different problem domains in web search is discussed.",
"title": ""
},
{
"docid": "0e08bd9133a46b15adec11d961eeed3f",
"text": "This article presents a review of recent literature of intersection behavior analysis for three types of intersection participants; vehicles, drivers, and pedestrians. In this survey, behavior analysis of each participant group is discussed based on key features and elements used for intersection design, planning and safety analysis. Different methods used for data collection, behavior recognition and analysis are reviewed for each group and a discussion is provided on the state of the art along with challenges and future research directions in the field.",
"title": ""
},
{
"docid": "e9eefe7d683a8b02a8456cc5ff0ebe9d",
"text": "The real-time bidding (RTB), aka programmatic buying, has recently become the fastest growing area in online advertising. Instead of bulking buying and inventory-centric buying, RTB mimics stock exchanges and utilises computer algorithms to automatically buy and sell ads in real-time; It uses per impression context and targets the ads to specific people based on data about them, and hence dramatically increases the effectiveness of display advertising. In this paper, we provide an empirical analysis and measurement of a production ad exchange. Using the data sampled from both demand and supply side, we aim to provide first-hand insights into the emerging new impression selling infrastructure and its bidding behaviours, and help identifying research and design issues in such systems. From our study, we observed that periodic patterns occur in various statistics including impressions, clicks, bids, and conversion rates (both post-view and post-click), which suggest time-dependent models would be appropriate for capturing the repeated patterns in RTB. We also found that despite the claimed second price auction, the first price payment in fact is accounted for 55.4% of total cost due to the arrangement of the soft floor price. As such, we argue that the setting of soft floor price in the current RTB systems puts advertisers in a less favourable position. Furthermore, our analysis on the conversation rates shows that the current bidding strategy is far less optimal, indicating the significant needs for optimisation algorithms incorporating the facts such as the temporal behaviours, the frequency and recency of the ad displays, which have not been well considered in the past.",
"title": ""
},
{
"docid": "f1710683991c33e146a48ed4f08c7ae3",
"text": "The rapid growth of social media, especially Twitter in Indonesia, has produced a large amount of user generated texts in the form of tweets. Since Twitter only provides the name and location of its users, we develop a classification system that predicts latent attributes of Twitter user based on his tweets. Latent attribute is an attribute that is not stated directly. Our system predicts age and job attributes of Twitter users that use Indonesian language. Classification model is developed by employing lexical features and three learning algorithms (Naïve Bayes, SVM, and Random Forest). Based on the experimental results, it can be concluded that the SVM method produces the best accuracy for balanced data.",
"title": ""
},
{
"docid": "6ad5035563dc8edf370772a432f6fea8",
"text": "We employ the new geometric active contour models, previously formulated, for edge detection and segmentation of magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound medical imagery. Our method is based on defining feature-based metrics on a given image which in turn leads to a novel snake paradigm in which the feature of interest may be considered to lie at the bottom of a potential well. Thus, the snake is attracted very quickly and efficiently to the desired feature.",
"title": ""
}
] |
scidocsrr
|
35dc79435be5fb76fe57d5813197c79b
|
A Discourse-Driven Content Model for Summarising Scientific Articles Evaluated in a Complex Question Answering Task
|
[
{
"docid": "565941db0284458e27485d250493fd2a",
"text": "Identifying background (context) information in scientific articles can help scholars understand major contributions in their research area more easily. In this paper, we propose a general framework based on probabilistic inference to extract such context information from scientific papers. We model the sentences in an article and their lexical similarities as aMarkov Random Fieldtuned to detect the patterns that context data create, and employ a Belief Propagationmechanism to detect likely context sentences. We also address the problem of generating surveys of scientific papers. Our experiments show greater pyramid scores for surveys generated using such context information rather than citation sentences alone.",
"title": ""
}
] |
[
{
"docid": "be8815170248d7635a46f07c503e32a3",
"text": "ÐStochastic discrimination is a general methodology for constructing classifiers appropriate for pattern recognition. It is based on combining arbitrary numbers of very weak components, which are usually generated by some pseudorandom process, and it has the property that the very complex and accurate classifiers produced in this way retain the ability, characteristic of their weak component pieces, to generalize to new data. In fact, it is often observed, in practice, that classifier performance on test sets continues to rise as more weak components are added, even after performance on training sets seems to have reached a maximum. This is predicted by the underlying theory, for even though the formal error rate on the training set may have reached a minimum, more sophisticated measures intrinsic to this method indicate that classifier performance on both training and test sets continues to improve as complexity increases. In this paper, we begin with a review of the method of stochastic discrimination as applied to pattern recognition. Through a progression of examples keyed to various theoretical issues, we discuss considerations involved with its algorithmic implementation. We then take such an algorithmic implementation and compare its performance, on a large set of standardized pattern recognition problems from the University of California Irvine, and Statlog collections, to many other techniques reported on in the literature, including boosting and bagging. In doing these studies, we compare our results to those reported in the literature by the various authors for the other methods, using the same data and study paradigms used by them. Included in this paper is an outline of the underlying mathematical theory of stochastic discrimination and a remark concerning boosting, which provides a theoretical justification for properties of that method observed in practice, including its ability to generalize. Index TermsÐPattern recognition, classification algorithms, stochastic discrimination, SD.",
"title": ""
},
{
"docid": "78c89f8aec24989737575c10b6bbad90",
"text": "News topics, which are constructed from news stories using the techniques of Topic Detection and Tracking (TDT), bring convenience to users who intend to see what is going on through the Internet. However, it is almost impossible to view all the generated topics, because of the large amount. So it will be helpful if all topics are ranked and the top ones, which are both timely and important, can be viewed with high priority. Generally, topic ranking is determined by two primary factors. One is how frequently and recently a topic is reported by the media; the other is how much attention users pay to it. Both media focus and user attention varies as time goes on, so the effect of time on topic ranking has already been included. However, inconsistency exists between both factors. In this paper, an automatic online news topic ranking algorithm is proposed based on inconsistency analysis between media focus and user attention. News stories are organized into topics, which are ranked in terms of both media focus and user attention. Experiments performed on practical Web datasets show that the topic ranking result reflects the influence of time, the media and users. The main contributions of this paper are as follows. First, we present the quantitative measure of the inconsistency between media focus and user attention, which provides a basis for topic ranking and an experimental evidence to show that there is a gap between what the media provide and what users view. Second, to the best of our knowledge, it is the first attempt to synthesize the two factors into one algorithm for automatic online topic ranking.",
"title": ""
},
{
"docid": "e43d32bdad37002f70d797dd3d5bd5eb",
"text": "Standard model-free deep reinforcement learning (RL) algorithms sample a new initial state for each trial, allowing them to optimize policies that can perform well even in highly stochastic environments. However, problems that exhibit considerable initial state variation typically produce high-variance gradient estimates for model-free RL, making direct policy or value function optimization challenging. In this paper, we develop a novel algorithm that instead partitions the initial state space into “slices”, and optimizes an ensemble of policies, each on a different slice. The ensemble is gradually unified into a single policy that can succeed on the whole state space. This approach, which we term divide-and-conquer RL, is able to solve complex tasks where conventional deep RL methods are ineffective. Our results show that divide-and-conquer RL greatly outperforms conventional policy gradient methods on challenging grasping, manipulation, and locomotion tasks, and exceeds the performance of a variety of prior methods. Videos of policies learned by our algorithm can be viewed at https://sites.google.com/view/dnc-rl/.",
"title": ""
},
{
"docid": "ce901f6509da9ab13d66056319c15bd8",
"text": "In this survey we overview graph-based clustering and its applications in computational linguistics. We summarize graph-based clustering as a five-part story: hypothesis, modeling, measure, algorithm and evaluation. We then survey three typical NLP problems in which graph-based clustering approaches have been successfully applied. Finally, we comment on the strengths and weaknesses of graph-based clustering and envision that graph-based clustering is a promising solution for some emerging NLP problems.",
"title": ""
},
{
"docid": "2eaa686e4808b3c613a5061dc5bb14a7",
"text": "To date, there is little information on the impact of more aggressive treatment regimen such as BEACOPP (bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, and prednisone) on the fertility of male patients with Hodgkin lymphoma (HL). We evaluated the impact of BEACOPP regimen on fertility status in 38 male patients with advanced-stage HL enrolled into trials of the German Hodgkin Study Group (GHSG). Before treatment, 6 (23%) patients had normozoospermia and 20 (77%) patients had dysspermia. After treatment, 34 (89%) patients had azoospermia, 4 (11%) had other dysspermia, and no patients had normozoospermia. There was no difference in azoospermia rate between patients treated with BEACOPP baseline and those given BEACOPP escalated (93% vs 87%, respectively; P > .999). After treatment, most of patients (93%) had abnormal values of follicle-stimulating hormone, whereas the number of patients with abnormal levels of testosterone and luteinizing hormone was less pronounced-57% and 21%, respectively. In univariate analysis, none of the evaluated risk factors (ie, age, clinical stage, elevated erythrocyte sedimentation rate, B symptoms, large mediastinal mass, extranodal disease, and 3 or more lymph nodes) was statistically significant. Male patients with HL are at high risk of infertility after treatment with BEACOPP.",
"title": ""
},
{
"docid": "ee07cf061a1a3b7283c22434dcabd4eb",
"text": "Over the past decade, machine learning techniques and in particular predictive modeling and pattern recognition in biomedical sciences, from drug delivery systems to medical imaging, have become one of the most important methods of assisting researchers in gaining a deeper understanding of issues in their entirety and solving complex medical problems. Deep learning is a powerful machine learning algorithm in classification that extracts low-to high-level features. In this paper, we employ a convolutional neural network to distinguish an Alzheimers brain from a normal, healthy brain. The importance of classifying this type of medical data lies in its potential to develop a predictive model or system in order to recognize the symptoms of Alzheimers disease when compared with normal subjects and to estimate the stages of the disease. Classification of clinical data for medical conditions such as Alzheimers disease has always been challenging, and the most problematic aspect has always been selecting the strongest discriminative features. Using the Convolutional Neural Network (CNN) and the famous architecture LeNet-5, we successfully classified functional MRI data of Alzheimers subjects from normal controls, where the accuracy of testing data reached 96.85%. This experiment suggests that the shift and scale invariant features extracted by CNN followed by deep learning classification represents the most powerful method of distinguishing clinical data from healthy data in fMRI. This approach also allows for expansion of the methodology to predict more complicated systems.",
"title": ""
},
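The record above describes classifying fMRI data with a LeNet-5-style CNN. A minimal sketch of such a binary classifier, assuming PyTorch, single-channel 32x32 inputs, and two output classes, might look like the following; it is illustrative only and not the authors' actual pipeline (fMRI preprocessing, data loading, and training are omitted).

```python
# Minimal LeNet-5-style binary classifier sketch (illustrative; not the
# authors' pipeline). Assumes 1-channel 32x32 inputs, e.g. resized fMRI slices.
import torch
import torch.nn as nn

class LeNet5Binary(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, 2),                 # e.g. Alzheimer's vs. control
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNet5Binary()
x = torch.randn(8, 1, 32, 32)                 # dummy batch of slices
logits = model(x)
print(logits.shape)                           # torch.Size([8, 2])
```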
{
"docid": "89bcf5b0af2f8bf6121e28d36ca78e95",
"text": "3 Relating modules to external clinical traits 2 3.a Quantifying module–trait associations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 3.b Gene relationship to trait and important modules: Gene Significance and Module Membership . . . . 2 3.c Intramodular analysis: identifying genes with high GS and MM . . . . . . . . . . . . . . . . . . . . . . 3 3.d Summary output of network analysis results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4",
"title": ""
},
{
"docid": "ff0d9abbfce64e83576d7e0eb235a46b",
"text": "For multi-copter unmanned aerial vehicles (UAVs) sensing of the actual altitude is an important task. Many functions providing increased flight safety and easy maneuverability rely on altitude data. Commonly used sensors provide the altitude only relative to the starting position, or are limited in range and/or resolution. With the 77 GHz FMCW radar-based altimeter presented in this paper not only the actual altitude over ground but also obstacles such as trees and bushes can be detected. The capability of this solution is verified by measurements over different terrain and vegetation.",
"title": ""
},
{
"docid": "06ba81270357c9bcf1dd8f1871741537",
"text": "The ability of a normal human listener to recognize objects in the environment from only the sounds they produce is extraordinarily robust with regard to characteristics of the acoustic environment and of other competing sound sources. In contrast, computer systems designed to recognize sound sources function precariously, breaking down whenever the target sound is degraded by reverberation, noise, or competing sounds. Robust listening requires extensive contextual knowledge, but the potential contribution of sound-source recognition to the process of auditory scene analysis has largely been neglected by researchers building computational models of the scene analysis process. This thesis proposes a theory of sound-source recognition, casting recognition as a process of gathering information to enable the listener to make inferences about objects in the environment or to predict their behavior. In order to explore the process, attention is restricted to isolated sounds produced by a small class of sound sources, the non-percussive orchestral musical instruments. Previous research on the perception and production of orchestral instrument sounds is reviewed from a vantage point based on the excitation and resonance structure of the sound-production process, revealing a set of perceptually salient acoustic features. A computer model of the recognition process is developed that is capable of “listening” to a recording of a musical instrument and classifying the instrument as one of 25 possibilities. The model is based on current models of signal processing in the human auditory system. It explicitly extracts salient acoustic features and uses a novel improvisational taxonomic architecture (based on simple statistical pattern-recognition techniques) to classify the sound source. The performance of the model is compared directly to that of skilled human listeners, using",
"title": ""
},
{
"docid": "e85e66b6ad6324a07ca299bf4f3cd447",
"text": "To date, the majority of ad hoc routing protocol research has been done using simulation only. One of the most motivating reasons to use simulation is the difficulty of creating a real implementation. In a simulator, the code is contained within a single logical component, which is clearly defined and accessible. On the other hand, creating an implementation requires use of a system with many components, including many that have little or no documentation. The implementation developer must understand not only the routing protocol, but all the system components and their complex interactions. Further, since ad hoc routing protocols are significantly different from traditional routing protocols, a new set of features must be introduced to support the routing protocol. In this paper we describe the event triggers required for AODV operation, the design possibilities and the decisions for our ad hoc on-demand distance vector (AODV) routing protocol implementation, AODV-UCSB. This paper is meant to aid researchers in developing their own on-demand ad hoc routing protocols and assist users in determining the implementation design that best fits their needs.",
"title": ""
},
{
"docid": "149ffd270f39a330f4896c7d3aa290be",
"text": "The pathogenesis underlining many neurodegenerative diseases remains incompletely understood. The lack of effective biomarkers and disease preventative medicine demands the development of new techniques to efficiently probe the mechanisms of disease and to detect early biomarkers predictive of disease onset. Raman spectroscopy is an established technique that allows the label-free fingerprinting and imaging of molecules based on their chemical constitution and structure. While analysis of isolated biological molecules has been widespread in the chemical community, applications of Raman spectroscopy to study clinically relevant biological species, disease pathogenesis, and diagnosis have been rapidly increasing since the past decade. The growing number of biomedical applications has shown the potential of Raman spectroscopy for detection of novel biomarkers that could enable the rapid and accurate screening of disease susceptibility and onset. Here we provide an overview of Raman spectroscopy and related techniques and their application to neurodegenerative diseases. We further discuss their potential utility in research, biomarker detection, and diagnosis. Challenges to routine use of Raman spectroscopy in the context of neuroscience research are also presented.",
"title": ""
},
{
"docid": "68420190120449343006879e23be8789",
"text": "Recent findings suggest that consolidation of emotional memories is influenced by menstrual phase in women. In contrast to other phases, in the mid-luteal phase when progesterone levels are elevated, cortisol levels are increased and correlated with emotional memory. This study examined the impact of progesterone on cortisol and memory consolidation of threatening stimuli under stressful conditions. Thirty women were recruited for the high progesterone group (in the mid-luteal phase) and 26 for the low progesterone group (in non-luteal phases of the menstrual cycle). Women were shown a series of 20 neutral or threatening images followed immediately by either a stressor (cold pressor task) or control condition. Participants returned two days later for a surprise free recall test of the images and salivary cortisol responses were monitored. High progesterone levels were associated with higher baseline and stress-evoked cortisol levels, and enhanced memory of negative images when stress was received. A positive correlation was found between stress-induced cortisol levels and memory recall of threatening images. These findings suggest that progesterone mediates cortisol responses to stress and subsequently predicts memory recall for emotionally arousing stimuli.",
"title": ""
},
{
"docid": "1bea3fdeb0ca47045a64771bd3925e11",
"text": "The goal of Word Sense Disambiguation (WSD) is to identify the correct meaning of a word in the particular context. Traditional supervised methods only use labeled data (context), while missing rich lexical knowledge such as the gloss which defines the meaning of a word sense. Recent studies have shown that incorporating glosses into neural networks for WSD has made significant improvement. However, the previous models usually build the context representation and gloss representation separately. In this paper, we find that the learning for the context and gloss representation can benefit from each other. Gloss can help to highlight the important words in the context, thus building a better context representation. Context can also help to locate the key words in the gloss of the correct word sense. Therefore, we introduce a co-attention mechanism to generate co-dependent representations for the context and gloss. Furthermore, in order to capture both word-level and sentence-level information, we extend the attention mechanism in a hierarchical fashion. Experimental results show that our model achieves the state-of-the-art results on several standard English all-words WSD test datasets.",
"title": ""
},
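The record above describes a co-attention mechanism between context and gloss representations for WSD. A minimal sketch of word-level co-attention, assuming PyTorch, a shared embedding dimension, and scaled dot-product affinities, might look like the following; the shapes and normalization are assumptions, not the paper's exact architecture.

```python
# Minimal word-level co-attention sketch between a context and a gloss
# (illustrative; not the paper's architecture). Shapes and names are assumed.
import math
import torch
import torch.nn.functional as F

def co_attention(context, gloss):
    """context: (B, Lc, d) token embeddings; gloss: (B, Lg, d) token embeddings."""
    d = context.size(-1)
    # Affinity between every context word and every gloss word.
    affinity = torch.bmm(context, gloss.transpose(1, 2)) / math.sqrt(d)  # (B, Lc, Lg)
    ctx2gloss = F.softmax(affinity, dim=2)                # attend over gloss words
    gloss2ctx = F.softmax(affinity, dim=1)                # attend over context words
    context_rep = torch.bmm(ctx2gloss, gloss)             # gloss-aware context, (B, Lc, d)
    gloss_rep = torch.bmm(gloss2ctx.transpose(1, 2), context)  # context-aware gloss, (B, Lg, d)
    return context_rep, gloss_rep

ctx = torch.randn(4, 20, 128)    # batch of 20-token contexts
gls = torch.randn(4, 12, 128)    # batch of 12-token glosses
c_rep, g_rep = co_attention(ctx, gls)
print(c_rep.shape, g_rep.shape)  # torch.Size([4, 20, 128]) torch.Size([4, 12, 128])
```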
{
"docid": "2acb16f1e67f141220dc05b90ac23385",
"text": "By combining patch-clamp methods with two-photon microscopy, it is possible to target recordings to specific classes of neurons in vivo. Here we describe methods for imaging and recording from the soma and dendrites of neurons identified using genetically encoded probes such as green fluorescent protein (GFP) or functional indicators such as Oregon Green BAPTA-1. Two-photon targeted patching can also be adapted for use with wild-type brains by perfusing the extracellular space with a membrane-impermeable dye to visualize the cells by their negative image and target them for electrical recordings, a technique termed \"shadowpatching.\" We discuss how these approaches can be adapted for single-cell electroporation to manipulate specific cells genetically. These approaches thus permit the recording and manipulation of rare genetically, morphologically, and functionally distinct subsets of neurons in the intact nervous system.",
"title": ""
},
{
"docid": "df679dcd213842a786c1ad9587c66f77",
"text": "The statistics of professional sports, including players and teams, provide numerous opportunities for research. Cricket is one of the most popular team sports, with billions of fans all over the world. In this thesis, we address two problems related to the One Day International (ODI) format of the game. First, we propose a novel method to predict the winner of ODI cricket matches using a team-composition based approach at the start of the match. Second, we present a method to quantitatively assess the performances of individual players in a match of ODI cricket which incorporates the game situations under which the players performed. The player performances are further used to predict the player of the match award. Players are the fundamental unit of a team. Players of one team work against the players of the opponent team in order to win a match. The strengths and abilities of the players of a team play a key role in deciding the outcome of a match. However, a team changes its composition depending on the match conditions, venue, and opponent team, etc. Therefore, we propose a novel dynamic approach which takes into account the varying strengths of the individual players and reflects the changes in player combinations over time. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual players’ batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. Using the relative strength of one team versus the other, along with two player-independent features, namely, the toss outcome and the venue of the match, we evaluate multiple supervised machine learning algorithms to predict the winner of the match. We show that, for our approach, the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers. Players have multiple roles in a game of cricket, predominantly as batsmen and bowlers. Over the generations, statistics such as batting and bowling averages, and strike and economy rates have been used to judge the performance of individual players. These measures, however, do not take into consideration the context of the game in which a player performed across the course of a match. Further, these types of statistics are incapable of comparing the performance of players across different roles. Therefore, we present an approach to quantitatively assess the performances of individual players in a single match of ODI cricket. We have developed a new measure, called the Work Index, which represents the amount of work that is yet to be done by a team to achieve its target. Our approach incorporates game situations and the team strengths to measure the player contributions. This not only helps us in",
"title": ""
},
{
"docid": "9c857daee24f793816f1cee596e80912",
"text": "Introduction Since the introduction of a new UK Ethics Committee Authority (UKECA) in 2004 and the setting up of the Central Office for Research Ethics Committees (COREC), research proposals have come under greater scrutiny than ever before. The era of self-regulation in UK research ethics has ended (Kerrison and Pollock, 2005). The UKECA recognise various committees throughout the UK that can approve proposals for research in NHS facilities (National Patient Safety Agency, 2007), and the scope of research for which approval must be sought is defined by the National Research Ethics Service, which has superceded COREC. Guidance on sample size (Central Office for Research Ethics Committees, 2007: 23) requires that 'the number should be sufficient to achieve worthwhile results, but should not be so high as to involve unnecessary recruitment and burdens for participants'. It also suggests that formal sample estimation size should be based on the primary outcome, and that if there is more than one outcome then the largest sample size should be chosen. Sample size is a function of three factors – the alpha level, beta level and magnitude of the difference (effect size) hypothesised. Referring to the expected size of effect, COREC (2007: 23) guidance states that 'it is important that the difference is not unrealistically high, as this could lead to an underestimate of the required sample size'. In this paper, issues of alpha, beta and effect size will be considered from a practical perspective. A freely-available statistical software package called GPower (Buchner et al, 1997) will be used to illustrate concepts and provide practical assistance to novitiate researchers and members of research ethics committees. There are a wide range of freely available statistical software packages, such as PS (Dupont and Plummer, 1997) and STPLAN (Brown et al, 2000). Each has features worth exploring, but GPower was chosen because of its ease of use and the wide range of study designs for which it caters. Using GPower, sample size and power can be estimated or checked by those with relatively little technical knowledge of statistics. Alpha and beta errors and power Researchers begin with a research hypothesis – a 'hunch' about the way that the world might be. For example, that treatment A is better than treatment B. There are logical reasons why this can never be demonstrated as absolutely true, but evidence that it may or may not be true can be obtained by …",
"title": ""
},
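The record above walks through a priori sample-size estimation from alpha, power (1 - beta), and effect size using GPower. A comparable calculation can be sketched in Python with statsmodels rather than GPower; the effect size, alpha, and power values below are illustrative assumptions, not figures from the passage.

```python
# Sketch of an a priori sample-size calculation analogous to what the passage
# describes doing in GPower (illustrative; uses statsmodels, not GPower itself).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group for a two-sided independent-samples t-test,
# medium effect size d = 0.5, alpha = 0.05, power = 0.80 (i.e. beta = 0.20).
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   alternative='two-sided')
print(round(n_per_group))   # roughly 64 participants per group

# Conversely, the power achieved with a fixed sample of 30 per group.
power = analysis.solve_power(effect_size=0.5, nobs1=30, alpha=0.05,
                             alternative='two-sided')
print(round(power, 2))      # roughly 0.47
```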
{
"docid": "6d329c1fa679ac201387c81f59392316",
"text": "Mosquitoes represent the major arthropod vectors of human disease worldwide transmitting malaria, lymphatic filariasis, and arboviruses such as dengue virus and Zika virus. Unfortunately, no treatment (in the form of vaccines or drugs) is available for most of these diseases andvectorcontrolisstillthemainformofprevention. Thelimitationsoftraditionalinsecticide-based strategies, particularly the development of insecticide resistance, have resulted in significant efforts to develop alternative eco-friendly methods. Biocontrol strategies aim to be sustainable and target a range of different mosquito species to reduce the current reliance on insecticide-based mosquito control. In thisreview, weoutline non-insecticide basedstrategiesthat havebeenimplemented orare currently being tested. We also highlight the use of mosquito behavioural knowledge that can be exploited for control strategies.",
"title": ""
},
{
"docid": "b0eec6d5b205eafc6fcfc9710e9cf696",
"text": "The reflectarray antenna is a substitution of reflector antennas by making use of planar phased array techniques [1]. The array elements are specially designed, providing proper phase compensations to the spatial feed through various techniques [2–4]. The bandwidth limitation due to microstrip structures has led to various multi-band designs [5–6]. In these designs, the multi-band performance is realized through multi-layer structures, causing additional volume requirement and fabrication cost. An alternative approach is provided in [7–8], where single-layer structures are adopted. The former [7] implements a dual-band linearly polarized reflectarray whereas the latter [8] establishes a single-layer tri-band concept with circular polarization (CP). In this paper, a prototype based on the conceptual structure in [8] is designed, fabricated, and measured. The prototype is composed of three sub-arrays on a single layer. They have pencil beam patterns at 32 GHz (Ka-band), 8.4 GHz (X-band), and 7.1 GHz (C-band), respectively. Considering the limited area, two phase compensation techniques are adopted by these sub-arrays. The varied element size (VES) technique is applied to the C-band, whereas the element rotation (ER) technique is used in both X-band and Ka-band.",
"title": ""
},
{
"docid": "42db85c2e0e243c5e31895cfc1f03af6",
"text": "This survey presents recent progress on Affective Computing (AC) using mobile devices. AC has been one of the most active research topics for decades. The primary limitation of traditional AC research refers to as impermeable emotions. This criticism is prominent when emotions are investigated outside social contexts. It is problematic because some emotions are directed at other people and arise from interactions with them. The development of smart mobile wearable devices (e.g., Apple Watch, Google Glass, iPhone, Fitbit) enables the wild and natural study for AC in the aspect of computer science. This survey emphasizes the AC study and system using smart wearable devices. Various models, methodologies and systems are discussed in order to examine the state of the art. Finally, we discuss remaining challenges and future works.",
"title": ""
},
{
"docid": "0506a05ff43ae7590809015bfb37cf01",
"text": "The balanced business scorecard is a widely-used management framework for optimal measurement of organizational performance. Explains that the scorecard originated in an attempt to address the problem of systems apparently not working. However, the problem proved to be less the information systems than the broader organizational systems, specifically business performance measurement. Discusses the fundamental points to cover in implementation of the scorecard. Presents ten “golden rules” developed as a means of bringing the framework closer to practical application. The Nolan Norton Institute developed the balanced business scorecard in 1990, resulting in the much-referenced Harvard Business Review article, “Measuring performance in the organization of the future”, by Robert Kaplan and David Norton. The balanced scorecard supplemented traditional financial measures with three additional perspectives: customers, internal business processes and learning and growth. Currently, the balanced business scorecard is a powerful and widely-accepted framework for defining performance measures and communicating objectives and vision to the organization. Many companies around the world have worked with the balanced business scorecard but experiences vary. Based on practical experiences of clients of Nolan, Norton & Co. and KPMG in putting the balanced business scorecard to work, the following ten golden rules for its implementation have been determined: 1 There are no standard solutions: all businesses differ. 2 Top management support is essential. 3 Strategy is the starting point. 4 Determine a limited and balanced number of objectives and measures. 5 No in-depth analyses up front, but refine and learn by doing. 6 Take a bottom-up and top-down approach. 7 It is not a systems issue, but systems are an issue. 8 Consider delivery systems at the start. 9 Consider the effect of performance indicators on behaviour. 10 Not all measures can be quantified.",
"title": ""
}
] |
scidocsrr
|
abbd4694897bb5c4fd5866f00de2d593
|
Aesthetics and credibility in web site design
|
[
{
"docid": "e7c8abf3387ba74ca0a6a2da81a26bc4",
"text": "An experiment was conducted to test the relationships between users' perceptions of a computerized system's beauty and usability. The experiment used a computerized application as a surrogate for an Automated Teller Machine (ATM). Perceptions were elicited before and after the participants used the system. Pre-experimental measures indicate strong correlations between system's perceived aesthetics and perceived usability. Post-experimental measures indicated that the strong correlation remained intact. A multivariate analysis of covariance revealed that the degree of system's aesthetics affected the post-use perceptions of both aesthetics and usability, whereas the degree of actual usability had no such effect. The results resemble those found by social psychologists regarding the effect of physical attractiveness on the valuation of other personality attributes. The ®ndings stress the importance of studying the aesthetic aspect of human±computer interaction (HCI) design and its relationships to other design dimensions. q 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "36a615660b8f0c60bef06b5a57887bd1",
"text": "Quantum cryptography is an emerging technology in which two parties can secure network communications by applying the phenomena of quantum physics. The security of these transmissions is based on the inviolability of the laws of quantum mechanics. Quantum cryptography was born in the early seventies when Steven Wiesner wrote \"Conjugate Coding\", which took more than ten years to end this paper. The quantum cryptography relies on two important elements of quantum mechanics - the Heisenberg Uncertainty principle and the principle of photon polarization. The Heisenberg Uncertainty principle states that, it is not possible to measure the quantum state of any system without distributing that system. The principle of photon polarization states that, an eavesdropper can not copy unknown qubits i.e. unknown quantum states, due to no-cloning theorem which was first presented by Wootters and Zurek in 1982. This research paper concentrates on the theory of quantum cryptography, and how this technology contributes to the network security. This research paper summarizes the current state of quantum cryptography, and the real–world application implementation of this technology, and finally the future direction in which the quantum cryptography is headed forwards.",
"title": ""
},
{
"docid": "dfa5334f77bba5b1eeb42390fed1bca3",
"text": "Personality was studied as a conditioner of the effects of stressful life events on illness onset. Two groups of middle and upper level executives had comparably high degrees of stressful life events in the previous 3 years, as measured by the Holmes and Rahe Schedule of Recent Life Events. One group (n = 86) suffered high stress without falling ill, whereas the other (n = 75) reported becoming sick after their encounter with stressful life events. Illness was measured by the Wyler, Masuda, and Holmes Seriousness of Illness Survey. Discriminant function analysis, run on half of the subjects in each group and cross-validated on the remaining cases, supported the prediction that high stress/low illness executives show, by comparison with high stress/high illness executives, more hardiness, that is, have a stronger commitment to self, an attitude of vigorousness toward the environment, a sense of meaningfulness, and an internal locus of control.",
"title": ""
},
{
"docid": "bf08d673b40109d6d6101947258684fd",
"text": "More and more medicinal mushrooms have been widely used as a miraculous herb for health promotion, especially by cancer patients. Here we report screening thirteen mushrooms for anti-cancer cell activities in eleven different cell lines. Of the herbal products tested, we found that the extract of Amauroderma rude exerted the highest activity in killing most of these cancer cell lines. Amauroderma rude is a fungus belonging to the Ganodermataceae family. The Amauroderma genus contains approximately 30 species widespread throughout the tropical areas. Since the biological function of Amauroderma rude is unknown, we examined its anti-cancer effect on breast carcinoma cell lines. We compared the anti-cancer activity of Amauroderma rude and Ganoderma lucidum, the most well-known medicinal mushrooms with anti-cancer activity and found that Amauroderma rude had significantly higher activity in killing cancer cells than Ganoderma lucidum. We then examined the effect of Amauroderma rude on breast cancer cells and found that at low concentrations, Amauroderma rude could inhibit cancer cell survival and induce apoptosis. Treated cancer cells also formed fewer and smaller colonies than the untreated cells. When nude mice bearing tumors were injected with Amauroderma rude extract, the tumors grew at a slower rate than the control. Examination of these tumors revealed extensive cell death, decreased proliferation rate as stained by Ki67, and increased apoptosis as stained by TUNEL. Suppression of c-myc expression appeared to be associated with these effects. Taken together, Amauroderma rude represented a powerful medicinal mushroom with anti-cancer activities.",
"title": ""
},
{
"docid": "f285815e47ea0613fb1ceb9b69aee7df",
"text": "Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.",
"title": ""
},
{
"docid": "aa418cfd93eaba0d47084d0b94be69b8",
"text": "Single-trial classification of Event-Related Potentials (ERPs) is needed in many real-world brain-computer interface (BCI) applications. However, because of individual differences, the classifier needs to be calibrated by using some labeled subject specific training samples, which may be inconvenient to obtain. In this paper we propose a weighted adaptation regularization (wAR) approach for offline BCI calibration, which uses data from other subjects to reduce the amount of labeled data required in offline single-trial classification of ERPs. Our proposed model explicitly handles class-imbalance problems which are common in many real-world BCI applications. War can improve the classification performance, given the same number of labeled subject-specific training samples, or, equivalently, it can reduce the number of labeled subject-specific training samples, given a desired classification accuracy. To reduce the computational cost of wAR, we also propose a source domain selection (SDS) approach. Our experiments show that wARSDS can achieve comparable performance with wAR but is much less computationally intensive. We expect wARSDS to find broad applications in offline BCI calibration.",
"title": ""
},
{
"docid": "35b82263484452d83519c68a9dfb2778",
"text": "S Music and the Moving Image Conference May 27th 29th, 2016 1. Loewe Friday, May 27, 2016, 9:30AM – 11:00AM MUSIC EDITING: PROCESS TO PRACTICE—BRIDGING THE VARIOUS PERSPECTIVES IN FILMMAKING AND STORY-TELLING Nancy Allen, Film Music Editor While the technical aspects of music editing and film-making continue to evolve, the fundamental nature of story-telling remains the same. Ideally, the role of the music editor exists at an intersection between the Composer, Director, and Picture Editor, where important creative decisions are made. This privileged position allows the Music Editor to better explore how to tell the story through music and bring the evolving vision of the film into tighter focus. 2. Loewe Friday, May 27, 2016, 11:30 AM – 1:00 PM GREAT EXPECTATIONS? THE CHANGING ROLE OF AUDIOVISUAL INCONGRUENCE IN CONTEMPORARY MULTIMEDIA Dave Ireland, University of Leeds Film-music moments that are perceived to be incongruent, misfitting or inappropriate have often been described as highly memorable. These claims can in part be explained by the separate processing of sonic and visual information that can occur when incongruent combinations subvert expectations of an audiovisual pairing in which the constituent components share a greater number of properties. Drawing upon a sequence from the TV sitcom Modern Family in which images of violent destruction are juxtaposed with performance of tranquil classical music, this paper highlights the increasing prevalence of such uses of audiovisual difference in contemporary multimedia. Indeed, such principles even now underlie a form of Internet meme entitled ‘Whilst I play unfitting music’. Such examples serve to emphasize the evolving functions of incongruence, emphasizing the ways in which such types of audiovisual pairing now also serve as a marker of authorial style and a source of intertextual parody. Drawing upon psychological theories of expectation and ideas from semiotics that facilitate consideration of the potential disjunction between authorial intent and perceiver response, this paper contends that such forms of incongruence should be approached from a psycho-semiotic perspective. Through consideration of the aforementioned examples, it will be demonstrated that this approach allows for: more holistic understanding of evolving expectations and attitudes towards audiovisual incongruence that may shape perceiver response; and a more nuanced mode of analyzing factors that may influence judgments of film-music fit and appropriateness. MUSICAL META-MORPHOSIS: BREAKING THE FOURTH WALL THROUGH DIEGETIC-IZING AND METACAESURA Rebecca Eaton, Texas State University In “The Fantastical Gap,” Stilwell suggests that metadiegetic music—which puts the audience “inside a character’s head”— begets such a strong spectator bond that it becomes “a kind of musical ‘direct address,’ threatening to break the fourth wall that is the screen.” While Stillwell theorizes a breaking of the fourth wall through audience over-identification, in this paper I define two means of film music transgression that potentially unsuture an audience, exposing film qua film: “diegetic-izing” and “metacaesura.” While these postmodern techniques 1) reveal film as a constructed artifact, and 2) thus render the spectator a more, not less, “troublesome viewing subject,” my analyses demonstrate that these breaches of convention still further the narrative aims of their respective films. Both Buhler and Stilwell analyze music that gradually dissolves from non-diegetic to diegetic. 
“Diegeticizing” unexpectedly reveals what was assumed to be nondiegetic as diegetic, subverting Gorbman’s first principle of invisibility. In parodies including Blazing Saddles and Spaceballs, this reflexive uncloaking plays for laughs. The Truman Show and the Hunger Games franchise skewer live soundtrack musicians and timpani—ergo, film music itself—as tools of emotional manipulation or propaganda. “Metacaesura” serves as another means of breaking the fourth wall. Metacaesura arises when non-diegetic music cuts off in media res. While diegeticizing renders film music visible, metacaesura renders it audible (if only in hindsight). In Honda’s “Responsible You,” Pleasantville, and The Truman Show, the dramatic cessation of nondiegetic music compels the audience to acknowledge the constructedness of both film and their own worlds. Partial Bibliography Brown, Tom. Breaking the Fourth Wall: Direct Address in the Cinema. Edinburgh: Edinburgh University Press, 2012. Buhler, James. “Analytical and Interpretive Approaches to Film Music (II): Interpreting Interactions of Music and Film.” In Film Music: An Anthology of Critical Essays, edited by K.J. Donnelly, 39-61. Edinburgh University Press, 2001. Buhler, James, Anahid Kassabian, David Neumeyer, and Robynn Stillwell. “Roundtable on Film Music.” Velvet Light Trap 51 (Spring 2003): 73-91. Buhler, James, Caryl Flinn, and David Neumeyer, eds. Music and Cinema. Hanover: Wesleyan/University Press of New England, 2000. Eaton, Rebecca M. Doran. “Unheard Minimalisms: The Function of the Minimalist Technique in Film Scores.” PhD diss., The University of Texas at Austin, 2008. Gorbman, Claudia. Unheard Melodies: Narrative Film Music. Bloomington: University of Indiana Press, 1987. Harries, Dan. Film Parody. London: British Film Institute, 2000. Kassabian, Anahid. Hearing Film: Tracking Identifications in Contemporary Hollywood Film Music. New York: Routledge, 2001. Neumeyer, David. “Diegetic/nondiegetic: A Theoretical Model.” Music and the Moving Image 2.1 (2009): 26–39. Stilwell, Robynn J. “The Fantastical Gap Between Diegetic and Nondiegetic.” In Beyond the Soundtrack, edited by Daniel Goldmark, Lawrence Kramer, and Richard Leppert, 184202. Berkeley: The University of California Press, 2007. REDEFINING PERSPECTIVE IN ATONEMENT: HOW MUSIC SET THE STAGE FOR MODERN MEDIA CONSUMPTION Lillie McDonough, New York University One of the most striking narrative devices in Joe Wright’s film adaptation of Atonement (2007) is in the way Dario Marianelli’s original score dissolves the boundaries between diagetic and non-diagetic music at key moments in the drama. I argue that these moments carry us into a liminal state where the viewer is simultaneously in the shoes of a first person character in the world of the film and in the shoes of a third person viewer aware of the underscore as a hallmark of the fiction of a film in the first place. This reflects the experience of Briony recalling the story, both as participant and narrator, at the metalevel of the audience. The way the score renegotiates the customary musical playing space creates a meta-narrative that resembles one of the fastest growing forms of digital media of today: videogames. At their core, video games work by placing the player in a liminal state of both a viewer who watches the story unfold and an agent who actively takes part in the story’s creation. 
In fact, the growing trend towards hyperrealism and virtual reality intentionally progressively erodes the boundaries between the first person agent in real the world and agent on screen in the digital world. Viewed through this lens, the philosophy behind the experience of Atonement’s score and sound design appears to set the stage for way our consumption of media has developed since Atonement’s release in 2007. Mainly, it foreshadows and highlights a prevalent desire to progressively blur the lines between media and life. 3. Room 303, Friday, May 27, 2016, 11:30 AM – 1:00 PM HOLLYWOOD ORCHESTRATORS AND GHOSTWRITERS OF THE 1960s AND 1970s: THE CASE OF MOACIR SANTOS Lucas Bonetti, State University of Campinas In Hollywood in the 1960s and 1970s, freelance film composers trying to break into the market saw ghostwriting as opportunities to their professional networks. Meanwhile, more renowned composers saw freelancers as means of easing their work burdens. The phenomenon was so widespread that freelancers even sometimes found themselves ghostwriting for other ghostwriters. Ghostwriting had its limitations, though: because freelancers did not receive credit, they could not grow their resumes. Moreover, their music often had to follow such strict guidelines that they were not able to showcase their own compositional voices. Being an orchestrator raised fewer questions about authorship, and orchestrators usually did not receive credit for their work. Typically, composers provided orchestrators with detailed sketches, thereby limiting their creative possibilities. This story would suggest that orchestrators were barely more than copyists—though with more intense workloads. This kind of thankless work was especially common in scoring for episodic television series of the era, where the fast pace of the industry demanded more agility and productivity. Brazilian composer Moacir Santos worked as a Hollywood ghostwriter and orchestrator starting in 1968. His experiences exemplify the difficulties of these professions during this era. In this paper I draw on an interview-based research I conducted in the Los Angeles area to show how Santos’s experiences showcase the difficulties of being a Hollywood outsider at the time. In particular, I examine testimony about racial prejudice experienced by Santos, and how misinformation about his ghostwriting activity has led to misunderstandings among scholars about his contributions. SING A SONG!: CHARITY BAILEY AND INTERRACIAL MUSIC EDUCATION ON 1950s NYC TELEVISION Melinda Russell, Carleton College Rhode Island native Charity Bailey (1904-1978) helped to define a children’s music market in print and recordings; in each instance the contents and forms she developed are still central to American children’s musical culture and practice. After study at Juilliard and Dalcroze, Bailey taught music at the Little Red School House in Greenwich Village from 1943-1954, where her students included Mary Travers and Eric Weissberg. Bailey’s focus on African, African-American, and Car",
"title": ""
},
{
"docid": "bdfb48fcd7ef03d913a41ca8392552b6",
"text": "Recent advance of large scale similarity search involves using deeply learned representations to improve the search accuracy and use vector quantization methods to increase the search speed. However, how to learn deep representations that strongly preserve similarities between data pairs and can be accurately quantized via vector quantization remains a challenging task. Existing methods simply leverage quantization loss and similarity loss, which result in unexpectedly biased back-propagating gradients and affect the search performances. To this end, we propose a novel gradient snapping layer (GSL) to directly regularize the back-propagating gradient towards a neighboring codeword, the generated gradients are un-biased for reducing similarity loss and also propel the learned representations to be accurately quantized. Joint deep representation and vector quantization learning can be easily performed by alternatively optimize the quantization codebook and the deep neural network. The proposed framework is compatible with various existing vector quantization approaches. Experimental results demonstrate that the proposed framework is effective, flexible and outperforms the state-of-the-art large scale similarity search methods.",
"title": ""
},
{
"docid": "dd51e9bed7bbd681657e8742bb5bf280",
"text": "Automated negotiation systems with self interested agents are becoming increas ingly important One reason for this is the technology push of a growing standardized communication infrastructure Internet WWW NII EDI KQML FIPA Concor dia Voyager Odyssey Telescript Java etc over which separately designed agents belonging to di erent organizations can interact in an open environment in real time and safely carry out transactions The second reason is strong application pull for computer support for negotiation at the operative decision making level For example we are witnessing the advent of small transaction electronic commerce on the Internet for purchasing goods information and communication bandwidth There is also an industrial trend toward virtual enterprises dynamic alliances of small agile enterprises which together can take advantage of economies of scale when available e g respond to more diverse orders than individual agents can but do not su er from diseconomies of scale Multiagent technology facilitates such negotiation at the operative decision mak ing level This automation can save labor time of human negotiators but in addi tion other savings are possible because computational agents can be more e ective at nding bene cial short term contracts than humans are in strategically and com binatorially complex settings This chapter discusses multiagent negotiation in situations where agents may have di erent goals and each agent is trying to maximize its own good without concern for the global good Such self interest naturally prevails in negotiations among independent businesses or individuals In building computer support for negotiation in such settings the issue of self interest has to be dealt with In cooperative distributed problem solving the system designer imposes an interaction protocol and a strategy a mapping from state history to action a",
"title": ""
},
{
"docid": "ed0d2151f5f20a233ed8f1051bc2b56c",
"text": "This paper discloses development and evaluation of die attach material using base metals (Cu and Sn) by three different type of composite. Mixing them into paste or sheet shape for die attach, we have confirmed that one of Sn-Cu components having IMC network near its surface has major role to provide robust interconnect especially for high temperature applications beyond 200°C after sintering.",
"title": ""
},
{
"docid": "852c85ecbed639ea0bfe439f69fff337",
"text": "In information theory, Fisher information and Shannon information (entropy) are respectively used to quantify the uncertainty associated with the distribution modeling and the uncertainty in specifying the outcome of given variables. These two quantities are complementary and are jointly applied to information behavior analysis in most cases. The uncertainty property in information asserts a fundamental trade-off between Fisher information and Shannon information, which enlightens us the relationship between the encoder and the decoder in variational auto-encoders (VAEs). In this paper, we investigate VAEs in the FisherShannon plane, and demonstrate that the representation learning and the log-likelihood estimation are intrinsically related to these two information quantities. Through extensive qualitative and quantitative experiments, we provide with a better comprehension of VAEs in tasks such as highresolution reconstruction, and representation learning in the perspective of Fisher information and Shannon information. We further propose a variant of VAEs, termed as Fisher auto-encoder (FAE), for practical needs to balance Fisher information and Shannon information. Our experimental results have demonstrated its promise in improving the reconstruction accuracy and avoiding the non-informative latent code as occurred in previous works.",
"title": ""
},
{
"docid": "30db2040ab00fd5eec7b1eb08526f8e8",
"text": "We formulate an equivalence between machine learning and the formulation of statistical data assimilation as used widely in physical and biological sciences. The correspondence is that layer number in a feedforward artificial network setting is the analog of time in the data assimilation setting. This connection has been noted in the machine learning literature. We add a perspective that expands on how methods from statistical physics and aspects of Lagrangian and Hamiltonian dynamics play a role in how networks can be trained and designed. Within the discussion of this equivalence, we show that adding more layers (making the network deeper) is analogous to adding temporal resolution in a data assimilation framework. Extending this equivalence to recurrent networks is also discussed. We explore how one can find a candidate for the global minimum of the cost functions in the machine learning context using a method from data assimilation. Calculations on simple models from both sides of the equivalence are reported. Also discussed is a framework in which the time or layer label is taken to be continuous, providing a differential equation, the Euler-Lagrange equation and its boundary conditions, as a necessary condition for a minimum of the cost function. This shows that the problem being solved is a two-point boundary value problem familiar in the discussion of variational methods. The use of continuous layers is denoted “deepest learning.” These problems respect a symplectic symmetry in continuous layer phase space. Both Lagrangian versions and Hamiltonian versions of these problems are presented. Their well-studied implementation in a discrete time/layer, while respecting the symplectic structure, is addressed. The Hamiltonian version provides a direct rationale for backpropagation as a solution method for a certain two-point boundary value problem.",
"title": ""
},
{
"docid": "19f604732dd88b01e1eefea1f995cd54",
"text": "Power electronic transformer (PET) technology is one of the promising technology for medium/high power conversion systems. With the cutting-edge improvements in the power electronics and magnetics, makes it possible to substitute conventional line frequency transformer traction (LFTT) technology with the PET technology. Over the past years, research and field trial studies are conducted to explore the technical challenges associated with the operation, functionalities, and control of PET-based traction systems. This paper aims to review the essential requirements, technical challenges, and the existing state of the art of PET traction system architectures. Finally, this paper discusses technical considerations and introduces the new research possibilities especially in the power conversion stages, PET design, and the power switching devices.",
"title": ""
},
{
"docid": "d9950f75380758d0a0f4fd9d6e885dfd",
"text": "In recent decades, the interactive whiteboard (IWB) has become a relatively common educational tool in Western schools. The IWB is essentially a large touch screen, that enables the user to interact with digital content in ways that are not possible with an ordinary computer-projector-canvas setup. However, the unique possibilities of IWBs are rarely leveraged to enhance teaching and learning beyond the primary school level. This is particularly noticeable in high school physics. We describe how a high school physics teacher learned to use an IWB in a new way, how she planned and implemented a lesson on the topic of orbital motion of planets, and what tensions arose in the process. We used an ethnographic approach to account for the teacher’s and involved students’ perspectives throughout the process of teacher preparation, lesson planning, and the implementation of the lesson. To interpret the data, we used the conceptual framework of activity theory. We found that an entrenched culture of traditional white/blackboard use in physics instruction interferes with more technologically innovative and more student-centered instructional approaches that leverage the IWB’s unique instructional potential. Furthermore, we found that the teacher’s confidence in the mastery of the IWB plays a crucial role in the teacher’s willingness to transfer agency within the lesson to the students.",
"title": ""
},
{
"docid": "b1e2326ebdf729e5b55822a614b289a9",
"text": "The work presented in this paper is targeted at the first phase of the test and measurements product life cycle, namely standardisation. During this initial phase of any product, the emphasis is on the development of standards that support new technologies while leaving the scope of implementations as open as possible. To allow the engineer to freely create and invent tools that can quickly help him simulate or emulate his ideas are paramount. Within this scope, a traffic generation system has been developed for IEC 61850 Sampled Values which will help in the evaluation of the data models, data acquisition, data fusion, data integration and data distribution between the various devices and components that use this complex set of evolving standards in Smart Grid systems.",
"title": ""
},
{
"docid": "4a72f9b04ba1515c0d01df0bc9b60ed7",
"text": "Distributed generators (DGs) sometimes provide the lowest cost solution to handling low-voltage or overload problems. In conjunction with handling such problems, a DG can be placed for optimum efficiency or optimum reliability. Such optimum placements of DGs are investigated. The concept of segments, which has been applied in previous reliability studies, is used in the DG placement. The optimum locations are sought for time-varying load patterns. It is shown that the circuit reliability is a function of the loading level. The difference of DG placement between optimum efficiency and optimum reliability varies under different load conditions. Observations and recommendations concerning DG placement for optimum reliability and efficiency are provided in this paper. Economic considerations are also addressed.",
"title": ""
},
{
"docid": "91bf842f809dd369644ffd2b10b9c099",
"text": "We tackle the problem of multi-label classification of fashion images, learning from noisy data with minimal human supervision. We present a new dataset of full body poses, each with a set of 66 binary labels corresponding to the information about the garments worn in the image obtained in an automatic manner. As the automatically-collected labels contain significant noise, we manually correct the labels for a small subset of the data, and use these correct labels for further training and evaluation. We build upon a recent approach that both cleans the noisy labels and learns to classify, and introduce simple changes that can significantly improve the performance.",
"title": ""
},
{
"docid": "4fea653dd0dd8cb4ac941b2368ceb78f",
"text": "During present study the antibacterial activity of black pepper (Piper nigrum Linn.) and its mode of action on bacteria were done. The extracts of black pepper were evaluated for antibacterial activity by disc diffusion method. The minimum inhibitory concentration (MIC) was determined by tube dilution method and mode of action was studied on membrane leakage of UV260 and UV280 absorbing material spectrophotometrically. The diameter of the zone of inhibition against various Gram positive and Gram negative bacteria was measured. The MIC was found to be 50-500ppm. Black pepper altered the membrane permeability resulting the leakage of the UV260 and UV280 absorbing material i.e., nucleic acids and proteins into the extra cellular medium. The results indicate excellent inhibition on the growth of Gram positive bacteria like Staphylococcus aureus, followed by Bacillus cereus and Streptococcus faecalis. Among the Gram negative bacteria Pseudomonas aeruginosa was more susceptible followed by Salmonella typhi and Escherichia coli.",
"title": ""
},
{
"docid": "e812bed02753b807d1e03a2e05e87cb8",
"text": "ion level. It is especially useful in the case of expert-based estimation, where it is easier for experts to embrace and estimate smaller pieces of project work. Moreover, the increased level of detail during estimation—for instance, by breaking down software products and processes—implies higher transparency of estimates. In practice, there is a good chance that the bottom estimates would be mixed below and above the actual effort. As a consequence, estimation errors at the bottom level will cancel each other out, resulting in smaller estimation error than if a top-down approach were used. This phenomenon is related to the mathematical law of large numbers. However, the more granular the individual estimates, the more time-consuming the overall estimation process becomes. In industrial practice, a top-down strategy usually provides reasonably accurate estimates at relatively low overhead and without too much technical expertise. Although bottom-up estimation usually provides more accurate estimates, it requires the estimators involved to have expertise regarding the bottom activities and related product components that they estimate directly. In principle, applying bottom-up estimation pays off when the decomposed tasks can be estimated more accurately than the whole task. For instance, a bottom-up strategy proved to provide better results when applied to high-uncertainty or complex estimation tasks, which are usually underestimated when considered as a whole. Furthermore, it is often easy to forget activities and/or underestimate the degree of unexpected events, which leads to underestimation of total effort. However, from the mathematical point of view (law of large numbers mentioned), dividing the project into smaller work packages provides better data for estimation and reduces overall estimation error. Experiences presented by Jørgensen (2004b) suggest that in the context of expert-based estimation, software companies should apply a bottom-up strategy unless the estimators have experience from, or access to, very similar projects. In the context of estimation based on human judgment, typical threats of individual and group estimation should be considered. Refer to Sect. 6.4 for an overview of the strengths and weaknesses of estimation based on human judgment.",
"title": ""
},
{
"docid": "17611b0521b69ad2b22eeadc10d6d793",
"text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"title": ""
}
] |
scidocsrr
|
0cdb5afc9455ba4b14067708656b9a4a
|
Design of Power-Rail ESD Clamp Circuit With Ultra-Low Standby Leakage Current in Nanoscale
|
[
{
"docid": "7af416164218d6ccb1d9772b77a5cd5c",
"text": "Considering gate-oxide reliability, a new electrostatic discharge (ESD) protection scheme with an on-chip ESD bus (ESD_BUS) and a high-voltage-tolerant ESD clamp circuit for 1.2/2.5 V mixed-voltage I/O interfaces is proposed. The devices used in the high-voltage-tolerant ESD clamp circuit are all 1.2 V low-voltage N- and P-type MOS devices that can be safely operated under the 2.5-V bias conditions without suffering from the gate-oxide reliability issue. The four-mode (positive-to-VSS, negative-to-VSS, positive-to-VDD, and negative-to-VDD) ESD stresses on the mixed-voltage I/O pad and pin-to-pin ESD stresses can be effectively discharged by the proposed ESD protection scheme. The experimental results verified in a 0.13-mum CMOS process have confirmed that the proposed new ESD protection scheme has high human-body model (HBM) and machine-model (MM) ESD robustness with a fast turn-on speed. The proposed new ESD protection scheme, which is designed with only low- voltage devices, is an excellent and cost-efficient solution to protect mixed-voltage I/O interfaces.",
"title": ""
}
] |
[
{
"docid": "c6810bcd06378091799af4210f4f8573",
"text": "F or years, business academics and practitioners have operated in the belief that sustained competitive advantage could accrue from a variety of industrylevel entry barriers, such as technological supremacy, patent protections, and government regulations. However, technological change and diffusion, rapid innovation, and deregulation have eroded these widely recognized barriers. In today’s environment, which requires flexibility, innovation, and speed-to-market, effectively developing and managing employees’ knowledge, experiences, skills, and expertise—collectively defined as “human capital”—has become a key success factor for sustained organizational performance.",
"title": ""
},
{
"docid": "710e81da55d50271b55ac9a4f2d7f986",
"text": "Although prior research has examined how individual difference factors are related to relationship initiation and formation over the Internet (e.g., online dating sites, social networking sites), little research has examined how dispositional factors are related to other aspects of online dating. The present research therefore sought to examine the relationship between several dispositional factors, such as Big-Five personality traits, self-esteem, rejection sensitivity, and attachment styles, and the use of online dating sites and online dating behaviors. Rejection sensitivity was the only dispositional variable predictive of use of online dating sites whereby those higher in rejection sensitivity are more likely to use online dating sites than those lower in rejection sensitivity. We also found that those higher in rejection sensitivity, those lower in conscientiousness, and men indicated being more likely to engage in potentially risky behaviors related to meeting an online dating partner face-to-face. Further research is needed to further explore the relationships between these dispositional factors and online dating behaviors. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "223470104e0ca1b04df1955df5afaa63",
"text": "Wine is the product of complex interactions between fungi, yeasts and bacteria that commence in the vineyard and continue throughout the fermentation process until packaging. Although grape cultivar and cultivation provide the foundations of wine flavour, microorganisms, especially yeasts, impact on the subtlety and individuality of the flavour response. Consequently, it is important to identify and understand the ecological interactions that occur between the different microbial groups, species and strains. These interactions encompass yeast-yeast, yeast-filamentous fungi and yeast-bacteria responses. The surface of healthy grapes has a predominance of Aureobasidium pullulans, Metschnikowia, Hanseniaspora (Kloeckera), Cryptococcus and Rhodotorula species depending on stage of maturity. This microflora moderates the growth of spoilage and mycotoxigenic fungi on grapes, the species and strains of yeasts that contribute to alcoholic fermentation, and the bacteria that contribute to malolactic fermentation. Damaged grapes have increased populations of lactic and acetic acid bacteria that impact on yeasts during alcoholic fermentation. Alcoholic fermentation is characterised by the successional growth of various yeast species and strains, where yeast-yeast interactions determine the ecology. Through yeast-bacterial interactions, this ecology can determine progression of the malolactic fermentation, and potential growth of spoilage bacteria in the final product. The mechanisms by which one species/strain impacts on another in grape-wine ecosystems include: production of lytic enzymes, ethanol, sulphur dioxide and killer toxin/bacteriocin like peptides; nutrient depletion including removal of oxygen, and production of carbon dioxide; and release of cell autolytic components. Cell-cell communication through quorum sensing molecules needs investigation.",
"title": ""
},
{
"docid": "20a484c01402cdc464cf0b46e577686e",
"text": "Healthcare costs have increased dramatically and the demand for highquality care will only grow in our aging society. At the same time,more event data are being collected about care processes. Healthcare Information Systems (HIS) have hundreds of tables with patient-related event data. Therefore, it is quite natural to exploit these data to improve care processes while reducing costs. Data science techniqueswill play a crucial role in this endeavor. Processmining can be used to improve compliance and performance while reducing costs. The chapter sets the scene for process mining in healthcare, thus serving as an introduction to this SpringerBrief.",
"title": ""
},
{
"docid": "3dd518c87372b51a9284e4b8aa2e4fb4",
"text": "Traditional background modeling and subtraction methods have a strong assumption that the scenes are of static structures with limited perturbation. These methods will perform poorly in dynamic scenes. In this paper, we present a solution to this problem. We first extend the local binary patterns from spatial domain to spatio-temporal domain, and present a new online dynamic texture extraction operator, named spatio- temporal local binary patterns (STLBP). Then we present a novel and effective method for dynamic background modeling and subtraction using STLBP. In the proposed method, each pixel is modeled as a group of STLBP dynamic texture histograms which combine spatial texture and temporal motion information together. Compared with traditional methods, experimental results show that the proposed method adapts quickly to the changes of the dynamic background. It achieves accurate detection of moving objects and suppresses most of the false detections for dynamic changes of nature scenes.",
"title": ""
},
{
"docid": "7afe5c6affbaf30b4af03f87a018a5b3",
"text": "Sentiment analysis deals with identifying polarity orientation embedded in users' comments and reviews. It aims at discriminating positive reviews from negative ones. Sentiment is related to culture and language morphology. In this paper, we investigate the effects of language morphology on sentiment analysis in reviews written in the Arabic language. In particular, we investigate, in details, how negation affects sentiments. We also define a set of rules that capture the morphology of negations in Arabic. These rules are then used to detect sentiment taking care of negated words. Experimentations prove that our suggested approach is superior to several existing methods that deal with sentiment detection in Arabic reviews.",
"title": ""
},
{
"docid": "6a128aa00edaf147df327e7736eeb4c9",
"text": "Query segmentation is essential to query processing. It aims to tokenize query words into several semantic segments and help the search engine to improve the precision of retrieval. In this paper, we present a novel unsupervised learning approach to query segmentation based on principal eigenspace similarity of queryword-frequency matrix derived from web statistics. Experimental results show that our approach could achieve superior performance of 35.8% and 17.7% in Fmeasure over the two baselines respectively, i.e. MI (Mutual Information) approach and EM optimization approach.",
"title": ""
},
{
"docid": "92683433c212b8d9afc85f5ed2b88999",
"text": "Language Models (LMs) for Automatic Speech Recognition (ASR) are typically trained on large text corpora from news articles, books and web documents. These types of corpora, however, are unlikely to match the test distribution of ASR systems, which expect spoken utterances. Therefore, the LM is typically adapted to a smaller held-out in-domain dataset that is drawn from the test distribution. We propose three LM adaptation approaches for Deep NN and Long Short-Term Memory (LSTM): (1) Adapting the softmax layer in the Neural Network (NN); (2) Adding a non-linear adaptation layer before the softmax layer that is trained only in the adaptation phase; (3) Training the extra non-linear adaptation layer in pre-training and adaptation phases. Aiming to improve upon a hierarchical Maximum Entropy (MaxEnt) second-pass LM baseline, which factors the model into word-cluster and word models, we build an NN LM that predicts only word clusters. Adapting the LSTM LM by training the adaptation layer in both training and adaptation phases (Approach 3), we reduce the cluster perplexity by 30% on a held-out dataset compared to an unadapted LSTM LM. Initial experiments using a state-of-the-art ASR system show a 2.3% relative reduction in WER on top of an adapted MaxEnt LM.",
"title": ""
},
{
"docid": "7c1b3ba1b8e33ed866ae90b3ddf80ce6",
"text": "This paper presents a universal tuning system for harmonic operation of series-resonant inverters (SRI), based on a self-oscillating switching method. In the new tuning system, SRI can instantly operate in one of the switching frequency harmonics, e.g., the first, third, or fifth harmonic. Moreover, the new system can utilize pulse density modulation (PDM), phase shift (PS), and power–frequency control methods for each harmonic. Simultaneous combination of PDM and PS control method is also proposed for smoother power regulation. In addition, this paper investigates performance of selected harmonic operation based on phase-locked loop (PLL) circuits. In comparison with the fundamental harmonic operation, PLL circuits suffer from stability problem for the other harmonic operations. The proposed method has been verified using laboratory prototypes with resonant frequencies of 20 up to 75 kHz and output power of about 200 W.",
"title": ""
},
{
"docid": "901fbd46cdd4403c8398cb21e1c75ba1",
"text": "Hidden Markov Model (HMM) based applications are common in various areas, but the incorporation of HMM's for anomaly detection is still in its infancy. This paper aims at classifying the TCP network traffic as an attack or normal using HMM. The paper's main objective is to build an anomaly detection system, a predictive model capable of discriminating between normal and abnormal behavior of network traffic. In the training phase, special attention is given to the initialization and model selection issues, which makes the training phase particularly effective. For training HMM, 12.195% features out of the total features (5 features out of 41 features) present in the KDD Cup 1999 data set are used. Result of tests on the KDD Cup 1999 data set shows that the proposed system is able to classify network traffic in proportion to the number of features used for training HMM. We are extending our work on a larger data set for building an anomaly detection system.",
"title": ""
},
{
"docid": "cfd0cadbdf58ee01095aea668f0da4fe",
"text": "A unique and miniaturized dual-band coplanar waveguide (CPW)-fed antenna is presented. The proposed antenna comprises a rectangular patch that is surrounded by upper and lower ground-plane sections that are interconnected by a high-impedance microstrip line. The proposed antenna structure generates two separate impedance bandwidths to cover frequency bands of GSM and Wi-Fi/WLAN. The antenna realized is relatively small in size $(17\\times 20\\ {\\hbox{mm}}^{2})$ and operates over frequency ranges 1.60–1.85 and 4.95–5.80 GHz, making it suitable for GSM and Wi-Fi/WLAN applications. In addition, the antenna is circularly polarized in the GSM band. Experimental results show the antenna exhibits monopole-like radiation characteristics and a good antenna gain over its operating bands. The measured and simulated results presented show good agreement.",
"title": ""
},
{
"docid": "36e4c1d930ea33962a51f293e4c3a60e",
"text": "Address Space Layout Randomization (ASLR) can increase the cost of exploiting memory corruption vulnerabilities. One major weakness of ASLR is that it assumes the secrecy of memory addresses and is thus ineffective in the face of memory disclosure vulnerabilities. Even fine-grained variants of ASLR are shown to be ineffective against memory disclosures. In this paper we present an approach that synchronizes randomization with potential runtime disclosure. By applying rerandomization to the memory layout of a process every time it generates an output, our approach renders disclosures stale by the time they can be used by attackers to hijack control flow. We have developed a fully functioning prototype for x86_64 C programs by extending the Linux kernel, GCC, and the libc dynamic linker. The prototype operates on C source code and recompiles programs with a set of augmented information required to track pointer locations and support runtime rerandomization. Using this augmented information we dynamically relocate code segments and update code pointer values during runtime. Our evaluation on the SPEC CPU2006 benchmark, along with other applications, show that our technique incurs a very low performance overhead (2.1% on average).",
"title": ""
},
{
"docid": "8b84dc47c6a9d39ef1d094aa173a954c",
"text": "Named entity recognition (NER) is a subtask of information extraction that seeks to locate and classify atomic elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc. We use the JavaNLP repository(http://nlp.stanford.edu/javanlp/ ) for its implementation of a Conditional Random Field(CRF) and a Conditional Markov Model(CMM), also called a Maximum Entropy Markov Model. We have obtained results on majority voting with different labeling schemes, with backward and forward parsing of the CMM, and also some results when we trained a decision tree to take a decision based on the outputs of the different labeling schemes. We have also tried to solve the problem of label inconsistency issue by attempting the naive approach of enforcing hard label-consistency by choosing the majority entity for a sequence of tokens, in the specific test document, as well as the whole test corpus, and managed to get reasonable gains. We also attempted soft label consistency in the following way. We use a portion of the training data to train a CRF to make predictions on the rest of the train data and on the test data. We then train a second CRF with the majority label predictions as additional input features.",
"title": ""
},
{
"docid": "1288abeaddded1564b607c9f31924697",
"text": "Dynamic time warping (DTW) is used for the comparison and processing of nonlinear signals and constitutes a widely researched field of study. The method has been initially designed for, and applied to, signals representing audio data. Afterwords it has been successfully modified and applied to many other fields of study. In this paper, we present the results of researches on the generalized DTW method designed for use with rotational sets of data parameterized by quaternions. The need to compare and process quaternion time series has been gaining in importance recently. Three-dimensional motion data processing is one of the most important applications here. Specifically, it is applied in the context of motion capture, and in many cases all rotational signals are described in this way. We propose a construction of generalized method called quaternion dynamic time warping (QDTW), which makes use of specific properties of quaternion space. It allows for the creation of a family of algorithms that deal with the higher order features of the rotational trajectory. This paper focuses on the analysis of the properties of this new approach. Numerical results show that the proposed method allows for efficient element assignment. Moreover, when used as the measure of similarity for a clustering task, the method helps to obtain good clustering performance both for synthetic and real datasets.",
"title": ""
},
{
"docid": "ce83a16a6ccce5ccc58577b25ab33788",
"text": "In this paper, we address the problem of automatically extracting disease-symptom relationships from health question-answer forums due to its usefulness for medical question answering system. To cope with the problem, we divide our main task into two subtasks since they exhibit different challenges: (1) disease-symptom extraction across sentences, (2) disease-symptom extraction within a sentence. For both subtasks, we employed machine learning approach leveraging several hand-crafted features, such as syntactic features (i.e., information from part-of-speech tags) and pre-trained word vectors. Furthermore, we basically formulate our problem as a binary classification task, in which we classify the \"indicating\" relation between a pair of Symptom and Disease entity. To evaluate the performance, we also collected and annotated corpus containing 463 pairs of question-answer threads from several Indonesian health consultation websites. Our experiment shows that, as our expected, the first subtask is relatively more difficult than the second subtask. For the first subtask, the extraction of disease-symptom relation only achieved 36% in terms of F1 measure, while the second one was 76%. To the best of our knowledge, this is the first work addressing such relation extraction task for both \"across\" and \"within\" sentence, especially in Indonesia.",
"title": ""
},
{
"docid": "2def5b7bb42a5b3b2eec57ff5dfc2da0",
"text": "Deepened periodontal pockets exert a significant pathological burden on the host and its immune system, particularly in a patient with generalized moderate to severe periodontitis. This burden is extensive and longitudinal, occurring over decades of disease development. Considerable diagnostic and prognostic successes in this regard have come from efforts to measure the depths of the pockets and their contents, including level of inflammatory mediators, cellular exudates and microbes; however, the current standard of care for measuring these pockets, periodontal probing, is an analog technology in a digital age. Measurements obtained by probing are variable, operator dependent and influenced by site-specific factors. Despite these limitations, manual probing is still the standard of care for periodontal diagnostics globally. However, it is becoming increasingly clear that this technology needs to be updated to be compatible with the digital technologies currently being used to image other orofacial structures, such as maxillary sinuses, alveolar bone, nerve foramina and endodontic canals in 3 dimensions. This review aims to summarize the existing technology, as well as new imaging strategies that could be utilized for accurate evaluation of periodontal pocket dimensions.",
"title": ""
},
{
"docid": "c4332dfb8e8117c3deac7d689b8e259b",
"text": "Learning through experience is time-consuming, inefficient and often bad for your cortisol levels. To address this problem, a number of recently proposed teacherstudent methods have demonstrated the benefits of private tuition, in which a single model learns from an ensemble of more experienced tutors. Unfortunately, the cost of such supervision restricts good representations to a privileged minority. Unsupervised learning can be used to lower tuition fees, but runs the risk of producing networks that require extracurriculum learning to strengthen their CVs and create their own LinkedIn profiles1. Inspired by the logo on a promotional stress ball at a local recruitment fair, we make the following three contributions. First, we propose a novel almost no supervision training algorithm that is effective, yet highly scalable in the number of student networks being supervised, ensuring that education remains affordable. Second, we demonstrate our approach on a typical use case: learning to bake, developing a method that tastily surpasses the current state of the art. Finally, we provide a rigorous quantitive analysis of our method, proving that we have access to a calculator2. Our work calls into question the long-held dogma that life is the best teacher. Give a student a fish and you feed them for a day, teach a student to gatecrash seminars and you feed them until the day they move to Google.",
"title": ""
},
{
"docid": "59da726302c06abef243daee87cdeaa7",
"text": "The present research aims at gaining a better insight on the psychological barriers to the introduction of social robots in society at large. Based on social psychological research on intergroup distinctiveness, we suggested that concerns toward this technology are related to how we define and defend our human identity. A threat to distinctiveness hypothesis was advanced. We predicted that too much perceived similarity between social robots and humans triggers concerns about the negative impact of this technology on humans, as a group, and their identity more generally because similarity blurs category boundaries, undermining human uniqueness. Focusing on the appearance of robots, in two studies we tested the validity of this hypothesis. In both studies, participants were presented with pictures of three types of robots that differed in their anthropomorphic appearance varying from no resemblance to humans (mechanical robots), to some body shape resemblance (biped humanoids) to a perfect copy of human body (androids). Androids raised the highest concerns for the potential damage to humans, followed by humanoids and then mechanical robots. In Study 1, we further demonstrated that robot anthropomorphic appearance (and not the attribution of mind and human nature) was responsible for the perceived damage that the robot could cause. In Study 2, we gained a clearer insight in the processes B Maria Paola Paladino mariapaola.paladino@unitn.it Francesco Ferrari francesco.ferrari-1@unitn.it Jolanda Jetten j.jetten@psy.uq.edu.au 1 Department of Psychology and Cognitive Science, University of Trento, Corso Bettini 31, 38068 Rovereto, Italy 2 School of Psychology, The University of Queensland, St Lucia, QLD 4072, Australia underlying this effect by showing that androids were also judged as most threatening to the human–robot distinction and that this perception was responsible for the higher perceived damage to humans. Implications of these findings for social robotics are discussed.",
"title": ""
},
{
"docid": "7e800094f52080194d94bdedf1d92b9c",
"text": "IMPORTANCE\nHealth care-associated infections (HAIs) account for a large proportion of the harms caused by health care and are associated with high costs. Better evaluation of the costs of these infections could help providers and payers to justify investing in prevention.\n\n\nOBJECTIVE\nTo estimate costs associated with the most significant and targetable HAIs.\n\n\nDATA SOURCES\nFor estimation of attributable costs, we conducted a systematic review of the literature using PubMed for the years 1986 through April 2013. For HAI incidence estimates, we used the National Healthcare Safety Network of the Centers for Disease Control and Prevention (CDC).\n\n\nSTUDY SELECTION\nStudies performed outside the United States were excluded. Inclusion criteria included a robust method of comparison using a matched control group or an appropriate regression strategy, generalizable populations typical of inpatient wards and critical care units, methodologic consistency with CDC definitions, and soundness of handling economic outcomes.\n\n\nDATA EXTRACTION AND SYNTHESIS\nThree review cycles were completed, with the final iteration carried out from July 2011 to April 2013. Selected publications underwent a secondary review by the research team.\n\n\nMAIN OUTCOMES AND MEASURES\nCosts, inflated to 2012 US dollars.\n\n\nRESULTS\nUsing Monte Carlo simulation, we generated point estimates and 95% CIs for attributable costs and length of hospital stay. On a per-case basis, central line-associated bloodstream infections were found to be the most costly HAIs at $45,814 (95% CI, $30,919-$65,245), followed by ventilator-associated pneumonia at $40,144 (95% CI, $36,286-$44,220), surgical site infections at $20,785 (95% CI, $18,902-$22,667), Clostridium difficile infection at $11,285 (95% CI, $9118-$13,574), and catheter-associated urinary tract infections at $896 (95% CI, $603-$1189). The total annual costs for the 5 major infections were $9.8 billion (95% CI, $8.3-$11.5 billion), with surgical site infections contributing the most to overall costs (33.7% of the total), followed by ventilator-associated pneumonia (31.6%), central line-associated bloodstream infections (18.9%), C difficile infections (15.4%), and catheter-associated urinary tract infections (<1%).\n\n\nCONCLUSIONS AND RELEVANCE\nWhile quality improvement initiatives have decreased HAI incidence and costs, much more remains to be done. As hospitals realize savings from prevention of these complications under payment reforms, they may be more likely to invest in such strategies.",
"title": ""
},
{
"docid": "f8f8c96e6abede6bc226a0c9f171e9e1",
"text": "Simulation is the research tool of choice for a majority of the mobile ad hoc network (MANET) community. However, while the use of simulation has increased, the credibility of the simulation results has decreased. To determine the state of MANET simulation studies, we surveyed the 2000-2005 proceedings of the ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc). From our survey, we found significant shortfalls. We present the results of our survey in this paper. We then summarize common simulation study pitfalls found in our survey. Finally, we discuss the tools available that aid the development of rigorous simulation studies. We offer these results to the community with the hope of improving the credibility of MANET simulation-based studies.",
"title": ""
}
] |
scidocsrr
|
4bdc7f25ba00efc2f132798402bbb89b
|
Predicting Age Range of Users over Microblog Dataset
|
[
{
"docid": "ebc107147884d89da4ef04eba2d53a73",
"text": "Twitter sentiment analysis (TSA) has become a hot research topic in recent years. The goal of this task is to discover the attitude or opinion of the tweets, which is typically formulated as a machine learning based text classification problem. Some methods use manually labeled data to train fully supervised models, while others use some noisy labels, such as emoticons and hashtags, for model training. In general, we can only get a limited number of training data for the fully supervised models because it is very labor-intensive and time-consuming to manually label the tweets. As for the models with noisy labels, it is hard for them to achieve satisfactory performance due to the noise in the labels although it is easy to get a large amount of data for training. Hence, the best strategy is to utilize both manually labeled data and noisy labeled data for training. However, how to seamlessly integrate these two different kinds of data into the same learning framework is still a challenge. In this paper, we present a novel model, called emoticon smoothed language model (ESLAM), to handle this challenge. The basic idea is to train a language model based on the manually labeled data, and then use the noisy emoticon data for smoothing. Experiments on real data sets demonstrate that ESLAM can effectively integrate both kinds of data to outperform those methods using only one of them.",
"title": ""
}
] |
[
{
"docid": "35ce8c11fa7dd22ef0daf9d0bd624978",
"text": "Out-of-vocabulary (OOV) words represent an important source of error in large vocabulary continuous speech recognition (LVCSR) systems. These words cause recognition failures, which propagate through pipeline systems impacting the performance of downstream applications. The detection of OOV regions in the output of a LVCSR system is typically addressed as a binary classification task, where each region is independently classified using local information. In this paper, we show that jointly predicting OOV regions, and including contextual information from each region, leads to substantial improvement in OOV detection. Compared to the state-of-the-art, we reduce the missed OOV rate from 42.6% to 28.4% at 10% false alarm rate.",
"title": ""
},
{
"docid": "fd76b7a11f8e071ebe045997ee598bbb",
"text": "γ-Aminobutyric acid (GABA) has high physiological activity in plant stress physiology. This study showed that the application of exogenous GABA by root drenching to moderately (MS, 150 mM salt concentration) and severely salt-stressed (SS, 300 mM salt concentration) plants significantly increased endogenous GABA concentration and improved maize seedling growth but decreased glutamate decarboxylase (GAD) activity compared with non-treated ones. Exogenous GABA alleviated damage to membranes, increased in proline and soluble sugar content in leaves, and reduced water loss. After the application of GABA, maize seedling leaves suffered less oxidative damage in terms of superoxide anion (O2·-) and malondialdehyde (MDA) content. GABA-treated MS and SS maize seedlings showed increased enzymatic antioxidant activity compared with that of untreated controls, and GABA-treated MS maize seedlings had a greater increase in enzymatic antioxidant activity than SS maize seedlings. Salt stress severely damaged cell function and inhibited photosynthesis, especially in SS maize seedlings. Exogenous GABA application could reduce the accumulation of harmful substances, help maintain cell morphology, and improve the function of cells during salt stress. These effects could reduce the damage to the photosynthetic system from salt stress and improve photosynthesis and chlorophyll fluorescence parameters. GABA enhanced the salt tolerance of maize seedlings.",
"title": ""
},
{
"docid": "43d46b56cdf20c8b8b67831caddfe4db",
"text": "This research addresses a challenging issue that is to recognize spoken Arabic letters, that are three letters of hijaiyah that have indentical pronounciation when pronounced by Indonesian speakers but actually has different makhraj in Arabic, the letters are sa, sya and tsa. The research uses Mel-Frequency Cepstral Coefficients (MFCC) based feature extraction and Artificial Neural Network (ANN) classification method. The result shows the proposed method obtain a good accuracy with an average acuracy is 92.42%, with recognition accuracy each letters (sa, sya, and tsa) prespectivly 92.38%, 93.26% and 91.63%.",
"title": ""
},
{
"docid": "ecd4dd9d8807df6c8194f7b4c7897572",
"text": "Nitric oxide (NO) mediates activation of satellite precursor cells to enter the cell cycle. This provides new precursor cells for skeletal muscle growth and muscle repair from injury or disease. Targeting a new drug that specifically delivers NO to muscle has the potential to promote normal function and treat neuromuscular disease, and would also help to avoid side effects of NO from other treatment modalities. In this research, we examined the effectiveness of the NO donor, iosorbide dinitrate (ISDN), and a muscle relaxant, methocarbamol, in promoting satellite cell activation assayed by muscle cell DNA synthesis in normal adult mice. The work led to the development of guaifenesin dinitrate (GDN) as a new NO donor for delivering nitric oxide to muscle. The results revealed that there was a strong increase in muscle satellite cell activation and proliferation, demonstrated by a significant 38% rise in DNA synthesis after a single transdermal treatment with the new compound for 24 h. Western blot and immunohistochemistry analyses showed that the markers of satellite cell myogenesis, expression of myf5, myogenin, and follistatin, were increased after 24 h oral administration of the compound in adult mice. This research extends our understanding of the outcomes of NO-based treatments aimed at promoting muscle regeneration in normal tissue. The potential use of such treatment for conditions such as muscle atrophy in disuse and aging, and for the promotion of muscle tissue repair as required after injury or in neuromuscular diseases such as muscular dystrophy, is highlighted.",
"title": ""
},
{
"docid": "973426438175226bb46c39cc0a390d97",
"text": "This paper proposes a methodology for the creation of specialized data sets for Textual Entailment, made of monothematic Text-Hypothesis pairs (i.e. pairs in which only one linguistic phenomenon relevant to the entailment relation is highlighted and isolated). The annotation procedure assumes that humans have knowledge about the linguistic phenomena relevant to inference, and a classification of such phenomena both into fine grained and macro categories is suggested. We experimented with the proposed methodology over a sample of pairs taken from the RTE-5 data set, and investigated critical issues arising when entailment, contradiction or unknown pairs are considered. The result is a new resource, which can be profitably used both to advance the comprehension of the linguistic phenomena relevant to entailment judgments and to make a first step towards the creation of large-scale specialized data sets.",
"title": ""
},
{
"docid": "04c7d8265e8b41aee67e5b11b3bc4fa2",
"text": "Stretchable microelectromechanical systems (MEMS) possess higher mechanical deformability and adaptability than devices based on conventional solid and flexible substrates, hence they are particularly desirable for biomedical, optoelectronic, textile and other innovative applications. The stretchability performance can be evaluated by the failure strain of the embedded routing and the strain applied to the elastomeric substrate. The routings are divided into five forms according to their geometry: straight; wavy; wrinkly; island-bridge; and conductive-elastomeric. These designs are reviewed and their resistance-to-failure performance is investigated. The failure modeling, numerical analysis, and fabrication of routings are presented. The current review concludes with the essential factors of the stretchable electrical routing for achieving high performance, including routing angle, width and thickness. The future challenges of device integration and reliability assessment of the stretchable routings are addressed.",
"title": ""
},
{
"docid": "da816b4a0aea96feceefe22a67c45be4",
"text": "Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding casual and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the ‘Story Cloze Test’. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of 50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.",
"title": ""
},
{
"docid": "aa749c00010e5391710738cc235c1c35",
"text": "Traditional summarization initiatives have been focused on specific types of documents such as articles, reviews, videos, image feeds, or tweets, a practice which may result in pigeonholing the summarization task in the context of modern, content-rich multimedia collections. Consequently, much of the research to date has revolved around mostly toy problems in narrow domains and working on single-source media types. We argue that summarization and story generation systems need to refocus the problem space in order to meet the information needs in the age of user-generated content in di↵erent formats and languages. Here we create a framework for flexible multimedia storytelling. Narratives, stories, and summaries carry a set of challenges in big data and dynamic multi-source media that give rise to new research in spatial-temporal representation, viewpoint generation, and explanation.",
"title": ""
},
{
"docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c",
"text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.",
"title": ""
},
{
"docid": "c7f38e2284ad6f1258fdfda3417a6e14",
"text": "Millimeter wave (mmWave) systems must overcome heavy signal attenuation to support high-throughput wireless communication links. The small wavelength in mmWave systems enables beamforming using large antenna arrays to combat path loss with directional transmission. Beamforming with multiple data streams, known as precoding, can be used to achieve even higher performance. Both beamforming and precoding are done at baseband in traditional microwave systems. In mmWave systems, however, the high cost of mixed-signal and radio frequency chains (RF) makes operating in the passband and analog domains attractive. This hardware limitation places additional constraints on precoder design. In this paper, we consider single user beamforming and precoding in mmWave systems with large arrays. We exploit the structure of mmWave channels to formulate the precoder design problem as a sparsity constrained least squares problem. Using the principle of basis pursuit, we develop a precoding algorithm that approximates the optimal unconstrained precoder using a low dimensional basis representation that can be efficiently implemented in RF hardware. We present numerical results on the performance of the proposed algorithm and show that it allows mmWave systems to approach waterfilling capacity.",
"title": ""
},
{
"docid": "7a05f2c12c3db9978807eb7c082db087",
"text": "This paper discusses the importance, the complexity and the challenges of mapping mobile robot’s unknown and dynamic environment, besides the role of sensors and the problems inherited in map building. These issues remain largely an open research problems in developing dynamic navigation systems for mobile robots. The paper presenst the state of the art in map building and localization for mobile robots navigating within unknown environment, and then introduces a solution for the complex problem of autonomous map building and maintenance method with focus on developing an incremental grid based mapping technique that is suitable for real-time obstacle detection and avoidance. In this case, the navigation of mobile robots can be treated as a problem of tracking geometric features that occur naturally in the environment of the robot. The robot maps its environment incrementally using the concept of occupancy grids and the fusion of multiple ultrasonic sensory information while wandering in it and stay away from all obstacles. To ensure real-time operation with limited resources, as well as to promote extensibility, the mapping and obstacle avoidance modules are deployed in parallel and distributed framework. Simulation based experiments has been conducted and illustrated to show the validity of the developed mapping and obstacle avoidance approach.",
"title": ""
},
{
"docid": "eb271acef996a9ba0f84a50b5055953b",
"text": "Makeup is widely used to improve facial attractiveness and is well accepted by the public. However, different makeup styles will result in significant facial appearance changes. It remains a challenging problem to match makeup and non-makeup face images. This paper proposes a learning from generation approach for makeup-invariant face verification by introducing a bi-level adversarial network (BLAN). To alleviate the negative effects from makeup, we first generate non-makeup images from makeup ones, and then use the synthesized nonmakeup images for further verification. Two adversarial networks in BLAN are integrated in an end-to-end deep network, with the one on pixel level for reconstructing appealing facial images and the other on feature level for preserving identity information. These two networks jointly reduce the sensing gap between makeup and non-makeup images. Moreover, we make the generator well constrained by incorporating multiple perceptual losses. Experimental results on three benchmark makeup face datasets demonstrate that our method achieves state-of-the-art verification accuracy across makeup status and can produce photo-realistic non-makeup",
"title": ""
},
{
"docid": "aba7cb0f5f50a062c42b6b51457eb363",
"text": "Nowadays, there is increasing interest in the development of teamwork skills in the educational context. This growing interest is motivated by its pedagogical effectiveness and the fact that, in labour contexts, enterprises organize their employees in teams to carry out complex projects. Despite its crucial importance in the classroom and industry, there is a lack of support for the team formation process. Not only do many factors influence team performance, but the problem becomes exponentially costly if teams are to be optimized. In this article, we propose a tool whose aim it is to cover such a gap. It combines artificial intelligence techniques such as coalition structure generation, Bayesian learning, and Belbin’s role theory to facilitate the generation of working groups in an educational context. This tool improves current state of the art proposals in three ways: i) it takes into account the feedback of other teammates in order to establish the most predominant role of a student instead of self-perception questionnaires; ii) it handles uncertainty with regard to each student’s predominant team role; iii) it is iterative since it considers information from several interactions in order to improve the estimation of role assignments. We tested the performance of the proposed tool in an experiment involving students that took part in three different team activities. The experiments suggest that the proposed tool is able to improve different teamwork aspects such as team dynamics and student satisfaction.",
"title": ""
},
{
"docid": "9197a5d92bd19ad29a82679bb2a94285",
"text": "Equation (1.1) expresses v0 as a convex combination of the neighbouring points v1, . . . , vk. In the simplest case k = 3, the weights λ1, λ2, λ3 are uniquely determined by (1.1) and (1.2) alone; they are the barycentric coordinates of v0 with respect to the triangle [v1, v2, v3], and they are positive. This motivates calling any set of non-negative weights satisfying (1.1–1.2) for general k, a set of coordinates for v0 with respect to v1, . . . , vk. There has long been an interest in generalizing barycentric coordinates to k-sided polygons with a view to possible multisided extensions of Bézier surfaces; see for example [8 ]. In this setting, one would normally be free to choose v1, . . . , vk to form a convex polygon but would need to allow v0 to be any point inside the polygon or on the polygon, i.e. on an edge or equal to a vertex. More recently, the need for such coordinates arose in methods for parameterization [2 ] and morphing [5 ], [6 ] of triangulations. Here the points v0, v1, . . . , vk will be vertices of a (planar) triangulation and so the point v0 will never lie on an edge of the polygon formed by v1, . . . , vk. If we require no particular properties of the coordinates, the problem is easily solved. Because v0 lies in the convex hull of v1, . . . , vk, there must exist at least one triangle T = [vi1 , vi2 , vi3 ] which contains v0, and so we can take λi1 , λi2 , λi3 to be the three barycentric coordinates of v0 with respect to T , and make the remaining coordinates zero. However, these coordinates depend randomly on the choice of triangle. An improvement is to take an average of such coordinates over certain covering triangles, as proposed in [2 ]. The resulting coordinates depend continuously on v0, v1, . . . , vk, yet still not smoothly. The",
"title": ""
},
{
"docid": "95e2a5dfa0b5e8d8719ae86f17f6d653",
"text": "Time series classification is an increasing research topic due to the vast amount of time series data that is being created over a wide variety of fields. The particularity of the data makes it a challenging task and different approaches have been taken, including the distance based approach. 1-NN has been a widely used method within distance based time series classification due to its simplicity but still good performance. However, its supremacy may be attributed to being able to use specific distances for time series within the classification process and not to the classifier itself. With the aim of exploiting these distances within more complex classifiers, new approaches have arisen in the past few years that are competitive or which outperform the 1-NN based approaches. In some cases, these new methods use the distance measure to transform the series into feature vectors, bridging the gap between time series and traditional classifiers. In other cases, the distances are employed to obtain a time series kernel and enable the use of kernel methods for time series classification. One of the main challenges is that a kernel function must be positive semi-definite, a matter that is also addressed within this review. The presented review includes a taxonomy of all those methods that aim to classify time series using a distance based approach, as well as a discussion of the strengths and weaknesses of each method.",
"title": ""
},
{
"docid": "749e11a625e94ab4e1f03a74aa6b3ab2",
"text": "We present Confidence-Based Autonomy (CBA), an interactive algorithm for policy learning from demonstration. The CBA algorithm consists of two components which take advantage of the complimentary abilities of humans and computer agents. The first component, Confident Execution, enables the agent to identify states in which demonstration is required, to request a demonstration from the human teacher and to learn a policy based on the acquired data. The algorithm selects demonstrations based on a measure of action selection confidence, and our results show that using Confident Execution the agent requires fewer demonstrations to learn the policy than when demonstrations are selected by a human teacher. The second algorithmic component, Corrective Demonstration, enables the teacher to correct any mistakes made by the agent through additional demonstrations in order to improve the policy and future task performance. CBA and its individual components are compared and evaluated in a complex simulated driving domain. The complete CBA algorithm results in the best overall learning performance, successfully reproducing the behavior of the teacher while balancing the tradeoff between number of demonstrations and number of incorrect actions during learning.",
"title": ""
},
{
"docid": "30b508c7b576c88705098ac18657664b",
"text": "The growing number of ‘smart’ instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided them by programmers. Hence, the question the makers and users of smart instrument (e.g., driver-less cars) face is how to ensure that these instruments will not engage in unethical conduct (not to be conflated with illegal conduct). The article suggests that to proceed we need a new kind of AI program—oversight programs—that will monitor, audit, and hold operational AI programs accountable.",
"title": ""
},
{
"docid": "adba3380818a72270aea9452d2b77af2",
"text": "Web-based programming exercises are a useful way for students to practice and master essential concepts and techniques presented in introductory programming courses. Although these systems are used fairly widely, we have a limited understanding of how students use these systems, and what can be learned from the data collected by these systems.\n In this paper, we perform a preliminary exploratory analysis of data collected by the CloudCoder programming exercise system from five introductory courses taught in two programming languages across three colleges and universities. We explore a number of interesting correlations in the data that confirm existing hypotheses. Finally, and perhaps most importantly, we demonstrate the effectiveness and future potential of systems like CloudCoder to help us study novice programmers.",
"title": ""
},
{
"docid": "896eac4a4b782075119998ce6cfbf366",
"text": "In recent years, sustainability has been a major focus of fashion business operations because fashion industry development causes harmful effects to the environment, both indirectly and directly. The sustainability of the fashion industry is generally based on several levels and this study focuses on investigating the optimal supplier selection problem for sustainable materials supply in fashion clothing production. Following the ground rule that sustainable development is based on the Triple Bottom Line (TBL), this paper has framed twelve criteria from the economic, environmental and social perspectives for evaluating suppliers. The well-established multi-criteria decision making tool Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is employed for ranking potential suppliers among the pool of suppliers. Through a real case study, the proposed approach has been applied and some managerial implications are derived.",
"title": ""
},
{
"docid": "213acf777983f4339d6ee25a4467b1be",
"text": "RoadGraph is a graph based environmental model for driver assistance systems. It integrates information from different sources like digital maps, onboard sensors and V2X communication into one single model about vehicle's environment. At the moment of information aggregation some function independent situation analysis is done. In this paper the concept of the RoadGraph is described in detail and first results are shown.",
"title": ""
}
] |
scidocsrr
|
5dc89122ca1e53951781f75b21942cfb
|
DAGER: Deep Age, Gender and Emotion Recognition Using Convolutional Neural Network
|
[
{
"docid": "18cf88b01ff2b20d17590d7b703a41cb",
"text": "Human age provides key demographic information. It is also considered as an important soft biometric trait for human identification or search. Compared to other pattern recognition problems (e.g., object classification, scene categorization), age estimation is much more challenging since the difference between facial images with age variations can be more subtle and the process of aging varies greatly among different individuals. In this work, we investigate deep learning techniques for age estimation based on the convolutional neural network (CNN). A new framework for age feature extraction based on the deep learning model is built. Compared to previous models based on CNN, we use feature maps obtained in different layers for our estimation work instead of using the feature obtained at the top layer. Additionally, a manifold learning algorithm is incorporated in the proposed scheme and this improves the performance significantly. Furthermore, we also evaluate different classification and regression schemes in estimating age using the deep learned aging pattern (DLA). To the best of our knowledge, this is the first time that deep learning technique is introduced and applied to solve the age estimation problem. Experimental results on two datasets show that the proposed approach is significantly better than the state-of-the-art.",
"title": ""
}
] |
[
{
"docid": "2af524d484b7bb82db2dd92727a49fff",
"text": "Computer-based multimedia learning environments — consisting of pictures (such as animation) and words (such as narration) — offer a potentially powerful venue for improving student understanding. How can we use words and pictures to help people understand how scientific systems work, such as how a lightning storm develops, how the human respiratory system operates, or how a bicycle tire pump works? This paper presents a cognitive theory of multimedia learning which draws on dual coding theory, cognitive load theory, and constructivist learning theory. Based on the theory, principles of instructional design for fostering multimedia learning are derived and tested. The multiple representation principle states that it is better to present an explanation in words and pictures than solely in words. The contiguity principle is that it is better to present corresponding words and pictures simultaneously rather than separately when giving a multimedia explanation. The coherence principle is that multimedia explanations are better understood when they include few rather than many extraneous words and sounds. The modality principle is that it is better to present words as auditory narration than as visual on-screen text. The redundancy principle is that it is better to present animation and narration than to present animation, narration, and on-screen text. By beginning with a cognitive theory of how learners process multimedia information, we have been able to conduct focused research that yields some preliminary principles of instructional design for multimedia messages. 2001 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d60fb42ca7082289c907c0e2e2c343fc",
"text": "As mentioned in the paper, the direct optimization of group assignment variables with reduced gradients yields faster convergence than optimization via softmax reparametrization. Figure 1 shows the distribution plots, which are provided by TensorFlow, of class-to-group assignments using two methods. Despite starting with lower variance, when the distribution of group assignment variables diverged to",
"title": ""
},
{
"docid": "f38bdbabdceacbf8b50739e6dd065876",
"text": "Treatment of high-strength phenolic wastewater by a novel two-step method was investigated in the present study. The two-step treatment method consisted of chemical coagulation of the wastewater by metal chloride followed by further phenol reduction by resin adsorption. The present combined treatment was found to be highly efficient in removing the phenol concentration from the aqueous solution and was proved capable of lowering the initial phenol concentration from over 10,000 mg/l to below direct discharge level (1mg/l). In the experimental tests, appropriate conditions were identified for optimum treatment operation. Theoretical investigations were also performed for batch equilibrium adsorption and column adsorption of phenol by macroreticular resin. The empirical Freundlich isotherm was found to represent well the equilibrium phenol adsorption. The column model with appropriately identified model parameters could accurately predict the breakthrough times.",
"title": ""
},
{
"docid": "0dd75eaa062ea30742e03b71d17119c5",
"text": "Ayahuasca is a hallucinogenic beverage that combines the action of the 5-HT2A/2C agonist N,N-dimethyltryptamine (DMT) from Psychotria viridis with the monoamine oxidase inhibitors (MAOIs) induced by beta-carbonyls from Banisteriopsis caapi. Previous investigations have highlighted the involvement of ayahuasca with the activation of brain regions known to be involved with episodic memory, contextual associations and emotional processing after ayahuasca ingestion. Moreover long term users show better performance in neuropsychological tests when tested in off-drug condition. This study evaluated the effects of long-term administration of ayahuasca on Morris water maze (MWM), fear conditioning and elevated plus maze (EPM) performance in rats. Behavior tests started 48h after the end of treatment. Freeze-dried ayahuasca doses of 120, 240 and 480 mg/kg were used, with water as the control. Long-term administration consisted of a daily oral dose for 30 days by gavage. The behavioral data indicated that long-term ayahuasca administration did not affect the performance of animals in MWM and EPM tasks. However the dose of 120 mg/kg increased the contextual conditioned fear response for both background and foreground fear conditioning. The tone conditioned response was not affected after long-term administration. In addition, the increase in the contextual fear response was maintained during the repeated sessions several weeks after training. Taken together, these data showed that long-term ayahuasca administration in rats can interfere with the contextual association of emotional events, which is in agreement with the fact that the beverage activates brain areas related to these processes.",
"title": ""
},
{
"docid": "c8b4ea815c449872fde2df910573d137",
"text": "Two clinically distinct forms of Blount disease (early-onset and late-onset), based on whether the lower-limb deformity develops before or after the age of four years, have been described. Although the etiology of Blount disease may be multifactorial, the strong association with childhood obesity suggests a mechanical basis. A comprehensive analysis of multiplanar deformities in the lower extremity reveals tibial varus, procurvatum, and internal torsion along with limb shortening. Additionally, distal femoral varus is commonly noted in the late-onset form. When a patient has early-onset disease, a realignment tibial osteotomy before the age of four years decreases the risk of recurrent deformity. Gradual correction with distraction osteogenesis is an effective means of achieving an accurate multiplanar correction, especially in patients with late-onset disease.",
"title": ""
},
{
"docid": "bffddca72c7e9d6e5a8c760758a98de0",
"text": "In this paper we present Sentimentor, a tool for sentiment analysis of Twitter data. Sentimentor utilises the naive Bayes Classifier to classify Tweets into positive, negative or objective sets. We present experimental evaluation of our dataset and classification results, our findings are not contridictory with existing work.",
"title": ""
},
{
"docid": "60d0af0788a1b6641c722eafd0d1b8bb",
"text": "Enhancing the quality of image is a continuous process in image processing related research activities. For some applications it becomes essential to have best quality of image such as in forensic department, where in order to retrieve maximum possible information, image has to be enlarged in terms of size, with higher resolution and other features associated with it. Such obtained high quality images have also a concern in satellite imaging, medical science, High Definition Television (HDTV), etc. In this paper a novel approach of getting high resolution image from a single low resolution image is discussed. The Non Sub-sampled Contourlet Transform (NSCT) based learning is used to learn the NSCT coefficients at the finer scale of the unknown high-resolution image from a dataset of high resolution images. The cost function consisting of a data fitting term and a Gabor prior term is optimized using an Iterative Back Projection (IBP). By making use of directional decomposition property of the NSCT and the Gabor filter bank with various orientations, the proposed method is capable to reconstruct an image with less edge artifacts. The validity of the proposed approach is proven through simulation on several images. RMS measures, PSNR measures and illustrations show the success of the proposed method.",
"title": ""
},
{
"docid": "b7b664d1749b61f2f423d7080a240a60",
"text": "The research challenge addressed in this paper is to devise effective techniques for identifying task-based sessions, i.e. sets of possibly non contiguous queries issued by the user of a Web Search Engine for carrying out a given task. In order to evaluate and compare different approaches, we built, by means of a manual labeling process, a ground-truth where the queries of a given query log have been grouped in tasks. Our analysis of this ground-truth shows that users tend to perform more than one task at the same time, since about 75% of the submitted queries involve a multi-tasking activity. We formally define the Task-based Session Discovery Problem (TSDP) as the problem of best approximating the manually annotated tasks, and we propose several variants of well known clustering algorithms, as well as a novel efficient heuristic algorithm, specifically tuned for solving the TSDP. These algorithms also exploit the collaborative knowledge collected by Wiktionary and Wikipedia for detecting query pairs that are not similar from a lexical content point of view, but actually semantically related. The proposed algorithms have been evaluated on the above ground-truth, and are shown to perform better than state-of-the-art approaches, because they effectively take into account the multi-tasking behavior of users.",
"title": ""
},
{
"docid": "cfb565adac45aec4597855d4b6d86e97",
"text": "3 Cooccurrence and frequency counts 11 12 3.1 Surface cooccurrence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 13 3.2 Textual cooccurrence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 14 3.3 Syntactic cooccurrence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 15 3.4 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 16",
"title": ""
},
{
"docid": "2c69eb4be7bc2bed32cfbbbe3bc41a5d",
"text": "The Sapienza University Networking framework for underwater Simulation Emulation and real-life Testing (SUNSET) is a toolkit for the implementation and testing of protocols for underwater sensor networks. SUNSET enables a radical new way of performing experimental research on underwater communications. It allows protocol designers and implementors to easily realize their solutions and to evaluate their performance through simulation, in-lab emulation and trials at sea in a direct and transparent way, and independently of specific underwater hardware platforms. SUNSET provides a complete toolchain of predeployment and deployment time tools able to identify risks, malfunctioning and under-performing solutions before incurring the expense of going to sea. Novel underwater systems can therefore be rapidly and easily investigated. Heterogeneous underwater communication technologies from different vendors can be used, allowing the evaluation of the impact of different combinations of hardware and software on the overall system performance. Using SUNSET, underwater devices can be reconfigured and controlled remotely in real time, using acoustic links. This allows the performance investigation of underwater systems under different settings and configurations and significantly reduces the cost and complexity of at-sea trials. This paper describes the architectural concept of SUNSET and presents some exemplary results of its use in the field. The SUNSET framework has been extensively validated during more than fifteen at-sea experimental campaigns in the past four years. Several of these have been conducted jointly with the NATO STO Centre for Maritime Research and Experimentation (CMRE) under a collaboration between the University of Rome and CMRE.",
"title": ""
},
{
"docid": "32ca9711622abd30c7c94f41b91fa3f6",
"text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.",
"title": ""
},
{
"docid": "7343d29bfdc1a4466400f8752dce4622",
"text": "We present a novel method for detecting occlusions and in-painting unknown areas of a light field photograph, based on previous work in obstruction-free photography and light field completion. An initial guess at separating the occluder from the rest of the photograph is computed by aligning backgrounds of the images and using this information to generate an occlusion mask. The masked pixels are then synthesized using a patch-based texture synthesis algorithm, with the median image as the source of each patch.",
"title": ""
},
{
"docid": "174e4ef91fa7e2528e0e5a2a9f1e0c7c",
"text": "This paper describes the development of a human airbag system which is designed to reduce the impact force from slippage falling-down. A micro inertial measurement unit (muIMU) which is based on MEMS accelerometers and gyro sensors is developed as the motion sensing part of the system. A weightless recognition algorithm is used for real-time falling determination. With the algorithm, the microcontroller integrated with muIMU can discriminate falling-down motion from normal human motions and trigger an airbag system when a fall occurs. Our airbag system is designed to be fast response with moderate input pressure, i.e., the experimental response time is less than 0.3 second under 0.4 MPa (gage pressure). Also, we present our progress on development of the inflator and the airbags",
"title": ""
},
{
"docid": "5a092bc5bac7e36c71ad764768c2ac5a",
"text": "Adolescence is characterized by making risky decisions. Early lesion and neuroimaging studies in adults pointed to the ventromedial prefrontal cortex and related structures as having a key role in decision-making. More recent studies have fractionated decision-making processes into its various components, including the representation of value, response selection (including inter-temporal choice and cognitive control), associative learning, and affective and social aspects. These different aspects of decision-making have been the focus of investigation in recent studies of the adolescent brain. Evidence points to a dissociation between the relatively slow, linear development of impulse control and response inhibition during adolescence versus the nonlinear development of the reward system, which is often hyper-responsive to rewards in adolescence. This suggests that decision-making in adolescence may be particularly modulated by emotion and social factors, for example, when adolescents are with peers or in other affective ('hot') contexts.",
"title": ""
},
{
"docid": "473eb35bb5d3a85a4e9f5867aaf3c363",
"text": "This paper develops techniques using which humans can be visually recognized. While face recognition would be one approach to this problem, we believe that it may not be always possible to see a person?s face. Our technique is complementary to face recognition, and exploits the intuition that human motion patterns and clothing colors can together encode several bits of information. Treating this information as a \"temporary fingerprint\", it may be feasible to recognize an individual with reasonable consistency, while allowing her to turn off the fingerprint at will.\n One application of visual fingerprints relates to augmented reality, in which an individual looks at other people through her camera-enabled glass (e.g., Google Glass) and views information about them. Another application is in privacy-preserving pictures ? Alice should be able to broadcast her \"temporary fingerprint\" to all cameras in the vicinity along with a privacy preference, saying \"remove me\". If a stranger?s video happens to include Alice, the device can recognize her fingerprint in the video and erase her completely. This paper develops the core visual fingerprinting engine ? InSight ? on the platform of Android smartphones and a backend server running MATLAB and OpenCV. Results from real world experiments show that 12 individuals can be discriminated with 90% accuracy using 6 seconds of video/motion observations. Video based emulation confirms scalability up to 40 users.",
"title": ""
},
{
"docid": "7994b0cad77119ed42c964be6a05ab94",
"text": "CONTEXT-AWARE ARGUMENT MINING AND ITS APPLICATIONS IN EDUCATION",
"title": ""
},
{
"docid": "95db5921ba31588e962ffcd8eb6469b0",
"text": "The purpose of text clustering in information retrieval is to discover groups of semantically related documents. Accurate and comprehensible cluster descriptions (labels) let the user comprehend the collection’s content faster and are essential for various document browsing interfaces. The task of creating descriptive, sensible cluster labels is difficult—typical text clustering algorithms focus on optimizing proximity between documents inside a cluster and rely on keyword representation for describing discovered clusters. In the approach called Description Comes First (DCF) cluster labels are as important as document groups—DCF promotes machine discovery of comprehensible candidate cluster labels later used to discover related document groups. In this paper we describe an application of DCF to the k-Means algorithm, including results of experiments performed on the 20-newsgroups document collection. Experimental evaluation showed that DCF does not decrease the metrics used to assess the quality of document assignment and offers good cluster labels in return. The algorithm utilizes search engine’s data structures directly to scale to large document collections. Introduction Organizing unstructured collections of textual content into semantically related groups, from now on referred to as text clustering or clustering, provides unique ways of digesting large amounts of information. In the context of information retrieval and text mining, a general definition of clustering is the following: given a large set of documents, automatically discover diverse subsets of documents that share a similar topic. In typical applications input documents are first transformed into a mathematical model where each document is described by certain features. The most popular representation for text is the vector space model [Salton, 1989]. In the VSM, documents are expressed as rows in a matrix, where columns represent unique terms (features) and the intersection of a column and a row indicates the importance of a given word to the document. A model such as the VSM helps in calculation of similarity between documents (angle between document vectors) and thus facilitates application of various known (or modified) numerical clustering algorithms. While this is sufficient for many applications, problems arise when one needs to construct some representation of the discovered groups of documents—a label, a symbolic description for each cluster, something to represent the information that makes documents inside a cluster similar to each other and that would convey this information to the user. Cluster labeling problems are often present in modern text and Web mining applications with document browsing interfaces. The process of returning from the mathematical model of clusters to comprehensible, explanatory labels is difficult because text representation used for clustering rarely preserves the inflection and syntax of the original text. Clustering algorithms presented in literature usually fall back to the simplest form of cluster representation—a list of cluster’s keywords (most “central” terms in the cluster). Unfortunately, keywords are stripped from syntactical information and force the user to manually find the underlying concept which is often confusing. Motivation and Related Works The user of a retrieval system judges the clustering algorithm by what he sees in the output— clusters’ descriptions, not the final model which is usually incomprehensible for humans. 
The experiences with the text clustering framework Carrot (www.carrot2.org) resulted in posing a slightly different research problem (aligned with clustering but not exactly the same). We shifted the emphasis of a clustering method to providing comprehensible and accurate cluster labels in addition to discovery of document groups. We call this problem descriptive clustering: discovery of diverse groups of semantically related documents associated with a meaningful, comprehensible and compact text labels. This definition obviously leaves a great deal of freedom for interpretation because terms such as meaningful or accurate are very vague. We narrowed the set of requirements of descriptive clustering to the following ones: — comprehensibility understood as grammatical correctness (word order, inflection, agreement between words if applicable); — conciseness of labels. Phrases selected for a cluster label should minimize its total length (without sacrificing its comprehensibility); — transparency of the relationship between cluster label and cluster content, best explained by ability to answer questions as: “Why was this label selected for these documents?” and “Why is this document in a cluster labeled X?”. Little research has been done to address the requirements above. In the STC algorithm authors employed frequently recurring phrases as both document similarity feature and final cluster description [Zamir and Etzioni, 1999]. A follow-up work [Ferragina and Gulli, 2004] showed how to avoid certain STC limitations and use non-contiguous phrases (so-called approximate sentences). A different idea of ‘label-driven’ clustering appeared in clustering with committees algorithm [Pantel and Lin, 2002], where strongly associated terms related to unambiguous concepts were evaluated using semantic relationships from WordNet. We introduced the DCF approach in our previous work [Osiński and Weiss, 2005] and showed its feasibility using an algorithm called Lingo. Lingo used singular value decomposition of the term-document matrix to select good cluster labels among candidates extracted from the text (frequent phrases). The algorithm was designed to cluster results from Web search engines (short snippets and fragmented descriptions of original documents) and proved to provide diverse meaningful cluster labels. Lingo’s weak point is its limited scalability to full or even medium sized documents. In this",
"title": ""
},
{
"docid": "937bb3c066500ddffe8d3d78b3580c26",
"text": "Multimodal semantic representation is an evolving area of research in natural language processing as well as computer vision. Combining or integrating perceptual information, such as visual features, with linguistic features is recently being actively studied. This paper presents a novel bimodal autoencoder model for multimodal representation learning: the autoencoder learns in order to enhance linguistic feature vectors by incorporating the corresponding visual features. During the runtime, owing to the trained neural network, visually enhanced multimodal representations can be achieved even for words for which direct visual-linguistic correspondences are not learned. The empirical results obtained with standard semantic relatedness tasks demonstrate that our approach is generally promising. We further investigate the potential efficacy of the enhanced word embeddings in discriminating antonyms and synonyms from vaguely related words.",
"title": ""
},
{
"docid": "64e573006e2fb142dba1b757b1e4f20d",
"text": "Online learning algorithms often have to operate in the presence of concept drift (i.e., the concepts to be learned can change with time). This paper presents a new categorization for concept drift, separating drifts according to different criteria into mutually exclusive and nonheterogeneous categories. Moreover, although ensembles of learning machines have been used to learn in the presence of concept drift, there has been no deep study of why they can be helpful for that and which of their features can contribute or not for that. As diversity is one of these features, we present a diversity analysis in the presence of different types of drifts. We show that, before the drift, ensembles with less diversity obtain lower test errors. On the other hand, it is a good strategy to maintain highly diverse ensembles to obtain lower test errors shortly after the drift independent on the type of drift, even though high diversity is more important for more severe drifts. Longer after the drift, high diversity becomes less important. Diversity by itself can help to reduce the initial increase in error caused by a drift, but does not provide the faster recovery from drifts in long-term.",
"title": ""
}
] |
scidocsrr
|
4d8b0f9058f9468c453375d60c45c2eb
|
A General Framework for Temporal Calibration of Multiple Proprioceptive and Exteroceptive Sensors
|
[
{
"docid": "74ae28cf8b7f458b857b49748573709d",
"text": "Muscle fiber conduction velocity is based on the ti me delay estimation between electromyography recording channels. The aims of this study is to id entify the best estimator of generalized correlati on methods in the case where time delay is constant in order to extent these estimator to the time-varyin g delay case . The fractional part of time delay was c lculated by using parabolic interpolation. The re sults indicate that Eckart filter and Hannan Thomson (HT ) give the best results in the case where the signa l to noise ratio (SNR) is 0 dB.",
"title": ""
}
] |
[
{
"docid": "aeabcc9117801db562d83709fda22722",
"text": "The world’s population is aging at a phenomenal rate. Certain types of cognitive decline, in particular some forms of memory impairment, occur much more frequently in the elderly. This paper describes Autominder, a cognitive orthotic system intended to help older adults adapt to cognitive decline and continue the satisfactory performance of routine activities, thereby potentially enabling them to remain in their own homes longer. Autominder achieves this goal by providing adaptive, personalized reminders of (basic, instrumental, and extended) activities of daily living. Cognitive orthotic systems on the market today mainly provide alarms for prescribed activities at fixed times that are specified in advance. In contrast, Autominder uses a range of AI techniques to model an individual’s daily plans, observe and reason about the execution of those plans, and make decisions about whether and when it is most appropriate to issue reminders. Autominder is currently deployed on a mobile robot, and is being developed as part of the Initiative on Personal Robotic Assistants for the Elderly (the Nursebot project). © 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "dcd21065898c9dd108617a3db8dad6a1",
"text": "Advanced driver assistance systems are the newest addition to vehicular technology. Such systems use a wide array of sensors to provide a superior driving experience. Vehicle safety and driver alert are important parts of these system. This paper proposes a driver alert system to prevent and mitigate adjacent vehicle collisions by proving warning information of on-road vehicles and possible collisions. A dynamic Bayesian network (DBN) is utilized to fuse multiple sensors to provide driver awareness. It detects oncoming adjacent vehicles and gathers ego vehicle motion characteristics using an on-board camera and inertial measurement unit (IMU). A histogram of oriented gradient feature based classifier is used to detect any adjacent vehicles. Vehicles front-rear end and side faces were considered in training the classifier. Ego vehicles heading, speed and acceleration are captured from the IMU and feed into the DBN. The network parameters were learned from data via expectation maximization(EM) algorithm. The DBN is designed to provide two type of warning to the driver, a cautionary warning and a brake alert for possible collision with other vehicles. Experiments were completed on multiple public databases, demonstrating successful warnings and brake alerts in most situations.",
"title": ""
},
{
"docid": "d6477bab69274263bc208d19d9ec3ec2",
"text": "Software APIs often contain too many methods and parameters for developers to memorize or navigate effectively. Instead, developers resort to finding answers through online search engines and systems such as Stack Overflow. However, the process of finding and integrating a working solution is often very time-consuming. Though code search engines have increased in quality, there remain significant language- and workflow-gaps in meeting end-user needs. Novice and intermediate programmers often lack the language to query, and the expertise in transferring found code to their task. To address this problem, we present CodeMend, a system to support finding and integration of code. CodeMend leverages a neural embedding model to jointly model natural language and code as mined from large Web and code datasets. We also demonstrate a novel, mixed-initiative, interface to support query and integration steps. Through CodeMend, end-users describe their goal in natural language. The system makes salient the relevant API functions, the lines in the end-user's program that should be changed, as well as proposing the actual change. We demonstrate the utility and accuracy of CodeMend through lab and simulation studies.",
"title": ""
},
{
"docid": "c4ebb90bad820a3aba5f0746791b3b5c",
"text": "This paper is concerned with the problem of finding a sparse graph capturing the conditional dependence between the entries of a Gaussian random vector, where the only available information is a sample correlation matrix. A popular approach is to solve a graphical lasso problem with a sparsity-promoting regularization term. This paper derives a simple condition under which the computationally-expensive graphical lasso behaves the same as the simple heuristic method of thresholding. This condition depends only on the solution of graphical lasso and makes no direct use of the sample correlation matrix or the regularization coefficient. It is also proved that this condition is always satisfied if the solution of graphical lasso is replaced by its first-order Taylor approximation. The condition is tested on several random problems and it is shown that graphical lasso and the thresholding method (based on the correlation matrix) lead to a similar result (if not equivalent), provided the regularization term is high enough to seek a sparse graph.",
"title": ""
},
{
"docid": "4d449388969075c56b921f9183fbc7b5",
"text": "Tasks such as question answering and semantic search are dependent on the ability of querying & reasoning over large-scale commonsense knowledge bases (KBs). However, dealing with commonsense data demands coping with problems such as the increase in schema complexity, semantic inconsistency, incompleteness and scalability. This paper proposes a selective graph navigation mechanism based on a distributional relational semantic model which can be applied to querying & reasoning over heterogeneous knowledge bases (KBs). The approach can be used for approximative reasoning, querying and associational knowledge discovery. In this paper we focus on commonsense reasoning as the main motivational scenario for the approach. The approach focuses on addressing the following problems: (i) providing a semantic selection mechanism for facts which are relevant and meaningful in a specific reasoning & querying context and (ii) allowing coping with information incompleteness in large KBs. The approach is evaluated using ConceptNet as a commonsense KB, and achieved high selectivity, high scalability and high accuracy in the selection of meaningful navigational paths. Distributional semantics is also used as a principled mechanism to cope with information incompleteness.",
"title": ""
},
{
"docid": "53a05c0438a0a26c8e3e74e1fa7b192b",
"text": "This paper presents a simple method based on sinusoidal-amplitude detector for realizing the resolver-signal demodulator. The proposed demodulator consists of two full-wave rectifiers, two ±unity-gain amplifiers, and two sinusoidal-amplitude detectors with control switches. Two output voltages are proportional to sine and cosine envelopes of resolver-shaft angle without low-pass filter. Experimental results demonstrating characteristic of the proposed circuit are included.",
"title": ""
},
{
"docid": "1c89a187c4d930120454dfffaa1e7d5b",
"text": "Many researches in face recognition have been dealing with the challenge of the great variability in head pose, lighting intensity and direction,facial expression, and aging. The main purpose of this overview is to describe the recent 3D face recognition algorithms. The last few years more and more 2D face recognition algorithms are improved and tested on less than perfect images. However, 3D models hold more information of the face, like surface information, that can be used for face recognition or subject discrimination. Another major advantage is that 3D face recognition is pose invariant. A disadvantage of most presented 3D face recognition methods is that they still treat the human face as a rigid object. This means that the methods aren’t capable of handling facial expressions. Although 2D face recognition still seems to outperform the 3D face recognition methods, it is expected that this will change in the near future.",
"title": ""
},
{
"docid": "83c81ecb870e84d4e8ab490da6caeae2",
"text": "We introduceprogram shepherding, a method for monitoring control flow transfers during program execution to enforce a security policy. Shepherding ensures that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Shepherding can also enforce entry points as the only way to execute shared library code. Furthermore, shepherding guarantees that sandboxing checks around any type of program operation will never be bypassed. We have implemented these capabilities efficiently in a runtime system with minimal or no performance penalties. This system operates on unmodified native binaries, requires no special hardware or operating system support, and runs on existing IA-32 machines.",
"title": ""
},
{
"docid": "02926cfd609755bc938512545af08cb7",
"text": "An efficient genetic transformation method for kabocha squash (Cucurbita moschata Duch cv. Heiankogiku) was established by wounding cotyledonary node explants with aluminum borate whiskers prior to inoculation with Agrobacterium. Adventitious shoots were induced from only the proximal regions of the cotyledonary nodes and were most efficiently induced on Murashige–Skoog agar medium with 1 mg/L benzyladenine. Vortexing with 1% (w/v) aluminum borate whiskers significantly increased Agrobacterium infection efficiency in the proximal region of the explants. Transgenic plants were screened at the T0 generation by sGFP fluorescence, genomic PCR, and Southern blot analyses. These transgenic plants grew normally and T1 seeds were obtained. We confirmed stable integration of the transgene and its inheritance in T1 generation plants by sGFP fluorescence and genomic PCR analyses. The average transgenic efficiency for producing kabocha squashes with our method was about 2.7%, a value sufficient for practical use.",
"title": ""
},
{
"docid": "1fa6ee7cf37d60c182aa7281bd333649",
"text": "To cope with the explosion of information in mathematics and physics, we need a unified mathematical language to integrate ideas and results from diverse fields. Clifford Algebra provides the key to a unifled Geometric Calculus for expressing, developing, integrating and applying the large body of geometrical ideas running through mathematics and physics.",
"title": ""
},
{
"docid": "29dcdc7c19515caad04c6fb58e7de4ea",
"text": "The standard way to procedurally generate random terrain for video games and other applications is to post-process the output of a fast noise generator such as Perlin noise. Tuning the post-processing to achieve particular types of terrain requires game designers to be reasonably well-trained in mathematics. A well-known variant of Perlin noise called value noise is used in a process accessible to designers trained in geography to generate geotypical terrain based on elevation statistics drawn from widely available sources such as the United States Geographical Service. A step-by-step process for downloading and creating terrain from realworld USGS elevation data is described, and an implementation in C++ is given.",
"title": ""
},
{
"docid": "3e80fb154cb594dc15f5318b774cf0c3",
"text": "Progressive multifocal leukoencephalopathy (PML) is a rare, subacute, demyelinating disease of the central nervous system caused by JC virus. Studies of PML from HIV Clade C prevalent countries are scarce. We sought to study the clinical, neuroimaging, and pathological features of PML in HIV Clade C patients from India. This is a prospective cum retrospective study, conducted in a tertiary care Neurological referral center in India from Jan 2001 to May 2012. Diagnosis was considered “definite” (confirmed by histopathology or JCV PCR in CSF) or “probable” (confirmed by MRI brain). Fifty-five patients of PML were diagnosed between January 2001 and May 2012. Complete data was available in 38 patients [mean age 39 ± 8.9 years; duration of illness—82.1 ± 74.7 days). PML was prevalent in 2.8 % of the HIV cohort seen in our Institute. Hemiparesis was the commonest symptom (44.7 %), followed by ataxia (36.8 %). Definitive diagnosis was possible in 20 cases. Eighteen remained “probable” wherein MRI revealed multifocal, symmetric lesions, hypointense on T1, and hyperintense on T2/FLAIR. Stereotactic biopsy (n = 11) revealed demyelination, enlarged oligodendrocytes with intranuclear inclusions and astrocytosis. Immunohistochemistry revelaed the presence of JC viral antigen within oligodendroglial nuclei and astrocytic cytoplasm. No differences in clinical, radiological, or pathological features were evident from PML associated with HIV Clade B. Clinical suspicion of PML was entertained in only half of the patients. Hence, a high index of suspicion is essential for diagnosis. There are no significant differences between clinical, radiological, and pathological picture of PML between Indian and Western countries.",
"title": ""
},
{
"docid": "126aaec3593ab395c046098d5136fe10",
"text": "This paper presents the SocioMetric Badges Corpus, a new corpus for social interaction studies collected during a 6 weeks contiguous period in a research institution, monitoring the activity of 53 people. The design of the corpus was inspired by the need to provide researchers and practitioners with: a) raw digital trace data that could be used to directly address the task of investigating, reconstructing and predicting people's actual social behavior in complex organizations, b) information about participants' individual characteristics (e.g., personality traits), along with c) data concerning the general social context (e.g., participants' social networks) and the specific situations they find themselves in.",
"title": ""
},
{
"docid": "5d97670a243d1b1b5b5d1d6c47570820",
"text": "In the 21st century, social media has burgeoned into one of the most used channels of communication in the society. As social media becomes well recognised for its potential as a social communication channel, recent years have witnessed an increased interest of using social media in higher education (Alhazmi, & Abdul Rahman, 2013; Al-rahmi, Othman, & Musa, 2014; Al-rahmi, & Othman, 2013a; Chen, & Bryer, 2012; Selwyn, 2009, 2012 to name a few). A survey by Pearson (Seaman, & Tinti-kane, 2013), The Social Media Survey 2013 shows that 41% of higher education faculty in the U.S.A. population has use social media in teaching in 2013 compared to 34% of them using it in 2012. The survey results also show the increase use of social media for teaching by educators and faculty professionals has increase because they see the potential in applying and integrating social media technology to their teaching. Many higher education institutions and educators are now finding themselves expected to catch up with the world of social media applications and social media users. This creates a growing phenomenon for the educational use of social media to create, engage, and share existing or newly produced information between lecturers and students as well as among the students. Facebook has quickly become the social networking site of choice by university students due to its remarkable adoption rates of Facebook in universities (Muñoz, & Towner, 2009; Roblyer et al., 2010; Sánchez, Cortijo, & Javed, 2014). With this in mind, this paper aims to investigate the use of Facebook closed group by undergraduate students in a private university in the Klang Valley, Malaysia. It is also to analyse the interaction pattern among the students using the Facebook closed group pages.",
"title": ""
},
{
"docid": "1d084096acea83a62ecc6b010b302622",
"text": "The investigation of human activity patterns from location-based social networks like Twitter is an established approach of how to infer relationships and latent information that characterize urban structures. Researchers from various disciplines have performed geospatial analysis on social media data despite the data’s high dimensionality, complexity and heterogeneity. However, user-generated datasets are of multi-scale nature, which results in limited applicability of commonly known geospatial analysis methods. Therefore in this paper, we propose a geographic, hierarchical self-organizing map (Geo-H-SOM) to analyze geospatial, temporal and semantic characteristics of georeferenced tweets. The results of our method, which we validate in a case study, demonstrate the ability to explore, abstract and cluster high-dimensional geospatial and semantic information from crowdsourced data. ARTICLE HISTORY Received 8 April 2015 Accepted 19 September 2015",
"title": ""
},
{
"docid": "6e8a9c37672ec575821da5c9c3145500",
"text": "As video games become increasingly popular pastimes, it becomes more important to understand how different individuals behave when they play these games. Previous research has focused mainly on behavior in massively multiplayer online role-playing games; therefore, in the current study we sought to extend on this research by examining the connections between personality traits and behaviors in video games more generally. Two hundred and nineteen university students completed measures of personality traits, psychopathic traits, and a questionnaire regarding frequency of different behaviors during video game play. A principal components analysis of the video game behavior questionnaire revealed four factors: Aggressing, Winning, Creating, and Helping. Each behavior subscale was significantly correlated with at least one personality trait. Men reported significantly more Aggressing, Winning, and Helping behavior than women. Controlling for participant sex, Aggressing was negatively correlated with Honesty–Humility, Helping was positively correlated with Agreeableness, and Creating was negatively correlated with Conscientiousness. Aggressing was also positively correlated with all psychopathic traits, while Winning and Creating were correlated with one psychopathic trait each. Frequency of playing video games online was positively correlated with the Aggressing, Winning, and Helping scales, but not with the Creating scale. The results of the current study provide support for previous research on personality and behavior in massively multiplayer online role-playing games. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e75f830b902ca7d0e8d9e9fa03a62440",
"text": "Changes in synaptic connections are considered essential for learning and memory formation. However, it is unknown how neural circuits undergo continuous synaptic changes during learning while maintaining lifelong memories. Here we show, by following postsynaptic dendritic spines over time in the mouse cortex, that learning and novel sensory experience lead to spine formation and elimination by a protracted process. The extent of spine remodelling correlates with behavioural improvement after learning, suggesting a crucial role of synaptic structural plasticity in memory formation. Importantly, a small fraction of new spines induced by novel experience, together with most spines formed early during development and surviving experience-dependent elimination, are preserved and provide a structural basis for memory retention throughout the entire life of an animal. These studies indicate that learning and daily sensory experience leave minute but permanent marks on cortical connections and suggest that lifelong memories are stored in largely stably connected synaptic networks.",
"title": ""
},
{
"docid": "239e37736832f6f0de050ed1749ba648",
"text": "An approach for capturing and modeling individual entertainment (“fun”) preferences is applied to users of the innovative Playware playground, an interactive physical playground inspired by computer games, in this study. The goal is to construct, using representative statistics computed from children’s physiological signals, an estimator of the degree to which games provided by the playground engage the players. For this purpose children’s heart rate (HR) signals, and their expressed preferences of how much “fun” particular game variants are, are obtained from experiments using games implemented on the Playware playground. A comprehensive statistical analysis shows that children’s reported entertainment preferences correlate well with specific features of the HR signal. Neuro-evolution techniques combined with feature set selection methods permit the construction of user models that predict reported entertainment preferences given HR features. These models are expressed as artificial neural networks and are demonstrated and evaluated on two Playware games and two control tasks requiring physical activity. The best network is able to correctly match expressed preferences in 64% of cases on previously unseen data (p−value 6 · 10−5). The generality of the methodology, its limitations, its usability as a real-time feedback mechanism for entertainment augmentation and as a validation tool are discussed.",
"title": ""
},
{
"docid": "3224233a8a91c8d44e366b7b2ab8e7a1",
"text": "In this work we describe the scenario of fully-immersive desktop VR, which serves the overall goal to seamlessly integrate with existing workflows and workplaces of data analysts and researchers, such that they can benefit from the gain in productivity when immersed in their data-spaces. Furthermore, we provide a literature review showing the status quo of techniques and methods available for realizing this scenario under the raised restrictions. Finally, we propose a concept of an analysis framework and the decisions made and the decisions still to be taken, to outline how the described scenario and the collected methods are feasible in a real use case.",
"title": ""
},
{
"docid": "7c6fa8d48ad058f1c65f1c775b71e4b5",
"text": "A new method for determining nucleotide sequences in DNA is described. It is similar to the \"plus and minus\" method [Sanger, F. & Coulson, A. R. (1975) J. Mol. Biol. 94, 441-448] but makes use of the 2',3'-dideoxy and arabinonucleoside analogues of the normal deoxynucleoside triphosphates, which act as specific chain-terminating inhibitors of DNA polymerase. The technique has been applied to the DNA of bacteriophage varphiX174 and is more rapid and more accurate than either the plus or the minus method.",
"title": ""
}
] |
scidocsrr
|
7d786ea784346a8ed03ca411fb44aed2
|
Automatic nonverbal behavior indicators of depression and PTSD: the effect of gender
|
[
{
"docid": "f5bf18165f82b2fabdf43fbfed70a0fd",
"text": "Depression is a typical mood disorder, and the persons who are often in this state face the risk in mental and even physical problems. In recent years, there has therefore been increasing attention in machine based depression analysis. In such a low mood, both the facial expression and voice of human beings appear different from the ones in normal states. This paper presents a novel method, which comprehensively models visual and vocal modalities, and automatically predicts the scale of depression. On one hand, Motion History Histogram (MHH) extracts the dynamics from corresponding video and audio data to represent characteristics of subtle changes in facial and vocal expression of depression. On the other hand, for each modality, the Partial Least Square (PLS) regression algorithm is applied to learn the relationship between the dynamic features and depression scales using training data, and then predict the depression scale for an unseen one. Predicted values of visual and vocal clues are further combined at decision level for final decision. The proposed approach is evaluated on the AVEC2013 dataset and experimental results clearly highlight its effectiveness and better performance than baseline results provided by the AVEC2013 challenge organiser.",
"title": ""
}
] |
[
{
"docid": "ffc2079d68489ea7fae9f55ffd288018",
"text": "Soft robot arms possess unique capabilities when it comes to adaptability, flexibility, and dexterity. In addition, soft systems that are pneumatically actuated can claim high power-to-weight ratio. One of the main drawbacks of pneumatically actuated soft arms is that their stiffness cannot be varied independently from their end-effector position in space. The novel robot arm physical design presented in this article successfully decouples its end-effector positioning from its stiffness. An experimental characterization of this ability is coupled with a mathematical analysis. The arm combines the light weight, high payload to weight ratio and robustness of pneumatic actuation with the adaptability and versatility of variable stiffness. Light weight is a vital component of the inherent safety approach to physical human-robot interaction. To characterize the arm, a neural network analysis of the curvature of the arm for different input pressures is performed. The curvature-pressure relationship is also characterized experimentally.",
"title": ""
},
{
"docid": "2bb194184bea4b606ec41eb9eee0bfaa",
"text": "Our lives are heavily influenced by persuasive communication, and it is essential in almost any types of social interactions from business negotiation to conversation with our friends and family. With the rapid growth of social multimedia websites, it is becoming ever more important and useful to understand persuasiveness in the context of social multimedia content online. In this paper, we introduce our newly created multimedia corpus of 1,000 movie review videos obtained from a social multimedia website called ExpoTV.com, which will be made freely available to the research community. Our research results presented here revolve around the following 3 main research hypotheses. Firstly, we show that computational descriptors derived from verbal and nonverbal behavior can be predictive of persuasiveness. We further show that combining descriptors from multiple communication modalities (audio, text and visual) improve the prediction performance compared to using those from single modality alone. Secondly, we investigate if having prior knowledge of a speaker expressing a positive or negative opinion helps better predict the speaker's persuasiveness. Lastly, we show that it is possible to make comparable prediction of persuasiveness by only looking at thin slices (shorter time windows) of a speaker's behavior.",
"title": ""
},
{
"docid": "b9b68f6e2fd049d588d6bdb0c4878640",
"text": "Networks are a fundamental tool for understanding and modeling complex systems in physics, biology, neuroscience, engineering, and social science. Many networks are known to exhibit rich, lower-order connectivity patterns that can be captured at the level of individual nodes and edges. However, higher-order organization of complex networks -- at the level of small network subgraphs -- remains largely unknown. Here, we develop a generalized framework for clustering networks on the basis of higher-order connectivity patterns. This framework provides mathematical guarantees on the optimality of obtained clusters and scales to networks with billions of edges. The framework reveals higher-order organization in a number of networks, including information propagation units in neuronal networks and hub structure in transportation networks. Results show that networks exhibit rich higher-order organizational structures that are exposed by clustering based on higher-order connectivity patterns.\n Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.",
"title": ""
},
{
"docid": "6a32d9e43d7f4558fa6dbbc596ce4496",
"text": "Automatically mapping natural language into programming language semantics has always been a major and interesting challenge. In this paper, we approach such problem by carrying out mapping at syntactic level and then applying machine learning algorithms to derive an automatic translator of natural language questions into their associated SQL queries. For this purpose, we design a dataset of relational pairs containing syntactic trees of questions and queries and we encode them in Support Vector Machines by means of kernel functions. Pair classification experiments suggest that our approach is promising in deriving shared semantics between the languages above.",
"title": ""
},
{
"docid": "49ff096deb6621438286942b792d6af3",
"text": "Fast fashion is a business model that offers (the perception of) fashionable clothes at affordable prices. From an operations standpoint, fast fashion requires a highly responsive supply chain that can support a product assortment that is periodically changing. Though the underlying principles are simple, the successful execution of the fast-fashion business model poses numerous challenges. We present a careful examination of this business model and discuss its execution by analyzing the most prominent firms in the industry. We then survey the academic literature for research that is specifically relevant or directly related to fast fashion. Our goal is to expose the main components of fast fashion and to identify untapped research opportunities.",
"title": ""
},
{
"docid": "2ac5b08573e8b243ac0eb5b6ab10c73d",
"text": "The use of virtual reality (VR) display systems has escalated over the last 5 yr and may have consequences for those working within vision research. This paper provides a brief review of the literature pertaining to the representation of depth in stereoscopic VR displays. Specific attention is paid to the response of the accommodation system with its cross-links to vergence eye movements, and to the spatial errors that arise when portraying three-dimensional space on a two-dimensional window. It is suggested that these factors prevent large depth intervals of three-dimensional visual space being rendered with integrity through dual two-dimensional arrays.",
"title": ""
},
{
"docid": "522363d36c93b692265c42f9f3976461",
"text": "In this paper, we propose a novel semi-supervised approach for detecting profanity-related offensive content in Twitter. Our approach exploits linguistic regularities in profane language via statistical topic modeling on a huge Twitter corpus, and detects offensive tweets using automatically these generated features. Our approach performs competitively with a variety of machine learning (ML) algorithms. For instance, our approach achieves a true positive rate (TP) of 75.1% over 4029 testing tweets using Logistic Regression, significantly outperforming the popular keyword matching baseline, which has a TP of 69.7%, while keeping the false positive rate (FP) at the same level as the baseline at about 3.77%. Our approach provides an alternative to large scale hand annotation efforts required by fully supervised learning approaches.",
"title": ""
},
{
"docid": "67269d2f4cc4b4ac07c855e3dfaca4ca",
"text": "Electronic textiles, or e-textiles, are an increasingly important part of wearable computing, helping to make pervasive devices truly wearable. These soft, fabric-based computers can function as lovely embodiments of Mark Weiser's vision of ubiquitous computing: providing useful functionality while disappearing discreetly into the fabric of our clothing. E-textiles also give new, expressive materials to fashion designers, textile designers, and artists, and garments stemming from these disciplines usually employ technology in visible and dramatic style. Integrating computer science, electrical engineering, textile design, and fashion design, e-textiles cross unusual boundaries, appeal to a broad spectrum of people, and provide novel opportunities for creative experimentation both in engineering and design. Moreover, e-textiles are cutting- edge technologies that capture people's imagination in unusual ways. (What other emerging pervasive technology has Vogue magazine featured?) Our work aims to capitalize on these unique features by providing a toolkit that empowers novices to design, engineer, and build their own e-textiles.",
"title": ""
},
{
"docid": "b52eb0d80b64fc962b17fb08ce446e12",
"text": "INTRODUCTION\nPriapism describes a persistent erection arising from dysfunction of mechanisms regulating penile tumescence, rigidity, and flaccidity. A correct diagnosis of priapism is a matter of urgency requiring identification of underlying hemodynamics.\n\n\nAIMS\nTo define the types of priapism, address its pathogenesis and epidemiology, and develop an evidence-based guideline for effective management.\n\n\nMETHODS\nSix experts from four countries developed a consensus document on priapism; this document was presented for peer review and debate in a public forum and revisions were made based on recommendations of chairpersons to the International Consultation on Sexual Medicine. This report focuses on guidelines written over the past decade and reviews the priapism literature from 2003 to 2009. Although the literature is predominantly case series, recent reports have more detailed methodology including duration of priapism, etiology of priapism, and erectile function outcomes.\n\n\nMAIN OUTCOME MEASURES\nConsensus recommendations were based on evidence-based literature, best medical practices, and bench research.\n\n\nRESULTS\nBasic science supporting current concepts in the pathophysiology of priapism, and clinical research supporting the most effective treatment strategies are summarized in this review.\n\n\nCONCLUSIONS\nPrompt diagnosis and appropriate management of priapism are necessary to spare patients ineffective interventions and maximize erectile function outcomes. Future research is needed to understand corporal smooth muscle pathology associated with genetic and acquired conditions resulting in ischemic priapism. Better understanding of molecular mechanisms involved in the pathogenesis of stuttering ischemic priapism will offer new avenues for medical intervention. Documenting erectile function outcomes based on duration of ischemic priapism, time to interventions, and types of interventions is needed to establish evidence-based guidance. In contrast, pathogenesis of nonischemic priapism is understood, and largely attributable to trauma. Better documentation of onset of high-flow priapism in relation to time of injury, and response to conservative management vs. angiogroaphic or surgical interventions is needed to establish evidence-based guidance.",
"title": ""
},
{
"docid": "4f64e7ff2bed569d73da9cae011e995d",
"text": "Recent progress in semantic segmentation has been driven by improving the spatial resolution under Fully Convolutional Networks (FCNs). To address this problem, we propose a Stacked Deconvolutional Network (SDN) for semantic segmentation. In SDN, multiple shallow deconvolutional networks, which are called as SDN units, are stacked one by one to integrate contextual information and bring the fine recovery of localization information. Meanwhile, inter-unit and intra-unit connections are designed to assist network training and enhance feature fusion since the connections improve the flow of information and gradient propagation throughout the network. Besides, hierarchical supervision is applied during the upsampling process of each SDN unit, which enhances the discrimination of feature representations and benefits the network optimization. We carry out comprehensive experiments and achieve the new state-ofthe- art results on four datasets, including PASCAL VOC 2012, CamVid, GATECH, COCO Stuff. In particular, our best model without CRF post-processing achieves an intersection-over-union score of 86.6% in the test set.",
"title": ""
},
{
"docid": "719fab5525df0847e2cdd015bb2795ff",
"text": "The future smart grid is envisioned as a large scale cyberphysical system encompassing advanced power, communications, control, and computing technologies. To accommodate these technologies, it will have to build on solid mathematical tools that can ensure an efficient and robust operation of such heterogeneous and large-scale cyberphysical systems. In this context, this article is an overview on the potential of applying game theory for addressing relevant and timely open problems in three emerging areas that pertain to the smart grid: microgrid systems, demand-side management, and communications. In each area, the state-of-the-art contributions are gathered and a systematic treatment, using game theory, of some of the most relevant problems for future power systems is provided. Future opportunities for adopting game-theoretic methodologies in the transition from legacy systems toward smart and intelligent grids are also discussed. In a nutshell, this article provides a comprehensive account of the application of game theory in smart grid systems tailored to the interdisciplinary characteristics of these systems that integrate components from power systems, networking, communications, and control.",
"title": ""
},
{
"docid": "1865cf66083c30d74b555eab827d0f5f",
"text": "Business intelligence and analytics (BIA) is about the development of technologies, systems, practices, and applications to analyze critical business data so as to gain new insights about business and markets. The new insights can be used for improving products and services, achieving better operational efficiency, and fostering customer relationships. In this article, we will categorize BIA research activities into three broad research directions: (a) big data analytics, (b) text analytics, and (c) network analytics. The article aims to review the state-of-the-art techniques and models and to summarize their use in BIA applications. For each research direction, we will also determine a few important questions to be addressed in future research.",
"title": ""
},
{
"docid": "e8933b0afcd695e492d5ddd9f87aeb81",
"text": "This article proposes a method for the automatic transcription of the melody, bass line, and chords in polyphonic pop music. The method uses a frame-wise pitch-salience estimator as a feature extraction front-end. For the melody and bass-line transcription, this is followed by acoustic modeling of note events and musicological modeling of note transitions. The acoustic models include a model for the target notes (i.e., melody or bass notes) and a background model. The musicological model involves key estimation and note bigrams that determine probabilities for transitions between target notes. A transcription of the melody or the bass line is obtained using Viterbi search via the target and the background note models. The performance of the melody and the bass-line transcription is evaluated using approximately 8.5 hours of realistic polyphonic music. The chord transcription maps the pitch salience estimates to a pitch-class representation and uses trained chord models and chord-transition probabilities to produce a transcription consisting of major and minor triads. For chords, the evaluation material consists of the first eight Beatles albums. The method is computationally efficient and allows causal implementation, so it can process streaming audio. Transcription of music refers to the analysis of an acoustic music signal for producing a parametric representation of the signal. The representation may be a music score with a meticulous arrangement for each instrument or an approximate description of melody and chords in the piece, for example. The latter type of transcription is commonly used in commercial songbooks of pop music and is usually sufficient for musicians or music hobbyists to play the piece. On the other hand, more detailed transcriptions are often employed in classical music to preserve the exact arrangement of the composer.",
"title": ""
},
{
"docid": "150f27f47e9ffd6cd4bc0756bd08aed4",
"text": "Sunni extremism poses a significant danger to society, yet it is relatively easy for these extremist organizations to spread jihadist propaganda and recruit new members via the Internet, Darknet, and social media. The sheer volume of these sites make them very difficult to police. This paper discusses an approach that can assist with this problem, by automatically identifying a subset of web pages and social media content (or any text) that contains extremist content. The approach utilizes machine learning, specifically neural networks and deep learning, to classify text as containing “extremist” or “benign” (i.e., not extremist) content. This method is robust and can effectively learn to classify extremist multilingual text of varying length. This study also involved the construction of a high quality dataset for training and testing, put together by a team of 40 people (some with fluency in Arabic) who expended 9,500 hours of combined effort. This dataset should facilitate future research on this topic.",
"title": ""
},
{
"docid": "e0382c9d739281b4bc78f4a69827ac37",
"text": "Of numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, both LBR and Super-Parent TAN have demonstrated remarkable error performance. However, both techniques obtain this outcome at a considerable computational cost. We present a new approach to weakening the attribute independence assumption by averaging all of a constrained class of classifiers. In extensive experiments this technique delivers comparable prediction accuracy to LBR and Super-Parent TAN with substantially improved computational efficiency at test time relative to the former and at training time relative to the latter. The new algorithm is shown to have low variance and is suited to incremental learning.",
"title": ""
},
{
"docid": "bf21fd50b793f74d5d0b026177552d2e",
"text": "This paper aims to evaluate the security and accuracy of Multi-Factor Biometric Authentication (MFBA) schemes that are based on applying UserBased Transformations (UBTs) on biometric features. Typically, UBTs employ transformation keys generated from passwords/PINs or retrieved from tokens. In this paper, we not only highlight the importance of simulating the scenario of compromised transformation keys rigorously, but also show that there has been misevaluation of this scenario as the results can be easily misinterpreted. In particular, we expose the falsehood of the widely reported claim in the literature that in the case of stolen keys, authentication accuracy drops but remains close to the authentication accuracy of biometric only system. We show that MFBA systems setup to operate at zero (%) Equal Error Rates (EER) can be undermined in the event of keys being compromised where the False Acceptance Rate reaches unacceptable levels. We demonstrate that for commonly used recognition schemes the FAR could be as high as 21%, 56%, and 66% for iris, fingerprint, and face biometrics respectively when using stolen transformation keys compared to near zero (%) EER when keys are assumed secure. We also discuss the trade off between improving accuracy of biometric systems using additional authentication factor(s) and compromising the security when the additional factor(s) are compromised. Finally, we propose mechanisms to enhance the security as well as the accuracy of MFBA schemes.",
"title": ""
},
{
"docid": "22445127362a9a2b16521a4a48f24686",
"text": "This work introduces the engineering design of a device capable to detect serum turbidity. We hypothesized that an electronic, portable, and low cost device that can provide objective, quantitative measurements of serum turbidity might have the potential to improve the early detection of neonatal sepsis. The design features, testing methodologies, and the obtained results are described. The final electronic device was evaluated in two experiments. The first one consisted in recording the turbidity value measured by the device for different solutions with known concentrations and different degrees of turbidity. The second analysis demonstrates a positive correlation between visual turbidity estimation and electronic turbidity measurement. Furthermore, our device demonstrated high turbidity in serum from two neonates with sepsis (one with a confirmed positive blood culture; the other one with a clinical diagnosis). We conclude that our electronic device may effectively measure serum turbidity at the bedside. Future studies will widen the possibility of additional clinical implications.",
"title": ""
},
{
"docid": "0332be71a529382e82094239db31ea25",
"text": "Nguyen and Shparlinski recently presented a polynomial-time algorithm that provably recovers the signer’s secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).",
"title": ""
},
{
"docid": "bb6314a8e6ec728d09aa37bfffe5c835",
"text": "In recent years, Convolutional Neural Network (CNN) has been extensively applied in the field of computer vision, which has also made remarkable achievements. However, the CNN models are computation-intensive and memory-consuming, which hinders the deployment of CNN-based methods on resource-limited embedded platforms. Therefore, this paper gives insight into low numerical precision Convolutional Neural Networks. At first, an image classification CNN model is quantized into 8-bit dynamic fixed-point with no more than 1% accuracy drop and then the method of conducting inference on low-cost ARM processor has been proposed. Experimental results verified the effectiveness of this method. Besides, our proof-of-concept prototype implementation can obtain a frame rate of 4.54fps when running on single Cortex-A72 core under 1.8GHz working frequency and 6.48 watts of gross power consumption.",
"title": ""
},
{
"docid": "0cbb4731b58c440752847874bfdad63a",
"text": "In order to increase accuracy of the linear array CCD edge detection system, a wavelet-based sub-pixel edge detection method is proposed, the basic process is like this: firstly, according to the step gradient features, automatically calculate the pixel-level border of the CCD image. Then use the wavelet transform algorithm to devide the image’s edge location in sub-pixel level, thus detecting the sub-pixel edge. In this way we prove that the method has no principle error and at the same time possesses a good anti-noise performance. Experiments show that under the circumstance of no special requirements, the accuracy of the method is greater than 0.02 pixel, thus verifying the correctness of the theory.",
"title": ""
}
] |
scidocsrr
|
714fb6dba1be46c6082bc417faf4dcbb
|
Robust 2D/3D face mask presentation attack detection scheme by exploring multiple features and comparison score level fusion
|
[
{
"docid": "db5865f8f8701e949a9bb2f41eb97244",
"text": "This paper proposes a method for constructing local image descriptors which efficiently encode texture information and are suitable for histogram based representation of image regions. The method computes a binary code for each pixel by linearly projecting local image patches onto a subspace, whose basis vectors are learnt from natural images via independent component analysis, and by binarizing the coordinates in this basis via thresholding. The length of the binary code string is determined by the number of basis vectors. Image regions can be conveniently represented by histograms of pixels' binary codes. Our method is inspired by other descriptors which produce binary codes, such as local binary pattern and local phase quantization. However, instead of heuristic code constructions, the proposed approach is based on statistics of natural images and this improves its modeling capacity. The experimental results show that our method improves accuracy in texture recognition tasks compared to the state-of-the-art.",
"title": ""
},
{
"docid": "2967df08ad0b9987ce2d6cb6006d3e69",
"text": "As a crucial security problem, anti-spoofing in biometrics, and particularly for the face modality, has achieved great progress in the recent years. Still, new threats arrive inform of better, more realistic and more sophisticated spoofing attacks. The objective of the 2nd Competition on Counter Measures to 2D Face Spoofing Attacks is to challenge researchers to create counter measures effectively detecting a variety of attacks. The submitted propositions are evaluated on the Replay-Attack database and the achieved results are presented in this paper.",
"title": ""
}
] |
[
{
"docid": "53477003e3c57381201a69e7cc54cfc9",
"text": "Twitter - a microblogging service that enables users to post messages (\"tweets\") of up to 140 characters - supports a variety of communicative practices; participants use Twitter to converse with individuals, groups, and the public at large, so when conversations emerge, they are often experienced by broader audiences than just the interlocutors. This paper examines the practice of retweeting as a way by which participants can be \"in a conversation.\" While retweeting has become a convention inside Twitter, participants retweet using different styles and for diverse reasons. We highlight how authorship, attribution, and communicative fidelity are negotiated in diverse ways. Using a series of case studies and empirical data, this paper maps out retweeting as a conversational practice.",
"title": ""
},
{
"docid": "69f853b90b837211e24155a2f55b9a95",
"text": "We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2 , for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on three different tasks: (1) object classification, (2) semantic segmentation, and (3) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the PenTree bank dataset, demonstrate the superior performance of our method over the state-ofthe-art methods. Our network has better generalization properties than ShuffleNetv2 when tested on the MSCOCO multi-object classification task and the Cityscapes urban scene semantic segmentation task. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at https://github.com/sacmehta/ESPNetv2.",
"title": ""
},
{
"docid": "630e44732755c47fc70be111e40c7b67",
"text": "An algebra for geometric reasoning is developed that is amenable to software implementation. The features of the algebra are chosen to support geometric programming of the variety found in computer graphics and computer aided geometric design applications. The implementation of the algebra in C++ is described, and several examples illustrating the use of this software are given.",
"title": ""
},
{
"docid": "071ba3d1cec138011f398cae8589b77b",
"text": "The term ‘vulnerability’ is used in many different ways by various scholarly communities. The resulting disagreement about the appropriate definition of vulnerability is a frequent cause for misunderstanding in interdisciplinary research on climate change and a challenge for attempts to develop formal models of vulnerability. Earlier attempts at reconciling the various conceptualizations of vulnerability were, at best, partly successful. This paper presents a generally applicable conceptual framework of vulnerability that combines a nomenclature of vulnerable situations and a terminology of vulnerability concepts based on the distinction of four fundamental groups of vulnerability factors. This conceptual framework is applied to characterize the vulnerability concepts employed by the main schools of vulnerability research and to review earlier attempts at classifying vulnerability concepts. None of these onedimensional classification schemes reflects the diversity of vulnerability concepts identified in this review. The wide range of policy responses available to address the risks from global climate change suggests that climate impact, vulnerability, and adaptation assessments will continue to apply a variety of vulnerability concepts. The framework presented here provides the much-needed conceptual clarity and facilitates bridging the various approaches to researching vulnerability to climate change. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ff91ed2072c93eeae5f254fb3de0d780",
"text": "Machine learning requires access to all the data used for training. Recently, Google Research proposed Federated Learning as an alternative, where the training data is distributed over a federation of clients that each only access their own training data; the partially trained model is updated in a distributed fashion to maintain a situation where the data from all participating clients remains unknown. In this research we construct different distributions of the DMOZ dataset over the clients in the network and compare the resulting performance of Federated Averaging when learning a classifier. We find that the difference in spread of topics for each client has a strong correlation with the performance of the Federated Averaging algorithm.",
"title": ""
},
{
"docid": "2382ab2b71be5dfbd1ba9fb4bf6536fc",
"text": "A full-bridge converter which employs a coupled inductor to achieve zero-voltage switching of the primary switches in the entire line and load range is described. Because the coupled inductor does not appear as a series inductance in the load current path, it does not cause a loss of duty cycle or severe voltage ringing across the output rectifier. The operation and performance of the proposed converter is verified on a 670-W prototype.",
"title": ""
},
{
"docid": "737bc68c51d2ae7665c47a060da3e25f",
"text": "Self-regulatory strategies of goal setting and goal striving are analyzed in three experiments. Experiment 1 uses fantasy realization theory (Oettingen, in: J. Brandstätter, R.M. Lerner (Eds.), Action and Self Development: Theory and Research through the Life Span, Sage Publications Inc, Thousand Oaks, CA, 1999, pp. 315-342) to analyze the self-regulatory processes of turning free fantasies about a desired future into binding goals. School children 8-12 years of age who had to mentally elaborate a desired academic future as well as present reality standing in its way, formed stronger goal commitments than participants solely indulging in the desired future or merely dwelling on present reality (Experiment 1). Effective implementation of set goals is addressed in the second and third experiments (Gollwitzer, Am. Psychol. 54 (1999) 493-503). Adolescents who had to furnish a set educational goal with relevant implementation intentions (specifying where, when, and how they would start goal pursuit) were comparatively more successful in meeting the goal (Experiment 2). Linking anticipated si tuations with goal-directed behaviors (i.e., if-then plans) rather than the mere thinking about good opportunities to act makes implementation intentions facilitate action initiation (Experiment 3). ©2001 Elsevier Science Ltd. All rights reserved. _____________________________________________________________________________________ Successful goal attainment demands completing two different tasks. People have to first turn their desires into binding goals, and second they have to attain the set goal. Both tasks benefit from selfregulatory strategies. In this article we describe a series of experiments with children, adolescents, and young adults that investigate self-regulatory processes facilitating effective goal setting and successful goal striving. The experimental studies investigate (1) different routes to goal setting depending on how",
"title": ""
},
{
"docid": "3c8ac7bd31d133b4d43c0d3a0f08e842",
"text": "How we teach and learn is undergoing a revolution, due to changes in technology and connectivity. Education may be one of the best application areas for advanced NLP techniques, and NLP researchers have much to contribute to this problem, especially in the areas of learning to write, mastery learning, and peer learning. In this paper I consider what happens when we convert natural language processors into natural language coaches. 1 Why Should You Care, NLP Researcher? There is a revolution in learning underway. Students are taking Massive Open Online Courses as well as online tutorials and paid online courses. Technology and connectivity makes it possible for students to learn from anywhere in the world, at any time, to fit their schedules. And in today’s knowledge-based economy, going to school only in one’s early years is no longer enough; in future most people are going to need continuous, lifelong education. Students are changing too — they expect to interact with information and technology. Fortunately, pedagogical research shows significant benefits of active learning over passive methods. The modern view of teaching means students work actively in class, talk with peers, and are coached more than graded by their instructors. In this new world of education, there is a great need for NLP research to step in and help. I hope in this paper to excite colleagues about the possibilities and suggest a few new ways of looking at them. I do not attempt to cover the field of language and learning comprehensively, nor do I claim there is no work in the field. In fact there is quite a bit, such as a recent special issue on language learning resources (Sharoff et al., 2014), the long running ACL workshops on Building Educational Applications using NLP (Tetreault et al., 2015), and a recent shared task competition on grammatical error detection for second language learners (Ng et al., 2014). But I hope I am casting a few interesting thoughts in this direction for those colleagues who are not focused on this particular topic.",
"title": ""
},
{
"docid": "893437dbc30509dc5a1133ab74d4b78b",
"text": "Light scattered from multiple surfaces can be used to retrieve information of hidden environments. However, full three-dimensional retrieval of an object hidden from view by a wall has only been achieved with scanning systems and requires intensive computational processing of the retrieved data. Here we use a non-scanning, single-photon single-pixel detector in combination with a deep convolutional artificial neural network: this allows us to locate the position and to also simultaneously provide the actual identity of a hidden person, chosen from a database of people (N = 3). Artificial neural networks applied to specific computational imaging problems can therefore enable novel imaging capabilities with hugely simplified hardware and processing times.",
"title": ""
},
{
"docid": "5b55b1c913aa9ec461c6c51c3d00b11b",
"text": "Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.",
"title": ""
},
{
"docid": "da4d3534f0f8cf463d4dfff9760b68f4",
"text": "While recommendation approaches exploiting different input sources have started to proliferate in the literature, an explicit study of the effect of the combination of heterogeneous inputs is still missing. On the other hand, in this context there are sides to recommendation quality requiring further characterisation and methodological research –a gap that is acknowledged in the field. We present a comparative study on the influence that different types of information available in social systems have on item recommendation. Aiming to identify which sources of user interest evidence –tags, social contacts, and user-item interaction data– are more effective to achieve useful recommendations, and in what aspect, we evaluate a number of content-based, collaborative filtering, and social recommenders on three datasets obtained from Delicious, Last.fm, and MovieLens. Aiming to determine whether and how combining such information sources may enhance over individual recommendation approaches, we extend the common accuracy-oriented evaluation practice with various metrics to measure further recommendation quality dimensions, namely coverage, diversity, novelty, overlap, and relative diversity between ranked item recommendations. We report empiric observations showing that exploiting tagging information by content-based recommenders provides high coverage and novelty, and combining social networking and collaborative filtering information by hybrid recommenders results in high accuracy and diversity. This, along with the fact that recommendation lists from the evaluated approaches had low overlap and relative diversity values between them, gives insights that meta-hybrid recommenders combining the above strategies may provide valuable, balanced item suggestions in terms of performance and non-performance metrics.",
"title": ""
},
{
"docid": "729b29b5ab44102541f3ebf8d24efec3",
"text": "In the cognitive neuroscience literature on the distinction between categorical and coordinate spatial relations, it has often been observed that categorical spatial relations are referred to linguistically by words like English prepositions, many of which specify binary oppositions-e.g., above/below, left/right, on/off, in/out. However, the actual semantic content of English prepositions, and of comparable word classes in other languages, has not been carefully considered. This paper has three aims. The first and most important aim is to inform cognitive neuroscientists interested in spatial representation about relevant research on the kinds of categorical spatial relations that are encoded in the 6000+ languages of the world. Emphasis is placed on cross-linguistic similarities and differences involving deictic relations, topological relations, and projective relations, the last of which are organized around three distinct frames of reference--intrinsic, relative, and absolute. The second aim is to review what is currently known about the neuroanatomical correlates of linguistically encoded categorical spatial relations, with special focus on the left supramarginal and angular gyri, and to suggest ways in which cross-linguistic data can help guide future research in this area of inquiry. The third aim is to explore the interface between language and other mental systems, specifically by summarizing studies which suggest that although linguistic and perceptual/cognitive representations of space are at least partially distinct, language nevertheless has the power to bring about not only modifications of perceptual sensitivities but also adjustments of cognitive styles.",
"title": ""
},
{
"docid": "4e2b0d647da57a96085786c5aa2d15d9",
"text": "We present a method for reinforcement learning of closely related skills that are parameterized via a skill embedding space. We learn such skills by taking advantage of latent variables and exploiting a connection between reinforcement learning and variational inference. The main contribution of our work is an entropyregularized policy gradient formulation for hierarchical policies, and an associated, data-efficient and robust off-policy gradient algorithm based on stochastic value gradients. We demonstrate the effectiveness of our method on several simulated robotic manipulation tasks. We find that our method allows for discovery of multiple solutions and is capable of learning the minimum number of distinct skills that are necessary to solve a given set of tasks. In addition, our results indicate that the hereby proposed technique can interpolate and/or sequence previously learned skills in order to accomplish more complex tasks, even in the presence of sparse rewards.",
"title": ""
},
{
"docid": "e0217457b00d4c1ba86fc5d9faede342",
"text": "This paper reviews the first challenge on efficient perceptual image enhancement with the focus on deploying deep learning models on smartphones. The challenge consisted of two tracks. In the first one, participants were solving the classical image super-resolution problem with a bicubic downscaling factor of 4. The second track was aimed at real-world photo enhancement, and the goal was to map low-quality photos from the iPhone 3GS device to the same photos captured with a DSLR camera. The target metric used in this challenge combined the runtime, PSNR scores and solutions’ perceptual results measured in the user study. To ensure the efficiency of the submitted models, we additionally measured their runtime and memory requirements on Android smartphones. The proposed solutions significantly improved baseline results defining the state-of-the-art for image enhancement on smartphones.",
"title": ""
},
{
"docid": "02138b6fea0d80a6c365cafcc071e511",
"text": "Quantum scrambling is the dispersal of local information into many-body quantum entanglements and correlations distributed throughout an entire system. This concept accompanies the dynamics of thermalization in closed quantum systems, and has recently emerged as a powerful tool for characterizing chaos in black holes1–4. However, the direct experimental measurement of quantum scrambling is difficult, owing to the exponential complexity of ergodic many-body entangled states. One way to characterize quantum scrambling is to measure an out-of-time-ordered correlation function (OTOC); however, because scrambling leads to their decay, OTOCs do not generally discriminate between quantum scrambling and ordinary decoherence. Here we implement a quantum circuit that provides a positive test for the scrambling features of a given unitary process5,6. This approach conditionally teleports a quantum state through the circuit, providing an unambiguous test for whether scrambling has occurred, while simultaneously measuring an OTOC. We engineer quantum scrambling processes through a tunable three-qubit unitary operation as part of a seven-qubit circuit on an ion trap quantum computer. Measured teleportation fidelities are typically about 80 per cent, and enable us to experimentally bound the scrambling-induced decay of the corresponding OTOC measurement. A quantum circuit in an ion-trap quantum computer provides a positive test for the scrambling features of a given unitary process.",
"title": ""
},
{
"docid": "8321eecac6f8deb25ffd6c1b506c8ee3",
"text": "Propelled by a fast evolving landscape of techniques and datasets, data science is growing rapidly. Against this background, topological data analysis (TDA) has carved itself a niche for the analysis of datasets that present complex interactions and rich structures. Its distinctive feature, topology, allows TDA to detect, quantify and compare the mesoscopic structures of data, while also providing a language able to encode interactions beyond networks. Here we briefly present the TDA paradigm and some applications, in order to highlight its relevance to the data science community.",
"title": ""
},
{
"docid": "db2e7cc9ea3d58e0c625684248e2ef80",
"text": "PURPOSE\nTo review applications of Ajzen's theory of planned behavior in the domain of health and to verify the efficiency of the theory to explain and predict health-related behaviors.\n\n\nMETHODS\nMost material has been drawn from Current Contents (Social and Behavioral Sciences and Clinical Medicine) from 1985 to date, together with all peer-reviewed articles cited in the publications thus identified.\n\n\nFINDINGS\nThe results indicated that the theory performs very well for the explanation of intention; an averaged R2 of .41 was observed. Attitude toward the action and perceived behavioral control were most often the significant variables responsible for this explained variation in intention. The prediction of behavior yielded an averaged R2 of .34. Intention remained the most important predictor, but in half of the studies reviewed perceived behavioral control significantly added to the prediction.\n\n\nCONCLUSIONS\nThe efficiency of the model seems to be quite good for explaining intention, perceived behavioral control being as important as attitude across health-related behavior categories. The efficiency of the theory, however, varies between health-related behavior categories.",
"title": ""
},
{
"docid": "06e04aec6dccf454b63c98b4c5e194e3",
"text": "Existing measures of peer pressure and conformity may not be suitable for screening large numbers of adolescents efficiently, and few studies have differentiated peer pressure from theoretically related constructs, such as conformity or wanting to be popular. We developed and validated short measures of peer pressure, peer conformity, and popularity in a sample ( n= 148) of adolescent boys and girls in grades 11 to 13. Results showed that all measures constructed for the study were internally consistent. Although all measures of peer pressure, conformity, and popularity were intercorrelated, peer pressure and peer conformity were stronger predictors of risk behaviors than measures assessing popularity, general conformity, or dysphoria. Despite a simplified scoring format, peer conformity vignettes were equal to if not better than the peer pressure measures in predicting risk behavior. Findings suggest that peer pressure and peer conformity are potentially greater risk factors than a need to be popular, and that both peer pressure and peer conformity can be measured with short scales suitable for large-scale testing.",
"title": ""
},
{
"docid": "6c532169b4e169b9060ab9e17cb42602",
"text": "The complete nucleotide sequence of tomato infectious chlorosis virus (TICV) was determined and compared with those of other members of the genus Crinivirus. RNA 1 is 8,271 nucleotides long with three open reading frames and encodes proteins involved in replication. RNA 2 is 7,913 nucleotides long and encodes eight proteins common within the genus Crinivirus that are involved in genome protection, movement and other functions yet to be identified. Similarity between TICV and other criniviruses varies throughout the genome but TICV is related more closely to lettuce infectious yellows virus than to any other crinivirus, thus identifying a third group within the genus.",
"title": ""
}
] |
scidocsrr
|
370c6f1eee3d5470541dfaf9052d800c
|
Regressing a 3D Face Shape from a Single Image
|
[
{
"docid": "9ecb74866ca42b7fd559145deaed52a4",
"text": "We present an efficient and robust method of locating a set of feature points in an object of interest. From a training set we construct a joint model of the appearance of each feature together with their relative positions. The model is fitted to an unseen image in an iterative manner by generating templates using the joint model and the current parameter estimates, correlating the templates with the target image to generate response images and optimising the shape parameters so as to maximise the sum of responses. The appearance model is similar to that used in the Active Appearance Models (AAM) [T.F. Cootes, G.J. Edwards, C.J. Taylor, Active appearance models, in: Proceedings of the 5th European Conference on Computer Vision 1998, vol. 2, Freiburg, Germany, 1998.]. However in our approach the appearance model is used to generate likely feature templates, instead of trying to approximate the image pixels directly. We show that when applied to a wide range of data sets, our Constrained Local Model (CLM) algorithm is more robust and more accurate than the AAM search method, which relies on the image reconstruction error to update the model parameters. We demonstrate improved localisation accuracy on photographs of human faces, magnetic resonance (MR) images of the brain and a set of dental panoramic tomograms. We also show improved tracking performance on a challenging set of in car video sequences. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1d73817f8b1b54a82308106ee526a62b",
"text": "To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of markers and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction and extension to multi-view reconstruction. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.",
"title": ""
},
{
"docid": "2f1ba4ba5cff9a6e614aa1a781bf1b13",
"text": "Face information processing relies on the quality of data resource. From the data modality point of view, a face database can be 2D or 3D, and static or dynamic. From the task point of view, the data can be used for research of computer based automatic face recognition, face expression recognition, face detection, or cognitive and psychological investigation. With the advancement of 3D imaging technologies, 3D dynamic facial sequences (called 4D data) have been used for face information analysis. In this paper, we focus on the modality of 3D dynamic data for the task of facial expression recognition. We present a newly created high-resolution 3D dynamic facial expression database, which is made available to the scientific research community. The database contains 606 3D facial expression sequences captured from 101 subjects of various ethnic backgrounds. The database has been validated through our facial expression recognition experiment using an HMM based 3D spatio-temporal facial descriptor. It is expected that such a database shall be used to facilitate the facial expression analysis from a static 3D space to a dynamic 3D space, with a goal of scrutinizing facial behavior at a higher level of detail in a real 3D spatio-temporal domain.",
"title": ""
}
] |
[
{
"docid": "c4577ac95efb55a07e0748a10a9d4658",
"text": "This paper describes the design of a six-axis microelectromechanical systems (MEMS) force-torque sensor. A movable body is suspended by flexures that allow deflections and rotations along the x-, y-, and z-axes. The orientation of this movable body is sensed by seven capacitors. Transverse sensing is used for all capacitors, resulting in a high sensitivity. A batch fabrication process is described as capable of fabricating these multiaxis sensors with a high yield. The force sensor is experimentally investigated, and a multiaxis calibration method is described. Measurements show that the resolution is on the order of a micro-Newton and nano-Newtonmeter. This is the first six-axis MEMS force sensor that has been successfully developed.",
"title": ""
},
{
"docid": "f6574fbbdd53b2bc92af485d6c756df0",
"text": "A comparative analysis between Nigerian English (NE) and American English (AE) is presented in this article. The study is aimed at highlighting differences in the speech parameters, and how they influence speech processing and automatic speech recognition (ASR). The UILSpeech corpus of Nigerian-Accented English isolated word recordings, read speech utterances, and video recordings are used as a reference for Nigerian English. The corpus captures the linguistic diversity of Nigeria with data collected from native speakers of Hausa, Igbo, and Yoruba languages. The UILSpeech corpus is intended to provide a unique opportunity for application and expansion of speech processing techniques to a limited resource language dialect. The acoustic-phonetic differences between American English (AE) and Nigerian English (NE) are studied in terms of pronunciation variations, vowel locations in the formant space, mean fundamental frequency, and phone model distances in the acoustic space, as well as through visual speech analysis of the speakers’ articulators. A strong impact of the AE–NE acoustic mismatch on ASR is observed. A combination of model adaptation and extension of the AE lexicon for newly established NE pronunciation variants is shown to substantially improve performance of the AE-trained ASR system in the new NE task. This study is a part of the pioneering efforts towards incorporating speech technology in Nigerian English and is intended to provide a development basis for other low resource language dialects and languages.",
"title": ""
},
{
"docid": "27e60092f83e7572a5a7776113d8c97c",
"text": "Although cuckoo hashing has significant applications in both theoretical and practical settings, a relevant downside is that it requires lookups to multiple locations. In many settings, where lookups are expensive, cuckoo hashing becomes a less compelling alternative. One such standard setting is when memory is arranged in large pages, and a major cost is the number of page accesses. We propose the study of cuckoo hashing with pages, advocating approaches where each key has several possible locations, or cells, on a single page, and additional choices on a second backup page. We show experimentally that with k cell choices on one page and a single backup cell choice, one can achieve nearly the same loads as when each key has k+1 random cells to choose from, with most lookups requiring just one page access, even when keys are placed online using a simple algorithm. While our results are currently experimental, they suggest several interesting new open theoretical questions for cuckoo hashing with pages.",
"title": ""
},
{
"docid": "deed140862c62fa8be4a8a58ffc1d7dc",
"text": "Gender-affirmation surgery is often the final gender-confirming medical intervention sought by those patients suffering from gender dysphoria. In the male-to-female (MtF) transgendered patient, the creation of esthetic and functional external female genitalia with a functional vaginal channel is of the utmost importance. The aim of this review and meta-analysis is to evaluate the epidemiology, presentation, management, and outcomes of neovaginal complications in the MtF transgender reassignment surgery patients. PUBMED was searched in accordance with PRISMA guidelines for relevant articles (n = 125). Ineligible articles were excluded and articles meeting all inclusion criteria went on to review and analysis (n = 13). Ultimately, studies reported on 1,684 patients with an overall complication rate of 32.5% and a reoperation rate of 21.7% for non-esthetic reasons. The most common complication was stenosis of the neo-meatus (14.4%). Wound infection was associated with an increased risk of all tissue-healing complications. Use of sacrospinous ligament fixation (SSL) was associated with a significantly decreased risk of prolapse of the neovagina. Gender-affirmation surgery is important in the treatment of gender dysphoric patients, but there is a high complication rate in the reported literature. Variability in technique and complication reporting standards makes it difficult to assess the accurately the current state of MtF gender reassignment surgery. Further research and implementation of standards is necessary to improve patient outcomes. Clin. Anat. 31:191-199, 2018. © 2017 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "fb75e0c18c4852afac162b60554b67b1",
"text": "OBJECTIVE\nTo evaluate the feasibility and safety of home rehabilitation of the hand using a robotic glove, and, in addition, its effectiveness, in hemiplegic patients after stroke.\n\n\nMETHODS\nIn this non-randomized pilot study, 21 hemiplegic stroke patients (Ashworth spasticity index ≤ 3) were prescribed, after in-hospital rehabilitation, a 2-month home-program of intensive hand training using the Gloreha Lite glove that provides computer-controlled passive mobilization of the fingers. Feasibility was measured by: number of patients who completed the home-program, minutes of exercise and number of sessions/patient performed. Safety was assessed by: hand pain with a visual analog scale (VAS), Ashworth spasticity index for finger flexors, opponents of the thumb and wrist flexors, and hand edema (circumference of forearm, wrist and fingers), measured at start (T0) and end (T1) of rehabilitation. Hand motor function (Motricity Index, MI), fine manual dexterity (Nine Hole Peg Test, NHPT) and strength (Grip test) were also measured at T0 and T1.\n\n\nRESULTS\nPatients performed, over a mean period 56 (49-63) days, a total of 1699 (1353-2045) min/patient of exercise with Gloreha Lite, 5.1 (4.3-5.8) days/week. Seventeen patients (81%) completed the full program. The mean VAS score of hand pain, Ashworth spasticity index and hand edema did not change significantly at T1 compared to T0. The MI, NHPT and Grip test improved significantly (p = 0.0020, 0.0156 and 0.0024, respectively) compared to baseline.\n\n\nCONCLUSION\nGloreha Lite is feasible and safe for use in home rehabilitation. The efficacy data show a therapeutic effect which need to be confirmed by a randomized controlled study.",
"title": ""
},
{
"docid": "cebc36cd572740069ab22e8181c405c4",
"text": "Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. This is the first use of deep learning in the context evolutionary RL.",
"title": ""
},
{
"docid": "c3f7a3a4e31a610e6ecc149cede3db30",
"text": "OBJECTIVES\nCross-language qualitative research occurs when a language barrier is present between researchers and participants. The language barrier is frequently mediated through the use of a translator or interpreter. The purpose of this analysis of cross-language qualitative research was threefold: (1) review the methods literature addressing cross-language research; (2) synthesize the methodological recommendations from the literature into a list of criteria that could evaluate how researchers methodologically managed translators and interpreters in their qualitative studies; (3) test these criteria on published cross-language qualitative studies.\n\n\nDATA SOURCES\nA group of 40 purposively selected cross-language qualitative studies found in nursing and health sciences journals.\n\n\nREVIEW METHODS\nThe synthesis of the cross-language methods literature produced 14 criteria to evaluate how qualitative researchers managed the language barrier between themselves and their study participants. To test the criteria, the researcher conducted a summative content analysis framed by discourse analysis techniques of the 40 cross-language studies.\n\n\nRESULTS\nThe evaluation showed that only 6 out of 40 studies met all the criteria recommended by the cross-language methods literature for the production of trustworthy results in cross-language qualitative studies. Multiple inconsistencies, reflecting disadvantageous methodological choices by cross-language researchers, appeared in the remaining 33 studies. To name a few, these included rendering the translator or interpreter as an invisible part of the research process, failure to pilot test interview questions in the participant's language, no description of translator or interpreter credentials, failure to acknowledge translation as a limitation of the study, and inappropriate methodological frameworks for cross-language research.\n\n\nCONCLUSIONS\nThe finding about researchers making the role of the translator or interpreter invisible during the research process supports studies completed by other authors examining this issue. The analysis demonstrated that the criteria produced by this study may provide useful guidelines for evaluating cross-language research and for novice cross-language researchers designing their first studies. Finally, the study also indicates that researchers attempting cross-language studies need to address the methodological issues surrounding language barriers between researchers and participants more systematically.",
"title": ""
},
{
"docid": "9738485d5c61ac43e3a1e101b063dfd5",
"text": "Sentiment analysis is one of the most popular natural language processing techniques. It aims to identify the sentiment polarity (positive, negative, neutral or mixed) within a given text. The proper lexicon knowledge is very important for the lexicon-based sentiment analysis methods since they hinge on using the polarity of the lexical item to determine a text's sentiment polarity. However, it is quite common that some lexical items appear positive in the text of one domain but appear negative in another. In this paper, we propose an innovative knowledge building algorithm to extract sentiment lexicon knowledge through computing their polarity value based on their polarity distribution in text dataset, such as in a set of domain specific reviews. The proposed algorithm was tested by a set of domain microblogs. The results demonstrate the effectiveness of the proposed method. The proposed lexicon knowledge extraction method can enhance the performance of knowledge based sentiment analysis.",
"title": ""
},
{
"docid": "a59f82d98f978701d6a4271db1674d2a",
"text": "Hyperspectral imagery typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image; however, when used in statistical pattern-classification tasks, the resulting high-dimensional feature spaces often tend to result in ill-conditioned formulations. Popular dimensionality-reduction techniques such as principal component analysis, linear discriminant analysis, and their variants typically assume a Gaussian distribution. The quadratic maximum-likelihood classifier commonly employed for hyperspectral analysis also assumes single-Gaussian class-conditional distributions. Departing from this single-Gaussian assumption, a classification paradigm designed to exploit the rich statistical structure of the data is proposed. The proposed framework employs local Fisher's discriminant analysis to reduce the dimensionality of the data while preserving its multimodal structure, while a subsequent Gaussian mixture model or support vector machine provides effective classification of the reduced-dimension multimodal data. Experimental results on several different multiple-class hyperspectral-classification tasks demonstrate that the proposed approach significantly outperforms several traditional alternatives.",
"title": ""
},
{
"docid": "df70cb4b1d37680cccb7d79bdea5d13b",
"text": "In this paper, we describe a system for automatic construction of user disease progression timelines from their posts in online support groups using minimal supervision. In recent years, several online support groups have been established which has led to a huge increase in the amount of patient-authored text available. Creating systems which can automatically extract important medical events and create disease progression timelines for users from such text can help in patient health monitoring as well as studying links between medical events and users’ participation in support groups. Prior work in this domain has used manually constructed keyword sets to detect medical events. In this work, our aim is to perform medical event detection using minimal supervision in order to develop a more general timeline construction system. Our system achieves an accuracy of 55.17%, which is 92% of the performance achieved by a supervised baseline system.",
"title": ""
},
{
"docid": "b51021e995fc4be50028a0a152db7e7a",
"text": "Human pose estimation using deep neural networks aims to map input images with large variations into multiple body keypoints, which must satisfy a set of geometric constraints and interdependence imposed by the human body model. This is a very challenging nonlinear manifold learning process in a very high dimensional feature space. We believe that the deep neural network, which is inherently an algebraic computation system, is not the most efficient way to capture highly sophisticated human knowledge, for example those highly coupled geometric characteristics and interdependence between keypoints in human poses. In this work, we propose to explore how external knowledge can be effectively represented and injected into the deep neural networks to guide its training process using learned projections that impose proper prior. Specifically, we use the stacked hourglass design and inception-resnet module to construct a fractal network to regress human pose images into heatmaps with no explicit graphical modeling. We encode external knowledge with visual features, which are able to characterize the constraints of human body models and evaluate the fitness of intermediate network output. We then inject these external features into the neural network using a projection matrix learned using an auxiliary cost function. The effectiveness of the proposed inception-resnet module and the benefit in guided learning with knowledge projection is evaluated on two widely used human pose estimation benchmarks. Our approach achieves state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "3ed5ec863971e04523a7ede434eaa80d",
"text": "This article reports on the design, implementation, and usage of the CourseMarker (formerly known as CourseMaster) courseware Computer Based Assessment (CBA) system at the University of Nottingham. Students use CourseMarker to solve (programming) exercises and to submit their solutions. CourseMarker returns immediate results and feedback to the students. Educators author a variety of exercises that benefit the students while offering practical benefits. To date, both educators and students have been hampered by CBA software that has been constructed to assess text-based or multiple-choice answers only. Although there exist a few CBA systems with some capability to automatically assess programming coursework, none assess Java programs and none are as flexible, architecture-neutral, robust, or secure as the CourseMarker CBA system.",
"title": ""
},
{
"docid": "0b231777fedf27659b4558aaabb872be",
"text": "Recognizing multiple mixed group activities from one still image is not a hard problem for humans but remains highly challenging for computer recognition systems. When modelling interactions among multiple units (i.e., more than two groups or persons), the existing approaches tend to divide them into interactions between pairwise units. However, no mathematical evidence supports this transformation. Therefore, these approaches’ performance is limited on images containing multiple activities. In this paper, we propose a generative model to provide a more reasonable interpretation for the mixed group activities contained in one image. We design a four level structure and convert the original intra-level interactions into inter-level interactions, in order to implement both interactions among multiple groups and interactions among multiple persons within a group. The proposed four-level structure makes our model more robust against the occlusion and overlap of the visible poses in images. Experimental results demonstrate that our model makes good interpretations for mixed group activities and outperforms the state-of-the-art methods on the Collective Activity Classification dataset.",
"title": ""
},
{
"docid": "eec5034991f82e0d809aba5e3eb94fe2",
"text": "This paper considers John Dewey’s dual reformist-preservationist agenda for education in the context of current debates about the role of experience in management learning. The paper argues for preserving experience-based approaches to management learning by revising the concept of experience to more clearly account for the relationship between personal and social (i.e. , tacit/explicit) knowledge. By reviewing, comparing and extending critiques of Kolb’s experiential learning theory and reconceptualizing the learning process based on post-structural analysis of psychoanalyst Jacque Lacan, the paper defines experience within the context of language and social action. This perspective is contrasted to action, cognition, critical reflection and other experience-based approaches to management learning. Implications for management theory, pedagogy and practice suggest greater emphasis on language and conversation in the learning process. Future directions for research are explored.",
"title": ""
},
{
"docid": "3900864885cf79e33683ec5c595235ad",
"text": "Digital mammogram has become the most effective technique for early breast cancer detection modality. Digital mammogram takes an electronic image of the breast and stores it directly in a computer. High quality mammogram images are high resolution and large size images. Processing these images require high computational capabilities. The transmission of these images over the net is sometimes critical especially if the diagnosis of remote radiologists is required. The aim of this study is to develop an automated system for assisting the analysis of digital mammograms. Computer image processing techniques will be applied to enhance images and this is followed by segmentation of the region of interest (ROI). Subsequently, the textural features will be extracted from the ROI. The texture features will be used to classify the ROIs as either masses or non-masses.",
"title": ""
},
{
"docid": "d0eb7de87f3d6ed3fd6c34a1f0ce47a1",
"text": "STRANGER is an automata-based string analysis tool for finding and eliminating string-related security vulnerabilities in P H applications. STRANGER uses symbolic forward and backward reachability analyses t o compute the possible values that the string expressions can take during progr am execution. STRANGER can automatically (1) prove that an application is free from specified attacks or (2) generate vulnerability signatures that c racterize all malicious inputs that can be used to generate attacks.",
"title": ""
},
{
"docid": "eacf295c0cbd52599a1567c6d4193007",
"text": "Search Ranking and Recommendations are fundamental problems of crucial interest to major Internet companies, including web search engines, content publishing websites and marketplaces. However, despite sharing some common characteristics a one-size-fits-all solution does not exist in this space. Given a large difference in content that needs to be ranked, personalized and recommended, each marketplace has a somewhat unique challenge. Correspondingly, at Airbnb, a short-term rental marketplace, search and recommendation problems are quite unique, being a two-sided marketplace in which one needs to optimize for host and guest preferences, in a world where a user rarely consumes the same item twice and one listing can accept only one guest for a certain set of dates. In this paper we describe Listing and User Embedding techniques we developed and deployed for purposes of Real-time Personalization in Search Ranking and Similar Listing Recommendations, two channels that drive 99% of conversions. The embedding models were specifically tailored for Airbnb marketplace, and are able to capture guest's short-term and long-term interests, delivering effective home listing recommendations. We conducted rigorous offline testing of the embedding models, followed by successful online tests before fully deploying them into production.",
"title": ""
},
{
"docid": "2923652ff988572a40d682e2a459707a",
"text": "Clustering analysis is a descriptive task that seeks to identify homogeneous groups of objects based on the values of their attributes. This paper proposes a new algorithm for K-medoids clustering which runs like the K-means algorithm and tests several methods for selecting initial medoids. The proposed algorithm calculates the distance matrix once and uses it for finding new medoids at every iterative step. We evaluate the proposed algorithm using real and artificial data and compare with the results of other algorithms. The proposed algorithm takes the reduced time in computation with comparable performance as compared to the Partitioning Around Medoids.",
"title": ""
},
{
"docid": "1464f9d7a60a59bfdd6399ea6cd9fd99",
"text": "Table of",
"title": ""
},
{
"docid": "34f8765ca28666cfeb94e324882a71d6",
"text": "We are living in the era of the fourth industrial revolution, namely Industry 4.0. This paper presents the main aspects related to Industry 4.0, the technologies that will enable this revolution, and the main application domains that will be affected by it. The effects that the introduction of Internet of Things (IoT), Cyber-Physical Systems (CPS), crowdsensing, crowdsourcing, cloud computing and big data will have on industrial processes will be discussed. The main objectives will be represented by improvements in: production efficiency, quality and cost-effectiveness; workplace health and safety, as well as quality of working conditions; products’ quality and availability, according to mass customisation requirements. The paper will further discuss the common denominator of these enhancements, i.e., data collection and analysis. As data and information will be crucial for Industry 4.0, crowdsensing and crowdsourcing will introduce new advantages and challenges, which will make most of the industrial processes easier with respect to traditional technologies.",
"title": ""
}
] |
scidocsrr
|
cfebf75dbb7549b5b5a59c2699d9fa6d
|
Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels
|
[
{
"docid": "5ca7e5a8770b931c070c51047ca99108",
"text": "The symmetric positive definite (SPD) matrices have been widely used in image and vision problems. Recently there are growing interests in studying sparse representation (SR) of SPD matrices, motivated by the great success of SR for vector data. Though the space of SPD matrices is well-known to form a Lie group that is a Riemannian manifold, existing work fails to take full advantage of its geometric structure. This paper attempts to tackle this problem by proposing a kernel based method for SR and dictionary learning (DL) of SPD matrices. We disclose that the space of SPD matrices, with the operations of logarithmic multiplication and scalar logarithmic multiplication defined in the Log-Euclidean framework, is a complete inner product space. We can thus develop a broad family of kernels that satisfies Mercer's condition. These kernels characterize the geodesic distance and can be computed efficiently. We also consider the geometric structure in the DL process by updating atom matrices in the Riemannian space instead of in the Euclidean space. The proposed method is evaluated with various vision problems and shows notable performance gains over state-of-the-arts.",
"title": ""
},
{
"docid": "6f484310532a757a28c427bad08f7623",
"text": "We address the problem of tracking and recognizing faces in real-world, noisy videos. We track faces using a tracker that adaptively builds a target model reflecting changes in appearance, typical of a video setting. However, adaptive appearance trackers often suffer from drift, a gradual adaptation of the tracker to non-targets. To alleviate this problem, our tracker introduces visual constraints using a combination of generative and discriminative models in a particle filtering framework. The generative term conforms the particles to the space of generic face poses while the discriminative one ensures rejection of poorly aligned targets. This leads to a tracker that significantly improves robustness against abrupt appearance changes and occlusions, critical for the subsequent recognition phase. Identity of the tracked subject is established by fusing pose-discriminant and person-discriminant features over the duration of a video sequence. This leads to a robust video-based face recognizer with state-of-the-art recognition performance. We test the quality of tracking and face recognition on real-world noisy videos from YouTube as well as the standard Honda/UCSD database. Our approach produces successful face tracking results on over 80% of all videos without video or person-specific parameter tuning. The good tracking performance induces similarly high recognition rates: 100% on Honda/UCSD and over 70% on the YouTube set containing 35 celebrities in 1500 sequences.",
"title": ""
},
{
"docid": "c41c56eeb56975c4d65e3847aa6b8b01",
"text": "We address the problem of comparing sets of images for object recognition, where the sets may represent variations in an object's appearance due to changing camera pose and lighting conditions. canonical correlations (also known as principal or canonical angles), which can be thought of as the angles between two d-dimensional subspaces, have recently attracted attention for image set matching. Canonical correlations offer many benefits in accuracy, efficiency, and robustness compared to the two main classical methods: parametric distribution-based and nonparametric sample-based matching of sets. Here, this is first demonstrated experimentally for reasonably sized data sets using existing methods exploiting canonical correlations. Motivated by their proven effectiveness, a novel discriminative learning method over sets is proposed for set classification. Specifically, inspired by classical linear discriminant analysis (LDA), we develop a linear discriminant function that maximizes the canonical correlations of within-class sets and minimizes the canonical correlations of between-class sets. Image sets transformed by the discriminant function are then compared by the canonical correlations. Classical orthogonal subspace method (OSM) is also investigated for the similar purpose and compared with the proposed method. The proposed method is evaluated on various object recognition problems using face image sets with arbitrary motion captured under different illuminations and image sets of 500 general objects taken at different views. The method is also applied to object category recognition using ETH-80 database. The proposed method is shown to outperform the state-of-the-art methods in terms of accuracy and efficiency",
"title": ""
}
] |
[
{
"docid": "91c5ad5a327026a424454779f96da601",
"text": "We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.",
"title": ""
},
{
"docid": "b71ec61f22457a5604a1c46087685e45",
"text": "Deep neural networks have been widely adopted for automatic organ segmentation from abdominal CT scans. However, the segmentation accuracy of some small organs (e.g., the pancreas) is sometimes below satisfaction, arguably because deep networks are easily disrupted by the complex and variable background regions which occupies a large fraction of the input volume. In this paper, we formulate this problem into a fixed-point model which uses a predicted segmentation mask to shrink the input region. This is motivated by the fact that a smaller input region often leads to more accurate segmentation. In the training process, we use the ground-truth annotation to generate accurate input regions and optimize network weights. On the testing stage, we fix the network parameters and update the segmentation results in an iterative manner. We evaluate our approach on the NIH pancreas segmentation dataset, and outperform the state-of-the-art by more than 4%, measured by the average Dice-Sørensen Coefficient (DSC). In addition, we report 62.43% DSC in the worst case, which guarantees the reliability of our approach in clinical applications.",
"title": ""
},
{
"docid": "5824a316f20751183676850c119c96cd",
"text": " Proposed method – Max-RGB & Gray-World • Instantiations of Minkowski norm – Optimal illuminant estimate • L6 norm: Working best overall",
"title": ""
},
{
"docid": "c2fe102ed88b248434b51130d693caca",
"text": "The Internet architecture uses congestion avoidance mechanisms implemented in the transport layer protocol like TCP to provide good service under heavy load. If network nodes distribute bandwidth fairly, the Internet would be more robust and accommodate a wide variety of applications. Various congestion and bandwidth management schemes have been proposed for this purpose and can be classified into two broad categories: packet scheduling algorithms such as fair queueing (FQ) which explicitly provide bandwidth shares by scheduling packets. They are more difficult to implement compared to FIFO queueing. The second category has active queue management schemes such as RED which use FIFO queues at the routers. They are easy to implement but don't aim to provide (and, in the presence of non-congestion-responsive sources, don't provide) fairness. An algorithm called AFD (approximate fair dropping), has been proposed to provide approximate, weighted max-min fair bandwidth allocations with relatively low complexity. AFD has since been widely adopted by the industry. This paper describes the evolution of AFD from a research project into an industry setting, focusing on the changes it has undergone in the process. AFD now serves as a traffic management module, which can be implemented either using a single FIFO or overlaid on top of extant per-flow queueing structures and which provides approximate bandwidth allocation in a simple fashion. The AFD algorithm has been implemented in several switch and router platforms at Cisco sytems, successfully transitioning from the academic world into the industry.",
"title": ""
},
{
"docid": "46baa51f8c36c9d913bc9ece46aa1919",
"text": "Radio frequency identification (RFID) has been identified as a crucial technology for the modern 21 st century knowledge-based economy. Many businesses started realising RFID to be able to improve their operational efficiency, achieve additional cost savings, and generate opportunities for higher revenues. To investigate how RFID technology has brought an impact to warehousing, a comprehensive analysis of research findings available through leading scientific article databases was conducted. Articles from years 1995 to 2010 were reviewed and analysed according to warehouse operations, RFID application domains, and benefits achieved. This paper presents four discussion topics covering RFID innovation, including its applications, perceived benefits, obstacles to its adoption and future trends. This is aimed at elucidating the current state of RFID in the warehouse and giving insights for the academics to establish new research scope and for the practitioners to evaluate their assessment of adopting RFID in the warehouse.",
"title": ""
},
{
"docid": "723aeab499abebfec38bfd8cf8484293",
"text": "Modeling and generating graphs is fundamental for studying networks in biology, engineering, and social sciences. However, modeling complex distributions over graphs and then efficiently sampling from these distributions is challenging due to the non-unique, high-dimensional nature of graphs and the complex, non-local dependencies that exist between edges in a given graph. Here we propose GraphRNN, a deep autoregressive model that addresses the above challenges and approximates any distribution of graphs with minimal assumptions about their structure. GraphRNN learns to generate graphs by training on a representative set of graphs and decomposes the graph generation process into a sequence of node and edge formations, conditioned on the graph structure generated so far. In order to quantitatively evaluate the performance of GraphRNN, we introduce a benchmark suite of datasets, baselines and novel evaluation metrics based on Maximum Mean Discrepancy, which measure distances between sets of graphs. Our experiments show that GraphRNN significantly outperforms all baselines, learning to generate diverse graphs that match the structural characteristics of a target set, while also scaling to graphs 50× larger than previous deep models.",
"title": ""
},
{
"docid": "729d9802488a45d889d891257738a65b",
"text": " Abstract— In this paper is presented an investigation of the speech recognition classification performance. This investigation on the speech recognition classification performance is performed using two standard neural networks structures as the classifier. The utilized standard neural network types include Feed-forward Neural Network (NN) with back propagation algorithm and a Radial Basis Functions Neural Networks.",
"title": ""
},
{
"docid": "b9ad751e5b7e46fd79848788b10d7ab9",
"text": "In this paper, we propose a cross-lingual convolutional neural network (CNN) model that is based on word and phrase embeddings learned from unlabeled data in two languages and dependency grammar. Compared to traditional machine translation (MT) based methods for cross lingual sentence modeling, our model is much simpler and does not need parallel corpora or language specific features. We only use a bilingual dictionary and dependency parser. This makes our model particularly appealing for resource poor languages. We evaluate our model using English and Chinese data on several sentence classification tasks. We show that our model achieves a comparable and even better performance than the traditional MT-based method.",
"title": ""
},
{
"docid": "52b5fa0494733f2f6b72df0cdfad01f4",
"text": "Requirements engineering encompasses many difficult, overarching problems inherent to its subareas of process, elicitation, specification, analysis, and validation. Requirements engineering researchers seek innovative, effective means of addressing these problems. One powerful tool that can be added to the researcher toolkit is that of machine learning. Some researchers have been experimenting with their own implementations of machine learning algorithms or with those available as part of the Weka machine learning software suite. There are some shortcomings to using “one off” solutions. It is the position of the authors that many problems exist in requirements engineering that can be supported by Weka's machine learning algorithms, specifically by classification trees. Further, the authors posit that adoption will be boosted if machine learning is easy to use and is integrated into requirements research tools, such as TraceLab. Toward that end, an initial concept validation of a component in TraceLab is presented that applies the Weka classification trees. The component is demonstrated on two different requirements engineering problems. Finally, insights gained on using the TraceLab Weka component on these two problems are offered.",
"title": ""
},
{
"docid": "77278e6ba57e82c88f66bd9155b43a50",
"text": "Up to the time when a huge corruption scandal, popularly labeled tangentopoli”(bribe city), brought down the political establishment that had ruled Italy for several decades, that country had reported one of the largest shares of capital spending in GDP among the OECD countries. After the scandal broke out and several prominent individuals were sent to jail, or even committed suicide, capital spending fell sharply. The fall seems to have been caused by a reduction in the number of capital projects being undertaken and, perhaps more importantly, by a sharp fall in the costs of the projects still undertaken. Information released by Transparency International (TI) reports that, within the space of two or three years, in the city of Milan, the city where the scandal broke out in the first place, the cost of city rail links fell by 52 percent, the cost of one kilometer of subway fell by 57 percent, and the budget for the new airport terminal was reduced by 59 percent to reflect the lower construction costs. Although one must be aware of the logical fallacy of post hoc, ergo propter hoc, the connection between the two events is too strong to be attributed to a coincidence. In fact, this paper takes the view that it could not have been a coincidence.",
"title": ""
},
{
"docid": "dfccff16f4600e8cc297296481e50b7b",
"text": "Trust models have been recently suggested as an effective security mechanism for Wireless Sensor Networks (WSNs). Considerable research has been done on modeling trust. However, most current research work only takes communication behavior into account to calculate sensor nodes' trust value, which is not enough for trust evaluation due to the widespread malicious attacks. In this paper, we propose an Efficient Distributed Trust Model (EDTM) for WSNs. First, according to the number of packets received by sensor nodes, direct trust and recommendation trust are selectively calculated. Then, communication trust, energy trust and data trust are considered during the calculation of direct trust. Furthermore, trust reliability and familiarity are defined to improve the accuracy of recommendation trust. The proposed EDTM can evaluate trustworthiness of sensor nodes more precisely and prevent the security breaches more effectively. Simulation results show that EDTM outperforms other similar models, e.g., NBBTE trust model.",
"title": ""
},
{
"docid": "6d6b844d89cd16196c27b70dec2bcd4d",
"text": "Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms \"error\" and \"discrepancy\" and the relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and system-based. Possible strategies to minimise error are considered, along with the means of dealing with perceived underperformance when it is identified. The inevitability of imperfection is explained, while the importance of striving to minimise such imperfection is emphasised.\n\n\nTEACHING POINTS\n• Discrepancies between radiology reports and subsequent patient outcomes are not inevitably errors. • Radiologist reporting performance cannot be perfect, and some errors are inevitable. • Error or discrepancy in radiology reporting does not equate negligence. • Radiologist errors occur for many reasons, both human- and system-derived. • Strategies exist to minimise error causes and to learn from errors made.",
"title": ""
},
{
"docid": "fe98350e6fa6d91a2e63dc19646a0307",
"text": "One of the most widely studied systems of argumentation is the one described by Dung in a paper from 1995. Unfortunately, this framework does not allow for joint attacks on arguments, which we argue must be required of any truly abstract argumentation framework. A few frameworks can be said to allow for such interactions among arguments, but for various reasons we believe that these are inadequate for modelling argumentation systems with joint attacks. In this paper we propose a generalization of the framework of Dung, which allows for sets of arguments to attack other arguments. We extend the semantics associated with the original framework to this generalization, and prove that all results in the paper by Dung have an equivalent in this more abstract framework.",
"title": ""
},
{
"docid": "355720b7bbdc6d6d30987fc0dad5602e",
"text": "To assess the likelihood of procedural success in patients with multivessel coronary disease undergoing percutaneous coronary angioplasty, 350 consecutive patients (1,100 stenoses) from four clinical sites were evaluated. Eighteen variables characterizing the severity and morphology of each stenosis and 18 patient-related variables were assessed at a core angiographic laboratory and at the clinical sites. Most patients had Canadian Cardiovascular Society class III or IV angina (72%) and two-vessel coronary disease (78%). Left ventricular function was generally well preserved (mean ejection fraction, 58 +/- 12%; range, 18-85%) and 1.9 +/- 1.0 stenoses per patient had attempted percutaneous coronary angioplasty. Procedural success (less than or equal to 50% final diameter stenosis in one or more stenoses and no major ischemic complications) was achieved in 290 patients (82.8%), and an additional nine patients (2.6%) had a reduction in diameter stenosis by 20% or more with a final diameter stenosis 51-60% and were without major complications. Major ischemic complications (death, myocardial infarction, or emergency bypass surgery) occurred in 30 patients (8.6%). In-hospital mortality was 1.1%. Stepwise regression analysis determined that a modified American College of Cardiology/American Heart Association Task Force (ACC/AHA) classification of the primary target stenosis (with type B prospectively divided into type B1 [one type B characteristic] and type B2 [greater than or equal to two type B characteristics]) and the presence of diabetes mellitus were the only variables independently predictive of procedural outcome (target stenosis modified ACC/AHA score; p less than 0.001 for both success and complications; diabetes mellitus: p = 0.003 for success and p = 0.016 for complications). Analysis of success and complications on a per stenosis dilated basis showed, for type A stenoses, a 92% success and a 2% complication rate; for type B1 stenoses, an 84% success and a 4% complication rate; for type B2 stenoses, a 76% success and a 10% complication rate; and for type C stenoses, a 61% success and a 21% complication rate. The subdivision into types B1 and B2 provided significantly more information in this clinically important intermediate risk group than did the standard ACC/AHA scheme. The stenosis characteristics of chronic total occlusion, high grade (80-99% diameter) stenosis, stenosis bend of more than 60 degrees, and excessive tortuosity were particularly predictive of adverse procedural outcome. This improved scheme may improve clinical decision making and provide a framework on which to base meaningful subgroup analysis in randomized trials assessing the efficacy of percutaneous coronary angioplasty.",
"title": ""
},
{
"docid": "69c8584255b16e6bc05fdfc6510d0dc4",
"text": "OBJECTIVE\nThis study assesses the psychometric properties of Ward's seven-subtest short form (SF) for WAIS-IV in a sample of adults with schizophrenia (SZ) and schizoaffective disorder.\n\n\nMETHOD\nSeventy patients diagnosed with schizophrenia or schizoaffective disorder were administered the full version of the WAIS-IV. Four different versions of the Ward's SF were then calculated. The subtests used were: Similarities, Digit Span, Arithmetic, Information, Coding, Picture Completion, and Block Design (BD version) or Matrix Reasoning (MR version). Prorated and regression-based formulae were assessed for each version.\n\n\nRESULTS\nThe actual and estimated factorial indexes reflected the typical pattern observed in schizophrenia. The four SFs correlated significantly with their full-version counterparts, but the Perceptual Reasoning Index (PRI) correlated below the acceptance threshold for all four versions. The regression-derived estimates showed larger differences compared to the full form. The four forms revealed comparable but generally low clinical category agreement rates for factor indexes. All SFs showed an acceptable reliability, but they were not correlated with clinical outcomes.\n\n\nCONCLUSIONS\nThe WAIS-IV SF offers a good estimate of WAIS-IV intelligence quotient, which is consistent with previous results. Although the overall scores are comparable between the four versions, the prorated forms provided a better estimation of almost all indexes. MR can be used as an alternative for BD without substantially changing the psychometric properties of the SF. However, we recommend a cautious use of these abbreviated forms when it is necessary to estimate the factor index scores, especially PRI, and Processing Speed Index.",
"title": ""
},
{
"docid": "274829e884c6ba5f425efbdce7604108",
"text": "The Internet of Things (IoT) is constantly evolving and is giving unique solutions to the everyday problems faced by man. “Smart City” is one such implementation aimed at improving the lifestyle of human beings. One of the major hurdles in most cities is its solid waste management, and effective management of the solid waste produced becomes an integral part of a smart city. This paper aims at providing an IoT based architectural solution to tackle the problems faced by the present solid waste management system. By providing a complete IoT based system, the process of tracking, collecting, and managing the solid waste can be easily automated and monitored efficiently. By taking the example of the solid waste management crisis of Bengaluru city, India, we have come up with the overall system architecture and protocol stack to give a IoT based solution to improve the reliability and efficiency of the system. By making use of sensors, we collect data from the garbage bins and send them to a gateway using LoRa technology. The data from various garbage bins are collected by the gateway and sent to the cloud over the Internet using the MQTT (Message Queue Telemetry Transport) protocol. The main advantage of the proposed system is the use of LoRa technology for data communication which enables long distance data transmission along with low power consumption as compared to Wi-Fi, Bluetooth or Zigbee.",
"title": ""
},
{
"docid": "b25e35dd703d19860bbbd8f92d80bd26",
"text": "Business analytics (BA) systems are an important strategic investment for many organisations and can potentially contribute significantly to firm performance. Establishing strong BA capabilities is currently one of the major concerns of chief information officers. This research project aims to develop a BA capability maturity model (BACMM). The BACMM will help organisations to scope and evaluate their BA initiatives. This research-in-progress paper describes the current BACMM, relates it to existing capability maturity models and explains its theoretical base. It also discusses the design science research approach being used to develop the BACMM and provides details of further work within the research project. Finally, the paper concludes with a discussion of how the BACMM might be used in practice.",
"title": ""
},
{
"docid": "dcf9cba8bf8e2cc3f175e63e235f6b81",
"text": "Convolutional Neural Networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped internet of things (IoT) devices permeate into every aspect of modern life, it is increasingly important to run CNN inference, a computationally intensive application, on resource constrained devices. We present a technique for fast and energy-efficient CNN inference on mobile SoC platforms, which are projected to be a major player in the IoT space. We propose techniques for efficient parallelization of CNN inference targeting mobile GPUs, and explore the underlying tradeoffs. Experiments with running Squeezenet on three different mobile devices confirm the effectiveness of our approach. For further study, please refer to the project repository available on our GitHub page: https://github.com/mtmd/Mobile ConvNet.",
"title": ""
},
{
"docid": "6dcb885d26ca419925a094ade17a4cf7",
"text": "This paper presents two different Ku-Band Low-Profile antenna concepts for Mobile Satellite Communications. The antennas are based on low-cost hybrid mechanical-electronic steerable solutions but, while the first one allows a broadband reception of a satellite signal (Receive-only antenna concept), the second one provides transmit and receive functions for a bi-directional communication link between the satellite and the mobile user terminal (Transmit-Receive antenna). Both examples are suitable for integration in land vehicles and aircrafts.",
"title": ""
},
{
"docid": "33084a3b41e8932b4dfaba5825d469e4",
"text": "OBJECTIVE\nBecause adverse drug events (ADEs) are a serious health problem and a leading cause of death, it is of vital importance to identify them correctly and in a timely manner. With the development of Web 2.0, social media has become a large data source for information on ADEs. The objective of this study is to develop a relation extraction system that uses natural language processing techniques to effectively distinguish between ADEs and non-ADEs in informal text on social media.\n\n\nMETHODS AND MATERIALS\nWe develop a feature-based approach that utilizes various lexical, syntactic, and semantic features. Information-gain-based feature selection is performed to address high-dimensional features. Then, we evaluate the effectiveness of four well-known kernel-based approaches (i.e., subset tree kernel, tree kernel, shortest dependency path kernel, and all-paths graph kernel) and several ensembles that are generated by adopting different combination methods (i.e., majority voting, weighted averaging, and stacked generalization). All of the approaches are tested using three data sets: two health-related discussion forums and one general social networking site (i.e., Twitter).\n\n\nRESULTS\nWhen investigating the contribution of each feature subset, the feature-based approach attains the best area under the receiver operating characteristics curve (AUC) values, which are 78.6%, 72.2%, and 79.2% on the three data sets. When individual methods are used, we attain the best AUC values of 82.1%, 73.2%, and 77.0% using the subset tree kernel, shortest dependency path kernel, and feature-based approach on the three data sets, respectively. When using classifier ensembles, we achieve the best AUC values of 84.5%, 77.3%, and 84.5% on the three data sets, outperforming the baselines.\n\n\nCONCLUSIONS\nOur experimental results indicate that ADE extraction from social media can benefit from feature selection. With respect to the effectiveness of different feature subsets, lexical features and semantic features can enhance the ADE extraction capability. Kernel-based approaches, which can stay away from the feature sparsity issue, are qualified to address the ADE extraction problem. Combining different individual classifiers using suitable combination methods can further enhance the ADE extraction effectiveness.",
"title": ""
}
] |
scidocsrr
|
afb4607b5e8407b9632844376d5681f5
|
Turbo and Turbo-Like Codes: Principles and Applications in Telecommunications
|
[
{
"docid": "48fde3a2cd8781ce675ce116ed8ee861",
"text": "DVB-S2 is the second-generation specification for satellite broad-band applications, developed by the Digital Video Broadcasting (DVB) Project in 2003. The system is structured as a toolkit to allow the implementation of the following satellite applications: TV and sound broadcasting, interactivity (i.e., Internet access), and professional services, such as TV contribution links and digital satellite news gathering. It has been specified around three concepts: best transmission performance approaching the Shannon limit, total flexibility, and reasonable receiver complexity. Channel coding and modulation are based on more recent developments by the scientific community: low density parity check codes are adopted, combined with QPSK, 8PSK, 16APSK, and 32APSK modulations for the system to work properly on the nonlinear satellite channel. The framing structure allows for maximum flexibility in a versatile system and also synchronization in worst case configurations (low signal-to-noise ratios). Adaptive coding and modulation, when used in one-to-one links, then allows optimization of the transmission parameters for each individual user,dependant on path conditions. Backward-compatible modes are also available,allowing existing DVB-S integrated receivers-decoders to continue working during the transitional period. The paper provides a tutorial overview of the DVB-S2 system, describing its main features and performance in various scenarios and applications.",
"title": ""
}
] |
[
{
"docid": "901174e2dd911afada2e8ccf245d25f3",
"text": "This article presents the state of the art in passive devices for enhancing limb movement in people with neuromuscular disabilities. Both upper- and lower-limb projects and devices are described. Special emphasis is placed on a passive functional upper-limb orthosis called the Wilmington Robotic Exoskeleton (WREX). The development and testing of the WREX with children with limited arm strength are described. The exoskeleton has two links and 4 degrees of freedom. It uses linear elastic elements that balance the effects of gravity in three dimensions. The experiences of five children with arthrogryposis who used the WREX are described.",
"title": ""
},
{
"docid": "d7310e830f85541aa1d4b94606c1be0c",
"text": "We present a practical framework to automatically detect shadows in real world scenes from a single photograph. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The 7-layer network architecture of each ConvNet consists of alternating convolution and sub-sampling layers. The proposed framework learns features at the super-pixel level and along the object boundaries. In both cases, features are extracted using a context aware window centered at interest points. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow contours. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.",
"title": ""
},
{
"docid": "0387b6a593502a9c74ee62cd8eeec886",
"text": "Recently, very deep networks, with as many as hundreds of layers, have shown great success in image classification tasks. One key component that has enabled such deep models is the use of “skip connections”, including either residual or highway connections, to alleviate the vanishing and exploding gradient problems. While these connections have been explored for speech, they have mainly been explored for feed-forward networks. Since recurrent structures, such as LSTMs, have produced state-of-the-art results on many of our Voice Search tasks, the goal of this work is to thoroughly investigate different approaches to adding depth to recurrent structures. Specifically, we experiment with novel Highway-LSTM models with bottlenecks skip connections and show that a 10 layer model can outperform a state-of-the-art 5 layer LSTM model with the same number of parameters by 2% relative WER. In addition, we experiment with Recurrent Highway layers and find these to be on par with Highway-LSTM models, when given sufficient depth.",
"title": ""
},
{
"docid": "d05a179a28cab9cb47be0638ae7b525c",
"text": "Ionizing radiation effects on CMOS image sensors (CIS) manufactured using a 0.18 mum imaging technology are presented through the behavior analysis of elementary structures, such as field oxide FET, gated diodes, photodiodes and MOSFETs. Oxide characterizations appear necessary to understand ionizing dose effects on devices and then on image sensors. The main degradations observed are photodiode dark current increases (caused by a generation current enhancement), minimum size NMOSFET off-state current rises and minimum size PMOSFET radiation induced narrow channel effects. All these effects are attributed to the shallow trench isolation degradation which appears much more sensitive to ionizing radiation than inter layer dielectrics. Unusual post annealing effects are reported in these thick oxides. Finally, the consequences on sensor design are discussed thanks to an irradiated pixel array and a comparison with previous work is discussed.",
"title": ""
},
{
"docid": "3d846789f15f5a70cd36b45f00c6861a",
"text": "Web-based businesses succeed by cultivating consumers' trust, starting with their beliefs, attitudes, intentions, and willingness to perform transactions at Web sites and with the organizations behind them.",
"title": ""
},
{
"docid": "558abc8028d1d5b6956d2cf046efb983",
"text": "A key question concerns the extent to which sexual differentiation of human behavior is influenced by sex hormones present during sensitive periods of development (organizational effects), as occurs in other mammalian species. The most important sensitive period has been considered to be prenatal, but there is increasing attention to puberty as another organizational period, with the possibility of decreasing sensitivity to sex hormones across the pubertal transition. In this paper, we review evidence that sex hormones present during the prenatal and pubertal periods produce permanent changes to behavior. There is good evidence that exposure to high levels of androgens during prenatal development results in masculinization of activity and occupational interests, sexual orientation, and some spatial abilities; prenatal androgens have a smaller effect on gender identity, and there is insufficient information about androgen effects on sex-linked behavior problems. There is little good evidence regarding long-lasting behavioral effects of pubertal hormones, but there is some suggestion that they influence gender identity and perhaps some sex-linked forms of psychopathology, and there are many opportunities to study this issue.",
"title": ""
},
{
"docid": "0d30cfe8755f146ded936aab55cb80d3",
"text": "In this study, we investigated a pattern-recognition technique based on an artificial neural network (ANN), which is called a massive training artificial neural network (MTANN), for reduction of false positives in computerized detection of lung nodules in low-dose computed tomography (CT) images. The MTANN consists of a modified multilayer ANN, which is capable of operating on image data directly. The MTANN is trained by use of a large number of subregions extracted from input images together with the teacher images containing the distribution for the \"likelihood of being a nodule.\" The output image is obtained by scanning an input image with the MTANN. The distinction between a nodule and a non-nodule is made by use of a score which is defined from the output image of the trained MTANN. In order to eliminate various types of non-nodules, we extended the capability of a single MTANN, and developed a multiple MTANN (Multi-MTANN). The Multi-MTANN consists of plural MTANNs that are arranged in parallel. Each MTANN is trained by using the same nodules, but with a different type of non-nodule. Each MTANN acts as an expert for a specific type of non-nodule, e.g., five different MTANNs were trained to distinguish nodules from various-sized vessels; four other MTANNs were applied to eliminate some other opacities. The outputs of the MTANNs were combined by using the logical AND operation such that each of the trained MTANNs eliminated none of the nodules, but removed the specific type of non-nodule with which the MTANN was trained, and thus removed various types of non-nodules. The Multi-MTANN consisting of nine MTANNs was trained with 10 typical nodules and 10 non-nodules representing each of nine different non-nodule types (90 training non-nodules overall) in a training set. The trained Multi-MTANN was applied to the reduction of false positives reported by our current computerized scheme for lung nodule detection based on a database of 63 low-dose CT scans (1765 sections), which contained 71 confirmed nodules including 66 biopsy-confirmed primary cancers, from a lung cancer screening program. The Multi-MTANN was applied to 58 true positives (nodules from 54 patients) and 1726 false positives (non-nodules) reported by our current scheme in a validation test; these were different from the training set. The results indicated that 83% (1424/1726) of non-nodules were removed with a reduction of one true positive (nodule), i.e., a classification sensitivity of 98.3% (57 of 58 nodules). By using the Multi-MTANN, the false-positive rate of our current scheme was improved from 0.98 to 0.18 false positives per section (from 27.4 to 4.8 per patient) at an overall sensitivity of 80.3% (57/71).",
"title": ""
},
{
"docid": "19d6ad18011815602854685211847c52",
"text": "This paper presents a method for learning an And-Or model to represent context and occlusion for car detection and viewpoint estimation. The learned And-Or model represents car-to-car context and occlusion configurations at three levels: (i) spatially-aligned cars, (ii) single car under different occlusion configurations, and (iii) a small number of parts. The And-Or model embeds a grammar for representing large structural and appearance variations in a reconfigurable hierarchy. The learning process consists of two stages in a weakly supervised way (i.e., only bounding boxes of single cars are annotated). First, the structure of the And-Or model is learned with three components: (a) mining multi-car contextual patterns based on layouts of annotated single car bounding boxes, (b) mining occlusion configurations between single cars, and (c) learning different combinations of part visibility based on CAD simulations. The And-Or model is organized in a directed and acyclic graph which can be inferred by Dynamic Programming. Second, the model parameters (for appearance, deformation and bias) are jointly trained using Weak-Label Structural SVM. In experiments, we test our model on four car detection datasets-the KITTI dataset [1] , the PASCAL VOC2007 car dataset [2] , and two self-collected car datasets, namely the Street-Parking car dataset and the Parking-Lot car dataset, and three datasets for car viewpoint estimation-the PASCAL VOC2006 car dataset [2] , the 3D car dataset [3] , and the PASCAL3D+ car dataset [4] . Compared with state-of-the-art variants of deformable part-based models and other methods, our model achieves significant improvement consistently on the four detection datasets, and comparable performance on car viewpoint estimation.",
"title": ""
},
{
"docid": "2d5f6f0bd7ff91525fb130fd785ce281",
"text": "Security flaws are open doors to attack embedded systems and must be carefully assessed in order to determine threats to safety and security. Subsequently securing a system, that is, integrating security mechanisms into the system's architecture can itself impact the system's safety, for instance deadlines could be missed due to an increase in computations and communications latencies. SysML-Sec addresses these issues with a model-driven approach that promotes the collaboration between system designers and security experts at all design and development stages, e.g., requirements, attacks, partitioning, design, and validation. A central point of SysML-Sec is its partitioning stage during which safety-related and security-related functions are explored jointly and iteratively with regards to requirements and attacks. Once partitioned, the system is designed in terms of system's functions and security mechanisms, and formally verified from both the safety and the security perspectives. Our paper illustrates the whole methodology with the evaluation of a security mechanism added to an existing automotive system.",
"title": ""
},
{
"docid": "f785636331f737d8dc14b6958277553f",
"text": "This paper focuses on subword-based Neural Machine Translation (NMT). We hypothesize that in the NMT model, the appropriate subword units for the following three modules (layers) can differ: (1) the encoder embedding layer, (2) the decoder embedding layer, and (3) the decoder output layer. We find the subword based on Sennrich et al. (2016) has a feature that a large vocabulary is a superset of a small vocabulary and modify the NMT model enables the incorporation of several different subword units in a single embedding layer. We refer these small subword features as hierarchical subword features. To empirically investigate our assumption, we compare the performance of several different subword units and hierarchical subword features for both the encoder and decoder embedding layers. We confirmed that incorporating hierarchical subword features in the encoder consistently improves BLEU scores on the IWSLT evaluation datasets. Title and Abstract in Japanese 階層的部分単語特徴を用いたニューラル機械翻訳 本稿では、部分単語 (subword) を用いたニューラル機械翻訳 (Neural Machine Translation, NMT)に着目する。NMTモデルでは、エンコーダの埋め込み層、デコーダの埋め込み層お よびデコーダの出力層の 3箇所で部分単語が用いられるが、それぞれの層で適切な部分単 語単位は異なるという仮説を立てた。我々は、Sennrich et al. (2016)に基づく部分単語は、 大きな語彙集合が小さい語彙集合を必ず包含するという特徴を利用して、複数の異なる部 分単語列を同時に一つの埋め込み層として扱えるよう NMTモデルを改良する。以降、こ の小さな語彙集合特徴を階層的部分単語特徴と呼ぶ。本仮説を検証するために、様々な部 分単語単位や階層的部分単語特徴をエンコーダ・デコーダの埋め込み層に適用して、その 精度の変化を確認する。IWSLT評価セットを用いた実験により、エンコーダ側で階層的な 部分単語を用いたモデルは BLEUスコアが一貫して向上することが確認できた。",
"title": ""
},
{
"docid": "7056b8e792a2bd1535cf020b2aeab2c7",
"text": "The authors propose a theoretical model linking achievement goals and achievement emotions to academic performance. This model was tested in a prospective study with undergraduates (N 213), using exam-specific assessments of both goals and emotions as predictors of exam performance in an introductory-level psychology course. The findings were consistent with the authors’ hypotheses and supported all aspects of the proposed model. In multiple regression analysis, achievement goals (mastery, performance approach, and performance avoidance) were shown to predict discrete achievement emotions (enjoyment, boredom, anger, hope, pride, anxiety, hopelessness, and shame), achievement emotions were shown to predict performance attainment, and 7 of the 8 focal emotions were documented as mediators of the relations between achievement goals and performance attainment. All of these findings were shown to be robust when controlling for gender, social desirability, positive and negative trait affectivity, and scholastic ability. The results are discussed with regard to the underdeveloped literature on discrete achievement emotions and the need to integrate conceptual and applied work on achievement goals and achievement emotions.",
"title": ""
},
{
"docid": "3e83d63920d7d8650a2eeaa2e68ec640",
"text": "Antibiotic resistance consists of a dynamic web. In this review, we describe the path by which different antibiotic residues and antibiotic resistance genes disseminate among relevant reservoirs (human, animal, and environmental settings), evaluating how these events contribute to the current scenario of antibiotic resistance. The relationship between the spread of resistance and the contribution of different genetic elements and events is revisited, exploring examples of the processes by which successful mobile resistance genes spread across different niches. The importance of classic and next generation molecular approaches, as well as action plans and policies which might aid in the fight against antibiotic resistance, are also reviewed.",
"title": ""
},
{
"docid": "7e941f9534357fca740b97a99e86f384",
"text": "The head-direction (HD) cells found in the limbic system in freely mov ing rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be controlled accurately by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information.",
"title": ""
},
{
"docid": "7f2fcc4b4af761292d3f77ffd1a2f7c3",
"text": "An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying ABC algorithm in analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profile. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR when combined with a genetic algorithm (mRMR-GA) and mRMR when combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results prove that the proposed mRMR-ABC algorithm achieves accurate classification performance using small number of predictive genes when tested using both datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems.",
"title": ""
},
{
"docid": "c7f0856c282d1039e44ba6ef50948d32",
"text": "This paper presents the analysis and operation of a three-phase pulsewidth modulation rectifier system formed by the star-connection of three single-phase boost rectifier modules (Y-rectifier) without a mains neutral point connection. The current forming operation of the Y-rectifier is analyzed and it is shown that the phase current has the same high quality and low ripple as the Vienna rectifier. The isolated star point of Y-rectifier results in a mutual coupling of the individual phase module outputs and has to be considered for control of the module dc link voltages. An analytical expression for the coupling coefficients of the Y-rectifier phase modules is derived. Based on this expression, a control concept with reduced calculation effort is designed and it provides symmetric loading of the phase modules and solves the balancing problem of the dc link voltages. The analysis also provides insight that enables the derivation of a control concept for two phase operation, such as in the case of a mains phase failure. The theoretical and simulated results are proved by experimental analysis on a fully digitally controlled, 5.4-kW prototype.",
"title": ""
},
{
"docid": "dcf231b887d7caeec341850507561197",
"text": "Convolutional neural networks (CNNs) have attracted increasing attention in the remote sensing community. Most CNNs only take the last fully-connected layers as features for the classification of remotely sensed images, discarding the other convolutional layer features which may also be helpful for classification purposes. In this paper, we propose a new adaptive deep pyramid matching (ADPM) model that takes advantage of the features from all of the convolutional layers for remote sensing image classification. To this end, the optimal fusing weights for different convolutional layers are learned from the data itself. In remotely sensed scenes, the objects of interest exhibit different scales in distinct scenes, and even a single scene may contain objects with different sizes. To address this issue, we select the CNN with spatial pyramid pooling (SPP-net) as the basic deep network, and further construct a multi-scale ADPM model to learn complementary information from multi-scale images. Our experiments have been conducted using two widely used remote sensing image databases, and the results show that the proposed method significantly improves the performance when compared to other state-of-the-art methods. Keywords—Convolutional neural network (CNN), adaptive deep pyramid matching (ADPM), convolutional features, multi-scale ensemble, remote-sensing scene classification.",
"title": ""
},
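The following is a hedged sketch, in assumed PyTorch, of the core idea in the ADPM abstract above: globally pooled features from several convolutional layers are fused with weights learned from the data and fed to a classifier. The class name, the per-layer linear projections, and the softmax parameterization of the fusing weights are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerFusion(nn.Module):
    """Fuse pooled features from several conv layers with weights learned from the data."""
    def __init__(self, layer_channels, feat_dim, num_classes):
        super().__init__()
        # one learnable fusing weight per convolutional layer, normalized by softmax
        self.fusion_logits = nn.Parameter(torch.zeros(len(layer_channels)))
        self.proj = nn.ModuleList([nn.Linear(c, feat_dim) for c in layer_channels])
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, conv_maps):                 # list of (B, C_i, H_i, W_i) feature maps
        w = F.softmax(self.fusion_logits, dim=0)  # fusing weights sum to 1
        pooled = [p(F.adaptive_avg_pool2d(m, 1).flatten(1))   # global pooling per layer
                  for p, m in zip(self.proj, conv_maps)]
        fused = sum(wi * fi for wi, fi in zip(w, pooled))     # weighted feature fusion
        return self.classifier(fused)

# usage with dummy feature maps from three hypothetical convolutional layers
maps = [torch.randn(2, 64, 56, 56), torch.randn(2, 128, 28, 28), torch.randn(2, 256, 14, 14)]
logits = LayerFusion([64, 128, 256], feat_dim=512, num_classes=21)(maps)
```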
{
"docid": "5b7f20103c99a93c46efe4575f012e7d",
"text": "The availability of several Advanced Driver Assistance Systems has put a correspondingly large number of inexpensive, yet capable sensors on production vehicles. By combining this reality with expertise from the DARPA Grand and Urban Challenges in building autonomous driving platforms, we were able to design and develop an Autonomous Valet Parking (AVP) system on a 2006 Volkwagen Passat Wagon TDI using automotive grade sensors. AVP provides the driver with both convenience and safety benefits - the driver can leave the vehicle at the entrance of a parking garage, allowing the vehicle to navigate the structure, find a spot, and park itself. By leveraging existing software modules from the DARPA Urban Challenge, our efforts focused on developing a parking spot detector, a localization system that did not use GPS, and a back-in parking planner. This paper focuses on describing the design and development of the last two modules.",
"title": ""
},
{
"docid": "ba6865dc3c93ac52c9f1050f159b9e1a",
"text": "A review of various properties of ceramic-reinforced aluminium matrix composites is presented in this paper. The properties discussed include microstructural, optical, physical and mechanical behaviour of ceramic-reinforced aluminium matrix composites and effects of reinforcement fraction, particle size, heat treatment and extrusion process on these properties. The results obtained by many researchers indicated the uniform distribution of reinforced particles with localized agglomeration at some places, when the metal matrix composite was processed through stir casting method. The density, hardness, compressive strength and toughness increased with increasing reinforcement fraction; however, these properties may reduce in the presence of porosity in the composite material. The particle size of reinforcements affected the hardness adversely. Tensile strength and flexural strength were observed to be increased up to a certain reinforcement fraction in the composites, beyond which these were reduced. The mechanical properties of the composite materials were improved by either thermal treatment or extrusion process. Initiation and growth of fine microcracks leading to macroscopic failure, ductile failure of the aluminium matrix, combination of particle fracture and particle pull-out, overload failure under tension and brittle fracture were the failure mode and mechanisms, as observed by previous researchers, during fractography analysis of tensile specimens of ceramic-reinforced aluminium matrix composites.",
"title": ""
},
{
"docid": "d74874cf15642c87c7de51e54275f9be",
"text": "We used a three layer Convolutional Neural Network (CNN) to make move predictions in chess. The task was defined as a two-part classification problem: a piece-selector CNN is trained to score which white pieces should be made to move, and move-selector CNNs for each piece produce scores for where it should be moved. This approach reduced the intractable class space in chess by a square root. The networks were trained using 20,000 games consisting of 245,000 moves made by players with an ELO rating higher than 2000 from the Free Internet Chess Server. The piece-selector network was trained on all of these moves, and the move-selector networks trained on all moves made by the respective piece. Black moves were trained on by using a data augmentation to frame it as a move made by the",
"title": ""
},
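Below is a minimal sketch, in assumed PyTorch, of the two-stage design the chess abstract describes: one CNN scores the 64 squares to pick which piece to move, and a per-piece-type CNN scores the 64 destination squares. The 6-plane board encoding, layer sizes, and class names are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class SquareScorer(nn.Module):
    """Small CNN that outputs one score per board square (64 outputs)."""
    def __init__(self, in_planes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_planes, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 64),
        )

    def forward(self, board):                   # board: (B, 6, 8, 8) piece-plane encoding
        return self.net(board)

piece_selector = SquareScorer()                          # stage 1: which square's piece to move
move_selectors = {p: SquareScorer() for p in             # stage 2: destination square per piece type
                  ["P", "N", "B", "R", "Q", "K"]}

board = torch.randn(1, 6, 8, 8)                          # dummy position from white's point of view
from_scores = piece_selector(board)                      # 64 scores for the origin square
to_scores = move_selectors["N"](board)                   # 64 scores for a knight's destination
```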
{
"docid": "9c6601360694b48c137ec2a974635106",
"text": "This paper reports a novel deep architecture referred to as Maxout network In Network (MIN), which can enhance model discriminability and facilitate the process of information abstraction within the receptive field. The proposed network adopts the framework of the recently developed Network In Network structure, which slides a universal approximator, multilayer perceptron (MLP) with rectifier units, to exact features. Instead of MLP, we employ maxout MLP to learn a variety of piecewise linear activation functions and to mediate the problem of vanishing gradients that can occur when using rectifier units. Moreover, batch normalization is applied to reduce the saturation of maxout units by pre-conditioning the model and dropout is applied to prevent overfitting. Finally, average pooling is used in all pooling layers to regularize maxout MLP in order to facilitate information abstraction in every receptive field while tolerating the change of object position. Because average pooling preserves all features in the local patch, the proposed MIN model can enforce the suppression of irrelevant information during training. Our experiments demonstrated the state-of-the-art classification performance when the MIN model was applied to MNIST, CIFAR-10, and CIFAR-100 datasets and comparable performance for SVHN dataset.",
"title": ""
}
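As a rough illustration of the MIN building block described above, the sketch below (assumed PyTorch, not the authors' code) stacks a convolution, a maxout "MLP" built from 1x1 convolutions with batch normalization, average pooling, and dropout. Channel counts and kernel sizes are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class MaxoutConv(nn.Module):
    """1x1 convolution producing k linear pieces per output channel; the
    activation is the element-wise maximum over the k pieces."""
    def __init__(self, in_ch, out_ch, k=2):
        super().__init__()
        self.k, self.out_ch = k, out_ch
        self.conv = nn.Conv2d(in_ch, out_ch * k, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch * k)

    def forward(self, x):
        y = self.bn(self.conv(x))
        b, _, h, w = y.shape
        # group the k pieces per output channel and keep the maximum
        return y.view(b, self.out_ch, self.k, h, w).max(dim=2).values

block = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=5, padding=2),
    MaxoutConv(96, 96), MaxoutConv(96, 96),   # maxout MLP over each receptive field
    nn.AvgPool2d(kernel_size=3, stride=2),    # average pooling, as in the abstract
    nn.Dropout(0.5),                          # dropout to limit overfitting
)

x = torch.randn(4, 3, 32, 32)                 # e.g. a CIFAR-sized batch
y = block(x)
```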
] |
scidocsrr
|