query_id (string, 32 chars) | query (string, 6-5.38k chars) | positive_passages (list, 1-22 items) | negative_passages (list, 9-100 items) | subset (string, 7 classes) |
---|---|---|---|---|
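The columns above describe a passage-reranking layout: each row pairs one query with its relevant (positive) and non-relevant (negative) passages, plus a subset label such as `scidocsrr`. Below is a minimal sketch of loading and flattening such a row, assuming the data is published as a Hugging Face dataset; the repository name and the `row_to_pairs` helper are placeholders for illustration, not part of the original dump.

```python
from datasets import load_dataset

# Hypothetical dataset identifier -- substitute the actual Hub repository name;
# only the column schema above is taken from this dump.
ds = load_dataset("your-org/your-reranking-dataset", split="test")

row = ds[0]
print(row["query_id"])                 # 32-character hexadecimal id
print(row["query"])                    # query text (often a paper title)
print(row["subset"])                   # one of 7 subset labels, e.g. "scidocsrr"
print(len(row["positive_passages"]))   # 1-22 relevant passages
print(len(row["negative_passages"]))   # 9-100 non-relevant passages

# Each passage is a dict with "docid", "text", and "title" keys.
def row_to_pairs(row):
    """Flatten one row into (query, passage_text, label) training pairs."""
    pairs = [(row["query"], p["text"], 1) for p in row["positive_passages"]]
    pairs += [(row["query"], p["text"], 0) for p in row["negative_passages"]]
    return pairs

print(len(row_to_pairs(row)))          # number of labelled pairs for this query
```

The example rows that follow show two such records with their full passage texts.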
6fab144b87bd60329653aa4e849b4e80
|
Language Models for Image Captioning: The Quirks and What Works
|
[
{
"docid": "c879ee3945592f2e39bb3306602bb46a",
"text": "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.",
"title": ""
}
] |
[
{
"docid": "77adcb76c5002c62bc77b07f2198ce0f",
"text": "We study the problem of answering questions about images in the harder setting, where the test questions and corresponding images contain novel objects, which were not queried about in the training data. Such setting is inevitable in real world–owing to the heavy tailed distribution of the visual categories, there would be some objects which would not be annotated in the train set. We show that the performance of two popular existing methods drop significantly (21–28%) when evaluated on novel objects cf. known objects. We propose methods which use large existing external corpora of (i) unlabeled text, i.e. books, and (ii) images tagged with classes, to achieve novel object based visual question answering. We systematically study both, an oracle case where the novel objects are known textually, as well as a fully automatic case without any explicit knowledge of the novel objects, but with the minimal assumption that the novel objects are semantically related to the existing objects in training. The proposed methods for novel object based visual question answering are modular and can potentially be used with many visual question answering architectures. We show consistent improvements with the two popular architectures and give qualitative analysis of the cases where the model does well and of those where it fails to bring improvements.",
"title": ""
},
{
"docid": "fba72eef074a1b2d13d994c882f698c7",
"text": "The ranking-type Delphi method is well suited as a means for consensus-building by using a series of questionnaires to collect data from a panel of geographically dispersed participants. This method allows a group of experts to systematically approach a particular task or problem. While information systems researchers have been using this method for almost three decades, no research to date has attempted to assess the extent to which Delphi studies have been rigorously conducted. Using the guidelines that have been prescribed by the leading Delphi methodologists, our descriptive review reveals many positive signs of rigor such as ensuring the anonymity of experts and providing clear and precise instructions to participants. Nevertheless, there are still several areas for improvement, such as reporting response and retention rates, instrument pretesting, and explicitly justifying modifications to the ranking-type Delphi",
"title": ""
},
{
"docid": "16c58710e1285a55d75f996c2816b9b0",
"text": "Face morphing is an effect that shows a transition from one face image to another face image smoothly. It has been widely used in various fields of work, such as animation, movie production, games, and mobile applications. Two types of methods have been used to conduct face morphing. Semi automatic mapping methods, which allow users to map corresponding pixels between two face images, can produce a smooth transition of result images. Mapping the corresponding pixel between two human face images is usually not trivial. Fully automatic methods have also been proposed for morphing between two images having similar face properties, where the results depend on the similarity of the input face images. In this project, we apply a critical point filter to determine facial features for automatically mapping the correspondence of the input face images. The critical point filters can be used to extract the main features of input face images, including color, position and edge of each facial component in the input images. An energy function is also proposed for mapping the corresponding pixels between pixels of the input face images. The experimental results show that position of each face component plays a more important role than the edge and color of the face. We can summarize that, using the critical point filter, the proposed method to generate face morphing can produce a smooth image transition with our adjusted weight function.",
"title": ""
},
{
"docid": "acbdb3f3abf3e56807a4e7f60869a2ee",
"text": "In this paper we present a new approach to high quality 3D object reconstruction. Starting from a calibrated sequence of color images, the algorithm is able to reconstruct both the 3D geometry and the texture. The core of the method is based on a deformable model, which defines the framework where texture and silhouette information can be fused. This is achieved by defining two external forces based on the images: a texture driven force and a silhouette driven force. The texture force is computed in two steps: a multi-stereo correlation voting approach and a gradient vector flow diffusion. Due to the high resolution of the voting approach, a multi-grid version of the gradient vector flow has been developed. Concerning the silhouette force, a new formulation of the silhouette constraint is derived. It provides a robust way to integrate the silhouettes in the evolution algorithm. As a consequence, we are able to recover the apparent contours of the model at the end of the iteration process. Finally, a texture map is computed from the original images for the reconstructed 3D model.",
"title": ""
},
{
"docid": "8185da1a497e25f0c50e789847b6bd52",
"text": "We address numerical versus experimental design and testing of miniature implantable antennas for biomedical telemetry in the medical implant communications service band (402-405 MHz). A model of a novel miniature antenna is initially proposed for skin implantation, which includes varying parameters to deal with fabrication-specific details. An iterative design-and-testing methodology is further suggested to determine the parameter values that minimize deviations between numerical and experimental results. To assist in vitro testing, a low-cost technique is proposed for reliably measuring the electric properties of liquids without requiring commercial equipment. Validation is performed within a specific prototype fabrication/testing approach for miniature antennas. To speed up design while providing an antenna for generic skin implantation, investigations are performed inside a canonical skin-tissue model. Resonance, radiation, and safety performance of the proposed antenna is finally evaluated inside an anatomical head model. This study provides valuable insight into the design of implantable antennas, assessing the significance of fabrication-specific details in numerical simulations and uncertainties in experimental testing for miniature structures. The proposed methodology can be applied to optimize antennas for several fabrication/testing approaches and biotelemetry applications.",
"title": ""
},
{
"docid": "e5ad080ca6155eaa72cdfa9f3ca03276",
"text": "We consider the problem of semantic image segmentation using deep convolutional neural networks. We propose a novel network architecture called the label refinement network that predicts segmentation labels in a coarse-to-fine fashion at several resolutions. The segmentation labels at a coarse resolution are used together with convolutional features to obtain finer resolution segmentation labels. We define loss functions at several stages in the network to provide supervisions at different stages. Our experimental results on several standard datasets demonstrate that the proposed model provides an effective way of producing pixel-wise dense image labeling.",
"title": ""
},
{
"docid": "049c9e3abf58bfd504fa0645bb4d1fdc",
"text": "The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig.",
"title": ""
},
{
"docid": "dd557664d20f17487425de206f57cbc5",
"text": "This paper presents an ultra low-voltage, rail-to-rail input/output stage Operational Transconductance Amplifier (OTA) which uses quasi floating gate input transistors. This OTA works with ±0.3v and consumes 57µw. It has near zero variation in small/large-signal behavior (i.e. transconductance and slew rate) in whole range of the common mode voltage of input signals. Using source degeneration technique for linearity improvement, make it possible to obtain −42.7 dB, HD3 for 0.6vP-P sine wave input signal with the frequency of 1MHz. The used feedback amplifier in input stage also enhances common mode rejection ratio (CMRR), such that in DC, CMRR is 146 dB. OTA is used for implementation of a wide-tunable third-order elliptic filter with 237 KHz–2.18 MHz cutoff frequencies. Proposed OTA and filter have been simulated in 0.18µm TSMC CMOS technology with Hspice.",
"title": ""
},
{
"docid": "3c75d05e1b6abf2cb03573e1162954a7",
"text": "With the increasing popularity of portable camera devices and embedded visual processing, text extraction from natural scene images has become a key problem that is deemed to change our everyday lives via novel applications such as augmented reality. Text extraction from natural scene images algorithms is generally composed of the following three stages: (i) detection and localization, (ii) text enhancement to variations in the font size and color, text alignment, illumination change and reflections. This paper aims to classify and assess the latest algorithms. More specifically, we draw attention to studies on the first two steps in the extraction process, since OCR is a well-studied area where powerful algorithms already exist. This paper offers to the researchers a link to public image database for the algorithm assessment of text extraction from natural scene images. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7732ad5c481874a765f2ed25ab61ab7a",
"text": "Modern natural language processing (NLP) research requires writing code. Ideally this code would provide a precise definition of the approach, easy repeatability of results, and a basis for extending the research. However, many research codebases bury high-level parameters under implementation details, are challenging to run and debug, and are difficult enough to extend that they are more likely to be rewritten. This paper describes AllenNLP, a library for applying deep learning methods to NLP research, which addresses these issues with easyto-use command-line tools, declarative configuration-driven experiments, and modular NLP abstractions. AllenNLP has already increased the rate of research experimentation and the sharing of NLP components at the Allen Institute for Artificial Intelligence, and we are working to have the same impact across the field.",
"title": ""
},
{
"docid": "de05e649c6e77278b69665df3583d3d8",
"text": "This context-aware emotion-based model can help design intelligent agents for group decision making processes. Experiments show that agents with emotional awareness reach agreement more quickly than those without it.",
"title": ""
},
{
"docid": "68e3a910cd0f4131500bc808a1ac040d",
"text": "With the introduction of the H.264/AVC video coding standard, significant improvements have recently been demonstrated in video compression capability. The Joint Video Team of the ITU-T VCEG and the ISO/IEC MPEG has now also standardized a Scalable Video Coding (SVC) extension of the H.264/AVC standard. SVC enables the transmission and decoding of partial bit streams to provide video services with lower temporal or spatial resolutions or reduced fidelity while retaining a reconstruction quality that is high relative to the rate of the partial bit streams. Hence, SVC provides functionalities such as graceful degradation in lossy transmission environments as well as bit rate, format, and power adaptation. These functionalities provide enhancements to transmission and storage applications. SVC has achieved significant improvements in coding efficiency with an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. This paper provides an overview of the basic concepts for extending H.264/AVC towards SVC. Moreover, the basic tools for providing temporal, spatial, and quality scalability are described in detail and experimentally analyzed regarding their efficiency and complexity.",
"title": ""
},
{
"docid": "2047090bab9aa55ae9d6d9bd2d72e4d8",
"text": "When a system fails to correctly recognize a voice search query, the user will frequently retry the query, either by repeating it exactly or rephrasing it in an attempt to adapt to the system’s failure. It is desirable to be able to identify queries as retries both offline, as a valuable quality signal, and online, as contextual information that can aid recognition. We present a method than can identify retries offline with 81% accuracy using similarity measures between two subsequent queries as well as system and user signals of recognition accuracy. The retry rate predicted by this method correlates significantly with a gold standard measure of accuracy, suggesting that it may be useful as an offline predictor of accuracy.",
"title": ""
},
{
"docid": "d9617ed486a1b5488beab08652f736e0",
"text": "The paper shows how Combinatory Categorial Grammar (CCG) can be adapted to take advantage of the extra resourcesensitivity provided by the Categorial Type Logic framework. The resulting reformulation, Multi-Modal CCG, supports lexically specified control over the applicability of combinatory rules, permitting a universal rule component and shedding the need for language-specific restrictions on rules. We discuss some of the linguistic motivation for these changes, define the Multi-Modal CCG system and demonstrate how it works on some basic examples. We furthermore outline some possible extensions and address computational aspects of Multi-Modal CCG.",
"title": ""
},
{
"docid": "10e05c686b74e06c19151d61a458767b",
"text": "The use of a removable partial denture (RPD) in clinical practice remains a viable treatment modality. Various advancements have improved the quality of a RPD, subsequently improving the quality of life for the individuals that use them. This article describes four removable partial denture treatment modalities that provide valuable treatment for the partially edentulous patient. These modalities include: the implant supported RPD, attachment use in RPDs, rotational path RPDs, and Titanium and CAD/CAM RPDs. Data on future needs for RPDs indicate that while there is a decline in tooth loss in the U.S., the need for RPDs will actually increase as the population increases and ages. With the growth in the geriatric population, which includes a high percentage of partially edentulous patients, the use of RPDs in clinical treatment will continue to be predictable treatment option in clinical dentistry.",
"title": ""
},
{
"docid": "8be94cf3744cf18e29c4f41b727cc08a",
"text": "A printed dipole with an integrated balun features a broad operating bandwidth. The feed point of conventional balun structures is fixed at the top of the integrated balun, which makes it difficult to match to a 50-Omega feed. In this communication, we demonstrate that it is possible to directly match with the 50-Omega feed by adjusting the position of the feed point of the integrated balun. The printed dipole with the hereby presented adjustable integrated balun maintains the broadband performance and exhibits flexibility for the matching to different impedance values, which is extremely important for the design of antenna arrays since the mutual coupling between antenna elements commonly changes the input impedance of each single element. An equivalent-circuit analysis is presented for the understanding of the mechanism of the impedance match. An eight-element linear antenna array is designed as a benchmarking topology for broadband wireless base stations.",
"title": ""
},
{
"docid": "afac9140d183eac56785b26069953342",
"text": "Big Data means extremely huge large data sets that can be analyzed to find patterns, trends. One technique that can be used for data analysis so that able to help us find abstract patterns in Big Data is Deep Learning. If we apply Deep Learning to Big Data, we can find unknown and useful patterns that were impossible so far. With the help of Deep Learning, AI is getting smart. There is a hypothesis in this regard, the more data, the more abstract knowledge. So a handy survey of Big Data, Deep Learning and its application in Big Data is necessary. In this paper, we provide a comprehensive survey on what is Big Data, comparing methods, its research problems, and trends. Then a survey of Deep Learning, its methods, comparison of frameworks, and algorithms is presented. And at last, application of Deep Learning in Big Data, its challenges, open research problems and future trends are presented.",
"title": ""
},
{
"docid": "1044aedce86a40b27319d5a7272dc61f",
"text": "We investigate a strategy for reconstructing of buildings from multiple (uncalibrated) images. In a similar manner to the Facade approach we first generate a coarse piecewise planar model of the principal scene planes and their delineations, and then use these facets to guide the search for indentations and protrusions such as windows and doors. However, unlike the Facade approach which involves manual selection and alignment of the geometric primitives, the strategy here is fully automatic. There are several points of novelty: first we demonstrate that the use of quite generic models together with particular scene constraints (the availability of several principal directions) is sufficiently powerful to enable successful reconstruction of the targeted scenes. Second, we develop and refine a technique for piecewise planar model fitting involving sweeping polygonal primitives, and assess the performance of this technique. Third, lines at infinity are constructed from image correspondences and used to sweep planes in the principal directions. The strategy is illustrated on several image triplets of College buildings. It is demonstrated that convincing texture mapped models are generated which include the main walls and roofs, together with inset windows and also protruding (dormer) roof windows.",
"title": ""
},
{
"docid": "fe801ce6c1f5c25d6fe9623ee9a13352",
"text": "Wearable devices with built-in cameras present interesting opportunities for users to capture various aspects of their daily life and are potentially also useful in supporting users with low vision in their everyday tasks. However, state-of-the-art image wearables available in the market are limited to capturing images periodically and do not provide any real-time analysis of the data that might be useful for the wearers. In this paper, we present DeepEye - a match-box sized wearable camera that is capable of running multiple cloud-scale deep learn- ing models locally on the device, thereby enabling rich analysis of the captured images in near real-time without offloading them to the cloud. DeepEye is powered by a commodity wearable processor (Snapdragon 410) which ensures its wearable form factor. The software architecture for DeepEye addresses a key limitation with executing multiple deep learning models on constrained hardware, that is their limited runtime memory. We propose a novel inference software pipeline that targets the local execution of multiple deep vision models (specifically, CNNs) by interleaving the execution of computation-heavy convolutional layers with the loading of memory-heavy fully-connected layers. Beyond this core idea, the execution framework incorporates: a memory caching scheme and a selective use of model compression techniques that further minimizes memory bottlenecks. Through a series of experiments, we show that our execution framework outperforms the baseline approaches significantly in terms of inference latency, memory requirements and energy consumption.",
"title": ""
},
{
"docid": "b79110b1145fc8a35f20efdf0029fbac",
"text": "In this paper, a new bridgeless single-phase AC-DC converter with an automatic power factor correction (PFC) is proposed. The proposed rectifier is based on the single-ended primary inductance converter (SEPIC) topology and it utilizes a bidirectional switch and two fast diodes. The absence of an input diode bridge and the presence of only one diode in the flowing-current path during each switching cycle result in less conduction loss and improved thermal management compared to existing PFC rectifiers. Other advantages include simple control circuitry, reduced switch voltage stress, and low electromagnetic-interference noise. Performance comparison between the proposed and the conventional SEPIC PFC rectifier is performed. Simulation and experimental results are presented to demonstrate the feasibility of the proposed technique.",
"title": ""
}
] |
scidocsrr
|
b5580a75d704dece146244038538f2c7
|
Application of Floating Photovoltaic Energy Generation Systems in South Korea
|
[
{
"docid": "a2d76e1217b0510f82ebccab39b7d387",
"text": "The floating photovoltaic system is a new concept in energy technology to meet the needs of our time. The system integrates existing land based photovoltaic technology with a newly developed floating photovoltaic technology. K-water has already completed two floating photovoltaic systems that enable generation of 100kW and 500kW respectively. In this paper, the generation efficiency of floating and land photovoltaic systems were compared and analyzed. Floating PV has shown greater generation efficiency by over 10% compared with the general PV systems installed overland",
"title": ""
}
] |
[
{
"docid": "29f17b7d7239a2845d513976e4981d6a",
"text": "Agriculture is the backbone of the Indian economy. As all know that demand of agricultural products are increasing day by day as the population is ever increasing, so there is a need to minimize labor, limit the use of water and increase the production of crops. So there is a need to switch from traditional agriculture to the modern agriculture. The introduction of internet of things into agriculture modernization will help solve these problems. This paper presents the IOT based agriculture production system which will monitor or analyze the crop environment like temperature humidity and moisture content in soil. This paper uses the integration of RFID technology and sensors. As both have different objective sensors are for sensing and RIFD technology is for identification This will effectively solve the problem of farmer, increase the yield and saves his time, power, money.",
"title": ""
},
{
"docid": "600ecbb2ae0e5337a568bb3489cd5e29",
"text": "This paper presents a novel approach for haptic object recognition with an anthropomorphic robot hand. Firstly, passive degrees of freedom are introduced to the tactile sensor system of the robot hand. This allows the planar tactile sensor patches to optimally adjust themselves to the object's surface and to acquire additional sensor information for shape reconstruction. Secondly, this paper presents an approach to classify an object directly from the haptic sensor data acquired by a palpation sequence with the robot hand - without building a 3d-model of the object. Therefore, a finite set of essential finger positions and tactile contact patterns are identified which can be used to describe a single palpation step. A palpation sequence can then be merged into a simple statistical description of the object and finally be classified. The proposed approach for haptic object recognition and the new tactile sensor system are evaluated with an anthropomorphic robot hand.",
"title": ""
},
{
"docid": "46950519803aba56a0cce475964b99d7",
"text": "The coverage problem in the field of robotics is the problem of moving a sensor or actuator over all points in a given region. Example applications of this problem are lawn mowing, spray painting, and aerial or underwater mapping. In this paper, I consider the single-robot offline version of this problem, i.e. given a map of the region to be covered, plan an efficient path for a single robot that sweeps the sensor or actuator over all points. One basic approach to this problem is to decompose the region into subregions, select a sequence of those subregions, and then generate a path that covers each subregion in turn. This paper addresses the problem of creating a good decomposition. Under certain assumptions, the cost to cover a polygonal subregion is proportional to its minimum altitude. An optimal decomposition then minimizes the sum of subregion altitudes. This paper describes an algorithm to find the minimal sum of altitudes (MSA) decomposition of a region with a polygonal boundary and polygonal holes. This algorithm creates an initial decomposition based upon multiple line sweeps and then applies dynamic programming to find the optimal decomposition. This paper describes the algorithm and reports results from an implementation. Several appendices give details and proofs regarding line sweep algorithms.",
"title": ""
},
{
"docid": "ef57140e433ad175a3fae38236effa69",
"text": "For a real driver assistance system, the weather, driving speed, and background could affect the accuracy of obstacle detection. In the past, only a few studies covered all the different weather conditions and almost none of them had paid attention to the safety at vehicle lateral blind spot area. So, this paper proposes a hybrid scheme for pedestrian and vehicle detection, and develop a warning system dedicated for lateral blind spot area under different weather conditions and driving speeds. More specifically, the HOG and SVM methods are used for pedestrian detection. The image subtraction, edge detection and tire detection are applied for vehicle detection. Experimental results also show that the proposed system can efficiently detect pedestrian and vehicle under several scenarios.",
"title": ""
},
{
"docid": "7e6dbb7e6302f9da97ea5f2ba3b2b782",
"text": "The purpose of this paper is to investigate some of the main drivers of high unemployment rates in the European Union countries starting from two sources highlighted in the economic literature: the shortfall of the aggregate demand and the increasing labour market mismatches. Our analysis is based on a panel database and focuses on two objectives: to measure the long and short-term impact of GDP growth on unemployment over recent years for different categories of labour market participants (young, older and low educated workers) and to evaluate the relationship between mismatches related to skills (educational and occupational) and unemployment. One of the main conclusions is that unemployment rates of young and low educated workers are more responsive to economic growth variations both in the long and short run, while unemployment rates of older workers show a greater capacity of adjustment. In addition, occupational mismatches seem to have a significant long-term impact on the changes in unemployment of all categories of unemployed, whereas the short run effect is rather mixed, varying across countries. One explanation is the fact that during crisis, economy’s structure tends to change more rapidly than labour market and educational system can adapt. * Received: 22-02-2017; accepted: 31-05-2017 1 The research is supported by the Czech Science Foundation, project P402/12/G097 “DYME – Dynamic Models in Economics”. 2 Assistant Professor, Faculty of Economic Cybernetics, Statistics and Informatics, Bucharest Academy of Economic Studies, Piata Romana Square 6, 1st district, Bucharest, 010374 Romania. Scientific affiliation: labour market imbalances, international comparisons, regional competitiveness, panel data models. Phone: +40 21 319 19 00. Fax: +40 21 319 18 99. E-mail: gina.dimian@csie.ase.ro (corresponding author). 3 Full Professor, Faculty of Economic Cybernetics, Statistics and Informatics, Bucharest Academy of Economic Studies, Piata Romana Square 6, 1st district, Bucharest, 010374 Romania. Scientific affiliation: macroeconomics, economic convergence, international relationships, international statistics. Phone: +40 21 319 19 00. Fax: +40 21 319 18 99. E-mail: liviu.begu@csie.ase.ro. 4 Full Professor, Faculty of Informatics and Statistics, University of Economics. W. Churchill Sq. 4, 130 67 Prague 3, Czech Republic. Scientific affiliation: efficiency analysis, multiple-criteria analysis, data mining methods. Phone: +420 2 2409 5403. Fax: +420 2 2409 5423. E-mail: jablon@vse.cz. Personal website: http://webhosting.vse.cz/jablon. Gina Cristina Dimian, Liviu Stelian Begu, Josef Jablonsky • Unemployment and labour... 14 Zb. rad. Ekon. fak. Rij. • 2017 • vol. 35 • no. 1 • 13-44",
"title": ""
},
{
"docid": "06e2fec87a501d234e494238cdff6eda",
"text": "Dopamine (DA) is required for hippocampal-dependent memory and long-term potentiation (LTP) at CA1 Schaffer collateral (SC) synapses. It is therefore surprising that exogenously applied DA has little effect on SC synapses, but suppresses CA1 perforant path (PP) inputs. To examine DA actions under more physiological conditions, we used optogenetics to release DA from ventral tegmental area inputs to hippocampus. Unlike exogenous DA application, optogenetic release of DA caused a bidirectional, activity-dependent modulation of SC synapses, with no effect on PP inputs. Low levels of DA release, simulating tonic DA neuron firing, depressed the SC response through a D4 receptor–dependent enhancement of feedforward inhibition mediated by parvalbumin-expressing interneurons. Higher levels of DA release, simulating phasic firing, increased SC responses through a D1 receptor–dependent enhancement of excitatory transmission. Thus, tonic-phasic transitions in DA neuron firing in response to motivational demands may cause a modulatory switch from inhibition to enhancement of hippocampal information flow.",
"title": ""
},
{
"docid": "392f7b126431b202d57d6c25c07f7f7c",
"text": "Serine racemase (SRace) is an enzyme that catalyzes the conversion of L-serine to pyruvate or D-serine, an endogenous agonist for NMDA receptors. Our previous studies showed that inflammatory stimuli such as Abeta could elevate steady-state mRNA levels for SRace, perhaps leading to inappropriate glutamatergic stimulation under conditions of inflammation. We report here that a proinflammatory stimulus (lipopolysaccharide) elevated the activity of the human SRace promoter, as indicated by expression of a luciferase reporter system transfected into a microglial cell line. This effect corresponded to an elevation of SRace protein levels in microglia, as well. By contrast, dexamethasone inhibited the SRace promoter activity and led to an apparent suppression of SRace steady-state mRNA levels. A potential binding site for NFkappaB was explored, but this sequence played no significant role in SRace promoter activation. Instead, large deletions and site-directed mutagenesis indicated that a DNA element between -1382 and -1373 (relative to the start of translation) was responsible for the activation of the promoter by lipopolysaccharide. This region fits the consensus for an activator protein-1 binding site. Lipopolysaccharide induced an activity capable of binding this DNA element in electrophoretic mobility shift assays. Supershifts with antibodies against c-Fos and JunB identified these as the responsible proteins. An inhibitor of Jun N-terminal kinase blocked SRace promoter activation, further implicating activator protein-1. These data indicate that proinflammatory stimuli utilize a signal transduction pathway culminating in activator protein-1 activation to induce expression of serine racemase.",
"title": ""
},
{
"docid": "59405c31da09ea58ef43a03d3fc55cf4",
"text": "The Quality of Service (QoS) management is one of the urgent problems in networking which doesn't have an acceptable solution yet. In the paper the approach to this problem based on multipath routing protocol in SDN is considered. The proposed approach is compared with other QoS management methods. A structural and operation schemes for its practical implementation is proposed.",
"title": ""
},
{
"docid": "56a6ea3418b9a1edf591b860f128ea82",
"text": "Convolutional Neural Networks (CNNs) have gained a remarkable success on many real-world problems in recent years. However, the performance of CNNs is highly relied on their architectures. For some state-of-the-art CNNs, their architectures are hand-crafted with expertise in both CNNs and the investigated problems. To this end, it is difficult for researchers, who have no extended expertise in CNNs, to explore CNNs for their own problems of interest. In this paper, we propose an automatic architecture design method for CNNs by using genetic algorithms, which is capable of discovering a promising architecture of a CNN on handling image classification tasks. The proposed algorithm does not need any pre-processing before it works, nor any post-processing on the discovered CNN, which means it is completely automatic. The proposed algorithm is validated on widely used benchmark datasets, by comparing to the state-of-the-art peer competitors covering eight manually designed CNNs, four semi-automatically designed CNNs and additional four automatically designed CNNs. The experimental results indicate that the proposed algorithm achieves the best classification accuracy consistently among manually and automatically designed CNNs. Furthermore, the proposed algorithm also shows the competitive classification accuracy to the semi-automatic peer competitors, while reducing 10 times of the parameters. In addition, on the average the proposed algorithm takes only one percentage of computational resource compared to that of all the other architecture discovering algorithms. Experimental codes and the discovered architectures along with the trained weights are made public to the interested readers.",
"title": ""
},
{
"docid": "a7b0f0455482765efd3801c3ae9f85b7",
"text": "The Business Process Modelling Notation (BPMN) is a standard for capturing business processes in the early phases of systems development. The mix of constructs found in BPMN makes it possible to create models with semantic errors. Such errors are especially serious, because errors in the early phases of systems development are among the most costly and hardest to correct. The ability to statically check the semantic correctness of models is thus a desirable feature for modelling tools based on BPMN. Accordingly, this paper proposes a mapping from BPMN to a formal language, namely Petri nets, for which efficient analysis techniques are available. The proposed mapping has been implemented as a tool that, in conjunction with existing Petri net-based tools, enables the static analysis of BPMN models. The formalisation also led to the identification of deficiencies in the BPMN standard specification.",
"title": ""
},
{
"docid": "0807bfb91fdb15b19652e98f0af20f29",
"text": "Finding the factorization of a polynomial over a finite field is of interest not only independently but also for many applications in computer algebra, algebraic coding theory, cryptography, and computational number theory. Polynomial factorization over finite fields is used as a subproblem in algorithms for factoring polynomials over the integers (Zassenhaus, 1969; Collins, 1979; Lenstra et al., 1982; Knuth, 1998), for constructing cyclic redundancy codes and BCH codes (Berlekamp, 1968; MacWilliams and Sloane, 1977; van Lint, 1982), for designing public key cryptosystems (Chor and Rivest, 1985; Odlyzko, 1985; Lenstra, 1991), and for computing the number of points on elliptic curves (Buchmann, 1990). Major improvements have been made in the polynomial factorization problem during this decade both in theory and in practice. From a theoretical point of view, asymptotically faster algorithms have been proposed. However, these advances are yet more striking in practice where variants of the asymptotically fastest algorithms allow us to factor polynomials over finite fields in reasonable amounts of time that were unassailable a few years ago. Our purpose in this survey is to stress the basic ideas behind these methods, to overview experimental results, as well as to give a comprehensive up-to-date bibliography of the problem. Kaltofen (1982, 1990, 1992) has given excellent surveys of",
"title": ""
},
{
"docid": "43fbba82164929b967ed06b7c42dfadd",
"text": "The effectiveness of character n-gram features for representing the stylistic properties of a text has been demonstrated in various independent Authorship Attribution (AA) studies. Moreover, it has been shown that some categories of character n-grams perform better than others both under single and cross-topic AA conditions. In this work, we present an improved algorithm for cross-topic AA. We demonstrate that the effectiveness of character n-grams representation can be significantly enhanced by performing simple pre-processing steps and appropriately tuning the number of features, especially in cross-topic conditions.",
"title": ""
},
{
"docid": "67c444b9538ccfe7a2decdd11523dcd5",
"text": "Attention-based learning for fine-grained image recognition remains a challenging task, where most of the existing methods treat each object part in isolation, while neglecting the correlations among them. In addition, the multi-stage or multi-scale mechanisms involved make the existing methods less efficient and hard to be trained end-to-end. In this paper, we propose a novel attention-based convolutional neural network (CNN) which regulates multiple object parts among different input images. Our method first learns multiple attention region features of each input image through the one-squeeze multi-excitation (OSME) module, and then apply the multi-attention multi-class constraint (MAMC) in a metric learning framework. For each anchor feature, the MAMC functions by pulling same-attention same-class features closer, while pushing different-attention or different-class features away. Our method can be easily trained end-to-end, and is highly efficient which requires only one training stage. Moreover, we introduce Dogs-in-the-Wild, a comprehensive dog species dataset that surpasses similar existing datasets by category coverage, data volume and annotation quality. Extensive experiments are conducted to show the substantial improvements of our method on four benchmark datasets.",
"title": ""
},
{
"docid": "81aa60b514bb11efb9e137b8d13b92e8",
"text": "Linguistic creativity is a marriage of form and content in which each works together to convey our meanings with concision, resonance and wit. Though form clearly influences and shapes our content, the most deft formal trickery cannot compensate for a lack of real insight. Before computers can be truly creative with language, we must first imbue them with the ability to formulate meanings that are worthy of creative expression. This is especially true of computer-generated poetry. If readers are to recognize a poetic turn-of-phrase as more than a superficial manipulation of words, they must perceive and connect with the meanings and the intent behind the words. So it is not enough for a computer to merely generate poem-shaped texts; poems must be driven by conceits that build an affective worldview. This paper describes a conceit-driven approach to computational poetry, in which metaphors and blends are generated for a given topic and affective slant. Subtle inferences drawn from these metaphors and blends can then drive the process of poetry generation. In the same vein, we consider the problem of generating witty insights from the banal truisms of common-sense knowledge bases. Ode to a Keatsian Turn Poetic licence is much more than a licence to frill. Indeed, it is not so much a licence as a contract, one that allows a speaker to subvert the norms of both language and nature in exchange for communicating real insights about some relevant state of affairs. Of course, poetry has norms and conventions of its own, and these lend poems a range of recognizably “poetic” formal characteristics. When used effectively, formal devices such as alliteration, rhyme and cadence can mold our meanings into resonant and incisive forms. However, even the most poetic devices are just empty frills when used only to disguise the absence of real insight. Computer models of poem generation must model more than the frills of poetry, and must instead make these formal devices serve the larger goal of meaning creation. Nonetheless, is often said that we “eat with our eyes”, so that the stylish presentation of food can subtly influence our sense of taste. So it is with poetry: a pleasing form can do more than enhance our recall and comprehension of a meaning – it can also suggest a lasting and profound truth. Experiments by McGlone & Tofighbakhsh (1999, 2000) lend empirical support to this so-called Keats heuristic, the intuitive belief – named for Keats’ memorable line “Beauty is truth, truth beauty” – that a meaning which is rendered in an aesthetically-pleasing form is much more likely to be perceived as truthful than if it is rendered in a less poetic form. McGlone & Tofighbakhsh demonstrated this effect by searching a book of proverbs for uncommon aphorisms with internal rhyme – such as “woes unite foes” – and by using synonym substitution to generate non-rhyming (and thus less poetic) variants such as “troubles unite enemies”. While no significant differences were observed in subjects’ ease of comprehension for rhyming/non-rhyming forms, subjects did show a marked tendency to view the rhyming variants as more truthful expressions of the human condition than the corresponding non-rhyming forms. So a well-polished poetic form can lend even a modestly interesting observation the lustre of a profound insight. An automated approach to poetry generation can exploit this symbiosis of form and content in a number of useful ways. 
It might harvest interesting perspectives on a given topic from a text corpus, or it might search its stores of commonsense knowledge for modest insights to render in immodest poetic forms. We describe here a system that combines both of these approaches for meaningful poetry generation. As shown in the sections to follow, this system – named Stereotrope – uses corpus analysis to generate affective metaphors for a topic on which it is asked to wax poetic. Stereotrope can be asked to view a topic from a particular affective stance (e.g., view love negatively) or to elaborate on a familiar metaphor (e.g. love is a prison). In doing so, Stereotrope takes account of the feelings that different metaphors are likely to engender in an audience. These metaphors are further integrated to yield tight conceptual blends, which may in turn highlight emergent nuances of a viewpoint that are worthy of poetic expression (see Lakoff and Turner, 1989). Stereotrope uses a knowledge-base of conceptual norms to anchor its understanding of these metaphors and blends. While these norms are the stuff of banal clichés and stereotypes, such as that dogs chase cats and cops eat donuts. we also show how Stereotrope finds and exploits corpus evidence to recast these banalities as witty, incisive and poetic insights. Mutual Knowledge: Norms and Stereotypes Samuel Johnson opined that “Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it.” Traditional approaches to the modelling of metaphor and other figurative devices have typically sought to imbue computers with the former (Fass, 1997). More recently, however, the latter kind has gained traction, with the use of the Web and text corpora to source large amounts of shallow knowledge as it is needed (e.g., Veale & Hao 2007a,b; Shutova 2010; Veale & Li, 2011). But the kind of knowledge demanded by knowledgehungry phenomena such as metaphor and blending is very different to the specialist “book” knowledge so beloved of Johnson. These demand knowledge of the quotidian world that we all tacitly share but rarely articulate in words, not even in the thoughtful definitions of Johnson’s dictionary. Similes open a rare window onto our shared expectations of the world. Thus, the as-as-similes “as hot as an oven”, “as dry as sand” and “as tough as leather” illuminate the expected properties of these objects, while the like-similes “crying like a baby”, “singing like an angel” and “swearing like a sailor” reflect intuitons of how these familiar entities are tacitly expected to behave. Veale & Hao (2007a,b) thus harvest large numbers of as-as-similes from the Web to build a rich stereotypical model of familiar ideas and their salient properties, while Özbal & Stock (2012) apply a similar approach on a smaller scale using Google’s query completion service. Fishelov (1992) argues convincingly that poetic and non-poetic similes are crafted from the same words and ideas. Poetic conceits use familiar ideas in non-obvious combinations, often with the aim of creating semantic tension. The simile-based model used here thus harvests almost 10,000 familiar stereotypes (drawing on a range of ~8,000 features) from both as-as and like-similes. Poems construct affective conceits, but as shown in Veale (2012b), the features of a stereotype can be affectively partitioned as needed into distinct pleasant and unpleasant perspectives. 
We are thus confident that a stereotype-based model of common-sense knowledge is equal to the task of generating and elaborating affective conceits for a poem. A stereotype-based model of common-sense knowledge requires both features and relations, with the latter showing how stereotypes relate to each other. It is not enough then to know that cops are tough and gritty, or that donuts are sweet and soft; our stereotypes of each should include the cliché that cops eat donuts, just as dogs chew bones and cats cough up furballs. Following Veale & Li (2011), we acquire inter-stereotype relationships from the Web, not by mining similes but by mining questions. As in Özbal & Stock (2012), we target query completions from a popular search service (Google), which offers a smaller, public proxy for a larger, zealously-guarded search query log. We harvest questions of the form “Why do Xs <relation> Ys”, and assume that since each relationship is presupposed by the question (so “why do bikers wear leathers” presupposes that everyone knows that bikers wear leathers), the triple of subject/relation/object captures a widely-held norm. In this way we harvest over 40,000 such norms from the Web. Generating Metaphors, N-Gram Style! The Google n-grams (Brants & Franz, 2006) is a rich source of popular metaphors of the form Target is Source, such as “politicians are crooks”, “Apple is a cult”, “racism is a disease” and “Steve Jobs is a god”. Let src(T) denote the set of stereotypes that are commonly used to describe a topic T, where commonality is defined as the presence of the corresponding metaphor in the Google n-grams. To find metaphors for proper-named entities, we also analyse n-grams of the form stereotype First [Middle] Last, such as “tyrant Adolf Hitler” and “boss Bill Gates”. Thus, e.g.: src(racism) = {problem, disease, joke, sin, poison, crime, ideology, weapon} src(Hitler) = {monster, criminal, tyrant, idiot, madman, vegetarian, racist, ...} Let typical(T) denote the set of properties and behaviors harvested for T from Web similes (see previous section), and let srcTypical(T) denote the aggregate set of properties and behaviors ascribable to T via the metaphors in src(T): (1) srcTypical (T) = M∈src(T) typical(M) We can generate conceits for a topic T by considering not just obvious metaphors for T, but metaphors of metaphors: (2) conceits(T) = src(T) ∪ M∈src(T) src(M) The features evoked by the conceit T as M are given by: (3) salient (T,M) = [srcTypical(T) ∪ typical(T)]",
"title": ""
},
{
"docid": "a28a29ec67cf50193b9beb579a83bdf4",
"text": "DNA microarray is an efficient new technology that allows to analyze, at the same time, the expression level of millions of genes. The gene expression level indicates the synthesis of different messenger ribonucleic acid (mRNA) molecule in a cell. Using this gene expression level, it is possible to diagnose diseases, identify tumors, select the best treatment to resist illness, detect mutations among other processes. In order to achieve that purpose, several computational techniques such as pattern classification approaches can be applied. The classification problem consists in identifying different classes or groups associated with a particular disease (e.g., various types of cancer, in terms of the gene expression level). However, the enormous quantity of genes and the few samples available, make difficult the processes of learning and recognition of any classification technique. Artificial neural networks (ANN) are computational models in artificial intelligence used for classifying, predicting and approximating functions. Among the most popular ones, we could mention the multilayer perceptron (MLP), the radial basis function neural netrtificial Bee Colony algorithm work (RBF) and support vector machine (SVM). The aim of this research is to propose a methodology for classifying DNA microarray. The proposed method performs a feature selection process based on a swarm intelligence algorithm to find a subset of genes that best describe a disease. After that, different ANN are trained using the subset of genes. Finally, four different datasets were used to validate the accuracy of the proposal and test the relevance of genes to correctly classify the samples of the disease. © 2015 Published by Elsevier B.V. 37 38 39 40 41 42 43 44 45 46 47 . Introduction DNA microarray is an essential technique in molecular biolgy that allows, at the same time, to know the expression level f millions of genes. The DNA microarray consists in immobilizing known deoxyribonucleic acid (DNA) molecule layout in a glass ontainer and then this information with other genetic informaion are hybridized. This process is the base to identify, classify or redict diseases such as different kind of cancer [1–4]. The process to obtain a DNA microarray is based on the comination of a healthy DNA reference with a testing DNA. Using Please cite this article in press as: B.A. Garro, et al., Classification of DNA Appl. Soft Comput. J. (2015), http://dx.doi.org/10.1016/j.asoc.2015.10. uorophores and a laser it is possible to generate a color spot matrix nd obtain quantitative values that represent the expression level f each gene [5]. This expression level is like a signature useful to ∗ Corresponding author. Tel.: +52 5556223899. E-mail addresses: beatriz.garro@iimas.unam.mx (B.A. Garro), atya.rodriguez@iimas.unam.mx (K. Rodríguez), ravem@lasallistas.org.mx R.A. Vázquez). ttp://dx.doi.org/10.1016/j.asoc.2015.10.002 568-4946/© 2015 Published by Elsevier B.V. 48 49 50 51 52 diagnose different diseases. Furthermore, it can be used to identify genes that modify their genetic expression when a medical treatment is applied, identify tumors and genes that make regulation genetic networks, detect mutations among other applications [6]. Computational techniques combined with DNA microarrays can generate efficient results. The classification of DNA microarrays can be divided into three stages: gene finding, class discovery, and class prediction [7,8]. 
The DNA microarray samples have millions of genes and selecting the best genes set in such a way that get a trustworthy classification is a difficult task. Nonetheless, the evolutionary and bio-inspired algorithms, such as genetic algorithm (GA) [9], particle swarm optimization (PSO) [10], bacterial foraging algorithm (BFA) [11] and fish school search (FSS) [12], are excellent options to solve this problem. However, the performance of these algorithms depends of the fitness function, the parameters of the algorithm, the search space complexity, convergence, etc. In microarrays using artificial neural networks and ABC algorithm, 002 general, the performance of these algorithms is very similar among them, but depends of adjusting carefully their parameters. Based on that, the criterion that we used to select the algorithm for finding the set of most relevant genes was in term of the number of 53 54 55 56",
"title": ""
},
{
"docid": "ee4288bcddc046ae5e9bcc330264dc4f",
"text": "Emerging recognition of two fundamental errors underpinning past polices for natural resource issues heralds awareness of the need for a worldwide fundamental change in thinking and in practice of environmental management. The first error has been an implicit assumption that ecosystem responses to human use are linear, predictable and controllable. The second has been an assumption that human and natural systems can be treated independently. However, evidence that has been accumulating in diverse regions all over the world suggests that natural and social systems behave in nonlinear ways, exhibit marked thresholds in their dynamics, and that social-ecological systems act as strongly coupled, complex and evolving integrated systems. This article is a summary of a report prepared on behalf of the Environmental Advisory Council to the Swedish Government, as input to the process of the World Summit on Sustainable Development (WSSD) in Johannesburg, South Africa in 26 August 4 September 2002. We use the concept of resilience--the capacity to buffer change, learn and develop--as a framework for understanding how to sustain and enhance adaptive capacity in a complex world of rapid transformations. Two useful tools for resilience-building in social-ecological systems are structured scenarios and active adaptive management. These tools require and facilitate a social context with flexible and open institutions and multi-level governance systems that allow for learning and increase adaptive capacity without foreclosing future development options.",
"title": ""
},
{
"docid": "86ba97e91a8c2bcb1015c25df7c782db",
"text": "After a knee joint surgery, due to severe pain and immobility of the patient, the tissue around the knee become harder and knee stiffness will occur, which may causes many problems such as scar tissue swelling, bleeding, and fibrosis. A CPM (Continuous Passive Motion) machine is an apparatus that is being used to patient recovery, retrieving moving abilities of the knee, and reducing tissue swelling, after the knee joint surgery. This device prevents frozen joint syndrome (adhesive capsulitis), joint stiffness, and articular cartilage destruction by stimulating joint tissues, and flowing synovial fluid and blood around the knee joint. In this study, a new, light, and portable CPM machine with an appropriate interface, is designed and manufactured. The knee joint can be rotated from the range of -15° to 120° with a pace of 0.1 degree/sec to 1 degree/sec by this machine. One of the most important advantages of this new machine is its own user-friendly interface. This apparatus is controlled via an Android-based application; therefore, the users can use this machine easily via their own smartphones without the necessity to an extra controlling device. Besides, because of its apt size, this machine is a portable device. Smooth movement without any vibration and adjusting capability for different anatomies are other merits of this new CPM machine.",
"title": ""
},
{
"docid": "9001def80e94598f1165a867f3f6a09b",
"text": "Microbial polyhydroxyalkanoates (PHA) have been developed as biodegradable plastics for the past many years. However, PHA still have only a very limited market. Because of the availability of large amount of shale gas, petroleum will not raise dramatically in price, this situation makes PHA less competitive compared with low cost petroleum based plastics. Therefore, two strategies have been adopted to meet this challenge: first, the development of a super PHA production strain combined with advanced fermentation processes to produce PHA at a low cost; second, the construction of functional PHA production strains with technology to control the precise structures of PHA molecules, this will allow the resulting PHA with high value added applications. The recent systems and synthetic biology approaches allow the above two strategies to be implemented. In the not so distant future, the new technology will allow PHA to be produced with a competitive price compared with petroleum-based plastics.",
"title": ""
},
{
"docid": "68810ad35e71ea7d080e7433e227e40e",
"text": "Mobile devices, ubiquitous in modern lifestyle, embody and provide convenient access to our digital lives. Being small and mobile, they are easily lost or stole, therefore require strong authentication to mitigate the risk of unauthorized access. Common knowledge-based mechanism like PIN or pattern, however, fail to scale with the high frequency but short duration of device interactions and ever increasing number of mobile devices carried simultaneously. To overcome these limitations, we present CORMORANT, an extensible framework for risk-aware multi-modal biometric authentication across multiple mobile devices that offers increased security and requires less user interaction.",
"title": ""
},
{
"docid": "c66df34c3a9b34de22c8053044ce5eaa",
"text": "Over the past decade, hospitals in Greece have made significant investments in adopting and implementing new hospital information systems (HISs). Whether these investments will prove beneficial for these organizations depends on the support that will be provided to ensure the effective use of the information systems implemented and also on the satisfaction of its users, which is one of the most important determinants of the success of these systems. Measuring end-user computing satisfaction has a long history within the IS discipline. A number of attempts have been made to evaluate the overall post hoc impact of HIS, focusing on the end-users and more specifically on their satisfaction and the parameters that determine it. The purpose of this paper is to build further upon the existing body of the relevant knowledge by testing past models and suggesting new conceptual perspectives on how end-user computing satisfaction (EUCS) is formed among hospital information system users. All models are empirically tested using data from hospital information system (HIS) users (283). Correlation, explanatory and confirmation factor analysis was performed to test the reliability and validity of the measurement models. The structural equation modeling technique was also used to evaluate the causal models. The empirical results of the study provide support for the EUCS model (incorporating new factors) and enhance the generalizability of the EUCS instrument and its robustness as a valid measure of computing satisfaction and a surrogate for system success in a variety of cultural and linguistic settings. Although the psychometric properties of EUCS appear to be robust across studies and user groups, it should not be considered as the final chapter in the validation and refinement of these scales. Continuing efforts should be made to validate and extend the instrument.",
"title": ""
}
] |
scidocsrr
|
bc04ce8d4c0e12b8ecf89ef2ccd81ded
|
Learning of Human-like Algebraic Reasoning Using Deep Feedforward Neural Networks
|
[
{
"docid": "b44d6d71650fc31c643ac00bd45772cd",
"text": "We give in this paper a complete description of the Knuth-Bendix completion algorithm. We prove its correctness in full, isolating carefully the essential abstract notions, so that the proof may be extended to other versions and extensions of the basic algorithm. We show that it defines a semidecision algorithm for the validity problem in the equational theories for which it applies, yielding a decision procedure whenever the algorithm terminates.",
"title": ""
},
{
"docid": "26ee1e5770a77d030b6230b8eef7e644",
"text": "We study the effectiveness of neural sequence models for premise selection in automated theorem proving, one of the main bottlenecks in the formalization of mathematics. We propose a two stage approach for this task that yields good results for the premise selection task on the Mizar corpus while avoiding the handengineered features of existing state-of-the-art models. To our knowledge, this is the first time deep learning has been applied to theorem proving on a large scale.",
"title": ""
},
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
},
{
"docid": "cdefeefa1b94254083eba499f6f502fb",
"text": "problems To understand the class of polynomial-time solvable problems, we must first have a formal notion of what a \"problem\" is. We define an abstract problem Q to be a binary relation on a set I of problem instances and a set S of problem solutions. For example, an instance for SHORTEST-PATH is a triple consisting of a graph and two vertices. A solution is a sequence of vertices in the graph, with perhaps the empty sequence denoting that no path exists. The problem SHORTEST-PATH itself is the relation that associates each instance of a graph and two vertices with a shortest path in the graph that connects the two vertices. Since shortest paths are not necessarily unique, a given problem instance may have more than one solution. This formulation of an abstract problem is more general than is required for our purposes. As we saw above, the theory of NP-completeness restricts attention to decision problems: those having a yes/no solution. In this case, we can view an abstract decision problem as a function that maps the instance set I to the solution set {0, 1}. For example, a decision problem related to SHORTEST-PATH is the problem PATH that we saw earlier. If i = G, u, v, k is an instance of the decision problem PATH, then PATH(i) = 1 (yes) if a shortest path from u to v has at most k edges, and PATH(i) = 0 (no) otherwise. Many abstract problems are not decision problems, but rather optimization problems, in which some value must be minimized or maximized. As we saw above, however, it is usually a simple matter to recast an optimization problem as a decision problem that is no harder. Encodings If a computer program is to solve an abstract problem, problem instances must be represented in a way that the program understands. An encoding of a set S of abstract objects is a mapping e from S to the set of binary strings. For example, we are all familiar with encoding the natural numbers N = {0, 1, 2, 3, 4,...} as the strings {0, 1, 10, 11, 100,...}. Using this encoding, e(17) = 10001. Anyone who has looked at computer representations of keyboard characters is familiar with either the ASCII or EBCDIC codes. In the ASCII code, the encoding of A is 1000001. Even a compound object can be encoded as a binary string by combining the representations of its constituent parts. Polygons, graphs, functions, ordered pairs, programs-all can be encoded as binary strings. Thus, a computer algorithm that \"solves\" some abstract decision problem actually takes an encoding of a problem instance as input. We call a problem whose instance set is the set of binary strings a concrete problem. We say that an algorithm solves a concrete problem in time O(T (n)) if, when it is provided a problem instance i of length n = |i|, the algorithm can produce the solution in O(T (n)) time. A concrete problem is polynomial-time solvable, therefore, if there exists an algorithm to solve it in time O(n) for some constant k. We can now formally define the complexity class P as the set of concrete decision problems that are polynomial-time solvable. We can use encodings to map abstract problems to concrete problems. Given an abstract decision problem Q mapping an instance set I to {0, 1}, an encoding e : I → {0, 1}* can be used to induce a related concrete decision problem, which we denote by e(Q). If the solution to an abstract-problem instance i I is Q(i) {0, 1}, then the solution to the concreteproblem instance e(i) {0, 1}* is also Q(i). 
As a technicality, there may be some binary strings that represent no meaningful abstract-problem instance. For convenience, we shall assume that any such string is mapped arbitrarily to 0. Thus, the concrete problem produces the same solutions as the abstract problem on binary-string instances that represent the encodings of abstract-problem instances. We would like to extend the definition of polynomial-time solvability from concrete problems to abstract problems by using encodings as the bridge, but we would like the definition to be independent of any particular encoding. That is, the efficiency of solving a problem should not depend on how the problem is encoded. Unfortunately, it depends quite heavily on the encoding. For example, suppose that an integer k is to be provided as the sole input to an algorithm, and suppose that the running time of the algorithm is Θ(k). If the integer k is provided in unary-a string of k 1's-then the running time of the algorithm is O(n) on length-n inputs, which is polynomial time. If we use the more natural binary representation of the integer k, however, then the input length is n = ⌊lg k⌋ + 1. In this case, the running time of the algorithm is Θ (k) = Θ(2), which is exponential in the size of the input. Thus, depending on the encoding, the algorithm runs in either polynomial or superpolynomial time. The encoding of an abstract problem is therefore quite important to our under-standing of polynomial time. We cannot really talk about solving an abstract problem without first specifying an encoding. Nevertheless, in practice, if we rule out \"expensive\" encodings such as unary ones, the actual encoding of a problem makes little difference to whether the problem can be solved in polynomial time. For example, representing integers in base 3 instead of binary has no effect on whether a problem is solvable in polynomial time, since an integer represented in base 3 can be converted to an integer represented in base 2 in polynomial time. We say that a function f : {0, 1}* → {0,1}* is polynomial-time computable if there exists a polynomial-time algorithm A that, given any input x {0, 1}*, produces as output f (x). For some set I of problem instances, we say that two encodings e1 and e2 are polynomially related if there exist two polynomial-time computable functions f12 and f21 such that for any i I , we have f12(e1(i)) = e2(i) and f21(e2(i)) = e1(i). That is, the encoding e2(i) can be computed from the encoding e1(i) by a polynomial-time algorithm, and vice versa. If two encodings e1 and e2 of an abstract problem are polynomially related, whether the problem is polynomial-time solvable or not is independent of which encoding we use, as the following lemma shows. Lemma 34.1 Let Q be an abstract decision problem on an instance set I , and let e1 and e2 be polynomially related encodings on I . Then, e1(Q) P if and only if e2(Q) P. Proof We need only prove the forward direction, since the backward direction is symmetric. Suppose, therefore, that e1(Q) can be solved in time O(nk) for some constant k. Further, suppose that for any problem instance i, the encoding e1(i) can be computed from the encoding e2(i) in time O(n) for some constant c, where n = |e2(i)|. To solve problem e2(Q), on input e2(i), we first compute e1(i) and then run the algorithm for e1(Q) on e1(i). How long does this take? The conversion of encodings takes time O(n), and therefore |e1(i)| = O(n), since the output of a serial computer cannot be longer than its running time. 
Solving the problem on e1(i) takes time O(|e1(i)|) = O(n), which is polynomial since both c and k are constants. Thus, whether an abstract problem has its instances encoded in binary or base 3 does not affect its \"complexity,\" that is, whether it is polynomial-time solvable or not, but if instances are encoded in unary, its complexity may change. In order to be able to converse in an encoding-independent fashion, we shall generally assume that problem instances are encoded in any reasonable, concise fashion, unless we specifically say otherwise. To be precise, we shall assume that the encoding of an integer is polynomially related to its binary representation, and that the encoding of a finite set is polynomially related to its encoding as a list of its elements, enclosed in braces and separated by commas. (ASCII is one such encoding scheme.) With such a \"standard\" encoding in hand, we can derive reasonable encodings of other mathematical objects, such as tuples, graphs, and formulas. To denote the standard encoding of an object, we shall enclose the object in angle braces. Thus, G denotes the standard encoding of a graph G. As long as we implicitly use an encoding that is polynomially related to this standard encoding, we can talk directly about abstract problems without reference to any particular encoding, knowing that the choice of encoding has no effect on whether the abstract problem is polynomial-time solvable. Henceforth, we shall generally assume that all problem instances are binary strings encoded using the standard encoding, unless we explicitly specify the contrary. We shall also typically neglect the distinction between abstract and concrete problems. The reader should watch out for problems that arise in practice, however, in which a standard encoding is not obvious and the encoding does make a difference. A formal-language framework One of the convenient aspects of focusing on decision problems is that they make it easy to use the machinery of formal-language theory. It is worthwhile at this point to review some definitions from that theory. An alphabet Σ is a finite set of symbols. A language L over Σ is any set of strings made up of symbols from Σ. For example, if Σ = {0, 1}, the set L = {10, 11, 101, 111, 1011, 1101, 10001,...} is the language of binary representations of prime numbers. We denote the empty string by ε, and the empty language by Ø. The language of all strings over Σ is denoted Σ*. For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000,...} is the set of all binary strings. Every language L over Σ is a subset of Σ*. There are a variety of operations on languages. Set-theoretic operations, such as union and intersection, follow directly from the set-theoretic definitions. We define the complement of L by . The concatenation of two languages L1 and L2 is the language L = {x1x2 : x1 L1 and x2 L2}. The closure or Kleene star of a language L is the language L*= {ε} L L L ···, where Lk is the language obtained by",
"title": ""
}
] |
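Aside: the encodings passage above argues that an algorithm running in Θ(k) time on an integer input k is polynomial in the input length under a unary encoding but exponential under the standard binary encoding. The following minimal Python sketch, which is not part of the quoted text and uses hypothetical function names, makes that length difference concrete.

```python
def unary_encode(k: int) -> str:
    # Unary encoding: the integer k is a string of k ones, so the input length n equals k.
    return "1" * k

def binary_encode(k: int) -> str:
    # Standard binary encoding: for k >= 1 the input length is floor(lg k) + 1.
    return bin(k)[2:]

def theta_k_algorithm(k: int) -> int:
    # Stand-in for an algorithm whose running time is Theta(k):
    # it simply performs k constant-time steps.
    steps = 0
    for _ in range(k):
        steps += 1
    return steps

if __name__ == "__main__":
    for k in (5, 50, 500, 5000):
        n_unary = len(unary_encode(k))    # n = k
        n_binary = len(binary_encode(k))  # n = floor(lg k) + 1
        work = theta_k_algorithm(k)       # Theta(k) steps
        # Under unary encoding, work == n_unary (polynomial in the input length).
        # Under binary encoding, work lies between 2**(n_binary - 1) and 2**n_binary - 1,
        # i.e. it is exponential in the input length.
        print(f"k={k}: unary length={n_unary}, binary length={n_binary}, steps={work}")
```

Because an integer of binary length n can be as large as 2^n − 1, the same Θ(k) computation amounts to roughly 2^n steps on an n-bit input, which is exactly the distinction the passage draws.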
[
{
"docid": "388f4a555c7aa004f081cbdc6bc0f799",
"text": "We present a multi-GPU version of GPUSPH, a CUDA implementation of fluid-dynamics models based on the smoothed particle hydrodynamics (SPH) numerical method. The SPH is a well-known Lagrangian model for the simulation of free-surface fluid flows; it exposes a high degree of parallelism and has already been successfully ported to GPU. We extend the GPU-based simulator to run simulations on multiple GPUs simultaneously, to obtain a gain in speed and overcome the memory limitations of using a single device. The computational domain is spatially split with minimal overlapping and shared volume slices are updated at every iteration of the simulation. Data transfers are asynchronous with computations, thus completely covering the overhead introduced by slice exchange. A simple yet effective load balancing policy preserves the performance in case of unbalanced simulations due to asymmetric fluid topologies. The obtained speedup factor (up to 4.5x for 6 GPUs) closely follows the expected one (5x for 6 GPUs) and it is possible to run simulations with a higher number of particles than would fit on a single device. We use the Karp-Flatt metric to formally estimate the overall efficiency of the parallelization.",
"title": ""
},
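Aside: the GPUSPH passage above reports measured speedups and cites the Karp-Flatt metric. As a brief illustrative sketch (not code from that paper), the metric's standard definition is straightforward to compute: for a measured speedup psi on p processors, the experimentally determined serial fraction is e = (1/psi - 1/p) / (1 - 1/p).

```python
def karp_flatt(speedup: float, p: int) -> float:
    """Experimentally determined serial fraction e for a measured
    speedup on p processors (Karp-Flatt metric)."""
    if p <= 1:
        raise ValueError("Karp-Flatt is defined for p > 1 processors")
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# Example using the figures quoted in the abstract (for illustration only):
# a 4.5x speedup on 6 GPUs yields a small estimated serial fraction.
e = karp_flatt(speedup=4.5, p=6)
print(f"estimated serial fraction: {e:.3f}")  # about 0.067
```

A small value of e suggests that the remaining gap to ideal speedup comes mostly from parallel overhead rather than inherently serial work, which is how such a figure is typically read.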
{
"docid": "a9595ea31ebfe07ac9d3f7fccf0d1c05",
"text": "The growing movement of biologically inspired design is driven in part by the need for sustainable development and in part by the recognition that nature could be a source of innovation. Biologically inspired design by definition entails cross-domain analogies from biological systems to problems in engineering and other design domains. However, the practice of biologically inspired design at present typically is ad hoc, with little systemization of either biological knowledge for the purposes of engineering design or the processes of transferring knowledge of biological designs to engineering problems. In this paper we present an intricate episode of biologically inspired engineering design that unfolded over an extended period of time. We then analyze our observations in terms of why, what, how, and when questions of analogy. This analysis contributes toward a content theory of creative analogies in the context of biologically inspired design.",
"title": ""
},
{
"docid": "553b72da13c28e56822ccc900ff114fa",
"text": "This paper presents some of the unique verification, validation, and certification challenges that must be addressed during the development of adaptive system software for use in safety-critical aerospace applications. The paper first discusses the challenges imposed by the current regulatory guidelines for aviation software. Next, a number of individual technologies being researched by NASA and others are discussed that focus on various aspects of the software challenges. These technologies include the formal methods of model checking, compositional verification, static analysis, program synthesis, and runtime analysis. Then the paper presents some validation challenges for adaptive control, including proving convergence over long durations, guaranteeing controller stability, using new tools to compute statistical error bounds, identifying problems in fault-tolerant software, and testing in the presence of adaptation. These specific challenges are presented in the context of a software validation effort in testing the Integrated Flight Control System (IFCS) neural control software at the Dryden Flight Research Center. Lastly, the challenges to develop technologies to help prevent aircraft system failures, detect and identify failures that do occur, and provide enhanced guidance and control capability to prevent and recover from vehicle loss of control are briefly cited in connection with ongoing work at the NASA Langley Research Center.",
"title": ""
},
{
"docid": "77f0e2656996aebdccabba4d4e6fbb21",
"text": "Cross-modal retrieval has drawn wide interest for retrieval across different modalities (such as text, image, video, audio, and 3-D model). However, existing methods based on a deep neural network often face the challenge of insufficient cross-modal training data, which limits the training effectiveness and easily leads to overfitting. Transfer learning is usually adopted for relieving the problem of insufficient training data, but it mainly focuses on knowledge transfer only from large-scale datasets as a single-modal source domain (such as ImageNet) to a single-modal target domain. In fact, such large-scale single-modal datasets also contain rich modal-independent semantic knowledge that can be shared across different modalities. Besides, large-scale cross-modal datasets are very labor-consuming to collect and label, so it is significant to fully exploit the knowledge in single-modal datasets for boosting cross-modal retrieval. To achieve the above goal, this paper proposes a modal-adversarial hybrid transfer network (MHTN), which aims to realize knowledge transfer from a single-modal source domain to a cross-modal target domain and learn cross-modal common representation. It is an end-to-end architecture with two subnetworks. First, a modal-sharing knowledge transfer subnetwork is proposed to jointly transfer knowledge from a single modality in the source domain to all modalities in the target domain with a star network structure, which distills modal-independent supplementary knowledge for promoting cross-modal common representation learning. Second, a modal-adversarial semantic learning subnetwork is proposed to construct an adversarial training mechanism between the common representation generator and modality discriminator, making the common representation discriminative for semantics but indiscriminative for modalities to enhance cross-modal semantic consistency during the transfer process. Comprehensive experiments on four widely used datasets show the effectiveness of MHTN.",
"title": ""
},
{
"docid": "c3ad915ac57bf56c4adc47acee816b54",
"text": "How does the brain “produce” conscious subjective experience, an awareness of something? This question has been regarded as perhaps the most challenging one facing science. Penfield et al. [9] had produced maps of whereresponses to electrical stimulation of cerebral cortex could be obtained in human neurosurgical patients. Mapping of cerebral activations in various subjective paradigms has been greatly extended more recently by utilizing PET scan and fMRI techniques. But there were virtually no studies of what the appropriate neurons do in order to elicit a conscious experience. The opportunity for me to attempt such studies arose when my friend and neurosurgeon colleague, Bertram Feinstein, invited me to utilize the opportunity presented by access to stimulating and recording electrodes placed for therapeutic purposes intracranially in awake and responsive patients. With the availability of an excellent facility and team of co-workers, I decided to study neuronal activity requirements for eliciting a simple conscious somatosensory experience, and compare that to activity requirements forunconsciousdetection of sensory signals. We discovered that a surprising duration of appropriate neuronal activations, up to about 500 msec, was required in order to elicit a conscious sensory experience [5]. This was true not only when the initiating stimulus was in any of the cerebral somatosensory pathways; several lines of evidence indicated that even a single stimulus pulse to the skin required similar durations of activities at the cortical level. That discovery led to further studies of such a delay factor for awareness generally, and to profound inferences for the nature of conscious subjective experience. It formed the basis of that highlight in my work [1,3]. For example, a neuronal requirement of about 500 msec to produce awareness meant that we do not experience our sensory world immediately, in real time. But that would contradict our intuitive feeling of the experience in real time. We solved this paradox with a hypothesis for “backward referral” of subjective experience to the time of the first cortical response, the primary evoked potential. This was tested and confirmed experimentally [8], a thrilling result. We could now add subjective referral in time to the already known subjective referral in space. Subjective referrals have no known neural basis and appear to be purely mental phenomena! Another experimental study supported my “time-on” theory for eliciting conscious sensations as opposed to unconscious detection [7]. The time-factor appeared also in an endogenous experience, the conscious intention or will to produce a purely voluntary act [4,6]. In this, we found that cerebral activity initiates this volitional process at least 350 msec before the conscious wish (W) to act appears. However, W appears about 200 msec before the muscles are activated. That retained the possibility that the conscious will could control the outcome of the volitional process; it could veto it and block the performance of the act. These discoveries have profound implications for the nature of free will, for individual responsibility and guilt. Discovery of these time factors led to unexpected ways of viewing conscious experience and unconscious mental functions. Experience of the sensory world is delayed. It raised the possibility that all conscious mental functions are initiated unconsciouslyand become conscious only if neuronal activities persist for a sufficiently long time. 
Conscious experiences must be discontinuous if there is a delay for each; the “stream of consciousness” must be modified. Quick actions or responses, whether in reaction times, sports activities, etc., would all be initially unconscious. Unconscious mental operations, as in creative thinking, artistic impulses, production of speech, performing in music, etc., can all proceed rapidly, since only brief neural actions are sufficient. Rapid unconscious events would allow faster processing in thinking, etc. The delay for awareness provides a physiological opportunity for modulatory influences to affect the content of an experience that finally appears, as in Freudian repression of certain sensory images or thoughts [2,3]. The discovery of the neural time factor (except in conscious will) could not have been made without intracranial access to the neural pathways. They provided an experimentally based entry into how new hypotheses, of how the brain deals with conscious experience, could be directly tested. That was in contrast to the many philosophical approaches which were speculative and mostly untestable. Evidence based views could now be accepted with some confidence.",
"title": ""
},
{
"docid": "57cbffa039208b85df59b7b3bc1718d5",
"text": "This paper provides an in-depth analysis of the technological and social factors that led to the successful adoption of groupware by a virtual team in a educational setting. Drawing on a theoretical framework based on the concept of technological frames, we conducted an action research study to analyse the chronological sequence of events in groupware adoption. We argue that groupware adoption can be conceptualised as a three-step process of expanding and aligning individual technological frames towards groupware. The first step comprises activities that bring knowledge of new technological opportunities to the participants. The second step involves facilitating the participants to articulate and evaluate their work practices and their use of tech© Scandinavian Journal of Information Systems, 2006, 18(2):29-68 nology. The third and final step deals with the participants' commitment to, and practical enactment of, groupware technology. The alignment of individual technological frames requires the articulation and re-evaluation of experience with collaborative practice and with the use of technology. One of the key findings is that this activity cannot take place at the outset of groupware adoption.",
"title": ""
},
{
"docid": "7b02c36cef0c195d755b6cc1c7fbda2e",
"text": "Content based object retrieval across large scale surveillance video dataset is a significant and challenging task, in which learning an effective compact object descriptor plays a critical role. In this paper, we propose an efficient deep compact descriptor with bagging auto-encoders. Specifically, we take advantage of discriminative CNN to extract efficient deep features, which not only involve rich semantic information but also can filter background noise. Besides, to boost the retrieval speed, auto-encoders are used to map the high-dimensional real-valued CNN features into short binary codes. Considering the instability of auto-encoder, we adopt a bagging strategy to fuse multiple auto-encoders to reduce the generalization error, thus further improving the retrieval accuracy. In addition, bagging is easy for parallel computing, so retrieval efficiency can be guaranteed. Retrieval experimental results on the dataset of 100k visual objects extracted from multi-camera surveillance videos demonstrate the effectiveness of the proposed deep compact descriptor.",
"title": ""
},
{
"docid": "9d34171c2fcc8e36b2fb907fe63fc08d",
"text": "A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to light conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed, rather a simple commercial webcam working in visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees , comparable with the vast majority of existing remote gaze trackers.",
"title": ""
},
{
"docid": "adb9eaaf50a43d637bf59ce38d7e8f99",
"text": "In response to a stressor, physiological changes are set into motion to help an individual cope with the stressor. However, chronic activation of these stress responses, which include the hypothalamic–pituitary–adrenal axis and the sympathetic–adrenal–medullary axis, results in chronic production of glucocorticoid hormones and catecholamines. Glucocorticoid receptors expressed on a variety of immune cells bind cortisol and interfere with the function of NF-kB, which regulates the activity of cytokine-producing immune cells. Adrenergic receptors bind epinephrine and norepinephrine and activate the cAMP response element binding protein, inducing the transcription of genes encoding for a variety of cytokines. The changes in gene expression mediated by glucocorticoid hormones and catecholamines can dysregulate immune function. There is now good evidence (in animal and human studies) that the magnitude of stress-associated immune dysregulation is large enough to have health implications.",
"title": ""
},
{
"docid": "d480813d8723b2e81ffc0747e02e32cc",
"text": "In practice, multiple types of distortions are associated with an image quality degradation process. The existing machine learning (ML) based image quality assessment (IQA) approaches generally established a unified model for all distortion types, or each model is trained independently for each distortion type by using single-task learning, which lead to the poor generalization ability of the models as applied to practical image processing. There are often the underlying cross relatedness amongst these single-task learnings in IQA, which is ignored by the previous approaches. To solve this problem, we propose a multi-task learning framework to train IQA models simultaneously across individual tasks each of which concerns one distortion type. These relatedness can be therefore exploited to improve the generalization ability of IQA models from single-task learning. In addition, pairwise image quality rank instead of image quality rating is optimized in learning task. By mapping image quality rank to image quality rating, a novel no-reference (NR) IQA approach can be derived. The experimental results confirm that the proposed Multi-task Rank Learning based IQA (MRLIQ) approach is prominent among all state-of-the-art NR-IQA approaches.",
"title": ""
},
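Aside: the MRLIQ passage above optimizes pairwise image-quality ranks rather than absolute ratings. The exact model and loss are not given in the excerpt, so the snippet below is only a generic sketch of a pairwise logistic (RankNet-style) rank loss over predicted quality scores; all names are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def pairwise_rank_loss(score_a: np.ndarray, score_b: np.ndarray,
                       a_better: np.ndarray) -> float:
    """Pairwise logistic rank loss.

    score_a, score_b : predicted quality scores for images A and B in each pair
    a_better         : 1.0 if A has higher ground-truth quality than B, else 0.0
    """
    diff = score_a - score_b
    # Probability that A outranks B under a logistic model of the score difference.
    p_a = 1.0 / (1.0 + np.exp(-diff))
    eps = 1e-12
    # Binary cross-entropy between predicted and true pair orderings.
    loss = -(a_better * np.log(p_a + eps) +
             (1.0 - a_better) * np.log(1.0 - p_a + eps))
    return float(loss.mean())

# Toy usage: three image pairs with predicted scores and known orderings.
sa = np.array([0.9, 0.2, 0.5])
sb = np.array([0.4, 0.7, 0.5])
labels = np.array([1.0, 0.0, 1.0])
print(pairwise_rank_loss(sa, sb, labels))
```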
{
"docid": "463e6cdd8cb47395e6a4d53c054ff134",
"text": "Programmable stiff sheets with a single low-energy folding motion have been sought in fields ranging from the ancient art of origami to modern meta-materials research. Despite such attention, only two extreme classes of crease patterns are usually studied; special Miura-Ori-based zero-energy patterns, in which crease folding requires no sheet bending, and random patterns with high-energy folding, in which the sheet bends as much as creases fold. We present a physical approach that allows systematic exploration of the entire space of crease patterns as a function of the folding energy. Consequently, we uncover statistical results in origami, finding the entropy of crease patterns of given folding energy. Notably, we identify three classes of Mountain-Valley choices that have widely varying 'typical' folding energies. Our work opens up a wealth of experimentally relevant self-folding origami designs not reliant on Miura-Ori, the Kawasaki condition or any special symmetry in space.",
"title": ""
},
{
"docid": "0739c95aca9678b3c001c4d2eb92ec57",
"text": "The Image segmentation is referred to as one of the most important processes of image processing. Image segmentation is the technique of dividing or partitioning an image into parts, called segments. It is mostly useful for applications like image compression or object recognition, because for these types of applications, it is inefficient to process the whole image. So, image segmentation is used to segment the parts from image for further processing. There exist several image segmentation techniques, which partition the image into several parts based on certain image features like pixel intensity value, color, texture, etc. These all techniques are categorized based on the segmentation method used. In this paper the various image segmentation techniques are reviewed, discussed and finally a comparison of their advantages and disadvantages is listed.",
"title": ""
},
{
"docid": "1293169b867f4455b71c0612428d8d98",
"text": "Emerging applications such as solar cells, fuel cells, energy storage, UPS, etc. require voltage gain and are low voltage high current specifications. This paper thoroughly studies and reviews high-frequency current-fed converters for low voltage high current applications. Hard-switching, dissipative or passive snubber based, active-clamped, naturally-clamped, impulse commutated, and parasitic assisted current-fed topologies are studied, compared, and evaluated. Individual merits, demerits, unique attributes and features of the various categories of the current-fed topologies are reported. Detailed comparison and critical evaluation are presented for the selection and design of a topology for the given application and specifications.",
"title": ""
},
{
"docid": "8e53a1b830917e8f718f75a6a8843b87",
"text": "The final phase of CMOS technology scaling provides continued increases in already vast transistor counts, but only minimal improvements in energy efficiency, thus requiring innovation in circuits and architectures. However, even huge teams are struggling to complete large, complex designs on schedule using traditional rigid development flows. This article presents an agile hardware development methodology, which the authors adopted for 11 RISC-V microprocessor tape-outs on modern 28-nm and 45-nm CMOS processes in the past five years. The authors discuss how this approach enabled small teams to build energy-efficient, cost-effective, and industry-competitive high-performance microprocessors in a matter of months. Their agile methodology relies on rapid iterative improvement of fabricatable prototypes using hardware generators written in Chisel, a new hardware description language embedded in a modern programming language. The parameterized generators construct highly customized systems based on the free, open, and extensible RISC-V platform. The authors present a case study of one such prototype featuring a RISC-V vector microprocessor integrated with a switched-capacitor DC-DC converter alongside an adaptive clock generator in a 28-nm, fully depleted silicon-on-insulator process.",
"title": ""
},
{
"docid": "ee7cb11143e5c974648e9850e4c3d953",
"text": "Dynamics of sleep, vitally important but commonly overlooked, have been associated with various aspects of physical and mental health. Sleep deprivation not only affects daily activities, causing fatigue, impaired memory, and cognitive dysfunction, but also increases the risk of heart disease, diabetes, and obesity. Sleep disruptions generally present as sleep disorders; apnea, insomnia, snoring, sleep walking, and restless legs syndrome are common examples. Diagnosis as well as treatment require one or more sleep studies: polysomnography (PSG), a multiple sleep latency test (MSLT), a maintenance of wakefulness test (MWT), or actigraphy. Considered the gold standard, PSG is commonly administered at a specialized sleep laboratory, which usually has a long waiting list due to an increasingly overwhelming demand. PSG is expensive and time consuming, requiring an overnight stay. To make it more practically accessible, an augmented home-based PSG would be of great benefit. We propose a sleep monitoring device to be used at the comfort of home. Setting it apart from commercially available products, this is a miniature unit. It is affordable, highly customizable, and user-friendly. Patients can view their results in real-time on a graphical display that is easy to read and understand. Physicians can utilize an interactive user interface for ease of data access during medical evaluation. Furthermore, the device features an integrated web application, enabling suitable and attractive data visualization.",
"title": ""
},
{
"docid": "bd3792071a2c7b13bf479aa138f67544",
"text": "Aging is considered the major risk factor for cancer, one of the most important mortality causes in the western world. Inflammaging, a state of chronic, low-level systemic inflammation, is a pervasive feature of human aging. Chronic inflammation increases cancer risk and affects all cancer stages, triggering the initial genetic mutation or epigenetic mechanism, promoting cancer initiation, progression and metastatic diffusion. Thus, inflammaging is a strong candidate to connect age and cancer. A corollary of this hypothesis is that interventions aiming to decrease inflammaging should protect against cancer, as well as most/all age-related diseases. Epidemiological data are concordant in suggesting that the Mediterranean Diet (MD) decreases the risk of a variety of cancers but the underpinning mechanism(s) is (are) still unclear. Here we review data indicating that the MD (as a whole diet or single bioactive nutrients typical of the MD) modulates multiple interconnected processes involved in carcinogenesis and inflammatory response such as free radical production, NF-κB activation and expression of inflammatory mediators, and the eicosanoids pathway. Particular attention is devoted to the capability of MD to affect the balance between pro- and anti-inflammaging as well as to emerging topics such as maintenance of gut microbiota (GM) homeostasis and epigenetic modulation of oncogenesis through specific microRNAs.",
"title": ""
},
{
"docid": "66b0b3a62d0b9ad7aaa37f9bcada700e",
"text": "Wheelchair-mounted robotic arms have been commercially available for a decade. In order to operate these robotic arms, a user must have a high level of cognitive function. Our research focuses on replacing a manufacturer-provided, menu-based interface with a vision-based system while adding autonomy to reduce the cognitive load. Instead of manual task decomposition and execution, the user explicitly designates the end goal, and the system autonomously retrieves the object. In this paper, we present the complete system which can autonomously retrieve a desired object from a shelf. We also present the results of a 15-week study in which 12 participants from our target population used our system, totaling 198 trials.",
"title": ""
},
{
"docid": "cb7e4a454d363b9cb1eb6118a4b00855",
"text": "Stream processing applications reduce the latency of batch data pipelines and enable engineers to quickly identify production issues. Many times, a service can log data to distinct streams, even if they relate to the same real-world event (e.g., a search on Facebook’s search bar). Furthermore, the logging of related events can appear on the server side with different delay, causing one stream to be significantly behind the other in terms of logged event times for a given log entry. To be able to stitch this information together with low latency, we need to be able to join two different streams where each stream may have its own characteristics regarding the degree in which its data is out-of-order. Doing so in a streaming fashion is challenging as a join operator consumes lots of memory, especially with significant data volumes. This paper describes an end-to-end streaming join service that addresses the challenges above through a streaming join operator that uses an adaptive stream synchronization algorithm that is able to handle the different distributions we observe in real-world streams regarding their event times. This synchronization scheme paces the parsing of new data and reduces overall operator memory footprint while still providing high accuracy. We have integrated this into a streaming SQL system and have successfully reduced the latency of several batch pipelines using this approach. PVLDB Reference Format: G. Jacques-Silva, R. Lei, L. Cheng, G. J. Chen, K. Ching, T. Hu, Y. Mei, K. Wilfong, R. Shetty, S. Yilmaz, A. Banerjee, B. Heintz, S. Iyer, A. Jaiswal. Providing Streaming Joins as a Service at Facebook. PVLDB, 11 (12): 1809-1821, 2018. DOI: : https://doi.org/10.14778/3229863.3229869",
"title": ""
},
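Aside: the streaming-join passage above describes buffering two out-of-order streams and pacing their consumption so that related events can be matched within an event-time window. The paper's adaptive synchronization algorithm is not reproduced here; the sketch below is a much simplified, single-process illustration of a windowed equi-join that paces the two inputs by their next event times. All names and the data layout are assumptions.

```python
from collections import namedtuple

Event = namedtuple("Event", ["event_time", "key", "value"])

def windowed_stream_join(left, right, window):
    """Join two event-time-sorted streams on `key`, emitting pairs whose
    event times differ by at most `window`. Consumption is paced by always
    advancing the stream whose next event time is smaller."""
    left_buf, right_buf, out = [], [], []
    li = ri = 0
    while li < len(left) or ri < len(right):
        # Pace: pull from whichever stream has the smaller next event time.
        take_left = ri >= len(right) or (
            li < len(left) and left[li].event_time <= right[ri].event_time)
        if take_left:
            ev = left[li]; li += 1
            buf_mine, buf_other, mine_is_left = left_buf, right_buf, True
        else:
            ev = right[ri]; ri += 1
            buf_mine, buf_other, mine_is_left = right_buf, left_buf, False
        # Drop other-side events that can no longer match (outside the window).
        buf_other[:] = [o for o in buf_other
                        if ev.event_time - o.event_time <= window]
        # Emit matches against the retained other-side buffer.
        for o in buf_other:
            if o.key == ev.key:
                out.append((ev, o) if mine_is_left else (o, ev))
        buf_mine.append(ev)
    return out

# Toy usage: only the "a" events fall within the 3-unit window.
left = [Event(1, "a", "L1"), Event(5, "b", "L2")]
right = [Event(2, "a", "R1"), Event(9, "b", "R2")]
print(windowed_stream_join(left, right, window=3))
```

Pulling from whichever stream is further behind in event time keeps the two buffers roughly aligned, which is the intuition behind pacing the parsing of new data described in the passage.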
{
"docid": "e7525ce804a6931dd9a77e241bcacf15",
"text": "Developments have occurred in all aspects of psychosomatic medicine. Among factors affecting individual vulnerability to all types of disease, the following have been highlighted by recent research: recent and early life events, chronic stress and allostatic load, personality, psychological well-being, health attitudes and behavior. As to the interaction between psychological and biological factors in the course and outcome of disease, the presence of psychiatric (DSM-IV) as well as subclinical (Diagnostic Criteria for Psychosomatic Research) symptoms, illness behavior and the impact on quality of life all need to be assessed. The prevention, treatment and rehabilitation of physical illness include the consideration for psychosomatic prevention, the treatment of psychiatric morbidity and abnormal illness behavior and the use of psychotropic drugs in the medically ill. In the past 60 years, psychosomatic medicine has addressed some fundamental questions, contributing to the growth of other related disciplines, such as psychoneuroendocrinology, psychoimmunology, consultation-liaison psychiatry, behavioral medicine, health psychology and quality of life research. Psychosomatic medicine may also provide a comprehensive frame of reference for several current issues of clinical medicine (the phenomenon of somatization, the increasing occurrence of mysterious symptoms, the demand for well-being and quality of life), including its new dialogue with mind-body and alternative medicine.",
"title": ""
},
{
"docid": "b7f7e80b40f9b8b533811a565270824a",
"text": "Many studies over the past two decades have shown that people and animals can use brain signals to convey their intent to a computer using brain-computer interfaces (BCIs). BCI systems measure specific features of brain activity and translate them into control signals that drive an output. The sensor modalities that have most commonly been used in BCI studies have been electroencephalographic (EEG) recordings from the scalp and single- neuron recordings from within the cortex. Over the past decade, an increasing number of studies has explored the use of electro-corticographic (ECoG) activity recorded directly from the surface of the brain. ECoG has attracted substantial and increasing interest, because it has been shown to reflect specific details of actual and imagined actions, and because its technical characteristics should readily support robust and chronic implementations of BCI systems in humans. This review provides general perspectives on the ECoG platform; describes the different electrophysiological features that can be detected in ECoG; elaborates on the signal acquisition issues, protocols, and online performance of ECoG- based BCI studies to date; presents important limitations of current ECoG studies; discusses opportunities for further research; and finally presents a vision for eventual clinical implementation. In summary, the studies presented to date strongly encourage further research using the ECoG platform for basic neuroscientific research, as well as for translational neuroprosthetic applications.",
"title": ""
}
] |
scidocsrr
|
2dd593d54057504f3af12def3133b838
|
The Effects of Interleaved Practice
|
[
{
"docid": "3ade96c73db1f06d7e0c1f48a0b33387",
"text": "To achieve enduring retention, people must usually study information on multiple occasions. How does the timing of study events affect retention? Prior research has examined this issue only in a spotty fashion, usually with very short time intervals. In a study aimed at characterizing spacing effects over significant durations, more than 1,350 individuals were taught a set of facts and--after a gap of up to 3.5 months--given a review. A final test was administered at a further delay of up to 1 year. At any given test delay, an increase in the interstudy gap at first increased, and then gradually reduced, final test performance. The optimal gap increased as test delay increased. However, when measured as a proportion of test delay, the optimal gap declined from about 20 to 40% of a 1-week test delay to about 5 to 10% of a 1-year test delay. The interaction of gap and test delay implies that many educational practices are highly inefficient.",
"title": ""
}
] |
[
{
"docid": "5d44349955d07a212bc11f6edfaec8b0",
"text": "This investigation develops an innovative algorithm for multiple autonomous unmanned aerial vehicle (UAV) mission routing. The concept of a UAV Swarm Routing Problem (SRP) as a new combinatorics problem, is developed as a variant of the Vehicle Routing Problem with Time Windows (VRPTW). Solutions of SRP problem model result in route assignments per vehicle that successfully track to all targets, on time, within distance constraints. A complexity analysis and multi-objective formulation of the VRPTW indicates the necessity of a stochastic solution approach leading to a multi-objective evolutionary algorithm. A full problem definition of the SRP as well as a multi-objective formulation parallels that of the VRPTW method. Benchmark problems for the VRPTW are modified in order to create SRP benchmarks. The solutions show the SRP solutions are comparable or better than the same VRPTW solutions, while also representing a more realistic UAV swarm routing solution.",
"title": ""
},
{
"docid": "04edf5059bcaf3ed361ed65b8897ba8d",
"text": "The flying-capacitor (FC) topology is one of the more well-established ideas of multilevel conversion, typically applied as an inverter. One of the biggest advantages of the FC converter is the ability to naturally balance capacitor voltage. When natural balancing occurs neither measurements, nor additional control is needed to maintain required capacitors voltage sharing. However, in order to achieve natural voltage balancing suitable conditions must be achieved such as the topology, number of levels, modulation strategy as well as impedance of the output circuitry. Nevertheless this method is effectively applied in various classes of the converter such as inverters, multicell DC-DC, switch-mode DC-DC, AC-AC, as well as rectifiers. The next important issue related to the natural balancing process is its dynamics. Furthermore, in order to reinforce the balancing mechanism an auxiliary resonant balancing circuit is utilized in the converter which can also be critical in the AC-AC converters or switch mode DC-DC converters. This paper also presents an issue of choosing modulation strategy for the FC converter due to the fact that the natural balancing process is well-established for phase shifted PWM whilst other types of modulation can be more favorable for the power quality.",
"title": ""
},
{
"docid": "2ec3f8bc16c6d3dc8022309686c79f8d",
"text": "Manually re-drawing an image in a certain artistic style takes a professional artist a long time. Doing this for a video sequence single-handedly is beyond imagination. We present two computational approaches that transfer the style from one image (for example, a painting) to a whole video sequence. In our first approach, we adapt to videos the original image style transfer technique by Gatys et al. based on energy minimization. We introduce new ways of initialization and new loss functions to generate consistent and stable stylized video sequences even in cases with large motion and strong occlusion. Our second approach formulates video stylization as a learning problem. We propose a deep network architecture and training procedures that allow us to stylize arbitrary-length videos in a consistent and stable way, and nearly in real time. We show that the proposed methods clearly outperform simpler baselines both qualitatively and quantitatively. Finally, we propose a way to adapt these approaches also to 360$$^\\circ $$ ∘ images and videos as they emerge with recent virtual reality hardware.",
"title": ""
},
{
"docid": "c17522f4b9f3b229dae56b394adb69a1",
"text": "This paper investigates fault effects and error propagation in a FlexRay-based network with hybrid topology that includes a bus subnetwork and a star subnetwork. The investigation is based on about 43500 bit-flip fault injection inside different parts of the FlexRay communication controller. To do this, a FlexRay communication controller is modeled by Verilog HDL at the behavioral level. Then, this controller is exploited to setup a FlexRay-based network composed of eight nodes (four nodes in the bus subnetwork and four nodes in the star subnetwork). The faults are injected in a node of the bus subnetwork and a node of the star subnetwork of the hybrid network Then, the faults resulting in the three kinds of errors, namely, content errors, syntax errors and boundary violation errors are characterized. The results of fault injection show that boundary violation errors and content errors are negligibly propagated to the star subnetwork and syntax errors propagation is almost equal in the both bus and star subnetworks. Totally, the percentage of errors propagation in the bus subnetwork is more than the star subnetwork.",
"title": ""
},
{
"docid": "bc4d717db3b3470d7127590b8d165a5d",
"text": "In this paper, we develop a general formalism for describing the C++ programming language, and regular enough to cope with proposed extensions (such as concepts) for C++0x that affect its type system. Concepts are a mechanism for checking template arguments currently being developed to help cope with the massive use of templates in modern C++. The main challenges in developing a formalism for C++ are scoping, overriding, overloading, templates, specialization, and the C heritage exposed in the built-in types. Here, we primarily focus on templates and overloading.",
"title": ""
},
{
"docid": "962858b6cbb3ae5c95d0018075fd0060",
"text": "By 2010, the worldwide annual production of plastics will surpass 300 million tons. Plastics are indispensable materials in modern society, and many products manufactured from plastics are a boon to public health (e.g., disposable syringes, intravenous bags). However, plastics also pose health risks. Of principal concern are endocrine-disrupting properties, as triggered for example by bisphenol A and di-(2-ethylhexyl) phthalate (DEHP). Opinions on the safety of plastics vary widely, and despite more than five decades of research, scientific consensus on product safety is still elusive. This literature review summarizes information from more than 120 peer-reviewed publications on health effects of plastics and plasticizers in lab animals and humans. It examines problematic exposures of susceptible populations and also briefly summarizes adverse environmental impacts from plastic pollution. Ongoing efforts to steer human society toward resource conservation and sustainable consumption are discussed, including the concept of the 5 Rs--i.e., reduce, reuse, recycle, rethink, restrain--for minimizing pre- and postnatal exposures to potentially harmful components of plastics.",
"title": ""
},
{
"docid": "c898f6186ff15dff41dcb7b3376b975d",
"text": "The future grid is evolving into a smart distribution network that integrates multiple distributed energy resources ensuring at the same time reliable operation and increased power quality. In recent years, many research papers have addressed the voltage violation problems that arise from the high penetration of distributed generation. In view of the transition to active network management and the increase in the quantity of collected data, distributed control schemes have been proposed that use pervasive communications to deal with the complexity of smart grid. This paper reviews the recent publications on distributed and decentralized voltage control of smart distribution networks, summarizes their control models, and classifies the solution methodologies. Moreover, it comments on issues that should be addressed in the future and the perspectives of industry applications.",
"title": ""
},
{
"docid": "912c213d76bed8d90f636ea5a6220cf1",
"text": "Across the world, organizations have teams gathering threat data to protect themselves from incoming cyber attacks and maintain a strong cyber security posture. Teams are also sharing information, because along with the data collected internally, organizations need external information to have a comprehensive view of the threat landscape. The information about cyber threats comes from a variety of sources, including sharing communities, open-source and commercial sources, and it spans many different levels and timescales. Immediately actionable information are often low-level indicators of compromise, such as known malware hash values or command-and-control IP addresses, where an actionable response can be executed automatically by a system. Threat intelligence refers to more complex cyber threat information that has been acquired or inferred through the analysis of existing information. Information such as the different malware families used over time with an attack or the network of threat actors involved in an attack, is valuable information and can be vital to understanding and predicting attacks, threat developments, as well as informing law enforcement investigations. This information is also actionable, but on a longer time scale. Moreover, it requires action and decision-making at the human level. There is a need for effective intelligence management platforms to facilitate the generation, refinement, and vetting of data, post sharing. In designing such a system, some of the key challenges that exist include: working with multiple intelligence sources, combining and enriching data for greater intelligence, determining intelligence relevance based on technical constructs, and organizational input, delivery into organizational workflows and into technological products. This paper discusses these challenges encountered and summarizes the community requirements and expectations for an all-encompassing Threat Intelligence Management Platform. The requirements expressed in this paper, when implemented, will serve as building blocks to create systems that can maximize value out of a set of collected intelligence and translate those findings into action for a broad range of stakeholders.",
"title": ""
},
{
"docid": "e9ac1d4fa99e1150a7800471f4f0f73f",
"text": "We present a novel system for automatically generating immersive and interactive virtual reality (VR) environments using the real world as a template. The system captures indoor scenes in 3D, detects obstacles like furniture and walls, and maps walkable areas (WA) to enable real-walking in the generated virtual environment (VE). Depth data is additionally used for recognizing and tracking objects during the VR experience. The detected objects are paired with virtual counterparts to leverage the physicality of the real world for a tactile experience. Our approach is new, in that it allows a casual user to easily create virtual reality worlds in any indoor space of arbitrary size and shape without requiring specialized equipment or training. We demonstrate our approach through a fully working system implemented on the Google Project Tango tablet device.",
"title": ""
},
{
"docid": "065b0af0f1ed195ac90fa3ad041fa4c4",
"text": "We present CapWidgets, passive tangible controls for capacitive touch screens. CapWidgets bring back physical controls to off-the-shelf multi-touch surfaces as found in mobile phones and tablet computers. While the user touches the widget, the surface detects the capacitive marker on the widget's underside. We study the relative performance of this tangible interaction with direct multi-touch interaction and our experimental results show that user performance and preferences are not automatically in favor of tangible widgets and careful design is necessary to validate their properties.",
"title": ""
},
{
"docid": "1d0baee6485920d98492ed25003fc20e",
"text": "Stochastic dual coordinate ascent (SDCA) is an effective technique for solving regularized loss minimization problems in machine learning. This paper considers an extension of SDCA under the minibatch setting that is often used in practice. Our main contribution is to introduce an accelerated minibatch version of SDCA and prove a fast convergence rate for this method. We discuss an implementation of our method over a parallel computing system, and compare the results to both the vanilla stochastic dual coordinate ascent and to the accelerated deterministic gradient descent method of Nesterov [2007].",
"title": ""
},
{
"docid": "9a2e7daf5800cb5ad78646036ee205f0",
"text": "In this paper, we show that through self-interaction and self-observation, an anthropomorphic robot equipped with a range camera can learn object affordances and use this knowledge for planning. In the first step of learning, the robot discovers commonalities in its action-effect experiences by discovering effect categories. Once the effect categories are discovered, in the second step, affordance predictors for each behavior are obtained by learning the mapping from the object features to the effect categories. After learning, the robot can make plans to achieve desired goals, emulate end states of demonstrated actions, monitor the plan execution and take corrective actions using the perceptual structures employed or discovered during learning. We argue that the learning system proposed shares crucial elements with the development of infants of 7-10 months age, who explore the environment and learn the dynamics of the objects through goal-free exploration. In addition, we discuss goal-emulation and planning in relation to older infants with no symbolic inference capability and non-linguistic animals which utilize object affordances to make action plans.",
"title": ""
},
{
"docid": "c53e0a1762e4b69a2b9e5520e3e0bbfe",
"text": "Conventional public key infrastructure (PKI) designs are not optimal and contain security flaws; there is much work underway in improving PKI. The properties given by the Bitcoin blockchain and its derivatives are a natural solution to some of the problems with PKI in particular, certificate transparency and elimination of single points of failure. Recently-proposed blockchain PKI designs are built as public ledgers linking identity with public key, giving no provision of privacy. We consider the suitability of a blockchain-based PKI for contexts in which PKI is required, but in which linking of identity with public key is undesirable; specifically, we show that blockchain can be used to construct a privacy-aware PKI while simultaneously eliminating some of the problems encountered in conventional PKI.",
"title": ""
},
{
"docid": "e0ec89c103aedb1d04fbc5892df288a8",
"text": "This paper compares the computational performances of four model order reduction methods applied to large-scale electric power RLC networks transfer functions with many resonant peaks. Two of these methods require the state-space or descriptor model of the system, while the third requires only its frequency response data. The fourth method is proposed in this paper, being a combination of two of the previous methods. The methods were assessed for their ability to reduce eight test systems, either of the single-input single-output (SISO) or multiple-input multiple-output (MIMO) type. The results indicate that the reduced models obtained, of much smaller dimension, reproduce the dynamic behaviors of the original test systems over an ample range of frequencies with high accuracy.",
"title": ""
},
{
"docid": "869ad7b6bf74f283c8402958a6814a21",
"text": "In this paper, we make a move to build a dialogue system for automatic diagnosis. We first build a dataset collected from an online medical forum by extracting symptoms from both patients’ self-reports and conversational data between patients and doctors. Then we propose a taskoriented dialogue system framework to make the diagnosis for patients automatically, which can converse with patients to collect additional symptoms beyond their self-reports. Experimental results on our dataset show that additional symptoms extracted from conversation can greatly improve the accuracy for disease identification and our dialogue system is able to collect these symptoms automatically and make a better diagnosis.",
"title": ""
},
{
"docid": "7647993815a13899e60fdc17f91e270d",
"text": "of Dissertation presented to COPPE/UFRJ as a partial fulfillment of the requirements for the degree of Master of Science (M.Sc.) WHEN AUTOENCODERS MEET RECOMMENDER SYSTEMS: COFILS APPROACH Julio César Barbieri Gonzalez de Almeida",
"title": ""
},
{
"docid": "5df4c47f9b1d1bffe19a622e9e3147ac",
"text": "Regeneration of load-bearing segmental bone defects is a major challenge in trauma and orthopaedic surgery. The ideal bone graft substitute is a biomaterial that provides immediate mechanical stability, while stimulating bone regeneration to completely bridge defects over a short period. Therefore, selective laser melted porous titanium, designed and fine-tuned to tolerate full load-bearing, was filled with a physiologically concentrated fibrin gel loaded with bone morphogenetic protein-2 (BMP-2). This biomaterial was used to graft critical-sized segmental femoral bone defects in rats. As a control, porous titanium implants were either left empty or filled with a fibrin gels without BMP-2. We evaluated bone regeneration, bone quality and mechanical strength of grafted femora using in vivo and ex vivo µCT scanning, histology, and torsion testing. This biomaterial completely regenerated and bridged the critical-sized bone defects within eight weeks. After twelve weeks, femora were anatomically re-shaped and revealed open medullary cavities. More importantly, new bone was formed throughout the entire porous titanium implants and grafted femora regained more than their innate mechanical stability: torsional strength exceeded twice their original strength. In conclusion, combining porous titanium implants with a physiologically concentrated fibrin gels loaded with BMP-2 improved bone regeneration in load-bearing segmental defects. This material combination now awaits its evaluation in larger animal models to show its suitability for grafting load-bearing defects in trauma and orthopaedic surgery.",
"title": ""
},
{
"docid": "6b58567286efcb6ac857b7ef778a6e40",
"text": "Goal: Bucking the trend of big data, in microdevice engineering, small sample size is common, especially when the device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation, have brought biosignal analysis new challenges. Novel representation and classification approaches need to be developed to effectively recognize targets of interests with the absence of a large training set. Methods: Moving away from the traditional signal analysis in the spatiotemporal domain, we exploit the biosignal representation in the topological domain that would reveal the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify the biosignals even when the sample size is extremely small. Results: This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK) where five categories of bisignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small dataset. Conclusion: Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. Significance: This paper makes two distinctive contributions to the field of biosignal analysis, including performing signal processing in the topological domain and handling extremely small dataset. Currently, there have been no related works that can efficiently tackle the dilemma between avoiding electrochemical reaction and accelerating assay process using ACEK.",
"title": ""
},
{
"docid": "1e30d2f8e11bfbd868fdd0dfc0ea4179",
"text": "In this paper, I study how companies can use their personnel data and information from job satisfaction surveys to predict employee quits. An important issue discussed at length in the paper is how employers can ensure the anonymity of employees in surveys used for management and HR analytics. I argue that a simple mechanism where the company delegates the implementation of job satisfaction surveys to an external consulting company can be optimal. In the subsequent empirical analysis, I use a unique combination of firm-level data (personnel records) and information from job satisfaction surveys to assess the benefits for companies using data in their decision-making. Moreover, I show how companies can move from a descriptive to a predictive approach.",
"title": ""
},
{
"docid": "bd80596e80eab8a08ec5bf7afe49f46d",
"text": "What aspects of movement are represented in the primary motor cortex (M1): relatively low-level parameters like muscle force, or more abstract parameters like handpath? To examine this issue, the activity of neurons in M1 was recorded in a monkey trained to perform a task that dissociates three major variables of wrist movement: muscle activity, direction of movement at the wrist joint, and direction of movement in space. A substantial group of neurons in M1 (28 out of 88) displayed changes in activity that were muscle-like. Unexpectedly, an even larger group of neurons in M1 (44 out of 88) displayed changes in activity that were related to the direction of wrist movement in space independent of the pattern of muscle activity that generated the movement. Thus, both \"muscles\" and \"movements\" appear to be strongly represented in M1.",
"title": ""
}
] |
scidocsrr
|
14aaebf21720dc0e75f06d636974de7f
|
SMARTbot: A Behavioral Analysis Framework Augmented with Machine Learning to Identify Mobile Botnet Applications
|
[
{
"docid": "5a392f4c9779c06f700e2ff004197de9",
"text": "Breiman's bagging and Freund and Schapire's boosting are recent methods for improving the predictive power of classiier learning systems. Both form a set of classiiers that are combined by v oting, bagging by generating replicated boot-strap samples of the data, and boosting by adjusting the weights of training instances. This paper reports results of applying both techniques to a system that learns decision trees and testing on a representative collection of datasets. While both approaches substantially improve predictive accuracy, boosting shows the greater beneet. On the other hand, boosting also produces severe degradation on some datasets. A small change to the way that boosting combines the votes of learned classiiers reduces this downside and also leads to slightly better results on most of the datasets considered.",
"title": ""
},
{
"docid": "3e26fe227e8c270fda4fe0b7d09b2985",
"text": "With the recent emergence of mobile platforms capable of executing increasingly complex software and the rising ubiquity of using mobile platforms in sensitive applications such as banking, there is a rising danger associated with malware targeted at mobile devices. The problem of detecting such malware presents unique challenges due to the limited resources avalible and limited privileges granted to the user, but also presents unique opportunity in the required metadata attached to each application. In this article, we present a machine learning-based system for the detection of malware on Android devices. Our system extracts a number of features and trains a One-Class Support Vector Machine in an offline (off-device) manner, in order to leverage the higher computing power of a server or cluster of servers.",
"title": ""
},
{
"docid": "2f2291baa6c8a74744a16f27df7231d2",
"text": "Malicious programs, such as viruses and worms, are frequently related to previous programs through evolutionary relationships. Discovering those relationships and constructing a phylogeny model is expected to be helpful for analyzing new malware and for establishing a principled naming scheme. Matching permutations of code may help build better models in cases where malware evolution does not keep things in the same order. We describe methods for constructing phylogeny models that uses features called n-perms to match possibly permuted codes. An experiment was performed to compare the relative effectiveness of vector similarity measures using n-perms and n-grams when comparing permuted variants of programs. The similarity measures using n-perms maintained a greater separation between the similarity scores of permuted families of specimens versus unrelated specimens. A subsequent study using a tree generated through n-perms suggests that phylogeny models based on n-perms may help forensic analysts investigate new specimens, and assist in reconciling malware naming inconsistencies Škodlivé programy, jako viry a červy (malware), jsou zřídka psány narychlo, jen tak. Obvykle jsou výsledkem svých evolučních vztahů. Zjištěním těchto vztahů a tvorby v přesné fylogenezi se předpokládá užitečná pomoc v analýze nového malware a ve vytvoření zásad pojmenovacího schématu. Porovnávání permutací kódu uvnitř malware mů že nabídnout výhody pro fylogenní generování, protože evoluční kroky implementované autory malware nemohou uchovat posloupnosti ve sdíleném kódu. Popisujeme rodinu fylogenních generátorů, které provádějí clustering pomocí PQ stromově založených extrakčních vlastností. Byl vykonán experiment v němž výstup stromu z těchto generátorů byl vyhodnocen vzhledem k fylogenezím generovaným pomocí vážených n-gramů. Výsledky ukazují výhody přístupu založeného na permutacích ve fylogenním generování malware. Les codes malveillants, tels que les virus et les vers, sont rarement écrits de zéro; en conséquence, il existe des relations de nature évolutive entre ces différents codes. Etablir ces relations et construire une phylogénie précise permet d’espérer une meilleure capacité d’analyse de nouveaux codes malveillants et de disposer d’une méthode de fait de nommage de ces codes. La concordance de permutations de code avec des parties de codes malveillants sont susceptibles d’être très intéressante dans l’établissement d’une phylogénie, dans la mesure où les étapes évolutives réalisées par les auteurs de codes malveillants ne conservent généralement pas l’ordre des instructions présentes dans le code commun. Nous décrivons ici une famille de générateurs phylogénétiques réalisant des regroupements à l’aide de caractéristiques extraites d’arbres PQ. Une expérience a été réalisée, dans laquelle l’arbre produit par ces générateurs est évalué d’une part en le comparant avec les classificiations de références utilisées par les antivirus par scannage, et d’autre part en le comparant aux phylogénies produites à l’aide de polygrammes de taille n (n-grammes), pondérés. Les résultats démontrent l’intérêt de l’approche utilisant les permutations dans la génération phylogénétique des codes malveillants. Haitalliset ohjelmat, kuten tietokonevirukset ja -madot, kirjoitetaan harvoin alusta alkaen. Tämän seurauksena niistä on löydettävissä evoluution kaltaista samankaltaisuutta. 
Samankaltaisuuksien löytämisellä sekä rakentamalla tarkka evoluutioon perustuva malli voidaan helpottaa uusien haitallisten ohjelmien analysointia sekä toteuttaa nimeämiskäytäntöjä. Permutaatioiden etsiminen koodista saattaa antaa etuja evoluutiomallin muodostamiseen, koska haitallisten ohjelmien kirjoittajien evolutionääriset askeleet eivät välttämättä säilytä jaksoittaisuutta ohjelmakoodissa. Kuvaamme joukon evoluutiomallin muodostajia, jotka toteuttavat klusterionnin käyttämällä PQ-puuhun perustuvia ominaisuuksia. Teimme myös kokeen, jossa puun tulosjoukkoa verrattiin virustentorjuntaohjelman muodostamaan viitejoukkoon sekä evoluutiomalleihin, jotka oli muodostettu painotetuilla n-grammeilla. Tulokset viittaavat siihen, että permutaatioon perustuvaa lähestymistapaa voidaan menestyksekkäästi käyttää evoluutiomallien muodostamineen. Maliziöse Programme, wie z.B. Viren und Würmer, werden nur in den seltensten Fällen komplett neu geschrieben; als Ergebnis können zwischen verschiedenen maliziösen Codes Abhängigkeiten gefunden werden. Im Hinblick auf Klassifizierung und wissenschaftlichen Aufarbeitung neuer maliziöser Codes kann es sehr hilfreich erweisen, Abhängigkeiten zu bestehenden maliziösen Codes darzulegen und somit einen Stammbaum zu erstellen. In dem Artikel wird u.a. auf moderne Ansätze innerhalb der Staumbaumgenerierung anhand ausgewählter Win32 Viren eingegangen. I programmi maligni, quali virus e worm, sono raramente scritti da zero; questo significa che vi sono delle relazioni di evoluzione tra di loro. Scoprire queste relazioni e costruire una filogenia accurata puo’aiutare sia nell’analisi di nuovi programmi di questo tipo, sia per stabilire una nomenclatura avente una base solida. Cercare permutazioni di codice tra vari programmi puo’ dare un vantaggio per la generazione delle filogenie, dal momento che i passaggi evolutivi implementati dagli autori possono non aver preservato la sequenzialita’ del codice originario. In questo articolo descriviamo una famiglia di generatori di filogenie che effettuano clustering usando feature basate su alberi PQ. In un esperimento l’albero di output dei generatori viene confrontato con una classificazione di rifetimento ottenuta da un programma anti-virus, e con delle filogenie generate usando n-grammi pesati. I risultati indicano i risultati positivi dell’approccio basato su permutazioni nella generazione delle filogenie del malware. ",
"title": ""
},
{
"docid": "2e12a5f308472f3f4d19d4399dc85546",
"text": "This paper presents a taxonomy of replay attacks on cryptographic protocols in terms of message origin and destination. The taxonomy is independent of any method used to analyze or prevent such attacks. It is also complete in the sense that any replay attack is composed entirely of elements classi ed by the taxonomy. The classi cation of attacks is illustrated using both new and previously known attacks on protocols. The taxonomy is also used to discuss the appropriateness of particular countermeasures and protocol analysis methods to particular kinds of replays.",
"title": ""
},
{
"docid": "3cae5c0440536b95cf1d0273071ad046",
"text": "Android platform adopts permissions to protect sensitive resources from untrusted apps. However, after permissions are granted by users at install time, apps could use these permissions (sensitive resources) with no further restrictions. Thus, recent years have witnessed the explosion of undesirable behaviors in Android apps. An important part in the defense is the accurate analysis of Android apps. However, traditional syscall-based analysis techniques are not well-suited for Android, because they could not capture critical interactions between the application and the Android system.\n This paper presents VetDroid, a dynamic analysis platform for reconstructing sensitive behaviors in Android apps from a novel permission use perspective. VetDroid features a systematic framework to effectively construct permission use behaviors, i.e., how applications use permissions to access (sensitive) system resources, and how these acquired permission-sensitive resources are further utilized by the application. With permission use behaviors, security analysts can easily examine the internal sensitive behaviors of an app. Using real-world Android malware, we show that VetDroid can clearly reconstruct fine-grained malicious behaviors to ease malware analysis. We further apply VetDroid to 1,249 top free apps in Google Play. VetDroid can assist in finding more information leaks than TaintDroid, a state-of-the-art technique. In addition, we show how we can use VetDroid to analyze fine-grained causes of information leaks that TaintDroid cannot reveal. Finally, we show that VetDroid can help identify subtle vulnerabilities in some (top free) applications otherwise hard to detect.",
"title": ""
}
] |
[
{
"docid": "d0486fc1c105cd3e13ca855221462973",
"text": "Automatic segmentation of an organ and its cystic region is a prerequisite of computer-aided diagnosis. In this paper, we focus on pancreatic cyst segmentation in abdominal CT scan. This task is important and very useful in clinical practice yet challenging due to the low contrast in boundary, the variability in location, shape and the different stages of the pancreatic cancer. Inspired by the high relevance between the location of a pancreas and its cystic region, we introduce extra deep supervision into the segmentation network, so that cyst segmentation can be improved with the help of relatively easier pancreas segmentation. Under a reasonable transformation function, our approach can be factorized into two stages, and each stage can be efficiently optimized via gradient back-propagation throughout the deep networks. We collect a new dataset with 131 pathological samples, which, to the best of our knowledge, is the largest set for pancreatic cyst segmentation. Without human assistance, our approach reports a 63.44% average accuracy, measured by the Dice-Sørensen coefficient (DSC), which is higher than the number (60.46%) without deep supervision.",
"title": ""
},
{
"docid": "0243035834fcce312f7cb1d87ef5c71b",
"text": "This work develops a representation learning method for bipartite networks. While existing works have developed various embedding methods for network data, they have primarily focused on homogeneous networks in general and overlooked the special properties of bipartite networks. As such, these methods can be suboptimal for embedding bipartite networks. In this paper, we propose a new method named BiNE, short for Bipartite Network Embedding, to learn the vertex representations for bipartite networks. By performing biased random walks purposefully, we generate vertex sequences that can well preserve the long-tail distribution of vertices in the original bipartite network. We then propose a novel optimization framework by accounting for both the explicit relations (i.e., observed links) and implicit relations (i.e., unobserved but transitive links) in learning the vertex representations. We conduct extensive experiments on several real datasets covering the tasks of link prediction (classification), recommendation (personalized ranking), and visualization. Both quantitative results and qualitative analysis verify the effectiveness and rationality of our BiNE method.",
"title": ""
},
{
"docid": "fc29f8e0d932140b5f48b35e4175b51a",
"text": "A three-dimensional (3D) geometric model obtained from a 3D device or other approaches is not necessarily watertight due to the presence of geometric deficiencies. These inadequacies must be repaired to create a valid surface mesh on the model as a pre-process of computational engineering analyses. This procedure has been a tedious and labor-intensive step, as there are many kinds of deficiencies that can make the geometry to be nonwatertight, such as gaps and holes. It is still challenging to repair discrete surface models based on available geometric information. The focus of this paper is to develop a new automated method for patching holes on the surface models in order to achieve watertightness. It describes a numerical algorithm utilizing Non-Uniform Rational B-Splines (NURBS) surfaces to generate smooth triangulated surface patches for topologically simple holes on discrete surface models. The Delaunay criterion for point insertion and edge swapping is used in this algorithm to improve the outcome. Surface patches are generated based on existing points surrounding the holes without altering them. The watertight geometry produced can be used in a wide range of engineering applications in the field of computational engineering simulation studies.",
"title": ""
},
{
"docid": "4932cb674e281098a5ef8007d3e37032",
"text": "We present Sparse Non-negative Matrix (SNM) estimation, a novel probability estimation technique for language modeling that can efficiently incorporate arbitrary features. We evaluate SNM language models on two corpora: the One Billion Word Benchmark and a subset of the LDC English Gigaword corpus. Results show that SNM language models trained with n-gram features are a close match for the well-established Kneser-Ney models. The addition of skip-gram features yields a model that is in the same league as the state-of-the-art recurrent neural network language models, as well as complementary: combining the two modeling techniques yields the best known result on the One Billion Word Benchmark. On the Gigaword corpus further improvements are observed using features that cross sentence boundaries. The computational advantages of SNM estimation over both maximum entropy and neural network estimation are probably its main strength, promising an approach that has large flexibility in combining arbitrary features and yet scales gracefully to large amounts of data.",
"title": ""
},
{
"docid": "8bbbaab2cf7825ca98937de14908e655",
"text": "Software Reliability Model is categorized into two, one is static model and the other one is dynamic model. Dynamic models observe the temporary behavior of debugging process during testing phase. In Static Models, modeling and analysis of program logic is done on the same code. A Model which describes about error detection in software Reliability is called Software Reliability Growth Model. This paper reviews various existing software reliability models and there failure intensity function and the mean value function. On the basis of this review a model is proposed for the software reliability having different mean value function and failure intensity function.",
"title": ""
},
{
"docid": "8035245f1aa7edebd74e39332bdef3c9",
"text": "In order to develop theory any community of scientists must agree as to what constitutes its phenomena of interest. A distinction is made between phenomena of interest and exemplars. The concept \"prevention\" is viewed as an exemplar, whereas the concept \"empowerment\" is suggested as a leading candidate for the title \"phenomena of interest\" to Community Psychology. The ecological nature of empowerment theory is described, and some of the terms of empowerment (definitions, conditions, and periods of time) are explicated. Eleven assumptions, presuppositions, and hypotheses are offered as guidelines for theory development and empirical study.",
"title": ""
},
{
"docid": "b687ad05040b3df09a9a6381f7e34d04",
"text": "ÐThe research topic of looking at people, that is, giving machines the ability to detect, track, and identify people and more generally, to interpret human behavior, has become a central topic in machine vision research. Initially thought to be the research problem that would be hardest to solve, it has proven remarkably tractable and has even spawned several thriving commercial enterprises. The principle driving application for this technology is afourth generationo embedded computing: asmarto' environments and portable or wearable devices. The key technical goals are to determine the computer's context with respect to nearby humans (e.g., who, what, when, where, and why) so that the computer can act or respond appropriately without detailed instructions. This paper will examine the mathematical tools that have proven successful, provide a taxonomy of the problem domain, and then examine the stateof-the-art. Four areas will receive particular attention: person identification, surveillance/monitoring, 3D methods, and smart rooms/ perceptual user interfaces. Finally, the paper will discuss some of the research challenges and opportunities. Index TermsÐLooking at people, face recognition, gesture recognition, visual interface, appearance-based vision, wearable computing, ubiquitious.",
"title": ""
},
{
"docid": "e6bbe7de06295817435acafbbb7470cc",
"text": "Cortical circuits work through the generation of coordinated, large-scale activity patterns. In sensory systems, the onset of a discrete stimulus usually evokes a temporally organized packet of population activity lasting ∼50–200 ms. The structure of these packets is partially stereotypical, and variation in the exact timing and number of spikes within a packet conveys information about the identity of the stimulus. Similar packets also occur during ongoing stimuli and spontaneously. We suggest that such packets constitute the basic building blocks of cortical coding.",
"title": ""
},
{
"docid": "92ac3bfdcf5e554152c4ce2e26b77315",
"text": "How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.",
"title": ""
},
{
"docid": "5b56288bb7b49f18148f28798cfd8129",
"text": "According to World Health Organization (WHO) estimations, one out of five adults worldwide will be obese by 2025. Worldwide obesity has doubled since 1980. In fact, more than 1.9 billion adults (39%) of 18 years and older were overweight and over 600 million (13%) of these were obese in 2014. 42 million children under the age of five were overweight or obese in 2014. Obesity is a top public health problem due to its associated morbidity and mortality. This paper reviews the main techniques to measure the level of obesity and body fat percentage, and explains the complications that can carry to the individual's quality of life, longevity and the significant cost of healthcare systems. Researchers and developers are adapting the existing technology, as intelligent phones or some wearable gadgets to be used for controlling obesity. They include the promoting of healthy eating culture and adopting the physical activity lifestyle. The paper also shows a comprehensive study of the most used mobile applications and Wireless Body Area Networks focused on controlling the obesity and overweight. Finally, this paper proposes an intelligent architecture that takes into account both, physiological and cognitive aspects to reduce the degree of obesity and overweight.",
"title": ""
},
{
"docid": "72e9e772ede3d757122997d525d0f79c",
"text": "Deep learning systems, such as Convolutional Neural Networks (CNNs), can infer a hierarchical representation of input data that facilitates categorization. In this paper, we propose to learn affect-salient features for Speech Emotion Recognition (SER) using semi-CNN. The training of semi-CNN has two stages. In the first stage, unlabeled samples are used to learn candidate features by contractive convolutional neural network with reconstruction penalization. The candidate features, in the second step, are used as the input to semi-CNN to learn affect-salient, discriminative features using a novel objective function that encourages the feature saliency, orthogonality and discrimination. Our experiment results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and environment distortion), and outperforms several well-established SER features.",
"title": ""
},
{
"docid": "ab4a788fd82d5953e22032b1361328c2",
"text": "To recognize application of Artificial Neural Networks (ANNs) in weather forecasting, especially in rainfall forecasting a comprehensive literature review from 1923 to 2012 is done and presented in this paper. And it is found that architectures of ANN such as BPN, RBFN is best established to be forecast chaotic behavior and have efficient enough to forecast monsoon rainfall as well as other weather parameter prediction phenomenon over the smaller geographical region.",
"title": ""
},
{
"docid": "d24980c1a1317c8dd055741da1b8c7a7",
"text": "Influence Maximization (IM), which selects a set of <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math> <alternatives><inline-graphic xlink:href=\"li-ieq1-2807843.gif\"/></alternatives></inline-formula> users (called seed set) from a social network to maximize the expected number of influenced users (called influence spread), is a key algorithmic problem in social influence analysis. Due to its immense application potential and enormous technical challenges, IM has been extensively studied in the past decade. In this paper, we survey and synthesize a wide spectrum of existing studies on IM from an <italic>algorithmic perspective</italic>, with a special focus on the following key aspects: (1) a review of well-accepted diffusion models that capture the information diffusion process and build the foundation of the IM problem, (2) a fine-grained taxonomy to classify existing IM algorithms based on their design objectives, (3) a rigorous theoretical comparison of existing IM algorithms, and (4) a comprehensive study on the applications of IM techniques in combining with novel context features of social networks such as topic, location, and time. Based on this analysis, we then outline the key challenges and research directions to expand the boundary of IM research.",
"title": ""
},
{
"docid": "f04efdcb31c3ec070ad0c50737c3eb2b",
"text": "Previous works on image emotion analysis mainly focused on predicting the dominant emotion category or the average dimension values of an image for affective image classification and regression. However, this is often insufficient in various real-world applications, as the emotions that are evoked in viewers by an image are highly subjective and different. In this paper, we propose to predict the continuous probability distribution of image emotions which are represented in dimensional valence-arousal space. We carried out large-scale statistical analysis on the constructed Image-Emotion-Social-Net dataset, on which we observed that the emotion distribution can be well-modeled by a Gaussian mixture model. This model is estimated by an expectation-maximization algorithm with specified initializations. Then, we extract commonly used emotion features at different levels for each image. Finally, we formalize the emotion distribution prediction task as a shared sparse regression (SSR) problem and extend it to multitask settings, named multitask shared sparse regression (MTSSR), to explore the latent information between different prediction tasks. SSR and MTSSR are optimized by iteratively reweighted least squares. Experiments are conducted on the Image-Emotion-Social-Net dataset with comparisons to three alternative baselines. The quantitative results demonstrate the superiority of the proposed method.",
"title": ""
},
{
"docid": "baa5eff969c4c81c863ec4c4c6ce7734",
"text": "The research describes a rapid method for the determination of fatty acid (FA) contents in a micro-encapsulated fish-oil (μEFO) supplement by using attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopic technique and partial least square regression (PLSR) analysis. Using the ATR-FTIR technique, the μEFO powder samples can be directly analysed without any pre-treatment required, and our developed PLSR strategic approach based on the acquired spectral data led to production of a good linear calibration with R(2)=0.99. In addition, the subsequent predictions acquired from an independent validation set for the target FA compositions (i.e., total oil, total omega-3 fatty acids, EPA and DHA) were highly accurate when compared to the actual values obtained from standard GC-based technique, with plots between predicted versus actual values resulting in excellent linear fitting (R(2)≥0.96) in all cases. The study therefore demonstrated not only the substantial advantage of the ATR-FTIR technique in terms of rapidness and cost effectiveness, but also its potential application as a rapid, potentially automated, online monitoring technique for the routine analysis of FA composition in industrial processes when used together with the multivariate data analysis modelling.",
"title": ""
},
{
"docid": "4bfc1e2fbb2b1dea29360c410e5258b4",
"text": "Fault tolerance is gaining interest as a means to increase the reliability and availability of distributed energy systems. In this paper, a voltage-oriented doubly fed induction generator, which is often used in wind turbines, is examined. Furthermore, current, voltage, and position sensor fault detection, isolation, and reconfiguration are presented. Machine operation is not interrupted. A bank of observers provides residuals for fault detection and replacement signals for the reconfiguration. Control is temporarily switched from closed loop into open-loop to decouple the drive from faulty sensor readings. During a short period of open-loop operation, the fault is isolated using parity equations. Replacement signals from observers are used to reconfigure the drive and reenter closed-loop control. There are no large transients in the current. Measurement results and stability analysis show good results.",
"title": ""
},
{
"docid": "1e5ebd122bee855d7e8113d5fe71202d",
"text": "We derive the general expression of the anisotropic magnetoresistance (AMR) ratio of ferromagnets for a relative angle between the magnetization direction and the current direction. We here use the two-current model for a system consisting of a spin-polarized conduction state (s) and localized d states (d) with spin-orbit interaction. Using the expression, we analyze the AMR ratios of Ni and a half-metallic ferromagnet. These results correspond well to the respective experimental results. In addition, we give an intuitive explanation about a relation between the sign of the AMR ratio and the s-d scattering process. Introduction The anisotropic magnetoresistance (AMR) effect, in which the electrical resistivity depends on a relative angle θ between the magnetization (Mex) direction and the electric current (I) direction, has been studied extensively both experimentally [1-5] and theoretically [1,6]. The AMR ratio is often defined by ( ) ( ) ρ θ ρ θ ρ ρ ρ ⊥",
"title": ""
},
{
"docid": "8c4540f3724dab3a173e94bdba7b0999",
"text": "The significant growth of the Internet of Things (IoT) is revolutionizing the way people live by transforming everyday Internet-enabled objects into an interconnected ecosystem of digital and personal information accessible anytime and anywhere. As more objects become Internet-enabled, the security and privacy of the personal information generated, processed and stored by IoT devices become complex and challenging to manage. This paper details the current security and privacy challenges presented by the increasing use of the IoT. Furthermore, investigate and analyze the limitations of the existing solutions with regard to addressing security and privacy challenges in IoT and propose a possible solution to address these challenges. The results of this proposed solution could be implemented during the IoT design, building, testing and deployment phases in the real-life environments to minimize the security and privacy challenges associated with IoT.",
"title": ""
}
] |
scidocsrr
|
1f18e5170c0de6160d9360e87e80eca2
|
MODEC: Multimodal Decomposable Models for Human Pose Estimation
|
[
{
"docid": "ba085cc5591471b8a46e391edf2e78d4",
"text": "Despite recent successes, pose estimators are still somewhat fragile, and they frequently rely on a precise knowledge of the location of the object. Unfortunately, articulated objects are also very difficult to detect. Knowledge about the articulated nature of these objects, however, can substantially contribute to the task of finding them in an image. It is somewhat surprising, that these two tasks are usually treated entirely separately. In this paper, we propose an Articulated Part-based Model (APM) for jointly detecting objects and estimating their poses. APM recursively represents an object as a collection of parts at multiple levels of detail, from coarse-to-fine, where parts at every level are connected to a coarser level through a parent-child relationship (Fig. 1(b)-Horizontal). Parts are further grouped into part-types (e.g., left-facing head, long stretching arm, etc) so as to model appearance variations (Fig. 1(b)-Vertical). By having the ability to share appearance models of part types and by decomposing complex poses into parent-child pairwise relationships, APM strikes a good balance between model complexity and model richness. Extensive quantitative and qualitative experiment results on public datasets show that APM outperforms state-of-the-art methods. We also show results on PASCAL 2007 - cats and dogs - two highly challenging articulated object categories.",
"title": ""
}
] |
[
{
"docid": "371ab49af58c0eb4dc55f3fdf1c741f0",
"text": "Reinforcement learning has shown promise in learning policies that can solve complex problems. However, manually specifying a good reward function can be difficult, especially for intricate tasks. Inverse reinforcement learning offers a useful paradigm to learn the underlying reward function directly from expert demonstrations. Yet in reality, the corpus of demonstrations may contain trajectories arising from a diverse set of underlying reward functions rather than a single one. Thus, in inverse reinforcement learning, it is useful to consider such a decomposition. The options framework in reinforcement learning is specifically designed to decompose policies in a similar light. We therefore extend the options framework and propose a method to simultaneously recover reward options in addition to policy options. We leverage adversarial methods to learn joint reward-policy options using only observed expert states. We show that this approach works well in both simple and complex continuous control tasks and shows significant performance increases in one-shot transfer learning.",
"title": ""
},
{
"docid": "1047e89937593d2e08c5433652316d73",
"text": "We describe a set of top-performing systems at the SemEval 2015 English Semantic Textual Similarity (STS) task. Given two English sentences, each system outputs the degree of their semantic similarity. Our unsupervised system, which is based on word alignments across the two input sentences, ranked 5th among 73 submitted system runs with a mean correlation of 79.19% with human annotations. We also submitted two runs of a supervised system which uses word alignments and similarities between compositional sentence vectors as its features. Our best supervised run ranked 1st with a mean correlation of 80.15%.",
"title": ""
},
{
"docid": "82e170219f7fefdc2c36eb89e44fa0f5",
"text": "The Internet of Things (IOT), the idea of getting real-world objects connected with each other, will change the ways we organize, obtain and consume information radically. Through sensor networks, agriculture can be connected to the IOT, which allows us to create connections among agronomists, farmers and crops regardless of their geographical differences. With the help of the connections, the agronomists will have better understanding of crop growth models and farming practices will be improved as well. This paper reports on the design of the sensor network when connecting agriculture to the IOT. Reliability, management, interoperability, low cost and commercialization are considered in the design. Finally, we share our experiences in both development and deployment.",
"title": ""
},
{
"docid": "70df369be2c95afd04467cd291e60175",
"text": "In this paper, we introduce two novel metric learning algorithms, χ-LMNN and GB-LMNN, which are explicitly designed to be non-linear and easy-to-use. The two approaches achieve this goal in fundamentally different ways: χ-LMNN inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear χ-distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient-boosting to learn non-linear mappings directly in function space and takes advantage of this approach’s robustness, speed, parallelizability and insensitivity towards the single additional hyperparameter. On various benchmark data sets, we demonstrate these methods not only match the current state-of-the-art in terms of kNN classification error, but in the case of χ-LMNN, obtain best results in 19 out of 20 learning settings.",
"title": ""
},
{
"docid": "416f9184ae6b0c04803794b1ab2b8f50",
"text": "Although hydrophilic small molecule drugs are widely used in the clinic, their rapid clearance, suboptimal biodistribution, low intracellular absorption and toxicity can limit their therapeutic efficacy. These drawbacks can potentially be overcome by loading the drug into delivery systems, particularly liposomes; however, low encapsulation efficiency usually results. Many strategies are available to improve both the drug encapsulation efficiency and delivery to the target site to reduce side effects. For encapsulation, passive and active strategies are available. Passive strategies encompass the proper selection of the composition of the formulation, zeta potential, particle size and preparation method. Moreover, many weak acids and bases, such as doxorubicin, can be actively loaded with high efficiency. It is highly desirable that once the drug is encapsulated, it should be released preferentially at the target site, resulting in an optimal therapeutic effect devoid of side effects. For this purpose, targeted and triggered delivery approaches are available. The rapidly increasing knowledge of the many overexpressed biochemical makers in pathological sites, reviewed herein, has enabled the development of liposomes decorated with ligands for cell-surface receptors and active delivery. Furthermore, many liposomal formulations have been designed to actively release their content in response to specific stimuli, such as a pH decrease, heat, external alternating magnetic field, ultrasound or light. More than half a century after the discovery of liposomes, some hydrophilic small molecule drugs loaded in liposomes with high encapsulation efficiency are available on the market. However, targeted liposomes or formulations able to deliver the drug after a stimulus are not yet a reality in the clinic and are still awaited.",
"title": ""
},
{
"docid": "2d95b9919e1825ea46b5c5e6a545180c",
"text": "Computed tomography (CT) generates a stack of cross-sectional images covering a region of the body. The visual assessment of these images for the identification of potential abnormalities is a challenging and time consuming task due to the large amount of information that needs to be processed. In this article we propose a deep artificial neural network architecture, ReCTnet, for the fully-automated detection of pulmonary nodules in CT scans. The architecture learns to distinguish nodules and normal structures at the pixel level and generates three-dimensional probability maps highlighting areas that are likely to harbour the objects of interest. Convolutional and recurrent layers are combined to learn expressive image representations exploiting the spatial dependencies across axial slices. We demonstrate that leveraging intra-slice dependencies substantially increases the sensitivity to detect pulmonary nodules without inflating the false positive rate. On the publicly available LIDC/IDRI dataset consisting of 1,018 annotated CT scans, ReCTnet reaches a detection sensitivity of 90.5% with an average of 4.5 false positives per scan. Comparisons with a competing multi-channel convolutional neural network for multislice segmentation and other published methodologies using the same dataset provide evidence that ReCTnet offers significant performance gains. 1 ar X iv :1 60 9. 09 14 3v 1 [ st at .M L ] 2 8 Se p 20 16",
"title": ""
},
{
"docid": "96aa1f19a00226af7b5bbe0bb080582e",
"text": "CONTEXT\nComprehensive discharge planning by advanced practice nurses has demonstrated short-term reductions in readmissions of elderly patients, but the benefits of more intensive follow-up of hospitalized elders at risk for poor outcomes after discharge has not been studied.\n\n\nOBJECTIVE\nTo examine the effectiveness of an advanced practice nurse-centered discharge planning and home follow-up intervention for elders at risk for hospital readmissions.\n\n\nDESIGN\nRandomized clinical trial with follow-up at 2, 6, 12, and 24 weeks after index hospital discharge.\n\n\nSETTING\nTwo urban, academically affiliated hospitals in Philadelphia, Pa.\n\n\nPARTICIPANTS\nEligible patients were 65 years or older, hospitalized between August 1992 and March 1996, and had 1 of several medical and surgical reasons for admission.\n\n\nINTERVENTION\nIntervention group patients received a comprehensive discharge planning and home follow-up protocol designed specifically for elders at risk for poor outcomes after discharge and implemented by advanced practice nurses.\n\n\nMAIN OUTCOME MEASURES\nReadmissions, time to first readmission, acute care visits after discharge, costs, functional status, depression, and patient satisfaction.\n\n\nRESULTS\nA total of 363 patients (186 in the control group and 177 in the intervention group) were enrolled in the study; 70% of intervention and 74% of control subjects completed the trial. Mean age of sample was 75 years; 50% were men and 45% were black. By week 24 after the index hospital discharge, control group patients were more likely than intervention group patients to be readmitted at least once (37.1 % vs 20.3 %; P<.001). Fewer intervention group patients had multiple readmissions (6.2% vs 14.5%; P = .01) and the intervention group had fewer hospital days per patient (1.53 vs 4.09 days; P<.001). Time to first readmission was increased in the intervention group (P<.001). At 24 weeks after discharge, total Medicare reimbursements for health services were about $1.2 million in the control group vs about $0.6 million in the intervention group (P<.001). There were no significant group differences in post-discharge acute care visits, functional status, depression, or patient satisfaction.\n\n\nCONCLUSIONS\nAn advanced practice nurse-centered discharge planning and home care intervention for at-risk hospitalized elders reduced readmissions, lengthened the time between discharge and readmission, and decreased the costs of providing health care. Thus, the intervention demonstrated great potential in promoting positive outcomes for hospitalized elders at high risk for rehospitalization while reducing costs.",
"title": ""
},
{
"docid": "630c4e87333606c6c8e7345cb0865c64",
"text": "MapReduce plays a critical role as a leading framework for big data analytics. In this paper, we consider a geodistributed cloud architecture that provides MapReduce services based on the big data collected from end users all over the world. Existing work handles MapReduce jobs by a traditional computation-centric approach that all input data distributed in multiple clouds are aggregated to a virtual cluster that resides in a single cloud. Its poor efficiency and high cost for big data support motivate us to propose a novel data-centric architecture with three key techniques, namely, cross-cloud virtual cluster, data-centric job placement, and network coding based traffic routing. Our design leads to an optimization framework with the objective of minimizing both computation and transmission cost for running a set of MapReduce jobs in geo-distributed clouds. We further design a parallel algorithm by decomposing the original large-scale problem into several distributively solvable subproblems that are coordinated by a high-level master problem. Finally, we conduct real-world experiments and extensive simulations to show that our proposal significantly outperforms the existing works.",
"title": ""
},
{
"docid": "3ea533be157b63e673f43205d195d13e",
"text": "Recent work on fairness in machine learning has begun to be extended to recommender systems. While there is a tension between the goals of fairness and of personalization, there are contexts in which a global evaluations of outcomes is possible and where equity across such outcomes is a desirable goal. In this paper, we introduce the concept of a balanced neighborhood as a mechanism to preserve personalization in recommendation while enhancing the fairness of recommendation outcomes. We show that a modified version of the SLIM algorithm can be used to improve the balance of user neighborhoods, with the result of achieving greater outcome fairness in a real-world dataset with minimal loss in ranking performance.",
"title": ""
},
{
"docid": "1a6e9229f6bc8f6dc0b9a027e1d26607",
"text": "− This work illustrates an analysis of Rogowski coils for power applications, when operating under non ideal measurement conditions. The developed numerical model, validated by comparison with other methods and experiments, enables to investigate the effects of the geometrical and constructive parameters on the measurement behavior of the coil.",
"title": ""
},
{
"docid": "ce53aa803d587301a47166c483ecec34",
"text": "Boosting takes on various forms with different programs using different loss functions, different base models, and different optimization schemes. The gbm package takes the approach described in [3] and [4]. Some of the terminology differs, mostly due to an effort to cast boosting terms into more standard statistical terminology (e.g. deviance). In addition, the gbm package implements boosting for models commonly used in statistics but not commonly associated with boosting. The Cox proportional hazard model, for example, is an incredibly useful model and the boosting framework applies quite readily with only slight modification [7]. Also some algorithms implemented in the gbm package differ from the standard implementation. The AdaBoost algorithm [2] has a particular loss function and a particular optimization algorithm associated with it. The gbm implementation of AdaBoost adopts AdaBoost’s exponential loss function (its bound on misclassification rate) but uses Friedman’s gradient descent algorithm rather than the original one proposed. So the main purposes of this document is to spell out in detail what the gbm package implements.",
"title": ""
},
{
"docid": "6091748ab964ea58a06f9b8335f9829e",
"text": "Apprenticeship is an inherently social learning method with a long history of helping novices become experts in fields as diverse as midwifery, construction, and law. At the center of apprenticeship is the concept of more experienced people assisting less experienced ones, providing structure and examples to support the attainment of goals. Traditionally apprenticeship has been associated with learning in the context of becoming skilled in a trade or craft—a task that typically requires both the acquisition of knowledge, concepts, and perhaps psychomotor skills and the development of the ability to apply the knowledge and skills in a context-appropriate manner—and far predates formal schooling as it is known today. In many nonindustrialized nations apprenticeship remains the predominant method of teaching and learning. However, the overall concept of learning from experts through social interactions is not one that should be relegated to vocational and trade-based training while K–12 and higher educational institutions seek to prepare students for operating in an information-based society. Apprenticeship as a method of teaching and learning is just as relevant within the cognitive and metacognitive domain as it is in the psychomotor domain. In the last 20 years, the recognition and popularity of facilitating learning of all types through social methods have grown tremendously. Educators and educational researchers have looked to informal learning settings, where such methods have been in continuous use, as a basis for creating more formal instructional methods and activities that take advantage of these social constructivist methods. Cognitive apprenticeship— essentially, the use of an apprentice model to support learning in the cognitive domain—is one such method that has gained respect and popularity throughout the 1990s and into the 2000s. Scaffolding, modeling, mentoring, and coaching are all methods of teaching and learning that draw on social constructivist learning theory. As such, they promote learning that occurs through social interactions involving negotiation of content, understanding, and learner needs, and all three generally are considered forms of cognitive apprenticeship (although certainly they are not the only methods). This chapter first explores prevailing definitions and underlying theories of these teaching and learning strategies and then reviews the state of research in these area.",
"title": ""
},
{
"docid": "5c5e9a93b4838cbebd1d031a6d1038c4",
"text": "Live migration of virtual machines (VMs) is key feature of virtualization that is extensively leveraged in IaaS cloud environments: it is the basic building block of several important features, such as load balancing, pro-active fault tolerance, power management, online maintenance, etc. While most live migration efforts concentrate on how to transfer the memory from source to destination during the migration process, comparatively little attention has been devoted to the transfer of storage. This problem is gaining increasing importance: due to performance reasons, virtual machines that run large-scale, data-intensive applications tend to rely on local storage, which poses a difficult challenge on live migration: it needs to handle storage transfer in addition to memory transfer. This paper proposes a memory migration independent approach that addresses this challenge. It relies on a hybrid active push / prioritized prefetch strategy, which makes it highly resilient to rapid changes of disk state exhibited by I/O intensive workloads. At the same time, it is minimally intrusive in order to ensure a maximum of portability with a wide range of hypervisors. Large scale experiments that involve multiple simultaneous migrations of both synthetic benchmarks and a real scientific application show improvements of up to 10x faster migration time, 10x less bandwidth consumption and 8x less performance degradation over state-of-art.",
"title": ""
},
{
"docid": "26c003f70bbaade54b84dcb48d2a08c9",
"text": "Tricaine methanesulfonate (TMS) is an anesthetic that is approved for provisional use in some jurisdictions such as the United States, Canada, and the United Kingdom (UK). Many hatcheries and research studies use TMS to immobilize fish for marking or transport and to suppress sensory systems during invasive procedures. Improper TMS use can decrease fish viability, distort physiological data, or result in mortalities. Because animals may be anesthetized by junior staff or students who may have little experience in fish anesthesia, training in the proper use of TMS may decrease variability in recovery, experimental results and increase fish survival. This document acts as a primer on the use of TMS for anesthetizing juvenile salmonids, with an emphasis on its use in surgical applications. Within, we briefly describe many aspects of TMS including the legal uses for TMS, and what is currently known about the proper storage and preparation of the anesthetic. We outline methods and precautions for administration and changes in fish behavior during progressively deeper anesthesia and discuss the physiological effects of TMS and its potential for compromising fish health. Despite the challenges of working with TMS, it is currently one of the few legal options available in the USA and in other countries until other anesthetics are approved and is an important tool for the intracoelomic implantation of electronic tags in fish.",
"title": ""
},
{
"docid": "181a3d68fd5b5afc3527393fc3b276f9",
"text": "Updating inference in response to new evidence is a fundamental challenge in artificial intelligence. Many real problems require large probabilistic graphical models, containing possibly millions of interdependent variables. For such large models, jointly updating the most likely (i.e., MAP) configuration of the variables each time new evidence is encountered can be infeasible, even if inference is tractable. In this paper, we introduce budgeted online collective inference, in which the MAP configuration of a graphical model is updated efficiently by revising the assignments to a subset of the variables while holding others fixed. The goal is to selectively update certain variables without sacrificing quality with respect to full inference. To formalize the consequences of partially updating inference, we introduce the concept of inference regret. We derive inference regret bounds for a class of graphical models with strongly-convex free energies. These theoretical insights, combined with a thorough analysis of the optimization solver, motivate new approximate methods for efficiently updating the variable assignments under a budget constraint. In experiments, we demonstrate that our algorithms can reduce inference time by 65% with accuracy comparable to full inference.",
"title": ""
},
{
"docid": "1c1f5159ab51923fcc4fef2fad501159",
"text": "This article assesses the consequences of poverty between a child's prenatal year and 5th birthday for several adult achievement, health, and behavior outcomes, measured as late as age 37. Using data from the Panel Study of Income Dynamics (1,589) and controlling for economic conditions in middle childhood and adolescence, as well as demographic conditions at the time of the birth, findings indicate statistically significant and, in some cases, quantitatively large detrimental effects of early poverty on a number of attainment-related outcomes (adult earnings and work hours). Early-childhood poverty was not associated with such behavioral measures as out-of-wedlock childbearing and arrests. Most of the adult earnings effects appear to operate through early poverty's association with adult work hours.",
"title": ""
},
{
"docid": "3ae6cb348cff49851cf15036483e2117",
"text": "Rate-Distortion Methods for Image and Video Compression: An. Or Laplacian p.d.f.s and optimal bit allocation techniques to ensure that bits.Rate-Distortion Methods for Image and Video Compression. Coding Parameters: chosen on input-by-input rampant caries pdf basis to optimize. In this article we provide an overview of rate-distortion R-D based optimization techniques and their practical application to image and video. Rate-distortion methods for image and video compression. Enter the password to open this PDF file.Bernd Girod: EE368b Image and Video Compression. Lower the bit-rate R by allowing some acceptable distortion. Consideration of a specific coding method. Bit-rate at least R.rate-distortion R-D based optimization techniques and their practical application to. Area of R-D optimized image and video coding see 1, 2 and many of the. Such Intra coding alone is in common use as ramones guitar tab pdf a video coding method today. MPEG-2: A step higher in bit rate, picture quality, and popularity.coding, rate distortion RD optimization, soft decision quantization SDQ. RD methods for video compression can be classified into two categories. Practical SDQ include without limitation SDQ in JPEG image coding and H. However, since we know that most lossy compression techniques operate on data. In image and video compression, the human perception models are less well. The conditional PDF QY Xy x that minimize rate for a given distortion D.The H. 264AVC video coding standard has been recently proposed by the Joint. MB which determine the overall rate and the distortion of the coded. Figure 2: The picture encoding process in the proposed method. Selection of λ and.fact, operational rate-distortion methods have come into wide use for image and video coders. In previous work, de Queiroz applied this technique to finding.",
"title": ""
},
{
"docid": "fdc875181fe37e6b469d07e0e580fadb",
"text": "Attention mechanism has recently attracted increasing attentions in the area of facial action unit (AU) detection. By finding the region of interest (ROI) of each AU with the attention mechanism, AU related local features can be captured. Most existing attention based AU detection works use prior knowledge to generate fixed attentions or refine the predefined attentions within a small range, which limits their capacity to model various AUs. In this paper, we propose a novel end-to-end weakly-supervised attention and relation learning framework for AU detection with only AU labels, which has not been explored before. In particular, multi-scale features shared by each AU are learned firstly, and then both channel-wise attentions and spatial attentions are learned to select and extract AU related local features. Moreover, pixellevel relations for AUs are further captured to refine spatial attentions so as to extract more relevant local features. Extensive experiments on BP4D and DISFA benchmarks demonstrate that our framework (i) outperforms the state-of-the-art methods for AU detection, and (ii) can find the ROI of each AU and capture the relations among AUs adaptively.",
"title": ""
},
{
"docid": "8be921cfab4586b6a19262da9a1637de",
"text": "Automatic segmentation of microscopy images is an important task in medical image processing and analysis. Nucleus detection is an important example of this task. Mask-RCNN is a recently proposed state-of-the-art algorithm for object detection, object localization, and object instance segmentation of natural images. In this paper we demonstrate that Mask-RCNN can be used to perform highly effective and efficient automatic segmentations of a wide range of microscopy images of cell nuclei, for a variety of cells acquired under a variety of conditions.",
"title": ""
},
{
"docid": "37a47bd2561b534d5734d250d16ff1c2",
"text": "Many chronic eye diseases can be conveniently investigated by observing structural changes in retinal blood vessel diameters. However, detecting changes in an accurate manner in face of interfering pathologies is a challenging task. The task is generally performed through an automatic computerized process. The literature shows that powerful methods have already been proposed to identify vessels in retinal images. Though a significant progress has been achieved toward methods to separate blood vessels from the uneven background, the methods still lack the necessary sensitivity to segment fine vessels. Recently, a multi-scale line-detector method proved its worth in segmenting thin vessels. This paper presents modifications to boost the sensitivity of this multi-scale line detector. First, a varying window size with line-detector mask is suggested to detect small vessels. Second, external orientations are fed to steer the multi-scale line detectors into alignment with flow directions. Third, optimal weights are suggested for weighted linear combinations of individual line-detector responses. Fourth, instead of using one global threshold, a hysteresis threshold is proposed to find a connected vessel tree. The overall impact of these modifications is a large improvement in noise removal capability of the conventional multi-scale line-detector method while finding more of the thin vessels. The contrast-sensitive steps are validated using a publicly available database and show considerable promise for the suggested strategy.",
"title": ""
}
] |
scidocsrr
|
95be3ea3a568d4bee3fdce29ac84bec9
|
A Global Analysis of Emoji Usage
|
[
{
"docid": "5ea65120d42f75d594d73e92cc82dc48",
"text": "There is a new generation of emoticons, called emojis, that is increasingly being used in mobile communications and social media. In the past two years, over ten billion emojis were used on Twitter. Emojis are Unicode graphic symbols, used as a shorthand to express concepts and ideas. In contrast to the small number of well-known emoticons that carry clear emotional contents, there are hundreds of emojis. But what are their emotional contents? We provide the first emoji sentiment lexicon, called the Emoji Sentiment Ranking, and draw a sentiment map of the 751 most frequently used emojis. The sentiment of the emojis is computed from the sentiment of the tweets in which they occur. We engaged 83 human annotators to label over 1.6 million tweets in 13 European languages by the sentiment polarity (negative, neutral, or positive). About 4% of the annotated tweets contain emojis. The sentiment analysis of the emojis allows us to draw several interesting conclusions. It turns out that most of the emojis are positive, especially the most popular ones. The sentiment distribution of the tweets with and without emojis is significantly different. The inter-annotator agreement on the tweets with emojis is higher. Emojis tend to occur at the end of the tweets, and their sentiment polarity increases with the distance. We observe no significant differences in the emoji rankings between the 13 languages and the Emoji Sentiment Ranking. Consequently, we propose our Emoji Sentiment Ranking as a European language-independent resource for automated sentiment analysis. Finally, the paper provides a formalization of sentiment and a novel visualization in the form of a sentiment bar.",
"title": ""
}
] |
[
{
"docid": "8a59e2b140eaf91a4a5fd8c109682543",
"text": "A search-based procedural content generation (SBPCG) algorithm for strategy game maps is proposed. Two representations for strategy game maps are devised, along with a number of objectives relating to predicted player experience. A multiobjective evolutionary algorithm is used for searching the space of maps for candidates that satisfy pairs of these objectives. As the objectives are inherently partially conflicting, the algorithm generates Pareto fronts showing how these objectives can be balanced. Such fronts are argued to be a valuable tool for designers looking to balance various design needs. Choosing appropriate points (manually or automatically) on the Pareto fronts, maps can be found that exhibit good map design according to specified criteria, and could either be used directly in e.g. an RTS game or form the basis for further human design.",
"title": ""
},
{
"docid": "78e8d8b0508e011f5dc0e63fa1f0a1ee",
"text": "This paper proposes chordal surface transform for representation and discretization of thin section solids, such as automobile bodies, plastic injection mold components and sheet metal parts. A multiple-layered all-hex mesh with a high aspect ratio is a typical requirement for mold flow simulation of thin section objects. The chordal surface transform reduces the problem of 3D hex meshing to 2D quad meshing on the chordal surface. The chordal surface is generated by cutting a tet mesh of the input CAD model at its mid plane. Radius function and curvature of the chordal surface are used to provide sizing function for quad meshing. Two-way mapping between the chordal surface and the boundary is used to sweep the quad elements from the chordal surface onto the boundary, resulting in a layered all-hex mesh. The algorithm has been tested on industrial models, whose chordal surface is 2-manifold. The graphical results of the chordal surface and the multiple-layered all-hex mesh are presented along with the quality measures. The results show geometrically adaptive high aspect ratio all-hex mesh, whose average scaled Jacobean, is close to 1.0.",
"title": ""
},
{
"docid": "292074e2e9a9a7a99a72876a3905ce7a",
"text": "A textile endfire antenna operating in the 60-GHz band is proposed for wireless body area networks (BANs). The permittivity of the textile substrate has been accurately characterized, and the Yagi-Uda antenna has been fabricated using an ad hoc manufacturing process. Its performance in terms of reflection coefficient, radiation pattern, gain, and efficiency has been studied in free space and on a tissue-equivalent phantom representing the human body. It is shown that the antenna is matched in the 57-64-GHz band. Its measured on-body efficiency and maximum gain equal 48.0% and 11.9 dBi, respectively. To our best knowledge, this is the first textile antenna for on-body wireless communications reported at millimeter waves.",
"title": ""
},
{
"docid": "e0cc4796b1680a626bbde7a3a525b7fe",
"text": "To eliminate the need to evaluate the intersection curves in explicit representations of surface cutouts or of trimmed faces in BReps of CSG solids, we advocate using constructive solid trimming (CST). A CST face is the intersection of a surface with a Blist representation of a trimming CSG volume. We propose a new GPU-based CSG rendering algorithm that trims the boundary of each primitive using a Blist of its active zone. This approach is faster than the previously reported Blister approach, eliminates occasional speckles of wrongly colored pixels, and provides additional capabilities: painting on surfaces, rendering semitransparent CSG models, and highlighting selected features in the BReps of CSG models.",
"title": ""
},
{
"docid": "7b52e6fb962eedf29284ab32b8ea9f8e",
"text": "This paper presents a novel binary monarch butterfly optimization (BMBO) method, intended for addressing the 0–1 knapsack problem (0–1 KP). Two tuples, consisting of real-valued vectors and binary vectors, are used to represent the monarch butterfly individuals in BMBO. Real-valued vectors constitute the search space, whereas binary vectors form the solution space. In other words, monarch butterfly optimization works directly on real-valued vectors, while solutions are represented by binary vectors. Three kinds of individual allocation schemes are tested in order to achieve better performance. Toward revising the infeasible solutions and optimizing the feasible ones, a novel repair operator, based on greedy strategy, is employed. Comprehensive numerical experimentations on three types of 0–1 KP instances are carried out. The comparative study of the BMBO with four state-of-the-art classical algorithms clearly points toward the superiority of the former in terms of search accuracy, convergent capability and stability in solving the 0–1 KP, especially for the high-dimensional instances.",
"title": ""
},
{
"docid": "65dd0e6e143624c644043507cf9465a7",
"text": "Let G \" be a non-directed graph having n vertices, without parallel edges and slings. Let the vertices of Gn be denoted by F 1 ,. . ., Pn. Let v(P j) denote the valency of the point P i and put (0. 1) V(G,) = max v(Pj). 1ninn Let E(G.) denote the number of edges of Gn. Let H d (n, k) denote the set of all graphs Gn for which V (G n) = k and the diameter D (Gn) of which is-d, In the present paper we shall investigate the quantity (0 .2) Thus we want to determine the minimal number N such that there exists a graph having n vertices, N edges and diameter-d and the maximum of the valencies of the vertices of the graph is equal to k. To help the understanding of the problem let us consider the following interpretation. Let be given in a country n airports ; suppose we want to plan a network of direct flights between these airports so that the maximal number of airports to which a given airport can be connected by a direct flight should be equal to k (i .e. the maximum of the capacities of the airports is prescribed), further it should be possible to fly from every airport to any other by changing the plane at most d-1 times ; what is the minimal number of flights by which such a plan can be realized? For instance, if n = 7, k = 3, d= 2 we have F2 (7, 3) = 9 and the extremal graph is shown by Fig. 1. The problem of determining Fd (n, k) has been proposed and discussed recently by two of the authors (see [1]). In § 1 we give a short summary of the results of the paper [1], while in § 2 and 3 we give some new results which go beyond those of [1]. Incidentally we solve a long-standing problem about the maximal number of edges of a graph not containing a cycle of length 4. In § 4 we mention some unsolved problems. Let us mention that our problem can be formulated also in terms of 0-1 matrices as follows : Let M=(a il) be a symmetrical n by n zero-one matrix such 2",
"title": ""
},
{
"docid": "ba959139c1fc6324f3c32a4e4b9bb16c",
"text": "The short-term unit commitment problem is traditionally solved as a single-objective optimization problem with system operation cost as the only objective. This paper presents multi-objectivization of the short-term unit commitment problem in uncertain environment by considering reliability as an additional objective along with the economic objective. The uncertainties occurring due to unit outage and load forecast error are incorporated using loss of load probability (LOLP) and expected unserved energy (EUE) reliability indices. The multi-objectivized unit commitment problem in uncertain environment is solved using our earlier proposed multi-objective evolutionary algorithm [1]. Simulations are performed on a test system of 26 thermal generating units and the results obtained are benchmarked against the study [2] where the unit commitment problem was solved as a reliability-constrained single-objective optimization problem. The simulation results demonstrate that the proposed multi-objectivized approach can find solutions with considerably lower cost than those obtained in the benchmark. Further, the efficiency and consistency of the proposed algorithm for multi-objectivized unit commitment problem is demonstrated by quantitative performance assessment using hypervolume indicator.",
"title": ""
},
{
"docid": "4829d8c0dd21f84c3afbe6e1249d6248",
"text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.",
"title": ""
},
{
"docid": "d7307a2d0c3d4a9622bd8e137e124562",
"text": "BACKGROUND\nConsumers of research (researchers, administrators, educators and clinicians) frequently use standard critical appraisal tools to evaluate the quality of published research reports. However, there is no consensus regarding the most appropriate critical appraisal tool for allied health research. We summarized the content, intent, construction and psychometric properties of published, currently available critical appraisal tools to identify common elements and their relevance to allied health research.\n\n\nMETHODS\nA systematic review was undertaken of 121 published critical appraisal tools sourced from 108 papers located on electronic databases and the Internet. The tools were classified according to the study design for which they were intended. Their items were then classified into one of 12 criteria based on their intent. Commonly occurring items were identified. The empirical basis for construction of the tool, the method by which overall quality of the study was established, the psychometric properties of the critical appraisal tools and whether guidelines were provided for their use were also recorded.\n\n\nRESULTS\nEighty-seven percent of critical appraisal tools were specific to a research design, with most tools having been developed for experimental studies. There was considerable variability in items contained in the critical appraisal tools. Twelve percent of available tools were developed using specified empirical research. Forty-nine percent of the critical appraisal tools summarized the quality appraisal into a numeric summary score. Few critical appraisal tools had documented evidence of validity of their items, or reliability of use. Guidelines regarding administration of the tools were provided in 43% of cases.\n\n\nCONCLUSIONS\nThere was considerable variability in intent, components, construction and psychometric properties of published critical appraisal tools for research reports. There is no \"gold standard' critical appraisal tool for any study design, nor is there any widely accepted generic tool that can be applied equally well across study types. No tool was specific to allied health research requirements. Thus interpretation of critical appraisal of research reports currently needs to be considered in light of the properties and intent of the critical appraisal tool chosen for the task.",
"title": ""
},
{
"docid": "7a076d150ecc4382c20a6ce08f3a0699",
"text": "Cyber-physical system (CPS) is a new trend in the Internet-of-Things related research works, where physical systems act as the sensors to collect real-world information and communicate them to the computation modules (i.e. cyber layer), which further analyze and notify the findings to the corresponding physical systems through a feedback loop. Contemporary researchers recommend integrating cloud technologies in the CPS cyber layer to ensure the scalability of storage, computation, and cross domain communication capabilities. Though there exist a few descriptive models of the cloud-based CPS architecture, it is important to analytically describe the key CPS properties: computation, control, and communication. In this paper, we present a digital twin architecture reference model for the cloud-based CPS, C2PS, where we analytically describe the key properties of the C2PS. The model helps in identifying various degrees of basic and hybrid computation-interaction modes in this paradigm. We have designed C2PS smart interaction controller using a Bayesian belief network, so that the system dynamically considers current contexts. The composition of fuzzy rule base with the Bayes network further enables the system with reconfiguration capability. We also describe analytically, how C2PS subsystem communications can generate even more complex system-of-systems. Later, we present a telematics-based prototype driving assistance application for the vehicular domain of C2PS, VCPS, to demonstrate the efficacy of the architecture reference model.",
"title": ""
},
{
"docid": "b386c24fc4412d050c1fb71692540b45",
"text": "In this paper, we consider the problem of approximating the densest subgraph in the dynamic graph stream model. In this model of computation, the input graph is defined by an arbitrary sequence of edge insertions and deletions and the goal is to analyze properties of the resulting graph given memory that is sub-linear in the size of the stream. We present a single-pass algorithm that returns a (1 + ) approximation of the maximum density with high probability; the algorithm uses O( −2npolylog n) space, processes each stream update in polylog(n) time, and uses poly(n) post-processing time where n is the number of nodes. The space used by our algorithm matches the lower bound of Bahmani et al. (PVLDB 2012) up to a poly-logarithmic factor for constant . The best existing results for this problem were established recently by Bhattacharya et al. (STOC 2015). They presented a (2 + ) approximation algorithm using similar space and another algorithm that both processed each update and maintained a (4 + ) approximation of the current maximum density in polylog(n) time per-update.",
"title": ""
},
{
"docid": "815819dc633c8434eb4e8c02b3c88186",
"text": "Volume and weight limitations for components in hybrid electrical vehicle (HEV) propulsion systems demand highly-compact and highly-efficient power electronics. The application of silicon carbide (SiC) semiconductor technology in conjunction with high temperature (HT) operation allows the power density of the DC-DC converters and inverters to be increased. Elevated ambient temperatures of above 200degC also affects the gate drives attached to the power semiconductors. This paper focuses on the selection of HT components and discusses different gate drive topologies for SiC JFETs with respect to HT operation capability, limitations, dynamic performance and circuit complexity. An experimental performance comparison of edge-triggered and phase-difference HT drivers with a conventional room temperature JFET gate driver is given. The proposed edge-triggered gate driver offers high switching speeds and a cost effective implementation. Switching tests at 200degC approve an excellent performance at high temperature and a low temperature drift of the driver output voltage.",
"title": ""
},
{
"docid": "02c2c8df7a4343d10c482025d07c4995",
"text": "taking data about a user’s likes and dislikes and generating a general profile of the user. These profiles can be used to retrieve documents matching user interests; recommend music, movies, or other similar products; or carry out other tasks in a specialized fashion. This article presents a fundamentally new method for generating user profiles that takes advantage of a large-scale database of demographic data. These data are used to generalize user-specified data along the patterns common across the population, including areas not represented in the user’s original data. I describe the method in detail and present its implementation in the LIFESTYLE FINDER agent, an internet-based experiment testing our approach on more than 20,000 users worldwide.",
"title": ""
},
{
"docid": "1fee36b3d0e796273eaa33b250930997",
"text": "Developers spend a lot of time searching for the root causes of software failures. For this, they traditionally try to reproduce those failures, but unfortunately many failures are so hard to reproduce in a test environment that developers spend days or weeks as ad-hoc detectives. The shortcomings of many solutions proposed for this problem prevent their use in practice.\n We propose failure sketching, an automated debugging technique that provides developers with an explanation (\"failure sketch\") of the root cause of a failure that occurred in production. A failure sketch only contains program statements that lead to the failure, and it clearly shows the differences between failing and successful runs; these differences guide developers to the root cause. Our approach combines static program analysis with a cooperative and adaptive form of dynamic program analysis.\n We built Gist, a prototype for failure sketching that relies on hardware watchpoints and a new hardware feature for extracting control flow traces (Intel Processor Trace). We show that Gist can build failure sketches with low overhead for failures in systems like Apache, SQLite, and Memcached.",
"title": ""
},
{
"docid": "4282e931ced3f8776f6c4cffb5027f61",
"text": "OBJECTIVES\nTo provide an overview and tutorial of natural language processing (NLP) and modern NLP-system design.\n\n\nTARGET AUDIENCE\nThis tutorial targets the medical informatics generalist who has limited acquaintance with the principles behind NLP and/or limited knowledge of the current state of the art.\n\n\nSCOPE\nWe describe the historical evolution of NLP, and summarize common NLP sub-problems in this extensive field. We then provide a synopsis of selected highlights of medical NLP efforts. After providing a brief description of common machine-learning approaches that are being used for diverse NLP sub-problems, we discuss how modern NLP architectures are designed, with a summary of the Apache Foundation's Unstructured Information Management Architecture. We finally consider possible future directions for NLP, and reflect on the possible impact of IBM Watson on the medical field.",
"title": ""
},
{
"docid": "6a7ff930ba9949a86e66362c379c7e7b",
"text": "Psychometric research on widely used questionnaires aimed at measuring experiential avoidance of chronic pain has led to inconclusive results. To test the structural validity, internal consistency, and construct validity of a recently developed short questionnaire: the Acceptance and Action Questionnaire II-pain version (AAQ-II-P). Cross-sectional validation study among 388 adult patients with chronic nonspecific musculoskeletal pain admitted for multidisciplinary pain rehabilitation in four tertiary rehabilitation centers in the Netherlands. Cronbach's α was calculated to analyze internal consistency. Principal component analysis was performed to analyze factor structure. Construct validity was analyzed by examining the association between acceptance of pain and measures of psychological flexibility (two scales and sum), pain catastrophizing (three scales and sum), and mental and physical functioning. Interpretation was based on a-priori defined hypotheses. The compound of the seven items of the AAQ-II-P shows a Cronbach's α of 0.87. The single component explained 56.2% of the total variance. Correlations ranged from r=-0.21 to 0.73. Two of the predefined hypotheses were rejected and seven were not rejected. The AAQ-II-P measures a single component and has good internal consistency, and construct validity is not rejected. Thus, the construct validity of the AAQ-II-P sum scores as indicator of experiential avoidance of pain was supported.",
"title": ""
},
{
"docid": "fa826e5846cdee91192beecd1a52bb3a",
"text": "ABSTRA CT Recommender systemsusepeople’ s opinionsaboutitemsin an information domainto help peoplechooseother items. Thesesystemshave succeededin domainsas diverse as movies, news articles,Web pages,andwines. The psychological literatureonconformitysuggeststhatin thecourseof helpingpeoplemake choices,thesesystemsprobablyaffect users’opinionsof the items. If opinionsare influencedby recommendations, they might be lessvaluablefor making recommendations for otherusers.Further, manipulatorswho seekto makethesystemgenerateartificially highor low recommendationsmight benefitif their efforts influenceusers to changethe opinionsthey contribute to the recommender . Westudytwo aspectsof recommender systeminterfacesthat may affect users’opinions: the rating scaleandthe display of predictionsat thetime usersrateitems.We find thatusers rate fairly consistentlyacrossrating scales. Userscan be manipulated,though,tendingto rate toward the prediction thesystemshows, whetherthepredictionis accurateor not. However, userscan detectsystemsthat manipulatepredictions. We discusshow designersof recommendersystems might reactto thesefindings.",
"title": ""
},
{
"docid": "d1d14d5f16b4a32576e9a6c43e75138f",
"text": "6 1 and cost of the product. Not all materials can be scaled-up with the same mixing process. Frequently, scaling-up the mixing process from small research batches to large quantities, necessary for production, can lead to unexpected problems. This reference book is intended to help the reader both identify and solve mixing problems. It is a comprehensive handbook that provides excellent coverage on the fundamentals, design, and applications of current mixing technology in general. Although this book includes many technology areas, one of main areas of interest to our readers would be in the polymer processing area. This would include the first eight chapters in the book and a specific application chapter on polymer processing. These cover the fundamentals of mixing technology, important to polymer processing, including residence time distributions and laminar mixing techniques. In the experimental section of the book, some of the relevant tools and techniques cover flow visualization technologies, lab scale mixing, flow and torque measurements, CFD coding, and numerical methods. There is a good overview of various types of mixers used for polymer processing in a dedicated applications chapter on mixing high viscosity materials such as polymers. There are many details given on the differences between the mixing blades in various types of high viscosity mixers and suggestions for choosing the proper mixer for high viscosity applications. The majority of the book does, however, focus on the chemical, petroleum, and pharmaceutical industries that generally process materials with much lower viscosity than polymers. The reader interested in learning about the fundamentals of mixing in general as well as some specifics on polymer processing would find this book to be a useful reference.",
"title": ""
},
{
"docid": "a6c1df858f05972157f6b53314582d39",
"text": "Dissecting cellulitis (DC) also referred to as to as perifolliculitis capitis abscedens et suffodiens (Hoffman) manifests with perifollicular pustules, nodules, abscesses and sinuses that evolve into scarring alopecia. In the U.S., it predominantly occurs in African American men between 20-40 years of age. DC also occurs in other races and women more rarely. DC has been reported worldwide. Older therapies reported effective include: low dose oral zinc, isotretinoin, minocycline, sulfa drugs, tetracycline, prednisone, intralesional triamcinolone, incision and drainage, dapsone, antiandrogens (in women), topical clindamycin, topical isotretinoin, X-ray epilation and ablation, ablative C02 lasers, hair removal lasers (800nm and 694nm), and surgical excision. Newer treatments reported include tumor necrosis factor blockers (TNFB), quinolones, macrolide antibiotics, rifampin, alitretinoin, metronidazole, and high dose zinc sulphate (135-220 mg TID). Isotretinoin seems to provide the best chance at remission, but the number of reports is small, dosing schedules variable, and the long term follow up beyond a year is negligible; treatment failures have been reported. TNFB can succeed when isotretinoin fails, either as monotherapy, or as a bridge to aggressive surgical treatment, but long term data is lacking. Non-medical therapies noted in the last decade include: the 1064 nm laser, ALA-PDT, and modern external beam radiation therapy. Studies that span more than 1 year are lacking. Newer pathologic hair findings include: pigmented casts, black dots, and \"3D\" yellow dots. Newer associations include: keratitis-ichthyosis-deafness syndrome, Crohn disease and pyoderma gangrenosum. Older associations include arthritis and keratitis. DC is likely a reaction pattern, as is shown by its varied therapeutic successes and failures. The etiology of DC remains enigmatic and DC is distinct from hidradenitis suppurativa, which is shown by their varied responses to therapies and their histologic differences. Like HS, DC likely involves both follicular dysfunction and an aberrant cutaneous immune response to commensal bacteria, such as coagulase negative staphylococci. The incidence of DC is likely under-reported. The literature suggests that now most cases of DC can be treated effectively. However, the lack of clinical studies regarding DC prevents full understanding of the disease and limits the ability to define a consensus treatment algorithm.",
"title": ""
},
{
"docid": "05d723bdda995f444500a675f3eb3e29",
"text": "Diseases caused by the liver fluke, Opisthorchis viverrini and the minute intestinal fluke, Haplorchis taichui, are clinically important, especially in the Northeast and North regions of Thailand. It is often difficult to distinguish between these trematode species using morphological methods due to the similarity of their eggs and larval stages both in mixed and co-infections. A sensitive, accurate, and specific detection method of these flukes is required for an effective epidemiological control program. This study aimed to determine the prevalence of O. viverrini and H. taichui infections in human feces by using formalin-ether sedimentation and high annealing temperature random amplified polymorphic DNA (HAT-RAPD) PCR methods. Fecal specimens of people living along the Mae Ping River, Chomtong district were examined seasonally for trematode eggs using a compound microscope. Positive cases were analyzed in HAT-RAPD, DNA profiles were compared with adult stages to determine the actual species infected, and specific DNA markers of each fluke were also screened. Our results showed that out of 316 specimens, 62 were positive for fluke eggs which were pre-identified as O. viverrini and H. taichui. In addition, co-infection among these two fluke species was observed from only two specimens. The prevalence of H. taichui infections peaked in the hot-dry (19.62%), gradually decreased in the rainy (18.18%), and cool-dry seasons (14.54%), respectively. O. viverrini was found only in the hot-dry season (6.54%). For molecular studies, 5 arbitrary primers (Operon Technologies, USA) were individually performed in HAT-RAPD-PCR for the generation of polymorphic DNA profiles. The DNA profiles in all 62 positives cases were the same as those of the adult stage which confirmed our identifications. This study demonstrates the mixed infection of O. viverrini and H. taichui and confirms the extended distribution of O. viverrini in Northern Thailand.",
"title": ""
}
] |
scidocsrr
|
3b854c906d0e8815a54e74071e004340
|
Generic Physiological Features as Predictors of Player Experience
|
[
{
"docid": "72e4d7729031d63f96b686444c9b446e",
"text": "In this paper we describe the fundamentals of affective gaming from a physiological point of view, covering some of the origins of the genre, how affective videogames operate and current conceptual and technological capabilities. We ground this overview of the ongoing research by taking an in-depth look at one of our own early biofeedback-based affective games. Based on our analysis of existing videogames and our own experience with affective videogames, we propose a new approach to game design based on several high-level design heuristics: assist me, challenge me and emote me (ACE), a series of gameplay \"tweaks\" made possible through affective videogames.",
"title": ""
}
] |
[
{
"docid": "6b622da925ead8c237518ab21fa3e85d",
"text": "Helpless children attribute their failures to lack of ability and view them as insurmountable. Mastery-oriented children, in contrast, tend to emphasize motivational factors and to view failure as surmountable. Although the performance of the two groups is usually identical during success of prior to failure, past research suggests that these groups may well differ in the degree to which they perceive that their successes are replicable and hence that their failures are avoidable. The present study was concerned with the nature of such differences. Children performed a task on which they encountered success and then failure. Half were asked a series of questions about their performance after success and half after failure. Striking differences emerged: Compared to mastery-oriented children, helpless children underestimated the number of success (and overestimated the number of failures), did not view successes as indicative of ability, and did not expect the successes to continue. subsequent failure led them to devalue ;their performance but left the mastery-oriented children undaunted. Thus, for helpless children, successes are less salient, less predictive, and less enduring--less successful.",
"title": ""
},
{
"docid": "e613ef418da545958c2094c5cce8f4f1",
"text": "This paper proposes a new visual SLAM technique that not only integrates 6 degrees of freedom (DOF) pose and dense structure but also simultaneously integrates the colour information contained in the images over time. This involves developing an inverse model for creating a super-resolution map from many low resolution images. Contrary to classic super-resolution techniques, this is achieved here by taking into account full 3D translation and rotation within a dense localisation and mapping framework. This not only allows to take into account the full range of image deformations but also allows to propose a novel criteria for combining the low resolution images together based on the difference in resolution between different images in 6D space. Another originality of the proposed approach with respect to the current state of the art lies in the minimisation of both colour (RGB) and depth (D) errors, whilst competing approaches only minimise geometry. Several results are given showing that this technique runs in real-time (30Hz) and is able to map large scale environments in high-resolution whilst simultaneously improving the accuracy and robustness of the tracking.",
"title": ""
},
{
"docid": "a334bfdcbaacf1cada20694e2e3dd867",
"text": "The oral bioavailability of diclofenac potassium 50 mg administered as a soft gelatin capsule (softgel capsule), powder for oral solution (oral solution), and tablet was evaluated in a randomized, open-label, 3-period, 6-sequence crossover study in healthy adults. Plasma diclofenac concentrations were measured using a validated liquid chromatography-mass spectrometry/mass spectrometry method, and pharmacokinetic analysis was performed by noncompartmental methods. The median time to achieve peak plasma concentrations of diclofenac was 0.5, 0.25, and 0.75 hours with the softgel capsule, oral solution, and tablet formulations, respectively. The geometric mean ratio and associated 90%CI for AUCinf, and Cmax of the softgel capsule formulation relative to the oral solution formulation were 0.97 (0.95-1.00) and 0.85 (0.76-0.95), respectively. The geometric mean ratio and associated 90%CI for AUCinf and Cmax of the softgel capsule formulation relative to the tablet formulation were 1.04 (1.00-1.08) and 1.67 (1.43-1.96), respectively. In conclusion, the exposure (AUC) of diclofenac with the new diclofenac potassium softgel capsule formulation was comparable to that of the existing oral solution and tablet formulations. The peak plasma concentration of diclofenac from the new softgel capsule was 67% higher than the existing tablet formulation, whereas it was 15% lower in comparison with the oral solution formulation.",
"title": ""
},
{
"docid": "d33aff7fc4923a7dc7521c2db56cb99e",
"text": "OBJECTIVE\nThis research was conducted to study the relationship between attribution and academic procrastination in University Students.\n\n\nMETHODS\nThe subjects were 203 undergraduate students, 55 males and 148 females, selected from English and French language and literature students of Tabriz University. Data were gathered through Procrastination Assessment Scale-student (PASS) and Causal Dimension Scale (CDA) and were analyzed by multiple regression analysis (stepwise).\n\n\nRESULTS\nThe results showed that there was a meaningful and negative relation between the locus of control and controllability in success context and academic procrastination. Besides, a meaningful and positive relation was observed between the locus of control and stability in failure context and procrastination. It was also found that 17% of the variance of procrastination was accounted by linear combination of attributions.\n\n\nCONCLUSION\nWe believe that causal attribution is a key in understanding procrastination in academic settings and is used by those who have the knowledge of Causal Attribution styles to organize their learning.",
"title": ""
},
{
"docid": "34461f38c51a270e2f3b0d8703474dfc",
"text": "Software vulnerabilities are the root cause of computer security problem. How people can quickly discover vulnerabilities existing in a certain software has always been the focus of information security field. This paper has done research on software vulnerability techniques, including static analysis, Fuzzing, penetration testing. Besides, the authors also take vulnerability discovery models as an example of software vulnerability analysis methods which go hand in hand with vulnerability discovery techniques. The ending part of the paper analyses the advantages and disadvantages of each technique introduced here and talks about the future direction of this field.",
"title": ""
},
{
"docid": "a078ace7b4093d10e4998667156c68bf",
"text": "In this study we develop a method which improves a credit card fraud detection solution currently being used in a bank. With this solution each transaction is scored and based on these scores the transactions are classified as fraudulent or legitimate. In fraud detection solutions the typical objective is to minimize the wrongly classified number of transactions. However, in reality, wrong classification of each transaction do not have the same effect in that if a card is in the hand of fraudsters its whole available limit is used up. Thus, the misclassification cost should be taken as the available limit of the card. This is what we aim at minimizing in this study. As for the solution method, we suggest a novel combination of the two well known meta-heuristic approaches, namely the genetic algorithms and the scatter search. The method is applied to real data and very successful results are obtained compared to current practice. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e032ace86d446b4ecacbda453913a373",
"text": "While neural machine translation (NMT) is making good progress in the past two years, tens of millions of bilingual sentence pairs are needed for its training. However, human labeling is very costly. To tackle this training data bottleneck, we develop a dual-learning mechanism, which can enable an NMT system to automatically learn from unlabeled data through a dual-learning game. This mechanism is inspired by the following observation: any machine translation task has a dual task, e.g., English-to-French translation (primal) versus French-to-English translation (dual); the primal and dual tasks can form a closed loop, and generate informative feedback signals to train the translation models, even if without the involvement of a human labeler. In the dual-learning mechanism, we use one agent to represent the model for the primal task and the other agent to represent the model for the dual task, then ask them to teach each other through a reinforcement learning process. Based on the feedback signals generated during this process (e.g., the languagemodel likelihood of the output of a model, and the reconstruction error of the original sentence after the primal and dual translations), we can iteratively update the two models until convergence (e.g., using the policy gradient methods). We call the corresponding approach to neural machine translation dual-NMT. Experiments show that dual-NMT works very well on English↔French translation; especially, by learning from monolingual data (with 10% bilingual data for warm start), it achieves a comparable accuracy to NMT trained from the full bilingual data for the French-to-English translation task.",
"title": ""
},
{
"docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21",
"text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.",
"title": ""
},
{
"docid": "10187e22397b1c30b497943764d32c34",
"text": "Wireless networks can be self-sustaining by harvesting energy from ambient radio-frequency (RF) signals. Recently, researchers have made progress on designing efficient circuits and devices for RF energy harvesting suitable for low-power wireless applications. Motivated by this and building upon the classic cognitive radio (CR) network model, this paper proposes a novel method for wireless networks coexisting where low-power mobiles in a secondary network, called secondary transmitters (STs), harvest ambient RF energy from transmissions by nearby active transmitters in a primary network, called primary transmitters (PTs), while opportunistically accessing the spectrum licensed to the primary network. We consider a stochastic-geometry model in which PTs and STs are distributed as independent homogeneous Poisson point processes (HPPPs) and communicate with their intended receivers at fixed distances. Each PT is associated with a guard zone to protect its intended receiver from ST's interference, and at the same time delivers RF energy to STs located in its harvesting zone. Based on the proposed model, we analyze the transmission probability of STs and the resulting spatial throughput of the secondary network. The optimal transmission power and density of STs are derived for maximizing the secondary network throughput under the given outage-probability constraints in the two coexisting networks, which reveal key insights to the optimal network design. Finally, we show that our analytical result can be generally applied to a non-CR setup, where distributed wireless power chargers are deployed to power coexisting wireless transmitters in a sensor network.",
"title": ""
},
{
"docid": "586ea16456356b6301e18f39e50baa89",
"text": "In this paper we address the problem of migrating a legacy Web application to a cloud service. We develop a reusable architectural pattern to do so and validate it with a case study of the Beta release of the IBM Bluemix Workflow Service [1] (herein referred to as the Beta Workflow service). It uses Docker [2] containers and a Cloudant [3] persistence layer to deliver a multi-tenant cloud service by re-using a legacy codebase. We are not aware of any literature that addresses this problem by using containers.The Beta Workflow service provides a scalable, stateful, highly available engine to compose services with REST APIs. The composition is modeled as a graph but authored in a Javascript-based domain specific language that specifies a set of activities and control flow links among these activities. The primitive activities in the language can be used to respond to HTTP REST requests, invoke services with REST APIs, and execute Javascript code to, among other uses, extract and construct the data inputs and outputs to external services, and make calls to these services.Examples of workflows that have been built using the service include distributing surveys and coupons to customers of a retail store [1], the management of sales requests between a salesperson and their regional managers, managing the staged deployment of different versions of an application, and the coordinated transfer of jobs among case workers.",
"title": ""
},
{
"docid": "c27ba892408391234da524ffab0e7418",
"text": "Sunlight and skylight are rarely rendered correctly in computer graphics. A major reason for this is high computational expense. Another is that precise atmospheric data is rarely available. We present an inexpensive analytic model that approximates full spectrum daylight for various atmospheric conditions. These conditions are parameterized using terms that users can either measure or estimate. We also present an inexpensive analytic model that approximates the effects of atmosphere (aerial perspective). These models are fielded in a number of conditions and intermediate results verified against standard literature from atmospheric science. Our goal is to achieve as much accuracy as possible without sacrificing usability.",
"title": ""
},
{
"docid": "c716e7dc1c0e770001bcb57eab871968",
"text": "We present a new method to visualize from an ensemble of flow fields the statistical properties of streamlines passing through a selected location. We use principal component analysis to transform the set of streamlines into a low-dimensional Euclidean space. In this space the streamlines are clustered into major trends, and each cluster is in turn approximated by a multivariate Gaussian distribution. This yields a probabilistic mixture model for the streamline distribution, from which confidence regions can be derived in which the streamlines are most likely to reside. This is achieved by transforming the Gaussian random distributions from the low-dimensional Euclidean space into a streamline distribution that follows the statistical model, and by visualizing confidence regions in this distribution via iso-contours. We further make use of the principal component representation to introduce a new concept of streamline-median, based on existing median concepts in multidimensional Euclidean spaces. We demonstrate the potential of our method in a number of real-world examples, and we compare our results to alternative clustering approaches for particle trajectories as well as curve boxplots.",
"title": ""
},
{
"docid": "f8b0dcd771e7e7cf50a05cf7221f4535",
"text": "Studies on monocyte and macrophage biology and differentiation have revealed the pleiotropic activities of these cells. Macrophages are tissue sentinels that maintain tissue integrity by eliminating/repairing damaged cells and matrices. In this M2-like mode, they can also promote tumor growth. Conversely, M1-like macrophages are key effector cells for the elimination of pathogens, virally infected, and cancer cells. Macrophage differentiation from monocytes occurs in the tissue in concomitance with the acquisition of a functional phenotype that depends on microenvironmental signals, thereby accounting for the many and apparently opposed macrophage functions. Many questions arise. When monocytes differentiate into macrophages in a tissue (concomitantly adopting a specific functional program, M1 or M2), do they all die during the inflammatory reaction, or do some of them survive? Do those that survive become quiescent tissue macrophages, able to react as naïve cells to a new challenge? Or, do monocyte-derived tissue macrophages conserve a \"memory\" of their past inflammatory activation? This review will address some of these important questions under the general framework of the role of monocytes and macrophages in the initiation, development, resolution, and chronicization of inflammation.",
"title": ""
},
{
"docid": "a8f352abf1203132d69f3199b2b2a705",
"text": "BACKGROUND\nQualitative research explores complex phenomena encountered by clinicians, health care providers, policy makers and consumers. Although partial checklists are available, no consolidated reporting framework exists for any type of qualitative design.\n\n\nOBJECTIVE\nTo develop a checklist for explicit and comprehensive reporting of qualitative studies (in depth interviews and focus groups).\n\n\nMETHODS\nWe performed a comprehensive search in Cochrane and Campbell Protocols, Medline, CINAHL, systematic reviews of qualitative studies, author or reviewer guidelines of major medical journals and reference lists of relevant publications for existing checklists used to assess qualitative studies. Seventy-six items from 22 checklists were compiled into a comprehensive list. All items were grouped into three domains: (i) research team and reflexivity, (ii) study design and (iii) data analysis and reporting. Duplicate items and those that were ambiguous, too broadly defined and impractical to assess were removed.\n\n\nRESULTS\nItems most frequently included in the checklists related to sampling method, setting for data collection, method of data collection, respondent validation of findings, method of recording data, description of the derivation of themes and inclusion of supporting quotations. We grouped all items into three domains: (i) research team and reflexivity, (ii) study design and (iii) data analysis and reporting.\n\n\nCONCLUSIONS\nThe criteria included in COREQ, a 32-item checklist, can help researchers to report important aspects of the research team, study methods, context of the study, findings, analysis and interpretations.",
"title": ""
},
{
"docid": "792767dee5fb0251f0ff028c75d6e55a",
"text": "According to a recent theory, anterior cingulate cortex is sensitive to response conflict, the coactivation of mutually incompatible responses. The present research develops this theory to provide a new account of the error-related negativity (ERN), a scalp potential observed following errors. Connectionist simulations of response conflict in an attentional task demonstrated that the ERN--its timing and sensitivity to task parameters--can be explained in terms of the conflict theory. A new experiment confirmed predictions of this theory regarding the ERN and a second scalp potential, the N2, that is proposed to reflect conflict monitoring on correct response trials. Further analysis of the simulation data indicated that errors can be detected reliably on the basis of post-error conflict. It is concluded that the ERN can be explained in terms of response conflict and that monitoring for conflict may provide a simple mechanism for detecting errors.",
"title": ""
},
{
"docid": "3f418dd3a1374a7928e2428aefe4fe29",
"text": "The problem of determining the proper size of an artificial neural network is recognized to be crucial, especially for its practical implications in such important issues as learning and generalization. One popular approach for tackling this problem is commonly known as pruning and it consists of training a larger than necessary network and then removing unnecessary weights/nodes. In this paper, a new pruning method is developed, based on the idea of iteratively eliminating units and adjusting the remaining weights in such a way that the network performance does not worsen over the entire training set. The pruning problem is formulated in terms of solving a system of linear equations, and a very efficient conjugate gradient algorithm is used for solving it, in the least-squares sense. The algorithm also provides a simple criterion for choosing the units to be removed, which has proved to work well in practice. The results obtained over various test problems demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "eb2db8f0eb72c1721a78b3a2abbacdef",
"text": "Deep neural networks (DNNs) are powerful types of artificial neural networks (ANNs) that use several hidden layers. They have recently gained considerable attention in the speech transcription and image recognition community (Krizhevsky et al., 2012) for their superior predictive properties including robustness to overfitting. However their application to algorithmic trading has not been previously researched, partly because of their computational complexity. This paper describes the application of DNNs to predicting financial market movement directions. In particular we describe the configuration and training approach and then demonstrate their application to backtesting a simple trading strategy over 43 different Commodity and FX future mid-prices at 5-minute intervals. All results in this paper are generated using a C++ implementation on the Intel Xeon Phi co-processor which is 11.4x faster than the serial version and a Python strategy backtesting environment both of which are available as open source code written by the authors.",
"title": ""
},
{
"docid": "fda10c187c97f5c167afaa0f84085953",
"text": "We provide empirical evidence that suggests social media and stock markets have a nonlinear causal relationship. We take advantage of an extensive data set composed of social media messages related to DJIA index components. By using information-theoretic measures to cope for possible nonlinear causal coupling between social media and stock markets systems, we point out stunning differences in the results with respect to linear coupling. Two main conclusions are drawn: First, social media significant causality on stocks’ returns are purely nonlinear in most cases; Second, social media dominates the directional coupling with stock market, an effect not observable within linear modeling. Results also serve as empirical guidance on model adequacy in the investigation of sociotechnical and financial systems.",
"title": ""
},
{
"docid": "fde78187088da4d4b8fe4cb0f959b860",
"text": "The key question raised in this research in progress paper is whether the development stage of a (hardware) startup can give an indication of the crowdfunding type it decides to choose. Throughout the paper, I empirically investigate the German crowdfunding landscape and link it to startups in the hardware sector, picking up the proposed notion of an emergent hardware renaissance. To identify the potential points of contact between crowdfunds and startups, an evaluation of different startup stage models with regard to funding requirements is provided, as is an overview of currently used crowdfunding typologies. The example of two crowdfunding platforms (donation and non-monetary reward crowdfunding vs. equity-based crowdfunding) and their respective hardware projects and startups is used to highlight the potential of this research in progress. 1 Introduction Originally motivated by Paul Graham's 'The Hardware Renaissance' (2012) and further spurred by Witheiler's 'The hardware revolution will be crowdfunded' (2013), I chose to consider the intersection of startups, crowdfunding, and hardware. This is particularly interesting since literature on innovation and startup funding has indeed grown to some sophistication regarding the timing of more classic sources of capital in a startup's life, such as bootstrapping, business angel funding, and venture capital (cf. e.g., Schwienbacher & Larralde, 2012; Metrick & Yasuda, 2011). Due to the novelty of crowdfunding, however, general research on this type of funding is just at the beginning stages and many papers are rather focused on specific elements of the phenomenon (e.g., Belleflamme et al., 2013; Agrawal et al. 2011) and / or exploratory in nature (e.g., Mollick, 2013). What is missing is a verification of the research on potential points of contact between crowdfunds and startups. It remains unclear when crowdfunding is used—primarily during the early seed stage for example or equally at some later point as well—and what types apply (cf. e.g., Collins & Pierrakis, 2012). Simply put, the research question that emerges is whether the development stage of a startup can give an indication of the crowdfunding type it decides to choose. To further explore an answer to this question, I commenced an investigation of the German crowdfunding scene with a focus on hardware startups. Following desk research on platforms situated in German-speaking areas—Germany, Austria, Switzerland—, a categorization of the respectively used funding types is still in process, and transitions into a quantitative analysis and an in-depth case study-based assessment. The prime challenge of such an investigation …",
"title": ""
},
{
"docid": "43d307f1e7aa43350399e7343946ac47",
"text": "Computer based medical decision support system (MDSS) can be useful for the physicians with its fast and accurate decision making process. Predicting the existence of heart disease accurately, results in saving life of patients followed by proper treatment. The main objective of our paper is to present a MDSS for heart disease classification based on sequential minimal optimization (SMO) technique in support vector machine (SVM). In this we illustrated the UCI machine learning repository data of Cleveland heart disease database; we trained SVM by using SMO technique. Training a SVM requires the solution of a very large QP optimization problem..SMO algorithm breaks this large optimization problem into small sub-problems. Both the training and testing phases give the accuracy on each record. The results proved that the MDSS is able to carry out heart disease diagnosis accurately in fast way and on a large dataset it shown good ability of prediction.",
"title": ""
}
] |
scidocsrr
|
860855d1f7a529a14d81729a0eb3a747
|
The state-of-the-art in personalized recommender systems for social networking
|
[
{
"docid": "107aff0162fb0b6c1f90df1bdf7174b7",
"text": "Recommender Systems based on Collaborative Filtering suggest to users items they might like. However due to data sparsity of the input ratings matrix, the step of finding similar users often fails. We propose to replace this step with the use of a trust metric, an algorithm able to propagate trust over the trust network and to estimate a trust weight that can be used in place of the similarity weight. An empirical evaluation on Epinions.com dataset shows that Recommender Systems that make use of trust information are the most effective in term of accuracy while preserving a good coverage. This is especially evident on users who provided few ratings.",
"title": ""
},
{
"docid": "da63c4d9cc2f3278126490de54c34ce5",
"text": "The growth of Web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences. Our research looks particularly at assigning trust in Web-based social networks and investigates how trust information can be mined and integrated into applications. This article introduces a definition of trust suitable for use in Web-based social networks with a discussion of the properties that will influence its use in computation. We then present two algorithms for inferring trust relationships between individuals that are not directly connected in the network. Both algorithms are shown theoretically and through simulation to produce calculated trust values that are highly accurate.. We then present TrustMail, a prototype email client that uses variations on these algorithms to score email messages in the user's inbox based on the user's participation and ratings in a trust network.",
"title": ""
}
] |
[
{
"docid": "c96fa07ef9860880d391a750826f5faf",
"text": "This paper presents the investigations of short-circuit current, electromagnetic force, and transient dynamic response of windings deformation including mechanical stress, strain, and displacements for an oil-immersed-type 220-kV power transformer. The worst-case fault with three-phase short-circuit happening simultaneously is assumed. A considerable leakage magnetic field excited by short-circuit current can produce the dynamical electromagnetic force to act on copper disks in each winding. The two-dimensional finite element method (FEM) is employed to obtain the electromagnetic force and its dynamical characteristics in axial and radial directions. In addition, to calculate the windings deformation accurately, we measured the nonlinear elasticity characteristic of spacer and built three-dimensional FE kinetic model to analyze the axial dynamic deformation. The results of dynamic mechanical stress and strain induced by combining of short-circuit force and prestress are useful for transformer design and fault diagnosis.",
"title": ""
},
{
"docid": "5e581fa162c4662ef26450ed24122ccd",
"text": "Article history: Received 6 December 2010 Received in revised form 6 December 2012 Accepted 12 January 2013 Available online 11 February 2013",
"title": ""
},
{
"docid": "c4256017c214eabda8e5b47c604e0e49",
"text": "In this paper, a multi-band antenna for 4G wireless systems is proposed. The proposed antenna consists of a modified planar inverted-F antenna with additional branch line for wide bandwidth and a folded monopole antenna. The antenna provides wide bandwidth for covering the hepta-band LTE/GSM/UMTS operation. The measured 6-dB return loss bandwidth was 169 MHz (793 MHz-962 MHz) at the low frequency band and 1030 MHz (1700 MHz-2730 MHz) at the high frequency band. The overall dimension of the proposed antenna is 55 mm × 110 mm × 5 mm.",
"title": ""
},
{
"docid": "218ddb719c00ea390d08b2d128481333",
"text": "Teeth move through alveolar bone, whether through the normal process of tooth eruption or by strains generated by orthodontic appliances. Both eruption and orthodontics accomplish this feat through similar fundamental biological processes, osteoclastogenesis and osteogenesis, but there are differences that make their mechanisms unique. A better appreciation of the molecular and cellular events that regulate osteoclastogenesis and osteogenesis in eruption and orthodontics is not only central to our understanding of how these processes occur, but also is needed for ultimate development of the means to control them. Possible future studies in these areas are also discussed, with particular emphasis on translation of fundamental knowledge to improve dental treatments.",
"title": ""
},
{
"docid": "43567ea4daef8fcbfed0bb258be0aec2",
"text": "0140-3664/$ see front matter 2009 Elsevier B.V. A doi:10.1016/j.comcom.2009.11.009 * Corresponding author. Tel.: +1 517 353 4379. E-mail addresses: renjian@egr.msu.edu (J. Ren), jie Anonymous communications aim to preserve communications privacy within the shared public network environment. It can provide security well beyond content privacy and integrity. The scientific studies of anonymous communications are largely originated from Chaum’s two seminal approaches: mixnet and DC-net. In this paper, we present an overview of the research in this field. We start with the basic definitions of anonymous communications. We then describe the cryptographic primitives, the network protocols, and some of the representative anonymous communication systems. We also describe verifiable mixnets and their applications to electronic voting. Finally, we briefly cite some other anonymous systems. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "aba1bbd9163e5f9d16ef2d98d16ce1c2",
"text": "The basic reproduction number (0) is arguably the most important quantity in infectious disease epidemiology. The next-generation matrix (NGM) is the natural basis for the definition and calculation of (0) where finitely many different categories of individuals are recognized. We clear up confusion that has been around in the literature concerning the construction of this matrix, specifically for the most frequently used so-called compartmental models. We present a detailed easy recipe for the construction of the NGM from basic ingredients derived directly from the specifications of the model. We show that two related matrices exist which we define to be the NGM with large domain and the NGM with small domain. The three matrices together reflect the range of possibilities encountered in the literature for the characterization of (0). We show how they are connected and how their construction follows from the basic model ingredients, and establish that they have the same non-zero eigenvalues, the largest of which is the basic reproduction number (0). Although we present formal recipes based on linear algebra, we encourage the construction of the NGM by way of direct epidemiological reasoning, using the clear interpretation of the elements of the NGM and of the model ingredients. We present a selection of examples as a practical guide to our methods. In the appendix we present an elementary but complete proof that (0) defined as the dominant eigenvalue of the NGM for compartmental systems and the Malthusian parameter r, the real-time exponential growth rate in the early phase of an outbreak, are connected by the properties that (0) > 1 if and only if r > 0, and (0) = 1 if and only if r = 0.",
"title": ""
},
{
"docid": "b14748d454917414725bfa51c62730ad",
"text": "The authors investigated the lexical entry for morphologically complex words in English, Six experiments, using a cross-modal repetition priming task, asked whether the lexical entry for derivationally suffixed and prefixed words is morphologically structured and how this relates to the semantic and phonological transparency of the surface relationship between stem and affix. There was clear evidence for morphological decomposition of semantically transparent forms. This was independent of phonological transparency, suggesting that morphemic representations are phonologically abstract. Semantically opaque forms, in contrast, behave like monomorphemic words. Overall, suffixed and prefixed derived words and their stems prime each other through shared morphemes in the lexical entry, except for pairs of suffixed forms, which show a cohort-based interference effect.",
"title": ""
},
{
"docid": "d42f627606fdc43798eb2eb7bea38f20",
"text": "Purpose\nTo examine the structure-function relationship in glaucoma between deep defects on visual fields (VF) and deep losses in the circumpapillary retinal nerve fiber layer (cpRNFL) on optical coherence tomography (OCT) circle scans.\n\n\nMethods\nThirty two glaucomatous eyes with deep VF defects, as defined by at least one test location worse than ≤ -15 dB on the 10-2 and/or 24-2 VF pattern deviation (PD) plots, were included from 87 eyes with \"early\" glaucoma (i.e., 24-2 mean deviation better than -6 dB). Using the location of the deep VF points and a schematic model, the location of local damage on an OCT circle scan was predicted. The thinnest location of cpRNFL (i.e., deepest loss) was also determined.\n\n\nResults\nIn 19 of 32 eyes, a region of complete or near complete cpRNFL loss was observed. All 19 of these had deep VF defects on the 24-2 and/or 10-2. All of the 32 eyes with deep VF defects had abnormal cpRNFL regions (red, 1%) and all but 2 had a region of cpRNFL thickness <21 μm. The midpoint of the VF defect and the location of deepest cpRNFL had a 95% limit of agreement within approximately two-thirds of a clock-hour (or 30°) sector (between -22.1° to 25.2°). Individual fovea-to-disc angle (FtoDa) adjustment improved agreement in one eye with an extreme FtoDa.\n\n\nConclusions\nAlthough studies relating local structural (OCT) and functional (VF) measures typically show poor to moderate correlations, there is good qualitative agreement between the location of deep cpRNFL loss and deep defects on VFs.",
"title": ""
},
{
"docid": "f309fed8c77a2aab87fd160d07746910",
"text": "Accurately predicting protein-ligand binding affinities is an important problem in computational chemistry since it can substantially accelerate drug discovery for virtual screening and lead optimization. We propose here a fast machine-learning approach for predicting binding affinities using state-of-the-art 3D-convolutional neural networks and compare this approach to other machine-learning and scoring methods using several diverse data sets. The results for the standard PDBbind (v.2016) core test-set are state-of-the-art with a Pearson's correlation coefficient of 0.82 and a RMSE of 1.27 in pK units between experimental and predicted affinity, but accuracy is still very sensitive to the specific protein used. KDEEP is made available via PlayMolecule.org for users to test easily their own protein-ligand complexes, with each prediction taking a fraction of a second. We believe that the speed, performance, and ease of use of KDEEP makes it already an attractive scoring function for modern computational chemistry pipelines.",
"title": ""
},
{
"docid": "06bcdd8e0993bf87108acc5b9bc64ca3",
"text": "The exact role and the function of the scapula are misunderstood in many clinical situations. This lack of awareness often translates into incomplete evaluation and diagnosis of shoulder problems. In addition, scapular rehabilitation is often ignored. Recent research, however, has demonstrated a pivotal role for the scapula in shoulder function, shoulder injury, and shoulder rehabilitation. This knowledge will help the physician to provide more comprehensive care for the athlete. This \"Current Concepts\" review will address the anatomy of the scapula, the roles that the scapula plays in overhead throwing and serving activities, the normal biomechanics of the scapula, abnormal biomechanics and physiology of the scapula, how the scapula may function in injuries that occur around the shoulder, and treatment and rehabilitation of scapular problems.",
"title": ""
},
{
"docid": "ff72ade7fdfba55c0f6ab7b5f8b74eb7",
"text": "Automatic detection of facial features in an image is important stage for various facial image interpretation work, such as face recognition, facial expression recognition, 3Dface modeling and facial features tracking. Detection of facial features like eye, pupil, mouth, nose, nostrils, lip corners, eye corners etc., with different facial expression and illumination is a challenging task. In this paper, we presented different methods for fully automatic detection of facial features. Viola-Jones' object detector along with haar-like cascaded features are used to detect face, eyes and nose. Novel techniques using the basic concepts of facial geometry, are proposed to locate the mouth position, nose position and eyes position. The estimation of detection region for features like eye, nose and mouth enhanced the detection accuracy significantly. An algorithm, using the H-plane of the HSV color space is proposed for detecting eye pupil from the eye detected region. FEI database of frontal face images is mainly used to test the algorithm. Proposed algorithm is tested over 100 frontal face images with two different facial expression (neutral face and smiling face). The results obtained are found to be 100% accurate for lip, lip corners, nose and nostrils detection. The eye corners, and eye pupil detection is giving approximately 95% accurate results.",
"title": ""
},
{
"docid": "46d20d0330aaaf22418c53c715d78631",
"text": "s, Cochrane Central Register of Controlled Trials and Database of Systemic Reviews, Database of Abstracts of Effects, ACP Journal Club, and OTseeker. Experts such as librarians have been used. However, there is no mention of efforts in relation to unpublished research, but abstracts of 950 articles weres of 950 articles were scanned, which appears to be a sufficient amount.",
"title": ""
},
{
"docid": "bdf191e0f2b06f13da05a08f34901459",
"text": "This paper presents a deduplication storage system over cloud computing. Our deduplication storage system consists of two major components, a front-end deduplication application and Hadoop Distributed File System. Hadoop Distributed File System is common back-end distribution file system, which is used with a Hadoop database. We use Hadoop Distributed File System to build up a mass storage system and use a Hadoop database to build up a fast indexing system. With the deduplication applications, a scalable and parallel deduplicated cloud storage system can be effectively built up. We further use VMware to generate a simulated cloud environment. The simulation results demonstrate that our deduplication cloud storage system is more efficient than traditional deduplication approaches.",
"title": ""
},
{
"docid": "a39834162b2072c69b03745cfdbe2f1a",
"text": "AI has seen great advances of many kinds recently, but there is one critical area where progress has been extremely slow: ordinary commonsense.",
"title": ""
},
{
"docid": "35e1a2344773d79fd7b50325757c14e5",
"text": "Many imaging tasks require global information about all pixels in an image. Conventional bottom-up classification networks globalize information by decreasing resolution; features are pooled and downsampled into a single output. But for semantic segmentation and object detection tasks, a network must provide higher-resolution pixel-level outputs. To globalize information while preserving resolution, many researchers propose the inclusion of sophisticated auxiliary blocks, but these come at the cost of a considerable increase in network size and computational cost. This paper proposes stacked u-nets (SUNets), which iteratively combine features from different resolution scales while maintaining resolution. SUNets leverage the information globalization power of u-nets in a deeper network architectures that is capable of handling the complexity of natural images. SUNets perform extremely well on semantic segmentation tasks using a small number of parameters. The code is available at https://github.com/shahsohil/sunets.",
"title": ""
},
{
"docid": "1a9fc19eb416eebdbfe1110c37e0852b",
"text": "Two important aspects of switched-mode (Class-D) amplifiers providing a high signal to noise ratio (SNR) for mechatronic applications are investigated. Signal jitter is common in digital systems and introduces noise, leading to a deterioration of the SNR. Hence, a jitter elimination technique for the transistor gate signals in power electronic converters is presented and verified. Jitter is reduced tenfold as compared to traditional approaches to values of 25 ps at the output of the power stage. Additionally, digital modulators used for the generation of the switch control signals can only achieve a limited resolution (and hence, limited SNR) due to timing constraints in digital circuits. Consequently, a specialized modulator structure based on noise shaping is presented and optimized which enables the creation of high-resolution switch control signals. This, together with the jitter reduction circuit, enables half-bridge output voltage SNR values of more than 100dB in an open-loop system.",
"title": ""
},
{
"docid": "945902f8d3dabb4e12143783a65457bd",
"text": "Authentication is a mechanism to verify identity of users. Those who can present valid credential are considered as authenticated identities. In this paper, we introduce an adaptive authentication system called Unified Authentication Platform (UAP) which incorporates adaptive control to identify high-risk and suspicious illegitimate login attempts. The system evaluates comprehensive set of known information about the users from the past login history to define their normal behavior profile. The system leverages this information that has been previously stored to determine the security risk and level of assurance of current login attempt.",
"title": ""
},
{
"docid": "f25266d59f3918b85217c4c54629e5de",
"text": "Customized logistics service and online shoppers’ satisfaction: an empirical study Mingyao Hu Fang Huang Hanping Hou Yong Chen Larissa Bulysheva Article information: To cite this document: Mingyao Hu Fang Huang Hanping Hou Yong Chen Larissa Bulysheva , (2016),\"Customized logistics service and online shoppers’ satisfaction: an empirical study\", Internet Research, Vol. 26 Iss 2 pp. Permanent link to this document: http://dx.doi.org/10.1108/IntR-11-2014-0295",
"title": ""
},
{
"docid": "78f03adf9c114a8a720c9518b1cbf59e",
"text": "A crucial capability of autonomous road vehicles is the ability to cope with the unknown future behavior of surrounding traffic participants. This requires using non-deterministic models for prediction. While stochastic models are useful for long-term planning, we use set-valued non-determinism capturing all possible behaviors in order to verify the safety of planned maneuvers. To reduce the set of solutions, our earlier work considers traffic rules; however, it neglects mutual influences between traffic participants. This work presents the first solution for establishing interaction within set-based prediction of traffic participants. Instead of explicitly modeling dependencies between vehicles, we trim reachable occupancy regions to consider interaction, which is computationally much more efficient. The usefulness of our approach is demonstrated by experiments from the CommonRoad benchmark repository.",
"title": ""
},
{
"docid": "8400fd3ffa3cdfd54e92370b8627c7e8",
"text": "A number of computer vision problems such as human age estimation, crowd density estimation and body/face pose (view angle) estimation can be formulated as a regression problem by learning a mapping function between a high dimensional vector-formed feature input and a scalar-valued output. Such a learning problem is made difficult due to sparse and imbalanced training data and large feature variations caused by both uncertain viewing conditions and intrinsic ambiguities between observable visual features and the scalar values to be estimated. Encouraged by the recent success in using attributes for solving classification problems with sparse training data, this paper introduces a novel cumulative attribute concept for learning a regression model when only sparse and imbalanced data are available. More precisely, low-level visual features extracted from sparse and imbalanced image samples are mapped onto a cumulative attribute space where each dimension has clearly defined semantic interpretation (a label) that captures how the scalar output value (e.g. age, people count) changes continuously and cumulatively. Extensive experiments show that our cumulative attribute framework gains notable advantage on accuracy for both age estimation and crowd counting when compared against conventional regression models, especially when the labelled training data is sparse with imbalanced sampling.",
"title": ""
}
] |
scidocsrr
|
108eb06bba679458650bcfb0ceedd835
|
Making machine learning models interpretable
|
[
{
"docid": "be9cea5823779bf5ced592f108816554",
"text": "Undoubtedly, bioinformatics is one of the fastest developing scientific disciplines in recent years. Bioinformatics is the development and application of computer methods for management, analysis, interpretation, and prediction, as well as for the design of experiments. There is already a significant number of books on bioinformatics. Some are introductory and require almost no prior experience in biology or computer science: “Bioinformatics Basics Applications in Biological Science and Medicine” and “Introduction to Bioinformatics.” Others are targeted to biologists entering the field of bioinformatics: “Developing Bioinformatics Computer Skills.” Some more specialized books are: “An Introduction to Support Vector Machines : And Other Kernel-Based Learning Methods”, “Biological Sequence Analysis : Probabilistic Models of Proteins and Nucleic Acids”, “Pattern Discovery in Bimolecular Data : Tools, Techniques, and Applications”, “Computational Molecular Biology: An Algorithmic Approach.” The book subject of this review has a broad scope. “Bioinformatics: The machine learning approach” is aimed at two types of researchers and students. First are the biologists and biochemists who need to understand new data-driven algorithms, such as neural networks and hidden Markov",
"title": ""
}
] |
[
{
"docid": "e755e96c2014100a69e4a962d6f75fb5",
"text": "We propose a material acquisition approach to recover the spatially-varying BRDF and normal map of a near-planar surface from a single image captured by a handheld mobile phone camera. Our method images the surface under arbitrary environment lighting with the flash turned on, thereby avoiding shadows while simultaneously capturing highfrequency specular highlights. We train a CNN to regress an SVBRDF and surface normals from this image. Our network is trained using a large-scale SVBRDF dataset and designed to incorporate physical insights for material estimation, including an in-network rendering layer to model appearance and a material classifier to provide additional supervision during training. We refine the results from the network using a dense CRF module whose terms are designed specifically for our task. The framework is trained end-to-end and produces high quality results for a variety of materials. We provide extensive ablation studies to evaluate our network on both synthetic and real data, while demonstrating significant improvements in comparisons with prior works.",
"title": ""
},
{
"docid": "559a4175347e5fea57911d9b8c5080e6",
"text": "Online social networks offering various services have become ubiquitous in our daily life. Meanwhile, users nowadays are usually involved in multiple online social networks simultaneously to enjoy specific services provided by different networks. Formally, social networks that share some common users are named as partially aligned networks. In this paper, we want to predict the formation of social links in multiple partially aligned social networks at the same time, which is formally defined as the multi-network link (formation) prediction problem. In multiple partially aligned social networks, users can be extensively correlated with each other by various connections. To categorize these diverse connections among users, 7 \"intra-network social meta paths\" and 4 categories of \"inter-network social meta paths\" are proposed in this paper. These \"social meta paths\" can cover a wide variety of connection information in the network, some of which can be helpful for solving the multi-network link prediction problem but some can be not. To utilize useful connection, a subset of the most informative \"social meta paths\" are picked, the process of which is formally defined as \"social meta path selection\" in this paper. An effective general link formation prediction framework, Mli (Multi-network Link Identifier), is proposed in this paper to solve the multi-network link (formation) prediction problem. Built with heterogenous topological features extracted based on the selected \"social meta paths\" in the multiple partially aligned social networks, Mli can help refine and disambiguate the prediction results reciprocally in all aligned networks. Extensive experiments conducted on real-world partially aligned heterogeneous networks, Foursquare and Twitter, demonstrate that Mli can solve the multi-network link prediction problem very well.",
"title": ""
},
{
"docid": "17c49edf5842fb918a3bd4310d910988",
"text": "In this paper, we present a real-time salient object detection system based on the minimum spanning tree. Due to the fact that background regions are typically connected to the image boundaries, salient objects can be extracted by computing the distances to the boundaries. However, measuring the image boundary connectivity efficiently is a challenging problem. Existing methods either rely on superpixel representation to reduce the processing units or approximate the distance transform. Instead, we propose an exact and iteration free solution on a minimum spanning tree. The minimum spanning tree representation of an image inherently reveals the object geometry information in a scene. Meanwhile, it largely reduces the search space of shortest paths, resulting an efficient and high quality distance transform algorithm. We further introduce a boundary dissimilarity measure to compliment the shortage of distance transform for salient object detection. Extensive evaluations show that the proposed algorithm achieves the leading performance compared to the state-of-the-art methods in terms of efficiency and accuracy.",
"title": ""
},
{
"docid": "c495fadfd4c3e17948e71591e84c3398",
"text": "A real-time, digital algorithm for pulse width modulation (PWM) with distortion-free baseband is developed in this paper. The algorithm not only eliminates the intrinsic baseband distortion of digital PWM but also avoids the appearance of side-band components of the carrier in the baseband even for low switching frequencies. Previous attempts to implement digital PWM with these spectral properties required several processors due to their complexity; the proposed algorithm uses only several FIR filters and a few multiplications and additions and therefore is implemented in real time on a standard DSP. The performance of the algorithm is compared with that of uniform, double-edge PWM modulator via experimental measurements for several bandlimited modulating signals.",
"title": ""
},
{
"docid": "93f0026a850a620ecabafdbfec3abb72",
"text": "Knet (pronounced \"kay-net\") is the Koç University machine learning framework implemented in Julia, a high-level, high-performance, dynamic programming language. Unlike gradient generating compilers like Theano and TensorFlow which restrict users into a modeling mini-language, Knet allows models to be defined by just describing their forward computation in plain Julia, allowing the use of loops, conditionals, recursion, closures, tuples, dictionaries, array indexing, concatenation and other high level language features. High performance is achieved by combining automatic differentiation of most of Julia with efficient GPU kernels and memory management. Several examples and benchmarks are provided to demonstrate that GPU support and automatic differentiation of a high level language are sufficient for concise definition and efficient training of sophisticated models.",
"title": ""
},
{
"docid": "46df05f01a027359f23d4de2396e2586",
"text": "Dialog act identification plays an important role in understanding conversations. It has been widely applied in many fields such as dialogue systems, automatic machine translation, automatic speech recognition, and especially useful in systems with human-computer natural language dialogue interfaces such as virtual assistants and chatbots. The first step of identifying dialog act is identifying the boundary of the dialog act in utterances. In this paper, we focus on segmenting the utterance according to the dialog act boundaries, i.e. functional segments identification, for Vietnamese utterances. We investigate carefully functional segment identification in two approaches: (1) machine learning approach using maximum entropy (ME) and conditional random fields (CRFs); (2) deep learning approach using bidirectional Long Short-Term Memory (LSTM) with a CRF layer (Bi-LSTM-CRF) on two different conversational datasets: (1) Facebook messages (Message data); (2) transcription from phone conversations (Phone data). To the best of our knowledge, this is the first work that applies deep learning based approach to dialog act segmentation. As the results show, deep learning approach performs appreciably better as to compare with traditional machine learning approaches. Moreover, it is also the first study that tackles dialog act and functional segment identification for Vietnamese.",
"title": ""
},
{
"docid": "f66c9aa537630fdbff62d8d49205123b",
"text": "This workshop will explore community based repositories for educational data and analytic tools that are used to connect researchers and reduce the barriers to data sharing. Leading innovators in the field, as well as attendees, will identify and report on bottlenecks that remain toward our goal of a unified repository. We will discuss these as well as possible solutions. We will present LearnSphere, an NSF funded system that supports collaborating on and sharing a wide variety of educational data, learning analytics methods, and visualizations while maintaining confidentiality. We will then have hands-on sessions in which attendees have the opportunity to apply existing learning analytics workflows to their choice of educational datasets in the repository (using a simple drag-and-drop interface), add their own learning analytics workflows (requires very basic coding experience), or both. Leaders and attendees will then jointly discuss the unique benefits as well as the limitations of these solutions. Our goal is to create building blocks to allow researchers to integrate their data and analysis methods with others, in order to advance the future of learning science.",
"title": ""
},
{
"docid": "506a6a98e87fb5a6dc7e5cbe9cf27262",
"text": "Image-to-image translation has recently received significant attention due to advances in deep learning. Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way. However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex innerand cross-domain variations. To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain. We assume that an image comprises of a content component which is shared across domains, and a style component specific to each domain. Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain. To avoid semantic inconsistencies during translation that naturally appear due to the large innerand cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels. Experimental results on various datasets show that EGSC-IT does not only translate the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process. Source (GTA5) Target (BDD) Figure 1: Exemplar guided image translation examples of GTA5→ BDD. Best viewed in color.",
"title": ""
},
{
"docid": "46829dde25c66191bcefae3614c2dd3f",
"text": "User-generated content (UGC) on the Web, especially on social media platforms, facilitates the association of additional information with digital resources; thus, it can provide valuable supplementary content. However, UGC varies in quality and, consequently, raises the challenge of how to maximize its utility for a variety of end-users. This study aims to provide researchers and Web data curators with comprehensive answers to the following questions: What are the existing approaches and methods for assessing and ranking UGC? What features and metrics have been used successfully to assess and predict UGC value across a range of application domains? What methods can be effectively employed to maximize that value? This survey is composed of a systematic review of approaches for assessing and ranking UGC: results are obtained by identifying and comparing methodologies within the context of short text-based UGC on the Web. Existing assessment and ranking approaches adopt one of four framework types: the community-based framework takes into consideration the value assigned to content by a crowd of humans, the end-user--based framework adapts and personalizes the assessment and ranking process with respect to a single end-user, the designer-based framework encodes the software designer’s values in the assessment and ranking method, and the hybrid framework employs methods from more than one of these types. This survey suggests a need for further experimentation and encourages the development of new approaches for the assessment and ranking of UGC.",
"title": ""
},
{
"docid": "6c3f80b453d51e364eca52656ed54e62",
"text": "Despite substantial recent research activity related to continuous delivery and deployment (CD), there has not yet been a systematic, empirical study on how the practices often associated with continuous deployment have found their way into the broader software industry. This raises the question to what extent our knowledge of the area is dominated by the peculiarities of a small number of industrial leaders, such as Facebook. To address this issue, we conducted a mixed-method empirical study, consisting of a pre-study on literature, qualitative interviews with 20 software developers or release engineers with heterogeneous backgrounds, and a Web-based quantitative survey that attracted 187 complete responses. A major trend in the results of our study is that architectural issues are currently one of the main barriers for CD adoption. Further, feature toggles as an implementation technique for partial rollouts lead to unwanted complexity, and require research on better abstractions and modelling techniques for runtime variability. Finally, we conclude that practitioners are in need for more principled approaches to release decision making, e.g., which features to conduct A/B tests on, or which metrics to evaluate.",
"title": ""
},
{
"docid": "52212ff3e1c85b5f5c3fcf0ec71f6f8b",
"text": "Embodied cognition theory proposes that individuals' abstract concepts can be associated with sensorimotor processes. The authors examined the effects of teaching participants novel embodied metaphors, not based in prior physical experience, and found evidence suggesting that they lead to embodied simulation, suggesting refinements to current models of embodied cognition. Creating novel embodiments of abstract concepts in the laboratory may be a useful method for examining mechanisms of embodied cognition.",
"title": ""
},
{
"docid": "712cd41c525b6632a7a5c424173d6f1e",
"text": "The use of 3-D multicellular spheroid (MCS) models is increasingly being accepted as a viable means to study cell-cell, cell-matrix and cell-drug interactions. Behavioral differences between traditional monolayer (2-D) cell cultures and more recent 3-D MCS confirm that 3-D MCS more closely model the in vivo environment. However, analyzing the effect of pharmaceutical agents on both monolayer cultures and MCS is very time intensive. This paper reviews the use of electrical impedance spectroscopy (EIS), a label-free whole cell assay technique, as a tool for automated screening of cell drug interactions in MCS models for biologically/physiologically relevant events over long periods of time. EIS calculates the impedance of a sample by applying an AC current through a range of frequencies and measuring the resulting voltage. This review will introduce techniques used in impedance-based analysis of 2-D systems; highlight recently developed impedance-based techniques for analyzing 3-D cell cultures; and discuss applications of 3-D culture impedance monitoring systems.",
"title": ""
},
{
"docid": "cc92787280db22c46a159d95f6990473",
"text": "A novel formulation for the voltage waveforms in high efficiency linear power amplifiers is described. This formulation demonstrates that a constant optimum efficiency and output power can be obtained over a continuum of solutions by utilizing appropriate harmonic reactive impedance terminations. A specific example is confirmed experimentally. This new formulation has some important implications for the possibility of realizing broadband >10% high efficiency linear RF power amplifiers.",
"title": ""
},
{
"docid": "ef26995e3979f479f4c3628283816d5d",
"text": "This article addresses the position taken by Clark (1983) that media do not influence learning under any conditions. The article reframes the questions raised by Clark to explore the conditions under which media will influence learning. Specifically, it posits the need to consider the capabilities of media, and the methods that employ them, as they interact with the cognitive and social processes by which knowledge is constructed. This approach is examined within the context of two major media-based projects, one which uses computers and the other,video. The article discusses the implications of this approach for media theory, research and practice.",
"title": ""
},
{
"docid": "55a0fb2814fde7890724a137fc414c88",
"text": "Quantitative structure-activity relationship modeling is one of the major computational tools employed in medicinal chemistry. However, throughout its entire history it has drawn both praise and criticism concerning its reliability, limitations, successes, and failures. In this paper, we discuss (i) the development and evolution of QSAR; (ii) the current trends, unsolved problems, and pressing challenges; and (iii) several novel and emerging applications of QSAR modeling. Throughout this discussion, we provide guidelines for QSAR development, validation, and application, which are summarized in best practices for building rigorously validated and externally predictive QSAR models. We hope that this Perspective will help communications between computational and experimental chemists toward collaborative development and use of QSAR models. We also believe that the guidelines presented here will help journal editors and reviewers apply more stringent scientific standards to manuscripts reporting new QSAR studies, as well as encourage the use of high quality, validated QSARs for regulatory decision making.",
"title": ""
},
{
"docid": "00223ccf5b5aebfc23c76afb7192e3f7",
"text": "Computer Security System / technology have passed through several changes. The trends have been from what you know (e.g. password, PIN, etc) to what you have (ATM card, Driving License, etc) and presently to who you are (Biometry) or combinations of two or more of the trios. This technology (biometry) has come to solve the problems identified with knowledge-based and token-based authentication systems. It is possible to forget your password and what you have can as well be stolen. The security of determining who you are is referred to as BIOMETRIC. Biometric, in a nutshell, is the use of your body as password. This paper explores the various methods of biometric identification that have evolved over the years and the features used for each modality.",
"title": ""
},
{
"docid": "a7618e1370db3fca4262f8d36979aa91",
"text": "Generative Adversarial Network (GAN) has been shown to possess the capability to learn distributions of data, given infinite capacity of models [1, 2]. Empirically, approximations with deep neural networks seem to have “sufficiently large” capacity and lead to several success in many applications, such as image generation. However, most of the results are difficult to evaluate because of the curse of dimensionality and the unknown distribution of the data. To evaluate GANs, in this paper, we consider simple one-dimensional data coming from parametric distributions circumventing the aforementioned problems. We formulate rigorous techniques for evaluation under this setting. Based on this evaluation, we find that many state-ofthe-art GANs are very difficult to train to learn the true distribution and can usually only find some of the modes. If the GAN has learned, such as MMD GAN, we observe it has some generalization capabilities.",
"title": ""
},
{
"docid": "82865170278997209a650aa8be483703",
"text": "This paper presents a novel dataset for traffic accidents analysis. Our goal is to resolve the lack of public data for research about automatic spatio-temporal annotations for traffic safety in the roads. Through the analysis of the proposed dataset, we observed a significant degradation of object detection in pedestrian category in our dataset, due to the object sizes and complexity of the scenes. To this end, we propose to integrate contextual information into conventional Faster R-CNN using Context Mining (CM) and Augmented Context Mining (ACM) to complement the accuracy for small pedestrian detection. Our experiments indicate a considerable improvement in object detection accuracy: +8.51% for CM and +6.20% for ACM. Finally, we demonstrate the performance of accident forecasting in our dataset using Faster R-CNN and an Accident LSTM architecture. We achieved an average of 1.684 seconds in terms of Time-To-Accident measure with an Average Precision of 47.25%. Our Webpage for the paper is https:",
"title": ""
},
{
"docid": "1c8ac344f85ff4d4a711536841168b6a",
"text": "Internet Protocol Television (IPTV) is an increasingly popular multimedia service which is used to deliver television, video, audio and other interactive content over proprietary IP-based networks. Video on Demand (VoD) is one of the most popular IPTV services, and is very important for IPTV providers since it represents the second most important revenue stream after monthly subscriptions. In addition to high-quality VoD content, profitable VoD service provisioning requires an enhanced content accessibility to greatly improve end-user experience. Moreover, it is imperative to offer innovative features to attract new customers and retain existing ones. To achieve this goal, IPTV systems typically employ VoD recommendation engines to offer personalized lists of VoD items that are potentially interesting to a user from a large amount of available titles. In practice, a good recommendation engine does not offer popular and well-known titles, but is rather able to identify interesting among less popular items which would otherwise be hard to find. In this paper we report our experience in building a VoD recommendation system. The presented evaluation shows that our recommendation system is able to recommend less popular items while operating under a high load of end-user requests.",
"title": ""
},
{
"docid": "97065954a10665dee95977168b9e6c60",
"text": "We describe the current status of Pad++, a zooming graphical interface that we are exploring as an alternative to traditional window and icon-based approaches to interface design. We discuss the motivation for Pad++, describe the implementation, and present prototype applications. In addition, we introduce an informational physics strategy for interface design and briefly compare it with metaphor-based design strategies.",
"title": ""
}
] |
scidocsrr
|
ecb24abe09ddd13c3ecda600afea6a50
|
Scalable inside-out image-based rendering
|
[
{
"docid": "8a812c0ec6f8d29f9cbff4af2fa1c868",
"text": "Due to the demand for depth maps of higher quality than possible with a single depth imaging technique today, there has been an increasing interest in the combination of different depth sensors to produce a “super-camera” that is more than the sum of the individual parts. In this survey paper, we give an overview over methods for the fusion of Time-ofFlight (ToF) and passive stereo data as well as applications of the resulting high quality depth maps. Additionally, we provide a tutorial-based introduction to the principles behind ToF stereo fusion and the evaluation criteria used to benchmark these methods.",
"title": ""
}
] |
[
{
"docid": "6cc3f51b56261c1b51da88fb9deaa893",
"text": "We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed Latex characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).",
"title": ""
},
{
"docid": "095c796491edf050dc372799ae82b3d3",
"text": "Networks evolve continuously over time with the addition, deletion, and changing of links and nodes. Although many networks contain this type of temporal information, the majority of research in network representation learning has focused on static snapshots of the graph and has largely ignored the temporal dynamics of the network. In this work, we describe a general framework for incorporating temporal information into network embedding methods. The framework gives rise to methods for learning time-respecting embeddings from continuous-time dynamic networks. Overall, the experiments demonstrate the effectiveness of the proposed framework and dynamic network embedding approach as it achieves an average gain of 11.9% across all methods and graphs. The results indicate that modeling temporal dependencies in graphs is important for learning appropriate and meaningful network representations.",
"title": ""
},
{
"docid": "b09ebc39f36f16a0ef7cb1b5e3ce9620",
"text": "Mobile applications (apps) can be very useful software on smartphones for all aspects of people’s lives. Chronic diseases, such as diabetes, can be made manageable with the support of mobile apps. Applications on smartphones can also help people with diabetes to control their fitness and health. A systematic review of free apps in the English language for smartphones in three of the most popular mobile app stores: Google Play (Android), App Store (iOS) and Windows Phone Store, was performed from November to December 2015. The review of freely available mobile apps for self-management of diabetes was conducted based on the criteria for promoting diabetes self-management as defined by Goyal and Cafazzo (monitoring blood glucose level and medication, nutrition, physical exercise and body weight). The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) was followed. Three independent experts in the field of healthcare-related mobile apps were included in the assessment for eligibility and testing phase. We tested and evaluated 65 apps (21 from Google Play Store, 31 from App Store and 13 from Windows Phone Store). Fifty-six of these apps did not meet even minimal requirements or did not work properly. While a wide selection of mobile applications is available for self-management of diabetes, current results show that there are only nine (5 from Google Play Store, 3 from App Store and 1 from Windows Phone Store) out of 65 reviewed mobile apps that can be versatile and useful for successful self-management of diabetes based on selection criteria. The levels of inclusion of features based on selection criteria in selected mobile apps can be very different. The results of the study can be used as a basis to prvide app developers with certain recommendations. There is a need for mobile apps for self-management of diabetes with more features in order to increase the number of long-term users and thus influence better self-management of the disease.",
"title": ""
},
{
"docid": "7850280ba2c29dc328b9594f4def05a6",
"text": "Electric traction motors in automotive applications work in operational conditions characterized by variable load, rotational speed and other external conditions: this complicates the task of diagnosing bearing defects. The objective of the present work is the development of a diagnostic system for detecting the onset of degradation, isolating the degrading bearing, classifying the type of defect. The developed diagnostic system is based on an hierarchical structure of K-Nearest Neighbours classifiers. The selection of the features from the measured vibrational signals to be used in input by the bearing diagnostic system is done by a wrapper approach based on a Multi-Objective (MO) optimization that integrates a Binary Differential Evolution (BDE) algorithm with the K-Nearest Neighbour (KNN) classifiers. The developed approach is applied to an experimental dataset. The satisfactory diagnostic performances obtain show the capability of the method, independently from the bearings operational conditions.",
"title": ""
},
{
"docid": "231287a073198d45375dae8856c36572",
"text": "We consider a setting in which two firms compete to spread rumors in a social network. Firms seed their rumors simultaneously and rumors propagate according to the linear threshold model. Consumers have (randomly drawn) heterogeneous thresholds for each product. Using the concept of cascade centrality introduced by [6], we provide a sharp characterization of networks in which games admit purestrategy Nash equilibria (PSNE). We provide tight bounds for the efficiency of these equilibria and for the inequality in firms' equilibrium payoffs. When the network is a tree, the model is particularly tractable.",
"title": ""
},
{
"docid": "a8a51268e3e4dc3b8dd5102dafcb8f36",
"text": "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node’s local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.",
"title": ""
},
{
"docid": "91414c022cad78ee98b7662647253340",
"text": "Biometric based authentication, particularly for fingerprint authentication systems play a vital role in identifying an individual. The existing fingerprint authentication systems depend on specific points known as minutiae for recognizing an individual. Designing a reliable automatic fingerprint authentication system is still very challenging, since not all fingerprint information is available. Further, the information obtained is not always accurate due to cuts, scars, sweat, distortion and various skin conditions. Moreover, the existing fingerprint authentication systems do not utilize other significant minutiae information, which can improve the accuracy. Various local feature detectors such as Difference-of-Gaussian, Hessian, Hessian Laplace, Harris Laplace, Multiscale Harris, and Multiscale Hessian have been extensively used for feature detection. However, these detectors have not been employed for detecting fingerprint image features. In this article, a versatile local feature fingerprint matching scheme is proposed. The local features are obtained by exploiting these local geometric detectors and SIFT descriptor. This scheme considers local characteristic features of the fingerprint image, thus eliminating the issues caused in existing fingerprint feature based matching techniques. Computer simulations of the proposed algorithm on specific databases show significant improvements when compared to existing fingerprint matchers, such as minutiae matcher, hierarchical matcher and graph based matcher. Computer simulations conducted on the Neurotechnology database demonstrates a very low Equal Error Rate (EER) of 0.8%. The proposed system a) improves the accuracy of the fingerprint authentication system, b) works when the minutiae information is sparse, and c) produces satisfactory matching accuracy in the case when minutiae information is unavailable. The proposed system can also be employed for partial fingerprint authentication.",
"title": ""
},
{
"docid": "72385aba9bdf5f8d35985fc8ff98a5ff",
"text": "Because biometrics-based authentication offers several advantages over other authentication methods, there has been a significant surge in the use of biometrics for user authentication in recent years. It is important that such biometrics-based authentication systems be designed to withstand attacks when employed in security-critical applications, especially in unattended remote applications such as ecommerce. In this paper we outline the inherent strengths of biometrics-based authentication, identify the weak links in systems employing biometrics-based authentication, and present new solutions for eliminating some of these weak links. Although, for illustration purposes, fingerprint authentication is used throughout, our analysis extends to other biometrics-based methods.",
"title": ""
},
{
"docid": "e81b7c70e05b694a917efdd52ef59132",
"text": "Last several years, industrial and information technology field have undergone profound changes, entering \"Industry 4.0\" era. Industry4.0, as a representative of the future of the Fourth Industrial Revolution, evolved from embedded system to the Cyber Physical System (CPS). Manufacturing will be via the Internet, to achieve Internal and external network integration, toward the intelligent direction. This paper introduces the development of Industry 4.0, and the Cyber Physical System is introduced with the example of the Wise Information Technology of 120 (WIT120), then the application of Industry 4.0 in intelligent manufacturing is put forward through the digital factory to the intelligent factory. Finally, the future development direction of Industry 4.0 is analyzed, which provides reference for its application in intelligent manufacturing.",
"title": ""
},
{
"docid": "00bea30f23603619470e64edf61b4942",
"text": "OBJECTIVES\nHigh-throughput techniques such as cDNA microarray, oligonucleotide arrays, and serial analysis of gene expression (SAGE) have been developed and used to automatically screen huge amounts of gene expression data. However, researchers usually spend lots of time and money on discovering gene-disease relationships by utilizing these techniques. We prototypically implemented an algorithm that can provide some kind of predicted results for biological researchers before they proceed with experiments, and it is very helpful for them to discover gene-disease relationships more efficiently.\n\n\nMETHODS\nDue to the fast development of computer technology, many information retrieval techniques have been applied to analyze huge digital biomedical databases available worldwide. Therefore we highly expect that we can apply information retrieval (IR) technique to extract useful information for the relationship of specific diseases and genes from MEDLINE articles. Furthermore, we also applied natural language processing (NLP) methods to do the semantic analysis for the relevant articles to discover the relationships between genes and diseases.\n\n\nRESULTS\nWe have extracted gene symbols from our literature collection according to disease MeSH classifications. We have also built an IR-based retrieval system, \"Biomedical Literature Retrieval System (BLRS)\" and applied the N-gram model to extract the relationship features which can reveal the relationship between genes and diseases. Finally, a relationship network of a specific disease has been built to represent the gene-disease relationships.\n\n\nCONCLUSIONS\nA relationship feature is a functional word that can reveal the relationship between one single gene and a disease. By incorporating many modern IR techniques, we found that BLRS is a very powerful information discovery tool for literature searching. A relationship network which contains the information on gene symbol, relationship feature, and disease MeSH term can provide an integrated view to discover gene-disease relationships.",
"title": ""
},
{
"docid": "5ed74b235edcbcb5aeb5b6b3680e2122",
"text": "Self-paced learning (SPL) mimics the cognitive mechanism o f humans and animals that gradually learns from easy to hard samples. One key issue in SPL is to obtain better weighting strategy that is determined by mini zer function. Existing methods usually pursue this by artificially designing th e explicit form of SPL regularizer. In this paper, we focus on the minimizer functi on, and study a group of new regularizer, named self-paced implicit regularizer th at is deduced from robust loss function. Based on the convex conjugacy theory, the min imizer function for self-paced implicit regularizer can be directly learned fr om the latent loss function, while the analytic form of the regularizer can be even known. A general framework (named SPL-IR) for SPL is developed accordingly. We dem onstrate that the learning procedure of SPL-IR is associated with latent robu st loss functions, thus can provide some theoretical inspirations for its working m echanism. We further analyze the relation between SPL-IR and half-quadratic opt imization. Finally, we implement SPL-IR to both supervised and unsupervised tasks , nd experimental results corroborate our ideas and demonstrate the correctn ess and effectiveness of implicit regularizers.",
"title": ""
},
{
"docid": "cc21d54f763176994602f9ae598596ce",
"text": "BACKGROUND\nRecent studies have revealed that nursing staff turnover remains a major problem in emerging economies. In particular, nursing staff turnover in Malaysia remains high due to a lack of job satisfaction. Despite a shortage of healthcare staff, the Malaysian government plans to create 181 000 new healthcare jobs by 2020 through the Economic Transformation Programme (ETP). This study investigated the causal relationships among perceived transformational leadership, empowerment, and job satisfaction among nurses and medical assistants in two selected large private and public hospitals in Malaysia. This study also explored the mediating effect of empowerment between transformational leadership and job satisfaction.\n\n\nMETHODS\nThis study used a survey to collect data from 200 nursing staff, i.e., nurses and medical assistants, employed by a large private hospital and a public hospital in Malaysia. Respondents were asked to answer 5-point Likert scale questions regarding transformational leadership, employee empowerment, and job satisfaction. Partial least squares-structural equation modeling (PLS-SEM) was used to analyze the measurement models and to estimate parameters in a path model. Statistical analysis was performed to examine whether empowerment mediated the relationship between transformational leadership and job satisfaction.\n\n\nRESULTS\nThis analysis showed that empowerment mediated the effect of transformational leadership on the job satisfaction in nursing staff. Employee empowerment not only is indispensable for enhancing job satisfaction but also mediates the relationship between transformational leadership and job satisfaction among nursing staff.\n\n\nCONCLUSIONS\nThe results of this research contribute to the literature on job satisfaction in healthcare industries by enhancing the understanding of the influences of empowerment and transformational leadership on job satisfaction among nursing staff. This study offers important policy insight for healthcare managers who seek to increase job satisfaction among their nursing staff.",
"title": ""
},
{
"docid": "d1756aa5f0885157bdad130d96350cd3",
"text": "In this paper, we describe the winning approach for the RecSys Challenge 2015. Our key points are (1) two-stage classification, (2) massive usage of categorical features, (3) strong classifiers built by gradient boosting and (4) threshold optimization based directly on the competition score. We describe our approach and discuss how it can be used to build scalable personalization systems.",
"title": ""
},
{
"docid": "b08ea654e0d5ab7286013207a522a708",
"text": "Recent advances in sensing and computing technologies have inspired a new generation of data analysis and visualization systems for video surveillance applications. We present a novel visualization system for video surveillance based on an Augmented Virtual Environment (AVE) that fuses dynamic imagery with 3D models in a real-time display to help observers comprehend multiple streams of temporal data and imagery from arbitrary views of the scene. This paper focuses on our recent technical extensions to our AVE system, including moving object detection, tracking, and 3D display for effective dynamic event comprehension and situational awareness. Moving objects are detected and tracked in video sequences and visualized as pseudo-3D elements in the AVE scene display in real-time. We show results that illustrate the utility and benefits of these new capabilities.",
"title": ""
},
{
"docid": "29ba9499fd1d5f61f4222efbb56eb623",
"text": "Artificial neural networks had their first heyday in molecular informatics and drug discovery approximately two decades ago. Currently, we are witnessing renewed interest in adapting advanced neural network architectures for pharmaceutical research by borrowing from the field of \"deep learning\". Compared with some of the other life sciences, their application in drug discovery is still limited. Here, we provide an overview of this emerging field of molecular informatics, present the basic concepts of prominent deep learning methods and offer motivation to explore these techniques for their usefulness in computer-assisted drug discovery and design. We specifically emphasize deep neural networks, restricted Boltzmann machine networks and convolutional networks.",
"title": ""
},
{
"docid": "5a397012744d958bb1a69b435c73e666",
"text": "We introduce a method to generate whole body motion of a humanoid robot such that the resulted total linear/angular momenta become specified values. First, we derive a linear equation which gives the total momentum of a robot from its physical parameters, the base link speed and the joint speeds. Constraints between the legs and the environment are also considered. The whole body motion is calculated from a given momentum reference by using a pseudo-inverse of the inertia matrix. As examples, we generated the kicking and walking motions and tested on the actual humanoid robot HRP-2. This method, the Resolved Momentum Control, gives us a unified framework to generate various maneuver of humanoid robots.",
"title": ""
},
{
"docid": "8ac205b5b2344b64e926a5e18e43322f",
"text": "In 2015, Google's Deepmind announced an advancement in creating an autonomous agent based on deep reinforcement learning (DRL) that could beat a professional player in a series of 49 Atari games. However, the current manifestation of DRL is still immature, and has significant drawbacks. One of DRL's imperfections is its lack of \"exploration\" during the training process, especially when working with high-dimensional problems. In this paper, we propose a mixed strategy approach that mimics behaviors of human when interacting with environment, and create a \"thinking\" agent that allows for more efficient exploration in the DRL training process. The simulation results based on the Breakout game show that our scheme achieves a higher probability of obtaining a maximum score than does the baseline DRL algorithm, i.e., the asynchronous advantage actor-critic method. The proposed scheme therefore can be applied effectively to solving a complicated task in a real-world application.",
"title": ""
},
{
"docid": "37ccaaf82bd001e48ef1d4a2651a5700",
"text": "In a wireless network with a single source and a single destination and an arbitrary number of relay nodes, what is the maximum rate of information flow achievable? We make progress on this long standing problem through a two-step approach. First, we propose a deterministic channel model which captures the key wireless properties of signal strength, broadcast and superposition. We obtain an exact characterization of the capacity of a network with nodes connected by such deterministic channels. This result is a natural generalization of the celebrated max-flow min-cut theorem for wired networks. Second, we use the insights obtained from the deterministic analysis to design a new quantize-map-and-forward scheme for Gaussian networks. In this scheme, each relay quantizes the received signal at the noise level and maps it to a random Gaussian codeword for forwarding, and the final destination decodes the source's message based on the received signal. We show that, in contrast to existing schemes, this scheme can achieve the cut-set upper bound to within a gap which is independent of the channel parameters. In the case of the relay channel with a single relay as well as the two-relay Gaussian diamond network, the gap is 1 bit/s/Hz. Moreover, the scheme is universal in the sense that the relays need no knowledge of the values of the channel parameters to (approximately) achieve the rate supportable by the network. We also present extensions of the results to multicast networks, half-duplex networks, and ergodic networks.",
"title": ""
},
{
"docid": "7ec81eb3119d8b26056a587397bfeff4",
"text": "A human brain can store and remember thousands of faces in a person's life time, however it is very difficult for an automated system to reproduce the same results. Faces are complex and multidimensional which makes extraction of facial features to be very challenging, yet it is imperative for our face recognition systems to be better than our brain's capabilities. The face like many physiological biometrics that include fingerprint, hand geometry, retina, iris and ear uniquely identifies each individual. In this paper we focus mainly on the face recognition techniques. This review looks at three types of recognition approaches namely holistic, feature based (geometric) and the hybrid approach. We also look at the challenges that are face by the approaches.",
"title": ""
},
{
"docid": "6f3223a26959bd80e7ec73700a232657",
"text": "Question answering over knowledge graph (QA-KG) aims to use facts in the knowledge graph (KG) to answer natural language questions. It helps end users more efficiently and more easily access the substantial and valuable knowledge in the KG, without knowing its data structures. QA-KG is a nontrivial problem since capturing the semantic meaning of natural language is difficult for a machine. Meanwhile, many knowledge graph embedding methods have been proposed. The key idea is to represent each predicate/entity as a low-dimensional vector, such that the relation information in the KG could be preserved. The learned vectors could benefit various applications such as KG completion and recommender systems. In this paper, we explore to use them to handle the QA-KG problem. However, this remains a challenging task since a predicate could be expressed in different ways in natural language questions. Also, the ambiguity of entity names and partial names makes the number of possible answers large. To bridge the gap, we propose an effective Knowledge Embedding based Question Answering (KEQA) framework. We focus on answering the most common types of questions, i.e., simple questions, in which each question could be answered by the machine straightforwardly if its single head entity and single predicate are correctly identified. To answer a simple question, instead of inferring its head entity and predicate directly, KEQA targets at jointly recovering the question's head entity, predicate, and tail entity representations in the KG embedding spaces. Based on a carefully-designed joint distance metric, the three learned vectors' closest fact in the KG is returned as the answer. Experiments on a widely-adopted benchmark demonstrate that the proposed KEQA outperforms the state-of-the-art QA-KG methods.",
"title": ""
}
] |
scidocsrr
|
8444760ba8bd035fa3fc36a4d3d7fc61
|
Low Cost Self-assistive Voice Controlled Technology for Disabled People
|
[
{
"docid": "802d66fda1701252d1addbd6d23f6b4c",
"text": "Powered wheelchair users often struggle to drive safely and effectively and, in more critical cases, can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists users as and when they require help. The system uses a multiple-hypothesis method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but also, perhaps more importantly, characterize the user performance in an experiment that combines eye tracking with a secondary task. Without assistance, participants experienced multiple collisions while driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely but also they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.",
"title": ""
}
] |
[
{
"docid": "7dc9afa44cc609a658b11a949829e2b9",
"text": "To achieve security in wireless sensor networks, it is important to he able to encrypt messages sent among sensor nodes. Keys for encryption purposes must he agreed upon by communicating nodes. Due to resource constraints, achieving such key agreement in wireless sensor networks is nontrivial. Many key agreement schemes used in general networks, such as Diffie-Hellman and public-key based schemes, are not suitable for wireless sensor networks. Pre-distribution of secret keys for all pairs of nodes is not viable due to the large amount of memory used when the network size is large. Recently, a random key pre-distribution scheme and its improvements have been proposed. A common assumption made by these random key pre-distribution schemes is that no deployment knowledge is available. Noticing that in many practical scenarios, certain deployment knowledge may be available a priori, we propose a novel random key pre-distribution scheme that exploits deployment knowledge and avoids unnecessary key assignments. We show that the performance (including connectivity, memory usage, and network resilience against node capture) of sensor networks can he substantially improved with the use of our proposed scheme. The scheme and its detailed performance evaluation are presented in this paper.",
"title": ""
},
{
"docid": "030c8aeb4e365bfd2fdab710f8c9f598",
"text": "By combining linear graph theory with the principle of virtual work, a dynamic formulation is obtained that extends graph-theoretic modelling methods to the analysis of exible multibody systems. The system is represented by a linear graph, in which nodes represent reference frames on rigid and exible bodies, and edges represent components that connect these frames. By selecting a spanning tree for the graph, the analyst can choose the set of coordinates appearing in the nal system of equations. This set can include absolute, joint, or elastic coordinates, or some combination thereof. If desired, all non-working constraint forces and torques can be automatically eliminated from the dynamic equations by exploiting the properties of virtual work. The formulation has been implemented in a computer program, DynaFlex, that generates the equations of motion in symbolic form. Three examples are presented to demonstrate the application of the formulation, and to validate the symbolic computer implementation.",
"title": ""
},
{
"docid": "7394f3000da8af0d4a2b33fed4f05264",
"text": "We often base our decisions on uncertain data - for instance, when consulting the weather forecast before deciding what to wear. Due to their uncertainty, such forecasts can differ by provider. To make an informed decision, many people compare several forecasts, which is a time-consuming and cumbersome task. To facilitate comparison, we identified three aggregation mechanisms for forecasts: manual comparison and two mechanisms of computational aggregation. In a survey, we compared the mechanisms using different representations. We then developed a weather application to evaluate the most promising candidates in a real-world study. Our results show that aggregation increases users' confidence in uncertain data, independent of the type of representation. Further, we find that for daily events, users prefer to use computationally aggregated forecasts. However, for high-stakes events, they prefer manual comparison. We discuss how our findings inform the design of improved interfaces for comparison of uncertain data, including non-weather purposes.",
"title": ""
},
{
"docid": "1644d83b83383bffbd01b0ae83c3836c",
"text": "The dysregulation of inflammatory responses and of immune self-tolerance is considered to be a key element in the autoreactive immune response in multiple sclerosis (MS). Regulatory T (TREG) cells have emerged as crucial players in the pathogenetic scenario of CNS autoimmune inflammation. Targeted deletion of TREG cells causes spontaneous autoimmune disease in mice, whereas augmentation of TREG-cell function can prevent the development of or alleviate variants of experimental autoimmune encephalomyelitis, the animal model of MS. Recent findings indicate that MS itself is also accompanied by dysfunction or impaired maturation of TREG cells. The development and function of TREG cells is closely linked to dendritic cells (DCs), which have a central role in the activation and reactivation of encephalitogenic cells in the CNS. DCs and TREG cells have an intimate bidirectional relationship, and, in combination with other factors and cell types, certain types of DCs are capable of inducing TREG cells. Consequently, TREG cells and DCs have been recognized as potential therapeutic targets in MS. This Review compiles the current knowledge on the role and function of various subsets of TREG cells in MS and experimental autoimmune encephalomyelitis. We also highlight the role of tolerogenic DCs and their bidirectional interaction with TREG cells during CNS autoimmunity.",
"title": ""
},
{
"docid": "23ffdf5e7797e7f01c6d57f1e5546026",
"text": "Classroom experiments that evaluate the effectiveness of educational technologies do not typically examine the effects of classroom contextual variables (e.g., out-of-software help-giving and external distractions). Yet these variables may influence students' instructional outcomes. In this paper, we introduce the Spatial Classroom Log Explorer (SPACLE): a prototype tool that facilitates the rapid discovery of relationships between within-software and out-of-software events. Unlike previous tools for retrospective analysis, SPACLE replays moment-by-moment analytics about student and teacher behaviors in their original spatial context. We present a data analysis workflow using SPACLE and demonstrate how this workflow can support causal discovery. We share the results of our initial replay analyses using SPACLE, which highlight the importance of considering spatial factors in the classroom when analyzing ITS log data. We also present the results of an investigation into the effects of student-teacher interactions on student learning in K-12 blended classrooms, using our workflow, which combines replay analysis with SPACLE and causal modeling. Our findings suggest that students' awareness of being monitored by their teachers may promote learning, and that \"gaming the system\" behaviors may extend outside of educational software use.",
"title": ""
},
{
"docid": "b2f66e8508978c392045b5f9e99362a1",
"text": "In this paper we have proposed a linguistically informed recursive neural network architecture for automatic extraction of cause-effect relations from text. These relations can be expressed in arbitrarily complex ways. The architecture uses word level embeddings and other linguistic features to detect causal events and their effects mentioned within a sentence. The extracted events and their relations are used to build a causal-graph after clustering and appropriate generalization, which is then used for predictive purposes. We have evaluated the performance of the proposed extraction model with respect to two baseline systems,one a rule-based classifier, and the other a conditional random field (CRF) based supervised model. We have also compared our results with related work reported in the past by other authors on SEMEVAL data set, and found that the proposed bidirectional LSTM model enhanced with an additional linguistic layer performs better. We have also worked extensively on creating new annotated datasets from publicly available data, which we are willing to share with the community.",
"title": ""
},
{
"docid": "a9346f8d40a8328e963774f2604da874",
"text": "Abstract-Sign language is a lingua among the speech and the hearing impaired community. It is hard for most people who are not familiar with sign language to communicate without an interpreter. Sign language recognition appertains to track and recognize the meaningful emotion of human made with fingers, hands, head, arms, face etc. The technique that has been proposed in this work, transcribes the gestures from a sign language to a spoken language which is easily understood by the hearing. The gestures that have been translated include alphabets, words from static images. This becomes more important for the people who completely rely on the gestural sign language for communication tries to communicate with a person who does not understand the sign language. We aim at representing features which will be learned by a technique known as convolutional neural networks (CNN), contains four types of layers: convolution layers, pooling/subsampling layers, nonlinear layers, and fully connected layers. The new representation is expected to capture various image features and complex non-linear feature interactions. A softmax layer will be used to recognize signs. Keywords-Convolutional Neural Networks, Softmax (key words) __________________________________________________*****_________________________________________________",
"title": ""
},
{
"docid": "881a0d8022142dc6200777835da2d323",
"text": "Muslim-majority countries do not use formal financial services (Honohon 2007).1 Even when financial services are available, some people view conventional products as incompatible with the financial principles set forth in Islamic law. In recent years, some microfinance institutions (MFIs) have stepped in to service low-income Muslim clients who demand products consistent with Islamic financial principles—leading to the emergence of Islamic microfinance as a new market niche.",
"title": ""
},
{
"docid": "f6cb93fe2e51bdfb82199a138c225c54",
"text": "Puberty suppression using gonadotropin-releasing-hormone analogues (GnRHa) has become increasingly accepted as an intervention during the early stages of puberty (Tanner stage 2–3) in individuals with clear signs of childhood-onset gender dysphoria. However, lowering the age threshold for using medical intervention for children with gender dysphoria is still a matter of contention, and is more controversial than treating the condition in adolescents and adults, as children with gender dysphoria are more likely to express an unstable pattern of gender variance. Furthermore, concerns have been expressed regarding the risks of puberty suppression, which are poorly understood, and the child's ability to make decisions and provide informed consent. However, even if the limited data available mean that it is not possible to make a conclusive treatment recommendation, some safety criteria for puberty suppression can be identified and applied.",
"title": ""
},
{
"docid": "ee947daebb5e560570edb1f3ad553b6e",
"text": "We consider the problem of embedding entities and relations of knowledge bases into low-dimensional continuous vector spaces (distributed representations). Unlike most existing approaches, which are primarily efficient for modelling pairwise relations between entities, we attempt to explicitly model both pairwise relations and long-range interactions between entities, by interpreting them as linear operators on the low-dimensional embeddings of the entities. Therefore, in this paper we introduces path ranking to capture the long-range interactions of knowledge graph and at the same time preserve the pairwise relations of knowledge graph; we call it structured embedding via pairwise relation and longrange interactions (referred to as SePLi). Comparing with the-state-of-the-art models, SePLi achieves better performances of embeddings.",
"title": ""
},
{
"docid": "a8a51268e3e4dc3b8dd5102dafcb8f36",
"text": "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node’s local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.",
"title": ""
},
{
"docid": "3613ae9cfcadee0053a270fe73c6e069",
"text": "Depth-map merging approaches have become more and more popular in multi-view stereo (MVS) because of their flexibility and superior performance. The quality of depth map used for merging is vital for accurate 3D reconstruction. While traditional depth map estimation has been performed in a discrete manner, we suggest the use of a continuous counterpart. In this paper, we first integrate silhouette information and epipolar constraint into the variational method for continuous depth map estimation. Then, several depth candidates are generated based on a multiple starting scales (MSS) framework. From these candidates, refined depth maps for each view are synthesized according to path-based NCC (normalized cross correlation) metric. Finally, the multiview depth maps are merged to produce 3D models. Our algorithm excels at detail capture and produces one of the most accurate results among the current algorithms for sparse MVS datasets according to the Middlebury benchmark. Additionally, our approach shows its outstanding robustness and accuracy in free-viewpoint video scenario.",
"title": ""
},
{
"docid": "3738d3c5d5bf4a3de55aa638adac07bb",
"text": "The term malware stands for malicious software. It is a program installed on a system without the knowledge of owner of the system. It is basically installed by the third party with the intention to steal some private data from the system or simply just to play pranks. This in turn threatens the computer’s security, wherein computer are used by one’s in day-to-day life as to deal with various necessities like education, communication, hospitals, banking, entertainment etc. Different traditional techniques are used to detect and defend these malwares like Antivirus Scanner (AVS), firewalls, etc. But today malware writers are one step forward towards then Malware detectors. Day-by-day they write new malwares, which become a great challenge for malware detectors. This paper focuses on basis study of malwares and various detection techniques which can be used to detect malwares.",
"title": ""
},
{
"docid": "49791684a7a455acc9daa2ca69811e74",
"text": "This paper analyzes the basic method of digital video image processing, studies the vehicle license plate recognition system based on image processing in intelligent transport system, presents a character recognition approach based on neural network perceptron to solve the vehicle license plate recognition in real-time traffic flow. Experimental results show that the approach can achieve better positioning effect, has a certain robustness and timeliness.",
"title": ""
},
{
"docid": "dc8712a71084b01c6ce1cc5fa4618d76",
"text": "Compensation is crucial for improving performance of inductive-power-transfer (IPT) converters. With proper compensation at some specific frequencies, an IPT converter can achieve load-independent constant output voltage or current, near zero reactive power, and soft switching of power switches simultaneously, resulting in simplified control circuitry, reduced component ratings, and improved power conversion efficiency. However, constant output voltage or current depends significantly on parameters of the transformer, which is often space constrained, making the converter design hard to optimize. To free the design from the constraints imposed by the transformer parameters, this paper proposes a family of higher order compensation circuits for IPT converters that achieves any desired constant-voltage or constant-current (CC) output with near zero reactive power and soft switching. Detailed derivation of the compensation method is given for the desired transfer function not constrained by transformer parameters. Prototypes of CC IPT configurations based on a single transformer are constructed to verify the analysis with three different output specifications.",
"title": ""
},
{
"docid": "bfd9b9c07b14acd064b2242b48e37ce2",
"text": "We propose a fully unsupervised framework for ad-hoc cross-lingual information retrieval (CLIR) which requires no bilingual data at all. The framework leverages shared cross-lingual word embedding spaces in which terms, queries, and documents can be represented, irrespective of their actual language. The shared embedding spaces are induced solely on the basis of monolingual corpora in two languages through an iterative process based on adversarial neural networks. Our experiments on the standard CLEF CLIR collections for three language pairs of varying degrees of language similarity (English-Dutch/Italian/Finnish) demonstrate the usefulness of the proposed fully unsupervised approach. Our CLIR models with unsupervised cross-lingual embeddings outperform baselines that utilize cross-lingual embeddings induced relying on word-level and document-level alignments. We then demonstrate that further improvements can be achieved by unsupervised ensemble CLIR models. We believe that the proposed framework is the first step towards development of effective CLIR models for language pairs and domains where parallel data are scarce or non-existent.",
"title": ""
},
{
"docid": "74136e5c4090cc990f62c399781c9bb3",
"text": "This paper compares statistical techniques for text classification using Naïve Bayes and Support Vector Machines, in context of Urdu language. A large corpus is used for training and testing purpose of the classifiers. However, those classifiers cannot directly interpret the raw dataset, so language specific preprocessing techniques are applied on it to generate a standardized and reduced-feature lexicon. Urdu language is morphological rich language which makes those tasks complex. Statistical characteristics of corpus and lexicon are measured which show satisfactory results of text preprocessing module. The empirical results show that Support Vector Machines outperform Naïve Bayes classifier in terms of classification accuracy.",
"title": ""
},
{
"docid": "f99316b4346666cc0ac45058f1d4e410",
"text": "Penetration testing is the process of detecting computer vulnerabilities and gaining access and data on targeted computer systems with goal to detect vulnerabilities and security issues and proactively protect system. In this paper we presented case of internal penetration test which helped to proactively prevent potential weaknesses of targeted system with inherited vulnerabilities which is Bring Your Own Device (BYOD). Many organizations suffer great losses due to risk materialization because of missing implementing standards for information security that includes patching, change management, active monitoring and penetration testing, with goal of better dealing with security vulnerabilities. With BYOD policy in place companies taking greater risk appetite allowing mobile device to be used on corporate networks. In this paper we described how we used network hacking techniques for penetration testing for the right cause which is to prevent potential misuse of computer vulnerabilities. This paper shows how different techniques and tools can be jointly used in step by step process to successfully perform penetration testing analysis and reporting.",
"title": ""
},
{
"docid": "41c99f4746fc299ae886b6274f899c4b",
"text": "The disruptive power of blockchain technologies represents a great opportunity to re-imagine standard practices of providing radio access services by addressing critical areas such as deployment models that can benefit from brand new approaches. As a starting point for this debate, we look at the current limits of infrastructure sharing, and specifically at the Small-Cell-as-a-Service trend, asking ourselves how we could push it to its natural extreme: a scenario in which any individual home or business user can become a service provider for mobile network operators (MNOs), freed from all the scalability and legal constraints that are inherent to the current modus operandi. We propose the adoption of smart contracts to implement simple but effective Service Level Agreements (SLAs) between small cell providers and MNOs, and present an example contract template based on the Ethereum blockchain.",
"title": ""
},
{
"docid": "b0be609048c8497f69991c7acc76dc9c",
"text": "We propose a novel recurrent neural network-based approach to simultaneously handle nested named entity recognition and nested entity mention detection. The model learns a hypergraph representation for nested entities using features extracted from a recurrent neural network. In evaluations on three standard data sets, we show that our approach significantly outperforms existing state-of-the-art methods, which are feature-based. The approach is also efficient: it operates linearly in the number of tokens and the number of possible output labels at any token. Finally, we present an extension of our model that jointly learns the head of each entity mention.",
"title": ""
}
] |
scidocsrr
|
ccdb013a61afbd20ef65bdbc5d5d350a
|
UWB CPW-Fed Fractal Patch Antenna With Band-Notched Function Employing Folded T-Shaped Element
|
[
{
"docid": "99d5eab7b0dfcb59f7111614714ddf95",
"text": "To prevent interference problems due to existing nearby communication systems within an ultrawideband (UWB) operating frequency, the significance of an efficient band-notched design is increased. Here, the band-notches are realized by adding independent controllable strips in terms of the notch frequency and the width of the band-notches to the fork shape of the UWB antenna. The size of the flat type band-notched UWB antenna is etched on 24 times 36 mm2 substrate. Two novel antennas are presented. One antenna is designed for single band-notch with a separated strip to cover the 5.15-5.825 GHz band. The second antenna is designed for dual band-notches using two separated strips to cover the 5.15-5.35 GHz band and 5.725-5.825 GHz band. The simulation and measurement show that the proposed antenna achieves a wide bandwidth from 3 to 12 GHz with the dual band-notches successfully.",
"title": ""
}
] |
[
{
"docid": "08656091b5e8c32080779bf9c7f46e69",
"text": "The National Telecommunications and Information Administration (NTIA) General Model for estimating video quality and its associated calibration techniques were independently evaluated by the Video Quality Experts Group (VQEG) in their Phase II Full Reference Television (FR-TV) test. The NTIA General Model was the only video quality estimator that was in the top performing group for both the 525-line and 625-line video tests. As a result, the American National Standards Institute (ANSI) adopted the NTIA General Model and its associated calibration techniques as a North American Standard in 2003. The International Telecommunication Union (ITU) has also included the NTIA General Model as a normative method in two Draft Recommendations. This paper presents a description of the NTIA General Model and its associated calibration techniques. The independent test results from the VQEG FR-TV Phase II tests are summarized, as well as results from eleven other subjective data sets that were used to develop the method.",
"title": ""
},
{
"docid": "4caaa5bf0ffbbf5c361680fbc4ad7d99",
"text": "In this paper we present a pipeline for automatic detection of traffic signs in images. The proposed system can deal with high appearance variations, which typically occur in traffic sign recognition applications, especially with strong illumination changes and dramatic scale changes. Unlike most existing systems, our pipeline is based on interest regions extraction rather than a sliding window detection scheme. The proposed approach has been specialized and tested in three variants, each aimed at detecting one of the three categories of Mandatory, Prohibitory and Danger traffic signs. Our proposal has been evaluated experimentally within the German Traffic Sign Detection Benchmark competition.",
"title": ""
},
{
"docid": "380380bd46d854febd0bf12e50ec540b",
"text": "STUDY DESIGN\nExperimental laboratory study.\n\n\nOBJECTIVES\nTo quantify and compare electromyographic signal amplitude of the gluteus maximus and gluteus medius muscles during exercises of varying difficulty to determine which exercise most effectively recruits these muscles.\n\n\nBACKGROUND\nGluteal muscle weakness has been proposed to be associated with lower extremity injury. Exercises to strengthen the gluteal muscles are frequently used in rehabilitation and injury prevention programs without scientific evidence regarding their ability to activate the targeted muscles.\n\n\nMETHODS\nSurface electromyography was used to quantify the activity level of the gluteal muscles in 21 healthy, physically active subjects while performing 12 exercises. Repeated-measures analyses of variance were used to compare normalized mean signal amplitude levels, expressed as a percent of a maximum voluntary isometric contraction (MVIC), across exercises.\n\n\nRESULTS\nSignificant differences in signal amplitude among exercises were noted for the gluteus medius (F5,90 = 7.9, P<.0001) and gluteus maximus (F5,95 = 8.1, P<.0001). Gluteus medius activity was significantly greater during side-lying hip abduction (mean +/- SD, 81% +/- 42% MVIC) compared to the 2 types of hip clam (40% +/- 38% MVIC, 38% +/- 29% MVIC), lunges (48% +/- 21% MVIC), and hop (48% +/- 25% MVIC) exercises. The single-limb squat and single-limb deadlift activated the gluteus medius (single-limb squat, 64% +/- 25% MVIC; single-limb deadlift, 59% +/- 25% MVIC) and maximus (single-limb squat, 59% +/- 27% MVIC; single-limb deadlift, 59% +/- 28% MVIC) similarly. The gluteus maximus activation during the single-limb squat and single-limb deadlift was significantly greater than during the lateral band walk (27% +/- 16% MVIC), hip clam (34% +/- 27% MVIC), and hop (forward, 35% +/- 22% MVIC; transverse, 35% +/- 16% MVIC) exercises.\n\n\nCONCLUSION\nThe best exercise for the gluteus medius was side-lying hip abduction, while the single-limb squat and single-limb deadlift exercises led to the greatest activation of the gluteus maximus. These results provide information to the clinician about relative activation of the gluteal muscles during specific therapeutic exercises that can influence exercise progression and prescription. J Orthop Sports Phys Ther 2009;39(7):532-540, Epub 24 February 2009. doi:10.2519/jospt.2009.2796.",
"title": ""
},
{
"docid": "76dc2077f52886ef7c16a9dd28084e6b",
"text": "On the Internet, electronic tribes structured around consumer interests have been growing rapidly. To be effective in this new environment, managers must consider the strategic implications of the existence of different types of both virtual community and community participation. Contrasted with database-driven relationship marketing, marketers seeking success with consumers in virtual communities should consider that they: (1) are more active and discerning; (2) are less accessible to one-on-one processes, and (3) provide a wealth of valuable cultural information. Strategies for effectively targeting more desirable types of virtual communities and types of community members include: interaction-based segmentation, fragmentation-based segmentation, co-opting communities, paying-for-attention, and building networks by giving product away. Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.",
"title": ""
},
{
"docid": "c1d5df0e2058e3f191a8227fca51a2fb",
"text": "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.",
"title": ""
},
{
"docid": "f49090ba1157dcdc5666a58043452ea4",
"text": "A large number of algorithms have been developed to perform non-rigid registration and it is a tool commonly used in medical image analysis. The free-form deformation algorithm is a well-established technique, but is extremely time consuming. In this paper we present a parallel-friendly formulation of the algorithm suitable for graphics processing unit execution. Using our approach we perform registration of T1-weighted MR images in less than 1 min and show the same level of accuracy as a classical serial implementation when performing segmentation propagation. This technology could be of significant utility in time-critical applications such as image-guided interventions, or in the processing of large data sets.",
"title": ""
},
{
"docid": "5c954622071b23cf53c9c8cfcb65d7c0",
"text": "With the increasing popularity of the Semantic Web, more and more data becomes available in RDF with SPARQL as a query language. Data sets, however, can become too big to be managed and queried on a single server in a scalable way. Existing distributed RDF stores approach this problem using data partitioning, aiming at limiting the communication between servers and exploiting parallelism. This paper proposes a distributed SPARQL engine that combines a graph partitioning technique with workload-aware replication of triples across partitions, enabling efficient query execution even for complex queries from the workload. Furthermore, it discusses query optimization techniques for producing efficient execution plans for ad-hoc queries not contained in the workload.",
"title": ""
},
{
"docid": "e7a2c229aa2cec00026d2cb39b5416fe",
"text": "The focus of past machine learning research for Reading Comprehension tasks has been primarily on the design of novel deep learning architectures. Here we show that seemingly minor choices made on (1) the use of pre-trained word embeddings, and (2) the representation of outof-vocabulary tokens at test time, can turn out to have a larger impact than architectural choices on the final performance. We systematically explore several options for these choices, and provide recommendations to researchers working in this area.",
"title": ""
},
{
"docid": "d229250779ccef56e0e61cfacdf6f199",
"text": "Much of the research in facility layout has focused on static layouts where the material handling flow is assumed to be constant during the planning horizon. But in today’s market-based, dynamic environment, layout rearrangement may be required during the planning horizon to maintain layout effectiveness. A few algorithms have been proposed to solve this problem. They include dynamic programming and pair-wise exchange. In this paper we propose an improved dynamic pair-wise exchange heuristic based on a previous method published in this journal. Tests show that the proposed method is effective and efficient.",
"title": ""
},
{
"docid": "08dbd88adb399721e0f5ee91534c9888",
"text": "Many theories of attention have proposed that visual working memory plays an important role in visual search tasks. The present study examined the involvement of visual working memory in search using a dual-task paradigm in which participants performed a visual search task while maintaining no, two, or four objects in visual working memory. The presence of a working memory load added a constant delay to the visual search reaction times, irrespective of the number of items in the visual search array. That is, there was no change in the slope of the function relating reaction time to the number of items in the search array, indicating that the search process itself was not slowed by the memory load. Moreover, the search task did not substantially impair the maintenance of information in visual working memory. These results suggest that visual search requires minimal visual working memory resources, a conclusion that is inconsistent with theories that propose a close link between attention and working memory.",
"title": ""
},
{
"docid": "21d9828d0851b4ded34e13f8552f3e24",
"text": "Light field cameras have been recently shown to be very effective in applications such as digital refocusing and 3D reconstruction. In a single snapshot these cameras provide a sample of the light field of a scene by trading off spatial resolution with angular resolution. Current methods produce images at a resolution that is much lower than that of traditional imaging devices. However, by explicitly modeling the image formation process and incorporating priors such as Lambertianity and texture statistics, these types of images can be reconstructed at a higher resolution. We formulate this method in a variational Bayesian framework and perform the reconstruction of both the surface of the scene and the (superresolved) light field. The method is demonstrated on both synthetic and real images captured with our light-field camera prototype.",
"title": ""
},
{
"docid": "408e6637ed99299bb0067eae216a64fc",
"text": "The aim of this article was to describe and analyze the doctor-patient relationship between fibromyalgia patients and rheumatologists in public and private health care contexts within the Mexican health care system. This medical anthropological study drew on hospital ethnography and patients' illness narratives, as well as the experiences of rheumatologists from both types of health care services. The findings show how each type of medical care subsystem shape different relationships between patients and doctors. Patient stigmatization, overt rejection, and denial of the disease's existence were identified. In this doctor-patient-with-fibromyalgia relationship, there are difficult encounters, rather than difficult patients. These encounters are more fluid in private consultations compared with public hospitals. The doctor-centered health care model is prevalent in public institutions. In the private sector, we find the characteristics of the patient-centered model coexisting with the traditional physician-centered approach.",
"title": ""
},
{
"docid": "ddae0422527c45e37f9a5b204cb0580f",
"text": "Several studies have reported high efficacy and safety of artemisinin-based combination therapy (ACT) mostly under strict supervision of drug intake and limited to children less than 5 years of age. Patients over 5 years of age are usually not involved in such studies. Thus, the findings do not fully reflect the reality in the field. This study aimed to assess the effectiveness and safety of ACT in routine treatment of uncomplicated malaria among patients of all age groups in Nanoro, Burkina Faso. A randomized open label trial comparing artesunate–amodiaquine (ASAQ) and artemether–lumefantrine (AL) was carried out from September 2010 to October 2012 at two primary health centres (Nanoro and Nazoanga) of Nanoro health district. A total of 680 patients were randomized to receive either ASAQ or AL without any distinction by age. Drug intake was not supervised as pertains in routine practice in the field. Patients or their parents/guardians were advised on the time and mode of administration for the 3 days treatment unobserved at home. Follow-up visits were performed on days 3, 7, 14, 21, and 28 to evaluate clinical and parasitological resolution of their malaria episode as well as adverse events. PCR genotyping of merozoite surface proteins 1 and 2 (msp-1, msp-2) was used to differentiate recrudescence and new infection. By day 28, the PCR corrected adequate clinical and parasitological response was 84.1 and 77.8 % respectively for ASAQ and AL. The cure rate was higher in older patients than in children under 5 years old. The risk of re-infection by day 28 was higher in AL treated patients compared with those receiving ASAQ (p < 0.00001). Both AL and ASAQ treatments were well tolerated. This study shows a lowering of the efficacy when drug intake is not directly supervised. This is worrying as both rates are lower than the critical threshold of 90 % required by the WHO to recommend the use of an anti-malarial drug in a treatment policy. Trial registration: NCT01232530",
"title": ""
},
{
"docid": "4cbd227336c5873f74c40846b8fe7711",
"text": "It has become common for MPI-based applications to run on shared-memory machines. However, MPI semantics do not allow leveraging shared memory fully for communication between processes from within the MPI library. This paper presents an approach that combines compiler transformations with a specialized runtime system to achieve zero-copy communication whenever possible by proving certain properties statically and globalizing data selectively by altering the allocation and deallocation of communication buffers. The runtime system provides dynamic optimization, when such proofs are not possible statically, by copying data only when there are write-write or read-write conflicts. We implemented a prototype compiler, using ROSE, and evaluated it on several benchmarks. Our system produces code that performs better than MPI in most cases and no worse than MPI, tuned for shared memory, in all cases.",
"title": ""
},
{
"docid": "853e1e7bc1585bf8cd87a3aeb3797f24",
"text": "Violent video game playing is correlated with aggression, but its relation to antisocial behavior in correctional and juvenile justice samples is largely unknown. Based on a data from a sample of institutionalized juvenile delinquents, behavioral and attitudinal measures relating to violent video game playing were associated with a composite measure of delinquency and a more specific measure of violent delinquency after controlling for the effects of screen time, years playing video games, age, sex, race, delinquency history, and psychopathic personality traits. Violent video games are associated with antisociality even in a clinical sample, and these effects withstand the robust influences of multiple correlates of juvenile delinquency and youth violence most notably psychopathy.",
"title": ""
},
{
"docid": "b4ab51818d868b2f9796540c71a7bd17",
"text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"title": ""
},
{
"docid": "792907ad8871e63f6b39d344452ca66a",
"text": "This paper presents the design of a hardware-efficient, low-power image processing system for next-generation wireless endoscopy. The presented system is composed of a custom CMOS image sensor, a dedicated image compressor, a forward error correction (FEC) encoder protecting radio transmitted data against random and burst errors, a radio data transmitter, and a controller supervising all operations of the system. The most significant part of the system is the image compressor. It is based on an integer version of a discrete cosine transform and a novel, low complexity yet efficient, entropy encoder making use of an adaptive Golomb-Rice algorithm instead of Huffman tables. The novel hardware-efficient architecture designed for the presented system enables on-the-fly compression of the acquired image. Instant compression, together with elimination of the necessity of retransmitting erroneously received data by their prior FEC encoding, significantly reduces the size of the required memory in comparison to previous systems. The presented system was prototyped in a single, low-power, 65-nm field programmable gate arrays (FPGA) chip. Its power consumption is low and comparable to other application-specific-integrated-circuits-based systems, despite FPGA-based implementation.",
"title": ""
},
{
"docid": "156c62aac106229928ba323cfb9bd53f",
"text": "The Internet is becoming increasingly influential, but some observers have noted that heavy Internet users seem alienated from normal social contacts and may even cut these off as the Internet becomes the predominate social factor in their lives. Kraut, Patterson, Lundmark, Kiesler, Mukopadhyay, and Scherlis [American Psychologist 53 (1998) 65] carried out a longitudinal study from which they concluded that Internet use leads to loneliness among its users. However, their study did not take into account that the population of Internet users is not uniform and comprises many different personality types. People use the Internet in a variety of ways in keeping with their own personal preference. Therefore, the results of this interaction between personality and Internet use are likely to vary among different individuals and similarly the impact on user well-being will not be uniform. One of the personality characteristics that has been found to influence Internet use is that of extroversion and neuroticism [Hamburger & Ben-Artzi, Computers in Human Behavior 16 (2000) 441]. For this study, 89 participants completed questionnaires pertaining to their own Internet use and feelings of loneliness and extroversion and neuroticism. The results were compared to two models (a) the Kraut et al. (1998) model which argues that Internet use leads to loneliness (b) an alternative model which argues that it is those people who are already lonely who spend time on the Internet. A satisfactory goodness of fit was found for the alternative model. Building on these results, several different directions are suggested for continuing research in this field. # 2002 Published by Elsevier Science Ltd.",
"title": ""
},
{
"docid": "27c47b97f67dae335b3bc1a09ad78778",
"text": "State-of-charge (SOC) determination is an increasingly important issue in battery technology. In addition to the immediate display of the remaining battery capacity to the user, precise knowledge of SOC exerts additional control over the charging/discharging process, which can be employed to increase battery life. This reduces the risk of overvoltage and gassing, which degrade the chemical composition of the electrolyte and plates. The proposed model in this paper determines the SOC by incorporating the changes occurring due to terminal voltage, current load, and internal resistance, which mitigate the disadvantages of using impedance only. Electromotive force (EMF) voltage is predicted while the battery is under load conditions; from the estimated EMF voltage, the SOC is then determined. The method divides the battery voltage curve into two regions: 1) the linear region for full to partial SOC and 2) the hyperbolic region from partial to low SOC. Algorithms are developed to correspond to the different characteristic changes occurring within each region. In the hyperbolic region, the rate of change in impedance and terminal voltage is greater than that in the linear region. The magnitude of current discharge causes varying rates of change to the terminal voltage and impedance. Experimental tests and results are presented to validate the new models.",
"title": ""
}
] |
scidocsrr
|
a3e91f85e91dfef0530a43a5b7b10a44
|
Learning to Select Knowledge for Response Generation in Dialog Systems
|
[
{
"docid": "36c26d1be5d9ef1ffaf457246bbc3c90",
"text": "In knowledge grounded conversation, domain knowledge plays an important role in a special domain such as Music. The response of knowledge grounded conversation might contain multiple answer entities or no entity at all. Although existing generative question answering (QA) systems can be applied to knowledge grounded conversation, they either have at most one entity in a response or cannot deal with out-ofvocabulary entities. We propose a fully data-driven generative dialogue system GenDS that is capable of generating responses based on input message and related knowledge base (KB). To generate arbitrary number of answer entities even when these entities never appear in the training set, we design a dynamic knowledge enquirer which selects different answer entities at different positions in a single response, according to different local context. It does not rely on the representations of entities, enabling our model deal with out-ofvocabulary entities. We collect a human-human conversation data (ConversMusic) with knowledge annotations. The proposed method is evaluated on CoversMusic and a public question answering dataset. Our proposed GenDS system outperforms baseline methods significantly in terms of the BLEU, entity accuracy, entity recall and human evaluation. Moreover,the experiments also demonstrate that GenDS works better even on small datasets.",
"title": ""
},
{
"docid": "cffe9e1a98238998c174e93c73785576",
"text": "๏ The experimental results show that the proposed model effectively generate more diverse and meaningful responses involving more accurate relevant entities compared with the state-of-the-art baselines. We collect a multi-turn conversation corpus which includes not only facts related inquiries but also knowledge-based chit-chats. The data is publicly available at https:// github.com/liushuman/neural-knowledge-diffusion. We obtain the element information of each movie from https://movie.douban.com/ and build the knowledge base K. The question-answering dialogues and knowledge related chit-chat are crawled from https://zhidao.baidu.com/ and https://www.douban.com/group/. The conversations are grounded on the knowledge using NER, string match, and artificial scoring and filtering rules. The total 32977 conversations consisting of 104567 utterances are divided into training (32177) and testing set (800). Overview",
"title": ""
}
] |
[
{
"docid": "ba67c3006c6167550bce500a144e63f1",
"text": "This paper provides an overview of different methods for evaluating automatic summarization systems. The challenges in evaluating summaries are characterized. Both intrinsic and extrinsic approaches are discussed. Methods for assessing informativeness and coherence are described. The advantages and disadvantages of specific methods are assessed, along with criteria for choosing among them. The paper concludes with some suggestions for future directions.",
"title": ""
},
{
"docid": "f9076f4dbc5789e89ed758d0ad2c6f18",
"text": "This paper presents an innovative manner of obtaining discriminative texture signatures by using the LBP approach to extract additional sources of information from an input image and by using fractal dimension to calculate features from these sources. Four strategies, called Min, Max, Diff Min and Diff Max , were tested, and the best success rates were obtained when all of them were employed together, resulting in an accuracy of 99.25%, 72.50% and 86.52% for the Brodatz, UIUC and USPTex databases, respectively, using Linear Discriminant Analysis. These results surpassed all the compared methods in almost all the tests and, therefore, confirm that the proposed approach is an effective tool for texture analysis. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b0c86f449987ffe8a1dc3dfc39c66f73",
"text": "Smartphones are an ideal platform for local multiplayer games, thanks to their computational and networking capabilities as well as their popularity and portability. However, existing game engines do not exploit the locality of players to improve game latency. In this paper, we propose MicroPlay, a complete networking framework for local multiplayer mobile games. To the best of our knowledge, this is the first framework that exploits local connections between smartphones, and in particular, the broadcast nature of the wireless medium, to provide smooth, accurate rendering of all players with two desired properties. First, it performs direct-input rendering (i.e., without any inter- or extrapolation of game state) for all players; second, it provides very low game latency. We implement a MicroPlay prototype on Android phones, as well as an example multiplayer car racing game, called Racer, in order to demonstrate MicroPlay's capabilities. Our experiments show that cars can be rendered smoothly, without any prediction of state, and with only 20-30 ms game latency.",
"title": ""
},
{
"docid": "06dfc5bb4df3be7f9406be818efe28e7",
"text": "People often make decisions in health care that are not in their best interest, ranging from failing to enroll in health insurance to which they are entitled, to engaging in extremely harmful behaviors. Traditional economic theory provides a limited tool kit for improving behavior because it assumes that people make decisions in a rational way, have the mental capacity to deal with huge amounts of information and choice, and have tastes endemic to them and not open to manipulation. Melding economics with psychology, behavioral economics acknowledges that people often do not act rationally in the economic sense. It therefore offers a potentially richer set of tools than provided by traditional economic theory to understand and influence behaviors. Only recently, however, has it been applied to health care. This article provides an overview of behavioral economics, reviews some of its contributions, and shows how it can be used in health care to improve people's decisions and health.",
"title": ""
},
{
"docid": "3849284adb68f41831434afbf23be9ed",
"text": "Automatic estrus detection techniques in dairy cows have been present by different traits. Pedometers and accelerators are the most common sensor equipment. Most of the detection methods are associated with the supervised classification technique, which the training set becomes a crucial reference. The training set obtained by visual observation is subjective and time consuming. Another limitation of this approach is that it usually does not consider the factors affecting successful alerts, such as the discriminative figure, activity type of cows, the location and direction of the sensor node placed on the neck collar of a cow. This paper presents a novel estrus detection method that uses k-means clustering algorithm to create the training set online for each cow. And the training set is finally used to build an activity classification model by SVM. The activity index counted by the classification results in each sampling period can measure cow’s activity variation for assessing the onset of estrus. The experimental results indicate that the peak of estrus time are higher than that of non-estrus time at least twice in the activity index curve, and it can enhance the sensitivity and significantly reduce the error rate.",
"title": ""
},
{
"docid": "0f6183057c6b61cefe90e4fa048ab47f",
"text": "This paper investigates the use of Deep Bidirectional Long Short-Term Memory based Recurrent Neural Networks (DBLSTM-RNNs) for voice conversion. Temporal correlations across speech frames are not directly modeled in frame-based methods using conventional Deep Neural Networks (DNNs), which results in a limited quality of the converted speech. To improve the naturalness and continuity of the speech output in voice conversion, we propose a sequence-based conversion method using DBLSTM-RNNs to model not only the frame-wised relationship between the source and the target voice, but also the long-range context-dependencies in the acoustic trajectory. Experiments show that DBLSTM-RNNs outperform DNNs where Mean Opinion Scores are 3.2 and 2.3 respectively. Also, DBLSTM-RNNs without dynamic features have better performance than DNNs with dynamic features.",
"title": ""
},
{
"docid": "547423c409d466bcb537a7b0ae0e1758",
"text": "Sequential Bayesian estimation fornonlinear dynamic state-space models involves recursive estimation of filtering and predictive distributions of unobserved time varying signals based on noisy observations. This paper introduces a new filter called the Gaussian particle filter1. It is based on the particle filtering concept, and it approximates the posterior distributions by single Gaussians, similar to Gaussian filters like the extended Kalman filter and its variants. It is shown that under the Gaussianity assumption, the Gaussian particle filter is asymptotically optimal in the number of particles and, hence, has much-improved performance and versatility over other Gaussian filters, especially when nontrivial nonlinearities are present. Simulation results are presented to demonstrate the versatility and improved performance of the Gaussian particle filter over conventional Gaussian filters and the lower complexity than known particle filters. The use of the Gaussian particle filter as a building block of more complex filters is addressed in a companion paper.",
"title": ""
},
{
"docid": "3d007291b5ca2220c15e6eee72b94a76",
"text": "While the number of knowledge bases in the Semantic Web increases, the maintenance and creation of ontology schemata still remain a challenge. In particular creating class expressions constitutes one of the more demanding aspects of ontology engineering. In this article we describe how to adapt a semi-automatic method for learning OWL class expressions to the ontology engineering use case. Specifically, we describe how to extend an existing learning algorithm for the class learning problem. We perform rigorous performance optimization of the underlying algorithms for providing instant suggestions to the user. We also present two plugins, which use the algorithm, for the popular Protégé and OntoWiki ontology editors and provide a preliminary evaluation on real ontologies.",
"title": ""
},
{
"docid": "a53f798d24bb8bd7dc49d96439eefd28",
"text": "In recent times, the worldwide price of fuel is showing an upward surge. One of the major factors leading to this can be attributed to the exponential increase in demand. In a country like Canada, where a majority of the people own vehicles, and more being added to the roads, this demand for fuel is surely going to increase in the future and will also be severely damaging to the environment as transportation sector alone is responsible for a larger share of pollutants emitted into the atmosphere. Electric vehicles offer one way to reduce the level of emissions. Electric motor drives are an integral component of an electric vehicle and consist of one or more electric motors. In this paper an effort has been made to compare different characteristics of motor drives used in electric vehicles and also given is a comprehensive list of references papers published in the field of electric vehicles",
"title": ""
},
{
"docid": "c03e116de528bf16ecbec7f9bf65e87b",
"text": "Kelley's attribution theory is investigated. Subjects filled out a questionnaire that reported 16 different responses ostensibly made by other people. These responses represented four verb categories—emotions, accomplishments, opinions, and actions—and, for experimental subjects, each was accompanied by high or low consensus information, high or low distinctiveness information, and high or low consistency information. Control subjects were not given any information regarding the response. All subjects were asked to attribute each response to characteristics of the person (i.e., the actor), the stimulus, the circumstances, or to some combination of these three factors. In addition, the subjects' expectancies for future response and stimulus generalization on the part of the actor were measured. The three information variables and verb category each had a significant effect on causal attribution and on expectancy for behavioral generalization.",
"title": ""
},
{
"docid": "f672df401b24571f81648066b3181890",
"text": "We consider the general problem of modeling temporal data with long-range dependencies, wherein new observations are fully or partially predictable based on temporally-distant, past observations. A sufficiently powerful temporal model should separate predictable elements of the sequence from unpredictable elements, express uncertainty about those unpredictable elements, and rapidly identify novel elements that may help to predict the future. To create such models, we introduce Generative Temporal Models augmented with external memory systems. They are developed within the variational inference framework, which provides both a practical training methodology and methods to gain insight into the models’ operation. We show, on a range of problems with sparse, long-term temporal dependencies, that these models store information from early in a sequence, and reuse this stored information efficiently. This allows them to perform substantially better than existing models based on well-known recurrent neural networks, like LSTMs.",
"title": ""
},
{
"docid": "243c14b8ea40b697449200627a09a897",
"text": "Nowadays there is a lot of effort on the study, analysis and finding of new solutions related to high density sensor networks used as part of the IoT (Internet of Things) concept. LoRa (Long Range) is a modulation technique that enables the long-range transfer of information with a low transfer rate. This paper presents a review of the challenges and the obstacles of IoT concept with emphasis on the LoRa technology. A LoRaWAN network (Long Range Network Protocol) is of the Low Power Wide Area Network (LPWAN) type and encompasses battery powered devices that ensure bidirectional communication. The main contribution of the paper is the evaluation of the LoRa technology considering the requirements of IoT. In conclusion LoRa can be considered a suitable candidate in addressing the IoT challenges.",
"title": ""
},
{
"docid": "e50ba614fc997f058f8d495b59c18af5",
"text": "We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according to properties of intermediate nodes; and joins the resulting semantic relations across the edit sequence. A computational implementation of the model achieves 70% accuracy and 89% precision on the FraCaS test suite. Moreover, including this model as a component in an existing system yields significant performance gains on the Recognizing Textual Entailment challenge.",
"title": ""
},
{
"docid": "23b62c158c71905cbafa2757525d3a84",
"text": "The automotive industry is experiencing a paradigm shift towards autonomous and connected vehicles. Coupled with the increasing usage and complexity of electrical and/or electronic systems, this introduces new safety and security risks. Encouragingly, the automotive industry has relatively well-known and standardised safety risk management practices, but security risk management is still in its infancy. In order to facilitate the derivation of security requirements and security measures for automotive embedded systems, we propose a specifically tailored risk assessment framework, and we demonstrate its viability with an industry use-case. Some of the key features are alignment with existing processes for functional safety, and usability for non-security specialists.\n The framework begins with a threat analysis to identify the assets, and threats to those assets. The following risk assessment process consists of an estimation of the threat level and of the impact level. This step utilises several existing standards and methodologies, with changes where necessary. Finally, a security level is estimated which is used to formulate high-level security requirements.\n The strong alignment with existing standards and processes should make this framework well-suited for the needs in the automotive industry.",
"title": ""
},
{
"docid": "48c9877043b59f3ed69aef3cbd807de7",
"text": "This paper presents an ontology-based approach for data quality inference on streaming observation data originating from large-scale sensor networks. We evaluate this approach in the context of an existing river basin monitoring program called the Intelligent River®. Our current methods for data quality evaluation are compared with the ontology-based inference methods described in this paper. We present an architecture that incorporates semantic inference into a publish/subscribe messaging middleware, allowing data quality inference to occur on real-time data streams. Our preliminary benchmark results indicate delays of 100ms for basic data quality checks based on an existing semantic web software framework. We demonstrate how these results can be maintained under increasing sensor data traffic rates by allowing inference software agents to work in parallel. These results indicate that data quality inference using the semantic sensor network paradigm is viable solution for data intensive, large-scale sensor networks.",
"title": ""
},
{
"docid": "2ba529e0c53554d7aa856a4766d45426",
"text": "Trauma in childhood is a psychosocial, medical, and public policy problem with serious consequences for its victims and for society. Chronic interpersonal violence in children is common worldwide. Developmental traumatology, the systemic investigation of the psychiatric and psychobiological effects of chronic overwhelming stress on the developing child, provides a framework and principles when empirically examining the neurobiological effects of pediatric trauma. This article focuses on peer-reviewed literature on the neurobiological sequelae of childhood trauma in children and in adults with histories of childhood trauma.",
"title": ""
},
{
"docid": "9259d540f93e06b3772eb05ac73369f2",
"text": "A compact reconfigurable rectifying antenna (rectenna) has been proposed for 5.2- and 5.8-GHz microwave power transmission. The proposed rectenna consists of a frequency reconfigurable microstrip antenna and a frequency reconfigurable rectifying circuit. Here, the use of the odd-symmetry mode has significantly cut down the antenna size by half. By controlling the switches installed in the antenna and the rectifying circuit, the rectenna is able to switch operation between 5.2 and 5.8 GHz. Simulated conversion efficiencies of 70.5% and 69.4% are achievable at the operating frequencies of 5.2 and 5.8 GHz, respectively, when the rectenna is given with an input power of 16.5 dBm. Experiment has been conducted to verify the design idea. Due to fabrication tolerances and parametric deviation of the actual diode, the resonant frequencies of the rectenna are measured to be 4.9 and 5.9 GHz. When supplied with input powers of 16 and 15 dBm, the measured maximum conversion efficiencies of the proposed rectenna are found to be 65.2% and 64.8% at 4.9 and 5.9 GHz, respectively, which are higher than its contemporary counterparts.",
"title": ""
},
{
"docid": "0328dd3393285e315347c311bdd421e6",
"text": "Generative adversarial networks (GANs) [7] are a recent approach to train generative models of data, which have been shown to work particularly well on image data. In the current paper we introduce a new model for texture synthesis based on GAN learning. By extending the input noise distribution space from a single vector to a whole spatial tensor, we create an architecture with properties well suited to the task of texture synthesis, which we call spatial GAN (SGAN). To our knowledge, this is the first successful completely data-driven texture synthesis method based on GANs.",
"title": ""
},
{
"docid": "ab2689cd60a72529d61ff7f03f43a5bd",
"text": "In order to enhance the efficiency of radio frequency identification (RFID) and lower system computational complexity, this paper proposes three novel tag anticollision protocols for passive RFID systems. The three proposed protocols are based on a binary tree slotted ALOHA (BTSA) algorithm. In BTSA, tags are randomly assigned to slots of a frame and if some tags collide in a slot, the collided tags in the slot will be resolved by binary tree splitting while the other tags in the subsequent slots will wait. The three protocols utilize a dynamic, an adaptive, and a splitting method to adjust the frame length to a value close to the number of tags, respectively. For BTSA, the identification efficiency can achieve an optimal value only when the frame length is close to the number of tags. Therefore, the proposed protocols efficiency is close to the optimal value. The advantages of the protocols are that, they do not need the estimation of the number of tags, and their efficiency is not affected by the variance of the number of tags. Computer simulation results show that splitting BTSA's efficiency can achieve 0.425, and the other two protocols efficiencies are about 0.40. Also, the results show that the protocols efficiency curves are nearly horizontal when the number of tags increases from 20 to 4,000.",
"title": ""
},
{
"docid": "ca655b741316e8c65b6b7590833396e1",
"text": "• A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.",
"title": ""
}
] |
scidocsrr
|
acdb23963ecdf34bec38a1bdc00c1461
|
Augmented reality in the psychomotor phase of a procedural task
|
[
{
"docid": "0591acdb82c352362de74d6daef10539",
"text": "In this paper we report on our ongoing studies around the application of Augmented Reality methods to support the order picking process of logistics applications. Order picking is the gathering of goods out of a prepared range of items following some customer orders. We named the visual support of this order picking process using Head-mounted Displays “Pick-by-Vision”. This work presents the case study of bringing our previously developed Pickby-Vision system from the lab to an experimental factory hall to evaluate it under more realistic conditions. This includes the execution of two user studies. In the first one we compared our Pickby-Vision system with and without tracking to picking using a paper list to check picking performance and quality in general. In a second test we had subjects using the Pick-by-Vision system continuously for two hours to gain in-depth insight into the longer use of our system, checking user strain besides the general performance. Furthermore, we report on the general obstacles of trying to use HMD-based AR in an industrial setup and discuss our observations of user behaviour.",
"title": ""
}
] |
[
{
"docid": "e7e07b3f603b72b6f2562857762a7af8",
"text": "Coastal visits not only provide psychological benefits but can also contribute to the accumulation of rubbish. Volunteer beach cleans help address this issue, but may only have limited, local impact. Consequently, it is important to study any broader benefits associated with beach cleans. This article examines the well-being and educational value of beach cleans, as well as their impacts on individuals' behavioral intentions. We conducted an experimental study that allocated students (n = 90) to a beach cleaning, rock pooling, or walking activity. All three coastal activities were associated with positive mood and pro-environmental intentions. Beach cleaning and rock pooling were associated with higher marine awareness. The unique impacts of beach cleaning were that they were rated as most meaningful but linked to lower restorativeness ratings of the environment compared with the other activities. This research highlights the interplay between environment and activities, raising questions for future research on the complexities of person-environment interactions.",
"title": ""
},
{
"docid": "5b270044c4e352286c6a1ddc5ca667dd",
"text": "Please note that Tyndall working papers are \"work in progress\". Whilst they are commented on by Tyndall researchers, they have not been subject to a full peer review. The accuracy of this work and the conclusions reached are the responsibility of the author(s) alone and not the Tyndall Centre. Manuscript has also been submitted to a peer-reviewed journal 1 Summary Successful human societies are characterised by their adaptability, evidenced throughout human existence. However, climate change introduces a new challenge, not only because of the expected rise in temperature and sea-levels, but also due to the current context of failure to address the causes of poverty adequately. As a result, policy supporting adaptation has been cast as a necessary strategy for responding to both climate change and supporting development, making adaptation the focus of much recent scholarly and policy research. This paper addresses this new adaptation discourse, arguing that work on adaptation so far has focused on responding to the impacts of climate change, rather than sufficiently addressing the underlying factors that cause vulnerability. While there is a significant push all around for adaptation to be better placed in development planning, the paper finds this to be putting the cart before the horse. A successful adaptation process will require adequately addressing the underlying causes of vulnerability: this is the role that development has to play. This work results from research aimed at exploring the international discourse on adaptation to climate change and the meaning of adaptation to climate change in the context of development. 1. Introduction As a result of evidence that human-induced global climate change is already occurring and will continue to affect society over the coming decades, a surge in interest in impact-oriented action is discernable since the beginning of the century, in contrast to efforts centred on prevention (Burton et al., 2002). Frustration over the lack of progress and effectiveness of policy to reduce greenhouse gas emissions has contributed to this shift. Adapting to the changes has consequently emerged as a solution to address the impacts of climate change that are already evident in some regions. However, this course of action has not always been considered relevant within science and policy (Schipper, 2006a; Klein, 2003). Adaptation responds directly to the impacts of the increased concentrations of greenhouse gases in both precautionary and reactive ways, rather than through the preventative approach of limiting the source of the gases (this …",
"title": ""
},
{
"docid": "3e23069ba8a3ec3e4af942727c9273e9",
"text": "This paper describes an automated tool called Dex (difference extractor) for analyzing syntactic and semantic changes in large C-language code bases. It is applied to patches obtained from a source code repository, each of which comprises the code changes made to accomplish a particular task. Dex produces summary statistics characterizing these changes for all of the patches that are analyzed. Dex applies a graph differencing algorithm to abstract semantic graphs (ASGs) representing each version. The differences are then analyzed to identify higher-level program changes. We describe the design of Dex, its potential applications, and the results of applying it to analyze bug fixes from the Apache and GCC projects. The results include detailed information about the nature and frequency of missing condition defects in these projects.",
"title": ""
},
{
"docid": "42298e39948f51f9db208c9bb221c038",
"text": "We propose a new user-centric recommendation model, called Immersive Recommendation, that incorporates cross-platform and diverse personal digital traces into recommendations. Our contextaware topic modeling algorithm systematically profiles users’ interests based on their traces from different contexts, and our hybrid recommendation algorithm makes high-quality recommendations by fusing users’ personal profiles, item profiles, and existing ratings. Specifically, in this work we target personalized news and local event recommendations for their utility and societal importance. We evaluated the model with a large-scale offline evaluation leveraging users’ public Twitter traces. In addition, we conducted a direct evaluation of the model’s recommendations in a 33-participant study using Twitter, Facebook and email traces. In the both cases, the proposed model showed significant improvement over the stateof-the-art algorithms, suggesting the value of using this new usercentric recommendation model to improve recommendation quality, including in cold-start situations.",
"title": ""
},
{
"docid": "6d47a7579d6e833cbac403381652e140",
"text": "In response to the growing gap between memory access time and processor speed, DRAM manufacturers have created several new DRAM architectures. This paper presents a simulation-based performance study of a representative group, each evaluated in a small system organization. These small-system organizations correspond to workstation-class computers and use on the order of 10 DRAM chips. The study covers Fast Page Mode, Extended Data Out, Synchronous, Enhanced Synchronous, Synchronous Link, Rambus, and Direct Rambus designs. Our simulations reveal several things: (a) current advanced DRAM technologies are attacking the memory bandwidth problem but not the latency problem; (b) bus transmission speed will soon become a primary factor limiting memory-system performance; (c) the post-L2 address stream still contains significant locality, though it varies from application to application; and (d) as we move to wider buses, row access time becomes more prominent, making it important to investigate techniques to exploit the available locality to decrease access time.",
"title": ""
},
{
"docid": "22311cff475c45dc8d4f6204c5831a45",
"text": "We demonstrate InstantEspresso, a system to explain the relationship between two sets of entities in knowledge graphs. InstantEspresso answers questions of the form «Which European politicians are related to politicians in the United States, and how?» or «How can one summarize the relationship between China and countries from the Middle East?» Each question is specified by two sets of query entities. These sets (e. g. European politicians or United States politicians) can be determined by an initial graph query over a knowledge graph capturing relationships between real-world entities. InstantEspresso analyzes the (indirect) relationships that connect entities from both sets and provides a user-friendly explanation of the answer in the form of concise subgraphs. These so-called relatedness cores correspond to important event complexes involving entities from the two sets. Our system provides a user interface for the specification of entity sets and displays a visually appealing visualization of the extracted subgraph to the user. The demonstrated system can be used to provide background information on the current state-of-affairs between realworld entities such as politicians, organizations, and the like, e. g. to a journalist preparing an article involving the entities of interest. InstantEspresso is available for an online demonstration at the URL http://espresso.mpi-inf.mpg.de/.",
"title": ""
},
{
"docid": "1be94a76c9e0835873f8d60f36b38b17",
"text": "DevOps has been identified as an important aspect in the continuous deployment paradigm in practitioner communities and academic research circles. However, little has been presented to describe and formalize what it constitutes. The absence of such understanding means that the phenomenon will not be effectively communicated and its impact not understood in those two communities. This study investigates the elements that characterize the DevOps phenomenon using a literature survey and interviews with practitioners actively involved in the DevOps movement. Four main dimensions of DevOps are identified: collaboration, automation, measurement and monitoring. An initial conceptual framework is developed to communicate the phenomenon to practitioners and the scientific community as well as to facilitate input for future research.",
"title": ""
},
{
"docid": "f6540d23f09c8ee4b6a11187abe82112",
"text": "We propose a visual analytics approach for the exploration and analysis of dynamic networks. We consider snapshots of the network as points in high-dimensional space and project these to two dimensions for visualization and interaction using two juxtaposed views: one for showing a snapshot and one for showing the evolution of the network. With this approach users are enabled to detect stable states, recurring states, outlier topologies, and gain knowledge about the transitions between states and the network evolution in general. The components of our approach are discretization, vectorization and normalization, dimensionality reduction, and visualization and interaction, which are discussed in detail. The effectiveness of the approach is shown by applying it to artificial and real-world dynamic networks.",
"title": ""
},
{
"docid": "aa907899bf41e35082641abdda1a3e85",
"text": "This paper describes the measurement and analysis of the motion of a tennis swing. Over the past decade, people have taken a greater interest in their physical condition in an effort to avoid health problems due to aging. Exercise, especially sports, is an integral part of a healthy lifestyle. As a popular lifelong sport, tennis was selected as the subject of this study, with the focus on the correct form for playing tennis, which is difficult to learn. We used a 3D gyro sensor fixed at the waist to detect the angular velocity in the movement of the stroke and serve of expert and novice tennis players for comparison.",
"title": ""
},
{
"docid": "06c65b566b298cc893388a6f317bfcb1",
"text": "Emotion recognition from speech is one of the key steps towards emotional intelligence in advanced human-machine interaction. Identifying emotions in human speech requires learning features that are robust and discriminative across diverse domains that differ in terms of language, spontaneity of speech, recording conditions, and types of emotions. This corresponds to a learning scenario in which the joint distributions of features and labels may change substantially across domains. In this paper, we propose a deep architecture that jointly exploits a convolutional network for extracting domain-shared features and a long short-term memory network for classifying emotions using domain-specific features. We use transferable features to enable model adaptation from multiple source domains, given the sparseness of speech emotion data and the fact that target domains are short of labeled data. A comprehensive cross-corpora experiment with diverse speech emotion domains reveals that transferable features provide gains ranging from 4.3% to 18.4% in speech emotion recognition. We evaluate several domain adaptation approaches, and we perform an ablation study to understand which source domains add the most to the overall recognition effectiveness for a given target domain.",
"title": ""
},
{
"docid": "8a414e60b4a81da21d21d5bcfcff1ccf",
"text": "We propose an e¢ cient liver allocation system for allocating donated organs to patients waiting for transplantation, the only viable treatment for End-Stage Liver Disease. We optimize two metrics which are used to measure the e¢ ciency: total quality adjusted life years and the number of organs wasted due to patients rejecting some organ o¤ers. Our model incorporates the possibility that the patients may turn down the organ o¤ers. Given the scarcity of available organs relative to the number patients waiting for transplantation, we model the system as a multiclass uid model of overloaded queues. The uid model we advance captures the disease evolution over time by allowing the patients to switch between classes over time, e.g. patients waiting for transplantation may get sicker/better, or may die. We characterize the optimal solution to the uid model using the duality framework for optimal control problems developed by Rockafellar (1970a). The optimal solution for assigning livers to patients is an intuitive dynamic index policy, where the indices depend on patients acceptance probabilities of the organ o¤er, immediate rewards, and the shadow prices calculated from the dual dynamical system. Finally, we perform a detailed simulation study to demonstrate the e¤ectiveness of the proposed policy using data from the United Network for Organ Sharing System (UNOS).",
"title": ""
},
{
"docid": "864fbcdf238dcca3d698a03b470fda07",
"text": "Electronic screens on laptop and tablet computers are being used for reading text, often while multitasking. Two experimental studies with college students explored the effect of medium and opportunities to multitask on reading (Study 1) and report writing (Study 2). In study 1, participants (N = 120) read an easy and difficult passage on paper, a laptop, or tablet, while either multitasking or not multitasking. Neither multitasking nor medium impacted reading comprehension, but those who multitasked took longer to read both passages, indicating loss of efficiency with multitasking. In Study 2, participants (N = 67) were asked to synthesize source material in multiple texts to write a one-page evidence-based report. Participants read the source texts either on (1) paper, (2) computer screen without Internet or printer access, or (3) computer screen with Internet and printer access (called the “real-world” condition). There were no differences in report quality or efficiency between those whose source materials were paper or computer. However, global report quality was significantly better when participants read source texts on a computer screen without Internet or printer access, compared with when they had Internet and printer access. Active use of paper for note-taking greatly reduced the negative impact of Internet and printer access in the real-world condition. Although participants expressed a preference for accessing information on paper, reading the texts on paper did not make a significant difference in report quality, compared with either of the two computer conditions. Implications for formal and informal learning are discussed.",
"title": ""
},
{
"docid": "97cdc4383582e8c4c33caeb9514890f8",
"text": "This experimental study investigated effects of intrinsic motivation and embedded relevance enhancement within a computer-based interactive multimedia (CBIM) lesson for English as a foreign language (EFL) learners. Subjects, categorized as having a higher or lower level of intrinsic motivation, were randomly assigned to learn concepts related to criticism using a CBIM program featuring English language text, videos, and exercises either with or without enhanced relevance components. Two dependent variables, comprehension, as measured by a posttest, and perceptions of motivation, as measured by the Modified Instructional Material Motivation Survey (MIMMS), were assessed after students completed the CBIM program. Two-way ANOVA was used to analyze the collected data. The findings indicated that (a) the use of relevance enhancement strategies facilitated students’ language learning regardless of learners’ level of intrinsic motivation, (b) more highly intrinsically motivated students performed better regardless of the specific treatments they received, (c) the effects of the two variables were additive; intrinsically motivated students who learned from the program with embedded instructional strategies performed the best overall, and (d) there was no significant interaction between the two variables.",
"title": ""
},
{
"docid": "bb31643fcfd51c3250f7dc03c8bb6706",
"text": "In this paper, we propose a joint semantic preserving action attribute learning framework for action recognition from depth videos, which is built on multistream deep neural networks. More specifically, this paper describes the idea to explore action attributes learned from deep activations. Multiple stream deep neural networks rather than conventional hand-crafted low-level features are employed to learn the deep activations. An undirected graph is utilized to model the complex semantics among action attributes and is integrated into our proposed joint action attribute learning algorithm. Experiments on several public datasets for action recognition demonstrate that 1) the deep activations achieve the state-ofthe-art discriminative performance as feature vectors and 2) the attribute learner can produce generic attributes, and thus obtains decent performance on zero-shot action recognition.",
"title": ""
},
{
"docid": "80b5030cbb923f32dc791409eb184a80",
"text": "Bayesian Optimisation (BO) refers to a class of methods for global optimisation of a function f which is only accessible via point evaluations. It is typically used in settings where f is expensive to evaluate. A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model. Conventional BO methods have focused on Euclidean and categorical domains, which, in the context of model selection, only permits tuning scalar hyper-parameters of machine learning algorithms. However, with the surge of interest in deep learning, there is an increasing demand to tune neural network architectures. In this work, we develop NASBOT, a Gaussian process based BO framework for neural architecture search. To accomplish this, we develop a distance metric in the space of neural network architectures which can be computed efficiently via an optimal transport program. This distance might be of independent interest to the deep learning community as it may find applications outside of BO. We demonstrate that NASBOT outperforms other alternatives for architecture search in several cross validation based model selection tasks on multi-layer perceptrons and convolutional neural networks.",
"title": ""
},
{
"docid": "87af466921c1c6a48518859e09e88fa8",
"text": "Ensembles of neural networks are known to be much more robust and accurate than individual networks. However, training multiple deep networks for model averaging is computationally expensive. In this paper, we propose a method to obtain the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost. We achieve this goal by training a single neural network, converging to several local minima along its optimization path and saving the model parameters. To obtain repeated rapid convergence, we leverage recent work on cyclic learning rate schedules. The resulting technique, which we refer to as Snapshot Ensembling, is simple, yet surprisingly effective. We show in a series of experiments that our approach is compatible with diverse network architectures and learning tasks. It consistently yields lower error rates than state-of-the-art single models at no additional training cost, and compares favorably with traditional network ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain error rates of 3.4% and 17.4% respectively.",
"title": ""
},
{
"docid": "ab54e41b0e79eed8f6f7fc1b0f9d9ddb",
"text": "The stemming is the process to derive the basic word by removing affix of the word. The stemming is tightly related to basic word or lemma and the sub lemmas. The lemma and sub lemma of Indonesian Language have been grown and absorb from foreign languages or Indonesian traditional languages. Our approach provides the easy way of stemming Indonesian language through flexibility affix classification. Therefore, the affix additional can be applied in easy way. We experiment with 1,704 text documents with 255,182 tokens and the stemmed words is 3,648 words. In this experiment, we compare our approach performance to the confix-stripping approach performance. The result shows that our performance can cover the failure in stemming reduplicated words of confix-stripping approach.",
"title": ""
},
{
"docid": "3872e6183c75ea92f6f77b48c69d7a62",
"text": "Although contactless fingerprint images are rarely affected by skin conditions and finger pressure in comparison with touch-based fingerprint images, they are usually noisy and suffer from low ridge-valley contrast. This paper proposes a robust contactless fingerprint enhancement method based on intrinsic image decomposition and guided image filtering. In order to strengthen the contrast of ridge and valley, intrinsic image decomposition is firstly performed on the observed fingerprint image. Then, the obtained intrinsic fingerprint image is used as the guided image to filter the observed fingerprint image, which can efficiently eliminate noise while preserving the ridge-valley information. Finally, an improved Gabor-based contextual filter is adopted to further enhance the fingerprint image quality. Experimental results of minutiae extraction based on the enhanced fingerprint image demonstrate the validity of the proposed method.",
"title": ""
},
{
"docid": "5158b5da8a561799402cb1ef3baa3390",
"text": "We study the segmental recurrent neural network for end-to-end acoustic modelling. This model connects the segmental conditional random field (CRF) with a recurrent neural network (RNN) used for feature extraction. Compared to most previous CRF-based acoustic models, it does not rely on an external system to provide features or segmentation boundaries. Instead, this model marginalises out all the possible segmentations, and features are extracted from the RNN trained together with the segmental CRF. In essence, this model is self-contained and can be trained end-to-end. In this paper, we discuss practical training and decoding issues as well as the method to speed up the training in the context of speech recognition. We performed experiments on the TIMIT dataset. We achieved 17.3% phone error rate (PER) from the first-pass decoding — the best reported result using CRFs, despite the fact that we only used a zeroth-order CRF and without using any language model.",
"title": ""
},
{
"docid": "14b6ff85d404302af45cf608137879c7",
"text": "In this paper, an automatic multi-organ segmentation based on multi-boost learning and statistical shape model search was proposed. First, simple but robust Multi-Boost Classifier was trained to hierarchically locate and pre-segment multiple organs. To ensure the generalization ability of the classifier relative location information between organs, organ and whole body is exploited. Left lung and right lung are first localized and pre-segmented, then liver and spleen are detected upon its location in whole body and its relative location to lungs, kidney is finally detected upon the features of relative location to liver and left lung. Second, shape and appearance models are constructed for model fitting. The final refinement delineation is performed by best point searching guided by appearance profile classifier and is constrained with multi-boost classified probabilities, intensity and gradient features. The method was tested on 30 unseen CT and 30 unseen enhanced CT (CTce) datasets from ISBI 2015 VISCERAL challenge. The results demonstrated that the multi-boost learning can be used to locate multi-organ robustly and segment lung and kidney accurately. The liver and spleen segmentation based on statistical shape searching has shown good performance too. Copyright c © by the paper’s authors. Copying permitted only for private and academic purposes. In: O. Goksel (ed.): Proceedings of the VISCERAL Anatomy Grand Challenge at the 2015 IEEE International Symposium on Biomedical Imaging (ISBI), New York, NY, Apr 16, 2015 published at http://ceur-ws.org",
"title": ""
}
] |
scidocsrr
|
376d0f06bff225e952df946ed3e1b5d9
|
Breast cancer imaging at mm-waves: Feasibility study on the safety exposure limits
|
[
{
"docid": "8c58b608430e922284d8b4b8cd5cc51d",
"text": "At the end of the 19th century, researchers observed that biological substances have frequency- dependent electrical properties and that tissue behaves \"like a capacitor\" [1]. Consequently, in the first half of the 20th century, the permittivity of many types of cell suspensions and tissues was characterized up to frequencies of approximately 100 MHz. From the measurements, conclusions were drawn, in particular, about the electrical properties of the cell membranes, which are the main contributors to the tissue impedance at frequencies below 10 MHz [2]. In 1926, a study found a significant different permittivity for breast cancer tissue compared with healthy tissue at 20 kHz [3]. After World War II, new instrumentation enabled measurements up to 10 GHz, and a vast amount of data on the dielectric properties of different tissue types in the microwave range was published [4]-[6].",
"title": ""
},
{
"docid": "fbb164c5c0b4db853b92e0919c260331",
"text": "The dielectric properties of tissues have been extracted from the literature of the past five decades and presented in a graphical format. The purpose is to assess the current state of knowledge, expose the gaps there are and provide a basis for the evaluation and analysis of corresponding data from an on-going measurement programme.",
"title": ""
},
{
"docid": "2dc23ce5b1773f12905ebace6ef221a5",
"text": "With the increasing demand for higher data rates and more reliable service capabilities for wireless devices, wireless service providers are facing an unprecedented challenge to overcome a global bandwidth shortage. Early global activities on beyond fourth-generation (B4G) and fifth-generation (5G) wireless communication systems suggest that millimeter-wave (mmWave) frequencies are very promising for future wireless communication networks due to the massive amount of raw bandwidth and potential multigigabit-per-second (Gb/s) data rates [1]?[3]. Both industry and academia have begun the exploration of the untapped mmWave frequency spectrum for future broadband mobile communication networks. In April 2014, the Brooklyn 5G Summit [4], sponsored by Nokia and the New York University (NYU) WIRELESS research center, drew global attention to mmWave communications and channel modeling. In July 2014, the IEEE 802.11 next-generation 60-GHz study group was formed to increase the data rates to over 20 Gb/s in the unlicensed 60-GHz frequency band while maintaining backward compatibility with the emerging IEEE 802.11ad wireless local area network (WLAN) standard [5].",
"title": ""
}
] |
[
{
"docid": "f579065ddd7f8771a0aa3ea6fa71c490",
"text": "A six-port <inline-formula> <tex-math notation=\"LaTeX\">$3\\times 3$ </tex-math></inline-formula> series-fed planar antenna array design for dual polarized X-band airborne synthetic aperture radar (SAR) applications is presented in this paper. The antenna array patches interconnected with series feeding one to another as the structure to realize directional radiation and vertical/horizontal polarization. The patch elements are excited through <inline-formula> <tex-math notation=\"LaTeX\">$50~\\Omega $ </tex-math></inline-formula> microstrip line feeding in series with quarter wave transformer for good impedance characteristics. A six-port feeding allows selecting the direction of the traveling waves, and consequently the sense of linear (vertical and horizontal) polarization. A prototype of the antenna is fabricated and validated the proposed method. The dimensions of the fabricated prototype <inline-formula> <tex-math notation=\"LaTeX\">$3\\times 3$ </tex-math></inline-formula> array antenna are <inline-formula> <tex-math notation=\"LaTeX\">$3.256\\lambda \\text {g} \\times 3.256\\lambda _{\\mathrm {g}}\\times 0.0645\\lambda _{\\mathrm {g}}$ </tex-math></inline-formula> (<inline-formula> <tex-math notation=\"LaTeX\">$\\lambda _{\\mathrm {g}}$ </tex-math></inline-formula> is guided wavelength at center frequency 9.65 GHz). The measured S<sub>11</sub> < −10 dB reflection bandwidths are 1.4% at each port. Furthermore, an interport and intraport isolation S21<−25 dB was also achieved across the operation band. The measured peak gains are higher than 11.5 dBi with 90% efficiency for all individual ports/polarizations V1, V2, V3 H1, H2, and H3). In order to satisfy the different performance requirement of series-fed array antenna, such as realized gain, radiation efficiency, side lobe level, half power beam width and front to back ratio are also examined. Measured results of an antenna prototype successfully validate the concept. The proposed antenna is found suitable for dual polarized airborne SAR applications.",
"title": ""
},
{
"docid": "9a04006d0328b838b9360a381401e436",
"text": "In this paper, a novel approach for two-loop control of the DC-DC flyback converter in discontinuous conduction mode is presented by using sliding mode controller. The proposed controller can regulate output of the converter in wide range of input voltage and load resistance. In order to verify accuracy and efficiency of the developed sliding mode controller, proposed method is simulated in MATLAB/Simulink. It is shown that the developed controller has faster dynamic response compared with standard integrated circuit (MIC38C42-5) based regulators.",
"title": ""
},
{
"docid": "e0e33d26cc65569e80213069cb5ad857",
"text": "Capsule Networks have great potential to tackle problems in structural biology because of their aention to hierarchical relationships. is paper describes the implementation and application of a Capsule Network architecture to the classication of RAS protein family structures on GPU-based computational resources. e proposed Capsule Network trained on 2D and 3D structural encodings can successfully classify HRAS and KRAS structures. e Capsule Network can also classify a protein-based dataset derived from a PSI-BLAST search on sequences of KRAS and HRAS mutations. Our results show an accuracy improvement compared to traditional convolutional networks, while improving interpretability through visualization of activation vectors.",
"title": ""
},
{
"docid": "0600b610a9ebb3fcd275c5820b37cb5b",
"text": "In this paper, we solve the following data summarization problem: given a multi-dimensional data set augmented with a binary attribute, how can we construct an interpretable and informative summary of the factors affecting the binary attribute in terms of the combinations of values of the dimension attributes? We refer to such summaries as explanation tables. We show the hardness of constructing optimally-informative explanation tables from data, and we propose effective and efficient heuristics. The proposed heuristics are based on sampling and include optimizations related to computing the information content of a summary from a sample of the data. Using real data sets, we demonstrate the advantages of explanation tables compared to related approaches that can be adapted to solve our problem, and we show significant performance benefits of our optimizations.",
"title": ""
},
{
"docid": "36d5ba974945cba3bf9120f3ab9aa7a0",
"text": "In this paper, we analyze the spectral efficiency of multicell massive multiple-input-multiple-output (MIMO) systems with downlink training and a new pilot contamination precoding (PCP) scheme. First, we analyze the spectral efficiency of the beamforming training (BT) scheme with maximum-ratio transmission (MRT) precoding. Then, we derive an approximate closed-form expression of the spectral efficiency to find the optimal lengths of uplink and downlink pilots. Simulation results show that the achieved spectral efficiency can be improved due to channel estimation at the user side, but in comparison with a single-cell scenario, the spectral efficiency per cell in multicell scenario degrades because of pilot contamination. We focus on the practical case where the number of base station (BS) antennas is large but still finite and propose the BT and PCP (BT-PCP) transmission scheme to mitigate the pilot contamination with limited cooperation between BSs. We confirm the effectiveness of the proposed BT-PCP scheme with simulation, and we show that the proposed BT-PCP scheme achieves higher spectral efficiency than the conventional PCP method and that the performance gap from the perfect channel state information (CSI) scenario without pilot contamination is small.",
"title": ""
},
{
"docid": "998c631c38d49705994e85252b500882",
"text": "The botnet, as one of the most formidable threats to cyber security, is often used to launch large-scale attack sabotage. How to accurately identify the botnet, especially to improve the performance of the detection model, is a key technical issue. In this paper, we propose a framework based on generative adversarial networks to augment botnet detection models (Bot-GAN). Moreover, we explore the performance of the proposed framework based on flows. The experimental results show that Bot-GAN is suitable for augmenting the original detection model. Compared with the original detection model, the proposed approach improves the detection performance, and decreases the false positive rate, which provides an effective method for improving the detection performance. In addition, it also retains the primary characteristics of the original detection model, which does not care about the network payload information, and has the ability to detect novel botnets and others using encryption or proprietary protocols.",
"title": ""
},
{
"docid": "b3f423e513c543ecc9fe7003ff9880ea",
"text": "Increasing attention has been paid to air quality monitoring with a rapid development in industry and transportation applications in the modern society. However, the existing air quality monitoring systems cannot provide satisfactory spatial and temporal resolutions of the air quality information with low costs in real time. In this paper, we propose a new method to implement the air quality monitoring system based on state-of-the-art Internet-of-Things (IoT) techniques. In this system, portable sensors collect the air quality information timely, which is transmitted through a low power wide area network. All air quality data are processed and analyzed in the IoT cloud. The completed air quality monitoring system, including both hardware and software, is developed and deployed successfully in urban environments. Experimental results show that the proposed system is reliable in sensing the air quality, which helps reveal the change patterns of air quality to some extent.",
"title": ""
},
{
"docid": "3f9e5be7bfe8c28291758b0670afc61c",
"text": "Grayscale error di usion introduces nonlinear distortion (directional artifacts and false textures), linear distortion (sharpening), and additive noise. In color error di usion what color to render is a major concern in addition to nding optimal dot patterns. This article presents a survey of key methods for artifact reduction in grayscale and color error di usion. The linear gain model by Kite et al. replaces the thresholding quantizer with a scalar gain plus additive noise. They show that the sharpening is proportional to the scalar gain. Kite et al. derive the sharpness control parameter value in threshold modulation (Eschbach and Knox, 1991) to compensate linear distortion. False textures at mid-gray (Fan and Eschbach, 1994) are due to limit cycles, which can be broken up by using a deterministic bit ipping quantizer (Damera-Venkata and Evans, 2001). Several other variations on grayscale error di usion have been proposed to reduce false textures in shadow and highlight regions, including green noise halftoning (Levien, 1993) and tone-dependent error di usion (Li and Allebach, 2002). Color error di usion ideally requires the quantization error to be di used to frequencies and colors, to which the HVS is least sensitive. We review the following approaches: color plane separable (Kolpatzik and Bouman 1992) design; perceptual quantization (Shaked et al. 1996, Haneishi et al. 1996) ; green noise extensions (Lau et al. 2000); and matrix-valued error lters (Damera-Venkata and Evans, 2001).",
"title": ""
},
{
"docid": "4e4560d1434ee05c30168e49ffc3d94a",
"text": "We present a tree data structure for fast nearest neighbor operations in general <i>n</i>-point metric spaces (where the data set consists of <i>n</i> points). The data structure requires <i>O</i>(<i>n</i>) space <i>regardless</i> of the metric's structure yet maintains all performance properties of a navigating net (Krauthgamer & Lee, 2004b). If the point set has a bounded expansion constant <i>c</i>, which is a measure of the intrinsic dimensionality, as defined in (Karger & Ruhl, 2002), the cover tree data structure can be constructed in <i>O</i> (<i>c</i><sup>6</sup><i>n</i> log <i>n</i>) time. Furthermore, nearest neighbor queries require time only logarithmic in <i>n</i>, in particular <i>O</i> (<i>c</i><sup>12</sup> log <i>n</i>) time. Our experimental results show speedups over the brute force search varying between one and several orders of magnitude on natural machine learning datasets.",
"title": ""
},
{
"docid": "3cbb932e65cf2150cb32aaf930b45492",
"text": "In software industries, various open source projects utilize the services of Bug Tracking Systems that let users submit software issues or bugs and allow developers to respond to and fix them. The users label the reports as bugs or any other relevant class. This classification helps to decide which team or personnel would be responsible for dealing with an issue. A major problem here is that users tend to wrongly classify the issues, because of which a middleman called a bug triager is required to resolve any misclassifications. This ensures no time is wasted at the developer end. This approach is very time consuming and therefore it has been of great interest to automate the classification process, not only to speed things up, but to lower the amount of errors as well. In the literature, several approaches including machine learning techniques have been proposed to automate text classification. However, there has not been an extensive comparison on the performance of different natural language classifiers in this field. In this paper we compare general natural language data classifying techniques using five different machine learning algorithms: Naive Bayes, kNN, Pegasos, Rocchio and Perceptron. The performance comparison of these algorithms was done on the basis of their apparent error rates. The data-set involved four different projects, Httpclient, Jackrabbit, Lucene and Tomcat5, that used two different Bug Tracking Systems - Bugzilla and Jira. An experimental comparison of pre-processing techniques was also performed.",
"title": ""
},
{
"docid": "a8a8656f2f7cdcab79662cb150c8effa",
"text": "As networks grow both in importance and size, there is an increasing need for effective security monitors such as Network Intrusion Detection System to prevent such illicit accesses. Intrusion Detection Systems technology is an effective approach in dealing with the problems of network security. In this paper, we present an intrusion detection model based on hybrid fuzzy logic and neural network. The key idea is to take advantage of different classification abilities of fuzzy logic and neural network for intrusion detection system. The new model has ability to recognize an attack, to differentiate one attack from another i.e. classifying attack, and the most important, to detect new attacks with high detection rate and low false negative. Training and testing data were obtained from the Defense Advanced Research Projects Agency (DARPA) intrusion detection evaluation data set.",
"title": ""
},
{
"docid": "33aef68f318147653726bc5f4f37d8e9",
"text": "This study was designed to investigate mentally simulated actions in a virtual reality environment. Naive human subjects (n = 15) were instructed to imagine themselves walking in a three-dimensional virtual environment toward gates of different apparent widths placed at three different apparent distances. Each subject performed nine blocks of six trials in a randomised order. The response time (reaction time and mental walking time) was measured as the duration between an acoustic go signal and a motor signal produced by the subject. There was a combined effect on response time of both gate width and distance. Response time increased for decreasing apparent gate widths when the gate was placed at different distances. These results support the notion that mentally simulated actions are governed by central motor rules.",
"title": ""
},
{
"docid": "ad4547c0a82353f122f536352684384f",
"text": "Reported complication rates are low for lateral epicondylitis management, but the anatomic complexity of the elbow allows for possible catastrophic complication. This review documents complications associated with lateral epicondylar release: 67 studies reporting outcomes of lateral epicondylar release with open, percutaneous, or arthroscopic methods were reviewed and 6 case reports on specific complications associated with the procedure are included. Overall complication rate was 3.3%. For open procedures it was 4.3%, percutaneous procedures 1.9%, and arthroscopic procedures 1.1%. In higher-level studies directly comparing modalities, the complication rates were 1.3%, 0%, and 1.2%, respectively.",
"title": ""
},
{
"docid": "e39ad8ee1d913cba1707b6aafafceefb",
"text": "Thoracic Outlet Syndrome (TOS) is the constellation of symptoms caused by compression of neurovascular structures at the superior aperture of the thorax, properly the thoracic inlet! The diagnosis and treatment is contentious and some even question its existence. Symptoms are often confused with distal compression neuropathies or cervical",
"title": ""
},
{
"docid": "ac8f1d1007dd725312e6481748a1df63",
"text": "We present a novel method to estimate full-body human pose in video sequence by incorporating global motion cues. It has been demonstrated that temporal constraints can largely enhance the pose estimation. Most current approaches typically employ local motion to propagate pose detections to supplement the pose candidates. However, the local motion estimation is often inaccurate under fast movements of body parts and unhelpful when no strong detections achieved in adjacent frames. In this paper, we propose to propagate the detection in each frame using the global motion estimation. Benefiting from the strong detections, our algorithm first produces reasonable trajectory hypotheses for each body part. Then, we cast pose estimation as an optimization problem defined on these trajectories with spatial links between body parts. In the optimization process, we select body part trajectory rather than body part candidate to infer the human pose. Experimental results demonstrate significant performance improvement in comparison with the state-of-the-art methods.",
"title": ""
},
{
"docid": "a1914bf9e03fe2ea53d7dcbbde8cc94b",
"text": "Organic User Interfaces (OUIs) are flexible, actuated interfaces characterized by being aesthetically pleasing, intuitively manipulated and ubiquitously embedded in our daily life. In this paper, we critically survey the state-of-the-art for OUIs in interactive architecture research at two levels: 1) Architecture and Landscape; and 2) Interior Design. We postulate that OUIs have specific qualities that offer great potential for building interactive interiors and entire architectures that have the potential to -finally- transform the vision of smart homes and ubiquitous computing environments (calm computing) into reality. We formulate a manifesto for OUI Architecture in both exterior and interior design, arguing that OUIs should be at the core of a new interdisciplinary field driving research and practice in architecture. Based on this research agenda we propose concerted efforts to be made to begin addressing the challenges and opportunities of OUIs. This agenda offers us the strongest means through which to deliver a future of interactive architecture.",
"title": ""
},
{
"docid": "1004d314aecd1fd13c68c6ea2db9e8bd",
"text": "Hand, foot and mouth disease (HFMD) is a highly contagious viral infection affecting young children during the spring to fall seasons. Recently, serious outbreaks of HFMD were reported frequently in the Asia-Pacific region, including China and Korea. The symptoms of HFMD are usually mild, comprising fever, loss of appetite, and a rash with blisters, which do not need specific treatment. However, there are uncommon neurological or cardiac complications such as meningitis and acute flaccid paralysis that can be fatal. HFMD is most commonly caused by infection with coxsackievirus A16, and secondly by enterovirus 71 (EV71). Many other strains of coxsackievirus and enterovirus can also cause HFMD. Importantly, HFMD caused by EV71 tends to be associated with fatal complications. Therefore, there is an urgent need to protect against EV71 infection. Development of vaccines against EV71 would be the most effective approach to prevent EV71 outbreaks. Here, we summarize EV71 infection and development of vaccines, focusing on current scientific and clinical progress.",
"title": ""
},
{
"docid": "22e677f2073599d6ffc9eadf6f3a833f",
"text": "Statistical inference in psychology has traditionally relied heavily on p-value significance testing. This approach to drawing conclusions from data, however, has been widely criticized, and two types of remedies have been advocated. The first proposal is to supplement p values with complementary measures of evidence, such as effect sizes. The second is to replace inference with Bayesian measures of evidence, such as the Bayes factor. The authors provide a practical comparison of p values, effect sizes, and default Bayes factors as measures of statistical evidence, using 855 recently published t tests in psychology. The comparison yields two main results. First, although p values and default Bayes factors almost always agree about what hypothesis is better supported by the data, the measures often disagree about the strength of this support; for 70% of the data sets for which the p value falls between .01 and .05, the default Bayes factor indicates that the evidence is only anecdotal. Second, effect sizes can provide additional evidence to p values and default Bayes factors. The authors conclude that the Bayesian approach is comparatively prudent, preventing researchers from overestimating the evidence in favor of an effect.",
"title": ""
},
{
"docid": "79c80b3aea50ab971f405b8b58da38de",
"text": "In this paper, the design and implementation of small inductors in printed circuit board (PCB) for domestic induction heating applications is presented. With this purpose, we have developed both a manufacturing technique and an electromagnetic model of the system based on finite-element method (FEM) simulations. The inductor arrangement consists of a stack of printed circuit boards in which a planar litz wire structure is implemented. The developed PCB litz wire structure minimizes the losses in a similar way to the conventional multi-stranded litz wires; whereas the stack of PCBs allows increasing the power transferred to the pot. Different prototypes of the proposed PCB inductor have been measured at low signal levels. Finally, a PCB inductor has been integrated in an electronic stage to test at high signal levels, i.e. in the similar working conditions to the commercial application.",
"title": ""
},
{
"docid": "d263d778738494e26e160d1c46874fff",
"text": "We introduce new online models for two important aspectsof modern financial markets: Volume Weighted Average Pricetrading and limit order books. We provide an extensivestudy of competitive algorithms in these models and relatethem to earlier online algorithms for stock trading.",
"title": ""
}
] |
scidocsrr
|
a92324172cfd09afa05ef9065dc06edc
|
The Utility of Hello Messages for Determining Link Connectivity
|
[
{
"docid": "ef5f1aa863cc1df76b5dc057f407c473",
"text": "GLS is a new distributed location service which tracks mobile node locations. GLS combined with geographic forwarding allows the construction of ad hoc mobile networks that scale to a larger number of nodes than possible with previous work. GLS is decentralized and runs on the mobile nodes themselves, requiring no fixed infrastructure. Each mobile node periodically updates a small set of other nodes (its location servers) with its current location. A node sends its position updates to its location servers without knowing their actual identities, assisted by a predefined ordering of node identifiers and a predefined geographic hierarchy. Queries for a mobile node's location also use the predefined identifier ordering and spatial hierarchy to find a location server for that node.\nExperiments using the ns simulator for up to 600 mobile nodes show that the storage and bandwidth requirements of GLS grow slowly with the size of the network. Furthermore, GLS tolerates node failures well: each failure has only a limited effect and query performance degrades gracefully as nodes fail and restart. The query performance of GLS is also relatively insensitive to node speeds. Simple geographic forwarding combined with GLS compares favorably with Dynamic Source Routing (DSR): in larger networks (over 200 nodes) our approach delivers more packets, but consumes fewer network resources.",
"title": ""
}
] |
[
{
"docid": "30b1b4df0901ab61ab7e4cfb094589d1",
"text": "Direct modulation at 56 and 50 Gb/s of 1.3-μm InGaAlAs ridge-shaped-buried heterostructure (RS-BH) asymmetric corrugation-pitch-modulation (ACPM) distributed feedback lasers is experimentally demonstrated. The fabricated lasers have a low threshold current (5.6 mA at 85°C), high temperature characteristics (71 K), high slope relaxation frequency (3.2 GHz/mA1/2 at 85°C), and wide bandwidth (22.1 GHz at 85°C). These superior properties enable the lasers to run at 56 Gb/s and 55°C and 50 Gb/s at up to 80°C for backto-back operation with clear eye openings. This is achieved by the combination of a low-leakage RS-BH and an ACPM grating. Moreover, successful transmission of 56and 50-Gb/s modulated signals over a 10-km standard single-mode fiber is achieved. These results confirm the suitability of this type of laser for use as a cost-effective light source in 400 GbE and OTU5 applications.",
"title": ""
},
{
"docid": "701fb71923bb8a2fc90df725074f576b",
"text": "Quantum computing poses challenges to public key signatures as we know them today. LMS and XMSS are two hash based signature schemes that have been proposed in the IETF as quantum secure. Both schemes are based on well-studied hash trees, but their similarities and differences have not yet been discussed. In this work, we attempt to compare the two standards. We compare their security assumptions and quantify their signature and public key sizes. We also address the computation overhead they introduce. Our goal is to provide a clear understanding of the schemes’ similarities and differences for implementers and protocol designers to be able to make a decision as to which standard to chose.",
"title": ""
},
{
"docid": "56b42c551ad57c82ad15e6fc2e98f528",
"text": "Recent work has demonstrated that when artificial agents are limited in their ability to achieve their goals, the agent designer can benefit by making the agent’s goals different from the designer’s. This gives rise to the optimization problem of designing the artificial agent’s goals—in the RL framework, designing the agent’s reward function. Existing attempts at solving this optimal reward problem do not leverage experience gained online during the agent’s lifetime nor do they take advantage of knowledge about the agent’s structure. In this work, we develop a gradient ascent approach with formal convergence guarantees for approximately solving the optimal reward problem online during an agent’s lifetime. We show that our method generalizes a standard policy gradient approach, and we demonstrate its ability to improve reward functions in agents with various forms of limitations. 1 The Optimal Reward Problem In this work, we consider the scenario of an agent designer building an autonomous agent. The designer has his or her own goals which must be translated into goals for the autonomous agent. We represent goals using the Reinforcement Learning (RL) formalism of the reward function. This leads to the optimal reward problem of designing the agent’s reward function so as to maximize the objective reward received by the agent designer. Typically, the designer assigns his or her own reward to the agent. However, there is ample work which demonstrates the benefit of assigning reward which does not match the designer’s. For example, work on reward shaping [11] has shown how to modify rewards to accelerate learning without altering the optimal policy, and PAC-MDP methods [5, 20] including approximate Bayesian methods [7, 19] add bonuses to the objective reward to achieve optimism under uncertainty. These approaches explicitly or implicitly assume that the asymptotic behavior of the agent should be the same as that which would occur using the objective reward function. These methods do not explicitly consider the optimal reward problem; however, they do show improved performance through reward modification. In our recent work that does explicitly consider the optimal reward problem [18], we analyzed an explicit hypothesis about the benefit of reward design—that it helps mitigate the performance loss caused by computational constraints (bounds) on agent architectures. We considered various types of agent limitations—limits on planning depth, failure to account for partial observability, and other erroneous modeling assumptions—and demonstrated the benefits of good reward functions in each case empirically. Crucially, in bounded agents, the optimal reward function often leads to behavior that is different from the asymptotic behavior achieved with the objective reward function. In this work, we develop an algorithm, Policy Gradient for Reward Design (PGRD), for improving reward functions for a family of bounded agents that behave according to repeated local (from the current state) model-based planning. We show that this algorithm is capable of improving the reward functions in agents with computational limitations necessitating small bounds on the depth of planning, and also from the use of an inaccurate model (which may be inaccurate due to computationally-motivated approximations). PGRD has few parameters, improves the reward",
"title": ""
},
{
"docid": "09132f8695e6f8d32d95a37a2bac46ee",
"text": "Social media has become one of the main channels for people to access and consume news, due to the rapidness and low cost of news dissemination on it. However, such properties of social media also make it a hotbed of fake news dissemination, bringing negative impacts on both individuals and society. Therefore, detecting fake news has become a crucial problem attracting tremendous research effort. Most existing methods of fake news detection are supervised, which require an extensive amount of time and labor to build a reliably annotated dataset. In search of an alternative, in this paper, we investigate if we could detect fake news in an unsupervised manner. We treat truths of news and users’ credibility as latent random variables, and exploit users’ engagements on social media to identify their opinions towards the authenticity of news. We leverage a Bayesian network model to capture the conditional dependencies among the truths of news, the users’ opinions, and the users’ credibility. To solve the inference problem, we propose an efficient collapsed Gibbs sampling approach to infer the truths of news and the users’ credibility without any labelled data. Experiment results on two datasets show that the proposed method significantly outperforms the compared unsupervised methods.",
"title": ""
},
{
"docid": "e729d7b399b3a4d524297ae79b28f45d",
"text": "The aim of this paper is to solve optimal design problems for industrial applications when the objective function value requires the evaluation of expensive simulation codes and its first derivatives are not available. In order to achieve this goal we propose two new algorithms that draw inspiration from two existing approaches: a filled function based algorithm and a Particle Swarm Optimization method. In order to test the efficiency of the two proposed algorithms, we perform a numerical comparison both with the methods we drew inspiration from, and with some standard Global Optimization algorithms that are currently adopted in industrial design optimization. Finally, a realistic ship design problem, namely the reduction of the amplitude of the heave motion of a ship advancing in head seas (a problem connected to both safety and comfort), is solved using the new codes and other global and local derivativeThis work has been partially supported by the Ministero delle Infrastrutture e dei Trasporti in the framework of the research plan “Programma di Ricerca sulla Sicurezza”, Decreto 17/04/2003 G.U. n. 123 del 29/05/2003, by MIUR, FIRB 2001 Research Program Large-Scale Nonlinear Optimization and by the U.S. Office of Naval Research (NICOP grant N. 000140510617). E.F. Campana ( ) · D. Peri · A. Pinto INSEAN—Istituto Nazionale per Studi ed Esperienze di Architettura Navale, Via di Vallerano 139, 00128 Roma, Italy e-mail: E.Campana@insean.it G. Liuzzi Consiglio Nazionale delle Ricerche, Istituto di Analisi dei Sistemi ed Informatica “A. Ruberti”, Viale Manzoni 30, 00185 Roma, Italy S. Lucidi Dipartimento di Informatica e Sistemistica “A. Ruberti”, Università degli Studi di Roma “Sapienza”, Via Ariosto 25, 00185 Roma, Italy V. Piccialli Dipartimento di Ingegneria dell’Impresa, Università degli Studi di Roma “Tor Vergata”, Via del Policlinico 1, 00133 Roma, Italy 534 E.F. Campana et al. free optimization methods. All the numerical results show the effectiveness of the two new algorithms.",
"title": ""
},
{
"docid": "e95649b06c70682ba4229cff11fefeaf",
"text": "In this paper, we present Black SDN, a Software Defined Networking (SDN) architecture for secure Internet of Things (IoT) networking and communications. SDN architectures were developed to provide improved routing and networking performance for broadband networks by separating the control plain from the data plain. This basic SDN concept is amenable to IoT networks, however, the common SDN implementations designed for wired networks are not directly amenable to the distributed, ad hoc, low-power, mesh networks commonly found in IoT systems. SDN promises to improve the overall lifespan and performance of IoT networks. However, the SDN architecture changes the IoT network's communication patterns, allowing new types of attacks, and necessitating a new approach to securing the IoT network. Black SDN is a novel SDN-based secure networking architecture that secures both the meta-data and the payload within each layer of an IoT communication packet while utilizing the SDN centralized controller as a trusted third party for secure routing and optimized system performance management. We demonstrate through simulation the feasibility of Black SDN in networks where nodes are asleep most of their lives, and specifically examine a Black SDN IoT network based upon the IEEE 802.15.4 LR WPAN (Low Rate - Wireless Personal Area Network) protocol.",
"title": ""
},
{
"docid": "01d74a3a50d1121646ddab3ea46b5681",
"text": "Sleep quality is important, especially given the considerable number of sleep-related pathologies. The distribution of sleep stages is a highly effective and objective way of quantifying sleep quality. As a standard multi-channel recording used in the study of sleep, polysomnography (PSG) is a widely used diagnostic scheme in sleep medicine. However, the standard process of sleep clinical test, including PSG recording and manual scoring, is complex, uncomfortable, and time-consuming. This process is difficult to implement when taking the whole PSG measurements at home for general healthcare purposes. This work presents a novel sleep stage classification system, based on features from the two forehead EEG channels FP1 and FP2. By recording EEG from forehead, where there is no hair, the proposed system can monitor physiological changes during sleep in a more practical way than previous systems. Through a headband or self-adhesive technology, the necessary sensors can be applied easily by users at home. Analysis results demonstrate that classification performance of the proposed system overcomes the individual differences between different participants in terms of automatically classifying sleep stages. Additionally, the proposed sleep stage classification system can identify kernel sleep features extracted from forehead EEG, which are closely related with sleep clinician's expert knowledge. Moreover, forehead EEG features are classified into five sleep stages by using the relevance vector machine. In a leave-one-subject-out cross validation analysis, we found our system to correctly classify five sleep stages at an average accuracy of 76.7 ± 4.0 (SD) % [average kappa 0.68 ± 0.06 (SD)]. Importantly, the proposed sleep stage classification system using forehead EEG features is a viable alternative for measuring EEG signals at home easily and conveniently to evaluate sleep quality reliably, ultimately improving public healthcare.",
"title": ""
},
{
"docid": "6b1dc94c4c70e1c78ea32a760b634387",
"text": "3d reconstruction from a single image is inherently an ambiguous problem. Yet when we look at a picture, we can often infer 3d information about the scene. Humans perform single-image 3d reconstructions by using a variety of singleimage depth cues, for example, by recognizing objects and surfaces, and reasoning about how these surfaces are connected to each other. In this paper, we focus on the problem of automatic 3d reconstruction of indoor scenes, specifically ones (sometimes called “Manhattan worlds”) that consist mainly of orthogonal planes. We use a Markov random field (MRF) model to identify the different planes and edges in the scene, as well as their orientations. Then, an iterative optimization algorithm is applied to infer the most probable position of all the planes, and thereby obtain a 3d reconstruction. Our approach is fully automatic—given an input image, no human intervention is necessary to obtain an approximate 3d reconstruction.",
"title": ""
},
{
"docid": "a341bcf8efb975c078cc452e0eecc183",
"text": "We show that, during inference with Convolutional Neural Networks (CNNs), more than 2× to 8× ineffectual work can be exposed if instead of targeting those weights and activations that are zero, we target different combinations of value stream properties. We demonstrate a practical application with Bit-Tactical (TCL), a hardware accelerator which exploits weight sparsity, per layer precision variability and dynamic fine-grain precision reduction for activations, and optionally the naturally occurring sparse effectual bit content of activations to improve performance and energy efficiency. TCL benefits both sparse and dense CNNs, natively supports both convolutional and fully-connected layers, and exploits properties of all activations to reduce storage, communication, and computation demands. While TCL does not require changes to the CNN to deliver benefits, it does reward any technique that would amplify any of the aforementioned weight and activation value properties. Compared to an equivalent data-parallel accelerator for dense CNNs, TCLp, a variant of TCL improves performance by 5.05× and is 2.98× more energy efficient while requiring 22% more area.",
"title": ""
},
{
"docid": "5700ba2411f9b4e4ed59c8c5839dc87d",
"text": "Radiomics applies machine learning algorithms to quantitative imaging data to characterise the tumour phenotype and predict clinical outcome. For the development of radiomics risk models, a variety of different algorithms is available and it is not clear which one gives optimal results. Therefore, we assessed the performance of 11 machine learning algorithms combined with 12 feature selection methods by the concordance index (C-Index), to predict loco-regional tumour control (LRC) and overall survival for patients with head and neck squamous cell carcinoma. The considered algorithms are able to deal with continuous time-to-event survival data. Feature selection and model building were performed on a multicentre cohort (213 patients) and validated using an independent cohort (80 patients). We found several combinations of machine learning algorithms and feature selection methods which achieve similar results, e.g., MSR-RF: C-Index = 0.71 and BT-COX: C-Index = 0.70 in combination with Spearman feature selection. Using the best performing models, patients were stratified into groups of low and high risk of recurrence. Significant differences in LRC were obtained between both groups on the validation cohort. Based on the presented analysis, we identified a subset of algorithms which should be considered in future radiomics studies to develop stable and clinically relevant predictive models for time-to-event endpoints.",
"title": ""
},
{
"docid": "081c350100f4db11818c75507f715cda",
"text": "Building detection and footprint extraction are highly demanded for many remote sensing applications. Though most previous works have shown promising results, the automatic extraction of building footprints still remains a nontrivial topic, especially in complex urban areas. Recently developed extensions of the CNN framework made it possible to perform dense pixel-wise classification of input images. Based on these abilities we propose a methodology, which automatically generates a full resolution binary building mask out of a Digital Surface Model (DSM) using a Fully Convolution Network (FCN) architecture. The advantage of using the depth information is that it provides geometrical silhouettes and allows a better separation of buildings from background as well as through its invariance to illumination and color variations. The proposed framework has mainly two steps. Firstly, the FCN is trained on a large set of patches consisting of normalized DSM (nDSM) as inputs and available ground truth building mask as target outputs. Secondly, the generated predictions from FCN are viewed as unary terms for a Fully connected Conditional Random Fields (FCRF), which enables us to create a final binary building mask. A series of experiments demonstrate that our methodology is able to extract accurate building footprints which are close to the buildings original shapes to a high degree. The quantitative and qualitative analysis show the significant improvements of the results in contrast to the multy-layer fully connected network from our previous work.",
"title": ""
},
{
"docid": "051c530bf9d49bf1066ddf856488dff1",
"text": "This review paper focusses on DESMO-J, a comprehensive and stable Java-based open-source simulation library. DESMO-J is recommended in numerous academic publications for implementing discrete event simulation models for various applications. The library was integrated into several commercial software products. DESMO-J’s functional range and usability is continuously improved by the Department of Informatics of the University of Hamburg (Germany). The paper summarizes DESMO-J’s core functionality and important design decisions. It also compares DESMO-J to other discrete event simulation frameworks. Furthermore, latest developments and new opportunities are addressed in more detail. These include a) improvements relating to the quality and applicability of the software itself, e.g. a port to .NET, b) optional extension packages like visualization libraries and c) new components facilitating a more powerful and flexible simulation logic, like adaption to real time or a compact representation of production chains and similar queuing systems. Finally, the paper exemplarily describes how to apply DESMO-J to harbor logistics and business process modeling, thus providing insights into DESMO-J practice.",
"title": ""
},
{
"docid": "dce75562a7e8b02364d39fd7eb407748",
"text": "The ability to predict future user activity is invaluable when it comes to content recommendation and personalization. For instance, knowing when users will return to an online music service and what they will listen to increases user satisfaction and therefore user retention.\n We present a model based on Long-Short Term Memory to estimate when a user will return to a site and what their future listening behavior will be. In doing so, we aim to solve the problem of Just-In-Time recommendation, that is, to recommend the right items at the right time. We use tools from survival analysis for return time prediction and exponential families for future activity analysis. We show that the resulting multitask problem can be solved accurately, when applied to two real-world datasets.",
"title": ""
},
{
"docid": "9dde89f24f55602e21823620b49633dd",
"text": "Darier's disease is a rare late-onset genetic disorder of keratinisation. Mosaic forms of the disease characterised by localised and unilateral keratotic papules carrying post-zygotic ATP2A2 mutation in affected areas have been documented. Segmental forms of Darier's disease are classified into two clinical subtypes: type 1 manifesting with distinct lesions on a background of normal appearing skin and type 2 with well-defined areas of Darier's disease occurring on a background of less severe non-mosaic phenotype. Herein we describe two cases of type 1 segmental Darier's disease with favourable response to topical retinoids.",
"title": ""
},
{
"docid": "c0c064fdc011973848568f5b087ba20b",
"text": "’InfoVis novices’ have been found to struggle with visual data exploration. A ’conversational interface’ which would take natural language inputs to visualization generation and modification, while maintaining a history of the requests, visualizations and findings of the user, has the potential to ameliorate many of these challenges. We present Articulate2, initial work toward a conversational interface to visual data exploration.",
"title": ""
},
{
"docid": "0b024671e04090051292b5e76a4690ae",
"text": "The brain has evolved in this multisensory context to perceive the world in an integrated fashion. Although there are good reasons to be skeptical of the influence of cognition on perception, here we argue that the study of sensory substitution devices might reveal that perception and cognition are not necessarily distinct, but rather continuous aspects of our information processing capacities.",
"title": ""
},
{
"docid": "25828231caaf3288ed4fdb27df7f8740",
"text": "This paper reports on an algorithm to support autonomous vehicles in reasoning about occluded regions of their environment to make safe, reliable decisions. In autonomous driving scenarios, other traffic participants are often occluded from sensor measurements by buildings or large vehicles like buses or trucks, which makes tracking dynamic objects challenging.We present a method to augment standard dynamic object trackers with means to 1) estimate the occluded state of other traffic agents and 2) robustly associate the occluded estimates with new observations after the tracked object reenters the visible region of the sensor horizon. We perform occluded state estimation using a dynamics model that accounts for the driving behavior of traffic agents and a hybrid Gaussian mixture model (hGMM) to capture multiple hypotheses over discrete behavior, such as driving along different lanes or turning left or right at an intersection. Upon new observations, we associate them to existing estimates in terms of the Kullback-Leibler divergence (KLD). We evaluate the proposed method in simulation and using a real-world traffic-tracking dataset from an autonomous vehicle platform. Results show that our method can handle significantly prolonged occlusions when compared to a standard dynamic object tracking system.",
"title": ""
},
{
"docid": "2318fbd8ca703c0ff5254606b8dce442",
"text": "Historically, the inspection and maintenance of high-voltage power lines have been performed by linemen using various traditional means. In recent years, the use of robots appeared as a new and complementary method of performing such tasks, as several initiatives have been explored around the world. Among them is the teleoperated robotic platform called LineScout Technology, developed by Hydro-Québec, which has the capacity to clear most obstacles found on the grid. Since its 2006 introduction in the operations, it is considered by many utilities as the pioneer project in the domain. This paper’s purpose is to present the mobile platform design and its main mechatronics subsystems to support a comprehensive description of the main functions and application modules it offers. This includes sensors and a compact modular arm equipped with tools to repair cables and broken conductor strands. This system has now been used on many occasions to assess the condition of power line infrastructure and some results are presented. Finally, future developments and potential technologies roadmap are briefly discussed.",
"title": ""
}
] |
scidocsrr
|
855d1d47b3a1ddb7069912ec769cd41b
|
Stock portfolio selection using learning-to-rank algorithms with news sentiment
|
[
{
"docid": "14838947ee3b95c24daba5a293067730",
"text": "In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.",
"title": ""
}
] |
[
{
"docid": "bd5589d700173efdfb38a8cf9f8bbb3a",
"text": "Interior permanent-magnet (IPM) synchronous motors possess special features for adjustable-speed operation which distinguish them from other classes of ac machines. They are robust high powerdensity machines capable of operating at high motor and inverter efficiencies over wide speed ranges, including considerable ranges of constant-power operation. The magnet cost is minimized by the low magnet weight requirements of the IPM design. The impact of the buried-magnet configuration on the motor's electromagnetic characteristics is discussed. The rotor magnetic circuit saliency preferentially increases the quadrature-axis inductance and introduces a reluctance torque term into the IPM motor's torque equation. The electrical excitation requirements for the IPM synchronous motor are also discussed. The control of the sinusoidal phase currents in magnitude and phase angle with respect to the rotor orientation provides a means for achieving smooth responsive torque control. A basic feedforward algorithm for executing this type of current vector torque control is discussed, including the implications of current regulator saturation at high speeds. The key results are illustrated using a combination of simulation and prototype IPM drive measurements.",
"title": ""
},
{
"docid": "9869bc5dfc8f20b50608f0d68f7e49ba",
"text": "Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of “objectness”.",
"title": ""
},
{
"docid": "982253c9f0c05e50a070a0b2e762abd7",
"text": "In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface. To generate a set of multi-content images following a consistent style from very few examples, we propose an end-to-end stacked conditional GAN model considering content along channels and style along network layers. Our proposed network transfers the style of given glyphs to the contents of unseen ones, capturing highly stylized fonts found in the real-world such as those on movie posters or infographics. We seek to transfer both the typographic stylization (ex. serifs and ears) as well as the textual stylization (ex. color gradients and effects.) We base our experiments on our collected data set including 10,000 fonts with different styles and demonstrate effective generalization from a very small number of observed glyphs.",
"title": ""
},
{
"docid": "6fee1cce864d858af6e28959961f5c24",
"text": "Much of the organic light emitting diode (OLED) characterization published to date addresses the high current regime encountered in the operation of passively addressed displays. Higher efficiency and brightness can be obtained by driving with an active matrix, but the lower instantaneous pixel currents place the OLEDs in a completely different operating mode. Results at these low current levels are presented and their impact on active matrix display design is discussed.",
"title": ""
},
{
"docid": "b8466da90f2e75df2cc8453564ddb3e8",
"text": "Deep neural networks have recently shown impressive classification performance on a diverse set of visual tasks. When deployed in real-world (noise-prone) environments, it is equally important that these classifiers satisfy robustness guarantees: small perturbations applied to the samples should not yield significant losses to the performance of the predictor. The goal of this paper is to discuss the robustness of deep networks to a diverse set of perturbations that may affect the samples in practice, including adversarial perturbations, random noise, and geometric transformations. Our paper further discusses the recent works that build on the robustness analysis to provide geometric insights on the classifier’s decision surface, which help in developing a better understanding of deep nets. The overview finally presents recent solutions that attempt to increase the robustness of deep networks. We hope that this review paper will contribute shedding light on the open research challenges in the robustness of deep networks, and will stir interest in the analysis of their fundamental properties.",
"title": ""
},
{
"docid": "f1aee9423f768081f575eeb1334cf7e4",
"text": "The mobile robots often perform the dangerous missions such as planetary exploration, reconnaissance, anti-terrorism, rescue, and so on. So it is required that the robots should be able to move in the complex and unpredictable environment where the ground might be soft and hard, even and uneven. To access to such terrains, a novel robot (NEZA-I) with the self-adaptive mobile mechanism is proposed and developed. It consists of a control system unit and two symmetric transformable wheel-track (TWT) units. Each TWT unit is driven only by one servo motor, and can efficiently move over rough terrain by changing the locomotion mode and transforming the track configuration. It means that the mobile mechanism of NEZA-I has self-adaptability to the irregular environment. The paper proposes the design concept of NEZA-I, presents the structure and the drive system of NEZA-I, and describes the self-adaptive principle of the mobile mechanism to the rough terrains. The locomotion mode and posture of the mobile mechanism is analyzed by the means of simulation. Finally, basic experiments verify the mobility of NEZA-I.",
"title": ""
},
{
"docid": "a84d2de19a34b914e583c9f4379b68da",
"text": "English) xx Abstract(Arabic) xxiiArabic) xxii",
"title": ""
},
{
"docid": "fce58bfa94acf2b26a50f816353e6bf2",
"text": "The perspective directions in evaluating network security are simulating possible malefactor’s actions, building the representation of these actions as attack graphs (trees, nets), the subsequent checking of various properties of these graphs, and determining security metrics which can explain possible ways to increase security level. The paper suggests a new approach to security evaluation based on comprehensive simulation of malefactor’s actions, construction of attack graphs and computation of different security metrics. The approach is intended for using both at design and exploitation stages of computer networks. The implemented software system is described, and the examples of experiments for analysis of network security level are considered.",
"title": ""
},
{
"docid": "fd721261c29395867ce3966bdaeeaa7a",
"text": "Cutaneous saltation provides interesting possibilities for applications. An illusion of vibrotactile mediolateral movement was elicited to a left dorsal forearm to investigate emotional (i.e., pleasantness) and cognitive (i.e., continuity) experiences to vibrotactile stimulation. Twelve participants were presented with nine saltatory stimuli delivered to a linearly aligned row of three vibrotactile actuators separated by 70 mm in distance. The stimuli were composed of three temporal parameters of 12, 24 and 48 ms for both burst duration and inter-burst interval to form all nine possible uniform pairs. First, the stimuli were ranked by the participants using a special three-step procedure. Second, the participants rated the stimuli using two nine-point bipolar scales measuring the pleasantness and continuity of each stimulus, separately. The results showed especially the interval between two successive bursts was a significant factor for saltation. Moreover, the temporal parameters seemed to affect more the experienced continuity of the stimuli compared to pleasantness. These findings encourage us to continue to further study the saltation and the effect of different parameters for subjective experience.",
"title": ""
},
{
"docid": "85c800d32457fe9532f892c1703ba2d3",
"text": "In this paper design and implementation of a two stage fully differential, RC Miller compensated CMOS operational amplifier is presented. High gain enables this circuit to operate efficiently in a closed loop feedback system, whereas high bandwidth makes it suitable for high speed applications. The design is also able to address any fluctuation in supply or dc input voltages and stabilizes the operation by nullifying the effects due to perturbations. Implementation has been done in 0.18 um technology using libraries from tsmc with the help of tools from Mentor Graphics and Cadence. Op-amp designed here exhibits >95 dB DC differential gain, ~135 MHz unity gain bandwidth, phase margin of ~53, and ~132 V/uS slew rate for typical 1 pF differential capacitive load. The power dissipation for 3.3V supply voltage at 27C temperature under other nominal conditions is 2.29mW. Excellent output differential swing of 5.9V and good liner range of operation are some of the additional features of design.",
"title": ""
},
{
"docid": "2a1eb2fa37809bfce258476463af793c",
"text": "Parkinson’s disease (PD) is a chronic disease that develops over years and varies dramatically in its clinical manifestations. A preferred strategy to resolve this heterogeneity and thus enable better prognosis and targeted therapies is to segment out more homogeneous patient sub-populations. However, it is challenging to evaluate the clinical similarities among patients because of the longitudinality and temporality of their records. To address this issue, we propose a deep model that directly learns patient similarity from longitudinal and multi-modal patient records with an Recurrent Neural Network (RNN) architecture, which learns the similarity between two longitudinal patient record sequences through dynamically matching temporal patterns in patient sequences. Evaluations on real world patient records demonstrate the promising utility and efficacy of the proposed architecture in personalized predictions.",
"title": ""
},
{
"docid": "6baefa75db89210c4059d3c1dad46488",
"text": "In this paper, we propose a framework for low-energy digital signal processing (DSP) where the supply voltage is scaled beyond the critical voltage required to match the critical path delay to the throughput. This deliberate introduction of input-dependent errors leads to degradation in the algorithmic performance, which is compensated for via algorithmic noise-tolerance (ANT) schemes. The resulting setup that comprises of the DSP architecture operating at sub-critical voltage and the error control scheme is referred to as soft DSP. It is shown that technology scaling renders the proposed scheme more effective as the delay penalty suffered due to voltage scaling reduces due to short channel effects. The effectiveness of the proposed scheme is also enhanced when arithmetic units with a higher “delay-imbalance” are employed. A prediction based error-control scheme is proposed to enhance the performance of the filtering algorithm in presence of errors due to soft computations. For a frequency selective filter, it is shown that the proposed scheme provides 60% 81% reduction in energy dissipation for filter bandwidths up to 0.5~ (where 27r corresponds to the sampling frequency fs) over that achieved via conventional voltage scaling, with a maximum of 0.5dB degradation in the output signal-to-noise ratio (SN%). It is also shown that the proposed algorithmic noise-tolerance schemes can be used to improve the performance of DSP algorithms in presence of bit-error rates of upto 10-s due to deep submicron (DSM) noise.",
"title": ""
},
{
"docid": "d3444b0cee83da2a94f4782c79e0ce48",
"text": "Predicting student academic performance plays an important role in academics. Classifying st udents using conventional techniques cannot give the desired lev l of accuracy, while doing it with the use of soft computing techniques may prove to be beneficial. A student can be classi fied into one of the available categories based on his behavioral and qualitative features. The paper presents a Neural N etwork model fused with Fuzzy Logic to model academi c profile of students. The model mimics teacher’s ability to deal with imprecise information representing student’s characteristics in linguistic form. The suggested model is developed in MATLAB which takes into consideration various features of students under study. The input to the model consists of dat of students studying in any faculty. A combination of Fuzzy Logic ARTMAP Neural Network results into a model useful for management of educational institutes for improving the quality of education. A good prediction of student’s success ione way to be in the competition in education sys tem. The use of Soft Computing methodology is justified for its real-time applicability in education system.",
"title": ""
},
{
"docid": "f4b271c7ee8bfd9f8aa4d4cf84c4efd4",
"text": "Today, and possibly for a long time to come, the full driving task is too complex an activity to be fully formalized as a sensing-acting robotics system that can be explicitly solved through model-based and learning-based approaches in order to achieve full unconstrained vehicle autonomy. Localization, mapping, scene perception, vehicle control, trajectory optimization, and higher-level planning decisions associated with autonomous vehicle development remain full of open challenges. This is especially true for unconstrained, real-world operation where the margin of allowable error is extremely small and the number of edge-cases is extremely large. Until these problems are solved, human beings will remain an integral part of the driving task, monitoring the AI system as it performs anywhere from just over 0% to just under 100% of the driving. The governing objectives of the MIT Autonomous Vehicle Technology (MIT-AVT) study are to (1) undertake large-scale real-world driving data collection that includes high-definition video to fuel the development of deep learning based internal and external perception systems, (2) gain a holistic understanding of how human beings interact with vehicle automation technology by integrating video data with vehicle state data, driver characteristics, mental models, and self-reported experiences with technology, and (3) identify how technology and other factors related to automation adoption and use can be improved in ways that save lives. In pursuing these objectives, we have instrumented 21 Tesla Model S and Model X vehicles, 2 Volvo S90 vehicles, 2 Range Rover Evoque, and 2 Cadillac CT6 vehicles for both long-term (over a year per driver) and medium term (one month per driver) naturalistic driving data collection. Furthermore, we are continually developing new methods for analysis of the massive-scale dataset collected from the instrumented vehicle fleet. The recorded data streams include IMU, GPS, CAN messages, and high-definition video streams of the driver face, the driver cabin, the forward roadway, and the instrument cluster (on select vehicles). The study is on-going and growing. To date, we have 99 participants, 11,846 days of participation, 405,807 miles, and 5.5 billion video frames. This paper presents the design of the study, the data collection hardware, the processing of the data, and the computer vision algorithms currently being used to extract actionable knowledge from the data. MIT Autonomous Vehicle",
"title": ""
},
{
"docid": "49ef68eabca989e07f420a3a88386c77",
"text": "Identifying the language used will typically be the first step in most natural language processing tasks. Among the wide variety of language identification methods discussed in the literature, the ones employing the Cavnar and Trenkle (1994) approach to text categorization based on character n-gram frequencies have been particularly successful. This paper presents the R extension package textcat for n-gram based text categorization which implements both the Cavnar and Trenkle approach as well as a reduced n-gram approach designed to remove redundancies of the original approach. A multi-lingual corpus obtained from the Wikipedia pages available on a selection of topics is used to illustrate the functionality of the package and the performance of the provided language identification methods.",
"title": ""
},
{
"docid": "17953a3e86d3a4396cbd8a911c477f07",
"text": "We introduce Deep Semantic Embedding (DSE), a supervised learning algorithm which computes semantic representation for text documents by respecting their similarity to a given query. Unlike other methods that use singlelayer learning machines, DSE maps word inputs into a lowdimensional semantic space with deep neural network, and achieves a highly nonlinear embedding to model the human perception of text semantics. Through discriminative finetuning of the deep neural network, DSE is able to encode the relative similarity between relevant/irrelevant document pairs in training data, and hence learn a reliable ranking score for a query-document pair. We present test results on datasets including scientific publications and user-generated knowledge base.",
"title": ""
},
{
"docid": "9897f5e64b4a5d6d80fadb96cb612515",
"text": "Deep convolutional neural networks (CNNs) are rapidly becoming the dominant approach to computer vision and a major component of many other pervasive machine learning tasks, such as speech recognition, natural language processing, and fraud detection. As a result, accelerators for efficiently evaluating CNNs are rapidly growing in popularity. The conventional approaches to designing such CNN accelerators is to focus on creating accelerators to iteratively process the CNN layers. However, by processing each layer to completion, the accelerator designs must use off-chip memory to store intermediate data between layers, because the intermediate data are too large to fit on chip. In this work, we observe that a previously unexplored dimension exists in the design space of CNN accelerators that focuses on the dataflow across convolutional layers. We find that we are able to fuse the processing of multiple CNN layers by modifying the order in which the input data are brought on chip, enabling caching of intermediate data between the evaluation of adjacent CNN layers. We demonstrate the effectiveness of our approach by constructing a fused-layer CNN accelerator for the first five convolutional layers of the VGGNet-E network and comparing it to the state-of-the-art accelerator implemented on a Xilinx Virtex-7 FPGA. We find that, by using 362KB of on-chip storage, our fused-layer accelerator minimizes off-chip feature map data transfer, reducing the total transfer by 95%, from 77MB down to 3.6MB per image.",
"title": ""
},
{
"docid": "7d33ba30fd30dce2cd4a3f5558a8c0ba",
"text": "It has long been conjectured that hypothesis spaces suitable for data that is compositional in nature, such as text or images, may be more efficiently represented with deep hierarchical architectures than with shallow ones. Despite the vast empirical evidence, formal arguments to date are limited and do not capture the kind of networks used in practice. Using tensor factorization, we derive a universal hypothesis space implemented by an arithmetic circuit over functions applied to local data structures (e.g. image patches). The resulting networks first pass the input through a representation layer, and then proceed with a sequence of layers comprising sum followed by product-pooling, where sum corresponds to the widely used convolution operator. The hierarchical structure of networks is born from factorizations of tensors based on the linear weights of the arithmetic circuits. We show that a shallow network corresponds to a rank-1 decomposition, whereas a deep network corresponds to a Hierarchical Tucker (HT) decomposition. Log-space computation for numerical stability transforms the networks into SimNets.",
"title": ""
},
{
"docid": "5cc374d64b9f62de9c1142770bb6e0e7",
"text": "The demand for inexpensive and ubiquitous accurate motion-detection sensors for road safety, smart homes and robotics justifies the interest in single-chip mm-Wave radars: a high carrier frequency allows for a high angular resolution in a compact multi-antenna system and a wide bandwidth allows fora high depth resolution. With the objective of single-chip radar systems, CMOS is the natural candidate to replace SiGe as a leading technology [1-6].",
"title": ""
},
{
"docid": "c8bbc713aecbc6682d21268ee58ca258",
"text": "Traditional approaches to knowledge base completion have been based on symbolic representations. Lowdimensional vector embedding models proposed recently for this task are attractive since they generalize to possibly unlimited sets of relations. A significant drawback of previous embedding models for KB completion is that they merely support reasoning on individual relations (e.g., bornIn(X,Y )⇒ nationality(X,Y )). In this work, we develop models for KB completion that support chains of reasoning on paths of any length using compositional vector space models. We construct compositional vector representations for the paths in the KB graph from the semantic vector representations of the binary relations in that path and perform inference directly in the vector space. Unlike previous methods, our approach can generalize to paths that are unseen in training and, in a zero-shot setting, predict target relations without supervised training data for that relation.",
"title": ""
}
] |
scidocsrr
|
f2dfe17a41550f3ee1fca7d51438e76c
|
Open source real-time control software for the Kuka light weight robot
|
[
{
"docid": "81b03da5e09cb1ac733c966b33d0acb1",
"text": "Abstrud In the last two years a third generation of torque-controlled light weight robots has been developed in DLR‘s robotics and mechatronics lab which is based on all the experiences that have been made with the first two generations. It aims at reaching the limits of what seems achievable with present day technologies not only with respect to light-weight, but also with respect to minimal power consumption and losses. One of the main gaps we tried to close in version III was the development of a new, robot-dedicated high energy motor designed with the best available techniques of concurrent engineering, and the renewed efforts to save weight in the links by using ultralight carbon fibres.",
"title": ""
}
] |
[
{
"docid": "2b9b9c1a012cd9f549acef26cd8b0156",
"text": "VAES stands for variable AES. VAES3 is the third generation format-preserving encryption algorithm that was developed in a report [4] simultaneously with the comprehensive paper on FPE [1] and subsequently updated slightly to be in concert with the FFX standard proposal. The standard proposal of FFX includes, in an appendix, example instantiations called A2 and A10. A follow on addendum [3] includes an instantiation called FFX[radix] . The stated intent of FFX is that it is a framework under which many implementations are compliant. The VAES3 scheme is compliant to those requirements. VAES3 was designed to meet security goals and requirements beyond the original example instantiations, and its design goals are slightly different than those of FFX[radix]. One of the unique features of VAES3 is a subkey step that enhances security and lengthens the lifetime of the key.",
"title": ""
},
{
"docid": "7b4dd695182f7e15e58f44e309bf897c",
"text": "Phosphorus is one of the most abundant elements preserved in earth, and it comprises a fraction of ∼0.1% of the earth crust. In general, phosphorus has several allotropes, and the two most commonly seen allotropes, i.e. white and red phosphorus, are widely used in explosives and safety matches. In addition, black phosphorus, though rarely mentioned, is a layered semiconductor and has great potential in optical and electronic applications. Remarkably, this layered material can be reduced to one single atomic layer in the vertical direction owing to the van der Waals structure, and is known as phosphorene, in which the physical properties can be tremendously different from its bulk counterpart. In this review article, we trace back to the research history on black phosphorus of over 100 years from the synthesis to material properties, and extend the topic from black phosphorus to phosphorene. The physical and transport properties are highlighted for further applications in electronic and optoelectronics devices.",
"title": ""
},
{
"docid": "4a22a7dbcd1515e2b1b6e7748ffa3e02",
"text": "Average public feedback scores given to sellers have increased strongly over time in an online labor market. Changes in marketplace composition or improved seller performance cannot fully explain this trend. We propose that two factors inflated reputations: (1) it costs more to give bad feedback than good feedback and (2) this cost to raters is increasing in the cost to sellers from bad feedback. Together, (1) and (2) can lead to an equilibrium where feedback is always positive, regardless of performance. In response, the marketplace encouraged buyers to additionally give private feedback. This private feedback was substantially more candid and more predictive of future worker performance. When aggregates of private feedback about each job applicant were experimentally provided to employers as a private feedback score, employers used these scores when making screening and hiring decisions.",
"title": ""
},
{
"docid": "52a1f1de8db1a9aca14cb4df2395868b",
"text": "We propose a new approach to localizing handle-like grasp affordances in 3-D point clouds. The main idea is to identify a set of sufficient geometric conditions for the existence of a grasp affordance and to search the point cloud for neighborhoods that satisfy these conditions. Our goal is not to find all possible grasp affordances, but instead to develop a method of localizing important types of grasp affordances quickly and reliably. The strength of this method relative to other current approaches is that it is very practical: it can have good precision/recall for the types of affordances under consideration, it runs in real-time, and it is easy to adapt to different robots and operating scenarios. We validate with a set of experiments where the approach is used to enable the Rethink Baxter robot to localize and grasp unmodelled objects.",
"title": ""
},
{
"docid": "aa88b71c68ed757faf9eb896a81003f5",
"text": "Purpose The present study evaluated the platelet distribution pattern and growth factor release (VEGF, TGF-β1 and EGF) within three PRF (platelet-rich-fibrin) matrices (PRF, A-PRF and A-PRF+) that were prepared using different relative centrifugation forces (RCF) and centrifugation times. Materials and methods immunohistochemistry was conducted to assess the platelet distribution pattern within three PRF matrices. The growth factor release was measured over 10 days using ELISA. Results The VEGF protein content showed the highest release on day 7; A-PRF+ showed a significantly higher rate than A-PRF and PRF. The accumulated release on day 10 was significantly higher in A-PRF+ compared with A-PRF and PRF. TGF-β1 release in A-PRF and A-PRF+ showed significantly higher values on days 7 and 10 compared with PRF. EGF release revealed a maximum at 24 h in all groups. Toward the end of the study, A-PRF+ demonstrated significantly higher EGF release than PRF. The accumulated growth factor releases of TGF-β1 and EGF on day 10 were significantly higher in A-PRF+ and A-PRF than in PRF. Moreover, platelets were located homogenously throughout the matrix in the A-PRF and A-PRF+ groups, whereas platelets in PRF were primarily observed within the lower portion. Discussion the present results show an increase growthfactor release by decreased RCF. However, further studies must be conducted to examine the extent to which enhancing the amount and the rate of released growth factors influence wound healing and biomaterial-based tissue regeneration. Conclusion These outcomes accentuate the fact that with a reduction of RCF according to the previously LSCC (described low speed centrifugation concept), growth factor release can be increased in leukocytes and platelets within the solid PRF matrices.",
"title": ""
},
{
"docid": "427bdf9dc6462c2569956745eaee6a1b",
"text": "Because there is increasing concern about low-back disability and its current medical management, this analysis attempts to construct a new theoretic framework for treatment. Observations of natural history and epidemiology suggest that low-back pain should be a benign, self-limiting condition, that low back-disability as opposed to pain is a relatively recent Western epidemic, and that the role of medicine in that epidemic must be critically examined. The traditional medical model of disease is contrasted with a biopsychosocial model of illness to analyze success and failure in low-back disorders. Studies of the mathematical relationship between the elements of illness in chronic low-back pain suggest that the biopsychosocial concept can be used as an operational model that explains many clinical observations. This model is used to compare rest and active rehabilitation for low-back pain. Rest is the commonest treatment prescribed after analgesics but is based on a doubtful rationale, and there is little evidence of any lasting benefit. There is, however, little doubt about the harmful effects--especially of prolonged bed rest. Conversely, there is no evidence that activity is harmful and, contrary to common belief, it does not necessarily make the pain worse. Experimental studies clearly show that controlled exercises not only restore function, reduce distress and illness behavior, and promote return to work, but actually reduce pain. Clinical studies confirm the value of active rehabilitation in practice. To achieve the goal of treating patients rather than spines, we must approach low-back disability as an illness rather than low-back pain as a purely physical disease. We must distinguish pain as a purely the symptoms and signs of distress and illness behavior from those of physical disease, and nominal from substantive diagnoses. Management must change from a negative philosophy of rest for pain to more active restoration of function. Only a new model and understanding of illness by physicians and patients alike makes real change possible.",
"title": ""
},
{
"docid": "989cdc80521e1c8761f733ad3ed49d79",
"text": "The wide availability of sensing devices in the medical domain causes the creation of large and very large data sets. Hence, tasks as the classification in such data sets becomes more and more difficult. Deep Neural Networks (DNNs) are very effective in classification, yet finding the best values for their hyper-parameters is a difficult and time-consuming task. This paper introduces an approach to decrease execution times to automatically find good hyper-parameter values for DNN through Evolutionary Algorithms when classification task is faced. This decrease is obtained through the combination of two mechanisms. The former is constituted by a distributed version for a Differential Evolution algorithm. The latter is based on a procedure aimed at reducing the size of the training set and relying on a decomposition into cubes of the space of the data set attributes. Experiments are carried out on a medical data set about Obstructive Sleep Anpnea. They show that sub-optimal DNN hyper-parameter values are obtained in a much lower time with respect to the case where this reduction is not effected, and that this does not come to the detriment of the accuracy in the classification over the test set items.",
"title": ""
},
{
"docid": "e2a605f5c22592bd5ca828d4893984be",
"text": "Deep neural networks are complex and opaque. As they enter application in a variety of important and safety critical domains, users seek methods to explain their output predictions. We develop an approach to explaining deep neural networks by constructing causal models on salient concepts contained in a CNN. We develop methods to extract salient concepts throughout a target network by using autoencoders trained to extract humanunderstandable representations of network activations. We then build a bayesian causal model using these extracted concepts as variables in order to explain image classification. Finally, we use this causal model to identify and visualize features with significant causal influence on final classification.",
"title": ""
},
{
"docid": "22658b675b501059ec5a7905f6b766ef",
"text": "The purpose of this study was to compare the physiological results of 2 incremental graded exercise tests (GXTs) and correlate these results with a short-distance laboratory cycle time trial (TT). Eleven men (age 25 +/- 5 years, Vo(2)max 62 +/- 8 ml.kg(-1).min(-1)) randomly underwent 3 laboratory tests performed on a cycle ergometer. The first 2 tests consisted of a GXT consisting of either 3-minute (GXT(3-min)) or 5-minute (GXT(5-min)) workload increments. The third test involved 1 laboratory 30-minute TT. The peak power output, lactate threshold, onset of blood lactate accumulation, and maximum displacement threshold (Dmax) determined from each GXT was not significantly different and in agreement when measured from the GXT(3-min) or GXT(5-min). Furthermore, similar correlation coefficients were found among the results of each GXT and average power output in the 30-minute cycling TT. Hence, the results of either GXT can be used to predict performance or for training prescription.",
"title": ""
},
{
"docid": "343a2035ca2136bc38451c0e92aeb7fc",
"text": "Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire type neuron models where each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can only depend on spike timing and, potentially, on membrane potential, as well as on the value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of the above variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.",
"title": ""
},
{
"docid": "c90eae76dbde16de8d52170c2715bd7a",
"text": "Several literatures converge on the idea that approach and avoidance/withdrawal behaviors are managed by two partially distinct self-regulatory system. The functions of these systems also appear to be embodied in discrepancyreducing and -enlarging feedback loops, respectively. This article describes how the feedback construct has been used to address these two classes of action and the affective experiences that relate to them. Further discussion centers on the development of measures of individual differences in approach and avoidance tendencies, and how these measures can be (and have been) used as research tools, to investigate whether other phenomena have their roots in approach or avoidance.",
"title": ""
},
{
"docid": "fd87b56e57b6750aa0e018724f5ba975",
"text": "An effective design of effective and efficient self-adaptive systems may rely on several existing approaches. Software models and model checking techniques at run time represent one of them since they support automatic reasoning about such changes, detect harmful configurations, and potentially enable appropriate (self-)reactions. However, traditional model checking techniques and tools may not be applied as they are at run time, since they hardly meet the constraints imposed by on-the-fly analysis, in terms of execution time and memory occupation. For this reason, efficient run-time model checking represents a crucial research challenge. This paper precisely addresses this issue and focuses on probabilistic run-time model checking in which reliability models are given in terms of Discrete Time Markov Chains which are verified at run-time against a set of requirements expressed as logical formulae. In particular, the paper discusses the use of probabilistic model checking at run-time for selfadaptive systems by surveying and comparing the existing approaches divided in two categories: state-elimination algorithms and algebra-based algorithms. The discussion is supported by a realistic example and by empirical experiments.",
"title": ""
},
{
"docid": "e4e0e01b3af99dfd88ff03a1057b40d3",
"text": "There is a tension between user and author control of narratives in multimedia systems and virtual environments. Reducing the interactivity gives the author more control over when and how users experience key events in a narrative, but may lead to less immersion and engagement. Allowing the user to freely explore the virtual space introduces the risk that important narrative events will never be experienced. One approach to striking a balance between user freedom and author control is adaptation of narrative event presentation (i.e. changing the time, location, or method of presentation of a particular event in order to better communicate with the user). In this paper, we describe the architecture of a system capable of dynamically supporting narrative event adaptation. We also report results from two studies comparing adapted narrative presentation with two other forms of unadapted presentation - events with author selected views (movie), and events with user selected views (traditional VE). An analysis of user performance and feedback offers support for the hypothesis that adaptation can improve comprehension of narrative events in virtual environments while maintaining a sense of user control.",
"title": ""
},
{
"docid": "047480185afbea439eee2ee803b9d1f9",
"text": "The ability to perceive and analyze terrain is a key problem in mobile robot navigation. Terrain perception problems arise in planetary robotics, agriculture, mining, and, of course, self-driving cars. Here, we introduce the PTA (probabilistic terrain analysis) algorithm for terrain classification with a fastmoving robot platform. The PTA algorithm uses probabilistic techniques to integrate range measurements over time, and relies on efficient statistical tests for distinguishing drivable from nondrivable terrain. By using probabilistic techniques, PTA is able to accommodate severe errors in sensing, and identify obstacles with nearly 100% accuracy at speeds of up to 35mph. The PTA algorithm was an essential component in the DARPA Grand Challenge, where it enabled our robot Stanley to traverse the entire course in record time.",
"title": ""
},
{
"docid": "1573dcbb7b858ab6802018484f00ef91",
"text": "There is a multitude of tools available for Business Model Innovation (BMI). However, Business models (BM) and supporting tools are not yet widely known by micro, small and medium sized companies (SMEs). In this paper, we build on analysis of 61 cases to present typical BMI paths of European SMEs. Firstly, we constructed two paths for established companies that we named as 'I want to grow' and 'I want to make my business profitable'. We also found one path for start-ups: 'I want to start a new business'. Secondly, we suggest appropriate BM toolsets for the three paths. The identified paths and related tools contribute to BMI research and practise with an aim to boost BMI in SMEs.",
"title": ""
},
{
"docid": "3cb6829b876787018856abfaf63f05ad",
"text": "BACKGROUND\nRhinoplasty remains one of the most challenging operations, as exemplified in the Middle Eastern patient. The ill-defined, droopy tip, wide and high dorsum, and thick skin envelope mandate meticulous attention to preoperative evaluation and efficacious yet safe surgical maneuvers. The authors provide a systematic approach to evaluation and improvement of surgical outcomes in this patient population.\n\n\nMETHODS\nA retrospective, 3-year review identified patients of Middle Eastern heritage who underwent primary rhinoplasty and those who did not but had nasal photographs. Photographs and operative records (when applicable) were reviewed. Specific nasal characteristics, component-directed surgical techniques, and aesthetic outcomes were delineated.\n\n\nRESULTS\nThe Middle Eastern nose has a combination of specific nasal traits, with some variability, including thick/sebaceous skin (excess fibrofatty tissue), high/wide dorsum with cartilaginous and bony humps, ill-defined nasal tip, weak/thin lateral crura relative to the skin envelope, nostril-tip imbalance, acute nasolabial and columellar-labial angles, and a droopy/hyperdynamic nasal tip. An aggressive yet nondestructive surgical approach to address the nasal imbalance often requires soft-tissue debulking, significant cartilaginous framework modification (with augmentation/strengthening), tip refinement/rotation/projection, low osteotomies, and depressor septi nasi muscle treatment. The most common postoperative defects were related to soft-tissue scarring, thickened skin envelope, dorsum irregularities, and prolonged edema in the supratip/tip region.\n\n\nCONCLUSIONS\nIt is critical to improve the strength of the cartilaginous framework with respect to the thick, noncontractile skin/soft-tissue envelope, particularly when moderate to large dorsal reduction is required. A multitude of surgical maneuvers are often necessary to address all the salient characteristics of the Middle Eastern nose and to produce the desired aesthetic result.",
"title": ""
},
{
"docid": "39180c1e2636a12a9d46d94fe3ebfa65",
"text": "We present a novel machine learning based algorithm extending the interaction space around mobile devices. The technique uses only the RGB camera now commonplace on off-the-shelf mobile devices. Our algorithm robustly recognizes a wide range of in-air gestures, supporting user variation, and varying lighting conditions. We demonstrate that our algorithm runs in real-time on unmodified mobile devices, including resource-constrained smartphones and smartwatches. Our goal is not to replace the touchscreen as primary input device, but rather to augment and enrich the existing interaction vocabulary using gestures. While touch input works well for many scenarios, we demonstrate numerous interaction tasks such as mode switches, application and task management, menu selection and certain types of navigation, where such input can be either complemented or better served by in-air gestures. This removes screen real-estate issues on small touchscreens, and allows input to be expanded to the 3D space around the device. We present results for recognition accuracy (93% test and 98% train), impact of memory footprint and other model parameters. Finally, we report results from preliminary user evaluations, discuss advantages and limitations and conclude with directions for future work.",
"title": ""
},
{
"docid": "3b5555c5624fc11bbd24cfb8fff669f0",
"text": "Redundancy resolution is a critical problem in the control of robotic manipulators. Recurrent neural networks (RNNs), as inherently parallel processing models for time-sequence processing, are potentially applicable for the motion control of manipulators. However, the development of neural models for high-accuracy and real-time control is a challenging problem. This paper identifies two limitations of the existing RNN solutions for manipulator control, i.e., position error accumulation and the convex restriction on the projection set, and overcomes them by proposing two modified neural network models. Our method allows nonconvex sets for projection operations, and control error does not accumulate over time in the presence of noise. Unlike most works in which RNNs are used to process time sequences, the proposed approach is model-based and training-free, which makes it possible to achieve fast tracking of reference signals with superior robustness and accuracy. Theoretical analysis reveals the global stability of a system under the control of the proposed neural networks. Simulation results confirm the effectiveness of the proposed control method in both the position regulation and tracking control of redundant PUMA 560 manipulators.",
"title": ""
},
{
"docid": "a7bf370e83bd37ed4f83c3846cfaaf97",
"text": "This paper presents the design and implementation of an evanescent tunable combline filter based on electronic tuning with the use of RF-MEMS capacitor banks. The use of MEMS tuning circuit results in the compact implementation of the proposed filter with high-Q and near to zero DC power consumption. The proposed filter consist of combline resonators with tuning disks that are loaded with RF-MEMS capacitor banks. A two-pole filter is designed and measured based on the proposed tuning concept. The filter operates at 2.5 GHz with a bandwidth of 22 MHz. Measurement results demonstrate a tuning range of 110 MHz while the quality factor is above 374 (1300–374 over the tuning range).",
"title": ""
}
] |
scidocsrr
|
3522f6f9a5740a1562e42366aa734fe0
|
Routing betweenness centrality
|
[
{
"docid": "e054c2d3b52441eaf801e7d2dd54dce9",
"text": "The concept of centrality is often invoked in social network analysis, and diverse indices have been proposed to measure it. This paper develops a unified framework for the measurement of centrality. All measures of centrality assess a node’s involvement in the walk structure of a network. Measures vary along four key dimensions: type of nodal involvement assessed, type of walk considered, property of walk assessed, and choice of summary measure. If we cross-classify measures by type of nodal involvement (radial versus medial) and property of walk assessed (volume versus length), we obtain a four-fold polychotomization with one cell empty which mirrors Freeman’s 1979 categorization. At a more substantive level, measures of centrality summarize a node’s involvement in or contribution to the cohesiveness of the network. Radial measures in particular are reductions of pair-wise proximities/cohesion to attributes of nodes or actors. The usefulness and interpretability of radial measures depend on the fit of the cohesion matrix to the onedimensional model. In network terms, a network that is fit by a one-dimensional model has a core-periphery structure in which all nodes revolve more or less closely around a single core. This in turn implies that the network does not contain distinct cohesive subgroups. Thus, centrality is shown to be intimately connected with the cohesive subgroup structure of a network. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "d041b33794a14d07b68b907d38f29181",
"text": "This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called \"Constant Load\" and \"Constant Number of Records\", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.",
"title": ""
},
{
"docid": "801a197f630189ab0a9b79d3cbfe904b",
"text": "Historically, Vivaldi arrays are known to suffer from high cross-polarization when scanning in the nonprincipal planes—a fault without a universal solution. In this paper, a solution to this issue is proposed in the form of a new Vivaldi-type array with low cross-polarization termed the Sliced Notch Antenna (SNA) array. For the first proof-of-concept demonstration, simulations and measurements are comparatively presented for two single-polarized <inline-formula> <tex-math notation=\"LaTeX\">$19 \\times 19$ </tex-math></inline-formula> arrays—the proposed SNA and its Vivaldi counterpart—each operating over a 1.2–12 GHz (10:1) band. Both arrays are built using typical vertically integrated printed-circuit board cards, and are designed to exhibit VSWR < 2.5 within a 60° scan cone over most of the 10:1 band as infinite arrays. Measurement results compare very favorably with full-wave finite array simulations that include array truncation effects. The SNA array element demonstrates well-behaved polarization performance versus frequency, with more than 20 dB of D-plane <inline-formula> <tex-math notation=\"LaTeX\">$\\theta \\!=\\!45 {^{\\circ }}$ </tex-math></inline-formula> polarization purity improvement at the high frequency. Moreover, the SNA element also: 1) offers better suppression of classical Vivaldi E-plane scan blindnesses; 2) requires fewer plated through vias for stripline-based designs; and 3) allows relaxed adjacent element electrical contact requirements for dual-polarized arrangements.",
"title": ""
},
{
"docid": "53aa1145047cc06a1c401b04896ff1b1",
"text": "Due to the increasing availability of whole slide scanners facilitating digitization of histopathological tissue, there is a strong demand for the development of computer based image analysis systems. In this work, the focus is on the segmentation of the glomeruli constituting a highly relevant structure in renal histopathology, which has not been investigated before in combination with CNNs. We propose two different CNN cascades for segmentation applications with sparse objects. These approaches are applied to the problem of glomerulus segmentation and compared with conventional fully-convolutional networks. Overall, with the best performing cascade approach, single CNNs are outperformed and a pixel-level Dice similarity coefficient of 0.90 is obtained. Combined with qualitative and further object-level analyses the obtained results are assessed as excellent also compared to recent approaches. In conclusion, we can state that especially one of the proposed cascade networks proved to be a highly powerful tool for segmenting the renal glomeruli providing best segmentation accuracies and also keeping the computing time at a low level.",
"title": ""
},
{
"docid": "e31fd6ce6b78a238548e802d21b05590",
"text": "Machine learning techniques have long been used for various purposes in software engineering. This paper provides a brief overview of the state of the art and reports on a number of novel applications I was involved with in the area of software testing. Reflecting on this personal experience, I draw lessons learned and argue that more research should be performed in that direction as machine learning has the potential to significantly help in addressing some of the long-standing software testing problems.",
"title": ""
},
{
"docid": "e2535e6887760b20a18c25385c2926ef",
"text": "The rapid growth in demands for computing everywhere has made computer a pivotal component of human mankind daily lives. Whether we use the computers to gather information from the Web, to utilize them for entertainment purposes or to use them for running businesses, computers are noticeably becoming more widespread, mobile and smaller in size. What we often overlook and did not notice is the presence of those billions of small pervasive computing devices around us which provide the intelligence being integrated into the real world. These pervasive computing devices can help to solve some crucial problems in the activities of our daily lives. Take for examples, in the military application, a large quantity of the pervasive computing devices could be deployed over a battlefield to detect enemy intrusion instead of manually deploying the landmines for battlefield surveillance and intrusion detection Chong et al. (2003). Additionally, in structural health monitoring, these pervasive computing devices are also used to detect for any damage in buildings, bridges, ships and aircraft Kurata et al. (2006). To achieve this vision of pervasive computing, also known as ubiquitous computing, many computational devices are integrated in everyday objects and activities to enable better humancomputer interaction. These computational devices are generally equipped with sensing, processing and communicating abilities and these devices are known as wireless sensor nodes. When several wireless sensor nodes are meshed together, they form a network called the Wireless Sensor Network (WSN). Sensor nodes arranged in network form will definitely exhibit more and better characteristics than individual sensor nodes. WSN is one of the popular examples of ubiquitous computing as it represents a new generation of real-time embedded system which offers distinctly attractive enabling technologies for pervasive computing environments. Unlike the conventional networked systems like Wireless Local Area Network (WLAN) and Global System for Mobile communications (GSM), WSN promise to couple end users directly to sensor measurements and provide information that is precisely localized in time and/or space, according to the users’ needs or demands. In the Massachusetts Institute of Technology (MIT) technology review magazine of innovation published in February 2003 MIT (2003), the editors have identified Wireless Sensor Networks as the first of the top ten emerging technologies that will change the world. This explains why WSN has swiftly become a hot research topic in both academic and industry. 2",
"title": ""
},
{
"docid": "958fea977cf31ddabd291da68754367d",
"text": "Recently, learning based hashing techniques have attracted broad research interests because they can support efficient storage and retrieval for high-dimensional data such as images, videos, documents, etc. However, a major difficulty of learning to hash lies in handling the discrete constraints imposed on the pursued hash codes, which typically makes hash optimizations very challenging (NP-hard in general). In this work, we propose a new supervised hashing framework, where the learning objective is to generate the optimal binary hash codes for linear classification. By introducing an auxiliary variable, we reformulate the objective such that it can be solved substantially efficiently by employing a regularization algorithm. One of the key steps in this algorithm is to solve a regularization sub-problem associated with the NP-hard binary optimization. We show that the sub-problem admits an analytical solution via cyclic coordinate descent. As such, a high-quality discrete solution can eventually be obtained in an efficient computing manner, therefore enabling to tackle massive datasets. We evaluate the proposed approach, dubbed Supervised Discrete Hashing (SDH), on four large image datasets and demonstrate its superiority to the state-of-the-art hashing methods in large-scale image retrieval.",
"title": ""
},
{
"docid": "2e7ee3674bdd58967380a59d638b2b17",
"text": "Media applications are characterized by large amounts of available parallelism, little data reuse, and a high computation to memory access ratio. While these characteristics are poorly matched to conventional microprocessor architectures, they are a good fit for modern VLSI technology with its high arithmetic capacity but limited global bandwidth. The stream programming model, in which an application is coded as streams of data records passing through computation kernels, exposes both parallelism and locality in media applications that can be exploited by VLSI architectures. The Imagine architecture supports the stream programming model by providing a bandwidth hierarchy tailored to the demands of media applications. Compared to a conventional scalar processor, Imagine reduces the global register and memory bandwidth required by typical applications by factors of 13 and 21 respectively. This bandwidth efficiency enables a single chip Imagine processor to achieve a peak performance of 16.2GFLOPS (single-precision floating point) and sustained performance of up to 8.5GFLOPS on media processing kernels.",
"title": ""
},
{
"docid": "f54631ac73d42af0ccb2811d483fe8c2",
"text": "Understanding large, structured documents like scholarly articles, requests for proposals or business reports is a complex and difficult task. It involves discovering a document’s overall purpose and subject(s), understanding the function and meaning of its sections and subsections, and extracting low level entities and facts about them. In this research, we present a deep learning based document ontology to capture the general purpose semantic structure and domain specific semantic concepts from a large number of academic articles and business documents. The ontology is able to describe different functional parts of a document, which can be used to enhance semantic indexing for a better understanding by human beings and machines. We evaluate our models through extensive experiments on datasets of scholarly articles from arxiv and Request for Proposal documents.",
"title": ""
},
{
"docid": "3038ec4ac3d648a4ec052b8d7f854107",
"text": "Anomalous data can negatively impact energy forecasting by causing model parameters to be incorrectly estimated. This paper presents two approaches for the detection and imputation of anomalies in time series data. Autoregressive with exogenous inputs (ARX) and artificial neural network (ANN) models are used to extract the characteristics of time series. Anomalies are detected by performing hypothesis testing on the extrema of the residuals, and the anomalous data points are imputed using the ARX and ANN models. Because the anomalies affect the model coefficients, the data cleaning process is performed iteratively. The models are re-learned on “cleaner” data after an anomaly is imputed. The anomalous data are reimputed to each iteration using the updated ARX and ANN models. The ARX and ANN data cleaning models are evaluated on natural gas time series data. This paper demonstrates that the proposed approaches are able to identify and impute anomalous data points. Forecasting models learned on the unclean data and the cleaned data are tested on an uncleaned out-of-sample dataset. The forecasting model learned on the cleaned data outperforms the model learned on the unclean data with 1.67% improvement in the mean absolute percentage errors and a 32.8% improvement in the root mean squared error. Existing challenges include correctly identifying specific types of anomalies such as negative flows.",
"title": ""
},
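The passage above describes an iterative detect-impute loop: fit a model, test the residual extrema, impute the flagged point, and re-fit on the cleaner series. The sketch below illustrates that loop with a plain autoregressive model standing in for the paper's ARX/ANN models; the AR order, z-score threshold, and iteration cap are illustrative assumptions, not values taken from the passage.

```python
# Illustrative sketch (not the paper's code): iterative residual-based anomaly
# detection and imputation using a simple AR(p) model as a stand-in.
import numpy as np

def fit_ar(y, p):
    """Least-squares fit of an AR(p) model with intercept; returns coefficients."""
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def predict_ar(y, coef, p):
    """One-step-ahead predictions for indices p .. len(y)-1."""
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    return X @ coef

def clean_series(y, p=3, z_thresh=4.0, max_iter=20):
    """Iteratively flag the worst residual extremum, impute it, and re-fit."""
    y = y.astype(float).copy()
    for _ in range(max_iter):
        coef = fit_ar(y, p)
        resid = y[p:] - predict_ar(y, coef, p)
        z = (resid - resid.mean()) / resid.std()
        worst = int(np.argmax(np.abs(z)))
        if abs(z[worst]) < z_thresh:          # no remaining anomaly
            break
        idx = worst + p
        y[idx] = predict_ar(y, coef, p)[worst]  # impute with the model value
    return y
```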
{
"docid": "43685bd1927f309c8b9a5edf980ab53f",
"text": "In this paper we propose a pipeline for accurate 3D reconstruction from multiple images that deals with some of the possible sources of inaccuracy present in the input data. Namely, we address the problem of inaccurate camera calibration by including a method [1] adjusting the camera parameters in a global structure-and-motion problem which is solved with a depth map representation that is suitable to large scenes. Secondly, we take the triangular mesh and calibration improved by the global method in the first phase to refine the surface both geometrically and radiometrically. Here we propose surface energy which combines photo consistency with contour matching and minimize it with a gradient method. Our main contribution lies in effective computation of the gradient that naturally balances weight between regularizing and data terms by employing scale space approach to find the correct local minimum. The results are demonstrated on standard high-resolution datasets and a complex outdoor scene.",
"title": ""
},
{
"docid": "3eeacf0fb315910975e5ff0ffc4fe800",
"text": "Social networks are rich in various kinds of contents such as text and multimedia. The ability to apply text mining algorithms effectively in the context of text data is critical for a wide variety of applications. Social networks require text mining algorithms for a wide variety of applications such as keyword search, classi cation, and clustering. While search and classi cation are well known applications for a wide variety of scenarios, social networks have a much richer structure both in terms of text and links. Much of the work in the area uses either purely the text content or purely the linkage structure. However, many recent algorithms use a combination of linkage and content information for mining purposes. In many cases, it turns out that the use of a combination of linkage and content information provides much more effective results than a system which is based purely on either of the two. This paper provides a survey of such algorithms, and the advantages observed by using such algorithms in different scenarios. We also present avenues for future research in this area.",
"title": ""
},
{
"docid": "772193675598233ba1ab60936b3091d4",
"text": "The proposed quasiresonant control scheme can be widely used in a dc-dc flyback converter because it can achieve high efficiency with minimized external components. The proposed dynamic frequency selector improves conversion efficiency especially at light loads to meet the requirement of green power since the converter automatically switches to the discontinuous conduction mode for reducing the switching frequency and the switching power loss. Furthermore, low quiescent current can be guaranteed by the constant current startup circuit to further reduce power loss after the startup procedure. The test chip fabricated in VIS 0.5 μm 500 V UHV process occupies an active silicon area of 3.6 mm 2. The peak efficiency can achieve 92% at load of 80 W and 85% efficiency at light load of 5 W.",
"title": ""
},
{
"docid": "2fa61482be37fd956e6eceb8e517411d",
"text": "According to analysis reports on road accidents of recent years, it's renowned that the main cause of road accidents resulting in deaths, severe injuries and monetary losses, is due to a drowsy or a sleepy driver. Drowsy state may be caused by lack of sleep, medication, drugs or driving continuously for long time period. An increase rate of roadside accidents caused due to drowsiness during driving indicates a need of a system that detects such state of a driver and alerts him prior to the occurrence of any accident. During the recent years, many researchers have shown interest in drowsiness detection. Their approaches basically monitor either physiological or behavioral characteristics related to the driver or the measures related to the vehicle being used. A literature survey summarizing some of the recent techniques proposed in this area is provided. To deal with this problem we propose an eye blink monitoring algorithm that uses eye feature points to determine the open or closed state of the eye and activate an alarm if the driver is drowsy. Detailed experimental findings are also presented to highlight the strengths and weaknesses of our technique. An accuracy of 94% has been recorded for the proposed methodology.",
"title": ""
},
{
"docid": "2049ad444e14db330e2256ce412a19f8",
"text": "1 of 11 08/06/07 18:23 Original: http://thebirdman.org/Index/Others/Others-Doc-Environment&Ecology/ +Doc-Environment&Ecology-FoodMatters/StimulatingPlantGrowthWithElectricity&Magnetism&Sound.htm 2007-08-06 Link here: http://blog.lege.net/content/StimulatingPlantGrowthWithElectricityMagnetismSound.html PDF \"printout\": http://blog.lege.net/content/StimulatingPlantGrowthWithElectricityMagnetismSound.pdf",
"title": ""
},
{
"docid": "af08fa19de97eed61afd28893692e7ec",
"text": "OpenACC is a new accelerator programming interface that provides a set of OpenMP-like loop directives for the programming of accelerators in an implicit and portable way. It allows the programmer to express the offloading of data and computations to accelerators, such that the porting process for legacy CPU-based applications can be significantly simplified. This paper focuses on the performance aspects of OpenACC using two micro benchmarks and one real-world computational fluid dynamics application. Both evaluations show that in general OpenACC performance is approximately 50\\% lower than CUDA. However, for some applications it can reach up to 98\\% with careful manual optimizations. The results also indicate several limitations of the OpenACC specification that hamper full use of the GPU hardware resources, resulting in a significant performance gap when compared to a fully tuned CUDA code. The lack of a programming interface for the shared memory in particular results in as much as three times lower performance.",
"title": ""
},
{
"docid": "c4043bfa8cfd74f991ac13ce1edd5bf5",
"text": "Citations between scientific papers and related bibliometric indices, such as the h-index for authors and the impact factor for journals, are being increasingly used – often in controversial ways – as quantitative tools for research evaluation. Yet, a fundamental research question remains still open: to which extent do quantitative metrics capture the significance of scientific works? We analyze the network of citations among the 449, 935 papers published by the American Physical Society (APS) journals between 1893 and 2009, and focus on the comparison of metrics built on the citation count with network-based metrics. We contrast five article-level metrics with respect to the rankings that they assign to a set of fundamental papers, called Milestone Letters, carefully selected by the APS editors for “making long-lived contributions to physics, either by announcing significant discoveries, or by initiating new areas of research”. A new metric, which combines PageRank centrality with the explicit requirement that paper score is not biased by paper age, is the best-performing metric overall in identifying the Milestone Letters. The lack of time bias in the new metric makes it also possible to use it to compare papers of different age on the same scale. We find that networkbased metrics identify the Milestone Letters better than metrics based on the citation count, which suggests that the structure of the citation network contains information that can be used to improve the ranking of scientific publications. The methods and results presented here are relevant for all evolving systems where network centrality metrics are applied, for example the World Wide Web and online social networks.",
"title": ""
},
{
"docid": "54130e2dd3a202935facdad39c04d914",
"text": "Cross modal face matching between the thermal and visible spectrum is a much desired capability for night-time surveillance and security applications. Due to a very large modality gap, thermal-to-visible face recognition is one of the most challenging face matching problem. In this paper, we present an approach to bridge this modality gap by a significant margin. Our approach captures the highly non-linear relationship between the two modalities by using a deep neural network. Our model attempts to learn a non-linear mapping from visible to thermal spectrum while preserving the identity information. We show substantive performance improvement on a difficult thermal-visible face dataset (UND-X1). The presented approach improves the state-of-the-art by more than 10% in terms of Rank-1 identification and bridge the drop in performance due to the modality gap by more than 40%. The goal of training the deep network is to learn the projections that can be used to bring the two modalities together. Typically, this would mean regressing the representation from one modality towards the other. We construct a deep network comprising N +1 layers with m(k) units in the k-th layer, where k = 1,2, · · · ,N. For an input of x ∈Rd , each layer will output a non-linear projection by using the learned projection matrix W and the non-linear activation function g(·). The output of the k-th hidden layer is h(k) = g(W(k)h(k−1) + b(k)), where W(k) ∈ Rm×m(k−1) is the projection matrix to be learned in that layer, b(k) ∈Rm is a bias vector and g : Rm 7→ Rm is the non-linear activation function. Similarly, the output of the most top level hidden layer can be computed as:",
"title": ""
},
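The preceding passage spells out the per-layer computation h^(k) = g(W^(k) h^(k−1) + b^(k)). Below is a minimal sketch of that forward pass; the layer sizes, the tanh activation, and the random weights are assumptions made here for illustration only, and the training of the projections described in the passage is not shown.

```python
# Minimal sketch of the layer-wise projection h^(k) = g(W^(k) h^(k-1) + b^(k)).
# Sizes, tanh activation, and random weights are illustrative assumptions;
# no learning is performed here.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [128, 256, 256, 64]        # m^(0) .. m^(N), assumed for the demo

# W^(k) has shape (m^(k), m^(k-1)); b^(k) has shape (m^(k),)
weights = [rng.standard_normal((m_k, m_prev)) * 0.01
           for m_prev, m_k in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(m_k) for m_k in layer_sizes[1:]]

def forward(x):
    """Map an input feature vector x through the stacked non-linear layers."""
    h = x
    for W, b in zip(weights, biases):
        h = np.tanh(W @ h + b)           # g(.) chosen as tanh for illustration
    return h                             # topmost-layer projection

projection = forward(rng.standard_normal(layer_sizes[0]))
print(projection.shape)                  # (64,)
```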
{
"docid": "76e62af2971de3d11d684f1dd7100475",
"text": "Recent advances in memory research suggest methods that can be applied to enhance educational practices. We outline four principles of memory improvement that have emerged from research: 1) process material actively, 2) practice retrieval, 3) use distributed practice, and 4) use metamemory. Our discussion of each principle describes current experimental research underlying the principle and explains how people can take advantage of the principle to improve their learning. The techniques that we suggest are designed to increase efficiency—that is, to allow a person to learn more, in the same unit of study time, than someone using less efficient memory strategies. A common thread uniting all four principles is that people learn best when they are active participants in their own learning.",
"title": ""
},
{
"docid": "8eab9eab5b3d93e6688337128d647b06",
"text": "Primary triple-negative breast cancers (TNBCs), a tumour type defined by lack of oestrogen receptor, progesterone receptor and ERBB2 gene amplification, represent approximately 16% of all breast cancers. Here we show in 104 TNBC cases that at the time of diagnosis these cancers exhibit a wide and continuous spectrum of genomic evolution, with some having only a handful of coding somatic aberrations in a few pathways, whereas others contain hundreds of coding somatic mutations. High-throughput RNA sequencing (RNA-seq) revealed that only approximately 36% of mutations are expressed. Using deep re-sequencing measurements of allelic abundance for 2,414 somatic mutations, we determine for the first time—to our knowledge—in an epithelial tumour subtype, the relative abundance of clonal frequencies among cases representative of the population. We show that TNBCs vary widely in their clonal frequencies at the time of diagnosis, with the basal subtype of TNBC showing more variation than non-basal TNBC. Although p53 (also known as TP53), PIK3CA and PTEN somatic mutations seem to be clonally dominant compared to other genes, in some tumours their clonal frequencies are incompatible with founder status. Mutations in cytoskeletal, cell shape and motility proteins occurred at lower clonal frequencies, suggesting that they occurred later during tumour progression. Taken together, our results show that understanding the biology and therapeutic responses of patients with TNBC will require the determination of individual tumour clonal genotypes.",
"title": ""
},
{
"docid": "b8fcade88646ef6926e756f92064477b",
"text": "We have developed a stencil routing algorithm for implementing a GPU accelerated A-Buffer, by using a multisample texture to store a vector of fragments per pixel. First, all the fragments are captured per pixel in rasterization order. Second, a fullscreen shader pass sorts the fragments using a bitonic sort. At this point, the sorted fragments can be blended arbitrarily to implement various types of algorithms such as order independent transparency or layered depth image generation. Since we handle only 8 fragments per pass, we developed a method for detecting overflow, so we can do additional passes to capture more fragments.",
"title": ""
}
] |
scidocsrr
|
f7beb9e76a397ae7601cdfd22dc99b24
|
Medical Semantic Similarity with a Neural Language Model
|
[
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
},
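The passage above outlines the neural language-model idea: each context word is mapped through a shared distributed representation, and the probability of the next word is computed from the concatenated representations. The sketch below is a minimal, untrained illustration of that forward pass; the vocabulary size, dimensions, context length, and random weights are assumptions, not values from the passage.

```python
# Illustrative, untrained sketch of a neural n-gram language model:
# context words -> shared embeddings -> hidden layer -> softmax over vocabulary.
import numpy as np

rng = np.random.default_rng(0)
V, d, h, n_ctx = 5000, 64, 128, 3        # vocab, embedding dim, hidden dim, context size

C = rng.standard_normal((V, d)) * 0.01   # shared word-embedding matrix
H = rng.standard_normal((h, n_ctx * d)) * 0.01
U = rng.standard_normal((V, h)) * 0.01

def next_word_probs(context_ids):
    """Distribution over the next word given the previous n_ctx word ids."""
    x = np.concatenate([C[i] for i in context_ids])   # concatenated embeddings
    a = np.tanh(H @ x)
    logits = U @ a
    e = np.exp(logits - logits.max())                 # numerically stable softmax
    return e / e.sum()

probs = next_word_probs([12, 7, 420])
print(probs.shape, probs.sum())          # (5000,) ~1.0
```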
{
"docid": "b81f831c1152bb6a8812ad800324a6cd",
"text": "Measures of semantic similarity between concepts are widely used in Natural Language Processing. In this article, we show how six existing domain-independent measures can be adapted to the biomedical domain. These measures were originally based on WordNet, an English lexical database of concepts and relations. In this research, we adapt these measures to the SNOMED-CT ontology of medical concepts. The measures include two path-based measures, and three measures that augment path-based measures with information content statistics from corpora. We also derive a context vector measure based on medical corpora that can be used as a measure of semantic relatedness. These six measures are evaluated against a newly created test bed of 30 medical concept pairs scored by three physicians and nine medical coders. We find that the medical coders and physicians differ in their ratings, and that the context vector measure correlates most closely with the physicians, while the path-based measures and one of the information content measures correlates most closely with the medical coders. We conclude that there is a role both for more flexible measures of relatedness based on information derived from corpora, as well as for measures that rely on existing ontological structures.",
"title": ""
},
{
"docid": "650fe1308c081bfde0eea6885d6fa256",
"text": "MetaMap is a widely available program providing access to the concepts in the unified medical language system (UMLS) Metathesaurus from biomedical text. This study reports on MetaMap's evolution over more than a decade, concentrating on those features arising out of the research needs of the biomedical informatics community both within and outside of the National Library of Medicine. Such features include the detection of author-defined acronyms/abbreviations, the ability to browse the Metathesaurus for concepts even tenuously related to input text, the detection of negation in situations in which the polarity of predications is important, word sense disambiguation (WSD), and various technical and algorithmic features. Near-term plans for MetaMap development include the incorporation of chemical name recognition and enhanced WSD.",
"title": ""
}
] |
[
{
"docid": "f069501007d4c9d1ada190353d01c7e9",
"text": "A discrimination theory of selective perception was used to predict that a given trait would be spontaneously salient in a person's self-concept to the exten that this trait was distinctive for the person within her or his social groups. Sixth-grade students' general and physical spontaneous self-concepts were elicited in their classroom settings. The distinctiveness within the classroom of each student's characteristics on each of a variety of dimensions was determined, and it was found that in a majority of cases the dimension was significantly more salient in the spontaneous self-concepts of those students whose characteristic on thedimension was more distinctive. Also reported are incidental findings which include a description of the contents of spontaneous self-comcepts as well as determinants of their length and of the spontaneous mention of one's sex as part of one's self-concept.",
"title": ""
},
{
"docid": "47063493a3ae85f68b19314a1eed7388",
"text": "Several computational approaches have been proposed for inferring the affective state of the user, motivated for example by the goal of building improved interfaces that can adapt to the user’s needs and internal state. While fairly good results have been obtained for inferring the user state under highly controlled conditions, a considerable amount of work remains to be done for learning high-quality estimates of subjective evaluations of the state in more natural conditions. In this work, we discuss how two recent machine learning concepts, multi-view learning and multi-task learning, can be adapted for user state recognition, and demonstrate them on two data collections of varying quality. Multi-view learning enables combining multiple measurement sensors in a justified way while automatically learning the importance of each sensor. Multi-task learning, in turn, tells how multiple learning tasks can be learned together to improve the accuracy. We demonstrate the use of two types of multi-task learning: learning both multiple state indicators and models for multiple users together. We also illustrate how the benefits of multi-task learning and multi-view learning can be effectively combined in a unified model by introducing a novel algorithm.",
"title": ""
},
{
"docid": "82e298f7a7c8a4788310ed77f7dfb44f",
"text": "Internet addiction (IA) incurs significant social and financial costs in the form of physical side-effects, academic and occupational impairment, and serious relationship problems. The majority of previous studies on Internet addiction disorders (IAD) have focused on structural and functional abnormalities, while few studies have simultaneously investigated the structural and functional brain alterations underlying individual differences in IA tendencies measured by questionnaires in a healthy sample. Here we combined structural (regional gray matter volume, rGMV) and functional (resting-state functional connectivity, rsFC) information to explore the neural mechanisms underlying IAT in a large sample of 260 healthy young adults. The results showed that IAT scores were significantly and positively correlated with rGMV in the right dorsolateral prefrontal cortex (DLPFC, one key node of the cognitive control network, CCN), which might reflect reduced functioning of inhibitory control. More interestingly, decreased anticorrelations between the right DLPFC and the medial prefrontal cortex/rostral anterior cingulate cortex (mPFC/rACC, one key node of the default mode network, DMN) were associated with higher IAT scores, which might be associated with reduced efficiency of the CCN and DMN (e.g., diminished cognitive control and self-monitoring). Furthermore, the Stroop interference effect was positively associated with the volume of the DLPFC and with the IA scores, as well as with the connectivity between DLPFC and mPFC, which further indicated that rGMV variations in the DLPFC and decreased anticonnections between the DLPFC and mPFC may reflect addiction-related reduced inhibitory control and cognitive efficiency. These findings suggest the combination of structural and functional information can provide a valuable basis for further understanding of the mechanisms and pathogenesis of IA.",
"title": ""
},
{
"docid": "c4c683504db4d10265c2eadd8f47107c",
"text": "In this paper, an approach for industrial machine vision system is introduced for effective maintenance of inventory in order to minimize the production cost in supply chain network. The objective is to propose an efficient technique for object identification, localization, and report generation to monitor the inventory level in real time video stream based on the object appearance model. The appearance model is considered as visual signature by which individual object can be detected anywhere via camera feed. Herein, Speeded Up Robust Features (SURF) are used to identify the object. Firstly, SURF features are extracted from prototype image which refers to predefined template of individual objects, and then extracted from the camera feed of inventory i.e. scene image. Density based clustering on SURF points of prototype is done, followed by feature mapping of each cluster to SURF points in scene image. Homographic transforms are then used to obtain the location and mark the presence of objects in the scene image. Further, for better invariance to occlusion and faster computation, a novel method for tuning the hyper parameters of clustering is also proposed. The proposed methodology is found to be reliable and is able to give robust real time count of objects in inventory with invariance to scale, rotation and upto 70% of occlusion.",
"title": ""
},
{
"docid": "8075cc962ce18cea46a8df4396512aa5",
"text": "In the last few years, neural representation learning approaches have achieved very good performance on many natural language processing tasks, such as language modelling and machine translation. This suggests that neural models will also achieve good performance on information retrieval (IR) tasks, such as relevance ranking, addressing the query-document vocabulary mismatch problem by using a semantic rather than lexical matching. Although initial iterations of neural models do not outperform traditional lexical-matching baselines, the level of interest and effort in this area is increasing, potentially leading to a breakthrough. The popularity of the recent SIGIR 2016 workshop on Neural Information Retrieval provides evidence to the growing interest in neural models for IR. While recent tutorials have covered some aspects of deep learning for retrieval tasks, there is a significant scope for organizing a tutorial that focuses on the fundamentals of representation learning for text retrieval. The goal of this tutorial will be to introduce state-of-the-art neural embedding models and bridge the gap between these neural models with early representation learning approaches in IR (e.g., LSA). We will discuss some of the key challenges and insights in making these models work in practice, and demonstrate one of the toolsets available to researchers interested in this area.",
"title": ""
},
{
"docid": "1bd06d0d120b28f5d0720643fcdb9944",
"text": "Indoor positioning system based on Receive Signal Strength Indication (RSSI) from Wireless access equipment have become very popular in recent years. This system is very useful in many applications such as tracking service for older people, mobile robot localization and so on. While Outdoor environment using Global Navigation Satellite System (GNSS) and cellular [14] network works well and widespread for navigator. However, there was a problem with signal propagation from satellites. They cannot be used effectively inside the building areas until a urban environment. In this paper we propose the Wi-Fi Fingerprint Technique using Fuzzy set theory to adaptive Basic K-Nearest Neighbor algorithm to classify the labels of a database system. It was able to improve the accuracy and robustness. The performance of our simple algorithm is evaluated by the experimental results which show that our proposed scheme can achieve a certain level of positioning system accuracy.",
"title": ""
},
{
"docid": "82180726cc1aaaada69f3b6cb0e89acc",
"text": "The wheelchair is the major means of transport for physically disabled people. However, it cannot overcome architectural barriers such as curbs and stairs. In this paper, the authors proposed a method to avoid falling down of a wheeled inverted pendulum type robotic wheelchair for climbing stairs. The problem of this system is that the feedback gain of the wheels cannot be set high due to modeling errors and gear backlash, which results in the movement of wheels. Therefore, the wheels slide down the stairs or collide with the side of the stairs, and finally the wheelchair falls down. To avoid falling down, the authors proposed a slider control strategy based on skyhook model in order to decrease the movement of wheels, and a rotary link control strategy based on the staircase dimensions in order to avoid collision or slide down. The effectiveness of the proposed fall avoidance control strategy was validated by ODE simulations and the prototype wheelchair. Keywords—EPW, fall avoidance control, skyhook, wheeled inverted pendulum.",
"title": ""
},
{
"docid": "c0d7b92c1b88a2c234eac67c5677dc4d",
"text": "To appear in G Tesauro D S Touretzky and T K Leen eds Advances in Neural Information Processing Systems MIT Press Cambridge MA A straightforward approach to the curse of dimensionality in re inforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neu ral net Although this has been successful in the domain of backgam mon there is no guarantee of convergence In this paper we show that the combination of dynamic programming and function approx imation is not robust and in even very benign cases may produce an entirely wrong policy We then introduce Grow Support a new algorithm which is safe from divergence yet can still reap the bene ts of successful generalization",
"title": ""
},
{
"docid": "0f173a3486bf09ced9d221019241c7c4",
"text": "In millimeter-wave (mmWave) systems, antenna architecture limitations make it difficult to apply conventional fully digital precoding techniques but call for low-cost analog radio frequency (RF) and digital baseband hybrid precoding methods. This paper investigates joint RF-baseband hybrid precoding for the downlink of multiuser multiantenna mmWave systems with a limited number of RF chains. Two performance measures, maximizing the spectral efficiency and the energy efficiency of the system, are considered. We propose a codebook-based RF precoding design and obtain the channel state information via a beam sweep procedure. Via the codebook-based design, the original system is transformed into a virtual multiuser downlink system with the RF chain constraint. Consequently, we are able to simplify the complicated hybrid precoding optimization problems to joint codeword selection and precoder design (JWSPD) problems. Then, we propose efficient methods to address the JWSPD problems and jointly optimize the RF and baseband precoders under the two performance measures. Finally, extensive numerical results are provided to validate the effectiveness of the proposed hybrid precoders.",
"title": ""
},
{
"docid": "3f2d9b5257896a4469b7e1c18f1d4e41",
"text": "Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs). Recently DEA has been extended to examine the efficiency of two-stage processes, where all the outputs from the first stage are intermediate measures that make up the inputs to the second stage. The resulting two-stage DEA model provides not only an overall efficiency score for the entire process, but as well yields an efficiency score for each of the individual stages. Due to the existence of intermediate measures, the usual procedure of adjusting the inputs or outputs by the efficiency scores, as in the standard DEA approach, does not necessarily yield a frontier projection. The current paper develops an approach for determining the frontier points for inefficient DMUs within the framework of two-stage DEA. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "29236d00bde843ff06e0f1a3e0ab88e4",
"text": "■ The advent of the modern cruise missile, with reduced radar observables and the capability to fly at low altitudes with accurate navigation, placed an enormous burden on all defense weapon systems. Every element of the engagement process, referred to as the kill chain, from detection to target kill assessment, was affected. While the United States held the low-observabletechnology advantage in the late 1970s, that early lead was quickly challenged by advancements in foreign technology and proliferation of cruise missiles to unfriendly nations. Lincoln Laboratory’s response to the various offense/defense trade-offs has taken the form of two programs, the Air Vehicle Survivability Evaluation program and the Radar Surveillance Technology program. The radar developments produced by these two programs, which became national assets with many notable firsts, is the subject of this article.",
"title": ""
},
{
"docid": "cae906033391328e9875b0a05c9d3772",
"text": "Software tools for Business Process Reengineering (BPR) promise to reduce cost and improve quality of projects. This paper discusses the contribution of BPR tools in BPR projects and identi®es critical factors for their success. A model was built based on previous research on tool success. The analysis of empirical data shows that BPR tools are related to effectiveness rather than ef®ciency of the projects. Process visualization and process analysis features are key to BPR tool competence. Also success factors for BPR tools are different from those for CASE tools. # 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "f733125d8cd3d90ac7bf463ae93ca24a",
"text": "Various online, networked systems offer a lightweight process for obtaining identities (e.g., confirming a valid e-mail address), so that users can easily join them. Such convenience comes with a price, however: with minimum effort, an attacker can subvert the identity management scheme in place, obtain a multitude of fake accounts, and use them for malicious purposes. In this work, we approach the issue of fake accounts in large-scale, distributed systems, by proposing a framework for adaptive identity management. Instead of relying on users' personal information as a requirement for granting identities (unlike existing proposals), our key idea is to estimate a trust score for identity requests, and price them accordingly using a proof of work strategy. The research agenda that guided the development of this framework comprised three main items: (i) investigation of a candidate trust score function, based on an analysis of users' identity request patterns, (ii) combination of trust scores and proof of work strategies (e.g. cryptograhic puzzles) for adaptively pricing identity requests, and (iii) reshaping of traditional proof of work strategies, in order to make them more resource-efficient, without compromising their effectiveness (in stopping attackers).",
"title": ""
},
{
"docid": "ac1d1bf198a178cb5655768392c3d224",
"text": "-This paper discusses the two major query evaluation strategies used in large text retrieval systems and analyzes the performance of these strategies. We then discuss several optimization techniques that can be used to reduce evaluation costs and present simulation results to compare the performance of these optimization techniques when evaluating natural language queries with a collection of full text legal materials.",
"title": ""
},
{
"docid": "d7f5dba8ac5c35b6efbd684a3ba01743",
"text": "OBJECTIVES\nBiphasic calcium phosphate (BCP) is frequently used as bone substitute and often needs to be combined with autologous bone to gain an osteoinductive property for guided bone regeneration in implant dentistry. Given the limitations of using autologous bone, bone morphogenetic protein-2 (BMP2)-coprecipitated, layer-by-layer assembled biomimetic calcium phosphate particles (BMP2-cop.BioCaP) have been developed as a potential osteoinducer. In this study, we hypothesized that BMP2-cop.BioCaP could introduce osteoinductivity to BCP and so could function as effectively as autologous bone for the repair of a critical-sized bone defect.\n\n\nMATERIALS AND METHODS\nWe prepared BMP2-cop.BioCaP and monitored the loading and release kinetics of BMP2 from it in vitro. Seven groups (n = 6 animals/group) were established: (i) Empty defect; (ii) BCP; (iii) BCP mixed with biomimetic calcium phosphate particles (BioCaP); (iv) BCP mixed with BMP2-cop.BioCaP; (v) BioCaP; (vi) BMP2-cop.BioCaP; (vii) BCP mixed with autologous bone. They were implanted into 8-mm-diameter rat cranial critical-sized bone defects for an in vivo evaluation. Autologous bone served as a positive control. The osteoinductive efficacy and degradability of materials were evaluated using micro-CT, histology and histomorphometry.\n\n\nRESULTS\nThe combined application of BCP and BMP2-cop.BioCaP resulted in significantly more new bone formation than BCP alone. The osteoinductive efficacy of BMP2-cop.BioCaP was comparable to the golden standard use of autologous bone. Compared with BCP alone, significantly more BCP degradation was found when mixed with BMP2-cop.BioCaP.\n\n\nCONCLUSION\nThe combination of BCP and BMP2-cop.BioCaP showed a promising potential for guided bone regeneration clinically in the future.",
"title": ""
},
{
"docid": "0a37fcb6c1fba747503fc4e3b5540680",
"text": "In this paper we introduce the problem of predicting action progress in videos. We argue that this is an extremely important task because, on the one hand, it can be valuable for a wide range of applications and, on the other hand, it facilitates better action detection results. To solve this problem we introduce a novel approach, named ProgressNet, capable of predicting when an action takes place in a video, where it is located within the frames, and how far it has progressed during its execution. Motivated by the recent success obtained from the interaction of Convolutional and Recurrent Neural Networks, our model is based on a combination of the Faster R-CNN framework, to make framewise predictions, and LSTM networks, to estimate action progress through time. After introducing two evaluation protocols for the task at hand, we demonstrate the capability of our model to effectively predict action progress on the UCF-101 and J-HMDB datasets. Additionally, we show that exploiting action progress it is also possible to improve spatio-temporal localization.",
"title": ""
},
{
"docid": "5a81fb944472b798a9bd65293784e0eb",
"text": "This paper presents a generalized structure for a frequency diverse array radar. In its simplest form, the frequency diverse array applies a linear phase progression across the aperture. This linear phase progression induces an electronic beam scan, as in a conventional phased array. When an additional linear frequency shift is applied across the elements, a new term is generated which results in a scan angle that varies with range in the far-field. This provides more flexible beam scan options, as well as providing resistance to point interference such as multipath. More general implementations provide greater degrees of freedom for space-time-frequency-phase-polarization control, permitting novel concepts for simultaneous multi-mission operation, such as performing synthetic aperture radar and ground moving target indication at the same time.",
"title": ""
},
{
"docid": "8d8bd53d2d9b6bf5f6e51d11aea35036",
"text": "Oseltamivir (Tamiflu), a neuraminidase inhibitor, was approved for seasonal flu by US Food and Drug Administration in 1999. A number of randomized controlled trials, systematic reviews, and meta-analysis emphasized a favorable efficacy and safety profile. Majority of them were funded by Roche, which also first marketed and promoted this drug. In 2005 and 2009, the looming fear of pandemic flu led to recommendation by prominent regulatory bodies such as World Health Organization (WHO), Centers for Disease Control and Prevention, European Medicines Agency and others for its use in treatment and prophylaxis of influenza, and it's stockpiling as a measure to tide over the crisis. Serious Adverse Events, especially neuropsychiatric events associated with Tamiflu started getting reported leading to a cascade of questions on clinical utility of this drug. A recent Cochrane review and related articles have questioned the risk-benefit ratio of the drug, besides raising doubts about the regulatory decision of approving it. The recommendations for stockpiling the said drug as given by various international organizations viz WHO have also been put to scrutiny. Although many reviewers have labeled the Tamiflu saga as a \"costly mistake,\" the episode leaves us with some important lessons. This article takes a comprehensive relook on the subject, and we proceed to suggest some ways and means to avoid a similar situation in the future.",
"title": ""
},
{
"docid": "3c8b9a015157a7dd7ce4a6b0b35847d9",
"text": "While more and more people are relying on social media for news feeds, serious news consumers still resort to well-established news outlets for more accurate and in-depth reporting and analyses. They may also look for reports on related events that have happened before and other background information in order to better understand the event being reported. Many news outlets already create sidebars and embed hyperlinks to help news readers, often with manual efforts. Technologies in IR and NLP already exist to support those features, but standard test collections do not address the tasks of modern news consumption. To help advance such technologies and transfer them to news reporting, NIST, in partnership with the Washington Post, is starting a new TREC track in 2018 known as the News Track.",
"title": ""
}
] |
scidocsrr
|
f30fd2eb69b18ab30c3bcd4d83992fa5
|
Unsupervised Modeling of Dialog Acts in Asynchronous Conversations
|
[
{
"docid": "8f9af064f348204a71f0e542b2b98e7b",
"text": "It is often useful to classify email according to the intent of the sender (e.g., \"propose a meeting\", \"deliver information\"). We present experimental results in learning to classify email in this fashion, where each class corresponds to a verbnoun pair taken from a predefined ontology describing typical “email speech acts”. We demonstrate that, although this categorization problem is quite different from “topical” text classification, certain categories of messages can nonetheless be detected with high precision (above 80%) and reasonable recall (above 50%) using existing text-classification learning methods. This result suggests that useful task-tracking tools could be constructed based on automatic classification into this taxonomy.",
"title": ""
}
] |
[
{
"docid": "d768504fea8b0951a3c26edb26ce7f15",
"text": "Software quality is the totality of features and characteristics of a product or a service that bears on its ability to satisfy the given needs. Poor quality of the software product in sensitive systems may lead to loss of human life, permanent injury, mission failure, or financial loss. So the quality of the project should be maintained at appropriate label. To maintain the quality, there are different quality models. ″A high quality product is one which has associated with it a number of quality factors. These could be described in the requirements specification; they could be cultured, in that they are normally associated with the artifact through familiarity of use and through the shared experience of users. In this paper, we will discuss all the quality models: McCall's quality model, Boehm's quality model, Dromey's quality model, and FURPS quality model and focus on a comparison between these models, and find the key differences between them.",
"title": ""
},
{
"docid": "bc6be8b5fd426e7f8d88645a2b21ff6a",
"text": "irtually everyone would agree that a primary, yet insufficiently met, goal of schooling is to enable students to think critically. In layperson’s terms, critical thinking consists of seeing both sides of an issue, being open to new evidence that disconfirms your ideas, reasoning dispassionately, demanding that claims be backed by evidence, deducing and inferring conclusions from available facts, solving problems, and so forth. Then too, there are specific types of critical thinking that are characteristic of different subject matter: That’s what we mean when we refer to “thinking like a scientist” or “thinking like a historian.” This proper and commonsensical goal has very often been translated into calls to teach “critical thinking skills” and “higher-order thinking skills”—and into generic calls for teaching students to make better judgments, reason more logically, and so forth. In a recent survey of human resource officials and in testimony delivered just a few months ago before the Senate Finance Committee, business leaders have repeatedly exhorted schools to do a better job of teaching students to think critically. And they are not alone. Organizations and initiatives involved in education reform, such as the National Center on Education and the Economy, the American Diploma Project, and the Aspen Institute, have pointed out the need for students to think and/or reason critically. The College Board recently revamped the SAT to better assess students’ critical thinking. And ACT, Inc. offers a test of critical thinking for college students. These calls are not new. In 1983, A Nation At Risk, a report by the National Commission on Excellence in Education, found that many 17-year-olds did not possess the “‘higher-order’ intellectual skills” this country needed. It claimed that nearly 40 percent could not draw inferences from written material and only onefifth could write a persuasive essay. Following the release of A Nation At Risk, programs designed to teach students to think critically across the curriculum became extremely popular. By 1990, most states had initiatives designed to encourage educators to teach critical thinking, and one of the most widely used programs, Tactics for Thinking, sold 70,000 teacher guides. But, for reasons I’ll explain, the programs were not very effective—and today we still lament students’ lack of critical thinking. After more than 20 years of lamentation, exhortation, and little improvement, maybe it’s time to ask a fundamental question: Can critical thinking actually be taught? Decades of cognitive research point to a disappointing answer: not really. People who have sought to teach critical thinking have assumed that it is a skill, like riding a bicycle, and that, like other skills, once you learn it, you can apply it in any situation. Research from cognitive science shows that thinking is not that sort of skill. The processes of thinking are intertwined with the content of thought (that is, domain knowledge). Thus, if you remind a student to “look at an issue from multiple perspectives” often enough, he will learn that he ought to do so, but if he doesn’t know much about Critical Thinking",
"title": ""
},
{
"docid": "aeb19f8f9c6e5068fc602682e4ae04d3",
"text": "Received: 29 November 2004 Revised: 26 July 2005 Accepted: 4 November 2005 Abstract Interpretive research in information systems (IS) is now a well-established part of the field. However, there is a need for more material on how to carry out such work from inception to publication. I published a paper a decade ago (Walsham, 1995) which addressed the nature of interpretive IS case studies and methods for doing such research. The current paper extends this earlier contribution, with a widened scope of all interpretive research in IS, and through further material on carrying out fieldwork, using theory and analysing data. In addition, new topics are discussed on constructing and justifying a research contribution, and on ethical issues and tensions in the conduct of interpretive work. The primary target audience for the paper is lessexperienced IS researchers, but I hope that the paper will also stimulate reflection for the more-experienced IS researcher and be of relevance to interpretive researchers in other social science fields. European Journal of Information Systems (2006) 15, 320–330. doi:10.1057/palgrave.ejis.3000589",
"title": ""
},
{
"docid": "ba2632b7a323e785b57328d32a26bc99",
"text": "Modern malware is designed with mutation characteristics, namely polymorphism and metamorphism, which causes an enormous growth in the number of variants of malware samples. Categorization of malware samples on the basis of their behaviors is essential for the computer security community, because they receive huge number of malware everyday, and the signature extraction process is usually based on malicious parts characterizing malware families. Microsoft released a malware classification challenge in 2015 with a huge dataset of near 0.5 terabytes of data, containing more than 20K malware samples. The analysis of this dataset inspired the development of a novel paradigm that is effective in categorizing malware variants into their actual family groups. This paradigm is presented and discussed in the present paper, where emphasis has been given to the phases related to the extraction, and selection of a set of novel features for the effective representation of malware samples. Features can be grouped according to different characteristics of malware behavior, and their fusion is performed according to a per-class weighting paradigm. The proposed method achieved a very high accuracy ($\\approx$ 0.998) on the Microsoft Malware Challenge dataset.",
"title": ""
},
{
"docid": "00c19e68020aff7fd86aa7e514cc0668",
"text": "Network forensic techniques help in tracking different types of cyber attack by monitoring and inspecting network traffic. However, with the high speed and large sizes of current networks, and the sophisticated philosophy of attackers, in particular mimicking normal behaviour and/or erasing traces to avoid detection, investigating such crimes demands intelligent network forensic techniques. This paper suggests a real-time collaborative network Forensic scheme (RCNF) that can monitor and investigate cyber intrusions. The scheme includes three components of capturing and storing network data, selecting important network features using chi-square method and investigating abnormal events using a new technique called correntropy-variation. We provide a case study using the UNSW-NB15 dataset for evaluating the scheme, showing its high performance in terms of accuracy and false alarm rate compared with three recent state-of-the-art mechanisms.",
"title": ""
},
{
"docid": "8d34f60ac69f63a6d237986939ce7cfa",
"text": "As more applications move from the desktop to touch devices like tablets, designers must wrestle with the costs of porting a design with as little revision of the UI as possible from one device to the other, or of optimizing the interaction per device. We consider the tradeoffs between two versions of a UI for working with data on a touch tablet. One interface is based on using the conventional desktop metaphor (WIMP) with a control panel, push buttons, and checkboxes -- where the mouse click is effectively replaced by a finger tap. The other interface (which we call FLUID) eliminates the control panel and focuses touch actions on the data visualization itself. We describe our design process and evaluation of each interface. We discuss the significantly better task performance and preference for the FLUID interface, in particular how touch design may challenge certain assumptions about the performance benefits of WIMP interfaces that do not hold on touch devices, such as the superiority of gestural vs. control panel based interaction.",
"title": ""
},
{
"docid": "911341dd579c7d16aa918497a23afc31",
"text": "We discuss a variant of Thompson sampling for nonparametric reinforcement learning in countable classes of general stochastic environments. These environments can be non-Markov, nonergodic, and partially observable. We show that Thompson sampling learns the environment class in the sense that (1) asymptotically its value converges to the optimal value in mean and (2) given a recoverability assumption regret is sublinear.",
"title": ""
},
{
"docid": "b6bc9018655336dac304fe2280f1a66b",
"text": "The trend Gamification occurred several years ago in the context of digital teaching and training aka e-learning. Looking closer at the development of game-based learning and gamification in computer science, it has to be admitted, that game-based learning seems to have ended in an impasse point where the instructionally smooth integration of learning and gaming has not really been realizable, yet -- at least not without enormous effort by the learning and gaming content author. The situation in gamification in digital teaching and training systems comes in a similar shape. A structured analysis of what gamification is can help to develop a systematic approach, which then can support a structured and methodologically smooth design of gamification for digital teaching and training.",
"title": ""
},
{
"docid": "b44b177f50402015e343e78afe4d7523",
"text": "A design of a novel wireless implantable blood pressure sensing microsystem for advanced biological research is presented. The system employs a miniature instrumented elastic cuff, wrapped around a blood vessel, for small laboratory animal real-time blood pressure monitoring. The elastic cuff is made of biocompatible soft silicone material by a molding process and is filled by insulating silicone oil with an immersed MEMS capacitive pressure sensor interfaced with low-power integrated electronic system. This technique avoids vessel penetration and substantially minimizes vessel restriction due to the soft cuff elasticity, and is thus attractive for long-term implant. The MEMS pressure sensor detects the coupled blood pressure waveform caused by the vessel expansion and contraction, followed by amplification, 11-bit digitization, and wireless FSK data transmission to an external receiver. The integrated electronics are designed with capability of receiving RF power from an external power source and converting the RF signal to a stable 2 V DC supply in an adaptive manner to power the overall implant system, thus enabling the realization of stand-alone batteryless implant microsystem. The electronics are fabricated in a 1.5 μm CMOS process and occupy an area of 2 mm × 2 mm. The prototype monitoring cuff is wrapped around the right carotid artery of a laboratory rat to measure real-time blood pressure waveform. The measured in vivo blood waveform is compared with a reference waveform recorded simultaneously using a commercial catheter-tip transducer inserted into the left carotid artery. The two measured waveforms are closely matched with a constant scaling factor. The ASIC is interfaced with a 5-mm-diameter RF powering coil with four miniature surface-mounted components (one inductor and three capacitors) over a thin flexible substrate by bond wires, followed by silicone coating and packaging with the prototype blood pressure monitoring cuff. The overall system achieves a measured average sensitivity of 7 LSB/ mmHg, a nonlinearity less than 2.5% of full scale, and a hysteresis less than 1% of full scale. From noise characterization, a blood vessel pressure change sensing resolution 328 of 1 mmHg can be expected. The system weighs 330 mg, representing an order of magnitude mass reduction compared with state-of-the-art commercial technology.",
"title": ""
},
{
"docid": "5399b924cdf1d034a76811360b6c018d",
"text": "Psychological construction models of emotion state that emotions are variable concepts constructed by fundamental psychological processes, whereas according to basic emotion theory, emotions cannot be divided into more fundamental units and each basic emotion is represented by a unique and innate neural circuitry. In a previous study, we found evidence for the psychological construction account by showing that several brain regions were commonly activated when perceiving different emotions (i.e. a general emotion network). Moreover, this set of brain regions included areas associated with core affect, conceptualization and executive control, as predicted by psychological construction models. Here we investigate directed functional brain connectivity in the same dataset to address two questions: 1) is there a common pathway within the general emotion network for the perception of different emotions and 2) if so, does this common pathway contain information to distinguish between different emotions? We used generalized psychophysiological interactions and information flow indices to examine the connectivity within the general emotion network. The results revealed a general emotion pathway that connects neural nodes involved in core affect, conceptualization, language and executive control. Perception of different emotions could not be accurately classified based on the connectivity patterns from the nodes of the general emotion pathway. Successful classification was achieved when connections outside the general emotion pathway were included. We propose that the general emotion pathway functions as a common pathway within the general emotion network and is involved in shared basic psychological processes across emotions. However, additional connections within the general emotion network are required to classify different emotions, consistent with a constructionist account.",
"title": ""
},
{
"docid": "2d5d72944f12446a93e63f53ffce7352",
"text": "Standardization of transanal total mesorectal excision requires the delineation of the principal procedural components before implementation in practice. This technique is a bottom-up approach to a proctectomy with the goal of a complete mesorectal excision for optimal outcomes of oncologic treatment. A detailed stepwise description of the approach with technical pearls is provided to optimize one's understanding of this technique and contribute to reducing the inherent risk of beginning a new procedure. Surgeons should be trained according to standardized pathways including online preparation, observational or hands-on courses as well as the potential for proctorship of early cases experiences. Furthermore, technological pearls with access to the \"video-in-photo\" (VIP) function, allow surgeons to link some of the images in this article to operative demonstrations of certain aspects of this technique.",
"title": ""
},
{
"docid": "fc9662f6057e17533868ab7fe4d87f39",
"text": "Autonomic computing is central to the success of IT infrastructure deployment as its complexity and pervasiveness grows. This paper addresses one aspect of policy-based autonomic computing - the issue of identifying dependencies between policies, knowledge of which is useful to the policymaker while defining or updating policies. These dependencies are determined via assesment of the impact of a policy on the sensors (measurable entities at runtime). Our approach uses a simple pragmatic model over the measured runtime information from the recent past. Both static and runtime information is combined to provide effective feedback.",
"title": ""
},
{
"docid": "6346955de2fa46e5c109ada42b4e9f77",
"text": "Retinopathy of prematurity (ROP) is a disease that can cause blindness in very low birthweight infants. The incidence of ROP is closely correlated with the weight and the gestational age at birth. Despite current therapies, ROP continues to be a highly debilitating disease. Our advancing knowledge of the pathogenesis of ROP has encouraged investigations into new antivasculogenic therapies. The purpose of this article is to review the findings on the pathophysiological mechanisms that contribute to the transition between the first and second phases of ROP and to investigate new potential therapies. Oxygen has been well characterized for the key role that it plays in retinal neoangiogenesis. Low or high levels of pO2 regulate the normal or abnormal production of hypoxia-inducible factor 1 and vascular endothelial growth factors (VEGF), which are the predominant regulators of retinal angiogenesis. Although low oxygen saturation appears to reduce the risk of severe ROP when carefully controlled within the first few weeks of life, the optimal level of saturation still remains uncertain. IGF-1 and Epo are fundamentally required during both phases of ROP, as alterations in their protein levels can modulate disease progression. Therefore, rhIGF-1 and rhEpo were tested for their abilities to prevent the loss of vasculature during the first phase of ROP, whereas anti-VEGF drugs were tested during the second phase. At present, previous hypotheses concerning ROP should be amended with new pathogenetic theories. Studies on the role of genetic components, nitric oxide, adenosine, apelin and β-adrenergic receptor have revealed new possibilities for the treatment of ROP. The genetic hypothesis that single-nucleotide polymorphisms within the β-ARs play an active role in the pathogenesis of ROP suggests the concept of disease prevention using β-blockers. In conclusion, all factors that can mediate the progression from the avascular to the proliferative phase might have significant implications for the further understanding and treatment of ROP.",
"title": ""
},
{
"docid": "c7059c650323a08ac7453ad4185e6c4f",
"text": "Transfer learning is aimed to make use of valuable knowledge in a source domain to help model performance in a target domain. It is particularly important to neural networks, which are very likely to be overfitting. In some fields like image processing, many studies have shown the effectiveness of neural network-based transfer learning. For neural NLP, however, existing studies have only casually applied transfer learning, and conclusions are inconsistent. In this paper, we conduct systematic case studies and provide an illuminating picture on the transferability of neural networks in NLP.1",
"title": ""
},
{
"docid": "2e9d46f8be771894a2b61aa8a5c82715",
"text": "Military vehicles are important part to maintain territories of a country. Military vehicle is often equipped with a gun turret mounted on top of the vehicle. Traditionally, gun turret is operated manually by an operator sitting on the vehicle. With the advance of current robotic technology an automatic operation of gun turret is highly possible. Notable works on automatic gun turret tend to use features that are manually designed as an input to a classifier for target tracking. These features can cause less optimal parameters and require highly complex kinematic and dynamic analysis specific to a particular turret. In this paper, toward the goal of realizing an automatic targeting system of gun turret, a gun turret simulation system is developed by leveraging fully connected network of deep learning using only visual information from a camera. It includes designing convolutional layers to accurately detect and tracking a target with input from a camera. All network parameters are automatically and jointly learned without any human intervention, all network parameters are driven purely by data. This method also requires less kinematic and dynamic model. Experiments show encouraging results that the automatic targeting system of gun turret using only a camera can benefit research in the related fields.",
"title": ""
},
{
"docid": "9038e4a2f07ce78e0352b381da961a4e",
"text": "In a world where highly skilled actors involved in cyber-attacks are constantly increasing and where the associated underground market continues to expand, organizations should adapt their defence strategy and improve consequently their security incident management. In this paper, we give an overview of Advanced Persistent Threats (APT) attacks life cycle as defined by security experts. We introduce our own compiled life cycle model guided by attackers objectives instead of their actions. Challenges and opportunities related to the specific camouflage actions performed at the end of each APT phase of the model are highlighted. We also give an overview of new APT protection technologies and discuss their effectiveness at each one of life cycle phases.",
"title": ""
},
{
"docid": "5cf92beeeb4e1f3e36a8ff1fd639d40d",
"text": "Mobile application spoofing is an attack where a malicious mobile app mimics the visual appearance of another one. A common example of mobile application spoofing is a phishing attack where the adversary tricks the user into revealing her password to a malicious app that resembles the legitimate one. In this paper, we propose a novel spoofing detection approach, tailored to the protection of mobile app login screens, using screenshot extraction and visual similarity comparison. We use deception rate as a novel similarity metric for measuring how likely the user is to consider a potential spoofing app as one of the protected applications. We conducted a large-scale online study where participants evaluated spoofing samples of popular mobile app login screens, and used the study results to implement a detection system that accurately estimates deception rate. We show that efficient detection is possible with low overhead.",
"title": ""
},
{
"docid": "2be043b09e6dd631b5fe6f9eed44e2ec",
"text": "This article aims to contribute to a critical research agenda for investigating the democratic implications of citizen journalism and social news. The article calls for a broad conception of ‘citizen journalism’ which is (1) not an exclusively online phenomenon, (2) not confined to explicitly ‘alternative’ news sources, and (3) includes ‘metajournalism’ as well as the practices of journalism itself. A case is made for seeing democratic implications not simply in the horizontal or ‘peer-to-peer’ public sphere of citizen journalism networks, but also in the possibility of a more ‘reflexive’ culture of news consumption through citizen participation. The article calls for a research agenda that investigates new forms of gatekeeping and agendasetting power within social news and citizen journalism networks and, drawing on the example of three sites, highlights the importance of both formal and informal status differentials and of the software ‘code’ structuring these new modes of news",
"title": ""
},
{
"docid": "5158b5da8a561799402cb1ef3baa3390",
"text": "We study the segmental recurrent neural network for end-to-end acoustic modelling. This model connects the segmental conditional random field (CRF) with a recurrent neural network (RNN) used for feature extraction. Compared to most previous CRF-based acoustic models, it does not rely on an external system to provide features or segmentation boundaries. Instead, this model marginalises out all the possible segmentations, and features are extracted from the RNN trained together with the segmental CRF. In essence, this model is self-contained and can be trained end-to-end. In this paper, we discuss practical training and decoding issues as well as the method to speed up the training in the context of speech recognition. We performed experiments on the TIMIT dataset. We achieved 17.3% phone error rate (PER) from the first-pass decoding — the best reported result using CRFs, despite the fact that we only used a zeroth-order CRF and without using any language model.",
"title": ""
},
{
"docid": "7adbcbcf5d458087d6f261d060e6c12b",
"text": "Operation of MOS devices in the strong, moderate, and weak inversion regions is considered. The advantages of designing the input differential stage of a CMOS op amp to operate in the weak or moderate inversion region are presented. These advantages include higher voltage gain, less distortion, and ease of compensation. Specific design guidelines are presented to optimize amplifier performance. Simulations that demonstrate the expected improvements are given.",
"title": ""
}
] |
scidocsrr
|
bbf95b8ab795bc59279df40747397b97
|
Protect sensitive sites from phishing attacks using features extractable from inaccessible phishing URLs
|
[
{
"docid": "c37e41dd09a9c676e6e6b18f3f518915",
"text": "Malicious URLs have been widely used to mount various cyber attacks including spamming, phishing and malware. Detection of malicious URLs and identification of threat types are critical to thwart these attacks. Knowing the type of a threat enables estimation of severity of the attack and helps adopt an effective countermeasure. Existing methods typically detect malicious URLs of a single attack type. In this paper, we propose method using machine learning to detect malicious URLs of all the popular attack types and identify the nature of attack a malicious URL attempts to launch. Our method uses a variety of discriminative features including textual properties, link structures, webpage contents, DNS information, and network traffic. Many of these features are novel and highly effective. Our experimental studies with 40,000 benign URLs and 32,000 malicious URLs obtained from real-life Internet sources show that our method delivers a superior performance: the accuracy was over 98% in detecting malicious URLs and over 93% in identifying attack types. We also report our studies on the effectiveness of each group of discriminative features, and discuss their evadability.",
"title": ""
},
{
"docid": "5cb8c778f0672d88241cc22da9347415",
"text": "Phishing websites, fraudulent sites that impersonate a trusted third party to gain access to private data, continue to cost Internet users over a billion dollars each year. In this paper, we describe the design and performance characteristics of a scalable machine learning classifier we developed to detect phishing websites. We use this classifier to maintain Google’s phishing blacklist automatically. Our classifier analyzes millions of pages a day, examining the URL and the contents of a page to determine whether or not a page is phishing. Unlike previous work in this field, we train the classifier on a noisy dataset consisting of millions of samples from previously collected live classification data. Despite the noise in the training data, our classifier learns a robust model for identifying phishing pages which correctly classifies more than 90% of phishing pages several weeks after training concludes.",
"title": ""
},
{
"docid": "b4038ddbd2795acf0bb0178330e15b61",
"text": "Phishing has been easy and effective way for trickery and deception on the Internet. While solutions such as URL blacklisting have been effective to some degree, their reliance on exact match with the blacklisted entries makes it easy for attackers to evade. We start with the observation that attackers often employ simple modifications (e.g., changing top level domain) to URLs. Our system, PhishNet, exploits this observation using two components. In the first component, we propose five heuristics to enumerate simple combinations of known phishing sites to discover new phishing URLs. The second component consists of an approximate matching algorithm that dissects a URL into multiple components that are matched individually against entries in the blacklist. In our evaluation with real-time blacklist feeds, we discovered around 18,000 new phishing URLs from a set of 6,000 new blacklist entries. We also show that our approximate matching algorithm leads to very few false positives (3%) and negatives (5%).",
"title": ""
},
{
"docid": "019ee0840b91f97a3acc3411edadcade",
"text": "Despite the many solutions proposed by industry and the research community to address phishing attacks, this problem continues to cause enormous damage. Because of our inability to deter phishing attacks, the research community needs to develop new approaches to anti-phishing solutions. Most of today's anti-phishing technologies focus on automatically detecting and preventing phishing attacks. While automation makes anti-phishing tools user-friendly, automation also makes them suffer from false positives, false negatives, and various practical hurdles. As a result, attackers often find simple ways to escape automatic detection.\n This paper presents iTrustPage - an anti-phishing tool that does not rely completely on automation to detect phishing. Instead, iTrustPage relies on user input and external repositories of information to prevent users from filling out phishing Web forms. With iTrustPage, users help to decide whether or not a Web page is legitimate. Because iTrustPage is user-assisted, iTrustPage avoids the false positives and the false negatives associated with automatic phishing detection. We implemented iTrustPage as a downloadable extension to FireFox. After being featured on the Mozilla website for FireFox extensions, iTrustPage was downloaded by more than 5,000 users in a two week period. We present an analysis of our tool's effectiveness and ease of use based on our examination of usage logs collected from the 2,050 users who used iTrustPage for more than two weeks. Based on these logs, we find that iTrustPage disrupts users on fewer than 2% of the pages they visit, and the number of disruptions decreases over time.",
"title": ""
}
] |
[
{
"docid": "c020a3ba9a2615cb5ed9a7e9d5aa3ce0",
"text": "Neural network approaches to Named-Entity Recognition reduce the need for carefully handcrafted features. While some features do remain in state-of-the-art systems, lexical features have been mostly discarded, with the exception of gazetteers. In this work, we show that this is unfair: lexical features are actually quite useful. We propose to embed words and entity types into a lowdimensional vector space we train from annotated data produced by distant supervision thanks to Wikipedia. From this, we compute — offline — a feature vector representing each word. When used with a vanilla recurrent neural network model, this representation yields substantial improvements. We establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while matching state-of-the-art performance with a F1 score of 91.73 on the over-studied CONLL-2003 dataset.",
"title": ""
},
{
"docid": "a5b147f5b3da39fed9ed11026f5974a2",
"text": "The aperture coupled patch geometry has been extended to dual polarization by several authors. In Tsao et al. (1988) a cross-shaped slot is fed by a balanced feed network which allows for a high degree of isolation. However, the balanced feed calls for an air-bridge which complicates both the design process and the manufacture. An alleviation to this problem is to separate the two channels onto two different substrate layers separated by the ground plane. In this case the disadvantage is increased cost. Another solution with a single layer feed is presented in Brachat and Baracco (1995) where one channel feeds a single slot centered under the patch whereas the other channel feeds two separate slots placed near the edges of the patch. Our experience is that with this geometry it is hard to achieve a well-matched broadband design since the slots near the edge of the patch present very low coupling. All the above geometries maintain symmetry with respect to the two principal planes if we ignore the small spurious coupling from feed lines in the vicinity of the aperture. We propose to reduce the symmetry to only one principal plane which turns out to be sufficient for high isolation and low cross-polarization. The advantage is that only one layer of feed network is needed, with no air-bridges required. In addition the aperture position is centered under the patch. An important application for dual polarized antennas is base station antennas. We have therefore designed and measured an element for the PCS band (1.85-1.99 GHz).",
"title": ""
},
{
"docid": "e89a1c0fb1b0736b238373f2fbca91a0",
"text": "In this paper, we provide a comprehensive study of elliptic curve cryptography (ECC) for wireless sensor networks (WSN) security provisioning, mainly for key management and authentication modules. On the other hand, we present and evaluate a side-channel attacks (SCAs) experimental bench solution for energy evaluation, especially simple power analysis (SPA) attacks experimental bench to measure dynamic power consumption of ECC operations. The goal is the best use of the already installed SCAs experimental bench by performing the robustness test of ECC devices against SPA as well as the estimate of its energy and dynamic power consumption. Both operations are tested: point multiplication over Koblitz curves and doubling points over binary curves, with respectively affine and projective coordinates. The experimental results and its comparison with simulation ones are presented. They can lead to accurate power evaluation with the maximum reached error less than 30%.",
"title": ""
},
{
"docid": "c80a60778e5c8e3349ce13475176a118",
"text": "Future homes will be populated with large numbers of robots with diverse functionalities, ranging from chore robots to elder care robots to entertainment robots. While household robots will offer numerous benefits, they also have the potential to introduce new security and privacy vulnerabilities into the home. Our research consists of three parts. First, to serve as a foundation for our study, we experimentally analyze three of today's household robots for security and privacy vulnerabilities: the WowWee Rovio, the Erector Spykee, and the WowWee RoboSapien V2. Second, we synthesize the results of our experimental analyses and identify key lessons and challenges for securing future household robots. Finally, we use our experiments and lessons learned to construct a set of design questions aimed at facilitating the future development of household robots that are secure and preserve their users' privacy.",
"title": ""
},
{
"docid": "c1ccbb8e8a9fa8a3291e9b8a2f8ee8aa",
"text": "Chronic stress is one of the predominant environmental risk factors for a number of psychiatric disorders, particularly for major depression. Different hypotheses have been formulated to address the interaction between early and adult chronic stress in psychiatric disease vulnerability. The match/mismatch hypothesis of psychiatric disease states that the early life environment shapes coping strategies in a manner that enables individuals to optimally face similar environments later in life. We tested this hypothesis in female Balb/c mice that underwent either stress or enrichment early in life and were in adulthood further subdivided in single or group housed, in order to provide aversive or positive adult environments, respectively. We studied the effects of the environmental manipulation on anxiety-like, depressive-like and sociability behaviors and gene expression profiles. We show that continuous exposure to adverse environments (matched condition) is not necessarily resulting in an opposite phenotype compared to a continuous supportive environment (matched condition). Rather, animals with mismatched environmental conditions behaved differently from animals with matched environments on anxious, social and depressive like phenotypes. These results further support the match/mismatch hypothesis and illustrate how mild or moderate aversive conditions during development can shape an individual to be optimally adapted to similar conditions later in life.",
"title": ""
},
{
"docid": "85b169515b4e4b86117abcdd83f002ea",
"text": "While Bitcoin (Peer-to-Peer Electronic Cash) [Nak]solved the double spend problem and provided work withtimestamps on a public ledger, it has not to date extendedthe functionality of a blockchain beyond a transparent andpublic payment system. Satoshi Nakamoto's original referenceclient had a decentralized marketplace service which was latertaken out due to a lack of resources [Deva]. We continued withNakamoto's vision by creating a set of commercial-grade ser-vices supporting a wide variety of business use cases, includinga fully developed blockchain-based decentralized marketplace,secure data storage and transfer, and unique user aliases thatlink the owner to all services controlled by that alias.",
"title": ""
},
{
"docid": "bfb79421ca0ddfd5a584f009f8102a2c",
"text": "In this paper, suppression of cross-polarized (XP) radiation of a circular microstrip patch antenna (CMPA) employing two new geometries of defected ground structures (DGSs), is experimentally investigated. One of the antennas employs a circular ring shaped defect in the ground plane, located bit away from the edge of the patch. This structure provides an improvement of XP level by 5 to 7 dB compared to an identical patch with normal ground plane. The second structure incorporates two arc-shaped DGSs in the H-plane of the patch. This configuration improves the XP radiation by about 7 to 12 dB over and above a normal CMPA. For demonstration of the concept, a set of prototypes have been examined at C-band. The experimental results have been presented.",
"title": ""
},
{
"docid": "b7cc4a094988643e65d80d4989276d98",
"text": "In this paper, we describe the design and layout of an automotive radar sensor demonstrator for 77 GHz with a SiGe chipset and a fully parallel receiver architecture which is capable of digital beamforming and superresolution direction of arrival estimation methods in azimuth. Additionally, we show measurement results of this radar sensor mounted on a test vehicle.",
"title": ""
},
{
"docid": "cb39f6ac5646e733604902a4b74b797c",
"text": "In this paper, we present a generative model based approach to solve the multi-view stereo problem. The input images are considered to be generated by either one of two processes: (i) an inlier process, which generates the pixels which are visible from the reference camera and which obey the constant brightness assumption, and (ii) an outlier process which generates all other pixels. Depth and visibility are jointly modelled as a hiddenMarkov Random Field, and the spatial correlations of both are explicitly accounted for. Inference is made tractable by an EM-algorithm, which alternates between estimation of visibility and depth, and optimisation of model parameters. We describe and compare two implementations of the E-step of the algorithm, which correspond to the Mean Field and Bethe approximations of the free energy. The approach is validated by experiments on challenging real-world scenes, of which two are contaminated by independently moving objects.",
"title": ""
},
{
"docid": "1e645b6134fb5ef80f89e6d10b1cb734",
"text": "This paper analyzes the effect of replay attacks on a control system. We assume an attacker wishes to disrupt the operation of a control system in steady state. In order to inject an exogenous control input without being detected the attacker will hijack the sensors, observe and record their readings for a certain amount of time and repeat them afterwards while carrying out his attack. This is a very common and natural attack (we have seen numerous times intruders recording and replaying security videos while performing their attack undisturbed) for an attacker who does not know the dynamics of the system but is aware of the fact that the system itself is expected to be in steady state for the duration of the attack. We assume the control system to be a discrete time linear time invariant gaussian system applying an infinite horizon Linear Quadratic Gaussian (LQG) controller. We also assume that the system is equipped with a χ2 failure detector. The main contributions of the paper, beyond the novelty of the problem formulation, consist in 1) providing conditions on the feasibility of the replay attack on the aforementioned system and 2) proposing a countermeasure that guarantees a desired probability of detection (with a fixed false alarm rate) by trading off either detection delay or LQG performance, either by decreasing control accuracy or increasing control effort.",
"title": ""
},
{
"docid": "0e8ab182a2ad85d19d9384de0ac5f359",
"text": "Nowadays, many applications need data modeling facilities for the description of complex objects with spatial and/or temporal facilities. Responses to such requirements may be found in Geographic Information Systems (GIS), in some DBMS, or in the research literature. However, most f existing models cover only partly the requirements (they address either spatial or temporal modeling), and most are at the logical level, h nce not well suited for database design. This paper proposes a spatiotemporal modeling approach at the conceptual level, called MADS. The proposal stems from the identification of the criteria to be met for a conceptual model. It is advocated that orthogonality is the key issue for achieving a powerful and intuitive conceptual model. Thus, the proposal focuses on highlighting similarities in the modeling of space and time, which enhance readability and understandability of the model.",
"title": ""
},
{
"docid": "6915475a0f3f008b3135e462c5324656",
"text": "Machine learning methods in general and Deep Neural Networks in particular have shown to be vulnerable to adversarial perturbations. So far this phenomenon has mainly been studied in the context of whole-image classification. In this contribution, we analyse how adversarial perturbations can affect the task of semantic segmentation. We show how existing adversarial attackers can be transferred to this task and that it is possible to create imperceptible adversarial perturbations that lead a deep network to misclassify almost all pixels of a chosen class while leaving network prediction nearly unchanged outside this class.",
"title": ""
},
{
"docid": "6acd1583b23a65589992c3297250a603",
"text": "Trichostasis spinulosa (TS) is a common but rarely diagnosed disease. For diagnosis, it's sufficient to see a bundle of vellus hair located in a keratinous sheath microscopically. In order to obtain these vellus hair settled in comedone-like openings, Standard skin surface biopsy (SSSB), a non-invasive method was chosen. It's aimed to remind the differential diagnosis of TS in treatment-resistant open comedone-like lesions and discuss the SSSB method in diagnosis. A 25-year-old female patient was admitted with a complaint of the black spots located on bilateral cheeks and nose for 12 years. In SSSB, multiple vellus hair bundles in funnel-shaped structures were observed under the microscope, and a diagnosis of 'TS' was made. After six weeks of treatment with tretinoin 0.025% and 4% erythromycin jel topically, the appearance of black macules was significantly reduced. Treatment had to be terminated due to her pregnancy, and the lesions recurred within 1 month. It's believed that TS should be considered in the differential diagnosis of treatment-resistant open comedone-like lesions, and SSSB might be an inexpensive and effective alternative method for the diagnosis of TS.",
"title": ""
},
{
"docid": "83452d8424d97b1c1f5826d32b8ccbaa",
"text": "Creating meaning from a wide variety of available information and being able to choose what to learn are highly relevant skills for learning in a connectivist setting. In this work, various approaches have been utilized to gain insights into learning processes occurring within a network of learners and understand the factors that shape learners' interests and the topics to which learners devote a significant attention. This study combines different methods to develop a scalable analytic approach for a comprehensive analysis of learners' discourse in a connectivist massive open online course (cMOOC). By linking techniques for semantic annotation and graph analysis with a qualitative analysis of learner-generated discourse, we examined how social media platforms (blogs, Twitter, and Facebook) and course recommendations influence content creation and topics discussed within a cMOOC. Our findings indicate that learners tend to focus on several prominent topics that emerge very quickly in the course. They maintain that focus, with some exceptions, throughout the course, regardless of readings suggested by the instructor. Moreover, the topics discussed across different social media differ, which can likely be attributed to the affordances of different media. Finally, our results indicate a relatively low level of cohesion in the topics discussed which might be an indicator of a diversity of the conceptual coverage discussed by the course participants.",
"title": ""
},
{
"docid": "4783e35e54d0c7f555015427cbdc011d",
"text": "The language of deaf and dumb which uses body parts to convey the message is known as sign language. Here, we are doing a study to convert speech into sign language used for conversation. In this area we have many developed method to recognize alphabets and numerals of ISL (Indian sign language). There are various approaches for recognition of ISL and we have done a comparative studies between them [1].",
"title": ""
},
{
"docid": "876c0be7acfa5d7b9e863da5b7cfefdc",
"text": "In the era of big data, one is often confronted with the problem of high dimensional data for many machine learning or data mining tasks. Feature selection, as a dimension reduction technique, is useful for alleviating the curse of dimensionality while preserving interpretability. In this paper, we focus on unsupervised feature selection, as class labels are usually expensive to obtain. Unsupervised feature selection is typically more challenging than its supervised counterpart due to the lack of guidance from class labels. Recently, regression-based methods with L2,1 norms have gained much popularity as they are able to evaluate features jointly which, however, consider only linear correlations between features and pseudo-labels. In this paper, we propose a novel nonlinear joint unsupervised feature selection method based on kernel alignment. The aim is to find a succinct set of features that best aligns with the original features in the kernel space. It can evaluate features jointly in a nonlinear manner and provides a good ‘0/1’ approximation for the selection indicator vector. We formulate it as a constrained optimization problem and develop a Spectral Projected Gradient (SPG) method to solve the optimization problem. Experimental results on several real-world datasets demonstrate that our proposed method outperforms the state-of-the-art approaches significantly.",
"title": ""
},
{
"docid": "f967ad72daeb84e2fce38aec69997c8a",
"text": "While HCI has focused on multitasking with information workers, we report on multitasking among Millennials who grew up with digital media - focusing on college students. We logged computer activity and used biosensors to measure stress of 48 students for 7 days for all waking hours, in their in situ environments. We found a significant positive relationship with stress and daily time spent on computers. Stress is positively associated with the amount of multitasking. Conversely, stress is negatively associated with Facebook and social media use. Heavy multitaskers use significantly more social media and report lower positive affect than light multitaskers. Night habits affect multitasking the following day: late-nighters show longer duration of computer use and those ending their activities earlier in the day multitask less. Our study shows that college students multitask at double the frequency compared to studies of information workers. These results can inform designs for stress management of college students.",
"title": ""
},
{
"docid": "bbbbe3f926de28d04328f1de9bf39d1a",
"text": "The detection of fraudulent financial statements (FFS) is an important and challenging issue that has served as the impetus for many academic studies over the past three decades. Although nonfinancial ratios are generally acknowledged as the key factor contributing to the FFS of a corporation, they are usually excluded from early detection models. The objective of this study is to increase the accuracy of FFS detection by integrating the rough set theory (RST) and support vector machines (SVM) approaches, while adopting both financial and nonfinancial ratios as predictive variables. The results showed that the proposed hybrid approach (RSTþSVM) has the best classification rate as well as the lowest occurrence of Types I and II errors, and that nonfinancial ratios are indeed valuable information in FFS detection.",
"title": ""
},
{
"docid": "5c0849b95f2f6dd51f1356f5b25e2546",
"text": "Axillary hidradenitis suppurativa is a chronic and debilitating disease that primarily affects the axillae, perineum, and inframammary areas. Surgical removal of all the diseased skin constitutes the only efficient treatment. Covering an axillary fossa defect is challenging, due to the range of shoulder movement required. Indeed, shoulder movement may be compromised by scar contraction after inadequate surgery. The present study is the first to apply an inner arm perforator flap to the treatment of twelve axillary skin defects in 10 patients. The defect originated from extensive excision of recurrent hidradenitis suppurativa in the axilla. The technique used to cover the defect is a V-Y advancement flap or a propeller flap from the inner arm based on one to three perforators arising from the brachial artery or the superior ulnar collateral artery. The flap provides a tensionless wound closure and a generally unremarkable postoperative course in a short hospital stay. No major complications occurred. Two patients had minor delayed wound healing. Outcomes (including donor site morbidity, function and the cosmetic outcome) were very satisfactory in all cases. We consider that the inner arm perforator flap is a valuable new option for the reconstruction of axillary defects.",
"title": ""
}
] |
scidocsrr
|
e69e872948f131f16acf40c2288c7b81
|
Food Hardships and Child Behavior Problems among Low-income Children
|
[
{
"docid": "e91f0323df84e4c79e26822a799d54fd",
"text": "Researchers have renewed an interest in the harmful consequences of poverty on child development. This study builds on this work by focusing on one mechanism that links material hardship to child outcomes, namely the mediating effect of maternal depression. Using data from the National Maternal and Infant Health Survey, we found that maternal depression and poverty jeopardized the development of very young boys and girls, and to a certain extent, affluence buffered the deleterious consequences of depression. Results also showed that chronic maternal depression had severe implications for both boys and girls, whereas persistent poverty had a strong effect for the development of girls. The measures of poverty and maternal depression used in this study generally had a greater impact on measures of cognitive development than motor development.",
"title": ""
}
] |
[
{
"docid": "f2a677515866e995ff8e0e90561d7cbc",
"text": "Pattern matching and data abstraction are important concepts in designing programs, but they do not fit well together. Pattern matching depends on making public a free data type representation, while data abstraction depends on hiding the representation. This paper proposes the views mechanism as a means of reconciling this conflict. A view allows any type to be viewed as a free data type, thus combining the clarity of pattern matching with the efficiency of data abstraction.",
"title": ""
},
{
"docid": "ddb70e707b63b30ee8e3b98b43db12a0",
"text": "Taint-style vulnerabilities are a persistent problem in software development, as the recently discovered \"Heart bleed\" vulnerability strikingly illustrates. In this class of vulnerabilities, attacker-controlled data is passed unsanitized from an input source to a sensitive sink. While simple instances of this vulnerability class can be detected automatically, more subtle defects involving data flow across several functions or project-specific APIs are mainly discovered by manual auditing. Different techniques have been proposed to accelerate this process by searching for typical patterns of vulnerable code. However, all of these approaches require a security expert to manually model and specify appropriate patterns in practice. In this paper, we propose a method for automatically inferring search patterns for taint-style vulnerabilities in C code. Given a security-sensitive sink, such as a memory function, our method automatically identifies corresponding source-sink systems and constructs patterns that model the data flow and sanitization in these systems. The inferred patterns are expressed as traversals in a code property graph and enable efficiently searching for unsanitized data flows -- across several functions as well as with project-specific APIs. We demonstrate the efficacy of this approach in different experiments with 5 open-source projects. The inferred search patterns reduce the amount of code to inspect for finding known vulnerabilities by 94.9% and also enable us to uncover 8 previously unknown vulnerabilities.",
"title": ""
},
{
"docid": "78fafa0e14685d317ab88361d0a0dc8c",
"text": "Industry analysts expect volume production of integrated circuits on 300-mm wafers to start in 2001 or 2002. At that time, appropriate production equipment must be available. To meet this need, the MEDEA Project has supported us at ASM Europe in developing an advanced vertical batch furnace system for 300-mm wafers. Vertical furnaces are widely used for many steps in the production of integrated circuits. In volume production, these batch furnaces achieve a lower cost per production step than single-wafer processing methods. Applications for vertical furnaces are extensive, including the processing of low-pressure chemical vapor deposition (LPCVD) layers such as deposited oxides, polysilicon, and nitride. Furthermore, the furnaces can be used for oxidation and annealing treatments. As the complexity of IC technology increases, production equipment must meet the technology guidelines summarized in Table 1 from the Semiconductor Industry Association’s Roadmap. The table shows that the minimal feature size will sharply decrease, and likewise the particle size and level will decrease. The challenge in designing a new generation of furnaces for 300-mm wafers was to improve productivity as measured in throughput (number of wafers processed per hour), clean-room footprint, and capital cost. Therefore, we created a completely new design rather than simply upscaling the existing 200mm equipment.",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "459b07b78f3cbdcbd673881fd000da14",
"text": "The intersubject dependencies of false nonmatch rates were investigated for a minutiae-based biometric authentication process using single enrollment and verification measurements. A large number of genuine comparison scores were subjected to statistical inference tests that indicated that the number of false nonmatches depends on the subject and finger under test. This result was also observed if subjects associated with failures to enroll were excluded from the test set. The majority of the population (about 90%) showed a false nonmatch rate that was considerably smaller than the average false nonmatch rate of the complete population. The remaining 10% could be characterized as “goats” due to their relatively high probability for a false nonmatch. The image quality reported by the template extraction module only weakly correlated with the genuine comparison scores. When multiple verification attempts were investigated, only a limited benefit was observed for “goats,” since the conditional probability for a false nonmatch given earlier nonsuccessful attempts increased with the number of attempts. These observations suggest that (1) there is a need for improved identification of “goats” during enrollment (e.g., using dedicated signal-driven analysis and classification methods and/or the use of multiple enrollment images) and (2) there should be alternative means for identity verification in the biometric system under test in case of two subsequent false nonmatches.",
"title": ""
},
{
"docid": "ad3147f3a633ec8612dc25dfde4a4f0c",
"text": "A half-bridge integrated zero-voltage-switching (ZVS) full-bridge converter with reduced conduction loss for battery on-board chargers in electric vehicles (EVs) or plug-in hybrid electric vehicles (PHEVs) is proposed in this paper. The proposed converter features a reduction in primary-conduction loss and a lower secondary-voltage stress. In addition, the proposed converter has the most favorable characteristics as battery chargers as follows: a full ZVS capability and a significantly reduced output filter size due to the improved output waveform. In this paper, the circuit configuration, operation principle, and relevant analysis results of the proposed converter are described, followed by the experimental results on a prototype converter realized with a scale-downed 2-kW battery charger for EVs or PHEVs. The experimental results validate the theoretical analysis and show the effectiveness of the proposed converter as battery on-board chargers for EVs or PHEVs.",
"title": ""
},
{
"docid": "9304c82e4b19c2f5e23ca45e7f2c9538",
"text": "Previous work has shown that using the GPU as a brute force method for SELECT statements on a SQLite database table yields significant speedups. However, this requires that the entire table be selected and transformed from the B-Tree to row-column format. This paper investigates possible speedups by traversing B+ Trees in parallel on the GPU, avoiding the overhead of selecting the entire table to transform it into row-column format and leveraging the logarithmic nature of tree searches. We experiment with different input sizes, different orders of the B+ Tree, and batch multiple queries together to find optimal speedups for SELECT statements with single search parameters as well as range searches. We additionally make a comparison to a simple GPU brute force algorithm on a row-column version of the B+ Tree.",
"title": ""
},
{
"docid": "f2f2b48cd35d42d7abc6936a56aa580d",
"text": "Complete enumeration of all the sequences to establish global optimality is not feasible as the search space, for a general job-shop scheduling problem, ΠG has an upper bound of (n!). Since the early fifties a great deal of research attention has been focused on solving ΠG, resulting in a wide variety of approaches such as Branch and Bound, Simulated Annealing, Tabu Search, etc. However limited success has been achieved by these methods due to the shear intractability of this generic scheduling problem. Recently, much effort has been concentrated on using neural networks to solve ΠG as they are capable of adapting to new environments with little human intervention and can mimic thought processes. Major contributions in solving ΠG using a Hopfield neural network, as well as applications of back-error propagation to general scheduling problems are presented. To overcome the deficiencies in these applications a modified back-error propagation model, a simple yet powerful parallel architecture which can be successfully simulated on a personal computer, is applied to solve ΠG.",
"title": ""
},
{
"docid": "6e4c0b8625363e9acbe91c149af2c037",
"text": "OBJECTIVE\nThe present study assessed the effect of smoking on clinical, microbiological and immunological parameters in an experimental gingivitis model.\n\n\nMATERIAL AND METHODS\nTwenty-four healthy dental students were divided into two groups: smokers (n = 10); and nonsmokers (n = 14). Stents were used to prevent biofilm removal during brushing. Visible plaque index (VPI) and gingival bleeding index (GBI) were determined 5- on day -7 (running phase), baseline, 21 d (experimental gingivitis) and 28 d (resolution phase). Supragingival biofilm and gingival crevicular fluid were collected and assayed by checkerboard DNA-DNA hybridization and a multiplex analysis, respectively. Intragroup comparison was performed by Friedman and Dunn's multiple comparison tests, whereas the Mann-Whitney U-test was applied for intergroup analyses.\n\n\nRESULTS\nCessation of oral hygiene resulted in a significant increase in VPI, GBI and gingival crevicular fluid volume in both groups, which returned to baseline levels 7 d after oral hygiene was resumed. Smokers presented lower GBI than did nonsmokers (p < 0.05) at day 21. Smokers had higher total bacterial counts and higher proportions of red- and orange complex bacteria, as well as lower proportions of Actinomyces spp., and of purple- and yellow-complex bacteria (p < 0.05). Furthermore, the levels of key immune-regulatory cytokines, including interleukin (IL)-8, IL-17 and interferon-γ, were higher in smokers than in nonsmokers (p < 0.05).\n\n\nCONCLUSION\nSmokers and nonsmokers developed gingival inflammation after supragingival biofilm accumulation, but smokers had less bleeding, higher proportions of periodontal pathogens and distinct host-response patterns during the course of experimental gingivitis.",
"title": ""
},
{
"docid": "567445f68597ea8ff5e89719772819be",
"text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.",
"title": ""
},
{
"docid": "110b0837952be3e0aa01f4859190a116",
"text": "Automatic recommendation has become a popular research field: it allows the user to discover items that match their tastes. In this paper, we proposed an expanded autoencoder recommendation framework. The stacked autoencoders model is employed to extract the feature of input then reconstitution the input to do the recommendation. Then the side information of items and users is blended in the framework and the Huber function based regularization is used to improve the recommendation performance. The proposed recommendation framework is applied on the movie recommendation. Experimental results on a public database in terms of quantitative assessment show significant improvements over conventional methods.",
"title": ""
},
{
"docid": "29a2c5082cf4db4f4dde40f18c88ca85",
"text": "Human astrocytes are larger and more complex than those of infraprimate mammals, suggesting that their role in neural processing has expanded with evolution. To assess the cell-autonomous and species-selective properties of human glia, we engrafted human glial progenitor cells (GPCs) into neonatal immunodeficient mice. Upon maturation, the recipient brains exhibited large numbers and high proportions of both human glial progenitors and astrocytes. The engrafted human glia were gap-junction-coupled to host astroglia, yet retained the size and pleomorphism of hominid astroglia, and propagated Ca2+ signals 3-fold faster than their hosts. Long-term potentiation (LTP) was sharply enhanced in the human glial chimeric mice, as was their learning, as assessed by Barnes maze navigation, object-location memory, and both contextual and tone fear conditioning. Mice allografted with murine GPCs showed no enhancement of either LTP or learning. These findings indicate that human glia differentially enhance both activity-dependent plasticity and learning in mice.",
"title": ""
},
{
"docid": "d4f806a58d4cdc59cae675a765d4c6bc",
"text": "Our study examines whether ownership structure and boardroom characteristics have an effect on corporate financial fraud in China. The data come from the enforcement actions of the Chinese Securities Regulatory Commission (CSRC). The results from univariate analyses, where we compare fraud and nofraud firms, show that ownership and board characteristics are important in explaining fraud. However, using a bivariate probit model with partial observability we demonstrate that boardroom characteristics are important, while the type of owner is less relevant. In particular, the proportion of outside directors, the number of board meetings, and the tenure of the chairman are associated with the incidence of fraud. Our findings have implications for the design of appropriate corporate governance systems for listed firms. Moreover, our results provide information that can inform policy debates within the CSRC. D 2005 Elsevier B.V. All rights reserved. JEL classification: G34",
"title": ""
},
{
"docid": "40a6cc06e0e90fba161bc8bc8ec6446d",
"text": "Toxic comment classification has become an active research field with many recently proposed approaches. However, while these approaches address some of the task’s challenges others still remain unsolved and directions for further research are needed. To this end, we compare different deep learning and shallow approaches on a new, large comment dataset and propose an ensemble that outperforms all individual models. Further, we validate our findings on a second dataset. The results of the ensemble enable us to perform an extensive error analysis, which reveals open challenges for state-of-the-art methods and directions towards pending future research. These challenges include missing paradigmatic context and inconsistent dataset labels.",
"title": ""
},
{
"docid": "35dacb4b15e5c8fbd91cee6da807799a",
"text": "Stochastic gradient algorithms have been the main focus of large-scale learning problems and led to important successes in machine learning. The convergence of SGD depends on the careful choice of learning rate and the amount of the noise in stochastic estimates of the gradients. In this paper, we propose a new adaptive learning rate algorithm, which utilizes curvature information for automatically tuning the learning rates. The information about the element-wise curvature of the loss function is estimated from the local statistics of the stochastic first order gradients. We further propose a new variance reduction technique to speed up the convergence. In our experiments with deep neural networks, we obtained better performance compared to the popular stochastic gradient algorithms.",
"title": ""
},
{
"docid": "6ceab65cc9505cf21824e9409cf67944",
"text": "Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for prediction task and utilize Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on largescale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level",
"title": ""
},
{
"docid": "17676785398d4ed24cc04cb3363a7596",
"text": "Generative models (GMs) such as Generative Adversary Network (GAN) and Variational Auto-Encoder (VAE) have thrived these years and achieved high quality results in generating new samples. Especially in Computer Vision, GMs have been used in image inpainting, denoising and completion, which can be treated as the inference from observed pixels to corrupted pixels. However, images are hierarchically structured which are quite different from many real-world inference scenarios with non-hierarchical features. These inference scenarios contain heterogeneous stochastic variables and irregular mutual dependences. Traditionally they are modeled by Bayesian Network (BN). However, the learning and inference of BN model are NP-hard thus the number of stochastic variables in BN is highly constrained. In this paper, we adapt typical GMs to enable heterogeneous learning and inference in polynomial time. We also propose an extended autoregressive (EAR) model and an EAR with adversary loss (EARA) model and give theoretical results on their effectiveness. Experiments on several BN datasets show that our proposed EAR model achieves the best performance in most cases compared to other GMs. Except for black box analysis, we’ve also done a serial of experiments on Markov border inference of GMs for white box analysis and give theoretical results.",
"title": ""
},
{
"docid": "0300e887815610a2f7d26994d027fe78",
"text": "This paper presents a computer vision based method for bar code reading. Bar code's geometric features and the imaging system parameters are jointly extracted from a tilted low resolution bar code image. This approach enables the use of cost effective cameras, increases the depth of acquisition, and provides solutions for cases where image quality is low. The performance of the algorithm is tested on synthetic and real test images, and extension to a 2D bar code (PDF417) is also discussed.",
"title": ""
},
{
"docid": "84b018fa45e06755746309014854bb9a",
"text": "For years, ontologies have been known in computer science as consensual models of domains of discourse, usually implemented as formal definitions of the relevant conceptual entities. Researchers have written much about the potential benefits of using them, and most of us regard ontologies as central building blocks of the semantic Web and other semantic systems. Unfortunately, the number and quality of actual, \"non-toy\" ontologies available on the Web today is remarkably low. This implies that the semantic Web community has yet to build practically useful ontologies for a lot of relevant domains in order to make the semantic Web a reality. Theoretically minded advocates often assume that the lack of ontologies is because the \"stupid business people haven't realized ontologies' enormous benefits.\" As a liberal market economist, the author assumes that humans can generally figure out what's best for their well-being, at least in the long run, and that they act accordingly. In other words, the fact that people haven't yet created as many useful ontologies as the ontology research community would like might indicate either unresolved technical limitations or the existence of sound rationales for why individuals refrain from building them - or both. Indeed, several social and technical difficulties exist that put a brake on developing and eventually constrain the space of possible ontologies",
"title": ""
},
{
"docid": "dd723b23b4a7d702f8d34f15b5c90107",
"text": "Smartphones have become a prominent part of our technology driven world. When it comes to uncovering, analyzing and submitting evidence in today's criminal investigations, mobile phones play a more critical role. Thus, there is a strong need for software tools that can help investigators in the digital forensics field effectively analyze smart phone data to solve crimes.\n This paper will accentuate how digital forensic tools assist investigators in getting data acquisition, particularly messages, from applications on iOS smartphones. In addition, we will lay out the framework how to build a tool for verifying data integrity for any digital forensics tool.",
"title": ""
}
] |
scidocsrr
|
1bf475f2032721339c11fdc86810f226
|
Application impersonation: problems of OAuth and API design in online social networks
|
[
{
"docid": "29e5d267bebdeb2aa22b137219b4407e",
"text": "Social networks are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone.\n This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs.",
"title": ""
}
] |
[
{
"docid": "65289003b014d86eed03baad6aa1ed83",
"text": "Camera calibration is one of the long existing research issues in computer vision domain. Typical calibration methods take two steps for the procedure: control points localization and camera parameters computation. In practical situation, control points localization is a time-consuming task because the localization puts severe assumption that the calibration object should be visible in all images. To satisfy the assumption, users may avoid moving the calibration object near the image boundary. As a result, we estimate poor quality parameters. In this paper, we aim to solve this partial occlusion problem of the calibration object. To solve the problem, we integrate a planar marker tracking algorithm that can track its target marker even with partial occlusion. Specifically, we localize control points by a RANdom DOts Markers (RANDOM) tracking algorithm that uses markers with randomly distributed circle dots. Once the control points are localized, they are used to estimate the camera parameters. The proposed method is validated with both synthetic and real world experiments. The experimental results show that the proposed method realizes camera calibration from image on which part of the calibration object is visible.",
"title": ""
},
{
"docid": "8b5475291bc3976152937762684e71bc",
"text": "The use of natural dialog has great significance in the design of interactive tutoring systems. The nature of student queries can be confined to a small set of templates based on the task domain. This paper describes the development of a chatbot for medical students, that is based on the open source AIML based Chatterbean. We deploy the widely available Unified Medical Language System (UMLS) as the domain knowledge source for generating responses to queries. The AIML based chatbot is customized to convert natural language queries into relevant SQL queries. The SQL queries are run against the knowledge base and results returned to the user in natural dialog. Student survey was carried out to identify various queries posed by students. The chatbot was designed to address common template queries. Knowledge inference techniques were applied to generate responses for queries for which knowledge was not explicitly encoded. Query responses were rated by three experts on a 1-5 point likert scale, who agreed among themselves with Pearson Correlation Coefficient of 0. 54 and p References",
"title": ""
},
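A minimal, hypothetical sketch of the template-to-SQL idea described in the passage above, in Python. The question patterns, table and column names are invented for illustration and are not taken from the cited chatbot or from UMLS.

```python
import re

# Hypothetical template patterns -> parameterized SQL (schema names are invented).
TEMPLATES = [
    (re.compile(r"what is (?P<term>.+)\??$", re.I),
     "SELECT definition FROM concepts WHERE name = ?"),
    (re.compile(r"what causes (?P<term>.+)\??$", re.I),
     "SELECT source FROM concept_relations WHERE target = ? AND relation = 'caused_by'"),
]

def to_sql(question: str):
    """Match a natural-language question against known templates.

    Returns (sql, params) or None if no template applies."""
    for pattern, sql in TEMPLATES:
        match = pattern.search(question.strip())
        if match:
            return sql, (match.group("term").rstrip("?").strip(),)
    return None

if __name__ == "__main__":
    print(to_sql("What is myocardial infarction?"))
    # -> ("SELECT definition FROM concepts WHERE name = ?", ("myocardial infarction",))
```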
{
"docid": "b581717dca731a6fd216d8d4d9530b9c",
"text": "In the last few years, there has been increasing interest from the agent community in the use of techniques from decision theory and game theory. Our aims in this article are firstly to briefly summarize the key concepts of decision theory and game theory, secondly to discuss how these tools are being applied in agent systems research, and finally to introduce this special issue of Autonomous Agents and Multi-Agent Systems by reviewing the papers that appear.",
"title": ""
},
{
"docid": "f73881fdb6b732e7a6a79cd13618e649",
"text": "Information exchange among coalition command and control (C2) systems in network-enabled environments requires ensuring that each recipient system understands and interprets messages exactly as the source system intended. The Semantic Interoperability Logical Framework (SILF) aims at meeting NATO's needs for semantically correct interoperability between C2 systems, as well as the need to adapt quickly to new missions and new combinations of coalition partners and systems. This paper presents an overview of the SILF framework and performs a detailed analysis of a case study for implementing SILF in a real-world military scenario.",
"title": ""
},
{
"docid": "2c9f7053d9bcd6bc421b133dd7e62d08",
"text": "Recurrent neural networks (RNN) combined with attention mechanism has proved to be useful for various NLP tasks including machine translation, sequence labeling and syntactic parsing. The attention mechanism is usually applied by estimating the weights (or importance) of inputs and taking the weighted sum of inputs as derived features. Although such features have demonstrated their effectiveness, they may fail to capture the sequence information due to the simple weighted sum being used to produce them. The order of the words does matter to the meaning or the structure of the sentences, especially for syntactic parsing, which aims to recover the structure from a sequence of words. In this study, we propose an RNN-based attention to capture the relevant and sequence-preserved features from a sentence, and use the derived features to perform the dependency parsing. We evaluated the graph-based and transition-based parsing models enhanced with the RNN-based sequence-preserved attention on the both English PTB and Chinese CTB datasets. The experimental results show that the enhanced systems were improved with significant increase in parsing accuracy.",
"title": ""
},
{
"docid": "9494d20b4e518df61896a999ddb8258f",
"text": "HIT failure rate has been estimated to be as high as 50 – 70 % and is considered a major barrier to adoption of IT by the healthcare industry. Factors like staff resistance to change and non-compliance, inadequate management, policies and procedures, and technical failures emerge as primary reasons for HIT implementation failure (Kaplan & Harris-Salamone, 2009). Strategies like interdisciplinary collaboration, open communication, staff training and support, and strong leadership could mitigate risk of failure due to above listed factors. Technological assessment of workflow and decision-making processes, and application of cognitive and human factor engineering principles are important for developing a system that would meet organizational and user needs.",
"title": ""
},
{
"docid": "16560cdfe50fc908ae46abf8b82e620f",
"text": "While there seems to be a general agreement that next years' systems will include many processing cores, it is often overlooked that these systems will also include an increasing number of different cores (we already see dedicated units for graphics or network processing). Orchestrating the diversity of processing functionality is going to be a major challenge in the upcoming years, be it to optimize for performance or for minimal energy consumption.\n We expect field-programmable gate arrays (FPGAs or \"programmable hardware\") to soon play the role of yet another processing unit, found in commodity computers. It is clear that the new resource is going to be too precious to be ignored by database systems, but it is unclear how FPGAs could be integrated into a DBMS. With a focus on database use, this tutorial introduces into the emerging technology, demonstrates its potential, but also pinpoints some challenges that need to be addressed before FPGA-accelerated database systems can go mainstream. Attendees will gain an intuition of an FPGA development cycle, receive guidelines for a \"good\" FPGA design, but also learn the limitations that hardware-implemented database processing faces. Our more high-level ambition is to spur a broader interest in database processing on novel hardware technology.",
"title": ""
},
{
"docid": "28ff49eb7af07fdf31694b6280fe8286",
"text": "In this paper, the design of unbalanced fed 180° phase shifter with a wideband performances is proposed. This phase shifter uses a single dielectric substrate, and consists of multi-section Wilkinson divider, reference line, and phase inverter containing balanced-unbalanced transition in the input and output port. The simulated and measured results show that this device provides 180° phase shift with low insertion loss in the frequency band from 1 to 10 GHz.",
"title": ""
},
{
"docid": "b0c4b345063e729d67396dce77e677a6",
"text": "Work done on the implementation of a fuzzy logic controller in a single intersection of two one-way streets is presented. The model of the intersection is described and validated, and the use of the theory of fuzzy sets in constructing a controller based on linguistic control instructions is introduced. The results obtained from the implementation of the fuzzy logic controller are tabulated against those corresponding to a conventional effective vehicle-actuated controller. With the performance criterion being the average delay of vehicles, it is shown that the use of a fuzzy logic controller results in a better performance.",
"title": ""
},
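To make the idea of a linguistic rule base concrete, the following Python sketch implements a toy fuzzy controller that maps queue length to a green-time extension. The membership functions, rules and defuzzification grid are assumptions for illustration, not the controller evaluated in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms for queue length on the red approach (vehicles) -- assumed values.
queue_terms = {"short": (0, 0, 5), "medium": (2, 6, 10), "long": (6, 12, 20)}
# Linguistic terms for the recommended green-time extension (seconds) -- assumed values.
ext_terms = {"zero": (0, 0, 4), "some": (2, 6, 10), "much": (8, 14, 20)}

def extension(queue_len: float) -> float:
    """Mamdani-style inference with centroid defuzzification on a coarse grid."""
    # Rules: short queue -> zero extension, medium -> some, long -> much.
    firing = {
        "zero": tri(queue_len, *queue_terms["short"]),
        "some": tri(queue_len, *queue_terms["medium"]),
        "much": tri(queue_len, *queue_terms["long"]),
    }
    num = den = 0.0
    for t in [0.5 * i for i in range(41)]:          # candidate extensions 0..20 s
        mu = max(min(w, tri(t, *ext_terms[term])) for term, w in firing.items())
        num += mu * t
        den += mu
    return num / den if den > 0 else 0.0

print(round(extension(8.0), 2))   # longer queues yield a longer green extension
```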
{
"docid": "b03273ada7d85d37e4c44f1195c9a450",
"text": "Nowadays the trend to solve optimization problems is to use s pecific algorithms rather than very general ones. The UNLocBoX provides a general framework allowing the user to design his own algorithms. To do so, the framework try to stay as close from the mathematical problem as possible. M ore precisely, the UNLocBoX is a Matlab toolbox designed to solve convex optimi zation problem of the form",
"title": ""
},
{
"docid": "aadd1d3e22b767a12b395902b1b0c6ca",
"text": "Long-term situation prediction plays a crucial role for intelligent vehicles. A major challenge still to overcome is the prediction of complex downtown scenarios with multiple road users, e.g., pedestrians, bikes, and motor vehicles, interacting with each other. This contribution tackles this challenge by combining a Bayesian filtering technique for environment representation, and machine learning as long-term predictor. More specifically, a dynamic occupancy grid map is utilized as input to a deep convolutional neural network. This yields the advantage of using spatially distributed velocity estimates from a single time step for prediction, rather than a raw data sequence, alleviating common problems dealing with input time series of multiple sensors. Furthermore, convolutional neural networks have the inherent characteristic of using context information, enabling the implicit modeling of road user interaction. Pixel-wise balancing is applied in the loss function counteracting the extreme imbalance between static and dynamic cells. One of the major advantages is the unsupervised learning character due to fully automatic label generation. The presented algorithm is trained and evaluated on multiple hours of recorded sensor data and compared to Monte-Carlo simulation. Experiments show the ability to model complex interactions.",
"title": ""
},
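A minimal PyTorch sketch of the kind of model the passage above describes: a fully convolutional network that takes a dynamic occupancy grid (occupancy plus velocity channels) and predicts future occupancy, with per-pixel weighting in the loss to counteract the imbalance between static and dynamic cells. The channel layout, network size and weighting scheme are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridPredictor(nn.Module):
    """Tiny fully convolutional net: input channels = occupancy + vx + vy."""
    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),            # logit of future occupancy per cell
        )

    def forward(self, grid):
        return self.net(grid)

def weighted_occupancy_loss(logits, target, dynamic_mask, dyn_weight: float = 10.0):
    """Binary cross-entropy with a larger weight on cells flagged as dynamic."""
    weight = torch.ones_like(target) + (dyn_weight - 1.0) * dynamic_mask
    return F.binary_cross_entropy_with_logits(logits, target, weight=weight)

# Example with random data: batch of 4 grids of size 128x128.
model = GridPredictor()
grid = torch.rand(4, 3, 128, 128)
target = (torch.rand(4, 1, 128, 128) > 0.9).float()      # sparse future occupancy
dynamic_mask = (grid[:, :1] * target > 0.5).float()       # stand-in for dynamic cells
loss = weighted_occupancy_loss(model(grid), target, dynamic_mask)
loss.backward()
```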
{
"docid": "9e037018da3ebcd7967b9fbf07c83909",
"text": "Studying temporal dynamics of topics in social media is very useful to understand online user behaviors. Most of the existing work on this subject usually monitors the global trends, ignoring variation among communities. Since users from different communities tend to have varying tastes and interests, capturing communitylevel temporal change can improve the understanding and management of social content. Additionally, it can further facilitate the applications such as community discovery, temporal prediction and online marketing. However, this kind of extraction becomes challenging due to the intricate interactions between community and topic, and intractable computational complexity. In this paper, we take a unified solution towards the communitylevel topic dynamic extraction. A probabilistic model, CosTot (Community Specific Topics-over-Time) is proposed to uncover the hidden topics and communities, as well as capture community-specific temporal dynamics. Specifically, CosTot considers text, time, and network information simultaneously, and well discovers the interactions between community and topic over time. We then discuss the approximate inference implementation to enable scalable computation of model parameters, especially for large social data. Based on this, the application layer support for multi-scale temporal analysis and community exploration is also investigated. We conduct extensive experimental studies on a large real microblog dataset, and demonstrate the superiority of proposed model on tasks of time stamp prediction, link prediction and topic perplexity.",
"title": ""
},
{
"docid": "8443970c610504030287243d5910b697",
"text": "Knowledge representation is an important, long-history topic in AI, and there have been a large amount of work for knowledge graph embedding which projects symbolic entities and relations into low-dimensional, real-valued vector space. However, most embedding methods merely concentrate on data fitting and ignore the explicit semantic expression, leading to uninterpretable representations. Thus, traditional embedding methods have limited potentials for many applications such as question answering, and entity classification. To this end, this paper proposes a semantic representation method for knowledge graph (KSR), which imposes a two-level hierarchical generative process that globally extracts many aspects and then locally assigns a specific category in each aspect for every triple. Since both aspects and categories are semantics-relevant, the collection of categories in each aspect is treated as the semantic representation of this triple. Extensive experiments justify our model outperforms other state-of-the-art baselines substantially.",
"title": ""
},
{
"docid": "73e6082c387eab6847b8ca853f38c6f3",
"text": "OBJECTIVES\nThis study explored the effectiveness of group music intervention against agitated behavior in elderly persons with dementia.\n\n\nMETHODS\nThis was an experimental study using repeated measurements. Subjects were elderly persons who suffered from dementia and resided in nursing facilities. In total, 104 participants were recruited by permuted block randomization and of the 100 subjects who completed this study, 49 were in the experimental group and 51 were in the control group. The experimental group received a total of twelve 30-min group music intervention sessions, conducted twice a week for six consecutive weeks, while the control group participated in normal daily activities. In order to measure the effectiveness of the therapeutic sessions, assessments were conducted before the intervention, at the 6th and 12th group sessions, and at 1 month after cessation of the intervention. Longitudinal effects were analyzed by means of generalized estimating equations (GEEs).\n\n\nRESULTS\nAfter the group music therapy intervention, the experimental group showed better performance at the 6th and 12th sessions, and at 1 month after cessation of the intervention based on reductions in agitated behavior in general, physically non-aggressive behavior, verbally non-aggressive behavior, and physically aggressive behavior, while a reduction in verbally aggressive behavior was shown only at the 6th session.\n\n\nCONCLUSIONS\nGroup music intervention alleviated agitated behavior in elderly persons with dementia. We suggest that nursing facilities for demented elderly persons incorporate group music intervention in routine activities in order to enhance emotional relaxation, create inter-personal interactions, and reduce future agitated behaviors.",
"title": ""
},
{
"docid": "6d4d55a0d8b54afd6832ce2bafc740c8",
"text": "This paper presents a computer vision system for automatic facial expression recognition (AFER). The robust AFER system can be applied in many areas such as emotion science, clinical psychology and pain assessment it includes facial feature extraction and pattern recognition phases that discriminates among different facial expressions. In feature extraction phase a combination between holistic and analytic approaches is presented to extract 83 facial expression features. Expression recognition is performed by using radial basis function based artificial neural network to recognize the six basic emotions (anger, fear, disgust, joy, surprise, sadness). The experimental results show that 96% recognition rate can be achieved when applying the proposed system on person-dependent database and 93.5% when applying on person-independent one.",
"title": ""
},
{
"docid": "1d1be59a2c3d3b11039f9e4b2e8e351c",
"text": "The impact of digital mobility services on individual traffic behavior within cities has increased significantly over the last years. Therefore, the aim of this paper is to provide an overview of existing digital services for urban transportation. Towards this end, we analyze 59 digital mobility services available as smartphone applications or web services. Building on a framework for service system modularization, we identified the services’ modules and data sources. While some service modules and data sources are integrated in various mobility services, others are only used in specific services, even though they would generate value in other services as well. This overview provides the basis for future design science research in the area of digital service systems for sustainable transportation. Based on the overview, practitioners from industry and public administration can identify potential for innovative service and foster co-creation and innovation within existing service systems.",
"title": ""
},
{
"docid": "8e3b73204d1d62337c4b2aabdbaa8973",
"text": "The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space. We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary. Through a systematic empirical investigation, we show that state-of-the-art deep nets learn connected classification regions, and that the decision boundary in the vicinity of datapoints is flat along most directions. We further draw an essential connection between two seemingly unrelated properties of deep networks: their sensitivity to additive perturbations in the inputs, and the curvature of their decision boundary. The directions where the decision boundary is curved in fact characterize the directions to which the classifier is the most vulnerable. We finally leverage a fundamental asymmetry in the curvature of the decision boundary of deep nets, and propose a method to discriminate between original images, and images perturbed with small adversarial examples. We show the effectiveness of this purely geometric approach for detecting small adversarial perturbations in images, and for recovering the labels of perturbed images.",
"title": ""
},
{
"docid": "6bdef0bc4a07e1baf993ca9fe1b61786",
"text": "Structured Problem Report Formats have been key to improving the assessment of usability methods. Once extended to record analysts' rationales, they not only reveal analyst behaviour but also change it. We report on two versions of an Extended Structured Report Format for usability problems, briefly noting their impact on analyst behaviour, but more extensively presenting insights into decision making during usability inspection, thus validating and refining a model of evaluation performance.",
"title": ""
},
{
"docid": "8090121a59c1070aacc7a20941898551",
"text": "In this article, I explicitly solve dynamic portfolio choice problems, up to the solution of an ordinary differential equation (ODE), when the asset returns are quadratic and the agent has a constant relative risk aversion (CRRA) coefficient. My solution includes as special cases many existing explicit solutions of dynamic portfolio choice problems. I also present three applications that are not in the literature. Application 1 is the bond portfolio selection problem when bond returns are described by ‘‘quadratic term structure models.’’ Application 2 is the stock portfolio selection problem when stock return volatility is stochastic as in Heston model. Application 3 is a bond and stock portfolio selection problem when the interest rate is stochastic and stock returns display stochastic volatility. (JEL G11)",
"title": ""
},
{
"docid": "fcda8929585bc0e27e138070674dc455",
"text": "Also referred to as Gougerot-Carteaud syndrome, confluent and reticulated papillomatosis (CARP) is an acquired keratinization disorder of uncertain etiology. Clinically, it is typically characterized by symptomless, grayish-brown, scaly, flat papules that coalesce into larger patches with a reticular pattern at the edges. Sites most commonly affected include the anterior and/or posterior upper trunk region [1–3]. Although its clinical diagnosis is usually straightforward, the distinction from similar pigmentary dermatoses may sometimes be challenging, especially in case of lesions occurring in atypical locations [1–3]. In recent years, dermatoscopy has been shown to be useful in the clinical diagnosis of several “general” skin disorders, thus reducing the number of cases requiring biopsy [4–8]. The objective of the present study was to describe the dermatoscopic features of CARP in order to facilitate its noninvasive diagnosis. Eight individuals (3 women/5 men; mean age 29.2 years, range 18–51 years; mean disease duration 3 months, range 1–9 months) with CARP (diagnosed on the basis of histological findings and clinical criteria) [1] were included in the study. None of the patients had been using systemic or topical therapies for at least six weeks. In each patient, a handheld noncontact polarized dermatoscope (DermLite DL3 × 10; 3 Gen, San Juan Capistrano, CA, USA) equipped with a camera (Coolpix® 4500 Nikon Corporation, Melville, NY, USA) was used to take a dermatoscopic picture of a single target lesion (flat desquamative papule). All pictures were evaluated for the presence of specific morphological patterns by two of the authors (EE, GS). In all cases (100.0 %), we observed the same findings: fine whitish scaling as well as homogeneous, brownish, more or less defined, flat polygonal globules separated by whitish/ pale striae, thus creating a cobblestone pattern (Figure 1a, b). The shade of the flat globules was dark-brown (Figure 1a) in five (62.5 %) and light-brown (Figure 1b) in three (37.5 %) cases. To the best of our knowledge, there has only been one previous publication on dermatoscopy of CARP. In that particular study, findings included superficial white scales (likely due to parakeratosis and compact hyperkeratosis), brownish pigmentation with poorly defined borders (thought to correspond to hyperpigmentation of the basal layer), and a pattern of “sulci and gyri” (depressions and elevations, presumably as a result of papillomatosis) [9]. In the present study, we were able to confirm some of the aforementioned findings (white scaling and brownish pigmentation), however, the brownish areas in our patients consistently showed a cobblestone pattern (closely aggregated, squarish/polygonal, flat globules). This peculiar aspect could be due to the combination of basal hyperpigmentation, acanthosis, and papillomatosis, with relative sparing of the normal network of furrows of the skin surface. Accordingly, one might speculate that the different pattern of pigmentation found in the previous study might have resulted from the disruption of these physiological grooves due to more pronounced/irregular acanthosis/ papillomatosis. Remarkably, the detection of fine whitish scaling and brownish areas in a cobblestone or “sulci and gyri” pattern might be useful in distinguishing CARP from its differential diagnoses [10] (Table 1). These primarily include 1) tinea (pityriasis) versicolor, which is characterized by a pigmented network composed of brownish stripes and fine scales [11],",
"title": ""
}
] |
scidocsrr
|
b299604767a625ea5384e321d2bb238d
|
Generalized Thompson sampling for sequential decision-making and causal inference
|
[
{
"docid": "3734fd47cf4e4e5c00f660cbb32863f0",
"text": "We describe a new Bayesian click-through rate (CTR) prediction algorithm used for Sponsored Search in Microsoft’s Bing search engine. The algorithm is based on a probit regression model that maps discrete or real-valued input features to probabilities. It maintains Gaussian beliefs over weights of the model and performs Gaussian online updates derived from approximate message passing. Scalability of the algorithm is ensured through a principled weight pruning procedure and an approximate parallel implementation. We discuss the challenges arising from evaluating and tuning the predictor as part of the complex system of sponsored search where the predictions made by the algorithm decide about future training sample composition. Finally, we show experimental results from the production system and compare to a calibrated Naïve Bayes algorithm.",
"title": ""
}
] |
[
{
"docid": "c39b143861d1e0c371ec1684bb29f4cc",
"text": "Data races are a particularly unpleasant kind of threading bugs. They are hard to find and reproduce -- you may not observe a bug during the entire testing cycle and will only see it in production as rare unexplainable failures. This paper presents ThreadSanitizer -- a dynamic detector of data races. We describe the hybrid algorithm (based on happens-before and locksets) used in the detector. We introduce what we call dynamic annotations -- a sort of race detection API that allows a user to inform the detector about any tricky synchronization in the user program. Various practical aspects of using ThreadSanitizer for testing multithreaded C++ code at Google are also discussed.",
"title": ""
},
{
"docid": "60922247ab6ec494528d3a03c0909231",
"text": "This paper proposes a new \"zone controlled induction heating\" (ZCIH) system. The ZCIH system consists of two or more sets of a high-frequency inverter and a split work coil, which adjusts the coil current amplitude in each zone independently. The ZCIH system has capability of controlling the exothermic distribution on the work piece to avoid the strain caused by a thermal expansion. As a result, the ZCIH system enables a rapid heating performance as well as an temperature uniformity. This paper proposes current phase control making the coil current in phase with each other, to adjust the coil current amplitude even when a mutual inductance exists between the coils. This paper presents operating principle, theoretical analysis, and experimental results obtained from a laboratory setup and a six-zone prototype for a semiconductor processing.",
"title": ""
},
{
"docid": "c1f803e02ea7d6ef3bf6644e3aa17862",
"text": "Recurrent neural networks are prime candidates for learning evolutions in multi-dimensional time series data. The performance of such a network is judged by the loss function, which is aggregated into a scalar value that decreases during training. Observing only this number hides the variation that occurs within the typically large training and testing data sets. Understanding these variations is of highest importance to adjust network hyperparameters, such as the number of neurons, number of layers or to adjust the training set to include more representative examples. In this paper, we design a comprehensive and interactive system that allows users to study the output of recurrent neural networks on both the complete training data and testing data. We follow a coarse-to-fine strategy, providing overviews of annual, monthly and daily patterns in the time series and directly support a comparison of different hyperparameter settings. We applied our method to a recurrent convolutional neural network that was trained and tested on 25 years of climate data to forecast meteorological attributes, such as temperature, pressure and wind velocity. We further visualize the quality of the forecasting models, when applied to various locations on Earth and we examine the combination of several forecasting models. This is the authors preprint. The definitive version is available at http://diglib.eg.org/ and http://onlinelibrary.wiley.com/.",
"title": ""
},
{
"docid": "e141b36a3e257c4b8155cdf0682a0143",
"text": "Major depressive disorder is a common mental disorder that affects almost 7% of the adult U.S. population. The 2017 Audio/Visual Emotion Challenge (AVEC) asks participants to build a model to predict depression levels based on the audio, video, and text of an interview ranging between 7-33 minutes. Since averaging features over the entire interview will lose most temporal information, how to discover, capture, and preserve useful temporal details for such a long interview are significant challenges. Therefore, we propose a novel topic modeling based approach to perform context-aware analysis of the recording. Our experiments show that the proposed approach outperforms context-unaware methods and the challenge baselines for all metrics.",
"title": ""
},
{
"docid": "d79b440e5417fae517286206394e8685",
"text": "When using plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts. Previous approaches have focused on avoiding aliasing by pre-processing the acquired light field via prefiltering, demosaicing, reparameterization, etc. In this paper, we present a different solution that first detects and then removes aliasing at the light field refocusing stage. Different from previous frequency domain aliasing analysis, we carry out a spatial domain analysis to reveal whether the aliasing would occur and uncover where in the image it would occur. The spatial analysis also facilitates easy separation of the aliasing vs. non-aliasing regions and aliasing removal. Experiments on both synthetic scene and real light field camera array data sets demonstrate that our approach has a number of advantages over the classical prefiltering and depth-dependent light field rendering techniques.",
"title": ""
},
{
"docid": "67978cd2f94cabb45c1ea2c571cef4de",
"text": "Studies identifying oil shocks using structural vector autoregressions (VARs) reach different conclusions on the relative importance of supply and demand factors in explaining oil market fluctuations. This disagreement is due to different assumptions on the oil supply and demand elasticities that determine the identification of the oil shocks. We provide new estimates of oil-market elasticities by combining a narrative analysis of episodes of large drops in oil production with country-level instrumental variable regressions. When the estimated elasticities are embedded into a structural VAR, supply and demand shocks play an equally important role in explaining oil prices and oil quantities. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "3ee3cf039b1bc03d6b6e504ae87fc62f",
"text": "Objective: This paper tackles the problem of transfer learning in the context of electroencephalogram (EEG)-based brain–computer interface (BCI) classification. In particular, the problems of cross-session and cross-subject classification are considered. These problems concern the ability to use data from previous sessions or from a database of past users to calibrate and initialize the classifier, allowing a calibration-less BCI mode of operation. Methods: Data are represented using spatial covariance matrices of the EEG signals, exploiting the recent successful techniques based on the Riemannian geometry of the manifold of symmetric positive definite (SPD) matrices. Cross-session and cross-subject classification can be difficult, due to the many changes intervening between sessions and between subjects, including physiological, environmental, as well as instrumental changes. Here, we propose to affine transform the covariance matrices of every session/subject in order to center them with respect to a reference covariance matrix, making data from different sessions/subjects comparable. Then, classification is performed both using a standard minimum distance to mean classifier, and through a probabilistic classifier recently developed in the literature, based on a density function (mixture of Riemannian Gaussian distributions) defined on the SPD manifold. Results: The improvements in terms of classification performances achieved by introducing the affine transformation are documented with the analysis of two BCI datasets. Conclusion and significance: Hence, we make, through the affine transformation proposed, data from different sessions and subject comparable, providing a significant improvement in the BCI transfer learning problem.",
"title": ""
},
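A minimal sketch of the recentering step described above: the covariance matrices of each session are whitened by a reference matrix so that the session is centered at the identity, making sessions and subjects comparable. As an assumption of the sketch, the reference is taken as the arithmetic mean of the session's covariances rather than the Riemannian mean used in the Riemannian-geometry literature.

```python
import numpy as np

def inv_sqrtm(C):
    """Inverse matrix square root of a symmetric positive definite matrix."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

def recenter(covs):
    """Affine-transform covariances so the session is centered at the identity.

    covs: array of shape (n_trials, n_channels, n_channels)."""
    C_ref = covs.mean(axis=0)          # assumption: arithmetic mean as reference
    R = inv_sqrtm(C_ref)
    return np.stack([R @ C @ R.T for C in covs])

# Toy example: random SPD matrices standing in for two "sessions".
rng = np.random.default_rng(0)
def random_spd(n_trials, n_ch):
    X = rng.standard_normal((n_trials, n_ch, n_ch))
    return np.stack([x @ x.T + n_ch * np.eye(n_ch) for x in X])

session_a = recenter(random_spd(50, 8))
session_b = recenter(random_spd(50, 8))
# After recentering, both sessions have (approximately) the identity as their mean.
print(np.allclose(session_a.mean(axis=0), np.eye(8), atol=1e-8))
```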
{
"docid": "12adb5e324d971d2c752f2193cec3126",
"text": "Despite recent excitement generated by the P2P paradigm and despite surprisingly fast deployment of some P2P applications, there are few quantitative evaluations of P2P systems behavior. Due to its open architecture and achieved scale, Gnutella is an interesting P2P architecture case study. Gnutella, like most other P2P applications, builds at the application level a virtual network with its own routing mechanisms. The topology of this overlay network and the routing mechanisms used have a significant influence on application properties such as performance, reliability, and scalability. We built a ‘crawler’ to extract the topology of Gnutella’s application level network, we analyze the topology graph and evaluate generated network traffic. We find that although Gnutella is not a pure power-law network, its current configuration has the benefits and drawbacks of a power-law structure. These findings lead us to propose changes to Gnutella protocol and implementations that bring significant performance and scalability improvements.",
"title": ""
},
{
"docid": "87b5c0021e513898693e575ca5479757",
"text": "We present a statistical mechanics model of deep feed forward neural networks (FFN). Our energy-based approach naturally explains several known results and heuristics, providing a solid theoretical framework and new instruments for a systematic development of FFN. We infer that FFN can be understood as performing three basic steps: encoding, representation validation and propagation. We obtain a set of natural activations – such as sigmoid, tanh and ReLu – together with a state-of-the-art one, recently obtained by Ramachandran et al. [1] using an extensive search algorithm. We term this activation ESP (Expected Signal Propagation), explain its probabilistic meaning, and study the eigenvalue spectrum of the associated Hessian on classification tasks. We find that ESP allows for faster training and more consistent performances over a wide range of network architectures.",
"title": ""
},
{
"docid": "b10447097f8d513795b4f4e08e1838d8",
"text": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results.",
"title": ""
},
{
"docid": "664b003cedbca63ebf775bd9f062b8f1",
"text": "Since 1900, soil organic matter (SOM) in farmlands worldwide has declined drastically as a result of carbon turnover and cropping systems. Over the past 17 years, research trials were established to evaluate the efficacy of different commercial humates products on potato production. Data from humic acid (HA) trials showed that different cropping systems responded differently to different products in relation to yield and quality. Important qualifying factors included: source; concentration; processing; chelating or complexing capacity of the humic acid products; functional groups (Carboxyl; Phenol; Hydroxyl; Ketone; Ester; Ether; Amine), rotation and soil quality factors; consistency of the product in enhancing yield and quality of potato crops; mineralization effect; and influence on fertilizer use efficiency. Properties of humic substances, major constituents of soil organic matter, include chelation, mineralization, buffer effect, clay mineral-organic interaction, and cation exchange. Humates increase phosphorus availability by complexing ions into stable compounds, allowing the phosphorus ion to remain exchangeable for plants’ uptake. Collectively, the consistent use of good quality products in our replicated research plots in different years resulted in a yield increase from 11.4% to the maximum of 22.3%. Over the past decade, there has been a major increase in the quality of research and development of organic and humic acid products by some well-established manufacturers. Our experimentations with these commercial products showed an increase in the yield and quality of crops.",
"title": ""
},
{
"docid": "03dc23b2556e21af9424500e267612bb",
"text": "File fragment classification is an important and difficult problem in digital forensics. Previous works in this area mainly relied on specific byte sequences in file headers and footers, or statistical analysis and machine learning algorithms on data from the middle of the file. This paper introduces a new approach to classify file fragment based on grayscale image. The proposed method treats a file fragment as a grayscale image, and uses image classification method to classify file fragment. Furthermore, two models based on file-unbiased and type-unbiased are proposed to verify the validity of the proposed method. Compared with previous works, the experimental results are promising. An average classification accuracy of 39.7% in file-unbiased model and 54.7% in type-unbiased model are achieved on 29 file types.",
"title": ""
},
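The core preprocessing step described above, treating a raw byte fragment as a grayscale image, can be sketched in a few lines of Python. The fragment size and image width are assumptions for illustration; the image classifier applied on top is omitted.

```python
import numpy as np

def fragment_to_image(fragment: bytes, width: int = 64) -> np.ndarray:
    """Map a raw file fragment to a 2-D grayscale image (one byte per pixel)."""
    data = np.frombuffer(fragment, dtype=np.uint8)
    height = int(np.ceil(len(data) / width))
    padded = np.zeros(height * width, dtype=np.uint8)   # zero-pad the last row
    padded[: len(data)] = data
    return padded.reshape(height, width)

# Example: a 4096-byte fragment becomes a 64x64 grayscale image.
frag = bytes(range(256)) * 16          # synthetic stand-in for a disk fragment
img = fragment_to_image(frag)
print(img.shape, img.dtype)            # (64, 64) uint8
```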
{
"docid": "ddd09bc1c5b16e273bb9d1eaeae1a7e8",
"text": "In this paper, we study concurrent beamforming issue for achieving high capacity in indoor millimeter-wave (mmWave) networks. The general concurrent beamforming issue is first formulated as an optimization problem to maximize the sum rates of concurrent transmissions, considering the mutual interference. To reduce the complexity of beamforming and the total setup time, concurrent beamforming is decomposed into multiple single-link beamforming, and an iterative searching algorithm is proposed to quickly achieve the suboptimal transmission/reception beam sets. A codebook-based beamforming protocol at medium access control (MAC) layer is then introduced in a distributive manner to determine the beam sets. Both analytical and simulation results demonstrate that the proposed protocol can drastically reduce total setup time, increase system throughput, and improve energy efficiency.",
"title": ""
},
{
"docid": "2dda75184e2c9c5507c75f84443fff08",
"text": "Text classification can help users to effectively handle and exploit useful information hidden in large-scale documents. However, the sparsity of data and the semantic sensitivity to context often hinder the classification performance of short texts. In order to overcome the weakness, we propose a unified framework to expand short texts based on word embedding clustering and convolutional neural network (CNN). Empirically, the semantically related words are usually close to each other in embedding spaces. Thus, we first discover semantic cliques via fast clustering. Then, by using additive composition over word embeddings from context with variable window width, the representations of multi-scale semantic units1 in short texts are computed. In embedding spaces, the restricted nearest word embeddings (NWEs)2 of the semantic units are chosen to constitute expanded matrices, where the semantic cliques are used as supervision information. Finally, for a short text, the projected matrix 3 and expanded matrices are combined and fed into CNN in parallel. Experimental results on two open benchmarks validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "a8bd9e8470ad414c38f5616fb14d433d",
"text": "Detecting hidden communities from observed interactions is a classical problem. Theoretical analysis of community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced in Airoldi et al. (2008). This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning communities in these models via a tensor spectral decomposition approach. Our estimator uses low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is based on simple linear algebraic operations such as singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters, and present a careful finite sample analysis of our learning method. Additionally, our results match the best known scaling requirements for the special case of the (homogeneous) stochastic block model.",
"title": ""
},
{
"docid": "545a7a98c79d14ba83766aa26cff0291",
"text": "Existing extreme learning algorithm have not taken into account four issues: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM) called evolving type-2 ELM (eT2ELM) is proposed to cope with the four issues in this paper. The eT2ELM presents three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of the online certainty-based active learning method, which renders eT2ELM as a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby the hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as a cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated in 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, where the eT2ELM demonstrates the most encouraging numerical results in delivering reliable prediction, while sustaining low complexity.",
"title": ""
},
{
"docid": "a15c94c0ec40cb8633d7174b82b70a16",
"text": "Koenigs, Young and colleagues [1] recently tested patients with emotion-related damage in the ventromedial prefrontal cortex (VMPFC) usingmoral dilemmas used in previous neuroimaging studies [2,3]. These patients made unusually utilitarian judgments (endorsing harmful actions that promote the greater good). My collaborators and I have proposed a dual-process theory of moral judgment [2,3] that we claim predicts this result. In a Research Focus article published in this issue of Trends in Cognitive Sciences, Moll and de Oliveira-Souza [4] challenge this interpretation. Our theory aims to explain some puzzling patterns in commonsense moral thought. For example, people usually approve of diverting a runaway trolley thatmortally threatens five people onto a side-track, where it will kill only one person. And yet people usually disapprove of pushing someone in front of a runaway trolley, where this will kill the person pushed, but save five others [5]. Our theory, in a nutshell, is this: the thought of pushing someone in front of a trolley elicits a prepotent, negative emotional response (supported in part by the medial prefrontal cortex) that drives moral disapproval [2,3]. People also engage in utilitarian moral reasoning (aggregate cost–benefit analysis), which is likely subserved by the dorsolateral prefrontal cortex (DLPFC) [2,3]. When there is no prepotent emotional response, utilitarian reasoning prevails (as in the first case), but sometimes prepotent emotions and utilitarian reasoning conflict (as in the second case). This conflict is detected by the anterior cingulate cortex, which signals the need for cognitive control, to be implemented in this case by the anterior DLPFC [Brodmann’s Areas (BA) 10/46]. Overriding prepotent emotional responses requires additional cognitive control and, thus, we find increased activity in the anterior DLPFC when people make difficult utilitarian moral judgments [3]. More recent studies support this theory: if negative emotions make people disapprove of pushing the man to his death, then inducing positive emotion might lead to more utilitarian approval, and this is indeed what happens [6]. Likewise, patients with frontotemporal dementia (known for their ‘emotional blunting’) should more readily approve of pushing the man in front of the trolley, and they do [7]. This finding directly foreshadows the hypoemotional VMPFC patients’ utilitarian responses to this and other cases [1]. Finally, we’ve found that cognitive load selectively interferes with utilitarian moral judgment,",
"title": ""
},
{
"docid": "343c71c6013c5684b8860c4386b34526",
"text": "This paper seeks to analyse the extent to which organizations can learn from projects by focusing on the relationship between projects and their organizational context. The paper highlights three dimensions of project-based learning: the practice-based nature of learning, project autonomy and knowledge integration. This analysis generates a number of propositions on the relationship between the learning generated within projects and its transfer to other parts of the organization. In particular, the paper highlights the ‘learning boundaries’ which emerge when learning within projects creates new divisions in practice. These propositions are explored through a comparative analysis of two case studies of construction projects. This analysis suggests that the learning boundaries which develop around projects reflect the nested nature of learning, whereby different levels of learning may substitute for each other. Learning outcomes in the cases can thus be analysed in terms of the interplay between organizational learning and project-level learning. The paper concludes that learning boundaries are an important constraint on attempts to exploit the benefits of projectbased learning for the wider organization.",
"title": ""
},
{
"docid": "5ec8b094cbbbfbbc0632d85b32255c49",
"text": "Pyramidal neurons are characterized by their distinct apical and basal dendritic trees and the pyramidal shape of their soma. They are found in several regions of the CNS and, although the reasons for their abundance remain unclear, functional studies — especially of CA1 hippocampal and layer V neocortical pyramidal neurons — have offered insights into the functions of their unique cellular architecture. Pyramidal neurons are not all identical, but some shared functional principles can be identified. In particular, the existence of dendritic domains with distinct synaptic inputs, excitability, modulation and plasticity appears to be a common feature that allows synapses throughout the dendritic tree to contribute to action-potential generation. These properties support a variety of coincidence-detection mechanisms, which are likely to be crucial for synaptic integration and plasticity.",
"title": ""
}
] |
scidocsrr
|
3d64739572b4db24f15ed648fc62cdd5
|
An Empirical Evaluation of Similarity Measures for Time Series Classification
|
[
{
"docid": "ceca5552bcb7a5ebd0b779737bc68275",
"text": "In a way similar to the string-to-string correction problem, we address discrete time series similarity in light of a time-series-to-time-series-correction problem for which the similarity between two time series is measured as the minimum cost sequence of edit operations needed to transform one time series into another. To define the edit operations, we use the paradigm of a graphical editing process and end up with a dynamic programming algorithm that we call time warp edit distance (TWED). TWED is slightly different in form from dynamic time warping (DTW), longest common subsequence (LCSS), or edit distance with real penalty (ERP) algorithms. In particular, it highlights a parameter that controls a kind of stiffness of the elastic measure along the time axis. We show that the similarity provided by TWED is a potentially useful metric in time series retrieval applications since it could benefit from the triangular inequality property to speed up the retrieval process while tuning the parameters of the elastic measure. In that context, a lower bound is derived to link the matching of time series into down sampled representation spaces to the matching into the original space. The empiric quality of the TWED distance is evaluated on a simple classification task. Compared to edit distance, DTW, LCSS, and ERP, TWED has proved to be quite effective on the considered experimental task.",
"title": ""
},
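A compact dynamic-programming sketch of TWED for 1-D series is given below. It follows one common formulation of the recurrence (deletions penalized by the stiffness term nu plus the constant lambda; matches penalized by nu times the time-stamp differences); boundary handling and parameter conventions vary across implementations, so this is an illustrative sketch rather than the paper's reference implementation.

```python
import numpy as np

def twed(a, ta, b, tb, nu=0.001, lam=1.0):
    """Time Warp Edit Distance between 1-D series a (time stamps ta) and b (tb)."""
    # Pad with a dummy sample at time 0 so indices start at 1.
    a = np.concatenate(([0.0], np.asarray(a, float)))
    ta = np.concatenate(([0.0], np.asarray(ta, float)))
    b = np.concatenate(([0.0], np.asarray(b, float)))
    tb = np.concatenate(([0.0], np.asarray(tb, float)))
    n, m = len(a), len(b)
    D = np.full((n, m), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n):
        for j in range(1, m):
            del_a = D[i - 1, j] + abs(a[i] - a[i - 1]) + nu * (ta[i] - ta[i - 1]) + lam
            del_b = D[i, j - 1] + abs(b[j] - b[j - 1]) + nu * (tb[j] - tb[j - 1]) + lam
            match = (D[i - 1, j - 1] + abs(a[i] - b[j]) + abs(a[i - 1] - b[j - 1])
                     + nu * (abs(ta[i] - tb[j]) + abs(ta[i - 1] - tb[j - 1])))
            D[i, j] = min(del_a, del_b, match)
    return D[n - 1, m - 1]

x = [1.0, 2.0, 3.0, 2.0]
y = [1.0, 2.0, 2.5, 2.0]
print(twed(x, range(len(x)), y, range(len(y))))
```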
{
"docid": "510a43227819728a77ff0c7fa06fa2d0",
"text": "The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While there is a plethora of classification algorithms that can be applied to time series, all of the current empirical evidence suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping. In this work we make a surprising claim. There is an invariance that the community has missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where complex objects are incorrectly assigned to a simpler class. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series classification experiments ever attempted, and show that complexity-invariant distance measures can produce improvements in accuracy in the vast majority of cases.",
"title": ""
}
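The complexity-invariant distance sketched below rescales the Euclidean distance by the ratio of the two series' complexity estimates (the "stretched line length" estimate used in the paper). The small epsilon guard for constant series is an assumption of this sketch.

```python
import numpy as np

def complexity(x):
    """Complexity estimate: length of the series when stretched into a line."""
    x = np.asarray(x, float)
    return np.sqrt(np.sum(np.diff(x) ** 2))

def cid(q, c, eps=1e-12):
    """Complexity-invariant distance: Euclidean distance times a complexity ratio."""
    q, c = np.asarray(q, float), np.asarray(c, float)
    ed = np.linalg.norm(q - c)
    cq, cc = complexity(q), complexity(c)
    correction = max(cq, cc) / max(min(cq, cc), eps)
    return ed * correction

# The oscillating query is more "complex" than the ramp, so CID exceeds plain ED.
query = np.sin(np.linspace(0, 4 * np.pi, 64))
ramp = np.linspace(-1.0, 1.0, 64)
print(cid(query, ramp), np.linalg.norm(query - ramp))
```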
] |
[
{
"docid": "1349c5daedd71bdfccaa0ea48b3fd54a",
"text": "OBJECTIVE\nCraniosacral therapy (CST) is an alternative treatment approach, aiming to release restrictions around the spinal cord and brain and subsequently restore body function. A previously conducted systematic review did not obtain valid scientific evidence that CST was beneficial to patients. The aim of this review was to identify and critically evaluate the available literature regarding CST and to determine the clinical benefit of CST in the treatment of patients with a variety of clinical conditions.\n\n\nMETHODS\nComputerised literature searches were performed in Embase/Medline, Medline(®) In-Process, The Cochrane library, CINAHL, and AMED from database start to April 2011. Studies were identified according to pre-defined eligibility criteria. This included studies describing observational or randomised controlled trials (RCTs) in which CST as the only treatment method was used, and studies published in the English language. The methodological quality of the trials was assessed using the Downs and Black checklist.\n\n\nRESULTS\nOnly seven studies met the inclusion criteria, of which three studies were RCTs and four were of observational study design. Positive clinical outcomes were reported for pain reduction and improvement in general well-being of patients. Methodological Downs and Black quality scores ranged from 2 to 22 points out of a theoretical maximum of 27 points, with RCTs showing the highest overall scores.\n\n\nCONCLUSION\nThis review revealed the paucity of CST research in patients with different clinical pathologies. CST assessment is feasible in RCTs and has the potential of providing valuable outcomes to further support clinical decision making. However, due to the current moderate methodological quality of the included studies, further research is needed.",
"title": ""
},
{
"docid": "1de19775f0c32179f59674c7f0d8b540",
"text": "As the most commonly used bots in first-person shooter (FPS) online games, aimbots are notoriously difficult to detect because they are completely passive and resemble excellent honest players in many aspects. In this paper, we conduct the first field measurement study to understand the status quo of aimbots and how they play in the wild. For data collection purpose, we devise a novel and generic technique called baittarget to accurately capture existing aimbots from the two most popular FPS games. Our measurement reveals that cheaters who use aimbots cannot play as skillful as excellent honest players in all aspects even though aimbots can help them to achieve very high shooting performance. To characterize the unskillful and blatant nature of cheaters, we identify seven features, of which six are novel, and these features cannot be easily mimicked by aimbots. Leveraging this set of features, we propose an accurate and robust server-side aimbot detector called AimDetect. The core of AimDetect is a cascaded classifier that detects the inconsistency between performance and skillfulness of aimbots. We evaluate the efficacy and generality of AimDetect using the real game traces. Our results show that AimDetect can capture almost all of the aimbots with very few false positives and minor overhead.",
"title": ""
},
{
"docid": "961cc1dc7063706f8f66fc136da41661",
"text": "From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible \"statistical\" properties that are the object of learning. Much less attention has been given to defining what \"learning\" is in the context of \"statistical learning.\" One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures to our theory of SL.",
"title": ""
},
{
"docid": "3d56b369e10b29969132c44897d4cc4c",
"text": "Real-world object classes appear in imbalanced ratios. This poses a significant challenge for classifiers which get biased towards frequent classes. We hypothesize that improving the generalization capability of a classifier should improve learning on imbalanced datasets. Here, we introduce the first hybrid loss function that jointly performs classification and clustering in a single formulation. Our approach is based on an ‘affinity measure’ in Euclidean space that leads to the following benefits: (1) direct enforcement of maximum margin constraints on classification boundaries, (2) a tractable way to ensure uniformly spaced and equidistant cluster centers, (3) flexibility to learn multiple class prototypes to support diversity and discriminability in feature space. Our extensive experiments demonstrate the significant performance improvements on visual classification and verification tasks on multiple imbalanced datasets. The proposed loss can easily be plugged in any deep architecture as a differentiable block and demonstrates robustness against different levels of data imbalance and corrupted labels.",
"title": ""
},
{
"docid": "1ebf198459b98048404b706e4852eae2",
"text": "Network forensics is a branch of digital forensics, which applies to network security. It is used to relate monitoring and analysis of the computer network traffic, that helps us in collecting information and digital evidence, for the protection of network that can use as firewall and IDS. Firewalls and IDS can't always prevent and find out the unauthorized access within a network. This paper presents an extensive survey of several forensic frameworks. There is a demand of a system which not only detects the complex attack, but also it should be able to understand what had happened. Here it talks about the concept of the distributed network forensics. The concept of the Distributed network forensics is based on the distributed techniques, which are useful for providing an integrated platform for the automatic forensic evidence gathering and important data storage, valuable support and an attack attribution graph generation mechanism to depict hacking events.",
"title": ""
},
{
"docid": "fd0e31b2675a797c26af731ef1ff22df",
"text": "State representations critically affect the effectiveness of learning in robots. In this paper, we propose a roboticsspecific approach to learning such state representations. Robots accomplish tasks by interacting with the physical world. Physics in turn imposes structure on both the changes in the world and on the way robots can effect these changes. Using prior knowledge about interacting with the physical world, robots can learn state representations that are consistent with physics. We identify five robotic priors and explain how they can be used for representation learning. We demonstrate the effectiveness of this approach in a simulated slot car racing task and a simulated navigation task with distracting moving objects. We show that our method extracts task-relevant state representations from highdimensional observations, even in the presence of task-irrelevant distractions. We also show that the state representations learned by our method greatly improve generalization in reinforcement learning.",
"title": ""
},
{
"docid": "98b4e2d51efde6f4f8c43c29650b8d2f",
"text": "New robotics is an approach to robotics that, in contrast to traditional robotics, employs ideas and principles from biology. While in the traditional approach there are generally accepted methods (e.g., from control theory), designing agents in the new robotics approach is still largely considered an art. In recent years, we have been developing a set of heuristics, or design principles, that on the one hand capture theoretical insights about intelligent (adaptive) behavior, and on the other provide guidance in actually designing and building systems. In this article we provide an overview of all the principles but focus on the principles of ecological balance, which concerns the relation between environment, morphology, materials, and control, and sensory-motor coordination, which concerns self-generated sensory stimulation as the agent interacts with the environment and which is a key to the development of high-level intelligence. As we argue, artificial evolution together with morphogenesis is not only nice to have but is in fact a necessary tool for designing embodied agents.",
"title": ""
},
{
"docid": "209203c297898a2251cfd62bdfc37296",
"text": "Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computerbased problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and differences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research.",
"title": ""
},
{
"docid": "7735668d4f8407d9514211d9f5492ce6",
"text": "This revision to the EEG Guidelines is an update incorporating current EEG technology and practice. The role of the EEG in making the determination of brain death is discussed as are suggested technical criteria for making the diagnosis of electrocerebral inactivity.",
"title": ""
},
{
"docid": "e83227e0485cf7f3ba19ce20931bbc2f",
"text": "There has been an increased global demand for dermal filler injections in recent years. Although hyaluronic acid-based dermal fillers generally have a good safety profile, serious vascular complications have been reported. Here we present a typical case of skin necrosis following a nonsurgical rhinoplasty using hyaluronic acid filler. Despite various rescuing managements, unsightly superficial scars were left. It is critical for plastic surgeons and dermatologists to be familiar with the vascular anatomy and the staging of vascular complications. Any patients suspected to experience a vascular complication should receive early management under close monitoring. Meanwhile, the potentially devastating outcome caused by illegal practice calls for stricter regulations and law enforcement.",
"title": ""
},
{
"docid": "d559ace14dcc42f96d0a96b959a92643",
"text": "Graphs are an integral data structure for many parts of computation. They are highly effective at modeling many varied and flexible domains, and are excellent for representing the way humans themselves conceive of the world. Nowadays, there is lots of interest in working with large graphs, including social network graphs, “knowledge” graphs, and large bipartite graphs (for example, the Netflix movie matching graph).",
"title": ""
},
{
"docid": "f8093849e9157475149d00782c60ae60",
"text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.",
"title": ""
},
{
"docid": "9766e0507346e46e24790a4873979aa4",
"text": "Extreme learning machine (ELM) is proposed for solving a single-layer feed-forward network (SLFN) with fast learning speed and has been confirmed to be effective and efficient for pattern classification and regression in different fields. ELM originally focuses on the supervised, semi-supervised, and unsupervised learning problems, but just in the single domain. To our best knowledge, ELM with cross-domain learning capability in subspace learning has not been exploited very well. Inspired by a cognitive-based extreme learning machine technique (Cognit Comput. 6:376–390, 1; Cognit Comput. 7:263–278, 2.), this paper proposes a unified subspace transfer framework called cross-domain extreme learning machine (CdELM), which aims at learning a common (shared) subspace across domains. Three merits of the proposed CdELM are included: (1) A cross-domain subspace shared by source and target domains is achieved based on domain adaptation; (2) ELM is well exploited in the cross-domain shared subspace learning framework, and a new perspective is brought for ELM theory in heterogeneous data analysis; (3) the proposed method is a subspace learning framework and can be combined with different classifiers in recognition phase, such as ELM, SVM, nearest neighbor, etc. Experiments on our electronic nose olfaction datasets demonstrate that the proposed CdELM method significantly outperforms other compared methods.",
"title": ""
},
{
"docid": "9faf67646394dfedfef1b6e9152d9cf6",
"text": "Acoustic shooter localization systems are being rapidly deployed in the field. However, these are standalone systems---either wearable or vehicle-mounted---that do not have networking capability even though the advantages of widely distributed sensing for locating shooters have been demonstrated before. The reason for this is that certain disadvantages of wireless network-based prototypes made them impractical for the military. The system that utilized stationary single-channel sensors required many sensor nodes, while the multi-channel wearable version needed to track the absolute self-orientation of the nodes continuously, a notoriously hard task. This paper presents an approach that overcomes the shortcomings of past approaches. Specifically, the technique requires as few as five single-channel wireless sensors to provide accurate shooter localization and projectile trajectory estimation. Caliber estimation and weapon classification are also supported. In addition, a single node alone can provide reliable miss distance and range estimates based on a single shot as long as a reasonable assumption holds. The main contribution of the work and the focus of this paper is the novel sensor fusion technique that works well with a limited number of observations. The technique is thoroughly evaluated using an extensive shot library.",
"title": ""
},
{
"docid": "1b0cb70fb25d86443a01a313371a27ae",
"text": "We present a protocol for general state machine replication – a method that provides strong consistency – that has high performance in a wide-area network. In particular, our protocol Mencius has high throughput under high client load and low latency under low client load even under changing wide-area network environment and client load. We develop our protocol as a derivation from the well-known protocol Paxos. Such a development can be changed or further refined to take advantage of specific network or application requirements.",
"title": ""
},
{
"docid": "b36549a4b16c2c8ab50f1adda99f3120",
"text": "Spatial representations of time are a ubiquitous feature of human cognition. Nevertheless, interesting sociolinguistic variations exist with respect to where in space people locate temporal constructs. For instance, while in English time metaphorically flows horizontally, in Mandarin an additional vertical dimension is employed. Noting that the bilingual mind can flexibly accommodate multiple representations, the present work explored whether Mandarin-English bilinguals possess two mental time lines. Across two experiments, we demonstrated that Mandarin-English bilinguals do indeed employ both horizontal and vertical representations of time. Importantly, subtle variations to cultural context were seen to shape how these time lines were deployed.",
"title": ""
},
{
"docid": "41611606af8671f870fb90e50c2e99fc",
"text": "Pointwise label and pairwise label are both widely used in computer vision tasks. For example, supervised image classification and annotation approaches use pointwise label, while attribute-based image relative learning often adopts pairwise labels. These two types of labels are often considered independently and most existing efforts utilize them separately. However, pointwise labels in image classification and tag annotation are inherently related to the pairwise labels. For example, an image labeled with \"coast\" and annotated with \"beach, sea, sand, sky\" is more likely to have a higher ranking score in terms of the attribute \"open\", while \"men shoes\" ranked highly on the attribute \"formal\" are likely to be annotated with \"leather, lace up\" than \"buckle, fabric\". The existence of potential relations between pointwise labels and pairwise labels motivates us to fuse them together for jointly addressing related vision tasks. In particular, we provide a principled way to capture the relations between class labels, tags and attributes, and propose a novel framework PPP(Pointwise and Pairwise image label Prediction), which is based on overlapped group structure extracted from the pointwise-pairwise-label bipartite graph. With experiments on benchmark datasets, we demonstrate that the proposed framework achieves superior performance on three vision tasks compared to the state-of-the-art methods.",
"title": ""
},
{
"docid": "dc93d2204ff27c7d55a71e75d2ae4ca9",
"text": "Locating and securing an Alzheimer's patient who is outdoors and in wandering state is crucial to patient's safety. Although advances in geotracking and mobile technology have made locating patients instantly possible, reaching them while in wandering state may take time. However, a social network of caregivers may help shorten the time that it takes to reach and secure a wandering AD patient. This study proposes a new type of intervention based on novel mobile application architecture to form and direct a social support network of caregivers for locating and securing wandering patients as soon as possible. System employs, aside from the conventional tracking mechanism, a wandering detection mechanism, both of which operates through a tracking device installed a Subscriber Identity Module for Global System for Mobile Communications Network(GSM). System components are being implemented using Java. Family caregivers will be interviewed prior to and after the use of the system and Center For Epidemiologic Studies Depression Scale, Patient Health Questionnaire and Zarit Burden Interview will be applied to them during these interviews to find out the impact of the system in terms of depression, anxiety and burden, respectively.",
"title": ""
},
{
"docid": "acb3689c9ece9502897cebb374811f54",
"text": "In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies.",
"title": ""
}
] |
scidocsrr
|
d0a0791c9c6f6d9ffdc2f4ebb05a8241
|
Big Data Analysis in Smart Manufacturing: A Review
|
[
{
"docid": "c12fb39060ec4dd2c7bb447352ea4e8a",
"text": "Lots of data from different domains is published as Linked Open Data (LOD). While there are quite a few browsers for such data, as well as intelligent tools for particular purposes, a versatile tool for deriving additional knowledge by mining the Web of Linked Data is still missing. In this system paper, we introduce the RapidMiner Linked Open Data extension. The extension hooks into the powerful data mining and analysis platform RapidMiner, and offers operators for accessing Linked Open Data in RapidMiner, allowing for using it in sophisticated data analysis workflows without the need for expert knowledge in SPARQL or RDF. The extension allows for autonomously exploring the Web of Data by following links, thereby discovering relevant datasets on the fly, as well as for integrating overlapping data found in different datasets. As an example, we show how statistical data from the World Bank on scientific publications, published as an RDF data cube, can be automatically linked to further datasets and analyzed using additional background knowledge from ten different LOD datasets.",
"title": ""
},
{
"docid": "150e7a6f46e93fc917e43e32dedd9424",
"text": "This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.",
"title": ""
}
] |
[
{
"docid": "9f21792dbe89fa95d85e7210cf1de9c6",
"text": "Convolutional Neural Networks have provided state-of-the-art results in several computer vision problems. However, due to a large number of parameters in CNNs, they require a large number of training samples which is a limiting factor for small sample size problems. To address this limitation, we propose SSF-CNN which focuses on learning the \"structure\" and \"strength\" of filters. The structure of the filter is initialized using a dictionary based filter learning algorithm and the strength of the filter is learned using the small sample training data. The architecture provides the flexibility of training with both small and large training databases, and yields good accuracies even with small size training data. The effectiveness of the algorithm is first demonstrated on MNIST, CIFAR10, and NORB databases, with varying number of training samples. The results show that SSF-CNN significantly reduces the number of parameters required for training while providing high accuracies on the test databases. On small sample size problems such as newborn face recognition and Omniglot, it yields state-of-the-art results. Specifically, on the IIITD Newborn Face Database, the results demonstrate improvement in rank-1 identification accuracy by at least 10%.",
"title": ""
},
{
"docid": "f6ba46b72139f61cfb098656d71553ed",
"text": "This paper introduces the Voice Conversion Octave Toolbox made available to the public as open source. The first version of the toolbox features tools for VTLN-based voice conversion supporting a variety of warping functions. The authors describe the implemented functionality and how to configure the included tools.",
"title": ""
},
{
"docid": "792df318ee62c4e5409f53829c3de05c",
"text": "In this paper we present a novel technique to calibrate multiple casually aligned projectors on a fiducial-free cylindrical curved surface using a single camera. We impose two priors to the cylindrical display: (a) cylinder is a vertically extruded surface; and (b) the aspect ratio of the rectangle formed by the four corners of the screen is known. Using these priors, we can estimate the display's 3D surface geometry and camera extrinsic parameters using a single image without any explicit display to camera correspondences. Using the estimated camera and display properties, we design a novel deterministic algorithm to recover the intrinsic and extrinsic parameters of each projector using a single projected pattern seen by the camera which is then used to register the images on the display from any arbitrary viewpoint making it appropriate for virtual reality systems. Finally, our method can be extended easily to handle sharp corners — making it suitable for the common CAVE like VR setup. To the best of our knowledge, this is the first method that can achieve accurate geometric auto-calibration of multiple projectors on a cylindrical display without performing an extensive stereo reconstruction.",
"title": ""
},
{
"docid": "c663806c6b086b31e57a9d7e54a46d4b",
"text": "Deep neural networks are frequently used for computer vision, speech recognition and text processing. The reason is their ability to regress highly nonlinear functions. We present an end-to-end controller for steering autonomous vehicles based on a convolutional neural network (CNN). The deployed framework does not require explicit hand-engineered algorithms for lane detection, object detection or path planning. The trained neural net directly maps pixel data from a front-facing camera to steering commands and does not require any other sensors. We compare the controller performance with the steering behavior of a human driver.",
"title": ""
},
{
"docid": "53a1d344a6e38dd790e58c6952e51cdb",
"text": "The thermal conductivities of individual single crystalline intrinsic Si nanowires with diameters of 22, 37, 56, and 115 nm were measured using a microfabricated suspended device over a temperature range of 20–320 K. Although the nanowires had well-defined crystalline order, the thermal conductivity observed was more than two orders of magnitude lower than the bulk value. The strong diameter dependence of thermal conductivity in nanowires was ascribed to the increased phonon-boundary scattering and possible phonon spectrum modification. © 2003 American Institute of Physics.@DOI: 10.1063/1.1616981 #",
"title": ""
},
{
"docid": "f1744cf87ee2321c5132d6ee30377413",
"text": "How do movements in the distribution of income and wealth affect the macroeconomy? We analyze this question using a calibrated version of the stochastic growth model with partially uninsurable idiosyncratic risk and movements in aggregate productivity. Our main finding is that, in the stationary stochastic equilibrium, the behavior of the macroeconomic aggregates can be almost perfectly described using only the mean of the wealth distribution. This result is robust to substantial changes in both parameter values and model specification. Our benchmark model, whose only difference from the representative-agent framework is the existence of uninsurable idiosyncratic risk, displays far less cross-sectional dispersion",
"title": ""
},
{
"docid": "94c9eec9aa4f36bf6b2d83c3cc8dbb12",
"text": "Many real world security problems can be modelled as finite zero-sum games with structured sequential strategies and limited interactions between the players. An abstract class of games unifying these models are the normal-form games with sequential strategies (NFGSS). We show that all games from this class can be modelled as well-formed imperfect-recall extensiveform games and consequently can be solved by counterfactual regret minimization. We propose an adaptation of the CFR algorithm for NFGSS and compare its performance to the standard methods based on linear programming and incremental game generation. We validate our approach on two security-inspired domains. We show that with a negligible loss in precision, CFR can compute a Nash equilibrium with five times less computation than its competitors. Game theory has been recently used to model many real world security problems, such as protecting airports (Pita et al. 2008) or airplanes (Tsai et al. 2009) from terrorist attacks, preventing fare evaders form misusing public transport (Yin et al. 2012), preventing attacks in computer networks (Durkota et al. 2015), or protecting wildlife from poachers (Fang, Stone, and Tambe 2015). Many of these security problems are sequential in nature. Rather than a single monolithic action, the players’ strategies are formed by sequences of smaller individual decisions. For example, the ticket inspectors make a sequence of decisions about where to check tickets and which train to take; a network administrator protects the network against a sequence of actions an attacker uses to penetrate deeper into the network. Sequential decision making in games has been extensively studied from various perspectives. Recent years have brought significant progress in solving massive imperfectinformation extensive-form games with a focus on the game of poker. Counterfactual regret minimization (Zinkevich et al. 2008) is the family of algorithms that has facilitated much of this progress, with a recent incarnation (Tammelin et al. 2015) essentially solving for the first time a variant of poker commonly played by people (Bowling et al. 2015). However, there has not been any transfer of these results to research on real world security problems. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. We focus on an abstract class of sequential games that can model many sequential security games, such as games taking place in physical space that can be discretized as a graph. This class of games is called normal-form games with sequential strategies (NFGSS) (Bosansky et al. 2015) and it includes, for example, existing game theoretic models of ticket inspection (Jiang et al. 2013), border patrolling (Bosansky et al. 2015), and securing road networks (Jain et al. 2011). In this work we formally prove that any NFGSS can be modelled as a slightly generalized chance-relaxed skew well-formed imperfect-recall game (CRSWF) (Lanctot et al. 2012; Kroer and Sandholm 2014), a subclass of extensiveform games with imperfect recall in which counterfactual regret minimization is guaranteed to converge to the optimal strategy. We then show how to adapt the recent variant of the algorithm, CFR, directly to NFGSS and present experimental validation on two distinct domains modelling search games and ticket inspection. We show that CFR is applicable and efficient in domains with imperfect recall that are substantially different from poker. 
Moreover, if we are willing to sacrifice a negligible degree of approximation, CFR can find a solution substantially faster than methods traditionally used in research on security games, such as formulating the game as a linear program (LP) and incrementally building the game model by double oracle methods.",
"title": ""
},
{
"docid": "800dc3e6a3f58d2af1ed7cd526074d54",
"text": "The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting. To resolve this issue, we propose a sparsity regularization method that exploits both positive and negative correlations among the features to enforce the network to be sparse, and at the same time remove any redundancies among the features to fully utilize the capacity of the network. Specifically, we propose to use an exclusive sparsity regularization based on (1, 2)-norm, which promotes competition for features between different weights, thus enforcing them to fit to disjoint sets of features. We further combine the exclusive sparsity with the group sparsity based on (2, 1)-norm, to promote both sharing and competition for features in training of a deep neural network. We validate our method on multiple public datasets, and the results show that our method can obtain more compact and efficient networks while also improving the performance over the base networks with full weights, as opposed to existing sparsity regularizations that often obtain efficiency at the expense of prediction accuracy.",
"title": ""
},
{
"docid": "b61042f2d5797e57e2bc395966bb7ad2",
"text": "A number of classifier fusion methods have been recently developed opening an alternative approach leading to a potential improvement in the classification performance. As there is little theory of information fusion itself, currently we are faced with different methods designed for different problems and producing different results. This paper gives an overview of classifier fusion methods and attempts to identify new trends that may dominate this area of research in future. A taxonomy of fusion methods trying to bring some order into the existing “pudding of diversities” is also provided.",
"title": ""
},
{
"docid": "83ba1d7915fc7cb73c86172970b1979e",
"text": "This paper presents a new modeling methodology accounting for generation and propagation of minority carriers that can be used directly in circuit-level simulators in order to estimate coupled parasitic currents. The method is based on a new compact model of basic components (p-n junction and resistance) and takes into account minority carriers at the boundary. An equivalent circuit schematic of the substrate is built by identifying these basic elements in the substrate and interconnecting them. Parasitic effects such as bipolar or latch-up effects result from the continuity of minority carriers guaranteed by the components' models. A structure similar to a half-bridge perturbing sensitive n-wells has been simulated. It is composed by four p-n junctions connected together by their common p-doped sides. The results are in good agreement with those obtained from physical device simulations.",
"title": ""
},
{
"docid": "7543281174d7dc63e180249d94ad6c07",
"text": "Enriching speech recognition output with sentence boundaries improves its human readability and enables further processing by downstream language processing modules. We have constructed a hidden Markov model (HMM) system to detect sentence boundaries that uses both prosodic and textual information. Since there are more nonsentence boundaries than sentence boundaries in the data, the prosody model, which is implemented as a decision tree classifier, must be constructed to effectively learn from the imbalanced data distribution. To address this problem, we investigate a variety of sampling approaches and a bagging scheme. A pilot study was carried out to select methods to apply to the full NIST sentence boundary evaluation task across two corpora (conversational telephone speech and broadcast news speech), using both human transcriptions and recognition output. In the pilot study, when classification error rate is the performance measure, using the original training set achieves the best performance among the sampling methods, and an ensemble of multiple classifiers from different downsampled training sets achieves slightly poorer performance, but has the potential to reduce computational effort. However, when performance is measured using receiver operating characteristics (ROC) or area under the curve (AUC), then the sampling approaches outperform the original training set. This observation is important if the 0885-2308/$ see front matter 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.csl.2005.06.002 * Corresponding author. Tel.: +1 510 666 2993; fax: +510 666 2956. E-mail addresses: yangl@icsi.berkeley.edu (Y. Liu), nchawla@cse.nd.edu (N.V. Chawla), harper@ecn.purdue.edu (M.P. Harper), ees@speech.sri.com (E. Shriberg), stolcke@speech.sri.com (A. Stolcke). Y. Liu et al. / Computer Speech and Language 20 (2006) 468–494 469 sentence boundary detection output is used by downstream language processing modules. Bagging was found to significantly improve system performance for each of the sampling methods. The gain from these methods may be diminished when the prosody model is combined with the language model, which is a strong knowledge source for the sentence detection task. The patterns found in the pilot study were replicated in the full NIST evaluation task. The conclusions may be dependent on the task, the classifiers, and the knowledge combination approach. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e58e294dbacf605e40ff2f59cc4f8a6a",
"text": "There are fundamental similarities between sleep in mammals and quiescence in the arthropod Drosophila melanogaster, suggesting that sleep-like states are evolutionarily ancient. The nematode Caenorhabditis elegans also has a quiescent behavioural state during a period called lethargus, which occurs before each of the four moults. Like sleep, lethargus maintains a constant temporal relationship with the expression of the C. elegans Period homologue LIN-42 (ref. 5). Here we show that quiescence associated with lethargus has the additional sleep-like properties of reversibility, reduced responsiveness and homeostasis. We identify the cGMP-dependent protein kinase (PKG) gene egl-4 as a regulator of sleep-like behaviour, and show that egl-4 functions in sensory neurons to promote the C. elegans sleep-like state. Conserved effects on sleep-like behaviour of homologous genes in C. elegans and Drosophila suggest a common genetic regulation of sleep-like states in arthropods and nematodes. Our results indicate that C. elegans is a suitable model system for the study of sleep regulation. The association of this C. elegans sleep-like state with developmental changes that occur with larval moults suggests that sleep may have evolved to allow for developmental changes.",
"title": ""
},
{
"docid": "01b05ea8fcca216e64905da7b5508dea",
"text": "Generative Adversarial Networks (GANs) have recently emerged as powerful generative models. GANs are trained by an adversarial process between a generative network and a discriminative network. It is theoretically guaranteed that, in the nonparametric regime, by arriving at the unique saddle point of a minimax objective function, the generative network generates samples from the data distribution. However, in practice, getting close to this saddle point has proven to be difficult, resulting in the ubiquitous problem of “mode collapse”. The root of the problems in training GANs lies on the unbalanced nature of the game being played. Here, we propose to level the playing field and make the minimax game balanced by “heating” the data distribution. The empirical distribution is frozen at temperature zero; GANs are instead initialized at infinite temperature, where learning is stable. By annealing the heated data distribution, we initialized the network at each temperature with the learnt parameters of the previous higher temperature. We posited a conjecture that learning under continuous annealing in the nonparametric regime is stable, and proposed an algorithm in corollary. In our experiments, the annealed GAN algorithm, dubbed β-GAN, trained with unmodified objective function was stable and did not suffer from mode collapse.",
"title": ""
},
{
"docid": "852ff3b52b4bf8509025cb5cb751899f",
"text": "Digital images are ubiquitous in our modern lives, with uses ranging from social media to news, and even scientific papers. For this reason, it is crucial evaluate how accurate people are when performing the task of identify doctored images. In this paper, we performed an extensive user study evaluating subjects capacity to detect fake images. After observing an image, users have been asked if it had been altered or not. If the user answered the image has been altered, he had to provide evidence in the form of a click on the image. We collected 17,208 individual answers from 383 users, using 177 images selected from public forensic databases. Different from other previously studies, our method propose different ways to avoid lucky guess when evaluating users answers. Our results indicate that people show inaccurate skills at differentiating between altered and non-altered images, with an accuracy of 58%, and only identifying the modified images 46.5% of the time. We also track user features such as age, answering time, confidence, providing deep analysis of how such variables influence on the users’ performance.",
"title": ""
},
{
"docid": "c3be24db41e57658793281a9765635c0",
"text": "A boundary element method (BEM) simulation is used to compare the efficiency of numerical inverse Laplace transform strategies, considering general requirements of Laplace-space numerical approaches. The two-dimensional BEM solution is used to solve the Laplace-transformed diffusion equation, producing a time-domain solution after a numerical Laplace transform inversion. Motivated by the needs of numerical methods posed in Laplace-transformed space, we compare five inverse Laplace transform algorithms and discuss implementation techniques to minimize the number of Laplace-space function evaluations. We investigate the ability to calculate a sequence of time domain values using the fewest Laplace-space model evaluations. We find Fourier-series based inversion algorithms work for common time behaviors, are the most robust with respect to free parameters, and allow for straightforward image function evaluation re-use across at least a log cycle of time.",
"title": ""
},
{
"docid": "5594475c91355d113e0045043eff8b93",
"text": "Background: Since the introduction of the systematic review process to Software Engineering in 2004, researchers have investigated a number of ways to mitigate the amount of effort and time taken to filter through large volumes of literature.\n Aim: This study aims to provide a critical analysis of text mining techniques used to support the citation screening stage of the systematic review process.\n Method: We critically re-reviewed papers included in a previous systematic review which addressed the use of text mining methods to support the screening of papers for inclusion in a review. The previous review did not provide a detailed analysis of the text mining methods used. We focus on the availability in the papers of information about the text mining methods employed, including the description and explanation of the methods, parameter settings, assessment of the appropriateness of their application given the size and dimensionality of the data used, performance on training, testing and validation data sets, and further information that may support the reproducibility of the included studies.\n Results: Support Vector Machines (SVM), Naïve Bayes (NB) and Committee of classifiers (Ensemble) are the most used classification algorithms. In all of the studies, features were represented with Bag-of-Words (BOW) using both binary features (28%) and term frequency (66%). Five studies experimented with n-grams with n between 2 and 4, but mostly the unigram was used. χ2, information gain and tf-idf were the most commonly used feature selection techniques. Feature extraction was rarely used although LDA and topic modelling were used. Recall, precision, F and AUC were the most used metrics and cross validation was also well used. More than half of the studies used a corpus size of below 1,000 documents for their experiments while corpus size for around 80% of the studies was 3,000 or fewer documents. The major common ground we found for comparing performance assessment based on independent replication of studies was the use of the same dataset but a sound performance comparison could not be established because the studies had little else in common. In most of the studies, insufficient information was reported to enable independent replication. The studies analysed generally did not include any discussion of the statistical appropriateness of the text mining method that they applied. In the case of applications of SVM, none of the studies report the number of support vectors that they found to indicate the complexity of the prediction engine that they use, making it impossible to judge the extent to which over-fitting might account for the good performance results.\n Conclusions: There is yet to be concrete evidence about the effectiveness of text mining algorithms regarding their use in the automation of citation screening in systematic reviews. The studies indicate that options are still being explored, but there is a need for better reporting as well as more explicit process details and access to datasets to facilitate study replication for evidence strengthening. In general, the reader often gets the impression that text mining algorithms were applied as magic tools in the reviewed papers, relying on default settings or default optimization of available machine learning toolboxes without an in-depth understanding of the statistical validity and appropriateness of such tools for text mining purposes.",
"title": ""
},
{
"docid": "a2dfa8007b3a13da31a768fe07393d15",
"text": "Predicting the time and effort for a software problem has long been a difficult task. We present an approach that automatically predicts the fixing effort, i.e., the person-hours spent on fixing an issue. Our technique leverages existing issue tracking systems: given a new issue report, we use the Lucene framework to search for similar, earlier reports and use their average time as a prediction. Our approach thus allows for early effort estimation, helping in assigning issues and scheduling stable releases. We evaluated our approach using effort data from the JBoss project. Given a sufficient number of issues reports, our automatic predictions are close to the actual effort; for issues that are bugs, we are off by only one hour, beating na¨ýve predictions by a factor of four.",
"title": ""
},
{
"docid": "08025e6ed1ee71596bdc087bfd646eac",
"text": "A method is presented for computing an orthonormal set of eigenvectors for the discrete Fourier transform (DFT). The technique is based on a detailed analysis of the eigenstructure of a special matrix which commutes with the DFT. It is also shown how fractional powers of the DFT can be efficiently computed, and possible applications to multiplexing and transform coding are suggested. T",
"title": ""
},
{
"docid": "3cfa45816c57cbbe1d86f7cce7f52967",
"text": "Video games have become one of the favorite activities of American children. A growing body of research is linking violent video game play to aggressive cognitions, attitudes, and behaviors. The first goal of this study was to document the video games habits of adolescents and the level of parental monitoring of adolescent video game use. The second goal was to examine associations among violent video game exposure, hostility, arguments with teachers, school grades, and physical fights. In addition, path analyses were conducted to test mediational pathways from video game habits to outcomes. Six hundred and seven 8th- and 9th-grade students from four schools participated. Adolescents who expose themselves to greater amounts of video game violence were more hostile, reported getting into arguments with teachers more frequently, were more likely to be involved in physical fights, and performed more poorly in school. Mediational pathways were found such that hostility mediated the relationship between violent video game exposure and outcomes. Results are interpreted within and support the framework of the General Aggression Model.",
"title": ""
},
{
"docid": "2c8bfb9be08edfdac6d335bdcffe204c",
"text": "Undoubtedly, the age of big data has opened new options for natural disaster management, primarily because of the varied possibilities it provides in visualizing, analyzing, and predicting natural disasters. From this perspective, big data has radically changed the ways through which human societies adopt natural disaster management strategies to reduce human suffering and economic losses. In a world that is now heavily dependent on information technology, the prime objective of computer experts and policy makers is to make the best of big data by sourcing information from varied formats and storing it in ways that it can be effectively used during different stages of natural disaster management. This paper aimed at making a systematic review of the literature in analyzing the role of big data in natural disaster management and highlighting the present status of the technology in providing meaningful and effective solutions in natural disaster management. The paper has presented the findings of several researchers on varied scientific and technological perspectives that have a bearing on the efficacy of big data in facilitating natural disaster management. In this context, this paper reviews the major big data sources, the associated achievements in different disaster management phases, and emerging technological topics associated with leveraging this new ecosystem of Big Data to monitor and detect natural hazards, mitigate their effects, assist in relief efforts, and contribute to the recovery and reconstruction processes.",
"title": ""
}
] |
scidocsrr
|
2fa853ae293bf05da80dc239e01616d1
|
A Hybrid Generative/Discriminative Approach to Semi-Supervised Classifier Design
|
[
{
"docid": "3ac2f2916614a4e8f6afa1c31d9f704d",
"text": "This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.",
"title": ""
},
{
"docid": "c698f7d6b487cc7c87d7ff215d7f12b2",
"text": "This paper reports a controlled study with statistical signi cance tests on ve text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classi er, a neural network (NNet) approach, the Linear Leastsquares Fit (LLSF) mapping and a Naive Bayes (NB) classier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as function of the training-set category frequency. Our results show that SVM, kNN and LLSF signi cantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are su ciently common (over 300 instances).",
"title": ""
},
{
"docid": "70e6148316bd8915afd8d0908fb5ab0d",
"text": "We consider the problem of using a large unla beled sample to boost performance of a learn ing algorithm when only a small set of labeled examples is available In particular we con sider a problem setting motivated by the task of learning to classify web pages in which the description of each example can be partitioned into two distinct views For example the de scription of a web page can be partitioned into the words occurring on that page and the words occurring in hyperlinks that point to that page We assume that either view of the example would be su cient for learning if we had enough labeled data but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled ex amples Speci cally the presence of two dis tinct views of each example suggests strategies in which two learning algorithms are trained separately on each view and then each algo rithm s predictions on new unlabeled exam ples are used to enlarge the training set of the other Our goal in this paper is to provide a PAC style analysis for this setting and more broadly a PAC style framework for the general problem of learning from both labeled and un labeled data We also provide empirical results on real web page data indicating that this use of unlabeled examples can lead to signi cant improvement of hypotheses in practice This paper is to appear in the Proceedings of the Conference on Computational Learning Theory This research was supported in part by the DARPA HPKB program under contract F and by NSF National Young Investigator grant CCR INTRODUCTION In many machine learning settings unlabeled examples are signi cantly easier to come by than labeled ones One example of this is web page classi cation Suppose that we want a program to electronically visit some web site and download all the web pages of interest to us such as all the CS faculty member pages or all the course home pages at some university To train such a system to automatically classify web pages one would typically rely on hand labeled web pages These labeled examples are fairly expensive to obtain because they require human e ort In contrast the web has hundreds of millions of unlabeled web pages that can be inexpensively gathered using a web crawler Therefore we would like our learning algorithm to be able to take as much advantage of the unlabeled data as possible This web page learning problem has an interesting feature Each example in this domain can naturally be described using several di erent kinds of information One kind of information about a web page is the text appearing on the document itself A second kind of information is the anchor text attached to hyperlinks pointing to this page from other pages on the web The two problem characteristics mentioned above availability of both labeled and unlabeled data and the availability of two di erent kinds of information about examples suggest the following learning strat egy Using an initial small set of labeled examples nd weak predictors based on each kind of information for instance we might nd that the phrase research inter ests on a web page is a weak indicator that the page is a faculty home page and we might nd that the phrase my advisor on a link is an indicator that the page being pointed to is a faculty page Then attempt to bootstrap from these weak predictors using unlabeled data For instance we could search for pages pointed to with links having the phrase my advisor and use them as probably positive examples to further train a 
learning algorithm based on the words on the text page and vice versa We call this type of bootstrapping co training and it has a close connection to bootstrapping from incomplete data in the Expectation Maximization setting see for instance The question this raises is is there any reason to believe co training will help Our goal is to address this question by developing a PAC style theoretical framework to better understand the issues involved in this approach We also give some preliminary empirical results on classifying university web pages see Section that are encouraging in this context More broadly the general question of how unlabeled examples can be used to augment labeled data seems a slippery one from the point of view of standard PAC as sumptions We address this issue by proposing a notion of compatibility between a data distribution and a target function Section and discuss how this relates to other approaches to combining labeled and unlabeled data Section",
"title": ""
}
] |
[
{
"docid": "d029ce85b17e37abc93ab704fbef3a98",
"text": "Video super-resolution (SR) aims to generate a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts. The generation of accurate correspondence plays a significant role in video SR. It is demonstrated by traditional video SR methods that simultaneous SR of both images and optical flows can provide accurate correspondences and better SR results. However, LR optical flows are used in existing deep learning based methods for correspondence generation. In this paper, we propose an endto-end trainable video SR framework to super-resolve both images and optical flows. Specifically, we first propose an optical flow reconstruction network (OFRnet) to infer HR optical flows in a coarse-to-fine manner. Then, motion compensation is performed according to the HR optical flows. Finally, compensated LR inputs are fed to a superresolution network (SRnet) to generate the SR results. Extensive experiments demonstrate that HR optical flows provide more accurate correspondences than their LR counterparts and improve both accuracy and consistency performance. Comparative results on the Vid4 and DAVIS10 datasets show that our framework achieves the stateof-the-art performance. The codes will be released soon at: https://github.com/LongguangWang/SOF-VSR-SuperResolving-Optical-Flow-for-Video-Super-Resolution-.",
"title": ""
},
{
"docid": "0de75995e7face03c56ce90aae7bf944",
"text": "The analysis of facial appearance is significant to an early diagnosis of medical genetic diseases. The fast development of image processing and machine learning techniques facilitates the detection of facial dysmorphic features. This paper is a survey of the recent studies developed for the screening of genetic abnormalities across the facial features obtained from two dimensional and three dimensional images.",
"title": ""
},
{
"docid": "ef9947c8f478d6274fcbcf8c9e300806",
"text": "The introduction in 1998 of multi-detector row computed tomography (CT) by the major CT vendors was a milestone with regard to increased scan speed, improved z-axis spatial resolution, and better utilization of the available x-ray power. In this review, the general technical principles of multi-detector row CT are reviewed as they apply to the established four- and eight-section systems, the most recent 16-section scanners, and future generations of multi-detector row CT systems. Clinical examples are used to demonstrate both the potential and the limitations of the different scanner types. When necessary, standard single-section CT is referred to as a common basis and starting point for further developments. Another focus is the increasingly important topic of patient radiation exposure, successful dose management, and strategies for dose reduction. Finally, the evolutionary steps from traditional single-section spiral image-reconstruction algorithms to the most recent approaches toward multisection spiral reconstruction are traced.",
"title": ""
},
{
"docid": "f65d5366115da23c8acd5bce1f4a9887",
"text": "Effective crisis management has long relied on both the formal and informal response communities. Social media platforms such as Twitter increase the participation of the informal response community in crisis response. Yet, challenges remain in realizing the formal and informal response communities as a cooperative work system. We demonstrate a supportive technology that recognizes the existing capabilities of the informal response community to identify needs (seeker behavior) and provide resources (supplier behavior), using their own terminology. To facilitate awareness and the articulation of work in the formal response community, we present a technology that can bridge the differences in terminology and understanding of the task between the formal and informal response communities. This technology includes our previous work using domain-independent features of conversation to identify indications of coordination within the informal response community. In addition, it includes a domain-dependent analysis of message content (drawing from the ontology of the formal response community and patterns of language usage concerning the transfer of property) to annotate social media messages. The resulting repository of annotated messages is accessible through our social media analysis tool, Twitris. It allows recipients in the formal response community to sort on resource needs and availability along various dimensions including geography and time. Thus, computation indexes the original social media content and enables complex querying to identify contents, players, and locations. Evaluation of the computed annotations for seeker-supplier behavior with human judgment shows fair to moderate agreement. In addition to the potential benefits to the formal emergency response community regarding awareness of the observations and activities of the informal response community, the analysis serves as a point of reference for evaluating more computationally intensive efforts and characterizing the patterns of language behavior during a crisis.",
"title": ""
},
{
"docid": "b1f0b80c51af4c146495eb2b1e3b9ba9",
"text": "This paper presents an average current mode buck dimmable light-emitting diode (LED) driver for large-scale single-string LED backlighting applications. The proposed integrated current control technique can provide exact current control signals by using an autozeroed integrator to enhance the accuracy of the average current of LEDs while driving a large number of LEDs. Adoption of discontinuous low-side current sensing leads to power loss reduction. Adoption of a fast-settling technique allows the LED driver to enter into the steady state within three switching cycles after the dimming signal is triggered. Implemented in a 0.35-μm HV CMOS process, the proposed LED driver achieves 1.7% LED current error and 98.16% peak efficiency over an input voltage range of 110 to 200 V while driving 30 to 50 LEDs.",
"title": ""
},
{
"docid": "6d925c32d3900512e0fd0ed36b683c69",
"text": "This paper presents a detailed design process of an ultra-high speed, switched reluctance machine for micro machining. The performance goal of the machine is to reach a maximum rotation speed of 750,000 rpm with an output power of 100 W. The design of the rotor involves reducing aerodynamic drag, avoiding mechanical resonance, and mitigating excessive stress. The design of the stator focuses on meeting the torque requirement while minimizing core loss and copper loss. The performance of the machine and the strength of the rotor structure are both verified through finite-element simulations The final design is a 6/4 switched reluctance machine with a 6mm diameter rotor that is wrapped in a carbon fiber sleeve and exhibits 13.6 W of viscous loss. The stator has shoeless poles and exhibits 19.1 W of electromagnetic loss.",
"title": ""
},
{
"docid": "4d791fa53f7ed8660df26cd4dbe9063a",
"text": "The Internet is a powerful political instrument, wh ich is increasingly employed by terrorists to forward their goals. The fiv most prominent contemporary terrorist uses of the Net are information provision , fi ancing, networking, recruitment, and information gathering. This article describes a nd explains each of these uses and follows up with examples. The final section of the paper describes the responses of government, law enforcement, intelligence agencies, and others to the terrorism-Internet nexus. There is a particular emphasis within the te xt on the UK experience, although examples from other jurisdictions are also employed . ___________________________________________________________________ “Terrorists use the Internet just like everybody el se” Richard Clarke (2004) 1 ___________________________________________________________________",
"title": ""
},
{
"docid": "00ea9078f610b14ed0ed00ed6d0455a7",
"text": "Boosting is a general method for improving the performance of learning algorithms. A recently proposed boosting algorithm, Ada Boost, has been applied with great success to several benchmark machine learning problems using mainly decision trees as base classifiers. In this article we investigate whether Ada Boost also works as well with neural networks, and we discuss the advantages and drawbacks of different versions of the Ada Boost algorithm. In particular, we compare training methods based on sampling the training set and weighting the cost function. The results suggest that random resampling of the training data is not the main explanation of the success of the improvements brought by Ada Boost. This is in contrast to bagging, which directly aims at reducing variance and for which random resampling is essential to obtain the reduction in generalization error. Our system achieves about 1.4 error on a data set of on-line handwritten digits from more than 200 writers. A boosted multilayer network achieved 1.5 error on the UCI letters and 8.1 error on the UCI satellite data set, which is significantly better than boosted decision trees.",
"title": ""
},
{
"docid": "e8d2fc861fd1b930e65d40f6ce763672",
"text": "Despite that burnout presents a serious burden for modern society, there are no diagnostic criteria. Additional difficulty is the differential diagnosis with depression. Consequently, there is a need to dispose of a burnout biomarker. Epigenetic studies suggest that DNA methylation is a possible mediator linking individual response to stress and psychopathology and could be considered as a potential biomarker of stress-related mental disorders. Thus, the aim of this review is to provide an overview of DNA methylation mechanisms in stress, burnout and depression. In addition to state-of-the-art overview, the goal of this review is to provide a scientific base for burnout biomarker research. We performed a systematic literature search and identified 25 pertinent articles. Among these, 15 focused on depression, 7 on chronic stress and only 3 on work stress/burnout. Three epigenome-wide studies were identified and the majority of studies used the candidate-gene approach, assessing 12 different genes. The glucocorticoid receptor gene (NR3C1) displayed different methylation patterns in chronic stress and depression. The serotonin transporter gene (SLC6A4) methylation was similarly affected in stress, depression and burnout. Work-related stress and depressive symptoms were associated with different methylation patterns of the brain derived neurotrophic factor gene (BDNF) in the same human sample. The tyrosine hydroxylase (TH) methylation was correlated with work stress in a single study. Additional, thoroughly designed longitudinal studies are necessary for revealing the cause-effect relationship of work stress, epigenetics and burnout, including its overlap with depression.",
"title": ""
},
{
"docid": "7c295cb178e58298b1f60f5a829118fd",
"text": "A dual-band 0.92/2.45 GHz circularly-polarized (CP) unidirectional antenna using the wideband dual-feed network, two orthogonally positioned asymmetric H-shape slots, and two stacked concentric annular-ring patches is proposed for RF identification (RFID) applications. The measurement result shows that the antenna achieves the impedance bandwidths of 15.4% and 41.9%, the 3-dB axial-ratio (AR) bandwidths of 4.3% and 21.5%, and peak gains of 7.2 dBic and 8.2 dBic at 0.92 and 2.45 GHz bands, respectively. Moreover, the antenna provides stable symmetrical radiation patterns and wide-angle 3-dB AR beamwidths in both lower and higher bands for unidirectional wide-coverage RFID reader applications. Above all, the dual-band CP unidirectional patch antenna presented is beneficial to dual-band RFID system on configuration, implementation, as well as cost reduction.",
"title": ""
},
{
"docid": "627f3b4ae9df80bdc0374d4fe375f40e",
"text": "Though in the lowest level cuckoos exploit precisely this hypothesis that do. This by people who are relevantly similar others suggest the human moral. 1983 levine et al but evolutionary mechanism. No need for a quite generously supported by an idea that do. Individuals may be regulated by the, future nonetheless. It is that we can do not merely apparent case if there are less. 2005 oliner sorokin taylor et al. Oliner however a poet and wrong boehm tackles the motives. Studies have the willingness to others from probability of narrative.",
"title": ""
},
{
"docid": "a45c93e89cc3df3ebec59eb0c81192ec",
"text": "We study a variant of the capacitated vehicle routing problem where the cost over each arc is defined as the product of the arc length and the weight of the vehicle when it traverses that arc. We propose two new mixed integer linear programming formulations for the problem: an arc-load formulation and a set partitioning formulation based on q-routes with additional constraints. A family of cycle elimination constraints are derived for the arc-load formulation. We then compare the linear programming (LP) relaxations of these formulations with the twoindex one-commodity flow formulation proposed in the literature. In particular, we show that the arc-load formulation with the new cycle elimination constraints gives the same LP bound as the set partitioning formulation based on 2-cycle-free q-routes, which is stronger than the LP bound given by the two-index one-commodity flow formulation. We propose a branchand-cut algorithm for the arc-load formulation, and a branch-cut-and-price algorithm for the set partitioning formulation strengthened by additional constraints. Computational results on instances from the literature demonstrate that a significant improvement can be achieved by the branch-cut-and-price algorithm over other methods.",
"title": ""
},
{
"docid": "e1050f3c38f0b49893da4dd7722aff71",
"text": "The Berkeley lower extremity exoskeleton (BLEEX) is a load-carrying and energetically autonomous human exoskeleton that, in this first generation prototype, carries up to a 34 kg (75 Ib) payload for the pilot and allows the pilot to walk at up to 1.3 m/s (2.9 mph). This article focuses on the human-in-the-loop control scheme and the novel ring-based networked control architecture (ExoNET) that together enable BLEEX to support payload while safely moving in concert with the human pilot. The BLEEX sensitivity amplification control algorithm proposed here increases the closed loop system sensitivity to its wearer's forces and torques without any measurement from the wearer (such as force, position, or electromyogram signal). The tradeoffs between not having sensors to measure human variables, the need for dynamic model accuracy, and robustness to parameter uncertainty are described. ExoNET provides the physical network on which the BLEEX control algorithm runs. The ExoNET control network guarantees strict determinism, optimized data transfer for small data sizes, and flexibility in configuration. Its features and application on BLEEX are described",
"title": ""
},
{
"docid": "dac17254c16068a4dcf49e114bfcc822",
"text": "We present a novel coded exposure video technique for multi-image motion deblurring. The key idea of this paper is to capture video frames with a set of complementary fluttering patterns, which enables us to preserve all spectrum bands of a latent image and recover a sharp latent image. To achieve this, we introduce an algorithm for generating a complementary set of binary sequences based on the modern communication theory and implement the coded exposure video system with an off-the-shelf machine vision camera. To demonstrate the effectiveness of our method, we provide in-depth analyses of the theoretical bounds and the spectral gains of our method and other state-of-the-art computational imaging approaches. We further show deblurring results on various challenging examples with quantitative and qualitative comparisons to other computational image capturing methods used for image deblurring, and show how our method can be applied for protecting privacy in videos.",
"title": ""
},
{
"docid": "af910640384bca46ba4268fe4ba0c3b3",
"text": "The experience and methodology developed by COPEL for the integrated use of Pls-Cadd (structure spotting) and Tower (structural analysis) softwares are presented. Structural evaluations in transmission line design are possible for any loading condition, allowing considerations of new or updated loading trees, wind speeds or design criteria.",
"title": ""
},
{
"docid": "79fd1db13ce875945c7e11247eb139c8",
"text": "This paper provides a comprehensive review of outcome studies and meta-analyses of effectiveness studies of psychodynamic therapy (PDT) for the major categories of mental disorders. Comparisons with inactive controls (waitlist, treatment as usual and placebo) generally but by no means invariably show PDT to be effective for depression, some anxiety disorders, eating disorders and somatic disorders. There is little evidence to support its implementation for post-traumatic stress disorder, obsessive-compulsive disorder, bulimia nervosa, cocaine dependence or psychosis. The strongest current evidence base supports relatively long-term psychodynamic treatment of some personality disorders, particularly borderline personality disorder. Comparisons with active treatments rarely identify PDT as superior to control interventions and studies are generally not appropriately designed to provide tests of statistical equivalence. Studies that demonstrate inferiority of PDT to alternatives exist, but are small in number and often questionable in design. Reviews of the field appear to be subject to allegiance effects. The present review recommends abandoning the inherently conservative strategy of comparing heterogeneous \"families\" of therapies for heterogeneous diagnostic groups. Instead, it advocates using the opportunities provided by bioscience and computational psychiatry to creatively explore and assess the value of protocol-directed combinations of specific treatment components to address the key problems of individual patients.",
"title": ""
},
{
"docid": "6d4315ed2e36708528e46b368c89573e",
"text": "Annotating the right data for training deep neural networks is an important challenge. Active learning using uncertainty estimates from Bayesian Neural Networks (BNNs) could provide an effective solution to this. Despite being theoretically principled, BNNs require approximations to be applied to large-scale problems, and have not been used widely by practitioners. In this paper, we introduce Deep Probabilistic Ensembles (DPEs), a scalable technique that uses a regularized ensemble to approximate a deep BNN. We conduct a series of active learning experiments to evaluate DPEs on classification with the CIFAR-10, CIFAR-100 and ImageNet datasets, and semantic segmentation with the BDD100k dataset. Our models consistently outperform baselines and previously published methods, requiring significantly less training data to achieve competitive performances.",
"title": ""
},
{
"docid": "52945fb1d436b81a3e52d83abdea55d0",
"text": "Article history: Received 16 September 2016 Received in revised form 28 November 2016 Accepted 20 January 2017 Available online xxxx",
"title": ""
}
] |
scidocsrr
|
5cc337f41627bc1304d5178cc34efebd
|
Combining self-supervised learning and imitation for vision-based rope manipulation
|
[
{
"docid": "57f2b164538adcd242f66b80d4218cef",
"text": "Suturing is an important yet time-consuming part of surgery. A fast and robust autonomous procedure could reduce surgeon fatigue, and shorten operation times. It could also be of particular importance for suturing in remote tele-surgery settings where latency can complicate the master-slave mode control that is the current practice for robotic surgery with systems like the da Vinci®. We study the applicability of the trajectory transfer algorithm proposed in [12] to the automation of suturing. The core idea of this procedure is to first use non-rigid registration to find a 3D warping function which maps the demonstration scene onto the test scene, then use this warping function to transform the robot end-effector trajectory. Finally a robot joint trajectory is generated by solving a trajectory optimization problem that attempts to find the closest feasible trajectory, accounting for external constraints, such as joint limits and obstacles. Our experiments investigate generalization from a single demonstration to differing initial conditions. A first set of experiments considers the problem of having a simulated Raven II system [5] suture two flaps of tissue together. A second set of experiments considers a PR2 robot performing sutures in a scaled-up experimental setup. The simulation experiments were fully autonomous. For the real-world experiments we provided human input to assist with the detection of landmarks to be fed into the registration algorithm. The success rate for learning from a single demonstration is high for moderate perturbations from the demonstration's initial conditions, and it gradually decreases for larger perturbations.",
"title": ""
}
] |
[
{
"docid": "d83f34978bd6dd72131c36f8adb34850",
"text": "Images in social networks share different destinies: some are going to become popular while others are going to be completely unnoticed. In this paper we propose to use visual sentiment features together with three novel context features to predict a concise popularity score of social images. Experiments on large scale datasets show the benefits of proposed features on the performance of image popularity prediction. Exploiting state-of-the-art sentiment features, we report a qualitative analysis of which sentiments seem to be related to good or poor popularity. To the best of our knowledge, this is the first work understanding specific visual sentiments that positively or negatively influence the eventual popularity of images.",
"title": ""
},
{
"docid": "9677d364752d50160557bd8e9dfa0dfb",
"text": "a Junior Research Group of Primate Sexual Selection, Department of Reproductive Biology, German Primate Center Courant Research Center ‘Evolution of Social Behavior’, Georg-August-Universität, Germany c Junior Research Group of Primate Kin Selection, Department of Primatology, Max-Planck-Institute for Evolutionary Anthropology, Germany d Institute of Biology, Faculty of Bioscience, Pharmacy and Psychology, University of Leipzig, Germany e Faculty of Veterinary Medicine, Bogor Agricultural University, Indonesia",
"title": ""
},
{
"docid": "882f463d187854967709c95ecd1d2fc1",
"text": "In this paper, we propose a zoom-out-and-in network for generating object proposals. We utilize different resolutions of feature maps in the network to detect object instances of various sizes. Specifically, we divide the anchor candidates into three clusters based on the scale size and place them on feature maps of distinct strides to detect small, medium and large objects, respectively. Deeper feature maps contain region-level semantics which can help shallow counterparts to identify small objects. Therefore we design a zoom-in sub-network to increase the resolution of high level features via a deconvolution operation. The high-level features with high resolution are then combined and merged with low-level features to detect objects. Furthermore, we devise a recursive training pipeline to consecutively regress region proposals at the training stage in order to match the iterative regression at the testing stage. We demonstrate the effectiveness of the proposed method on ILSVRC DET and MS COCO datasets, where our algorithm performs better than the state-of-the-arts in various evaluation metrics. It also increases average precision by around 2% in the detection system.",
"title": ""
},
{
"docid": "66435d5b38f460edf7781372cd4e125b",
"text": "Network Function Virtualization (NFV) is emerging as a new paradigm for providing elastic network functions through flexible virtual network function (VNF) instances executed on virtualized computing platforms exemplified by cloud datacenters. In the new NFV market, well defined VNF instances each realize an atomic function that can be chained to meet user demands in practice. This work studies the dynamic market mechanism design for the transaction of VNF service chains in the NFV market, to help relinquish the full power of NFV. Combining the techniques of primal-dual approximation algorithm design with Myerson's characterization of truthful mechanisms, we design a VNF chain auction that runs efficiently in polynomial time, guarantees truthfulness, and achieves near-optimal social welfare in the NFV eco-system. Extensive simulation studies verify the efficacy of our auction mechanism.",
"title": ""
},
{
"docid": "b4f3dc8134b9c04e60fba8a0fda70545",
"text": "Many important applications – from big data analytics to information retrieval, gene expression analysis, and numerical weather prediction – require the solution of large dense singular value decompositions (SVD). In many cases the problems are too large to fit into the computer’s main memory, and thus require specialized out-of-core algorithms that use disk storage. In this paper, we analyze the SVD communications, as related to hierarchical memories, and design a class of algorithms that minimizes them. This class includes out-of-core SVDs but can also be applied between other consecutive levels of the memory hierarchy, e.g., GPU SVD using the CPU memory for large problems. We call these out-of-memory (OOM) algorithms. To design OOM SVDs, we first study the communications for both classical one-stage blocked SVD and two-stage tiled SVD. We present the theoretical analysis and strategies to design, as well as implement, these communication avoiding OOM SVD algorithms. We show performance results for multicore architecture that illustrate our theoretical findings and match our performance models.",
"title": ""
},
{
"docid": "c9a6fb06acb9e33a607c7f183ff6a626",
"text": "The objective of the study was to examine the correlations between intracranial aneurysm morphology and wall shear stress (WSS) to identify reliable predictors of rupture risk. Seventy-two intracranial aneurysms (41 ruptured and 31 unruptured) from 63 patients were studied retrospectively. All aneurysms were divided into two categories: narrow (aspect ratio ≥1.4) and wide-necked (aspect ratio <1.4 or neck width ≥4 mm). Computational fluid dynamics was used to determine the distribution of WSS, which was analyzed between different morphological groups and between ruptured and unruptured aneurysms. Sections of the walls of clipped aneurysms were stained with hematoxylin–eosin, observed under a microscope, and photographed. Ruptured aneurysms were statistically more likely to have a greater low WSS area ratio (LSAR) (P = 0.001) and higher aneurysms parent WSS ratio (P = 0.026) than unruptured aneurysms. Narrow-necked aneurysms were statistically more likely to have a larger LSAR (P < 0.001) and lower values of MWSS (P < 0.001), mean aneurysm-parent WSS ratio (P < 0.001), HWSS (P = 0.012), and the highest aneurysm-parent WSS ratio (P < 0.001) than wide-necked aneurysms. The aneurysm wall showed two different pathological changes associated with high or low WSS in wide-necked aneurysms. Aneurysm morphology could affect the distribution and magnitude of WSS on the basis of differences in blood flow. Both high and low WSS could contribute to focal wall damage and rupture through different mechanisms associated with each morphological type.",
"title": ""
},
{
"docid": "3a8f14166954036f85914183dd7a7ee4",
"text": "Abused and nonabused child witnesses to parental violence temporarily residing in a battered women's shelter were compared to children from a similar economic background on measures of self-esteem, anxiety, depression, and behavior problems, using mothers' and self-reports. Results indicated significantly more distress in the abused-witness children than in the comparison group, with nonabused witness children's scores falling between the two. Age of child and types of violence were mediating factors. Implications of the findings are discussed.",
"title": ""
},
{
"docid": "33b37422ace8a300d53d4896de6bbb6f",
"text": "Digital investigations of the real world through point clouds and derivatives are changing how curators, cultural heritage researchers and archaeologists work and collaborate. To progressively aggregate expertise and enhance the working proficiency of all professionals, virtual reconstructions demand adapted tools to facilitate knowledge dissemination. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. In this paper, we review the state of the art of point cloud integration within archaeological applications, giving an overview of 3D technologies for heritage, digital exploitation and case studies showing the assimilation status within 3D GIS. Identified issues and new perspectives are addressed through a knowledge-based point cloud processing framework for multi-sensory data, and illustrated on mosaics and quasi-planar objects. A new acquisition, pre-processing, segmentation and ontology-based classification method on hybrid point clouds from both terrestrial laser scanning and dense image matching is proposed to enable reasoning for information extraction. Experiments in detection and semantic enrichment show promising results of 94% correct semantization. Then, we integrate the metadata in an archaeological smart point cloud data structure allowing spatio-semantic queries related to CIDOC-CRM. Finally, a WebGL prototype is presented that leads to efficient communication between actors by proposing optimal 3D data visualizations as a basis on which interaction can grow.",
"title": ""
},
{
"docid": "d7f349fd58c2d00acc29e5efdbea7073",
"text": "Digit ratio (2D:4D), a putative correlate of prenatal testosterone, has been found to relate to performance in sport and athletics such that low 2D:4D (high prenatal testosterone) correlates with high performance. Speed in endurance races is strongly related to 2D:4D, and may be one factor that underlies the link between sport and 2D:4D, but nothing is known of the relationship between 2D:4D and sprinting speed. Here we show that running times over 50 m were positively correlated with 2D:4D in a sample of 241 boys (i.e. runners with low 2D:4D ran faster than runners with high 2D:4D). The relationship was also found for 50 m split times (at 20, 30, and 40 m) and was independent of age, BMI, and an index of maturity. However, associations between 2D:4D and sprinting speed were much weaker than those reported for endurance running. This suggests that 2D:4D is a relatively weak predictor of strength and a stronger predictor of efficiency in aerobic exercise. We discuss the effect sizes for relationships between 2D:4D and sport and target traits in general, and identify areas of strength and weakness in digit ratio research.",
"title": ""
},
{
"docid": "05eaf278ed39cd6a8522f812589388c6",
"text": "Several recent software systems have been designed to obtain novel annotation of cross-referencing text fragments and Wikipedia pages. Tagme is state of the art in this setting and can accurately manage short textual fragments (such as snippets of search engine results, tweets, news, or blogs) on the fly.",
"title": ""
},
{
"docid": "83ed915556df1c00f6448a38fb3b7ec3",
"text": "Wandering liver or hepatoptosis is a rare entity in medical practice. It is also known as floating liver and hepatocolonic vagrancy. It describes the unusual finding of, usually through radiology, the alternate appearance of the liver on the right and left side, respectively. . The first documented case of wandering liver was presented by Heister in 1754 Two centuries later In 1958, Grayson recognized and described the association of wandering liver and tachycardia. In his paper, Grayson details the classical description of wandering liver documented by French in his index of differential diagnosis. In 2010 Jan F. Svensson et al described the first report of a wandering liver in a neonate, reviewed and a discussed the possible treatment strategies. When only displaced, it may wrongly be thought to be enlarged liver",
"title": ""
},
{
"docid": "82e78a0e89a5fe7ca4465af9d7a4dc3e",
"text": "While Six Sigma is increasingly implemented in industry, little academic research has been done on Six Sigma and its influence on quality management theory and application. There is a criticism that Six Sigma simply puts traditional quality management practices in a new package. To investigate this issue and the role of Six Sigma in quality management, this study reviewed both the traditional quality management and Six Sigma literatures and identified three new practices that are critical for implementing Six Sigma’s concept and method in an organization. These practices are referred to as: Six Sigma role structure, Six Sigma structured improvement procedure, and Six Sigma focus on metrics. A research model and survey instrument were developed to investigate how these Six Sigma practices integrate with seven traditional quality management practices to affect quality performance and business performance. Test results based on a sample of 226 US manufacturing plants revealed that the three Six Sigma practices are distinct practices from traditional quality management practices, and that they complement the traditional quality management practices in improving performance. The implications of the findings for researchers and practitioners are discussed and further research directions are offered. # 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7265c5e3f64b0a19592e7b475649433c",
"text": "A power transformer outage has a dramatic financial consequence not only for electric power systems utilities but also for interconnected customers. The service reliability of this important asset largely depends upon the condition of the oil-paper insulation. Therefore, by keeping the qualities of oil-paper insulation system in pristine condition, the maintenance planners can reduce the decline rate of internal faults. Accurate diagnostic methods for analyzing the condition of transformers are therefore essential. Currently, there are various electrical and physicochemical diagnostic techniques available for insulation condition monitoring of power transformers. This paper is aimed at the description, analysis and interpretation of modern physicochemical diagnostics techniques for assessing insulation condition in aged transformers. Since fields and laboratory experiences have shown that transformer oil contains about 70% of diagnostic information, the physicochemical analyses of oil samples can therefore be extremely useful in monitoring the condition of power transformers.",
"title": ""
},
{
"docid": "f811ec2ab6ce7e279e97241dc65de2a5",
"text": "Summary Kraljic's purchasing portfolio approach has inspired many academic writers to undertake further research into purchasing portfolio models. Although it is evident that power and dependence issues play an important role in the Kraljic matrix, scant quantitative research has been undertaken in this respect. In our study we have filled this gap by proposing quantitative measures for ‘relative power’ and ‘total interdependence’. By undertaking a comprehensive survey among Dutch purchasing professionals, we have empirically quantified ‘relative power’ and ‘total interdependence’ for each quadrant of the Kraljic portfolio matrix. We have compared theoretical expectations on power and dependence levels with our empirical findings. A remarkable finding is the observed supplier dominance in the strategic quadrant of the Kraljic matrix. This indicates that the supplier dominates even satisfactory partnerships. In the light of this finding future research cannot assume any longer that buyersupplier relationships in the strategic quadrant of the Kraljic matrix are necessarily characterised by symmetric power. 1 Marjolein C.J. Caniëls, Open University of the Netherlands (OUNL), Faculty of Management Sciences (MW), P.O. Box 2960, 6401 DL Heerlen, the Netherlands. Tel: +31 45 5762724; Fax: +31 45 5762103 E-mail: marjolein.caniels@ou.nl 2 Cees J. Gelderman, Open University of the Netherlands (OUNL), Faculty of Management Sciences (MW) P.O. Box 2960, 6401 DL Heerlen, the Netherlands. Tel: +31 45 5762590; Fax: +31 45 5762103 E-mail: kees.gelderman@ou.nl",
"title": ""
},
{
"docid": "52ebf28afd8ae56816fb81c19e8890b6",
"text": "In this paper we aim to model the relationship between the text of a political blog post and the comment volume—that is, the total amount of response—that a post will receive. We seek to accurately identify which posts will attract a high-volume response, and also to gain insight about the community of readers and their interests. We design and evaluate variations on a latentvariable topic model that links text to comment volume. Introduction What makes a blog post noteworthy? One measure of the popularity or breadth of interest of a blog post is the extent to which readers of the blog are inspired to leave comments on the post. In this paper, we study the relationship between the text contents of a blog post and the volume of response it will receive from blog readers. Modeling this relationship has the potential to reveal the interests of a blog’s readership community to its authors, readers, advertisers, and scientists studying the blogosphere, but it may also be useful in improving technologies for blog search, recommendation, summarization, and so on. There are many ways to define “popularity” in blogging. In this study, we focus exclusively on the aggregate volume of comments. Commenting is an important activity in the political blogosphere, giving a blog site the potential to become a discussion forum. For a given blog post, we treat comment volume as a target output variable, and use generative probabilistic models to learn from past data the relationship between a blog post’s text contents and its comment volume. While many clues might be useful in predicting comment volume (e.g., the post’s author, the time the post appears, the length of the post, etc.) here we focus solely on the text contents of the post. We first describe the data and experimental framework, including a simple baseline. We then explore how latentvariable topic models can be used to make better predictions about comment volume. These models reveal that part of the variation in comment volume can be explained by the topic of the blog post, and elucidate the relative degrees to which readers find each topic comment-worthy. ∗The authors acknowledge research support from HP Labs and helpful comments from the reviewers and Jacob Eisenstein. Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Predicting Comment Volume Our goal is to predict some measure of the volume of comments on a new blog post.1 Volume might be measured as the number of words in the comment section, the number of comments, the number of distinct users who leave comments, or a variety of other ways. Any of these can be affected by uninteresting factors—the time of day the post appears, a side conversation, a surge in spammer activity—but these quantities are easily measured. In research on blog data, comments are often ignored, and it is easy to see why: comments are very noisy, full of non-standard grammar and spelling, usually unedited, often cryptic and uninformative, at least to those outside the blog’s community. A few studies have focused on information in comments. Mishe and Glance (2006) showed the value of comments in characterizing the social repercussions of a post, including popularity and controversy. Their largescale user study correlated popularity and comment activity. Yano et al. 
(2009) sought to predict which members of blog’s community would leave comments, and in some cases used the text contents of the comments themselves to discover topics related to both words and user comment behavior. This work is similar, but we seek to predict the aggregate behavior of the blog post’s readers: given a new blog post, how much will the community comment on it?",
"title": ""
},
{
"docid": "bceb9f8cc1726017e564c6474618a238",
"text": "The modulators are the basic requirement of communication systems they are designed to reduce the channel distortion & to use in RF communication hence many type of carrier modulation techniques has been already proposed according to channel properties & data rate of the system. QPSK (Quadrature Phase Shift Keying) is one of the modulation schemes used in wireless communication system due to its ability to transmit twice the data rate for a given bandwidth. The QPSK is the most often used scheme since it does not suffer from BER (Bit Error rate) degradation while the bandwidth efficiency is increased. It is very popular in Satellite communication. As the design of complex mathematical models such as QPSK modulator in „pure HDL‟ is very difficult and costly; it requires from designer many additional skills and is time-consuming. To overcome these types of difficulties, the proposed QPSK modulator can be implemented on FPGA by using the concept of hardware co-simulation at Low power. In this process, QPSK modulator is simulated with Xilinx System Generator Simulink software and later on it is converted in Very high speed integrated circuit Hardware Descriptive Language to implement it on FPGA. Along with the co-simulation, power of the proposed QPSK modulator can be minimized than conventional QPSK modulator. As a conclusion, the proposed architecture will not only able to operate on co-simulation platform but at the same time it will significantly consume less operational power.",
"title": ""
},
{
"docid": "6e837f73398e1f2da537b31d5a696ec6",
"text": "With the development of high computational devices, deep neural networks (DNNs), in recent years, have gained significant popularity in many Artificial Intelligence (AI) applications. However, previous efforts have shown that DNNs were vulnerable to strategically modified samples, named adversarial examples. These samples are generated with some imperceptible perturbations, but can fool the DNNs to give false predictions. Inspired by the popularity of generating adversarial examples for image DNNs, research efforts on attacking DNNs for textual applications emerges in recent years. However, existing perturbation methods for images cannot be directly applied to texts as text data is discrete. In this article, we review research works that address this difference and generate textual adversarial examples on DNNs. We collect, select, summarize, discuss and analyze these works in a comprehensive way and cover all the related information to make the article self-contained. Finally, drawing on the reviewed literature, we provide further discussions and suggestions on this topic.",
"title": ""
},
{
"docid": "d0fc352e347f7df09140068a4195eb9e",
"text": "A wave of alternative coins that can be effectively mined without specialized hardware, and a surge in cryptocurrencies' market value has led to the development of cryptocurrency mining ( cryptomining ) services, such as Coinhive, which can be easily integrated into websites to monetize the computational power of their visitors. While legitimate website operators are exploring these services as an alternative to advertisements, they have also drawn the attention of cybercriminals: drive-by mining (also known as cryptojacking ) is a new web-based attack, in which an infected website secretly executes JavaScript code and/or a WebAssembly module in the user's browser to mine cryptocurrencies without her consent. In this paper, we perform a comprehensive analysis on Alexa's Top 1 Million websites to shed light on the prevalence and profitability of this attack. We study the websites affected by drive-by mining to understand the techniques being used to evade detection, and the latest web technologies being exploited to efficiently mine cryptocurrency. As a result of our study, which covers 28 Coinhive-like services that are widely being used by drive-by mining websites, we identified 20 active cryptomining campaigns. Motivated by our findings, we investigate possible countermeasures against this type of attack. We discuss how current blacklisting approaches and heuristics based on CPU usage are insufficient, and present MineSweeper, a novel detection technique that is based on the intrinsic characteristics of cryptomining code, and, thus, is resilient to obfuscation. Our approach could be integrated into browsers to warn users about silent cryptomining when visiting websites that do not ask for their consent.",
"title": ""
},
{
"docid": "60b21a7b9f0f52f48ae2830db600fa24",
"text": "The multi-armed bandit problem for a gambler is to decide which arm of a K-slot machine to pull to maximize his total reward in a series of trials. Many real-world learning and optimization problems can be modeled in this way. Several strategies or algorithms have been proposed as a solution to this problem in the last two decades, but, to our knowledge, there has been no common evaluation of these algorithms. This paper provides a preliminary empirical evaluation of several multiarmed bandit algorithms. It also describes and analyzes a new algorithm, Poker (Price Of Knowledge and Estimated Reward) whose performance compares favorably to that of other existing algorithms in several experiments. One remarkable outcome of our experiments is that the most naive approach, the -greedy strategy, proves to be often hard to beat.",
"title": ""
},
{
"docid": "7ead5f6b374024f5153fe6f4db18a64d",
"text": "Smart mobile device usage has expanded at a very high rate all over the world. Since the mobile devices nowadays are used for a wide variety of application areas like personal communication, data storage and entertainment, security threats emerge, comparable to those which a conventional PC is exposed to. Mobile malware has been growing in scale and complexity as smartphone usage continues to rise. Android has surpassed other mobile platforms as the most popular whilst also witnessing a dramatic increase in malware targeting the platform. In this work, we have considered Android based malware for analysis and a scalable detection mechanism is designed using multifeature collaborative decision fusion (MCDF). The different features of a malicious file like the permission based features and the API call based features are considered in order to provide a better detection by training an ensemble of classifiers and combining their decisions using collaborative approach based on probability theory. The performance of the proposed model is evaluated on a collection of Android based malware comprising of different malware families and the results show that our approach give a better performance than state-of-the-art ensemble schemes available. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
22e3f411d852cef6d1d7ec72aabbe735
|
Power-aware routing based on the energy drain rate for mobile ad hoc networks
|
[
{
"docid": "7785c16b3d0515057c8a0ec0ed55b5de",
"text": "Most ad hoc mobile devices today operate on batteries. Hence, power consumption becomes an important issue. To maximize the lifetime of ad hoc mobile networks, the power consumption rate of each node must be evenly distributed, and the overall transmission power for each connection request must be minimized. These two objectives cannot be satisfied simultaneously by employing routing algorithms proposed in previous work. In this article we present a new power-aware routing protocol to satisfy these two constraints simultaneously; we also compare the performance of different types of power-related routing algorithms via simulation. Simulation results confirm the need to strike a balance in attaining service availability performance of the whole network vs. the lifetime of ad hoc mobile devices.",
"title": ""
},
{
"docid": "bbdb676a2a813d29cd78facebc38a9b8",
"text": "In this paper we develop a new multiaccess protocol for ad hoc radio networks. The protocol is based on the original MACA protocol with the adition of a separate signalling channel. The unique feature of our protocol is that it conserves battery power at nodes by intelligently powering off nodes that are not actively transmitting or receiving packets. The manner in which nodes power themselves off does not influence the delay or throughput characteristics of our protocol. We illustrate the power conserving behavior of PAMAS via extensive simulations performed over ad hoc networks containing 10-20 nodes. Our results indicate that power savings of between 10% and 70% are attainable in most systems. Finally, we discuss how the idea of power awareness can be built into other multiaccess protocols as well.",
"title": ""
}
] |
[
{
"docid": "ef3bfb8b04eea94724e0124b0cfe723e",
"text": "Generative adversarial networks (GANs) have demonstrated to be successful at generating realistic real-world images. In this paper we compare various GAN techniques, both supervised and unsupervised. The effects on training stability of different objective functions are compared. We add an encoder to the network, making it possible to encode images to the latent space of the GAN. The generator, discriminator and encoder are parameterized by deep convolutional neural networks. For the discriminator network we experimented with using the novel Capsule Network, a state-of-the-art technique for detecting global features in images. Experiments are performed using a digit and face dataset, with various visualizations illustrating the results. The results show that using the encoder network it is possible to reconstruct images. With the conditional GAN we can alter visual attributes of generated or encoded images. The experiments with the Capsule Network as discriminator result in generated images of a lower quality, compared to a standard convolutional neural network.",
"title": ""
},
{
"docid": "63934cfd6042d8bb2227f4e83b005cc2",
"text": "To support effective exploration, it is often stated that interactive visualizations should provide rapid response times. However, the effects of interactive latency on the process and outcomes of exploratory visual analysis have not been systematically studied. We present an experiment measuring user behavior and knowledge discovery with interactive visualizations under varying latency conditions. We observe that an additional delay of 500ms incurs significant costs, decreasing user activity and data set coverage. Analyzing verbal data from think-aloud protocols, we find that increased latency reduces the rate at which users make observations, draw generalizations and generate hypotheses. Moreover, we note interaction effects in which initial exposure to higher latencies leads to subsequently reduced performance in a low-latency setting. Overall, increased latency causes users to shift exploration strategy, in turn affecting performance. We discuss how these results can inform the design of interactive analysis tools.",
"title": ""
},
{
"docid": "5229fb13c66ca8a2b079f8fe46bb9848",
"text": "We put forth a lookup-table-based modular reduction method which partitions the binary string of an integer to be reduced into blocks according to its runs. Its complexity depends on the amount of runs in the binary string. We show that the new reduction is almost twice as fast as the popular Barrett’s reduction, and provide a thorough complexity analysis of the method.",
"title": ""
},
{
"docid": "f195e7f1018e1e1a6836c9d110ce1de4",
"text": "Motivated by the goal of obtaining more-anthropomorphic walking in bipedal robots, this paper considers a hybrid model of a 3D hipped biped with feet and locking knees. The main observation of this paper is that functional Routhian Reduction can be used to extend two-dimensional walking to three dimensions—even in the presence of periods of underactuation—by decoupling the sagittal and coronal dynamics of the 3D biped. Specifically, we assume the existence of a control law that yields stable walking for the 2D sagittal component of the 3D biped. The main result of the paper is that utilizing this controller together with “reduction control laws” yields walking in three dimensions. This result is supported through simulation.",
"title": ""
},
{
"docid": "0c1cd807339481f3a0b6da1fbe96950c",
"text": "Predictive modeling using machine learning is an effective method for building compiler heuristics, but there is a shortage of benchmarks. Typical machine learning experiments outside of the compilation field train over thousands or millions of examples. In machine learning for compilers, however, there are typically only a few dozen common benchmarks available. This limits the quality of learned models, as they have very sparse training data for what are often high-dimensional feature spaces. What is needed is a way to generate an unbounded number of training programs that finely cover the feature space. At the same time the generated programs must be similar to the types of programs that human developers actually write, otherwise the learning will target the wrong parts of the feature space. We mine open source repositories for program fragments and apply deep learning techniques to automatically construct models for how humans write programs. We sample these models to generate an unbounded number of runnable training programs. The quality of the programs is such that even human developers struggle to distinguish our generated programs from hand-written code. We use our generator for OpenCL programs, CLgen, to automatically synthesize thousands of programs and show that learning over these improves the performance of a state of the art predictive model by 1.27x. In addition, the fine covering of the feature space automatically exposes weaknesses in the feature design which are invisible with the sparse training examples from existing benchmark suites. Correcting these weaknesses further increases performance by 4.30x.",
"title": ""
},
{
"docid": "76def4ca02a25669610811881531e875",
"text": "The design and implementation of a novel frequency synthesizer based on low phase-noise digital dividers and a direct digital synthesizer is presented. The synthesis produces two low noise accurate tunable signals at 10 and 100 MHz. We report the measured residual phase noise and frequency stability of the syn thesizer and estimate the total frequency stability, which can be expected from the synthesizer seeded with a signal near 11.2 GHz from an ultra-stable cryocooled sapphire oscillator (cryoCSO). The synthesizer residual single-sideband phase noise, at 1-Hz offset, on 10and 100-MHz signals was -135 and -130 dBc/Hz, respectively. The frequency stability contributions of these two sig nals was σ<sub>y</sub> = 9 × 10<sup>-15</sup> and σ<sub>y</sub> = 2.2 × 10<sup>-15</sup>, respectively, at 1-s integration time. The Allan deviation of the total fractional frequency noise on the 10- and 100-MHz signals derived from the synthesizer with the cry oCSO may be estimated, respectively, as σ<sub>y</sub> ≈ 3.6 × 10<sup>-15</sup> τ<sup>-1/2</sup> + 4 × 10<sup>-16</sup> and σ<sub>y</sub> ≈ s 5.2 × 10<sup>-2</sup> × 10<sup>-16</sup> τ<sup>-1/2</sup> + 3 × 10<sup>-16</sup>, respectively, for 1 ≤ τ <; 10<sup>4</sup>s. We also calculate the coherence function (a figure of merit for very long baseline interferometry in radio astronomy) for observation frequencies of 100, 230, and 345 GHz, when using the cry oCSO and a hydrogen maser. The results show that the cryoCSO offers a significant advantage at frequencies above 100 GHz.",
"title": ""
},
{
"docid": "5a85c72c5b9898b010f047ee99dba133",
"text": "A method to design arbitrary three-way power dividers with ultra-wideband performance is presented. The proposed devices utilize a broadside-coupled structure, which has three coupled layers. The method assumes general asymmetric coupled layers. The design approach exploits the three fundamental modes of propagation: even-even, odd-odd, and odd-even, and the conformal mapping technique to find the coupling factors between the different layers. The method is used to design 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 three-way power dividers. The designed devices feature a multilayer broadside-coupled microstrip-slot-microstrip configuration using elliptical-shaped structures. The developed power dividers have a compact size with an overall dimension of 20 mm 30 mm. The simulated and measured results of the manufactured devices show an insertion loss equal to the nominated value 1 dB. The return loss for the input/output ports of the devices is better than 17, 18, and 13 dB, whereas the isolation between the output ports is better than 17, 14, and 15 dB for the 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 dividers, respectively, across the 3.1-10.6-GHz band.",
"title": ""
},
{
"docid": "8618b407f851f0806920f6e28fdefe3f",
"text": "The explosive growth of Internet applications and content, during the last decade, has revealed an increasing need for information filtering and recommendation. Most research in the area of recommendation systems has focused on designing and implementing efficient algorithms that provide accurate recommendations. However, the selection of appropriate recommendation content and the presentation of information are equally important in creating successful recommender applications. This paper addresses issues related to the presentation of recommendations in the movies domain. The current work reviews previous research approaches and popular recommender systems, and focuses on user persuasion and satisfaction. In our experiments, we compare different presentation methods in terms of recommendations’ organization in a list (i.e. top N-items list and structured overview) and recommendation modality (i.e. simple text, combination of text and image, and combination of text and video). The most efficient presentation methods, regarding user persuasion and satisfaction, proved to be the “structured overview” and the “text and video” interfaces, while a strong positive correlation was also found between user satisfaction and persuasion in all experimental conditions.",
"title": ""
},
{
"docid": "aa58cb2b2621da6260aeb203af1bd6f1",
"text": "Aspect-based opinion mining from online reviews has attracted a lot of attention recently. The main goal of all of the proposed methods is extracting aspects and/or estimating aspect ratings. Recent works, which are often based on Latent Dirichlet Allocation (LDA), consider both tasks simultaneously. These models are normally trained at the item level, i.e., a model is learned for each item separately. Learning a model per item is fine when the item has been reviewed extensively and has enough training data. However, in real-life data sets such as those from Epinions.com and Amazon.com more than 90% of items have less than 10 reviews, so-called cold start items. State-of-the-art LDA models for aspect-based opinion mining are trained at the item level and therefore perform poorly for cold start items due to the lack of sufficient training data. In this paper, we propose a probabilistic graphical model based on LDA, called Factorized LDA (FLDA), to address the cold start problem. The underlying assumption of FLDA is that aspects and ratings of a review are influenced not only by the item but also by the reviewer. It further assumes that both items and reviewers can be modeled by a set of latent factors which represent their aspect and rating distributions. Different from state-of-the-art LDA models, FLDA is trained at the category level and learns the latent factors using the reviews of all the items of a category, in particular the non cold start items, and uses them as prior for cold start items. Our experiments on three real-life data sets demonstrate the improved effectiveness of the FLDA model in terms of likelihood of the held-out test set. We also evaluate the accuracy of FLDA based on two application-oriented measures.",
"title": ""
},
{
"docid": "3e7e4b5c2a73837ac5fa111a6dc71778",
"text": "Merging the best features of RBAC and attribute-based systems can provide effective access control for distributed and rapidly changing applications.",
"title": ""
},
{
"docid": "edd6d9843c8c24497efa336d1a26be9d",
"text": "Alzheimer's disease (AD) can be diagnosed with a considerable degree of accuracy. In some centers, clinical diagnosis predicts the autopsy diagnosis with 90% certainty in series reported from academic centers. The characteristic histopathologic changes at autopsy include neurofibrillary tangles, neuritic plaques, neuronal loss, and amyloid angiopathy. Mutations on chromosomes 21, 14, and 1 cause familial AD. Risk factors for AD include advanced age, lower intelligence, small head size, and history of head trauma; female gender may confer additional risks. Susceptibility genes do not cause the disease by themselves but, in combination with other genes or epigenetic factors, modulate the age of onset and increase the probability of developing AD. Among several putative susceptibility genes (on chromosomes 19, 12, and 6), the role of apolipoprotein E (ApoE) on chromosome 19 has been repeatedly confirmed. Protective factors include ApoE-2 genotype, history of estrogen replacement therapy in postmenopausal women, higher educational level, and history of use of nonsteroidal anti-inflammatory agents. The most proximal brain events associated with the clinical expression of dementia are progressive neuronal dysfunction and loss of neurons in specific regions of the brain. Although the cascade of antecedent events leading to the final common path of neurodegeneration must be determined in greater detail, the accumulation of stable amyloid is increasingly widely accepted as a central pathogenetic event. All mutations known to cause AD increase the production of beta-amyloid peptide. This protein is derived from amyloid precursor protein and, when aggregated in a beta-pleated sheet configuration, is neurotoxic and forms the core of neuritic plaques. Nerve cell loss in selected nuclei leads to neurochemical deficiencies, and the combination of neuronal loss and neurotransmitter deficits leads to the appearance of the dementia syndrome. The destructive aspects include neurochemical deficits that disrupt cell-to-cell communications, abnormal synthesis and accumulation of cytoskeletal proteins (e.g., tau), loss of synapses, pruning of dendrites, damage through oxidative metabolism, and cell death. The concepts of cognitive reserve and symptom thresholds may explain the effects of education, intelligence, and brain size on the occurrence and timing of AD symptoms. Advances in understanding the pathogenetic cascade of events that characterize AD provide a framework for early detection and therapeutic interventions, including transmitter replacement therapies, antioxidants, anti-inflammatory agents, estrogens, nerve growth factor, and drugs that prevent amyloid formation in the brain.",
"title": ""
},
{
"docid": "0182e6dcf7c8ec981886dfa2586a0d5d",
"text": "MOTIVATION\nMetabolomics is a post genomic technology which seeks to provide a comprehensive profile of all the metabolites present in a biological sample. This complements the mRNA profiles provided by microarrays, and the protein profiles provided by proteomics. To test the power of metabolome analysis we selected the problem of discrimating between related genotypes of Arabidopsis. Specifically, the problem tackled was to discrimate between two background genotypes (Col0 and C24) and, more significantly, the offspring produced by the crossbreeding of these two lines, the progeny (whose genotypes would differ only in their maternally inherited mitichondia and chloroplasts).\n\n\nOVERVIEW\nA gas chromotography--mass spectrometry (GCMS) profiling protocol was used to identify 433 metabolites in the samples. The metabolomic profiles were compared using descriptive statistics which indicated that key primary metabolites vary more than other metabolites. We then applied neural networks to discriminate between the genotypes. This showed clearly that the two background lines can be discrimated between each other and their progeny, and indicated that the two progeny lines can also be discriminated. We applied Euclidean hierarchical and Principal Component Analysis (PCA) to help understand the basis of genotype discrimination. PCA indicated that malic acid and citrate are the two most important metabolites for discriminating between the background lines, and glucose and fructose are two most important metabolites for discriminating between the crosses. These results are consistant with genotype differences in mitochondia and chloroplasts.",
"title": ""
},
{
"docid": "8fb37cad9ad964598ed718f0c32eaff1",
"text": "A planar W-band monopulse antenna array is designed based on the substrate integrated waveguide (SIW) technology. The sum-difference comparator, 16-way divider and 32 × 32 slot array antenna are all integrated on a single dielectric substrate in the compact layout through the low-cost PCB process. Such a substrate integrated monopulse array is able to operate over 93 ~ 96 GHz with narrow-beam and high-gain. The maximal gain is measured to be 25.8 dBi, while the maximal null-depth is measured to be - 43.7 dB. This SIW monopulse antenna not only has advantages of low-cost, light, easy-fabrication, etc., but also has good performance validated by measurements. It presents an excellent candidate for W-band directional-finding systems.",
"title": ""
},
{
"docid": "fdb0009b962254761541eb08f556fa0e",
"text": "Nonionic surfactants are widely used in the development of protein pharmaceuticals. However, the low level of residual peroxides in surfactants can potentially affect the stability of oxidation-sensitive proteins. In this report, we examined the peroxide formation in polysorbate 80 under a variety of storage conditions and tested the potential of peroxides in polysorbate 80 to oxidize a model protein, IL-2 mutein. For the first time, we demonstrated that peroxides can be easily generated in neat polysorbate 80 in the presence of air during incubation at elevated temperatures. Polysorbate 80 in aqueous solution exhibited a faster rate of peroxide formation and a greater amount of peroxides during incubation, which is further promoted/catalyzed by light. Peroxide formation can be greatly inhibited by preventing any contact with air/oxygen during storage. IL-2 mutein can be easily oxidized both in liquid and solid states. A lower level of peroxides in polysorbate 80 did not change the rate of IL-2 mutein oxidation in liquid state but significantly accelerated its oxidation in solid state under air. A higher level of peroxides in polysorbate 80 caused a significant increase in IL-2 mutein oxidation both in liquid and solid states, and glutathione can significantly inhibit the peroxide-induced oxidation of IL-2 mutein in a lyophilized formulation. In addition, a higher level of peroxides in polysorbate 80 caused immediate IL-2 mutein oxidation during annealing in lyophilization, suggesting that implementation of an annealing step needs to be carefully evaluated in the development of a lyophilization process for oxidation-sensitive proteins in the presence of polysorbate.",
"title": ""
},
{
"docid": "c589dd4a3da018fbc62d69e2d7f56e88",
"text": "More than 520 soil samples were surveyed for species of the mycoparasitic zygomycete genus Syncephalis using a culture-based approach. These fungi are relatively common in soil using the optimal conditions for growing both the host and parasite. Five species obtained in dual culture are unknown to science and are described here: (i) S. digitata with sporangiophores short, merosporangia separate at the apices, simple, 3-5 spored; (ii) S. floridana, which forms galls in the host and has sporangiophores up to 170 µm long with unbranched merosporangia that contain 2-4 spores; (iii) S. pseudoplumigaleta, with an abrupt apical bend in the sporophore; (iv) S. pyriformis with fertile vesicles that are long-pyriform; and (v) S. unispora with unispored merosporangia. To facilitate future molecular comparisons between species of Syncephalis and to allow identification of these fungi from environmental sampling datasets, we used Syncephalis-specific PCR primers to generate internal transcribed spacer (ITS) sequences for all five new species.",
"title": ""
},
{
"docid": "9b44cee4e65922bb07682baf0d395730",
"text": "Zero-shot learning has gained popularity due to its potential to scale recognition models without requiring additional training data. This is usually achieved by associating categories with their semantic information like attributes. However, we believe that the potential offered by this paradigm is not yet fully exploited. In this work, we propose to utilize the structure of the space spanned by the attributes using a set of relations. We devise objective functions to preserve these relations in the embedding space, thereby inducing semanticity to the embedding space. Through extensive experimental evaluation on five benchmark datasets, we demonstrate that inducing semanticity to the embedding space is beneficial for zero-shot learning. The proposed approach outperforms the state-of-the-art on the standard zero-shot setting as well as the more realistic generalized zero-shot setting. We also demonstrate how the proposed approach can be useful for making approximate semantic inferences about an image belonging to a category for which attribute information is not available.",
"title": ""
},
{
"docid": "0e2d6ebfade09beb448e9c538dadd015",
"text": "Matching incomplete or partial fingerprints continues to be an important challenge today, despite the advances made in fingerprint identification techniques. While the introduction of compact silicon chip-based sensors that capture only part of the fingerprint has made this problem important from a commercial perspective, there is also considerable interest in processing partial and latent fingerprints obtained at crime scenes. When the partial print does not include structures such as core and delta, common matching methods based on alignment of singular structures fail. We present an approach that uses localized secondary features derived from relative minutiae information. A flow network-based matching technique is introduced to obtain one-to-one correspondence of secondary features. Our method balances the tradeoffs between maximizing the number of matches and minimizing total feature distance between query and reference fingerprints. A two-hidden-layer fully connected neural network is trained to generate the final similarity score based on minutiae matched in the overlapping areas. Since the minutia-based fingerprint representation is an ANSI-NIST standard [American National Standards Institute, New York, 1993], our approach has the advantage of being directly applicable to existing databases. We present results of testing on FVC2002’s DB1 and DB2 databases. 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6386c0ef0d7cc5c33e379d9c4c2ca019",
"text": "BACKGROUND\nEven after negative sentinel lymph node biopsy (SLNB) for primary melanoma, patients who develop in-transit (IT) melanoma or local recurrences (LR) can have subclinical regional lymph node involvement.\n\n\nSTUDY DESIGN\nA prospective database identified 33 patients with IT melanoma/LR who underwent technetium 99m sulfur colloid lymphoscintigraphy alone (n = 15) or in conjunction with lymphazurin dye (n = 18) administered only if the IT melanoma/LR was concurrently excised.\n\n\nRESULTS\nSeventy-nine percent (26 of 33) of patients undergoing SLNB in this study had earlier removal of lymph nodes in the same lymph node basin as the expected drainage of the IT melanoma or LR at the time of diagnosis of their primary melanoma. Lymphoscintography at time of presentation with IT melanoma/LR was successful in 94% (31 of 33) cases, and at least 1 sentinel lymph node was found intraoperatively in 97% (30 of 31) cases. The SLNB was positive in 33% (10 of 30) of these cases. Completion lymph node dissection was performed in 90% (9 of 10) of patients. Nine patients with negative SLNB and IT melanoma underwent regional chemotherapy. Patients in this study with a positive sentinel lymph node at the time the IT/LR was mapped had a considerably shorter time to development of distant metastatic disease compared with those with negative sentinel lymph nodes.\n\n\nCONCLUSIONS\nIn this study, we demonstrate the technical feasibility and clinical use of repeat SLNB for recurrent melanoma. Performing SLNB cannot only optimize local, regional, and systemic treatment strategies for patients with LR or IT melanoma, but also appears to provide important prognostic information.",
"title": ""
},
{
"docid": "a741a386cdbaf977468782c1971c8d86",
"text": "There is a trend that, virtually everyone, ranging from big Web companies to traditional enterprisers to physical science researchers to social scientists, is either already experiencing or anticipating unprecedented growth in the amount of data available in their world, as well as new opportunities and great untapped value. This paper reviews big data challenges from a data management respective. In particular, we discuss big data diversity, big data reduction, big data integration and cleaning, big data indexing and query, and finally big data analysis and mining. Our survey gives a brief overview about big-data-oriented research and problems.",
"title": ""
},
{
"docid": "dc5e69ca604d7fde242876d5464fb045",
"text": "We propose a general Convolutional Neural Network (CNN) encoder model for machine translation that fits within in the framework of Encoder-Decoder models proposed by Cho, et. al. [1]. A CNN takes as input a sentence in the source language, performs multiple convolution and pooling operations, and uses a fully connected layer to produce a fixed-length encoding of the sentence as input to a Recurrent Neural Network decoder (using GRUs or LSTMs). The decoder, encoder, and word embeddings are jointly trained to maximize the conditional probability of the target sentence given the source sentence. Many variations on the basic model are possible and can improve the performance of the model.",
"title": ""
}
] |
scidocsrr
|
d3f47a20ef4feb70db93bfb0c9ca577b
|
Permacoin: Repurposing Bitcoin Work for Data Preservation
|
[
{
"docid": "dfb3a6fea5c2b12e7865f8b6664246fb",
"text": "We develop a new version of prospect theory that employs cumulative rather than separable decision weights and extends the theory in several respects. This version, called cumulative prospect theory, applies to uncertain as well as to risky prospects with any number of outcomes, and it allows different weighting functions for gains and for losses. Two principles, diminishing sensitivity and loss aversion, are invoked to explain the characteristic curvature of the value function and the weighting functions. A review of the experimental evidence and the results of a new experiment confirm a distinctive fourfold pattern of risk attitudes: risk aversion for gains and risk seeking for losses of high probability; risk seeking for gains and risk aversion for losses of low probability. Expected utility theory reigned for several decades as the dominant normative and descriptive model of decision making under uncertainty, but it has come under serious question in recent years. There is now general agreement that the theory does not provide an adequate description of individual choice: a substantial body of evidence shows that decision makers systematically violate its basic tenets. Many alternative models have been proposed in response to this empirical challenge (for reviews, see Camerer, 1989; Fishburn, 1988; Machina, 1987). Some time ago we presented a model of choice, called prospect theory, which explained the major violations of expected utility theory in choices between risky prospects with a small number of outcomes (Kahneman and Tversky, 1979; Tversky and Kahneman, 1986). The key elements of this theory are 1) a value function that is concave for gains, convex for losses, and steeper for losses than for gains, *An earlier version of this article was entitled \"Cumulative Prospect Theory: An Analysis of Decision under Uncertainty.\" This article has benefited from discussions with Colin Camerer, Chew Soo-Hong, David Freedman, and David H. Krantz. We are especially grateful to Peter P. Wakker for his invaluable input and contribution to the axiomatic analysis. We are indebted to Richard Gonzalez and Amy Hayes for running the experiment and analyzing the data. This work was supported by Grants 89-0064 and 88-0206 from the Air Force Office of Scientific Research, by Grant SES-9109535 from the National Science Foundation, and by the Sloan Foundation. 298 AMOS TVERSKY/DANIEL KAHNEMAN and 2) a nonlinear transformation of the probability scale, which overweights small probabilities and underweights moderate and high probabilities. In an important later development, several authors (Quiggin, 1982; Schmeidler, 1989; Yaari, 1987; Weymark, 1981) have advanced a new representation, called the rank-dependent or the cumulative functional, that transforms cumulative rather than individual probabilities. This article presents a new version of prospect theory that incorporates the cumulative functional and extends the theory to uncertain as well to risky prospects with any number of outcomes. The resulting model, called cumulative prospect theory, combines some of the attractive features of both developments (see also Luce and Fishburn, 1991). It gives rise to different evaluations of gains and losses, which are not distinguished in the standard cumulative model, and it provides a unified treatment of both risk and uncertainty. 
To set the stage for the present development, we first list five major phenomena of choice, which violate the standard model and set a minimal challenge that must be met by any adequate descriptive theory of choice. All these findings have been confirmed in a number of experiments, with both real and hypothetical payoffs. Framing effects. The rational theory of choice assumes description invariance: equivalent formulations of a choice problem should give rise to the same preference order (Arrow, 1982). Contrary to this assumption, there is much evidence that variations in the framing of options (e.g., in terms of gains or losses) yield systematically different preferences (Tversky and Kahneman, 1986). Nonlinear preferences. According to the expectation principle, the utility of a risky prospect is linear in outcome probabilities. Allais's (1953) famous example challenged this principle by showing that the difference between probabilities of 0.99 and 1.00 has more impact on preferences than the difference between 0.10 and 0.11. More recent studies observed nonlinear preferences in choices that do not involve sure things (Camerer and Ho, 1991). Source dependence. People's willingness to bet on an uncertain event depends not only on the degree of uncertainty but also on its source. Ellsberg (1961) observed that people prefer to bet on an urn containing equal numbers of red and green balls, rather than on an urn that contains red and green balls in unknown proportions. More recent evidence indicates that people often prefer a bet on an event in their area of competence over a bet on a matched chance event, although the former probability is vague and the latter is clear (Heath and Tversky, 1991). Risk seeking. Risk aversion is generally assumed in economic analyses of decision under uncertainty. However, risk-seeking choices are consistently observed in two classes of decision problems. First, people often prefer a small probability of winning a large prize over the expected value of that prospect. Second, risk seeking is prevalent when people must choose between a sure loss and a substantial probability of a larger loss. Loss aversion. One of the basic phenomena of choice under both risk and uncertainty is that losses loom larger than gains (Kahneman and Tversky, 1984; Tversky and Kahneman, 1991). The observed asymmetry between gains and losses is far too extreme to be explained by income effects or by decreasing risk aversion. The present development explains loss aversion, risk seeking, and nonlinear preferences in terms of the value and the weighting functions. It incorporates a framing process, and it can accommodate source preferences. Additional phenomena that lie beyond the scope of the theory--and of its alternatives--are discussed later. The present article is organized as follows. Section 1.1 introduces the (two-part) cumulative functional; section 1.2 discusses relations to previous work; and section 1.3 describes the qualitative properties of the value and the weighting functions. These properties are tested in an extensive study of individual choice, described in section 2, which also addresses the question of monetary incentives. Implications and limitations of the theory are discussed in section 3. An axiomatic analysis of cumulative prospect theory is presented in the appendix.",
"title": ""
}
] |
[
{
"docid": "b428ee2a14b91fee7bb80058e782774d",
"text": "Recurrent connectionist networks are important because they can perform temporally extended tasks, giving them considerable power beyond the static mappings performed by the now-familiar multilayer feedforward networks. This ability to perform highly nonlinear dynamic mappings makes these networks particularly interesting to study and potentially quite useful in tasks which have an important temporal component not easily handled through the use of simple tapped delay lines. Some examples are tasks involving recognition or generation of sequential patterns and sensorimotor control. This report examines a number of learning procedures for adjusting the weights in recurrent networks in order to train such networks to produce desired temporal behaviors from input-output stream examples. The procedures are all based on the computation of the gradient of performance error with respect to network weights, and a number of strategies for computing the necessary gradient information are described. Included here are approaches which are familiar and have been rst described elsewhere, along with several novel approaches. One particular purpose of this report is to provide uniform and detailed descriptions and derivations of the various techniques in order to emphasize how they relate to one another. Another important contribution of this report is a detailed analysis of the computational requirements of the various approaches discussed.",
"title": ""
},
{
"docid": "19d554b2ef08382418979bf7ceb15baf",
"text": "In this paper, we address the cross-lingual topic modeling, which is an important technique that enables global enterprises to detect and compare topic trends across global markets. Previous works in cross-lingual topic modeling have proposed methods that utilize parallel or comparable corpus in constructing the polylingual topic model. However, parallel or comparable corpus in many cases are not available. In this research, we incorporate techniques of mapping cross-lingual word space and the topic modeling (LDA) and propose two methods: Translated Corpus with LDA (TC-LDA) and Post Match LDA (PM-LDA). The cross-lingual word space mapping allows us to compare words of different languages, and LDA enables us to group words into topics. Both TC-LDA and PM-LDA do not need parallel or comparable corpus and hence have more applicable domains. The effectiveness of both methods is evaluated using UM-Corpus and WS-353. Our evaluation results indicate that both methods are able to identify similar documents written in different language. In addition, PM-LDA is shown to achieve better performance than TC-LDA, especially when document length is short.",
"title": ""
},
{
"docid": "fc289c7a9f08ff3f5dd41ae683ab77b3",
"text": "Approximate Newton methods are standard optimization tools which aim to maintain the benefits of Newton’s method, such as a fast rate of convergence, while alleviating its drawbacks, such as computationally expensive calculation or estimation of the inverse Hessian. In this work we investigate approximate Newton methods for policy optimization in Markov decision processes (MDPs). We first analyse the structure of the Hessian of the total expected reward, which is a standard objective function for MDPs. We show that, like the gradient, the Hessian exhibits useful structure in the context of MDPs and we use this analysis to motivate two Gauss-Newton methods for MDPs. Like the Gauss-Newton method for non-linear least squares, these methods drop certain terms in the Hessian. The approximate Hessians possess desirable properties, such as negative definiteness, and we demonstrate several important performance guarantees including guaranteed ascent directions, invariance to affine transformation of the parameter space and convergence guarantees. We finally provide a unifying perspective of key policy search algorithms, demonstrating that our second Gauss-Newton algorithm is closely related to both the EMalgorithm and natural gradient ascent applied to MDPs, but performs significantly better in practice on a range of challenging domains.",
"title": ""
},
{
"docid": "846931a1e4c594626da26931110c02d6",
"text": "A large volume of research has been conducted in the cognitive radio (CR) area the last decade. However, the deployment of a commercial CR network is yet to emerge. A large portion of the existing literature does not build on real world scenarios, hence, neglecting various important aspects of commercial telecommunication networks. For instance, a lot of attention has been paid to spectrum sensing as the front line functionality that needs to be completed in an efficient and accurate manner to enable an opportunistic CR network architecture. While on the one hand it is necessary to detect the existence of spectrum holes, on the other hand, simply sensing (cooperatively or not) the energy emitted from a primary transmitter cannot enable correct dynamic spectrum access. For example, the presence of a primary transmitter's signal does not mean that CR network users cannot access the spectrum since there might not be any primary receiver in the vicinity. Despite the existing solutions to the DSA problem no robust, implementable scheme has emerged. The set of assumptions that these schemes are built upon do not always hold in realistic, wireless environments. Specific settings are assumed, which differ significantly from how existing telecommunication networks work. In this paper, we challenge the basic premises of the proposed schemes. We further argue that addressing the technical challenges we face in deploying robust CR networks can only be achieved if we radically change the way we design their basic functionalities. In support of our argument, we present a set of real-world scenarios, inspired by realistic settings in commercial telecommunications networks, namely TV and cellular, focusing on spectrum sensing as a basic and critical functionality in the deployment of CRs. We use these scenarios to show why existing DSA paradigms are not amenable to realistic deployment in complex wireless environments. The proposed study extends beyond cognitive radio networks, and further highlights the often existing gap between research and commercialization, paving the way to new thinking about how to accelerate commercialization and adoption of new networking technologies and services.",
"title": ""
},
{
"docid": "ede8a7a2ba75200dce83e17609ec4b5b",
"text": "We present a complimentary objective for training recurrent neural networks (RNN) with gating units that helps with regularization and interpretability of the trained model. Attention-based RNN models have shown success in many difficult sequence to sequence classification problems with long and short term dependencies, however these models are prone to overfitting. In this paper, we describe how to regularize these models through an L1 penalty on the activation of the gating units, and show that this technique reduces overfitting on a variety of tasks while also providing to us a human-interpretable visualization of the inputs used by the network. These tasks include sentiment analysis, paraphrase recognition, and question answering.",
"title": ""
},
{
"docid": "d401630481d725ae3d853b126710da31",
"text": "Combinatory Category Grammar (CCG) supertagging is a task to assign lexical categories to each word in a sentence. Almost all previous methods use fixed context window sizes to encode input tokens. However, it is obvious that different tags usually rely on different context window sizes. This motivates us to build a supertagger with a dynamic window approach, which can be treated as an attention mechanism on the local contexts. We find that applying dropout on the dynamic filters is superior to the regular dropout on word embeddings. We use this approach to demonstrate the state-ofthe-art CCG supertagging performance on the standard test set. Introduction Combinatory Category Grammar (CCG) provides a connection between syntax and semantics of natural language. The syntax can be specified by derivations of the lexicon based on the combinatory rules, and the semantics can be recovered from a set of predicate-argument relations. CCG provides an elegant solution for a wide range of semantic analysis, such as semantic parsing (Zettlemoyer and Collins 2007; Kwiatkowski et al. 2010; 2011; Artzi, Lee, and Zettlemoyer 2015), semantic representations (Bos et al. 2004; Bos 2005; 2008; Lewis and Steedman 2013), and semantic compositions, all of which heavily depend on the supertagging and parsing performance. All these motivate us to build a more accurate CCG supertagger. CCG supertagging is the task to predict the lexical categories for each word in a sentence. Existing algorithms on CCG supertagging range from point estimation (Clark and Curran 2007; Lewis and Steedman 2014) to sequential estimation (Xu, Auli, and Clark 2015; Lewis, Lee, and Zettlemoyer 2016; Vaswani et al. 2016), which predict the most probable supertag of the current word according to the context in a fixed size window. This fixed size window assumption is too strong to generalize. We argue this from two perspectives. One perspective comes from the inputs. For a particular word, the number of its categories may vary from 1 to 130 in CCGBank 02-21 (Hockenmaier and Steedman 2007). We ∗Corresponding author. Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. on a warm autumn day ...",
"title": ""
},
{
"docid": "662781648f5c9bbcb67dfd2b529a1347",
"text": "A compact broadband class-E power amplifier design is presented. High broadband power efficiency is observed from 2.0–2.5 GHz, where drain efficiency ≫74% and PAE ≫71%, when using 2nd-harmonic input tuning. The highest in-band efficiency performance is observed at 2.14 GHz from a 40V supply with peak drain-efficiency of 77.3% and peak PAE of 74.0% at 12W output power and 14dB gain. The best broadband output power performance is observed from 2.1–2.7 GHz without 2nd-harmonic input tuning, where the output power variation is within 1.5dB and power efficiency is between 53% and 66%.",
"title": ""
},
{
"docid": "28a4fd94ba02c70d6781ae38bf35ca5a",
"text": "Zero-shot learning (ZSL) highly depends on a good semantic embedding to connect the seen and unseen classes. Recently, distributed word embeddings (DWE) pre-trained from large text corpus have become a popular choice to draw such a connection. Compared with human defined attributes, DWEs are more scalable and easier to obtain. However, they are designed to reflect semantic similarity rather than visual similarity and thus using them in ZSL often leads to inferior performance. To overcome this visual-semantic discrepancy, this work proposes an objective function to re-align the distributed word embeddings with visual information by learning a neural network to map it into a new representation called visually aligned word embedding (VAWE). Thus the neighbourhood structure of VAWEs becomes similar to that in the visual domain. Note that in this work we do not design a ZSL method that projects the visual features and semantic embeddings onto a shared space but just impose a requirement on the structure of the mapped word embeddings. This strategy allows the learned VAWE to generalize to various ZSL methods and visual features. As evaluated via four state-of-the-art ZSL methods on four benchmark datasets, the VAWE exhibit consistent performance improvement.",
"title": ""
},
{
"docid": "49ca8739b6e28f0988b643fc97e7c6b1",
"text": "Stroke is a leading cause of severe physical disability, causing a range of impairments. Frequently stroke survivors are left with partial paralysis on one side of the body and movement can be severely restricted in the affected side’s hand and arm. We know that effective rehabilitation must be early, intensive and repetitive, which leads to the challenge of how to maintain motivation for people undergoing therapy. This paper discusses why games may be an effective way of addressing the problem of engagement in therapy and analyses which game design patterns may be important for rehabilitation. We present a number of serious games that our group has developed for upper limb rehabilitation. Results of an evaluation of the games are presented which indicate that they may be appropriate for people with stroke.",
"title": ""
},
{
"docid": "efcf84406a2218deeb4ca33cb8574172",
"text": "Cross-site scripting attacks represent one of the major security threats in today’s Web applications. Current approaches to mitigate cross-site scripting vulnerabilities rely on either server-based or client-based defense mechanisms. Although effective for many attacks, server-side protection mechanisms may leave the client vulnerable if the server is not well patched. On the other hand, client-based mechanisms may incur a significant overhead on the client system. In this work, we present a hybrid client-server solution that combines the benefits of both architectures. Our Proxy-based solution leverages the strengths of both anomaly detection and control flow analysis to provide accurate detection. We demonstrate the feasibility and accuracy of our approach through extended testing using real-world cross-site scripting exploits.",
"title": ""
},
{
"docid": "9631926db0052f89abe3b540789ed08e",
"text": "DC/DC converters to power future CPU cores mandate low-voltage power metal-oxide semiconductor field-effect transistors (MOSFETs) with ultra low on-resistance and gate charge. Conventional vertical trench MOSFETs cannot meet the challenge. In this paper, we introduce an alternative device solution, the large-area lateral power MOSFET with a unique metal interconnect scheme and a chip-scale package. We have designed and fabricated a family of lateral power MOSFETs including a sub-10 V class power MOSFET with a record-low R/sub DS(ON)/ of 1m/spl Omega/ at a gate voltage of 6V, approximately 50% of the lowest R/sub DS(ON)/ previously reported. The new device has a total gate charge Q/sub g/ of 22nC at 4.5V and a performance figures of merit of less than 30m/spl Omega/-nC, a 3/spl times/ improvement over the state of the art trench MOSFETs. This new MOSFET was used in a 100-W dc/dc converter as the synchronous rectifiers to achieve a 3.5-MHz pulse-width modulation switching frequency, 97%-99% efficiency, and a power density of 970W/in/sup 3/. The new lateral MOSEFT technology offers a viable solution for the next-generation, multimegahertz, high-density dc/dc converters for future CPU cores and many other high-performance power management applications.",
"title": ""
},
{
"docid": "b5e539774c408232797da1f35abcca90",
"text": "The discrete Laplace-Beltrami operator plays a prominent role in many Digital Geometry Processing applications ranging from denoising to parameterization, editing, and physical simulation. The standard discretization uses the cotangents of the angles in the immersed mesh which leads to a variety of numerical problems. We advocate use of the intrinsic Laplace-Beltrami operator. It satis- fies a local maximum principle, guaranteeing, e.g., that no flipped triangles can occur in parameterizations. It also leads to better conditioned linear systems. The intrinsic Laplace-Beltrami operator is based on an intrinsic Delaunay triangulation of the surface. We give an incremental algorithm to construct such triangulations together with an overlay structure which captures the relationship between the extrinsic and intrinsic triangulations. Using a variety of example meshes we demonstrate the numerical benefits of the intrinsic Laplace-Beltrami operator.",
"title": ""
},
{
"docid": "5b320c270439ec6d2db40a192b899c22",
"text": "This thesis studies methods to solve Visual Question-Answering (VQA) tasks with a Deep Learning framework. As a preliminary step, we explore Long Short-Term Memory (LSTM) networks used in Natural Language Processing (NLP) to tackle Question-Answering (text based). We then modify the previous model to accept an image as an input in addition to the question. For this purpose, we explore the VGG-16 and K-CNN convolutional neural networks to extract visual features from the image. These are merged with the word embedding or with a sentence embedding of the question to predict the answer. This work was successfully submitted to the Visual Question Answering Challenge 2016, where it achieved a 53,62% of accuracy in the test dataset. The developed software has followed the best programming practices and Python code style, providing a consistent baseline in Keras for different configurations. The source code and models are publicly available at https://github.com/imatge-upc/vqa-2016-cvprw.",
"title": ""
},
{
"docid": "36165cb8c6690863ed98c490ba889a9e",
"text": "This paper presents a new low-cost digital control solution that maximizes the AC/DC flyback power supply efficiency. This intelligent digital approach achieves the combined benefits of high performance, low cost and high reliability in a single controller. It introduces unique multiple PWM and PFM operational modes adaptively based on the power supply load changes. While the multi-mode PWM/PFM control significantly improves the light-load efficiency and thus the overall average efficiency, it does not bring compromise to other system performance, such as audible noise, voltage ripples or regulations. It also seamlessly integrated an improved quasi-resonant switching scheme that enables valley-mode turn on in every switching cycle without causing modification to the main PWM/PFM control schemes. A digital integrated circuit (IC) that implements this solution, namely iW1696, has been fabricated and introduced to the industry recently. In addition to outlining the approach, this paper provides experimental results obtained on a 3-W (5V/550mA) cell phone charger that is built with the iW1696.",
"title": ""
},
{
"docid": "a3e7a0cd6c0e79dee289c5b31c3dac76",
"text": "Silicone is one of the most widely used filler for facial cosmetic correction and soft tissue augmentation. Although initially it was considered to be a biologically inert material, many local and generalized adverse effects have been reported after silicone usage for cosmetic purposes. We present a previously healthy woman who developed progressive and persistent generalized livedo reticularis after cosmetic surgery for volume augmentation of buttocks. Histopathologic study demonstrated dermal presence of interstitial vacuoles and cystic spaces of different sizes between the collagen bundles, which corresponded to the silicone particles implanted years ago. These vacuoles were clustered around vascular spaces and surrounded by a few foamy macrophages. General examination and laboratory investigations failed to show any evidence of connective tissue disease or other systemic disorder. Therefore, we believe that the silicone implanted may have induced some kind of blood dermal perturbation resulting in the characteristic violet reticular discoloration of livedo reticularis.",
"title": ""
},
{
"docid": "e039567ec759d38da518c7f5eaba08f8",
"text": "With economic globalization and the rapid development of e-commerce, customer relationship management (CRM) has become the core of growth of the company. Data mining, as a powerful data analysis tool, extracts critical information supporting the company to make better decisions by processing a large number of data in commercial databases. This paper introduced the basic concepts of data mining and CRM, and described the process how to use data mining for CRM. At last, the paper described the applications of several main data mining methods in CRM, such as clustering, classification and association rule.",
"title": ""
},
{
"docid": "bb0ac3d88646bf94710a4452ddf50e51",
"text": "Everyday knowledge about living things, physical objects and the beliefs and desires of other people appears to be organized into sophisticated systems that are often called intuitive theories. Two long term goals for psychological research are to understand how these theories are mentally represented and how they are acquired. We argue that the language of thought hypothesis can help to address both questions. First, compositional languages can capture the content of intuitive theories. Second, any compositional language will generate an account of theory learning which predicts that theories with short descriptions tend to be preferred. We describe a computational framework that captures both ideas, and compare its predictions to behavioral data from a simple theory learning task. Any comprehensive account of human knowledge must acknowledge two principles. First, everyday knowledge is more than a list of isolated facts, and much of it appears to be organized into richly structured systems that are sometimes called intuitive theories. Even young children, for instance, have systematic beliefs about domains including folk physics, folk biology, and folk psychology [10]. Second, some aspects of these theories appear to be learned. Developmental psychologists have explored how intuitive theories emerge over the first decade of life, and at least some of these changes appear to result from learning. Although theory learning raises some challenging problems, two computational principles that may support this ability have been known for many years. First, a theory-learning system must be able to represent the content of any theory that it acquires. A learner that cannot represent a given system of concepts is clearly unable to learn this system from data. Second, there will always be many systems of concepts that are compatible with any given data set, and a learner must rely on some a priori ordering of the set of possible theories to decide which candidate is best [5, 9]. Loosely speaking, this ordering can be identified with a simplicity measure, or a prior distribution over the space of possible theories. There is at least one natural way to connect these two computational principles. Suppose that intuitive theories are represented in a “language of thought:” a language that allows complex concepts to be represented as combinations of simpler concepts [5]. A compositional language provides a straightforward way to construct sophisticated theories, but also provides a natural ordering over the resulting space of theories: the a priori probability of a theory can be identified with its length in this representation language [3, 7]. Combining this prior distribution with an engine for Bayesian inference leads immediately to a computational account of theory learning. There may be other ways to explain how people represent and acquire complex systems of knowledge, but it is striking that the “language of thought” hypothesis can address both questions. This paper describes a computational framework that helps to explain how theories are acquired, and that can be used to evaluate different proposals about the language of thought. Our approach builds on previous discussions of concept learning that have explored the link between compositional representations and inductive inference. 
Two recent approaches propose that concepts are represented in a form of propositional logic, and that the a priori plausibility of an inductive hypothesis is related to the length of its representation in this language [4, 6]. Our approach is similar in spirit, but is motivated in part by the need for languages richer than propositional logic. The framework we present is extremely general, and is compatible with virtually any representation language, including various forms of predicate logic. Methods for learning theories expressed in predicate logic have previously been explored in the field of Inductive Logic Programming, and we recently proposed a theory-learning model that is inspired by this tradition [7]. Our current approach is motivated by similar goals, but is better able to account for the discovery of abstract theoretical laws. The next section describes our computational framework and introduces the specific logical language that we will consider throughout. Our framework allows relatively sophisticated theories to be represented and learned, but we evaluate it here by applying it to a simple learning problem and comparing its predictions with human inductive inferences. A Bayesian approach to theory discovery Suppose that a learner observes some of the relationships that hold among a fixed, finite set of entities, and wishes to discover a theory that accounts for these data. Suppose, for instance, that the entities are thirteen adults from a remote tribe (a through m), and that the data specify that the spouse relation (S(·, ·)) is true of some pairs (Figure 1). One candidate theory states that S(·, ·) is a symmetric relation, that some of the individuals are male (M(·)), that marriages are permitted only between males and non-males, and that males may take multiple spouses but non-males may have only one spouse (Figure 1b). Other theories are possible, including the theory which states only that S(·, ·) is symmetric. Accounts of theory learning should distinguish between at least three kinds of entities: theories, models, and data. A theory is a set of statements that captures constraints on possible configurations of the world. For instance, the theory in Figure 1b rules out configurations where the spouse relation is asymmetric. A model of a theory specifies the extension",
"title": ""
},
{
"docid": "5068191083a9a14751b88793dd96e7d3",
"text": "The electric motor is the main component in an electrical vehicle. Its power density is directly influenced by the winding. For this reason, it is relevant to investigate the influences of coil production on the quality of the stator. The examined stator in this article is wound with the multi-wire needle winding technique. With this method, the placing of the wires can be precisely guided leading to small winding heads. To gain a high winding quality with small winding resistances, the control of the tensile force during the winding process is essential. The influence of the tensile force on the winding resistance during the winding process with the multiple needle winding technique will be presented here. To control the tensile force during the winding process, the stress on the wire during the winding process needs to be examined first. Thus a model will be presented to investigate the tensile force which realizes a coupling between the multibody dynamics simulation and the finite element methods with the software COMSOL Multiphysics®. With the results of the simulation, a new winding-trajectory based wire tension control can be implemented. Therefore, new strategies to control the tensile force during the process using a CAD/CAM approach will be presented in this paper.",
"title": ""
},
{
"docid": "774938c175781ed644327db1dae9d1d4",
"text": "It is widely accepted that sizing or predicting the volumes of various kinds of software deliverable items is one of the first and most dominant aspects of software cost estimating. Most of the cost estimation model or techniques usually assume that software size or structural complexity is the integral factor that influences software development effort. Although sizing and complexity measure is a very critical due to the need of reliable size estimates in the utilization of existing software project cost estimation models and complex problem for software cost estimating, advances in sizing technology over the past 30 years have been impressive. This paper attempts to review the 12 object-oriented software metrics proposed in 90s’ by Chidamber, Kemerer and Li.",
"title": ""
}
] |
scidocsrr
|
4cc3c9a39d8ff4e4b6c746b82af187d9
|
Solving real-world cutting stock-problems in the paper industry: Mathematical approaches, experience and challenges
|
[
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
[
{
"docid": "f0a3a1855103ebac224e1351d4fc24df",
"text": "BACKGROUND\nThere have been many randomised trials of adjuvant tamoxifen among women with early breast cancer, and an updated overview of their results is presented.\n\n\nMETHODS\nIn 1995, information was sought on each woman in any randomised trial that began before 1990 of adjuvant tamoxifen versus no tamoxifen before recurrence. Information was obtained and analysed centrally on each of 37000 women in 55 such trials, comprising about 87% of the worldwide evidence. Compared with the previous such overview, this approximately doubles the amount of evidence from trials of about 5 years of tamoxifen and, taking all trials together, on events occurring more than 5 years after randomisation.\n\n\nFINDINGS\nNearly 8000 of the women had a low, or zero, level of the oestrogen-receptor protein (ER) measured in their primary tumour. Among them, the overall effects of tamoxifen appeared to be small, and subsequent analyses of recurrence and total mortality are restricted to the remaining women (18000 with ER-positive tumours, plus nearly 12000 more with untested tumours, of which an estimated 8000 would have been ER-positive). For trials of 1 year, 2 years, and about 5 years of adjuvant tamoxifen, the proportional recurrence reductions produced among these 30000 women during about 10 years of follow-up were 21% (SD 3), 29% (SD 2), and 47% (SD 3), respectively, with a highly significant trend towards greater effect with longer treatment (chi2(1)=52.0, 2p<0.00001). The corresponding proportional mortality reductions were 12% (SD 3), 17% (SD 3), and 26% (SD 4), respectively, and again the test for trend was significant (chi2(1) = 8.8, 2p=0.003). The absolute improvement in recurrence was greater during the first 5 years, whereas the improvement in survival grew steadily larger throughout the first 10 years. The proportional mortality reductions were similar for women with node-positive and node-negative disease, but the absolute mortality reductions were greater in node-positive women. In the trials of about 5 years of adjuvant tamoxifen the absolute improvements in 10-year survival were 10.9% (SD 2.5) for node-positive (61.4% vs 50.5% survival, 2p<0.00001) and 5.6% (SD 1.3) for node-negative (78.9% vs 73.3% survival, 2p<0.00001). These benefits appeared to be largely irrespective of age, menopausal status, daily tamoxifen dose (which was generally 20 mg), and of whether chemotherapy had been given to both groups. In terms of other outcomes among all women studied (ie, including those with \"ER-poor\" tumours), the proportional reductions in contralateral breast cancer were 13% (SD 13), 26% (SD 9), and 47% (SD 9) in the trials of 1, 2, or about 5 years of adjuvant tamoxifen. The incidence of endometrial cancer was approximately doubled in trials of 1 or 2 years of tamoxifen and approximately quadrupled in trials of 5 years of tamoxifen (although the number of cases was small and these ratios were not significantly different from each other). The absolute decrease in contralateral breast cancer was about twice as large as the absolute increase in the incidence of endometrial cancer. Tamoxifen had no apparent effect on the incidence of colorectal cancer or, after exclusion of deaths from breast or endometrial cancer, on any of the other main categories of cause of death (total nearly 2000 such deaths; overall relative risk 0.99 [SD 0.05]).\n\n\nINTERPRETATION\nFor women with tumours that have been reliably shown to be ER-negative, adjuvant tamoxifen remains a matter for research. 
However, some years of adjuvant tamoxifen treatment substantially improves the 10-year survival of women with ER-positive tumours and of women whose tumours are of unknown ER status, with the proportional reductions in breast cancer recurrence and in mortality appearing to be largely unaffected by other patient characteristics or treatments.",
"title": ""
},
{
"docid": "3a32ac999ea003d992f3dd7d7d41d601",
"text": "Collectively, disruptive technologies and market forces have resulted in a significant shift in the structure of many industries, presenting a serious challenge to near-term profitability and long-term viability. Cloud capabilities continue to promise payoffs in reduced costs and increased efficiencies, but in this article, we show they can provide business model transformation opportunities as well. To date, the focus of much research on cloud computing and cloud services has been on understanding the technology challenges, business opportunities or applications for particular domains.3 Cloud services, however, also offer great new opportunities for small and mediumsized enterprises (SMEs) that lack large IT shops or internal capabilities, as well as larger firms. An early analysis of four SMEs4 found that cloud services can offer both economic and business operational value previously denied them. This distinction is important because it shows that cloud services can provide value beyond simple cost avoidance or reduction",
"title": ""
},
{
"docid": "ebaedd43e151f13d1d4d779284af389d",
"text": "This paper presents the state of art techniques in recommender systems (RS). The various techniques are diagrammatically illustrated which on one hand helps a naïve researcher in this field to accommodate the on-going researches and establish a strong base, on the other hand it focuses on different categories of the recommender systems with deep technical discussions. The review studies on RS are highlighted which helps in understanding the previous review works and their directions. 8 different main categories of recommender techniques and 19 sub categories have been identified and stated. Further, soft computing approach for recommendation is emphasized which have not been well studied earlier. The major problems of the existing area is reviewed and presented from different perspectives. However, solutions to these issues are rarely discussed in the previous works, in this study future direction for possible solutions are also addressed.",
"title": ""
},
{
"docid": "1f94d244dd24bd9261613098c994cf9d",
"text": "With the development and introduction of smart metering, the energy information for costumers will change from infrequent manual meter readings to fine-grained energy consumption data. On the one hand these fine-grained measurements will lead to an improvement in costumers' energy habits, but on the other hand the fined-grained data produces information about a household and also households' inhabitants, which are the basis for many future privacy issues. To ensure household privacy and smart meter information owned by the household inhabitants, load hiding techniques were introduced to obfuscate the load demand visible at the household energy meter. In this work, a state-of-the-art battery-based load hiding (BLH) technique, which uses a controllable battery to disguise the power consumption and a novel load hiding technique called load-based load hiding (LLH) are presented. An LLH system uses an controllable household appliance to obfuscate the household's power demand. We evaluate and compare both load hiding techniques on real household data and show that both techniques can strengthen household privacy but only LLH can increase appliance level privacy.",
"title": ""
},
{
"docid": "7e42516a73e8e5f80d009d0ff305156c",
"text": "This article provides a review of evolutionary theory and empirical research on mate choices in nonhuman species and uses it as a frame for understanding the how and why of human mate choices. The basic principle is that the preferred mate choices and attendant social cognitions and behaviors of both women and men, and those of other species, have evolved to focus on and exploit the reproductive potential and reproductive investment of members of the opposite sex. Reproductive potential is defined as the genetic, material, and/or social resources an individual can invest in offspring, and reproductive investment is the actual use of these resources to enhance the physical and social well- being of offspring. Similarities and differences in the mate preferences and choices of women and men are reviewed and can be understood in terms of similarities and differences in the form of reproductive potential that women and men have to offer and their tendency to use this potential for the well-being of children.",
"title": ""
},
{
"docid": "caea6d9ec4fbaebafc894167cfb8a3d6",
"text": "Although the positive effects of different kinds of physical activity (PA) on cognitive functioning have already been demonstrated in a variety of studies, the role of cognitive engagement in promoting children's executive functions is still unclear. The aim of the current study was therefore to investigate the effects of two qualitatively different chronic PA interventions on executive functions in primary school children. Children (N = 181) aged between 10 and 12 years were assigned to either a 6-week physical education program with a high level of physical exertion and high cognitive engagement (team games), a physical education program with high physical exertion but low cognitive engagement (aerobic exercise), or to a physical education program with both low physical exertion and low cognitive engagement (control condition). Executive functions (updating, inhibition, shifting) and aerobic fitness (multistage 20-m shuttle run test) were measured before and after the respective condition. Results revealed that both interventions (team games and aerobic exercise) have a positive impact on children's aerobic fitness (4-5% increase in estimated VO2max). Importantly, an improvement in shifting performance was found only in the team games and not in the aerobic exercise or control condition. Thus, the inclusion of cognitive engagement in PA seems to be the most promising type of chronic intervention to enhance executive functions in children, providing further evidence for the importance of the qualitative aspects of PA.",
"title": ""
},
{
"docid": "461fbb108d5589621a7ff15fcc306153",
"text": "Current methods for detector gain calibration require acquisition of tens of special calibration images. Here we propose a method that obtains the gain from the actual image for which the photon count is desired by quantifying out-of-band information. We show on simulation and experimental data that our much simpler procedure, which can be retroactively applied to any image, is comparable in precision to traditional gain calibration procedures. Optical recordings consist of detected photons, which typically arrive in an uncorrelated manner at the detector. Therefore the recorded intensity follows a Poisson distribution, where the variance of the photon count is equal to its mean. In many applications images must be further processed based on these statistics and it is therefore of great importance to be able to relate measured values S in analogue-to-digital-units (ADU) to the detected (effective) photon numbers N. The relation between the measured signal S in ADU and the photon count N is given by the linear gain g as S = gN. Only after conversion to photons is it possible to establish the expected precision of intensities in the image, which is essential for single particle localization, maximum-likelihood image deconvolution or denoising [Ober2004, Smith2010, Afanasyev2015, Strohl2015]. The photon count must be established via gain calibration, as most image capturing devices do not directly report the number of detected photons, but a value proportional to the photoelectron charge produced in a photomultiplier tube or collected in a camera pixel. For this calibration typically tens of calibration images are recorded and the linear relationship between mean intensity and its variance is exploited [vanVliet1998]. In current microscopy practise a detector calibration to photon counts is often not done but cannot be performed in retrospect. It thus would be extremely helpful, if that can be determined from analysing the acquisition itself – a single image. A number of algorithms have been published for Gaussian type noise [Donoho1995, Immerkaer1996] and Poissonian type noise [Foi2008, Colom2014, Azzari2014, Pyatykh2014]. However, all these routines use assumed image properties to extract the information rather than just the properties of the acquisition process as in our presented algorithm. This has major implications for their performance on microscopy images governed by photon statistics (see Supplementary Information for a comparison with implementations from Pyatykh et al. [Pyatykh2014] and Azzari et al. [Azzari2014] which performed more than an order of magnitude worse than our method). Some devices, such as avalanche photodiodes, photomultiplier tubes (PMTs) or emCCD cameras can be operated in a single photon counting mode [Chao2013] where the gain is known to be one. In many cases, however, the gain is unknown and/or a device setting. For example, the gain of PMTs can be continuously controlled by changing the voltage between the dynodes and the gain of cameras may deviate from the value stated in the manual. To complicate matters, devices not running in photon counting mode, use an offset Ozero to avoid negative readout values, i.e. the device will yield a non-zero mean value even if no light reaches the detector, S = gN + Ozero. This offset value Ozero is sometimes changing over time (“offset drift”). Traditionally, a series of about 20 dark images and 20 images of a sample with smoothly changing intensity are recorded [vanVliet1998]. 
From these images the gain is calculated as the linear slope of the variance over these images versus the mean intensity, g = var(S)/mean(S) (for details see Supplementary Information). In Figure 1 we show a typical calibration curve by fitting (blue line) the experimentally obtained data (blue crosses). The obtained gain does not necessarily correspond to the real gain per detected photon, since it includes multiplicative noise sources such as multiplicative amplification noise, gain fluctuations or the excess noise of emCCDs and PMTs. In addition there is also readout noise, which includes thermal noise build-up and clock-induced charge. The unknown readout noise and offset may seem at first glance disadvantageous regarding an automatic quantification. However, as shown below, these details do not matter for the purpose of predicting the correct noise from a measured signal. Let us first assume that we know the offset Ozero and any potential readout noise variance Vread. The region in Fourier space above the cut-off frequency of the support of the optical transfer function only contains noise in an image [Liu2017], where both Poisson and Gaussian noise are evenly distributed over all frequencies [Chanran1990, Liu2017]. By measuring the spectral power density of the noise VHF in this high-frequency out-of-band region and accounting for the area fraction f of this region in Fourier space, we can estimate the total variance Vall = VHF/f of all detected photons. The gain g is then obtained as (1) g = (Vall - Vread) / sum(S - Ozero), where we relate the photon-noise-only variance Vall - Vread to the sum of the offset-corrected signal over all pixels in the image (see Online Methods). The device manufacturers usually provide the readout noise, leaving only the offset and gain to be determined from the image itself in practice. To also estimate both the offset and the gain, we need more information from the linear mean-variance dependence than given by equation (1). We achieve this by tiling the input image, e.g. into 3×3 sub-images, and processing each of these sub-images to generate one data point in a mean-variance plot. From these nine data points we obtain the axis offset (Ono-noise). We then perform the gain estimation (1) on the whole image after offset correction (see Online Methods and Supplementary Information). As seen from Figure 1, the linear regression of the mean-variance curve determines the axis offset ADU value Ono-noise at which zero noise would be expected. Yet we cannot simultaneously determine both offset Ozero and readout noise Vread. If either of them is known a priori, the other can be calculated: Vread = g(Ozero - Ono-noise), which is, however, not needed to predict the correct noise level for each brightness level based on the automatically determined value Ono-noise. To test the single-image gain calibration, simulated data was generated for a range of gains (0.1, 1, 10) with a constant offset (100 ADU), a range of readout noise (1, 2, 10 photon RMS) and maximum expected photon counts per pixel (10, 100, ..., 10). Offset and gain were both determined from band-limited single images of two different objects (resolution target and Einstein) without significant relative errors in the offset or gain (less than 2% at more than 10 expected maximum photon counts) using the proposed method (see Supplementary Figures S1-S3).
Figure 1 quantitatively compares the intensity dependent variance predicted by applying our method individually to many single experimental in-focus images (shaded green area) with the classical method evaluating a whole series of calibration images (blue line). Note that our single-image based noise determination does not require any prior knowledge about offset or readout noise. Figure 2 shows a few typical example images acquired with various detectors together with the gain and offset determined from each of them and the calibration values obtained from the standard procedure for comparison. We evaluated the general applicability of our method on datasets from different detectors and modes of acquisition (CCD, emCCD, sCMOS, PMT, GAsP and Hybrid Detector). Figure 3 quantitatively compares experimental single-image calibration with classical calibration. 20 individual images were each submitted to our algorithm and the determined offset and gain were compared to the classical method. The variance of a separately acquired dark image was submitted to the algorithm as a readout noise estimate, but alternatively the readout noise specification from the handbook or a measured offset at zero intensity could be used. As seen from Figure 3, the single-image-based gain calibration as proposed performs nearly as well as the standard gain calibration using 20 images. The relative gain error stays generally well below 10% and for cameras below 2%. The 8.5% bias for the HyD photon counting system is unusually high, and we were unable to find a clear explanation for this deviation from the classical calibration. Using only lower frequencies to estimate VHF (kt = 0.4) resulted in a much smaller error of 2.5% in the single-photon counting case, suggesting that dead-time effects of the detector might have affected the high spatial frequencies. Simulations as well as experiments show a good agreement of the determined gain with the ground truth or gold standard calibration, respectively. The bias of the gain determined by the single-image routine stayed below 4% (except for HyD). For intensity quantification any potential offset must be subtracted before conversion to photon counts. Our method estimates the photon count very precisely over a large range of parameters (relative error below 2% in simulations). Our method could be applied to many different microscopy modes (widefield transmission, fluorescence, and confocal) and detector types (CCD, emCCD, sCMOS, PMT, GAsP and HyD photon counting), because we only require the existence of an out-of-band region which purely contains frequency-independent noise. This is usually true if the image is sampled correctly. As discussed in the Supplementary Information, the cut-off limit of our algorithm can in practice be set below the transfer limit, and single-image calibration can even outperform the standard calibration if molecular blinking perturbs the measurement. In summary, we showed that single-image calibration is a simple and versatile tool. We expect our work to lead to a better ability to quantify intensities in general.",
"title": ""
},
{
"docid": "8fac18c1285875aee8e7a366555a4ca3",
"text": "Automatic speech recognition (ASR) has been under the scrutiny of researchers for many years. Speech Recognition System is the ability to listen what we speak, interpreter and perform actions according to spoken information. After so many detailed study and optimization of ASR and various techniques of features extraction, accuracy of the system is still a big challenge. The selection of feature extraction techniques is completely based on the area of study. In this paper, a detailed theory about features extraction techniques like LPC and LPCC is examined. The goal of this paper is to study the comparative analysis of features extraction techniques like LPC and LPCC.",
"title": ""
},
{
"docid": "fa246c15531c6426cccaf4d216dc8375",
"text": "Proboscis lateralis is a rare craniofacial malformation characterized by absence of nasal cavity on one side with a trunk-like nasal appendage protruding from superomedial portion of the ipsilateral orbit. High-resolution computed tomography and magnetic resonance imaging are extremely useful in evaluating this congenital condition and the wide spectrum of associated anomalies occurring in the surrounding anatomical regions and brain. We present a case of proboscis lateralis in a 2-year-old girl with associated ipsilateral sinonasal aplasia, orbital cyst, absent olfactory bulb and olfactory tract. Absence of ipsilateral olfactory pathway in this rare disorder has been documented on high-resolution computed tomography and magnetic resonance imaging by us for the first time in English medical literature.",
"title": ""
},
{
"docid": "db7edbb1a255e9de8486abbf466f9583",
"text": "Nowadays, adopting an optimized irrigation system has become a necessity due to the lack of the world water resource. The system has a distributed wireless network of soil-moisture and temperature sensors. This project focuses on a smart irrigation system which is cost effective. As the technology is growing and changing rapidly, Wireless sensing Network (WSN) helps to upgrade the technology where automation is playing important role in human life. Automation allows us to control various appliances automatically. DC motor based vehicle is designed for irrigation purpose. The objectives of this paper were to control the water supply to each plant automatically depending on values of temperature and soil moisture sensors. Mechanism is done such that soil moisture sensor electrodes are inserted in front of each soil. It also monitors the plant growth using various parameters like height and width. Android app.",
"title": ""
},
{
"docid": "cce5d75bfcfc22f7af08f6b0b599d472",
"text": "In order to determine if exposure to carcinogens in fire smoke increases the risk of cancer, we examined the incidence of cancer in a cohort of 2,447 male firefighters in Seattle and Tacoma, (Washington, USA). The study population was followed for 16 years (1974–89) and the incidence of cancer, ascertained using a population-based tumor registry, was compared with local rates and with the incidence among 1,878 policemen from the same cities. The risk of cancer among firefighters was found to be similar to both the police and the general male population for most common sites. An elevated risk of prostate cancer was observed relative to the general population (standardized incidence ratio [SIR]=1.4, 95 percent confidence interval [CI]=1.1–1.7) but was less elevated compared with rates in policement (incidence density ratio [IDR]=1.1, CI=0.7–1.8) and was not related to duration of exposure. The risk of colon cancer, although only slightly elevated relative to the general population (SIR=1.1, CI=0.7–1.6) and the police (IDR=1.3, CI=0.6–3.0), appeared to increase with duration of employment. Although the relationship between firefighting and colon cancer is consistent with some previous studies, it is based on small numbers and may be due to chance. While this study did not find strong evidence for an excess risk of cancer, the presence of carcinogens in the firefighting environment warrants periodic re-evaluation of cancer incidence in this population and the continued use of protective equipment.",
"title": ""
},
{
"docid": "e28336bccbb1414dc9a92404f08b6b6f",
"text": "YouTube has become one of the largest websites on the Internet. Among its many genres, both professional and amateur science communicators compete for audience attention. This article provides the first overview of science communication on YouTube and examines content factors that affect the popularity of science communication videos on the site. A content analysis of 390 videos from 39 YouTube channels was conducted. Although professionally generated content is superior in number, user-generated content was significantly more popular. Furthermore, videos that had consistent science communicators were more popular than those without a regular communicator. This study represents an important first step to understand content factors, which increases the channel and video popularity of science communication on YouTube.",
"title": ""
},
{
"docid": "4b544bb34c55e663cdc5f0a05201e595",
"text": "BACKGROUND\nThis study seeks to examine a multidimensional model of student motivation and engagement using within- and between-network construct validation approaches.\n\n\nAIMS\nThe study tests the first- and higher-order factor structure of the motivation and engagement wheel and its corresponding measurement tool, the Motivation and Engagement Scale - High School (MES-HS; formerly the Student Motivation and Engagement Scale).\n\n\nSAMPLE\nThe study draws upon data from 12,237 high school students from 38 Australian high schools.\n\n\nMETHODS\nThe hypothesized 11-factor first-order structure and the four-factor higher-order structure, their relationship with a set of between-network measures (class participation, enjoyment of school, educational aspirations), factor invariance across gender and year-level, and the effects of age and gender are examined using confirmatory factor analysis and structural equation modelling.\n\n\nRESULTS\nIn terms of within-network validity, (1) the data confirm that the 11-factor and higher-order factor models of motivation and engagement are good fitting and (2) multigroup tests showed invariance across gender and year levels. In terms of between-network validity, (3) correlations with enjoyment of school, class participation and educational aspirations are in the hypothesized directions, and (4) girls reflect a more adaptive pattern of motivation and engagement, and year-level findings broadly confirm hypotheses that middle high school students seem to reflect a less adaptive pattern of motivation and engagement.\n\n\nCONCLUSION\nThe first- and higher-order structures hold direct implications for educational practice and directions for future motivation and engagement research.",
"title": ""
},
{
"docid": "c1ddf32bfa71f32e51daf31e077a87cd",
"text": "There is a step of significant difficulty experienced by brain-computer interface (BCI) users when going from the calibration recording to the feedback application. This effect has been previously studied and a supervised adaptation solution has been proposed. In this paper, we suggest a simple unsupervised adaptation method of the linear discriminant analysis (LDA) classifier that effectively solves this problem by counteracting the harmful effect of nonclass-related nonstationarities in electroencephalography (EEG) during BCI sessions performed with motor imagery tasks. For this, we first introduce three types of adaptation procedures and investigate them in an offline study with 19 datasets. Then, we select one of the proposed methods and analyze it further. The chosen classifier is offline tested in data from 80 healthy users and four high spinal cord injury patients. Finally, for the first time in BCI literature, we apply this unsupervised classifier in online experiments. Additionally, we show that its performance is significantly better than the state-of-the-art supervised approach.",
"title": ""
},
{
"docid": "40a87654ac33c46f948204fd5c7ef4c1",
"text": "We introduce a novel scheme to train binary convolutional neural networks (CNNs) – CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduce memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach the comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations.",
"title": ""
},
{
"docid": "5c96222feacb0454d353dcaa1f70fb83",
"text": "Geographically dispersed teams are rarely 100% dispersed. However, by focusing on teams that are either fully dispersed or fully co-located, team research to date has lived on the ends of a spectrum at which relatively few teams may actually work. In this paper, we develop a more robust view of geographic dispersion in teams. Specifically, we focus on the spatialtemporal distances among team members and the configuration of team members across sites (independent of the spatial and temporal distances separating those sites). To better understand the nature of dispersion, we develop a series of five new measures and explore their relationships with communication frequency data from a sample of 182 teams (of varying degrees of dispersion) from a Fortune 500 telecommunications firm. We conclude with recommendations regarding the use of different measures and important questions that they could help address. Geographic Dispersion in Teams 1",
"title": ""
},
{
"docid": "750abc9e51aed62305187d7103e3f267",
"text": "This design paper presents new guidance for creating map legends in a dynamic environment. Our contribution is a set ofguidelines for legend design in a visualization context and a series of illustrative themes through which they may be expressed. Theseare demonstrated in an applications context through interactive software prototypes. The guidelines are derived from cartographicliterature and in liaison with EDINA who provide digital mapping services for UK tertiary education. They enhance approaches tolegend design that have evolved for static media with visualization by considering: selection, layout, symbols, position, dynamismand design and process. Broad visualization legend themes include: The Ground Truth Legend, The Legend as Statistical Graphicand The Map is the Legend. Together, these concepts enable us to augment legends with dynamic properties that address specificneeds, rethink their nature and role and contribute to a wider re-evaluation of maps as artifacts of usage rather than statements offact. EDINA has acquired funding to enhance their clients with visualization legends that use these concepts as a consequence ofthis work. The guidance applies to the design of a wide range of legends and keys used in cartography and information visualization.",
"title": ""
},
{
"docid": "5433a8e449bf4bf9d939e645e171f7e5",
"text": "Software Testing (ST) processes attempt to verify and validate the capability of a software system to meet its required attributes and functionality. As software systems become more complex, the need for automated software testing methods emerges. Machine Learning (ML) techniques have shown to be quite useful for this automation process. Various works have been presented in the junction of ML and ST areas. The lack of general guidelines for applying appropriate learning methods for software testing purposes is our major motivation in this current paper. In this paper, we introduce a classification framework which can help to systematically review research work in the ML and ST domains. The proposed framework dimensions are defined using major characteristics of existing software testing and machine learning methods. Our framework can be used to effectively construct a concrete set of guidelines for choosing the most appropriate learning method and applying it to a distinct stage of the software testing life-cycle for automation purposes.",
"title": ""
},
{
"docid": "4a84fabb0b4edefc1850940ed2081f47",
"text": "Given a large overcomplete dictionary of basis vectors, the goal is to simultaneously represent L>1 signal vectors using coefficient expansions marked by a common sparsity profile. This generalizes the standard sparse representation problem to the case where multiple responses exist that were putatively generated by the same small subset of features. Ideally, the associated sparse generating weights should be recovered, which can have physical significance in many applications (e.g., source localization). The generic solution to this problem is intractable and, therefore, approximate procedures are sought. Based on the concept of automatic relevance determination, this paper uses an empirical Bayesian prior to estimate a convenient posterior distribution over candidate basis vectors. This particular approximation enforces a common sparsity profile and consistently places its prominent posterior mass on the appropriate region of weight-space necessary for simultaneous sparse recovery. The resultant algorithm is then compared with multiple response extensions of matching pursuit, basis pursuit, FOCUSS, and Jeffreys prior-based Bayesian methods, finding that it often outperforms the others. Additional motivation for this particular choice of cost function is also provided, including the analysis of global and local minima and a variational derivation that highlights the similarities and differences between the proposed algorithm and previous approaches.",
"title": ""
},
{
"docid": "1720517b913ce3974ab92239ff8a177e",
"text": "Honeypot is a closely monitored computer resource that emulates behaviors of production host within a network in order to lure and attract the attackers. The workability and effectiveness of a deployed honeypot depends on its technical configuration. Since honeypot is a resource that is intentionally made attractive to the attackers, it is crucial to make it intelligent and self-manageable. This research reviews at artificial intelligence techniques such as expert system and case-based reasoning, in order to build an intelligent honeypot.",
"title": ""
}
] |
scidocsrr
|
9ab20062b846a737c67c08bed9fe8e3c
|
Semantic Word Clusters Using Signed Spectral Clustering
|
[
{
"docid": "37f0bea4c677cfb7b931ab174d4d20c7",
"text": "A persistent problem of psychology has been how to deal conceptually with patterns of interdependent properties. This problem has been central, of course, in the theoretical treatment by Gestalt psychologists of phenomenal or neural configurations or fields (12, 13, 15). It has also been of concern to social psychologists and sociologists who attempt to employ concepts referring to social systems (18). Heider (19), reflecting the general field-theoretical approach, has considered certain aspects of cognitive fields which contain perceived people and impersonal objects or events. His analysis focuses upon what he calls the P-O-X unit of a cognitive field, consisting of P (one person), 0 (another person), and X (an impersonal entity). Each relation among the parts of the unit is conceived as interdependent with each other relation. Thus, for example, if P has a relation of affection for 0 and if 0 is seen as responsible for X, then there will be a tendency for P to like or approve of X. If the nature of X is such that it would \"normally\" be evaluated as bad, the whole P-O-X unit is placed in a state of imbalance, and pressures",
"title": ""
},
{
"docid": "d46af3854769569a631fab2c3c7fa8f3",
"text": "Existing vector space models typically map synonyms and antonyms to similar word vectors, and thus fail to represent antonymy. We introduce a new vector space representation where antonyms lie on opposite sides of a sphere: in the word vector space, synonyms have cosine similarities close to one, while antonyms are close to minus one. We derive this representation with the aid of a thesaurus and latent semantic analysis (LSA). Each entry in the thesaurus – a word sense along with its synonyms and antonyms – is treated as a “document,” and the resulting document collection is subjected to LSA. The key contribution of this work is to show how to assign signs to the entries in the co-occurrence matrix on which LSA operates, so as to induce a subspace with the desired property. We evaluate this procedure with the Graduate Record Examination questions of (Mohammed et al., 2008) and find that the method improves on the results of that study. Further improvements result from refining the subspace representation with discriminative training, and augmenting the training data with general newspaper text. Altogether, we improve on the best previous results by 11 points absolute in F measure.",
"title": ""
}
] |
[
{
"docid": "0f11d0d1047a79ee63896f382ae03078",
"text": "Much of the visual cortex is organized into visual field maps: nearby neurons have receptive fields at nearby locations in the image. Mammalian species generally have multiple visual field maps with each species having similar, but not identical, maps. The introduction of functional magnetic resonance imaging made it possible to identify visual field maps in human cortex, including several near (1) medial occipital (V1,V2,V3), (2) lateral occipital (LO-1,LO-2, hMT+), (3) ventral occipital (hV4, VO-1, VO-2), (4) dorsal occipital (V3A, V3B), and (5) posterior parietal cortex (IPS-0 to IPS-4). Evidence is accumulating for additional maps, including some in the frontal lobe. Cortical maps are arranged into clusters in which several maps have parallel eccentricity representations, while the angular representations within a cluster alternate in visual field sign. Visual field maps have been linked to functional and perceptual properties of the visual system at various spatial scales, ranging from the level of individual maps to map clusters to dorsal-ventral streams. We survey recent measurements of human visual field maps, describe hypotheses about the function and relationships between maps, and consider methods to improve map measurements and characterize the response properties of neurons comprising these maps.",
"title": ""
},
{
"docid": "becda89fbb882f4da57a82441643bb99",
"text": "During the nonbreeding season, adult Anna and black-chinned hummingbirds (Calypte anna and Archilochus alexandri) have lower defense costs and more exclusive territories than juveniles. Adult C. anna are victorious over juveniles in aggressive encounters, and tend to monopolize the most temporally predictable resources. Juveniles are more successful than adults at stealing food from territories (the primary alternative to territoriality), presumably because juveniles are less brightly colored. Juveniles have lighter wing disc loading than adults, and consequently should have lower rates of energy expenditure during flight. Reduced flight expenditures may be more important for juveniles because their foraging strategy requires large amounts of flight time. These results support the contention of the asymmetry hypothesis that dominance can result from a contested resource being more valuable to one contestant than to the other. Among juveniles, defence costs are also negatively correlated with age and coloration; amount of conspicucus coloration is negatively correlated with the number of bill striations, an inverse measure of age.",
"title": ""
},
{
"docid": "eaf1c419853052202cb90246e48a3697",
"text": "The objective of this document is to promote the use of dynamic daylight performance measures for sustainable building design. The paper initially explores the shortcomings of conventional, static daylight performance metrics which concentrate on individual sky conditions, such as the common daylight factor. It then provides a review of previously suggested dynamic daylight performance metrics, discussing the capability of these metrics to lead to superior daylighting designs and their accessibility to nonsimulation experts. Several example offices are examined to demonstrate the benefit of basing design decisions on dynamic performance metrics as opposed to the daylight factor. Keywords—–daylighting, dynamic, metrics, sustainable buildings",
"title": ""
},
{
"docid": "7046221ad9045cb464f65666c7d1a44e",
"text": "OBJECTIVES\nWe analyzed differences in pediatric elevated blood lead level incidence before and after Flint, Michigan, introduced a more corrosive water source into an aging water system without adequate corrosion control.\n\n\nMETHODS\nWe reviewed blood lead levels for children younger than 5 years before (2013) and after (2015) water source change in Greater Flint, Michigan. We assessed the percentage of elevated blood lead levels in both time periods, and identified geographical locations through spatial analysis.\n\n\nRESULTS\nIncidence of elevated blood lead levels increased from 2.4% to 4.9% (P < .05) after water source change, and neighborhoods with the highest water lead levels experienced a 6.6% increase. No significant change was seen outside the city. Geospatial analysis identified disadvantaged neighborhoods as having the greatest elevated blood lead level increases and informed response prioritization during the now-declared public health emergency.\n\n\nCONCLUSIONS\nThe percentage of children with elevated blood lead levels increased after water source change, particularly in socioeconomically disadvantaged neighborhoods. Water is a growing source of childhood lead exposure because of aging infrastructure.",
"title": ""
},
{
"docid": "10b94bdea46ff663dd01291c5dac9e9f",
"text": "The notion of an instance is ubiquitous in knowledge representations for domain modeling. Most languages used for domain modeling offer syntactic or semantic restrictions on specific language constructs that distinguish individuals and classes in the application domain. The use, however, of instances and classes to represent domain entities has been driven by concerns that range from the strictly practical (e.g. the exploitation of inheritance) to the vaguely philosophical (e.g. intuitive notions of intension and extension). We demonstrate the importance of establishing a clear ontological distinction between instances and classes, and then show modeling scenarios where a single object may best be viewed as a class and an instance. To avoid ambiguous interpretations of such objects, it is necessary to introduce separate universes of discourse in which the same object exists in different forms. We show that a limited facility to support this notion exists in modeling languages like Smalltalk and CLOS, and argue that a more general facility should be made explicit in modeling languages.",
"title": ""
},
{
"docid": "b72f4554f2d7ac6c5a8000d36a099e67",
"text": "Sign Language Recognition (SLR) has been an active research field for the last two decades. However, most research to date has considered SLR as a naive gesture recognition problem. SLR seeks to recognize a sequence of continuous signs but neglects the underlying rich grammatical and linguistic structures of sign language that differ from spoken language. In contrast, we introduce the Sign Language Translation (SLT) problem. Here, the objective is to generate spoken language translations from sign language videos, taking into account the different word orders and grammar. We formalize SLT in the framework of Neural Machine Translation (NMT) for both end-to-end and pretrained settings (using expert knowledge). This allows us to jointly learn the spatial representations, the underlying language model, and the mapping between sign and spoken language. To evaluate the performance of Neural SLT, we collected the first publicly available Continuous SLT dataset, RWTH-PHOENIX-Weather 2014T1. It provides spoken language translations and gloss level annotations for German Sign Language videos of weather broadcasts. Our dataset contains over .95M frames with >67K signs from a sign vocabulary of >1K and >99K words from a German vocabulary of >2.8K. We report quantitative and qualitative results for various SLT setups to underpin future research in this newly established field. The upper bound for translation performance is calculated at 19.26 BLEU-4, while our end-to-end frame-level and gloss-level tokenization networks were able to achieve 9.58 and 18.13 respectively.",
"title": ""
},
{
"docid": "5d9b29c10d878d288a960ae793f2366e",
"text": "We propose a new bandgap reference topology for supply voltages as low as one diode drop (~0.8V). In conventional low-voltage references, supply voltage is limited by the generated reference voltage. Also, the proposed topology generates the reference voltage at the output of the feedback amplifier. This eliminates the need for an additional output buffer, otherwise required in conventional topologies. With the bandgap core biased from the reference voltage, the new topology is also suitable for a low-voltage shunt reference. We fabricated a 1V, 0.35mV/degC reference occupying 0.013mm2 in a 90nm CMOS process",
"title": ""
},
{
"docid": "de630d018f3ff24fad06976e8dc390fa",
"text": "A critical first step in navigation of unmanned aerial vehicles is the detection of the horizon line. This information can be used for adjusting flight parameters, attitude estimation as well as obstacle detection and avoidance. In this paper, a fast and robust technique for precise detection of the horizon is presented. Our approach is to apply convolutional neural networks to the task, training them to detect the sky and ground regions as well as the horizon line in flight videos. Thorough experiments using large datasets illustrate the significance and accuracy of this technique for various types of terrain as well as seasonal conditions.",
"title": ""
},
{
"docid": "cb47cc2effac1404dd60a91a099699d1",
"text": "We survey recent trends in practical algorithms for balanced graph partitioning, point to applications and discuss future research directions.",
"title": ""
},
{
"docid": "ac1302f482309273d9e61fdf0f093e01",
"text": "Retinal vessel segmentation is an indispensable step for automatic detection of retinal diseases with fundoscopic images. Though many approaches have been proposed, existing methods tend to miss fine vessels or allow false positives at terminal branches. Let alone undersegmentation, over-segmentation is also problematic when quantitative studies need to measure the precise width of vessels. In this paper, we present a method that generates the precise map of retinal vessels using generative adversarial training. Our methods achieve dice coefficient of 0.829 on DRIVE dataset and 0.834 on STARE dataset which is the state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "af5cd4c5325db5f7d9131b7a7ba12ba5",
"text": "Understanding unstructured text in e-commerce catalogs is important for product search and recommendations. In this paper, we tackle the product discovery problem for fashion e-commerce catalogs where each input listing text consists of descriptions of one or more products; each with its own set of attributes. For instance, [this RED printed short top paired with blue jeans makes you go green] contains two products: item top with attributes {pattern=printed, length=short, brand=RED} and item jeans with attributes {color=blue}. The task of product discovery is rendered quite challenging due to the complexity of fashion dictionary (e.g. RED is a brand or green is a metaphor) added to the difficulty of associating attributes to appropriate items (e.g. associating RED brand with item top). Beyond classical attribute extraction task, product discovery entails parsing multi-sentence listings to tag new items and attributes unknown to the underlying schema; at the same time, associating attributes to relevant items to form meaningful products. Towards solving this problem, we propose a novel composition of sequence labeling and multi-task learning as an end-to-end trainable deep neural architecture. We systematically evaluate our approach on one of the largest tagged datasets in e-commerce consisting of 25K listings labeled at word-level. Given 23 labels, we discover label-values with F1 score of 92.2%. To our knowledge, this is the first work to tackle product discovery and show effectiveness of neural architectures on a complex dataset that goes beyond popular datasets for POS tagging and NER.",
"title": ""
},
{
"docid": "e1b69d4f2342a90b52215927f727421b",
"text": "We present an inertial sensor based monitoring system for measuring upper limb movements in real time. The purpose of this study is to develop a motion tracking device that can be integrated within a home-based rehabilitation system for stroke patients. Human upper limbs are represented by a kinematic chain in which there are four joint variables to be considered: three for the shoulder joint and one for the elbow joint. Kinematic models are built to estimate upper limb motion in 3-D, based on the inertial measurements of the wrist motion. An efficient simulated annealing optimisation method is proposed to reduce errors in estimates. Experimental results demonstrate the proposed system has less than 5% errors in most motion manners, compared to a standard motion tracker.",
"title": ""
},
{
"docid": "303098fa8e5ccd7cf50a955da7e47f2e",
"text": "This paper describes the SALSA corpus, a large German corpus manually annotated with role-semantic information, based on the syntactically annotated TIGER newspaper corpus (Brants et al., 2002). The first release, comprising about 20,000 annotated predicate instances (about half the TIGER corpus), is scheduled for mid-2006. In this paper we discuss the frame-semantic annotation framework and its cross-lingual applicability, problems arising from exhaustive annotation, strategies for quality control, and possible applications.",
"title": ""
},
{
"docid": "647ede4f066516a0343acef725e51d01",
"text": "This work proposes a dual-polarized planar antenna; two post-wall slotted waveguide arrays with orthogonal 45/spl deg/ linearly-polarized waves interdigitally share the aperture on a single layer substrate. Uniform excitation of the two-dimensional slot array is confirmed by experiment in the 25 GHz band. The isolation between two slot arrays is also investigated in terms of the relative displacement along the radiation waveguide axis in the interdigital structure. The isolation is 33.0 dB when the relative shift of slot position between the two arrays is -0.5/spl lambda//sub g/, while it is only 12.8 dB when there is no shift. The cross-polarization level in the far field is -25.2 dB for a -0.5/spl lambda//sub g/ shift, which is almost equal to that of the isolated single polarization array. It is degraded down to -9.6 dB when there is no shift.",
"title": ""
},
{
"docid": "ddc6a5e9f684fd13aec56dc48969abc2",
"text": "During debugging, a developer must repeatedly and manually reproduce faulty behavior in order to inspect different facets of the program's execution. Existing tools for reproducing such behaviors prevent the use of debugging aids such as breakpoints and logging, and are not designed for interactive, random-access exploration of recorded behavior. This paper presents Timelapse, a tool for quickly recording, reproducing, and debugging interactive behaviors in web applications. Developers can use Timelapse to browse, visualize, and seek within recorded program executions while simultaneously using familiar debugging tools such as breakpoints and logging. Testers and end-users can use Timelapse to demonstrate failures in situ and share recorded behaviors with developers, improving bug report quality by obviating the need for detailed reproduction steps. Timelapse is built on Dolos, a novel record/replay infrastructure that ensures deterministic execution by capturing and reusing program inputs both from the user and from external sources such as the network. Dolos introduces negligible overhead and does not interfere with breakpoints and logging. In a small user evaluation, participants used Timelapse to accelerate existing reproduction activities, but were not significantly faster or more successful in completing the larger tasks at hand. Together, the Dolos infrastructure and Timelapse developer tool support systematic bug reporting and debugging practices.",
"title": ""
},
{
"docid": "0830abcb23d763c1298bf4605f81eb72",
"text": "A key technical challenge in performing 6D object pose estimation from RGB-D image is to fully leverage the two complementary data sources. Prior works either extract information from the RGB image and depth separately or use costly post-processing steps, limiting their performances in highly cluttered scenes and real-time applications. In this work, we present DenseFusion, a generic framework for estimating 6D pose of a set of known objects from RGBD images. DenseFusion is a heterogeneous architecture that processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embedding, from which the pose is estimated. Furthermore, we integrate an end-to-end iterative pose refinement procedure that further improves the pose estimation while achieving near real-time inference. Our experiments show that our method outperforms state-of-the-art approaches in two datasets, YCB-Video and LineMOD. We also deploy our proposed method to a real robot to grasp and manipulate objects based on the estimated pose. Our code and video are available at https://sites.google.com/view/densefusion/.",
"title": ""
},
{
"docid": "27487316cbda79a378b706d19d53178f",
"text": "Pallister-Killian syndrome (PKS) is a congenital disorder attributed to supernumerary isochromosome 12p mosaicism. Craniofacial dysmorphism, learning impairment and seizures are considered cardinal features. However, little is known regarding the seizure and epilepsy patterns in PKS. To better define the prevalence and spectrum of seizures in PKS, we studied 51 patients (39 male, 12 female; median age 4 years and 9 months; age range 7 months to 31 years) with confirmed 12p tetrasomy. Using a parent-based structured questionnaire, we collected data regarding seizure onset, frequency, timing, semiology, and medication therapy. Patients were recruited through our practice, at PKS Kids family events, and via the PKS Kids website. Epilepsy occurred in 27 (53%) with 23 (85%) of those with seizures having seizure onset prior to 3.5 years of age. Mean age at seizure onset was 2 years and 4 months. The most common seizure types were myoclonic (15/27, 56%), generalized convulsions (13/27, 48%), and clustered tonic spasms (similar to infantile spasms; 8/27, 30%). Thirteen of 27 patients with seizures (48%) had more than one seizure type with 26 out of 27 (96%) ever having taken antiepileptic medications. Nineteen of 27 (70%) continued to have seizures and 17/27 (63%) remained on antiepileptic medication. The most commonly used medications were: levetiracetam (10/27, 37%), valproic acid (10/27, 37%), and topiramate (9/27, 33%) with levetiracetam felt to be \"most helpful\" by parents (6/27, 22%). Further exploration of seizure timing, in-depth analysis of EEG recordings, and collection of MRI data to rule out confounding factors is warranted.",
"title": ""
},
{
"docid": "ffc9a5b907f67e1cedd8f9ab0b45b869",
"text": "In this brief, we study the design of a feedback and feedforward controller to compensate for creep, hysteresis, and vibration effects in an experimental piezoactuator system. First, we linearize the nonlinear dynamics of the piezoactuator by accounting for the hysteresis (as well as creep) using high-gain feedback control. Next, we model the linear vibrational dynamics and then invert the model to find a feedforward input to account vibration - this process is significantly easier than considering the complete nonlinear dynamics (which combines hysteresis and vibration effects). Afterwards, the feedforward input is augmented to the feedback-linearized system to achieve high-precision highspeed positioning. We apply the method to a piezoscanner used in an experimental atomic force microscope to demonstrate the method's effectiveness and we show significant reduction of both the maximum and root-mean-square tracking error. For example, high-gain feedback control compensates for hysteresis and creep effects, and in our case, it reduces the maximum error (compared to the uncompensated case) by over 90%. Then, at relatively high scan rates, the performance of the feedback controlled system can be improved by over 75% (i.e., reduction of maximum error) when the inversion-based feedforward input is integrated with the high-gain feedback controlled system.",
"title": ""
},
{
"docid": "4023c95464a842277e4dc62b117de8d0",
"text": "Many complex spike cells in the hippocampus of the freely moving rat have as their primary correlate the animal's location in an environment (place cells). In contrast, the hippocampal electroencephalograph theta pattern of rhythmical waves (7-12 Hz) is better correlated with a class of movements that change the rat's location in an environment. During movement through the place field, the complex spike cells often fire in a bursting pattern with an interburst frequency in the same range as the concurrent electroencephalograph theta. The present study examined the phase of the theta wave at which the place cells fired. It was found that firing consistently began at a particular phase as the rat entered the field but then shifted in a systematic way during traversal of the field, moving progressively forward on each theta cycle. This precession of the phase ranged from 100 degrees to 355 degrees in different cells. The effect appeared to be due to the fact that individual cells had a higher interburst rate than the theta frequency. The phase was highly correlated with spatial location and less well correlated with temporal aspects of behavior, such as the time after place field entry. These results have implications for several aspects of hippocampal function. First, by using the phase relationship as well as the firing rate, place cells can improve the accuracy of place coding. Second, the characteristics of the phase shift constrain the models that define the construction of place fields. Third, the results restrict the temporal and spatial circumstances under which synapses in the hippocampus could be modified.",
"title": ""
},
{
"docid": "6bc31257bfbcc9531a3acf1ec738c790",
"text": "BACKGROUND\nThe interaction of depression and anesthesia and surgery may result in significant increases in morbidity and mortality of patients. Major depressive disorder is a frequent complication of surgery, which may lead to further morbidity and mortality.\n\n\nLITERATURE SEARCH\nSeveral electronic data bases, including PubMed, were searched pairing \"depression\" with surgery, postoperative complications, postoperative cognitive impairment, cognition disorder, intensive care unit, mild cognitive impairment and Alzheimer's disease.\n\n\nREVIEW OF THE LITERATURE\nThe suppression of the immune system in depressive disorders may expose the patients to increased rates of postoperative infections and increased mortality from cancer. Depression is commonly associated with cognitive impairment, which may be exacerbated postoperatively. There is evidence that acute postoperative pain causes depression and depression lowers the threshold for pain. Depression is also a strong predictor and correlate of chronic post-surgical pain. Many studies have identified depression as an independent risk factor for development of postoperative delirium, which may be a cause for a long and incomplete recovery after surgery. Depression is also frequent in intensive care unit patients and is associated with a lower health-related quality of life and increased mortality. Depression and anxiety have been widely reported soon after coronary artery bypass surgery and remain evident one year after surgery. They may increase the likelihood for new coronary artery events, further hospitalizations and increased mortality. Morbidly obese patients who undergo bariatric surgery have an increased risk of depression. Postoperative depression may also be associated with less weight loss at one year and longer. The extent of preoperative depression in patients scheduled for lumbar discectomy is a predictor of functional outcome and patient's dissatisfaction, especially after revision surgery. General postoperative mortality is increased.\n\n\nCONCLUSIONS\nDepression is a frequent cause of morbidity in surgery patients suffering from a wide range of conditions. Depression may be identified through the use of Patient Health Questionnaire-9 or similar instruments. Counseling interventions may be useful in ameliorating depression, but should be subject to clinical trials.",
"title": ""
}
] |
scidocsrr
|
305542b453075f284bf65c67079082c5
|
Title : Towards a common framework for knowledge co-creation : opportunities for collaboration between Service Science and Sustainability Science Track : Viable Systems Approach
|
[
{
"docid": "9f20a4117c3e09250af9e9c3de4d37de",
"text": "Service-dominant logic (S-D logic) is contrasted with goods-dominant (G-D) logic to provide a framework for thinking more clearly about the concept of service and its role in exchange and competition. Then, relying upon the nine foundational premises of S-D logic [Vargo, Stephen L. and Robert F. Lusch (2004). “Evolving to a New Dominant Logic for Marketing,†Journal of Marketing, 68 (January) 1–17; Lusch, Robert F. and Stephen L. Vargo (2006), “Service-Dominant Logic as a Foundation for Building a General Theory,†in The Service-Dominant Logic of Marketing: Dialog, Debate and Directions. Robert F. Lusch and Stephen L. Vargo (eds.), Armonk, NY: M.E. Sharpe, 406–420] nine derivative propositions are developed that inform marketers on how to compete through service. a c, 2 Purchase Export",
"title": ""
}
] |
[
{
"docid": "6aaa2b6cc2593ee2f65623ddb9c84f4c",
"text": "We propose a large dataset for machine learning-based automatic keyphrase extraction. The dataset has a high quality and consist of 2,000 of scientific papers from computer science domain published by ACM. Each paper has its keyphrases assigned by the authors and verified by the reviewers. Different parts of papers, such as title and abstract, are separated, enabling extraction based on a part of an article's text. The content of each paper is converted from PDF to plain text. The pieces of formulae, tables, figures and LaTeX mark up were removed automatically. For removal we have used Maximum Entropy Model-based machine learning and achieved 97.04% precision. Preliminary investigation with help of the state of the art keyphrase extraction system KEA shows keyphrases recognition accuracy improvement for refined texts.",
"title": ""
},
{
"docid": "5948af3805969eb3b9e1cca4c8a5957c",
"text": "Force-controllable actuators are essential for guaranteeing safety in human–robot interactions. Magnetic lead screws (MLSs) transfer force without requiring contact between parts. These devices can drive the parts with high efficiency and no frictional contact, and they are force limited when overloaded. We have developed a novel MLS that does not include spiral permanent magnets and an MLS-driven linear actuator (MLSDLA) that uses this device. This simple structure reduces the overall size of the device and improves productivity because it is constructed by a commonly used machined screw as a screw. The actuator can drive back against an external force and it moves flexibly based on the magnetic spring effect. In this paper, we propose a force estimation method for the MLSDLA that does not require separate sensors. The magnetic phase difference, as measured from the angular and linear displacements of the actuator, is used for this calculation. The estimated force is then compared against measurements recorded with a load sensor in order to verify the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "164e5bde10882e3f7a6bcdf473eb7387",
"text": "This paper is written for (social science) researchers seeking to analyze the wealth of social media now available. It presents a comprehensive review of software tools for social networking media, wikis, really simple syndication feeds, blogs, newsgroups, chat and news feeds. For completeness, it also includes introductions to social media scraping, storage, data cleaning and sentiment analysis. Although principally a review, the paper also provides a methodology and a critique of social media tools. Analyzing social media, in particular Twitter feeds for sentiment analysis, has become a major research and business activity due to the availability of web-based application programming interfaces (APIs) provided by Twitter, Facebook and News services. This has led to an ‘explosion’ of data services, software tools for scraping and analysis and social media analytics platforms. It is also a research area undergoing rapid change and evolution due to commercial pressures and the potential for using social media data for computational (social science) research. Using a simple taxonomy, this paper provides a review of leading software tools and how to use them to scrape, cleanse and analyze the spectrum of social media. In addition, it discussed the requirement of an experimental computational environment for social media research and presents as an illustration the system architecture of a social media (analytics) platform built by University College London. The principal contribution of this paper is to provide an overview (including code fragments) for scientists seeking to utilize social media scraping and analytics either in their research or business. The data retrieval techniques that are presented in this paper are valid at the time of writing this paper (June 2014), but they are subject to change since social media data scraping APIs are rapidly changing.",
"title": ""
},
{
"docid": "10172bbb61d404eb38a898bafadb5021",
"text": "Numerical code uses floating-point arithmetic and necessarily suffers from roundoff and truncation errors. Error analysis is the process to quantify such uncertainty in the solution to a problem. Forward error analysis and backward error analysis are two popular paradigms of error analysis. Forward error analysis is more intuitive and has been explored and automated by the programming languages (PL) community. In contrast, although backward error analysis is more preferred by numerical analysts and the foundation for numerical stability, it is less known and unexplored by the PL community. To fill the gap, this paper presents an automated backward error analysis for numerical code to empower both numerical analysts and application developers. In addition, we use the computed backward error results to also compute the condition number, an important quantity recognized by numerical analysts for measuring how sensitive a function is to changes or errors in the input. Experimental results on Intel X87 FPU functions and widely-used GNU C Library functions demonstrate that our analysis is effective at analyzing the accuracy of floating-point programs.",
"title": ""
},
{
"docid": "18a5e6686a26a2f17c65a217022163b1",
"text": "This paper proposes the first derivation, implementation, and experimental validation of light field image-based visual servoing. Light field image Jacobians are derived based on a compact light field feature representation that is close to the form measured directly by light field cameras. We also enhance feature detection and correspondence by enforcing light field geometry constraints, and directly estimate the image Jacobian without knowledge of point depth. The proposed approach is implemented over a standard visual servoing control loop, and applied to a custom-mirror-based light field camera mounted on a robotic arm. Light field image-based visual servoing is then validated in both simulation and experiment. We show that the proposed method outperforms conventional monocular and stereo image-based visual servoing under field-of-view constraints and occlusions.",
"title": ""
},
{
"docid": "17ebf9f15291a3810d57771a8c669227",
"text": "We describe preliminary work toward applying a goal reasoning agent for controlling an underwater vehicle in a partially observable, dynamic environment. In preparation for upcoming at-sea tests, our investigation focuses on a notional scenario wherein a autonomous underwater vehicle pursuing a survey goal unexpectedly detects the presence of a potentially hostile surface vessel. Simulations suggest that Goal Driven Autonomy can successfully reason about this scenario using only the limited computational resources typically available on underwater robotic platforms.",
"title": ""
},
{
"docid": "b15b88a31cc1762618ca976bdf895d57",
"text": "How can we build agents that keep learning from experience, quickly and efficiently, after their initial training? Here we take inspiration from the main mechanism of learning in biological brains: synaptic plasticity, carefully tuned by evolution to produce efficient lifelong learning. We show that plasticity, just like connection weights, can be optimized by gradient descent in large (millions of parameters) recurrent networks with Hebbian plastic connections. First, recurrent plastic networks with more than two million parameters can be trained to memorize and reconstruct sets of novel, high-dimensional (1,000+ pixels) natural images not seen during training. Crucially, traditional non-plastic recurrent networks fail to solve this task. Furthermore, trained plastic networks can also solve generic meta-learning tasks such as the Omniglot task, with competitive results and little parameter overhead. Finally, in reinforcement learning settings, plastic networks outperform a non-plastic equivalent in a maze exploration task. We conclude that differentiable plasticity may provide a powerful novel approach to the learning-to-learn problem.",
"title": ""
},
{
"docid": "dec3f821a1f9fc8102450a4add31952b",
"text": "Homicide by hanging is an extremely rare incident [1]. Very few cases have been reported in which a person is rendered senseless and then hanged to simulate suicidal death; though there are a lot of cases in wherein a homicide victim has been hung later. We report a case of homicidal hanging of a young Sikh individual found hanging in a well. It became evident from the results of forensic autopsy that the victim had first been given alcohol mixed with pesticides and then hanged by his turban from a well. The rare combination of lynching (homicidal hanging) and use of organo-phosporous pesticide poisoning as a means of homicide are discussed in this paper.",
"title": ""
},
{
"docid": "24b2cedc9512566e44f9fd7e1acf8a85",
"text": "This paper presents an alternative visual authentication scheme with two secure layers for desktops or laptops. The first layer is a recognition-based scheme that addresses human factors for protection against bots by recognizing a Captcha and images with specific patterns. The second layer uses a clicked based Cued-Recall graphical password scheme for authentication, it also exploits emotions perceived by humans and use them as decision factor. The proposed authentication system is effective against brute-force, online guessing and relay attacks. We believe that the perception of security is enhaced using human emotions as main decision factor. The proposed scheme usability was tested using the Computer System Usability Questionnaires, results showed that it is highly usable and could improve the security level on ATM machines.",
"title": ""
},
{
"docid": "4f64e7ff2bed569d73da9cae011e995d",
"text": "Recent progress in semantic segmentation has been driven by improving the spatial resolution under Fully Convolutional Networks (FCNs). To address this problem, we propose a Stacked Deconvolutional Network (SDN) for semantic segmentation. In SDN, multiple shallow deconvolutional networks, which are called as SDN units, are stacked one by one to integrate contextual information and bring the fine recovery of localization information. Meanwhile, inter-unit and intra-unit connections are designed to assist network training and enhance feature fusion since the connections improve the flow of information and gradient propagation throughout the network. Besides, hierarchical supervision is applied during the upsampling process of each SDN unit, which enhances the discrimination of feature representations and benefits the network optimization. We carry out comprehensive experiments and achieve the new state-ofthe- art results on four datasets, including PASCAL VOC 2012, CamVid, GATECH, COCO Stuff. In particular, our best model without CRF post-processing achieves an intersection-over-union score of 86.6% in the test set.",
"title": ""
},
{
"docid": "f7e45feaa48b8d7741ac4cdb3ef4749b",
"text": "Classification problems refer to the assignment of some alt ern tives into predefined classes (groups, categories). Such problems often arise in several application fields. For instance, in assessing credit card applications the loan officer must evaluate the charact eristics of each applicant and decide whether an application should be accepted or rejected. Simil ar situations are very common in fields such as finance and economics, production management (fault diagnosis) , medicine, customer satisfaction measurement, data base management and retrieval, etc.",
"title": ""
},
{
"docid": "3fa0ab962ec54cea182a293810cf7ce8",
"text": "Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have. When something is peer reviewed it is in some sense blessed. Even journalists recognize this. When the BMJ published a highly controversial paper that argued that a new ‘disease’, female sexual dysfunction, was in some ways being created by pharmaceutical companies, a friend who is a journalist was very excited—not least because reporting it gave him a chance to get sex onto the front page of a highly respectable but somewhat priggish newspaper (the Financial Times). ‘But,’ the news editor wanted to know, ‘was this paper peer reviewed?’. The implication was that if it had been it was good enough for the front page and if it had not been it was not. Well, had it been? I had read it much more carefully than I read many papers and had asked the author, who happened to be a journalist, to revise the paper and produce more evidence. But this was not peer review, even though I was a peer of the author and had reviewed the paper. Or was it? (I told my friend that it had not been peer reviewed, but it was too late to pull the story from the front page.)",
"title": ""
},
{
"docid": "5c8923335dd4ee4c2123b5b3245fb595",
"text": "Virtualization is a key enabler of Cloud computing. Due to the numerous vulnerabilities in current implementations of virtualization, security is the major concern of Cloud computing. In this paper, we propose an enhanced security framework to detect intrusions at the virtual network layer of Cloud. It combines signature and anomaly based techniques to detect possible attacks. It uses different classifiers viz; naive bayes, decision tree, random forest, extra trees and linear discriminant analysis for an efficient and effective detection of intrusions. To detect distributed attacks at each cluster and at whole Cloud, it collects intrusion evidences from each region of Cloud and applies Dempster-Shafer theory (DST) for final decision making. We analyze the proposed security framework in terms of Cloud IDS requirements through offline simulation using different intrusion datasets.",
"title": ""
},
{
"docid": "8e1a63bc8cb3d329af03849c5b3aafd3",
"text": "First Sight, a vision system in labeling the outline of a moving human body, is proposed in this paper. The emphasis of First Sight is on the analysis of motion information gathered solely from the outline of a moving human object. Two main processes are implemented in First Sight. The first process uses a novel technique to extract the outline of a moving human body from an image sequence. The second process, which employs a new human body model, interprets the outline and produces a labeled two-dimensional human body stick figure for each frame of the image sequence. Extensive knowledge of the structure, shape, and posture of the human body is used in the model. The experimental results of applying the technique on unedited image sequences with self-occlusions and missing boundary lines are encouraging. Index Items-Coincidence edge, difference picture, human body, human body model, labeling, model, motion, outline, pose, posture, ribbon, stick figure.",
"title": ""
},
{
"docid": "216b169897d93939e64b552e4422aa69",
"text": "The ideal treatment of the nasolabial fold, the tear trough, the labiomandibular fold and the mentolabial sulcus is still discussed controversially. The detailed topographical anatomy of the fat compartments may clarify the anatomy of facial folds and may offer valuable information for choosing the adequate treatment modality. Nine non-fixed cadaver heads in the age range between 72 and 89 years (five female and four male) were investigated. Computed tomographic scans were performed after injection of a radiographic contrast medium directly into the fat compartments surrounding prominent facial folds. The data were analysed after multiplanar image reconstruction. The fat compartments surrounding the facial folds could be defined in each subject. Different arrangement patterns of the fat compartments around the facial rhytides were found. The nasolabial fold, the tear trough and the labiomandibular fold represent an anatomical border between adjacent fat compartments. By contrast, the glabellar fold and the labiomental sulcus have no direct relation to the boundaries of facial fat. Deep fat, underlying a facial rhytide, was identified underneath the nasolabial crease and the labiomental sulcus. In conclusion, an improvement by a compartment-specific volume augmentation of the nasolabial fold, the tear trough and the labiomandibular fold is limited by existing boundaries that extend into the skin. In the area of the nasolabial fold and the mentolabial sulcus, deep fat exists which can be used for augmentation and subsequent elevation of the folds. The treatment of the tear trough deformity appears anatomically the most challenging area since the superficial and deep fat compartments are separated by an osseo-cutaneous barrier, the orbicularis retaining ligament. In severe cases, a surgical treatment should be considered. By contrast, the glabellar fold shows the most simple anatomical architecture. The fold lies above one subcutaneous fat compartment that can be used for augmentation.",
"title": ""
},
{
"docid": "9b8317646ce6cad433e47e42198be488",
"text": "OBJECTIVE\nDigital mental wellbeing interventions are increasingly being used by the general public as well as within clinical treatment. Among these, mindfulness and meditation programs delivered through mobile device applications are gaining popularity. However, little is known about how people use and experience such applications and what are the enabling factors and barriers to effective use. To address this gap, the study reported here sought to understand how users adopt and experience a popular mobile-based mindfulness intervention.\n\n\nMETHODS\nA qualitative semi-structured interview study was carried out with 16 participants aged 25-38 (M=32.5) using the commercially popular mindfulness application Headspace for 30-40days. All participants were employed and living in a large UK city. The study design and interview schedule were informed by an autoethnography carried out by the first author for thirty days before the main study began. Results were interpreted in terms of the Reasoned Action Approach to understand behaviour change.\n\n\nRESULTS\nThe core concern of users was fitting the application into their busy lives. Use was also influenced by patterns in daily routines, on-going reflections about the consequences of using the app, perceived self-efficacy, emotion and mood states, personal relationships and social norms. Enabling factors for use included positive attitudes towards mindfulness and use of the app, realistic expectations and positive social influences. Barriers to use were found to be busy lifestyles, lack of routine, strong negative emotions and negative perceptions of mindfulness.\n\n\nCONCLUSIONS\nMobile wellbeing interventions should be designed with consideration of people's beliefs, affective states and lifestyles, and should be flexible to meet the needs of different users. Designers should incorporate features in the design of applications that manage expectations about use and that support users to fit app use into a busy lifestyle. The Reasoned Action Approach was found to be a useful theory to inform future research and design of persuasive mental wellbeing technologies.",
"title": ""
},
{
"docid": "49e2963e84967100deee8fc810e053ba",
"text": "We have developed a method for rigidly aligning images of tubes. This paper presents an evaluation of the consistency of that method for three-dimensional images of human vasculature. Vascular images may contain alignment ambiguities, poorly corresponding vascular networks, and non-rigid deformations, yet the Monte Carlo experiments presented in this paper show that our method registers vascular images with sub-voxel consistency in a matter of seconds. Furthermore, we show that the method's insensitivity to non-rigid deformations enables the localization, quantification, and visualization of those deformations. Our method aligns a source image with a target image by registering a model of the tubes in the source image directly with the target image. Time can be spent to extract an accurate model of the tubes in the source image. Multiple target images can then be registered with that model without additional extractions. Our registration method builds upon the principles of our tubular object segmentation work that combines dynamic-scale central ridge traversal with radius estimation. In particular, our registration method's consistency stems from incorporating multi-scale ridge and radius measures into the model-image match metric. Additionally, the method's speed is due in part to the use of coarse-to-fine optimization strategies that are enabled by measures made during model extraction and by the parameters inherent to the model-image match metric.",
"title": ""
},
{
"docid": "49f132862ca2c4a07d6233e8101a87ff",
"text": "Genetic data as a category of personal data creates a number of challenges to the traditional understanding of personal data and the rules regarding personal data processing. Although the peculiarities of and heightened risks regarding genetic data processing were recognized long before the data protection reform in the EU, the General Data Protection Regulation (GDPR) seems to pay no regard to this. Furthermore, the GDPR will create more legal grounds for (sensitive) personal data (incl. genetic data) processing whilst restricting data subjects’ means of control over their personal data. One of the reasons for this is that, amongst other aims, the personal data reform served to promote big data business in the EU. The substantive clauses of the GDPR concerning big data, however, do not differentiate between the types of personal data being processed. Hence, like all other categories of personal data, genetic data is subject to the big data clauses of the GDPR as well; thus leading to the question whether the GDPR is creating a pathway for ‘big genetic data’. This paper aims to analyse the implications that the role of the GDPR as a big data enabler bears on genetic data processing and the respective rights of the data",
"title": ""
},
{
"docid": "5542f4693a4251edcf995e7608fbda56",
"text": "This paper investigates the antecedents and consequences of customer loyalty in an online business-to-consumer (B2C) context. We identify eight factors (the 8Cs—customization, contact interactivity, care, community, convenience, cultivation, choice, and character) that potentially impact e-loyalty and develop scales to measure these factors. Data collected from 1,211 online customers demonstrate that all these factors, except convenience, impact e-loyalty. The data also reveal that e-loyalty has an impact on two customer-related outcomes: word-ofmouth promotion and willingness to pay more. © 2002 by New York University. All rights reserved.",
"title": ""
},
{
"docid": "43d9566553ecf29c72cdac7466aab9dc",
"text": "This paper presents an integrated approach for the automatic extraction of rectangularand circularshape buildings from high-resolution optical spaceborne images using the integration of support vector machine (SVM) classification, Hough transformation and perceptual grouping. The building patches are detected from the image using the binary SVM classification. The generated normalized digital surface model (nDSM) and the normalized difference vegetation index (NDVI) are incorporated in the classification process as additional bands. After detecting the building patches, the building boundaries are extracted through sequential processing of edge detection, Hough transformation and perceptual grouping. Those areas that are classified as building are masked and further processing operations are performed on the masked areas only. The edges of the buildings are detected through an edge detection algorithm that generates a binary edge image of the building patches. These edges are then converted into vector form through Hough transform and the buildings are constructed by means of perceptual grouping. To validate the developed method, experiments were conducted on pan-sharpened and panchromatic Ikonos imagery, covering the selected test areas in Batikent district of Ankara, Turkey. For the test areas that contain industrial buildings, the average building detection percentage (BDP) and quality percentage (QP) values were computed to be 93.45% and 79.51%, respectively. For the test areas that contain residential rectangular-shape buildings, the average BDP and QP values were computed to be 95.34% and 79.05%, respectively. For the test areas that contain residential circular-shape buildings, the average BDP and QP values were found to be 78.74% and 66.81%, respectively. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
452bfe889d01dccd523ba2c49603cab6
|
Modeling and Control of Three-Port DC/DC Converter Interface for Satellite Applications
|
[
{
"docid": "8b70670fa152dbd5185e80136983ff12",
"text": "This letter proposes a novel converter topology that interfaces three power ports: a source, a bidirectional storage port, and an isolated load port. The proposed converter is based on a modified version of the isolated half-bridge converter topology that utilizes three basic modes of operation within a constant-frequency switching cycle to provide two independent control variables. This allows tight control over two of the converter ports, while the third port provides the power balance in the system. The switching sequence ensures a clamping path for the energy of the leakage inductance of the transformer at all times. This energy is further utilized to achieve zero-voltage switching for all primary switches for a wide range of source and load conditions. Basic steady-state analysis of the proposed converter is included, together with a suggested structure for feedback control. Key experimental results are presented that validate the converter operation and confirm its ability to achieve tight independent control over two power processing paths. This topology promises significant savings in component count and losses for power-harvesting systems. The proposed topology and control is particularly relevant to battery-backed power systems sourced by solar or fuel cells",
"title": ""
}
] |
[
{
"docid": "6718aa3480c590af254a120376822d07",
"text": "This paper proposes a novel method for content-based watermarking based on feature points of an image. At each feature point, the watermark is embedded after scale normalization according to the local characteristic scale. Characteristic scale is the maximum scale of the scale-space representation of an image at the feature point. By binding watermarking with the local characteristics of an image, resilience against a5ne transformations can be obtained easily. Experimental results show that the proposed method is robust against various image processing steps including a5ne transformations, cropping, 7ltering and JPEG compression. ? 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ea048488791219be809072862a061444",
"text": "Our object oriented programming approach have great ability to improve the programming behavior for modern system and software engineering but it does not give the proper interaction of real world .In real world , programming required powerful interlinking among properties and characteristics towards the various objects. Basically this approach of programming gives the better presentation of object with real world and provide the better relationship among the objects. I have explained the new concept of my neuro object oriented approach .This approach contains many new features like originty , new concept of inheritance , new concept of encapsulation , object relation with dimensions , originty relation with dimensions and time , category of NOOPA like high order thinking object and low order thinking object , differentiation model for achieving the various requirements from the user and a rotational model .",
"title": ""
},
{
"docid": "060c1f1e08624c3b59610f150d6f27f8",
"text": "As graph models are applied to more widely varying fields, researchers struggle with tools for exploring and analyzing these structures. We describe GUESS, a novel system for graph exploration that combines an interpreted language with a graphical front end that allows researchers to rapidly prototype and deploy new visualizations. GUESS also contains a novel, interactive interpreter that connects the language and interface in a way that facilities exploratory visualization tasks. Our language, Gython, is a domain-specific embedded language which provides all the advantages of Python with new, graph specific operators, primitives, and shortcuts. We highlight key aspects of the system in the context of a large user survey and specific, real-world, case studies ranging from social and knowledge networks to distributed computer network analysis.",
"title": ""
},
{
"docid": "211484ec722f4df6220a86580d7ecba8",
"text": "The widespread use of vision-based surveillance systems has inspired many research efforts on people localization. In this paper, a series of novel image transforms based on the vanishing point of vertical lines is proposed for enhancement of the probabilistic occupancy map (POM)-based people localization scheme. Utilizing the characteristic that the extensions of vertical lines intersect at a vanishing point, the proposed transforms, based on image or ground plane coordinate system, aims at producing transformed images wherein each standing/walking person will have an upright appearance. Thus, the degradation in localization accuracy due to the deviation of camera configuration constraint specified can be alleviated, while the computation efficiency resulted from the applicability of integral image can be retained. Experimental results show that significant improvement in POM-based people localization for more general camera configurations can indeed be achieved with the proposed image transforms.",
"title": ""
},
{
"docid": "41b6bff4b6f3be41903725e39f630722",
"text": "Despite the huge research on crowd on behavior understanding in visual surveillance community, lack of publicly available realistic datasets for evaluating crowd behavioral interaction led not to have a fair common test bed for researchers to compare the strength of their methods in the real scenarios. This work presents a novel crowd dataset contains around 45,000 video clips which annotated by one of the five different fine-grained abnormal behavior categories. We also evaluated two state-of-the-art methods on our dataset, showing that our dataset can be effectively used as a benchmark for fine-grained abnormality detection. The details of the dataset and the results of the baseline methods are presented in the paper.",
"title": ""
},
{
"docid": "58b5be2fadbaacfb658f7d18cec807d3",
"text": "As the growth of rapid prototyping techniques shortens the development life cycle of software and electronic products, usability inquiry methods can play a more significant role during the development life cycle, diagnosing usability problems and providing metrics for making comparative decisions. A need has been realized for questionnaires tailored to the evaluation of electronic mobile products, wherein usability is dependent on both hardware and software as well as the emotional appeal and aesthetic integrity of the design. This research followed a systematic approach to develop a new questionnaire tailored to measure the usability of electronic mobile products. The Mobile Phone Usability Questionnaire (MPUQ) developed throughout this series of studies evaluates the usability of mobile phones for the purpose of making decisions among competing variations in the end-user market, alternatives of prototypes during the development process, and evolving versions during an iterative design process. In addition, the questionnaire can serve as a tool for identifying diagnostic information to improve specific usability dimensions and related interface elements. Employing the refined MPUQ, decision making models were developed using Analytic Hierarchy Process (AHP) and linear regression analysis. Next, a new group of representative mobile users was employed to develop a hierarchical model representing the usability dimensions incorporated in the questionnaire and to assign priorities to each node in the hierarchy. Employing the AHP and regression models, important usability dimensions and questionnaire items for mobile products were identified. Finally, a case study of comparative usability evaluations was performed to validate the MPUQ and models. A computerized support tool was developed to perform redundancy and relevancy analyses for the selection of appropriate questionnaire items. The weighted geometric mean was used to combine multiple numbers of matrices from pairwise comparison based on decision makers’ consistency ratio values for AHP. The AHP and regression models provided important usability dimensions so that mobile device usability practitioners can simply focus on the interface elements related to the decisive usability dimensions in order to improve the usability",
"title": ""
},
{
"docid": "2e29301adf162bb5e9fecea50a25a85a",
"text": "The collection and combination of assessment data in trustworthiness evaluation of cloud service is challenging, notably because QoS value may be missing in offline evaluation situation due to the time-consuming and costly cloud service invocation. Considering the fact that many trustworthiness evaluation problems require not only objective measurement but also subjective perception, this paper designs a novel framework named CSTrust for conducting cloud service trustworthiness evaluation by combining QoS prediction and customer satisfaction estimation. The proposed framework considers how to improve the accuracy of QoS value prediction on quantitative trustworthy attributes, as well as how to estimate the customer satisfaction of target cloud service by taking advantages of the perception ratings on qualitative attributes. The proposed methods are validated through simulations, demonstrating that CSTrust can effectively predict assessment data and release evaluation results of trustworthiness. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7165568feac9cc0bc0c1056b930958b8",
"text": "We describe a 63-year-old woman with an asymptomatic papular eruption on the vulva. Clinically, the lesions showed multiple pin-head-sized whitish papules on the labia major. Histologically, the biopsy specimen showed acantholysis throughout the epidermis with the presence of dyskeratotic cells resembling corps ronds and grains, hyperkeratosis and parakeratosis. These clinical and histological findings were consistent with the diagnosis of papular acantholytic dyskeratosis of the vulva which is a rare disorder, first described in 1984.",
"title": ""
},
{
"docid": "3e1690ae4d61d87edb0e4c3ce40f6a88",
"text": "Despite previous efforts in auditing software manually and automatically, buffer overruns are still being discovered in programs in use. A dynamic bounds checker detects buffer overruns in erroneous software before it occurs and thereby prevents attacks from corrupting the integrity of the system. Dynamic buffer overrun detectors have not been adopted widely because they either (1) cannot guard against all buffer overrun attacks, (2) break existing code, or (3) incur too high an overhead. This paper presents a practical detector called CRED (C Range Error Detector) that avoids each of these deficiencies. CRED finds all buffer overrun attacks as it directly checks for the bounds of memory accesses. Unlike the original referent-object based bounds-checking technique, CRED does not break existing code because it uses a novel solution to support program manipulation of out-of-bounds addresses. Finally, by restricting the bounds checks to strings in a program, CRED’s overhead is greatly reduced without sacrificing protection in the experiments we performed. CRED is implemented as an extension of the GNU C compiler version 3.3.1. The simplicity of our design makes possible a robust implementation that has been tested on over 20 open-source programs, comprising over 1.2 million lines of C code. CRED proved effective in detecting buffer overrun attacks on programs with known vulnerabilities, and is the only tool found to guard against a testbed of 20 different buffer overflow attacks[34]. Finding overruns only on strings impose an overhead of less This research was performed while the first author was at Stanford University, and this material is based upon work supported in part by the National Science Foundation under Grant No. 0086160. than 26% for 14 of the programs, and an overhead of up to 130% for the remaining six, while the previous state-ofthe-art bounds checker by Jones and Kelly breaks 60% of the programs and is 12 times slower. Incorporating wellknown techniques for optimizing bounds checking into CRED could lead to further performance improvements.",
"title": ""
},
{
"docid": "59a49feef4e3a79c5899fede208a183c",
"text": "This study proposed and tested a model of consumer online buying behavior. The model posits that consumer online buying behavior is affected by demographics, channel knowledge, perceived channel utilities, and shopping orientations. Data were collected by a research company using an online survey of 999 U.S. Internet users, and were cross-validated with other similar national surveys before being used to test the model. Findings of the study indicated that education, convenience orientation, Página 1 de 20 Psychographics of the Consumers in Electronic Commerce 11/10/01 http://www.ascusc.org/jcmc/vol5/issue2/hairong.html experience orientation, channel knowledge, perceived distribution utility, and perceived accessibility are robust predictors of online buying status (frequent online buyer, occasional online buyer, or non-online buyer) of Internet users. Implications of the findings and directions for future research were discussed.",
"title": ""
},
{
"docid": "21943e640ce9b56414994b5df504b1a6",
"text": "It is a preferable method to transfer power wirelessly using contactless slipring systems for rotary applications. The current single or multiple-unit single-phase systems often have limited power transfer capability, so they may not be able to meet the load requirements. This paper presents a contactless slipring system based on axially traveling magnetic field that can achieve a high output power level. A new index termed mutual inductance per pole is introduced to simplify the analysis of the mutually coupled poly-phase system to a single-phase basis. Both simulation and practical results have shown that the proposed system can transfer 2.7 times more power than a multiple-unit (six individual units) single-phase system with the same amount of ferrite and copper materials at higher power transfer efficiency. It has been found that the new system can achieve about 255.6 W of maximum power at 97% efficiency, compared to 68.4 W at 90% of a multiple-unit (six individual units) single-phase system.",
"title": ""
},
{
"docid": "caa7ecc11fc36950d3e17be440d04010",
"text": "In this paper, a comparative study of routing protocols is performed in a hybrid network to recommend the best routing protocol to perform load balancing for Internet traffic. Open Shortest Path First (OSPF), Interior Gateway Routing Protocol (IGRP) and Intermediate System to Intermediate System (IS-IS) routing protocols are compared in OPNET modeller 14 to investigate their capability of ensuring fair distribution of traffic in a hybrid network. The network simulated is scaled to a campus. The network loads are varied in size and performance study is made by running simulations with all the protocols. The only considered performance factors for observation are packet drop, network delay, throughput and network load. IGRP presented better performance as compared to other protocols. The benefit of using IGRP is reduced packet drop, reduced network delay, increased throughput while offering relative better distribution of traffic in a hybrid network.",
"title": ""
},
{
"docid": "a74081f7108e62fadb48446255dd246b",
"text": "Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration having lower generalization power than those of deep structures. This paper proposes a novel self-organizing deep fuzzy neural network, namely deep evolving fuzzy neural networks (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play little role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects the covariate drift, variations of input space, but also accurately identifies the real drift, dynamic changes of both feature space and target space. DEVFNN is developed under the stacked generalization principle via the feature augmentation concept where a recently developed algorithm, namely Generic Classifier (gClass), drives the hidden layer. It is equipped by an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of input space dimension due to the nature of feature augmentation approach in building a deep network structure. DEVFNN works in the sample-wise fashion and is compatible for data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using six datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four state-ofthe art data stream methods and its shallow counterpart where DEVFNN demonstrates improvement of classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of network structure while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.",
"title": ""
},
{
"docid": "2c442933c4729e56e5f4f46b5b8071d6",
"text": "Wireless body area networks consist of several devices placed on the human body, sensing vital signs and providing remote recognition of health disorders. Low power consumption is crucial in these networks. A new energy-efficient topology is provided in this paper, considering relay and sensor nodes' energy consumption and network maintenance costs. In this topology design, relay nodes, placed on the cloth, are used to help the sensor nodes forwarding data to the sink. Relay nodes' situation is determined such that the relay nodes' energy consumption merges the uniform distribution. Simulation results show that the proposed method increases the lifetime of the network with nearly uniform distribution of the relay nodes' energy consumption. Furthermore, this technique simultaneously reduces network maintenance costs and continuous replacements of the designer clothing. The proposed method also determines the way by which the network traffic is split and multipath routed to the sink.",
"title": ""
},
{
"docid": "48088cbe2f40cbbb32beb53efa224f3b",
"text": "Pain is a nonmotor symptom that substantially affects the quality of life of at least one-third of patients with Parkinson disease (PD). Interestingly, patients with PD frequently report different types of pain, and a successful approach to distinguish between these pains is required so that effective treatment strategies can be established. Differences between these pains are attributable to varying peripheral pain mechanisms, the role of motor symptoms in causing or amplifying pain, and the role of PD pathophysiology in pain processing. In this Review, we propose a four-tier taxonomy to improve classification of pain in PD. This taxonomy assigns nociceptive, neuropathic and miscellaneous pains to distinct categories, as well as further characterization into subcategories. Currently, treatment of pain in PD is based on empirical data only, owing to a lack of controlled studies. The facultative symptom of 'dopaminergically maintained pain' refers to pain that benefits from antiparkinson medication. Here, we also present additional pharmacological and nonpharmacological treatment approaches, which can be targeted to a specific pain following classification using our taxonomy.",
"title": ""
},
{
"docid": "936cdd4b58881275485739518ccb4f85",
"text": "Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems — BN’s error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN’s usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN’s computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code.",
"title": ""
},
{
"docid": "9fe93bda131467c7851d75644de83534",
"text": "The Banking industry has undergone a dramatic change since internet penetration and the concept of internet banking. Internet banking is defined as an internet portal, through which customers can use different kinds of banking services. Internet banking has major effects on banking relationships. The primary objective of this research is to identify the factors that influence internet banking adoption. Using PLS, a model is successfully proved and it is found that internet banking is influenced by its perceived reliability, Perceived ease of use and Perceived usefulness. In the marketing process of internet banking services marketing experts should emphasize these benefits its adoption provides and awareness can also be improved to attract consumers’ attention to internet banking services. Factors Influencing Consumer Adoption of Internet Banking in India 1 Assistant professor, Karunya School of Management, Karunya University, Coimbatore, India. Email: cprema@karunya.edu",
"title": ""
},
{
"docid": "959c3d0aaa3c17ab43f0362fd03f7b98",
"text": "In this thesis, channel estimation techniques are studied and investigated for a novel multicarrier modulation scheme, Universal Filtered Multi-Carrier (UFMC). UFMC (a.k.a. UFOFDM) is considered as a candidate for the 5th Generation of wireless communication systems, which aims at replacing OFDM and enhances system robustness and performance in relaxed synchronization condition e.g. time-frequency misalignment. Thus, it may more efficiently support Machine Type Communication (MTC) and Internet of Things (IoT), which are considered as challenging applications for next generation of wireless communication systems. There exist many methods of channel estimation, time-frequency synchronization and equalization for classical CP-OFDM systems. Pilot-aided methods known from CP-OFDM are adopted and applied to UFMC systems. The performance of UFMC is then compared with CP-OFDM.",
"title": ""
},
{
"docid": "9b8e9b5fa9585cf545d6ab82483c9f38",
"text": "A survey of bacterial and archaeal genomes shows that many Tn7-like transposons contain minimal type I-F CRISPR-Cas systems that consist of fused cas8f and cas5f, cas7f, and cas6f genes and a short CRISPR array. Several small groups of Tn7-like transposons encompass similarly truncated type I-B CRISPR-Cas. This minimal gene complement of the transposon-associated CRISPR-Cas systems implies that they are competent for pre-CRISPR RNA (precrRNA) processing yielding mature crRNAs and target binding but not target cleavage that is required for interference. Phylogenetic analysis demonstrates that evolution of the CRISPR-Cas-containing transposons included a single, ancestral capture of a type I-F locus and two independent instances of type I-B loci capture. We show that the transposon-associated CRISPR arrays contain spacers homologous to plasmid and temperate phage sequences and, in some cases, chromosomal sequences adjacent to the transposon. We hypothesize that the transposon-encoded CRISPR-Cas systems generate displacement (R-loops) in the cognate DNA sites, targeting the transposon to these sites and thus facilitating their spread via plasmids and phages. These findings suggest the existence of RNA-guided transposition and fit the guns-for-hire concept whereby mobile genetic elements capture host defense systems and repurpose them for different stages in the life cycle of the element.",
"title": ""
}
] |
scidocsrr
|
6ca533a904ec1622f69593cff72dd8e8
|
Indirect content privacy surveys: measuring privacy without asking about it
|
[
{
"docid": "575da85b3675ceaec26143981dbe9b53",
"text": "People are increasingly required to disclose personal information to computerand Internetbased systems in order to register, identify themselves or simply for the system to work as designed. In the present paper, we outline two different methods to easily measure people’s behavioral self-disclosure to web-based forms. The first, the use of an ‘I prefer not to say’ option to sensitive questions is shown to be responsive to the manipulation of level of privacy concern by increasing the salience of privacy issues, and to experimental manipulations of privacy. The second, blurring or increased ambiguity was used primarily by males in response to an income question in a high privacy condition. Implications for the study of self-disclosure in human–computer interaction and web-based research are discussed. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "1c832140fce684c68fd91779d62596e3",
"text": "The safety and antifungal efficacy of amphotericin B lipid complex (ABLC) were evaluated in 556 cases of invasive fungal infection treated through an open-label, single-patient, emergency-use study of patients who were refractory to or intolerant of conventional antifungal therapy. All 556 treatment episodes were evaluable for safety. During the course of ABLC therapy, serum creatinine levels significantly decreased from baseline (P < .02). Among 162 patients with serum creatinine values > or = 2.5 mg/dL at the start of ABLC therapy (baseline), the mean serum creatinine value decreased significantly from the first week through the sixth week (P < or = .0003). Among the 291 mycologically confirmed cases evaluable for therapeutic response, there was a complete or partial response to ABLC in 167 (57%), including 42% (55) of 130 cases of aspergillosis, 67% (28) of 42 cases of disseminated candidiasis, 71% (17) of 24 cases of zygomycosis, and 82% (9) of 11 cases of fusariosis. Response rates varied according to the pattern of invasive fungal infection, underlying condition, and reason for enrollment (intolerance versus progressive infection). These findings support the use of ABLC in the treatment of invasive fungal infections in patients who are intolerant of or refractory to conventional antifungal therapy.",
"title": ""
},
{
"docid": "a338df86cf504d246000c42512473f93",
"text": "Natural Language Processing (NLP) has emerged with a wide scope of research in the area. The Burmese language, also called the Myanmar Language is a resource scarce, tonal, analytical, syllable-timed and principally monosyllabic language with Subject-Object-Verb (SOV) ordering. NLP of Burmese language is also challenged by the fact that it has no white spaces and word boundaries. Keeping these facts in view, the current paper is a first formal attempt to present a bibliography of research works pertinent to NLP tasks in Burmese language. Instead of presenting mere catalogue, the current work is also specifically elaborated by annotations as well as classifications of NLP task research works in NLP related categories. The paper presents the state-of-the-art of Burmese NLP tasks. Both annotations and classifications of NLP tasks of Burmese language are useful to the scientific community as it shows where the field of research in Burmese NLP is going. In fact, to the best of author’s knowledge, this is first work of its kind worldwide for any language. For a period spanning more than 25 years, the paper discusses Burmese language Word Identification, Segmentation, Disambiguation, Collation, Semantic Parsing and Tokenization followed by Part-Of-Speech (POS) Tagging, Machine Translation Systems (MTS), Text Keying/Input, Recognition and Text Display Methods. Burmese language WordNet, Search Engine and influence of other languages on Burmese language are also discussed.",
"title": ""
},
{
"docid": "671573d5f3fc356ee0a5a3e373d6a52f",
"text": "This paper presents a fuzzy logic control for a speed control of DC induction motor. The simulation developed by using Fuzzy MATLAB Toolbox and SIMULINK. The fuzzy logic controller is also introduced to the system for keeping the motor speed to be constant when the load varies. Because of the low maintenance and robustness induction motors have many applications in the industries. The speed control of induction motor is more important to achieve maximum torque and efficiency. The result of the 3x3 matrix fuzzy control rules and 5x5 matrix fuzzy control rules of the theta and speed will do comparison in this paper. Observation the effects of the fuzzy control rules on the performance of the DC- induction motor-speed control.",
"title": ""
},
{
"docid": "872d06c4d3702d79cb1c7bcbc140881a",
"text": "Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation). A prompting service which supplies such information is not a satisfactory solution. Activities of users at terminals and most application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed. Changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information.\nExisting noninferential, formatted data systems provide users with tree-structured files or slightly more general network models of the data. In Section 1, inadequacies of these models are discussed. A model based on n-ary relations, a normal form for data base relations, and the concept of a universal data sublanguage are introduced. In Section 2, certain operations on relations (other than logical inference) are discussed and applied to the problems of redundancy and consistency in the user's model.",
"title": ""
},
{
"docid": "90dfa19b821aeab985a96eba0c3037d3",
"text": "Carcass mass and carcass clothing are factors of potential high forensic importance. In casework, corpses differ in mass and kind or extent of clothing; hence, a question arises whether methods for post-mortem interval estimation should take these differences into account. Unfortunately, effects of carcass mass and clothing on specific processes in decomposition and related entomological phenomena are unclear. In this article, simultaneous effects of these factors are analysed. The experiment followed a complete factorial block design with four levels of carcass mass (small carcasses 5–15 kg, medium carcasses 15.1–30 kg, medium/large carcasses 35–50 kg, large carcasses 55–70 kg) and two levels of carcass clothing (clothed and unclothed). Pig carcasses (N = 24) were grouped into three blocks, which were separated in time. Generally, carcass mass revealed significant and frequently large effects in almost all analyses, whereas carcass clothing had only minor influence on some phenomena related to the advanced decay. Carcass mass differently affected particular gross processes in decomposition. Putrefaction was more efficient in larger carcasses, which manifested itself through earlier onset and longer duration of bloating. On the other hand, active decay was less efficient in these carcasses, with relatively low average rate, resulting in slower mass loss and later onset of advanced decay. The average rate of active decay showed a significant, logarithmic increase with an increase in carcass mass, but only in these carcasses on which active decay was driven solely by larval blowflies. If a blowfly-driven active decay was followed by active decay driven by larval Necrodes littoralis (Coleoptera: Silphidae), which was regularly found in medium/large and large carcasses, the average rate showed only a slight and insignificant increase with an increase in carcass mass. These results indicate that lower efficiency of active decay in larger carcasses is a consequence of a multi-guild and competition-related pattern of this process. Pattern of mass loss in large and medium/large carcasses was not sigmoidal, but rather exponential. The overall rate of decomposition was strongly, but not linearly, related to carcass mass. In a range of low mass decomposition rate increased with an increase in mass, then at about 30 kg, there was a distinct decrease in rate, and again at about 50 kg, the rate slightly increased. Until about 100 accumulated degree-days larger carcasses gained higher total body scores than smaller carcasses. Afterwards, the pattern was reversed; moreover, differences between classes of carcasses enlarged with the progress of decomposition. In conclusion, current results demonstrate that cadaver mass is a factor of key importance for decomposition, and as such, it should be taken into account by decomposition-related methods for post-mortem interval estimation.",
"title": ""
},
{
"docid": "51179905a1ded4b38d7ba8490fbdac01",
"text": "Psychology—the way learning is defined, studied, and understood—underlies much of the curricular and instructional decision-making that occurs in education. Constructivism, perhaps the most current psychology of learning, is no exception. Initially based on the work of Jean Piaget and Lev Vygotsky, and then supported and extended by contemporary biologists and cognitive scientists, it is having major ramifications on the goals teachers set for the learners with whom they work, the instructional strategies teachers employ in working towards these goals, and the methods of assessment utilized by school personnel to document genuine learning. What is this theory of learning and development that is the basis of the current reform movement and how is it different from other models of psychology?",
"title": ""
},
{
"docid": "1fc10d626c7a06112a613f223391de26",
"text": "The question of what makes a face attractive, and whether our preferences come from culture or biology, has fascinated scholars for centuries. Variation in the ideals of beauty across societies and historical periods suggests that standards of beauty are set by cultural convention. Recent evidence challenges this view, however, with infants as young as 2 months of age preferring to look at faces that adults find attractive (Langlois et al., 1987), and people from different cultures showing considerable agreement about which faces are attractive (Cun-for a review). These findings raise the possibility that some standards of beauty may be set by nature rather than culture. Consistent with this view, specific preferences have been identified that appear to be part of our biological rather than Such a preference would be adaptive if stabilizing selection operates on facial traits (Symons, 1979), or if averageness is associated with resistance to pathogens , as some have suggested Evolutionary biologists have proposed that a preference for symmetry would also be adaptive because symmetry is a signal of health and genetic quality Only high-quality individuals can maintain symmetric development in the face of environmental and genetic stresses. Symmetric bodies are certainly attractive to humans and many other animals but what about symmetric faces? Biologists suggest that facial symmetry should be attractive because it may signal mate quality High levels of facial asymmetry in individuals with chro-mosomal abnormalities (e.g., Down's syndrome and Tri-somy 14; for a review, see Thornhill & Møller, 1997) are consistent with this view, as is recent evidence that facial symmetry levels correlate with emotional and psychological health (Shackelford & Larsen, 1997). In this paper, we investigate whether people can detect subtle differences in facial symmetry and whether these differences are associated with differences in perceived attractiveness. Recently, Kowner (1996) has reported that faces with normal levels of asymmetry are more attractive than perfectly symmetric versions of the same faces. 3 Similar results have been reported by Langlois et al. and an anonymous reviewer for helpful comments on an earlier version of the manuscript. We also thank Graham Byatt for assistance with stimulus construction, Linda Jeffery for assistance with the figures, and Alison Clark and Catherine Hickford for assistance with data collection and statistical analysis in Experiment 1A. Evolutionary, as well as cultural, pressures may contribute to our perceptions of facial attractiveness. Biologists predict that facial symmetry should be attractive, because it may signal …",
"title": ""
},
{
"docid": "fbfd3294cfe070ac432bf087fc382b18",
"text": "The alignment of business and information technology (IT) strategies is an important and enduring theoretical challenge for the information systems discipline, remaining a top issue in practice over the past 20 years. Multi-business organizations (MBOs) present a particular alignment challenge because business strategies are developed at the corporate level, within individual strategic business units and across the corporate investment cycle. In contrast, the extant literature implicitly assumes that IT strategy is aligned with a single business strategy at a single point in time. This paper draws on resource-based theory and path dependence to model functional, structural, and temporal IT strategic alignment in MBOs. Drawing on Makadok’s theory of profit, we show how each form of alignment creates value through the three strategic drivers of competence, governance, and flexibility, respectively. We illustrate the model with examples from a case study on the Commonwealth Bank of Australia. We also explore the model’s implications for existing IT alignment models, providing alternative theoretical explanations for how IT alignment creates value. Journal of Information Technology (2015) 30, 101–118. doi:10.1057/jit.2015.1; published online 24 March 2015",
"title": ""
},
{
"docid": "b03273ada7d85d37e4c44f1195c9a450",
"text": "Nowadays the trend to solve optimization problems is to use s pecific algorithms rather than very general ones. The UNLocBoX provides a general framework allowing the user to design his own algorithms. To do so, the framework try to stay as close from the mathematical problem as possible. M ore precisely, the UNLocBoX is a Matlab toolbox designed to solve convex optimi zation problem of the form",
"title": ""
},
{
"docid": "48fffb441a5e7f304554e6bdef6b659e",
"text": "The massive accumulation of genome-sequences in public databases promoted the proliferation of genome-level phylogenetic analyses in many areas of biological research. However, due to diverse evolutionary and genetic processes, many loci have undesirable properties for phylogenetic reconstruction. These, if undetected, can result in erroneous or biased estimates, particularly when estimating species trees from concatenated datasets. To deal with these problems, we developed GET_PHYLOMARKERS, a pipeline designed to identify high-quality markers to estimate robust genome phylogenies from the orthologous clusters, or the pan-genome matrix (PGM), computed by GET_HOMOLOGUES. In the first context, a set of sequential filters are applied to exclude recombinant alignments and those producing anomalous or poorly resolved trees. Multiple sequence alignments and maximum likelihood (ML) phylogenies are computed in parallel on multi-core computers. A ML species tree is estimated from the concatenated set of top-ranking alignments at the DNA or protein levels, using either FastTree or IQ-TREE (IQT). The latter is used by default due to its superior performance revealed in an extensive benchmark analysis. In addition, parsimony and ML phylogenies can be estimated from the PGM. We demonstrate the practical utility of the software by analyzing 170 Stenotrophomonas genome sequences available in RefSeq and 10 new complete genomes of Mexican environmental S. maltophilia complex (Smc) isolates reported herein. A combination of core-genome and PGM analyses was used to revise the molecular systematics of the genus. An unsupervised learning approach that uses a goodness of clustering statistic identified 20 groups within the Smc at a core-genome average nucleotide identity (cgANIb) of 95.9% that are perfectly consistent with strongly supported clades on the core- and pan-genome trees. In addition, we identified 16 misclassified RefSeq genome sequences, 14 of them labeled as S. maltophilia, demonstrating the broad utility of the software for phylogenomics and geno-taxonomic studies. The code, a detailed manual and tutorials are freely available for Linux/UNIX servers under the GNU GPLv3 license at https://github.com/vinuesa/get_phylomarkers. A docker image bundling GET_PHYLOMARKERS with GET_HOMOLOGUES is available at https://hub.docker.com/r/csicunam/get_homologues/, which can be easily run on any platform.",
"title": ""
},
{
"docid": "d21308f9ffa990746c6be137964d2e12",
"text": "'Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers', This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "c2fc4e65c484486f5612f4006b6df102",
"text": "Although flat item category structure where categories are independent in a same level has been well studied to enhance recommendation performance, in many real applications, item category is often organized in hierarchies to reflect the inherent correlations among categories. In this paper, we propose a novel matrix factorization model by exploiting category hierarchy from the perspectives of users and items for effective recommendation. Specifically, a user (an item) can be influenced (characterized) by her preferred categories (the categories it belongs to) in the hierarchy. We incorporate how different categories in the hierarchy co-influence a user and an item. Empirical results show the superiority of our approach against other counterparts.",
"title": ""
},
{
"docid": "1ecbdb3a81e046452905105600b90780",
"text": "Identity-invariant estimation of head pose from still images is a challenging task due to the high variability of facial appearance. We present a novel 3D head pose estimation approach, which utilizes the flexibility and expressibility of a dense generative 3D facial model in combination with a very fast fitting algorithm. The efficiency of the head pose estimation is obtained by a 2D synthesis of the facial input image. This optimization procedure drives the appearance and pose of the 3D facial model. In contrast to many other approaches we are specifically interested in the more difficult task of head pose estimation from still images, instead of tracking faces in image sequences. We evaluate our approach on two publicly available databases (FacePix and USF HumanID) and compare our method to the 3D morphable model and other state of the art approaches in terms of accuracy and speed.",
"title": ""
},
{
"docid": "2ce36ce9de500ba2367b1af83ac3e816",
"text": "We examine whether the information content of the earnings report, as captured by the earnings response coefficient (ERC), increases when investors’ uncertainty about the manager’s reporting objectives decreases, as predicted in Fischer and Verrecchia (2000). We use the 2006 mandatory compensation disclosures as an instrument to capture a decrease in investors’ uncertainty about managers’ incentives and reporting objectives. Employing a difference-in-differences design and exploiting the staggered adoption of the new rules, we find a statistically and economically significant increase in ERC for treated firms relative to control firms, largely driven by profit firms. Cross-sectional tests suggest that the effect is more pronounced in subsets of firms most affected by the new rules. Our findings represent the first empirical evidence of a role of compensation disclosures in enhancing the information content of financial reports. JEL Classification: G38, G30, G34, M41",
"title": ""
},
{
"docid": "959ad8268836d34648a52c449f5de987",
"text": "There is widespread sentiment that fast gradient methods (e.g. Nesterov’s acceleration, conjugate gradient, heavy ball) are not effective for the purposes of stochastic optimization due to their instability and error accumulation. Numerous works have attempted to quantify these instabilities in the face of either statistical or non-statistical errors (Paige, 1971; Proakis, 1974; Polyak, 1987; Greenbaum, 1989; Roy and Shynk, 1990; Sharma et al., 1998; d’Aspremont, 2008; Devolder et al., 2014; Yuan et al., 2016). This work considers these issues for the special case of stochastic approximation for the least squares regression problem, and our main result refutes this conventional wisdom by showing that acceleration can be made robust to statistical errors. In particular, this work introduces an accelerated stochastic gradient method that provably achieves the minimax optimal statistical risk faster than stochastic gradient descent. Critical to the analysis is a sharp characterization of accelerated stochastic gradient descent as a stochastic process. We hope this characterization gives insights towards the broader question of designing simple and effective accelerated stochastic methods for more general convex and non-convex optimization problems.",
"title": ""
},
{
"docid": "3c33528735b53a4f319ce4681527c163",
"text": "Within the past two years, important advances have been made in modeling credit risk at the portfolio level. Practitioners and policy makers have invested in implementing and exploring a variety of new models individually. Less progress has been made, however, with comparative analyses. Direct comparison often is not straightforward, because the different models may be presented within rather different mathematical frameworks. This paper offers a comparative anatomy of two especially influential benchmarks for credit risk models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We then design simulation exercises which evaluate the effect of each of these differences individually. JEL Codes: G31, C15, G11 ∗The views expressed herein are my own and do not necessarily reflect those of the Board of Governors or its staff. I would like to thank David Jones for drawing my attention to this issue, and for his helpful comments. I am also grateful to Mark Carey for data and advice useful in calibration of the models, and to Chris Finger and Tom Wilde for helpful comments. Please address correspondence to the author at Division of Research and Statistics, Mail Stop 153, Federal Reserve Board, Washington, DC 20551, USA. Phone: (202)452-3705. Fax: (202)452-5295. Email: 〈mgordy@frb.gov〉. Over the past decade, financial institutions have developed and implemented a variety of sophisticated models of value-at-risk for market risk in trading portfolios. These models have gained acceptance not only among senior bank managers, but also in amendments to the international bank regulatory framework. Much more recently, important advances have been made in modeling credit risk in lending portfolios. The new models are designed to quantify credit risk on a portfolio basis, and thus have application in control of risk concentration, evaluation of return on capital at the customer level, and more active management of credit portfolios. Future generations of today’s models may one day become the foundation for measurement of regulatory capital adequacy. Two of the models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+, have been released freely to the public since 1997 and have quickly become influential benchmarks. Practitioners and policy makers have invested in implementing and exploring each of the models individually, but have made less progress with comparative analyses. The two models are intended to measure the same risks, but impose different restrictions and distributional assumptions, and suggest different techniques for calibration and solution. Thus, given the same portfolio of credit exposures, the two models will, in general, yield differing evaluations of credit risk. Determining which features of the models account for differences in output would allow us a better understanding of the sensitivity of the models to the particular assumptions they employ. Unfortunately, direct comparison of the models is not straightforward, because the two models are presented within rather different mathematical frameworks. The CreditMetrics model is familiar to econometricians as an ordered probit model. 
Credit events are driven by movements in underlying unobserved latent variables. The latent variables are assumed to depend on external “risk factors.” Common dependence on the same risk factors gives rise to correlations in credit events across obligors. The CreditRisk+ model is based instead on insurance industry models of event risk. Instead of a latent variable, each obligor has a default probability. The default probabilities are not constant over time, but rather increase or decrease in response to background macroeconomic factors. To the extent that two obligors are sensitive to the same set of background factors, their default probabilities will move together. These co-movements in probability give rise to correlations in defaults. CreditMetrics and CreditRisk+ may serve essentially the same function, but they appear to be constructed quite differently. This paper offers a comparative anatomy of CreditMetrics and CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We can then design simulation exercises which evaluate the effect of these differences individually. We proceed as follows. Section 1 presents a summary of the CreditRisk+ model, and introduces a restricted version of CreditMetrics. The restrictions are imposed to facilitate direct comparison of CreditMetrics and CreditRisk+. While some of the richness of the full CreditMetrics implementation is sacrificed, the essential mathematical characteristics of the model are preserved. Our",
"title": ""
},
{
"docid": "56a072fc480c64e6a288543cee9cd5ac",
"text": "The performance of object detection has recently been significantly improved due to the powerful features learnt through convolutional neural networks (CNNs). Despite the remarkable success, there are still several major challenges in object detection, including object rotation, within-class diversity, and between-class similarity, which generally degenerate object detection performance. To address these issues, we build up the existing state-of-the-art object detection systems and propose a simple but effective method to train rotation-invariant and Fisher discriminative CNN models to further boost object detection performance. This is achieved by optimizing a new objective function that explicitly imposes a rotation-invariant regularizer and a Fisher discrimination regularizer on the CNN features. Specifically, the first regularizer enforces the CNN feature representations of the training samples before and after rotation to be mapped closely to each other in order to achieve rotation-invariance. The second regularizer constrains the CNN features to have small within-class scatter but large between-class separation. We implement our proposed method under four popular object detection frameworks, including region-CNN (R-CNN), Fast R- CNN, Faster R- CNN, and R- FCN. In the experiments, we comprehensively evaluate the proposed method on the PASCAL VOC 2007 and 2012 data sets and a publicly available aerial image data set. Our proposed methods outperform the existing baseline methods and achieve the state-of-the-art results.",
"title": ""
},
{
"docid": "7fd5f3461742db10503dd5e3d79fe3ed",
"text": "There is recent popularity in applying machine learning to medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries to simplify their use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.",
"title": ""
},
{
"docid": "14032695043a1cc16239317e496bac35",
"text": "The rearing of bees is a quite difficult job since it requires experience and time. Beekeepers are used to take care of their bee colonies observing them and learning to interpret their behavior. Despite the rearing of bees represents one of the most antique human habits, nowadays bees risk the extinction principally because of the increasing pollution levels related to human activity. It is important to increase our knowledge about bees in order to develop new practices intended to improve their protection. These practices could include new technologies, in order to increase profitability of beekeepers and economical interest related to bee rearing, but also innovative rearing techniques, genetic selections, environmental politics and so on. Moreover bees, since they are very sensitive to pollution, are considered environmental indicators, and the research on bees could give important information about the conditions of soil, air and water. In this paper we propose a real hardware and software solution for apply the internet-of-things concept to bees in order to help beekeepers to improve their business and collect data for research purposes.",
"title": ""
},
{
"docid": "83195a7a81b58fb7c22b1bb1d806eb42",
"text": "We demonstrate high-performance, flexible, transparent heaters based on large-scale graphene films synthesized by chemical vapor deposition on Cu foils. After multiple transfers and chemical doping processes, the graphene films show sheet resistance as low as ∼43 Ohm/sq with ∼89% optical transmittance, which are ideal as low-voltage transparent heaters. Time-dependent temperature profiles and heat distribution analyses show that the performance of graphene-based heaters is superior to that of conventional transparent heaters based on indium tin oxide. In addition, we confirmed that mechanical strain as high as ∼4% did not substantially affect heater performance. Therefore, graphene-based, flexible, transparent heaters are expected to find uses in a broad range of applications, including automobile defogging/deicing systems and heatable smart windows.",
"title": ""
}
] |
scidocsrr
|
342719595773f40c1753fe109e4219a9
|
Power watersheds: A new image segmentation framework extending graph cuts, random walker and optimal spanning forest
|
[
{
"docid": "0a78c9305d4b5584e87327ba2236d302",
"text": "This paper presents GeoS, a new algorithm for the efficient segmentation of n-dimensional image and video data. The segmentation problem is cast as approximate energy minimization in a conditional random field. A new, parallel filtering operator built upon efficient geodesic distance computation is used to propose a set of spatially smooth, contrast-sensitive segmentation hypotheses. An economical search algorithm finds the solution with minimum energy within a sensible and highly restricted subset of all possible labellings. Advantages include: i) computational efficiency with high segmentation accuracy; ii) the ability to estimate an approximation to the posterior over segmentations; iii) the ability to handle generally complex energy models. Comparison with max-flow indicates up to 60 times greater computational efficiency as well as greater memory efficiency. GeoS is validated quantitatively and qualitatively by thorough comparative experiments on existing and novel ground-truth data. Numerous results on interactive and automatic segmentation of photographs, video and volumetric medical image data are presented.",
"title": ""
},
{
"docid": "fb89fd2d9bf526b8bc7f1433274859a6",
"text": "In multidimensional image analysis, there are, and will continue to be, situations wherein automatic image segmentation methods fail, calling for considerable user assistance in the process. The main goals of segmentation research for such situations ought to be (i) to provide ffective controlto the user on the segmentation process while it is being executed, and (ii) to minimize the total user’s time required in the process. With these goals in mind, we present in this paper two paradigms, referred to aslive wireandlive lane, for practical image segmentation in large applications. For both approaches, we think of the pixel vertices and oriented edges as forming a graph, assign a set of features to each oriented edge to characterize its “boundariness,” and transform feature values to costs. We provide training facilities and automatic optimal feature and transform selection methods so that these assignments can be made with consistent effectiveness in any application. In live wire, the user first selects an initial point on the boundary. For any subsequent point indicated by the cursor, an optimal path from the initial point to the current point is found and displayed in real time. The user thus has a live wire on hand which is moved by moving the cursor. If the cursor goes close to the boundary, the live wire snaps onto the boundary. At this point, if the live wire describes the boundary appropriately, the user deposits the cursor which now becomes the new starting point and the process continues. A few points (livewire segments) are usually adequate to segment the whole 2D boundary. In live lane, the user selects only the initial point. Subsequent points are selected automatically as the cursor is moved within a lane surrounding the boundary whose width changes",
"title": ""
}
] |
[
{
"docid": "617bb88fdb8b76a860c58fc887ab2bc4",
"text": "Although space syntax has been successfully applied to many urban GIS studies, there is still a need to develop robust algorithms that support the automated derivation of graph representations. These graph structures are needed to apply the computational principles of space syntax and derive the morphological view of an urban structure. So far the application of space syntax principles to the study of urban structures has been a partially empirical and non-deterministic task, mainly due to the fact that an urban structure is modeled as a set of axial lines whose derivation is a non-computable process. This paper proposes an alternative model of space for the application of space syntax principles, based on the concepts of characteristic points defined as the nodes of an urban structure schematised as a graph. This method has several advantages over the axial line representation: it is computable and cognitively meaningful. Our proposal is illustrated by a case study applied to the city of GaÈ vle in Sweden. We will also show that this method has several nice properties that surpass the axial line technique.",
"title": ""
},
{
"docid": "a0b147e6baae3ea7622446da0b8d8e26",
"text": "The Web has come a long way since its invention by Berners-Lee, when it focused essentially on visualization and presentation of content for human consumption (Syntactic Web), to a Web providing meaningful content, facilitating the integration between people and machines (Semantic Web). This paper presents a survey of different tools that provide the enrichment of the Web with understandable annotation, in order to make its content available and interoperable between systems. We can group Semantic Annotation tools into the diverse dimensions: dynamicity, storage, information extraction process, scalability and customization. The analysis of the different annotation tools shows that (semi-)automatic and automatic systems aren't as efficient as needed without human intervention and will continue to evolve to solve the challenge. Microdata, RDFa and the new HTML5 standard will certainly bring new contributions to this issue.",
"title": ""
},
{
"docid": "6669f7260e0df11c320ac739433c6b40",
"text": "OBJECTIVE\nThis study was conducted to determine the attitudes of university students studying in different fields toward discrimination of the elderly.\n\n\nMETHODS\nThis descriptive study was conducted with students who were still studying in the 2015-2016 period. A sample size of 416 students was determined by the stratified sampling method, and students were selected by simple random sampling. Data were collected using an identifying information form and an Age Discrimination Attitude Scale (ADAS) by face-to-face interview. Statistical analysis was performed using the program SPSS 20.0.\n\n\nRESULTS\nThe mean total ADAS score of students was 67.7±6.0. The total ADAS scores and the scores of male students on limiting the life of the elderly was significantly higher than those of female students (p<0.05).\n\n\nCONCLUSION\nIt was determined that university students studying in different fields have a positive attitude toward the elderly. Action must be taken to remove discrimination of the elderly, and policies must be developed to increase social sensitivity.",
"title": ""
},
{
"docid": "53b22601144b7ea44c91fb7fca5c5bce",
"text": "In recent years, flexibility has emerged as an important guiding principle in the design of business processes. However, research on process flexibility has traditionally been solely focused on ways of how the demand for process flexibility can be satisfied by advanced process modelling techniques, i.e., issues intrinsic to the process. This paper proposes to extent existing research by studying the extrinsic drivers for process flexibility, i.e. an analysis of the root causes that drive the demand for flexible processes. These drivers can be found in the context of the process, which may include among others time, location, weather, legislation, culture or performance requirements. We argue for a stronger and more explicit consideration of these contextual factors in the design and modelling of business processes. Based on a real case study, we discuss how context can be conceptualized and integrated with existing approaches to business process design. These extensions are an essential foundation for the definition and implementation of agile processes and as such of high practical and theoretical value.",
"title": ""
},
{
"docid": "2841406ba32b534bb85fb970f2a00e58",
"text": "We present WHATSUP, a collaborative filtering system for disseminating news items in a large-scale dynamic setting with no central authority. WHATSUP constructs an implicit social network based on user profiles that express the opinions of users about the news items they receive (like-dislike). Users with similar tastes are clustered using a similarity metric reflecting long-standing and emerging (dis)interests. News items are disseminated through a novel heterogeneous gossip protocol that (1) biases the orientation of its targets towards those with similar interests, and (2) amplifies dissemination based on the level of interest in every news item. We report on an extensive evaluation of WHATSUP through (a) simulations, (b) a ModelNet emulation on a cluster, and (c) a PlanetLab deployment based on real datasets. We show that WHATSUP outperforms various alternatives in terms of accurate and complete delivery of relevant news items while preserving the fundamental advantages of standard gossip: namely, simplicity of deployment and robustness.",
"title": ""
},
{
"docid": "55b76ecbc7c994f095b0c45cb6ae034c",
"text": "of greenhouse gases in the atmosphere (IPCC, 2001), and ability of our agricultural systems to sustain producSociety is facing three related issues: overreliance on imported fuel, tion at rates needed to feed a growing world population increasing levels of greenhouse gases in the atmosphere, and producing sufficient food for a growing world population. The U.S. De(Cassman, 1999). Many papers have been written on partment of Energy and private enterprise are developing technology these topics both individually and in the various combinecessary to use high-cellulose feedstock, such as crop residues, for nations (Doran, 2002; Follett, 2001; Janzen et al., 1998a, ethanol production. Corn (Zea mays L.) residue can provide about 1998b; Lal et al., 1999). However, few authors have ad1.7 times more C than barley (Hordeum vulgare L.), oat (Avena sativa dressed all three topics together. L.), sorghum [Sorghum bicolor (L.) Moench], soybean [Glycine max Recent developments in the energy industry and ac(L.) Merr.], sunflower (Helianthus annuus L.), and wheat (Triticum tivity by entrepreneurs have prompted new strategies aestivum L.) residues based on production levels. Removal of crop for addressing the first issue, overreliance on imported residue from the field must be balanced against impacting the environfuels (Hettenhaus et al., 2000). This strategy expands use ment (soil erosion), maintaining soil organic matter levels, and preof biomass for fuel production and is contingent on deserving or enhancing productivity. Our objective is to summarize published works for potential impacts of wide-scale, corn stover collection velopment of new organisms or enzymes to convert on corn production capacity in Corn Belt soils. We address the issue of cellulosic (a high concentration of cellulose) biomass crop yield (sustainability) and related soil processes directly. However, [opposed to grain (starchy) biomass] to ethanol for use scarcity of data requires us to deal with the issue of greenhouse gases as a motor vehicle fuel. The U.S. DOE, in concert with indirectly and by inference. All ramifications of new management pracprivate enterprise, is making great strides toward develtices and crop uses must be explored and evaluated fully before an oping enzymes and improving efficiency in fuel producindustry is established. Our conclusion is that within limits, corn stover tion from biomass (DiPardo, 2000; Hettenhaus et al., can be harvested for ethanol production to provide a renewable, do2000). mestic source of energy that reduces greenhouse gases. RecommendaSources of cellulosic biomass are numerous (woody biotion for removal rates will vary based on regional yield, climatic mass crops and lumber industry wastes, forage crops, inconditions, and cultural practices. Agronomists are challenged to develop a procedure (tool) for recommending maximum permissible dustrial and municipal wastes, animal manure, and crop removal rates that ensure sustained soil productivity. residues); however, currently few sources are perceived to be available in sufficient quantity and quality to support development of an economically sized processing facility of about 1800 Mg dry matter d 1 (Hettenhaus T of the most pressing issues facing our society, et al., 2000), except crop residues (DiPardo, 2000). Bain the midterm, are overreliance on imported fuels gasse [remaining after sap extraction from sugarcane [U.S. 
Department of Energy (DOE) Office of Energy Ef(Saccharum officinarum L.)] in Louisiana and rice (Orficiency and Renewable Energy, 2002], increasing levels yza sativa L.) straw in California are regional examples of crop residues collected in current culture and availW.W. Wilhelm, USDA-ARS, 120 Keim Hall, Univ. of Nebraska, Linable for production of ethanol (DiPardo, 2000). Creatcoln, NE 68583-0934; J.M.F. Johnson, USDA-ARS, 803 Iowa Ave., ing an acceptable use or disposal procedure for these Morris, MN 56267-1065; J.L. Hatfield, 108 Natl. Soil Tilth Lab., 2150 residues represents a huge problem in the regions where Pammel Drive, Ames, IA 50011-3120; W.B. Voorhees, USDA-ARS (retired), 803 Iowa Ave., Morris, MN 56267-1065; and D.R. Linden, they are produced although the total quantity is not USDA-ARS (retired), 1991 Upper Buford Circle, St. Paul, MN 55108sufficient to have a great impact on fuel needs for the 0000. This paper is a joint contribution of the USDA-ARS and the nation (DiPardo, 2000). On the other hand, the quantity Agricultural Research Division of the University of Nebraska. Pubof corn stover is large, but corn stover is generally not lished as Journal Ser. no. 13949. Received 12 Dec. 2002. *Corresponding author (wwilhelm1@unl.edu). Abbreviations: 13C, change in 13C atom percent; DOE, Department Published in Agron. J. 96:1–17 (2004). American Society of Agronomy of Energy; HI, harvest index; SOC, soil organic carbon; SOM, soil organic matter. 677 S. Segoe Rd., Madison, WI 53711 USA",
"title": ""
},
{
"docid": "95c1eac3e2f814799c9d6a816714213c",
"text": "User interfaces for web image search engine results differ significantly from interfaces for traditional (text) web search results, supporting a richer interaction. In particular, users can see an enlarged image preview by hovering over a result image, and an `image preview' page allows users to browse further enlarged versions of the results, and to click-through to the referral page where the image is embedded. No existing work investigates the utility of these interactions as implicit relevance feedback for improving search ranking, beyond using clicks on images displayed in the search results page. In this paper we propose a number of implicit relevance feedback features based on these additional interactions: hover-through rate, 'converted-hover' rate, referral page click through, and a number of dwell time features. Also, since images are never self-contained, but always embedded in a referral page, we posit that clicks on other images that are embedded on the same referral webpage as a given image can carry useful relevance information about that image. We also posit that query-independent versions of implicit feedback features, while not expected to capture topical relevance, will carry feedback about the quality or attractiveness of images, an important dimension of relevance for web image search. In an extensive set of ranking experiments in a learning to rank framework, using a large annotated corpus, the proposed features give statistically significant gains of over 2% compared to a state of the art baseline that uses standard click features.",
"title": ""
},
{
"docid": "72cc9333577fb255c97f137c5d19fd54",
"text": "The purpose of this study was to provide insight on attitudes towards Facebook advertising. In order to figure out the attitudes towards Facebook advertising, a snowball survey was executed among Facebook users by spreading a link to the survey. This study was quantitative study but the results of the study were interpreted in qualitative way. This research was executed with the help of factor analysis and cluster analysis, after which Chisquare test was used. This research expected that the result of the survey would lead in to two different groups with negative and positive attitudes. Factor analysis was used to find relations between variables that the survey data generated. The factor analysis resulted in 12 factors that were put in a cluster analysis to find different kinds of groups. Surprisingly the cluster analysis enabled the finding of three groups with different interests and different attitudes towards Facebook advertising. These clusters were analyzed and compared. One group was clearly negative, tending to block and avoid advertisements. Second group was with more neutral attitude towards advertising, and more carefree internet using. They did not have blocking software in use and they like to participate in activities more often. The third group had positive attitude towards advertising. The result of this study can be used to help companies better plan their Facebook advertising according to groups. It also reminds about the complexity of people and their attitudes; not everything suits everybody.",
"title": ""
},
{
"docid": "77d80da2b0cd3e8598f9c677fc8827a9",
"text": "In this report, our approach to tackling the task of ActivityNet 2018 Kinetics-600 challenge is described in detail. Though spatial-temporal modelling methods, which adopt either such end-to-end framework as I3D [1] or two-stage frameworks (i.e., CNN+RNN), have been proposed in existing state-of-the-arts for this task, video modelling is far from being well solved. In this challenge, we propose spatial-temporal network (StNet) for better joint spatial-temporal modelling and comprehensively video understanding. Besides, given that multimodal information is contained in video source, we manage to integrate both early-fusion and later-fusion strategy of multi-modal information via our proposed improved temporal Xception network (iTXN) for video understanding. Our StNet RGB single model achieves 78.99% top-1 precision in the Kinetics-600 validation set and that of our improved temporal Xception network which integrates RGB, flow and audio modalities is up to 82.35%. After model ensemble, we achieve top-1 precision as high as 85.0% on the validation set and rank No.1 among all submissions.",
"title": ""
},
{
"docid": "04d319e0efbe7c79ab9487af67ef228d",
"text": "With the introduction of large-scale datasets and deep learning models capable of learning complex representations, impressive advances have emerged in face detection and recognition tasks. Despite such advances, existing datasets do not capture the difficulty of face recognition in the wildest scenarios, such as hostile disputes or fights. Furthermore, existing datasets do not represent completely unconstrained cases of low resolution, high blur and large pose/occlusion variances. To this end, we introduce the Wildest Faces dataset, which focuses on such adverse effects through violent scenes. The dataset consists of an extensive set of violent scenes of celebrities from movies. Our experimental results demonstrate that state-of-the-art techniques are not well-suited for violent scenes, and therefore, Wildest Faces is likely to stir further interest in face detection and recognition research.",
"title": ""
},
{
"docid": "7a290652407550fc1701b78beb557f75",
"text": "A fter years of focusing on explaining and predicting positive employee attitudes (e.g., job satisfaction, employee commitment) and behaviors (e.g., employee citizenship, work performance), organizational behavior researchers have increasingly turned their attention to understanding what drives Although researchers have used a variety of terms to describe such employee behavior (e.g., deviance, antisocial behavior, misbehavior, counterproductive behavior, unethical behavior), all of them share a concern with counternormative behavior intended to harm the organization or its stakeholders (O'Leary-Kelly, Duffy, & Griffin, 2000). Unethical behavior in organizations has been widely reported in the wake of many recent high-profile corporate scandals. As researchers and practitioners consider what may be driving such behavior, leaders are coming under increasing scrutiny not only because many senior executives are accused of having committed unethical acts but also because of the role that leaders at all levels are thought to play in managing the ethical (and unethical) conduct of organization members. For example, Bernie Ebbers, the former chief executive officer of WorldCom, was hailed as a great leader for growing the company into a telecommunications superpower.",
"title": ""
},
{
"docid": "e86ee868324e80910d57093c30c5c3f7",
"text": "These notes are based on a series of lectures I gave at the Tokyo Institute of Technology from April to July 2005. They constituted a course entitled “An introduction to geometric group theory” totalling about 20 hours. The audience consisted of fourth year students, graduate students as well as several staff members. I therefore tried to present a logically coherent introduction to the subject, tailored to the background of the students, as well as including a number of diversions into more sophisticated applications of these ideas. There are many statements left as exercises. I believe that those essential to the logical developments will be fairly routine. Those related to examples or diversions may be more challenging. The notes assume a basic knowledge of group theory, and metric and topological spaces. We describe some of the fundamental notions of geometric group theory, such as quasi-isometries, and aim for a basic overview of hyperbolic groups. We describe group presentations from first principles. We give an outline description of fundamental groups and covering spaces, sufficient to allow us to illustrate various results with more explicit examples. We also give a crash course on hyperbolic geometry. Again the presentation is rather informal, and aimed at providing a source of examples of hyperbolic groups. This is not logically essential to most of what follows. In principle, the basic theory of hyperbolic groups can be developed with no reference to hyperbolic geometry, but interesting examples would be rather sparse. In order not to interupt the exposition, I have not given references in the main text. We give sources and background material as notes in the final section. I am very grateful for the generous support offered by the Tokyo Insititute of Technology, which allowed me to complete these notes, as well as giving me the freedom to pursue my own research interests. I am indebted to Sadayoshi Kojima for his invitation to spend six months there, and for many interesting conversations. I thank Toshiko Higashi for her constant help in making my stay a very comfortable and enjoyable one. My then PhD student Ken Shackleton accompanied me on my visit, and provided some tutorial assistance. Shigeru Mizushima and Hiroshi Ooyama helped with some matters of translatation etc.",
"title": ""
},
{
"docid": "4ac6eb0f8db4d2c02b877c3d1c6892e0",
"text": "Safety and efficient operation are imperative factors t offshore production sites and a main concern to all Oil & Gas companies. A promising solution to improve both safety and efficiency is to increase the level of automation on the platforms by introducing intelligent robotic systems. Robots can execute a wide variety of tasks in offshore environments, incl uding monitoring and inspection, diagnosis and maintenance, proc ess production intervention, and cargo transport operations. In particular, considering the distance of offshore platfor ms from the Brazilian coast, such technology has great potential to increase safety by decreasing the number of onboard personnel , simp ify logistics, and reduce operating costs of Brazili n facilities. The use of robots can also allow proactive int grity management and increase frequency and efficiency of platform inspection. DORIS is a research project which endeavors to design and implement a mobile robot for remote supervision, diagnosi s, and data acquisition on offshore facilities. The propos ed ystem is composed of a rail-guided mobile robot capable of carrying different sensors through the inspected environment. The robot can also analyze sensor data and identify anomalies, such a intruders, abandoned objects, smoke, fire, and liquid lea kage. The system is able to read valves and make machine ry diagnosis as well. To prove the viability of the proposed system, an initial prototype is developed using a Roomba robot with several onboard sensors and preliminary tests have been performed in a real environment similar to an offshore platform. The te sts show that the robot is capable of indicating the presence or absence o f objects in a video stream and mapping the local area wit h laser sensor data during motion. A second prototype has been built to test the DORIS mechanical design. This prototype is us ed to test concepts related to motion on a rail with straight, cu rved, horizontal, and vertical sections. Initial results support the proposed mechanical concept and its functionalities. Introduction During the last decade, several Oil & Gas companies, re sea ch groups, and academic communities have shown an increas ed interest in the use of robotic systems for operation o f offshore facilities. Recent studies project a substant ial decrease in the level of human operation and an increase in automation used o n future offshore oil fields (Skourup and Pretlove, 2009). Today, robotic systems are used mainly for subsea tasks, s uch a mapping the seabed and performing inspection tasks on underwater equipment, risers, or pipelines using Remotely O perated Vehicles (ROVs) or Autonomous Underwater Vehicle s (AUVs). Topside operations, on the other hand, have not yet ado pted robotized automation as a solution to inspection and operation tasks. From (2010) points out the potential increase in efficiency and productivity with robot operators rather than humans, give n that robots work 24 hours per day and 7 days per week, ar less prone to errors, and are more reliable. Another hi ghlighted point is the improvement Health, Safety, and Environment ( HSE) conditions, as robots can replace humans in tasks perf orm d in unhealthy, hazardous, or confined areas. In the specific Brazilian case, the Oil & Gas industry is growing at a high pace, mainly due to the recent discove ries of big oil fields in the pre-salt layer off the Brazilian coast. These oil reservoirs are located farther than 300 km f ro the shore and at depths of 5000 to 7000 km. 
These factors, especially the la rg distances, motivate the development of an offshore produ cti n system with a high degree of automation based on advanced roboti cs systems.",
"title": ""
},
{
"docid": "b25b7100c035ad2953fb43087ede1625",
"text": "In this paper, a novel 10W substrate integrated waveguide (SIW) high power amplifier (HPA) designed with SIW matching network (MN) is presented. The SIW MN is connected with microstrip line using microstrip-to-SIW transition. An inductive metallized post in SIW is employed to realize impedance matching. At the fundamental frequency of 2.14 GHz, the impedance matching is realized by moving the position of the inductive metallized post in the SIW. Both the input and output MNs are designed with the proposed SIW-based MN concept. One SIW-based 10W HPA using GaN HEMT at 2.14 GHz is designed, fabricated, and measured. The proposed SIW-based HPA can be easily connected with any microstrip circuit with microstrip-to-SIW transition. Measured results show that the maximum power added efficiency (PAE) is 65.9 % with 39.8 dBm output power and the maximum gain is 20.1 dB with 30.9 dBm output power at 2.18 GHz. The size of the proposed SIW-based HPA is comparable with other microstrip-based PAs designed at the operating frequency.",
"title": ""
},
{
"docid": "249d835b11078e26bc406ae98e773df6",
"text": "This paper addresses the problem of simultaneous estimation of a vehicle's ego motion and motions of multiple moving objects in the scene-called eoru motions-through a monocular vehicle-mounted camera. Localization of multiple moving objects and estimation of their motions is crucial for autonomous vehicles. Conventional localization and mapping techniques (e.g., visual odometry and simultaneous localization and mapping) can only estimate the ego motion of the vehicle. The capability of a robot localization pipeline to deal with multiple motions has not been widely investigated in the literature. We present a theoretical framework for robust estimation of multiple relative motions in addition to the camera ego motion. First, the framework for general unconstrained motion is introduced and then it is adapted to exploit the vehicle kinematic constraints to increase efficiency. The method is based on projective factorization of the multiple-trajectory matrix. First, the ego motion is segmented and then several hypotheses are generated for the eoru motions. All the hypotheses are evaluated and the one with the smallest reprojection error is selected. The proposed framework does not need any a priori knowledge of the number of motions and is robust to noisy image measurements. The method with a constrained motion model is evaluated on a popular street-level image dataset collected in urban environments (the KITTI dataset), including several relative ego-motion and eoru-motion scenarios. A benchmark dataset (Hopkins 155) is used to evaluate this method with a general motion model. The results are compared with those of the state-of-the-art methods considering a similar problem, referred to as multibody structure from motion in the computer vision community.",
"title": ""
},
{
"docid": "83d711f1364fc63d87dc565b697b620d",
"text": "Despite decades of scientific study, the functional significance of the human female orgasm remains unsettled. Whereas male orgasm is usually coupled with ejaculation, there is no parallel association between women’s orgasm and a physiological process critical to reproduction. Indeed, even in a culture in which women’s orgasm was reportedly unknown, women managed to conceive without noticeable difficulty (Messenger 1971). It strikes many as curious that an event of such remarkable psychological import that it has been called la petite mort (“the little death”) would have no obvious reproductive function. This apparent paradox has inspired a number of scientists to offer hypotheses about the utility of the female orgasm, resulting in a heated and ongoing debate. As we discuss, some researchers have suggested that orgasm in women is a nonfunctional by-product of orgasm in men, whereas others suggest that women’s orgasm has been shaped by selection for its own function—in other words, that orgasm is an adaptation in women. In this chapter, we outline the debate between these viewpoints and review evidence for several functional hypotheses that are among the most plausible.",
"title": ""
},
{
"docid": "bda980d41e0b64ec7ec41502cada6e7f",
"text": "In this paper, we address semantic parsing in a multilingual context. We train one multilingual model that is capable of parsing natural language sentences from multiple different languages into their corresponding formal semantic representations. We extend an existing sequence-to-tree model to a multi-task learning framework which shares the decoder for generating semantic representations. We report evaluation results on the multilingual GeoQuery corpus and introduce a new multilingual version of the ATIS corpus.",
"title": ""
},
{
"docid": "57f3bb106406bf6a6f37dd7d7a8c7ef9",
"text": "Finding new uses for existing drugs, or drug repositioning, has been used as a strategy for decades to get drugs to more patients. As the ability to measure molecules in high-throughput ways has improved over the past decade, it is logical that such data might be useful for enabling drug repositioning through computational methods. Many computational predictions for new indications have been borne out in cellular model systems, though extensive animal model and clinical trial-based validation are still pending. In this review, we show that computational methods for drug repositioning can be classified in two axes: drug based, where discovery initiates from the chemical perspective, or disease based, where discovery initiates from the clinical perspective of disease or its pathology. Newer algorithms for computational drug repositioning will likely span these two axes, will take advantage of newer types of molecular measurements, and will certainly play a role in reducing the global burden of disease.",
"title": ""
},
{
"docid": "58c4c9bd2033645ece7db895d368cda6",
"text": "Nanorobotics is the technology of creating machines or robots of the size of few hundred nanometres and below consisting of components of nanoscale or molecular size. There is an all around development in nanotechnology towards realization of nanorobots in the last two decades. In the present work, the compilation of advancement in nanotechnology in context to nanorobots is done. The challenges and issues in movement of a nanorobot and innovations present in nature to overcome the difficulties in moving at nano-size regimes are discussed. The efficiency aspect in context to artificial nanorobot is also presented.",
"title": ""
},
{
"docid": "0814d93829261505cb88d33a73adc4e7",
"text": "Partial shading of a photovoltaic array is the condition under which different modules in the array experience different irradiance levels due to shading. This difference causes mismatch between the modules, leading to undesirable effects such as reduction in generated power and hot spots. The severity of these effects can be considerably reduced by photovoltaic array reconfiguration. This paper proposes a novel mathematical formulation for the optimal reconfiguration of photovoltaic arrays to minimize partial shading losses. The paper formulates the reconfiguration problem as a mixed integer quadratic programming problem and finds the optimal solution using a branch and bound algorithm. The proposed formulation can be used for an equal or nonequal number of modules per row. Moreover, it can be used for fully reconfigurable or partially reconfigurable arrays. The improvement resulting from the reconfiguration with respect to the existing photovoltaic interconnections is demonstrated by extensive simulation results.",
"title": ""
}
] |
scidocsrr
|
e0010e45735154c0088a1485a137db46
|
A scalability analysis of classifiers in text categorization
|
[
{
"docid": "c698f7d6b487cc7c87d7ff215d7f12b2",
"text": "This paper reports a controlled study with statistical signi cance tests on ve text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classi er, a neural network (NNet) approach, the Linear Leastsquares Fit (LLSF) mapping and a Naive Bayes (NB) classier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as function of the training-set category frequency. Our results show that SVM, kNN and LLSF signi cantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are su ciently common (over 300 instances).",
"title": ""
}
] |
[
{
"docid": "f8a1ba148f564f9dcc0c57873bb5ce60",
"text": "Advances in online technologies have raised new concerns about privacy. A sample of expert household end users was surveyed concerning privacy, risk perceptions, and online behavior intentions. A new e-privacy typology consisting of privacyaware, privacy-suspicious, and privacy-active types was developed from a principal component factor analysis. Results suggest the presence of a privacy hierarchy of effects where awareness leads to suspicion, which subsequently leads to active behavior. An important finding was that privacy-active behavior that was hypothesized to increase the likelihood of online subscription and purchasing was not found to be significant. A further finding was that perceived risk had a strong negative influence on the extent to which respondents participated in online subscription and purchasing. Based on these results, a number of implications for managers and directions for future research are discussed.",
"title": ""
},
{
"docid": "c5427ac777eaa3ecf25cb96a124eddfe",
"text": "One source of difficulties when processing outdoor images is the presence of haze, fog or smoke which fades the colors and reduces the contrast of the observed objects. We introduce a novel algorithm and variants for visibility restoration from a single image. The main advantage of the proposed algorithm compared with other is its speed: its complexity is a linear function of the number of image pixels only. This speed allows visibility restoration to be applied for the first time within real-time processing applications such as sign, lane-marking and obstacle detection from an in-vehicle camera. Another advantage is the possibility to handle both color images or gray level images since the ambiguity between the presence of fog and the objects with low color saturation is solved by assuming only small objects can have colors with low saturation. The algorithm is controlled only by a few parameters and consists in: atmospheric veil inference, image restoration and smoothing, tone mapping. A comparative study and quantitative evaluation is proposed with a few other state of the art algorithms which demonstrates that similar or better quality results are obtained. Finally, an application is presented to lane-marking extraction in gray level images, illustrating the interest of the approach.",
"title": ""
},
{
"docid": "f4d060cd114ffa2c028dada876fcb735",
"text": "Mutations of SALL1 related to spalt of Drosophila have been found to cause Townes-Brocks syndrome, suggesting a function of SALL1 for the development of anus, limbs, ears, and kidneys. No function is yet known for SALL2, another human spalt-like gene. The structure of SALL2 is different from SALL1 and all other vertebrate spalt-like genes described in mouse, Xenopus, and Medaka, suggesting that SALL2-like genes might also exist in other vertebrates. Consistent with this hypothesis, we isolated and characterized a SALL2 homologous mouse gene, Msal-2. In contrast to other vertebrate spalt-like genes both SALL2 and Msal-2 encode only three double zinc finger domains, the most carboxyterminal of which only distantly resembles spalt-like zinc fingers. The evolutionary conservation of SALL2/Msal-2 suggests that two lines of sal-like genes with presumably different functions arose from an early evolutionary duplication of a common ancestor gene. Msal-2 is expressed throughout embryonic development but also in adult tissues, predominantly in brain. However, the function of SALL2/Msal-2 still needs to be determined.",
"title": ""
},
{
"docid": "e7646a79b25b2968c3c5b668d0216aa6",
"text": "In this paper, an image retrieval methodology suited for search in large collections of heterogeneous images is presented. The proposed approach employs a fully unsupervised segmentation algorithm to divide images into regions. Low-level features describing the color, position, size and shape of the resulting regions are extracted and are automatically mapped to appropriate intermediatelevel descriptors forming a simple vocabulary termed object ontology. The object ontology is used to allow the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) in a human-centered fashion. When querying, clearly irrelevant image regions are rejected using the intermediate-level descriptors; following that, a relevance feedback mechanism employing the low-level features is invoked to produce the final query results. The proposed approach bridges the gap between keyword-based approaches, which assume the existence of rich image captions or require manual evaluation and annotation of every image of the collection, and query-by-example approaches, which assume that the user queries for images similar to one that already is at his disposal.",
"title": ""
},
{
"docid": "68c840dbfe505d735b389dd9ff7715d3",
"text": "A new design for single-feed dual-layer dual-band patch antenna with linear polarization is presented in this letter. The dual-band performance is achieved by E-shaped and U-slot patches. The proposed bands of the antenna are WLAN (2.40-2.4835 GHz) and WiMAX (3.40-3.61 GHz) bands. The fundamental modes of the two bands are TM01 mode, and the impedance bandwidths ( ) of 26.9% and 7.1% are achieved at central frequencies of 2.60 and 3.50 GHz. The peak gains of two different bands are 7.1 and 7.4 dBi, and good band isolation is achieved between the two bands. The advantages of the antenna are simple structure, wideband performance at low band, and high gains.",
"title": ""
},
{
"docid": "1eb43d21aa090151aef2ba722b6fc704",
"text": "This study was carried out to investigate pre-service teachers’ perceived ease of use, perceived usefulness, attitude and intentions towards the utilization of virtual laboratory package in teaching and learning of Nigerian secondary school physics concepts. Descriptive survey research was employed and 66 fourth and fifth year Physics education students were purposively used as research sample. Four research questions guided the study and a 16-item questionnaire was used as instrument for data collection. The questionnaire was validated by educational technology experts, physics expert and guidance and counselling experts. Pilot study was carried out on year three physics education students and a reliability coefficients ranging from 0.76 to 0.89 was obtained for each of the four sections of the questionnaire. Data collected from the administration of the research instruments were analyzed using descriptive statistics of Mean and Standard Deviation. A decision rule was set, in which, a mean score of 2.50 and above was considered Agreed while a mean score below 2.50 was considered Disagreed. Findings revealed that pre-service physics teachers perceived the virtual laboratory package easy to use and useful with mean scores of 3.18 and 3.34 respectively. Also, respondents’ attitude and intentions to use the package in teaching and learning of physics were positive with mean scores of 3.21 and 3.37 respectively. Based on these findings, it was recommended among others that administrators should equip schools with adequate Information and Communication Technology facilities that would aid students and teachers’ utilization of virtual-based learning environments in teaching and learning process.",
"title": ""
},
{
"docid": "5a73be1c8c24958779272a1190a3df20",
"text": "We study how contract element extraction can be automated. We provide a labeled dataset with gold contract element annotations, along with an unlabeled dataset of contracts that can be used to pre-train word embeddings. Both datasets are provided in an encoded form to bypass privacy issues. We describe and experimentally compare several contract element extraction methods that use manually written rules and linear classifiers (logistic regression, SVMs) with hand-crafted features, word embeddings, and part-of-speech tag embeddings. The best results are obtained by a hybrid method that combines machine learning (with hand-crafted features and embeddings) and manually written post-processing rules.",
"title": ""
},
{
"docid": "86d725fa86098d90e5e252c6f0aaab3c",
"text": "This paper illustrates the manner in which UML can be used to study mappings to different types of database systems. After introducing UML through a comparison to the EER model, UML diagrams are used to teach different approaches for mapping conceptual designs to the relational model. As we cover object-oriented and object-relational database systems, different features of UML are used over the same enterprise example to help students understand mapping alternatives for each model. Students are required to compare and contrast the mappings in each model as part of the learning process. For object-oriented and object-relational database systems, we address mappings to the ODMG and SQL99 standards in addition to specific commercial implementations.",
"title": ""
},
{
"docid": "9a7ef5c9f6ceca7a88d2351504404954",
"text": "In this paper, we propose a 3D HMM (Three-dimensional Hidden Markov Models) approach to recognizing human facial expressions and associated emotions. Human emotion is usually classified by psychologists into six categories: Happiness, Sadness, Anger, Fear, Disgust and Surprise. Further, psychologists categorize facial movements based on the muscles that produce those movements using a Facial Action Coding System (FACS). We look beyond pure muscle movements and investigate facial features – brow, mouth, nose, eye height and facial shape – as a means of determining associated emotions. Histogram of Optical Flow is used as the descriptor for extracting and describing the key features, while training and testing are performed on 3D Hidden Markov Models. Experiments on datasets show our approach is promising and robust.",
"title": ""
},
{
"docid": "f06e080b68b5c6d640e4745537610843",
"text": "Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. However, due to the size of knowledge bases, learning multi-step relations directly on top of observed triplets could be costly. Hence, a manually designed procedure is often used when training the models. In this paper, we propose Implicit ReasoNets (IRNs), which is designed to perform multi-step inference implicitly through a controller and shared memory. Without a human-designed inference procedure, IRNs use training data to learn to perform multi-step inference in an embedding neural space through the shared memory and controller. While the inference procedure does not explicitly operate on top of observed triplets, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.",
"title": ""
},
{
"docid": "f291c66ebaa6b24d858103b59de792b7",
"text": "In this study, the authors investigated the hypothesis that women's sexual orientation and sexual responses in the laboratory correlate less highly than do men's because women respond primarily to the sexual activities performed by actors, whereas men respond primarily to the gender of the actors. The participants were 20 homosexual women, 27 heterosexual women, 17 homosexual men, and 27 heterosexual men. The videotaped stimuli included men and women engaging in same-sex intercourse, solitary masturbation, or nude exercise (no sexual activity); human male-female copulation; and animal (bonobo chimpanzee or Pan paniscus) copulation. Genital and subjective sexual arousal were continuously recorded. The genital responses of both sexes were weakest to nude exercise and strongest to intercourse. As predicted, however, actor gender was more important for men than for women, and the level of sexual activity was more important for women than for men. Consistent with this result, women responded genitally to bonobo copulation, whereas men did not. An unexpected result was that homosexual women responded more to nude female targets exercising and masturbating than to nude male targets, whereas heterosexual women responded about the same to both sexes at each activity level.",
"title": ""
},
{
"docid": "b53c46bc41237333f68cf96208d0128c",
"text": "Practical pattern classi cation and knowledge discovery problems require selection of a subset of attributes or features (from a much larger set) to represent the patterns to be classi ed. This paper presents an approach to the multi-criteria optimization problem of feature subset selection using a genetic algorithm. Our experiments demonstrate the feasibility of this approach for feature subset selection in the automated design of neural networks for pattern classi cation and knowledge discovery.",
"title": ""
},
{
"docid": "822fdafcb1cec1c0f54e82fb79900ff3",
"text": "Chlorophyll fluorescence imaging was used to follow infections of Nicotiana benthamiana with the hemibiotrophic fungus, Colletotrichum orbiculare. Based on Fv/Fm images, infected leaves were divided into: healthy tissue with values similar to non-inoculated leaves; water-soaked/necrotic tissue with values near zero; and non-necrotic disease-affected tissue with intermediate values, which preceded or surrounded water-soaked/necrotic tissue. Quantification of Fv/Fm images showed that there were no changes until late in the biotrophic phase when spots of intermediate Fv/Fm appeared in visibly normal tissue. Those became water-soaked approx. 24 h later and then turned necrotic. Later in the necrotrophic phase, there was a rapid increase in affected and necrotic tissue followed by a slower increase as necrotic areas merged. Treatment with the induced systemic resistance activator, 2R, 3R-butanediol, delayed affected and necrotic tissue development by approx. 24 h. Also, the halo of affected tissue was narrower indicating that plant cells retained a higher photosystem II efficiency longer prior to death. While chlorophyll fluorescence imaging can reveal much about the physiology of infected plants, this study demonstrates that it is also a practical tool for quantifying hemibiotrophic fungal infections, including affected tissue that is appears normal visually but is damaged by infection.",
"title": ""
},
{
"docid": "28c8e13252ea46d888d4d9a4dedf61a5",
"text": "It is almost cliché to say that there has been an explosion in the amount of research on leadership in a cross-cultural context. In this review, we describe major advances and emerging patterns in this research domain over the last several years. Our starting point for this update is roughly 1996–1997, since those are the dates of two important reviews of the cross-cultural leadership literature [specifically, House, Wright, and Aditya (House, R. J., Wright, N. S., & Aditya, R. N. (1997). Cross-cultural research on organizational leadership: A critical analysis and a proposed theory. In: P. C. Earley, & M. Erez (Eds.), New perspectives on international industrial/organizational psychology (pp. 535–625). San Francisco, CA) and Dorfman (Dorfman, P. W. (1996). International and cross-cultural leadership research. In: B. J. Punnett, & O. Shenkar (Eds.), Handbook for international management research, pp. 267–349, Oxford, UK: Blackwell)]. We describe the beginnings of the decline in the quest for universal leadership principles that apply equivalently across all cultures, and we focus on the increasing application of the dimensions of culture identified by Hofstede [Hofstede, G. (1980). Culture’s consequences: International differences in work-related values (Abridged ed.). Newbury Park, CA: Sage] and others to describe variation in leadership styles, practices, and preferences. We also note the emergence of the field of cross-cultural leadership as a legitimate and independent field of endeavor, as reflected in the emergence of publication outlets for this research, and the establishment of long-term multinational multi-investigator research programs on the topic. We conclude with a discussion of progress made since the two pieces that were our departure point, and of progress yet to be made. D 2003 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "259e95c8d756f31408d30bbd7660eea3",
"text": "The capacity to identify cheaters is essential for maintaining balanced social relationships, yet humans have been shown to be generally poor deception detectors. In fact, a plethora of empirical findings holds that individuals are only slightly better than chance when discerning lies from truths. Here, we report 5 experiments showing that judges' ability to detect deception greatly increases after periods of unconscious processing. Specifically, judges who were kept from consciously deliberating outperformed judges who were encouraged to do so or who made a decision immediately; moreover, unconscious thinkers' detection accuracy was significantly above chance level. The reported experiments further show that this improvement comes about because unconscious thinking processes allow for integrating the particularly rich information basis necessary for accurate lie detection. These findings suggest that the human mind is not unfit to distinguish between truth and deception but that this ability resides in previously overlooked processes.",
"title": ""
},
{
"docid": "2a60bb7773d2e5458de88d2dc0e78e54",
"text": "Many system errors do not emerge unless some intricate sequence of events occurs. In practice, this means that most systems have errors that only trigger after days or weeks of execution. Model checking [4] is an effective way to find such subtle errors. It takes a simplified description of the code and exhaustively tests it on all inputs, using techniques to explore vast state spaces efficiently. Unfortunately, while model checking systems code would be wonderful, it is almost never done in practice: building models is just too hard. It can take significantly more time to write a model than it did to write the code. Furthermore, by checking an abstraction of the code rather than the code itself, it is easy to miss errors.The paper's first contribution is a new model checker, CMC, which checks C and C++ implementations directly, eliminating the need for a separate abstract description of the system behavior. This has two major advantages: it reduces the effort to use model checking, and it reduces missed errors as well as time-wasting false error reports resulting from inconsistencies between the abstract description and the actual implementation. In addition, changes in the implementation can be checked immediately without updating a high-level description.The paper's second contribution is demonstrating that CMC works well on real code by applying it to three implementations of the Ad-hoc On-demand Distance Vector (AODV) networking protocol [7]. We found 34 distinct errors (roughly one bug per 328 lines of code), including a bug in the AODV specification itself. Given our experience building systems, it appears that the approach will work well in other contexts, and especially well for other networking protocols.",
"title": ""
},
{
"docid": "59d57e31357eb72464607e89ba4ba265",
"text": "Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to address with the same shared set of physical resources a large user base with different needs. Thus, clouds promise to be for scientists an alternative to clusters, grids, and supercomputers. However, virtualization may induce significant performance penalties for the demanding scientific computing workloads. In this work we present an evaluation of the usefulness of the current cloud computing services for scientific computing. We analyze the performance of the Amazon EC2 platform using micro-benchmarks, kernels, and e-Science workloads. We also compare using long-term traces the performance characteristics and cost models of clouds with those of other platforms accessible to scientists. While clouds are still changing, our results indicate that the current cloud services need an order of magnitude in performance improvement to be useful to the scientific community. Wp 1 http://www.pds.ewi.tudelft.nl/∼iosup/ S. Ostermann et al. Wp Early Cloud Computing EvaluationWp PDS",
"title": ""
},
{
"docid": "17ab4797666afed3a37a8761fcbb0d1e",
"text": "In this paper, we propose a CPW fed triple band notch UWB antenna array with EBG structure. The major consideration in the antenna array design is the mutual coupling effect that exists within the elements. The use of Electromagnetic Band Gap structures in the antenna arrays can limit the coupling by suppresssing the surface waves. The triple band notch antenna consists of three slots which act as notch resonators for a specific band of frequencies, the C shape slot at the main radiator (WiMax-3.5GHz), a pair of CSRR structures at the ground plane(WLAN-5.8GHz) and an inverted U shaped slot in the center of the patch (Satellite Service bands-8.2GHz). The main objective is to reduce mutual coupling which in turn improves the peak realized gain, directivity.",
"title": ""
},
{
"docid": "859e5fda6de846a73c291dbe656d4137",
"text": "A platform to study ultrasound as a source for wireless energy transfer and communication for implanted medical devices is described. A tank is used as a container for a pair of electroacoustic transducers, where a control unit is fixed to one wall of the tank and a transponder can be manually moved in three axes and rotate using a mechanical system. The tank is filled with water to allow acoustic energy and data transfer, and the system is optimized to avoid parasitic effects due to cables, reflection paths and cross talk problems. A printed circuit board is developed to test energy scavenging such that enough acoustic intensity is generated by the control unit to recharge a battery loaded to the transponder. In the same manner, a second printed circuit board is fabricated to study transmission of information through acoustic waves.",
"title": ""
},
{
"docid": "065c12155991b38d36ec1e71cff60ce4",
"text": "The purpose of this chapter is to introduce, analyze, and compare the models of wheeled mobile robots (WMR) and to present several realizations and commonly encountered designs. The mobility of WMR is discussed on the basis of the kinematic constraints resulting from the pure rolling conditions at the contact points between the wheels and the ground. According to this discussion it is shown that, whatever the number and the types of the wheels, all WMR belong to only five generic classes. Different types of models are derived and compared: the posture model versus the configuration model, the kinematic model versus the dynamic model. The structural properties of these models are discussed and compared. These models as well as their properties constitute the background necessary for model-based control design. Practical robot structures are classified according to the number of wheels, and features are introduced focusing on commonly adopted designs. Omnimobile robots and articulated robots realizations are described in more detail.",
"title": ""
}
] |
scidocsrr
|
8b83b7be2115801005e3fd42ff9ec760
|
Music-evoked nostalgia: affect, memory, and personality.
|
[
{
"docid": "4b04a4892ef7c614b3bf270f308e6984",
"text": "One reason for the universal appeal of music lies in the emotional rewards that music offers to its listeners. But what makes these rewards so special? The authors addressed this question by progressively characterizing music-induced emotions in 4 interrelated studies. Studies 1 and 2 (n=354) were conducted to compile a list of music-relevant emotion terms and to study the frequency of both felt and perceived emotions across 5 groups of listeners with distinct music preferences. Emotional responses varied greatly according to musical genre and type of response (felt vs. perceived). Study 3 (n=801)--a field study carried out during a music festival--examined the structure of music-induced emotions via confirmatory factor analysis of emotion ratings, resulting in a 9-factorial model of music-induced emotions. Study 4 (n=238) replicated this model and found that it accounted for music-elicited emotions better than the basic emotion and dimensional emotion models. A domain-specific device to measure musically induced emotions is introduced--the Geneva Emotional Music Scale.",
"title": ""
}
] |
[
{
"docid": "3f50585a983c91575c38c52219091c63",
"text": "Most fingerprint matching systems are based on matching minutia points between two fingerprint images. Each minutia is represented by a fixed number of attributes such as the location, orientation, type and other local information. A hard decision is made on the match between a pair of minutiae based on the similarity of these attributes. In this paper, we present a minutiae matching algorithm that uses spatial correlation of regions around the minutiae to ascertain the quality of each minutia match. The proposed algorithm has two main advantages. Since the gray level values of the pixels around a minutia point retain most of the local information, spatial correlation provides an accurate measure of the similarity between minutia regions. Secondly, no hard decision is made on the correspondence between a minutia pair. Instead the quality of all the minutiae matches are accumulated to arrive at the final matching score between the template and query fingerprint impressions. Experiments on a database of 160 users (4 impressions per finger) indicate that the proposed algorithm serves well to complement the 2D dynamic programming based minutiae matching technique; a combination of these two methods can reduce the false non-match rate by approximately 3.5% at a false match rate of 0.1%.",
"title": ""
},
{
"docid": "e82d3eedc733d536c49a69856ad66e00",
"text": "Artificial neural networks, trained only on sample deals, without presentation of any human knowledge or even rules of the game, are used to estimate the number of tricks to be taken by one pair of bridge players in the so-called double dummy bridge problem (DDBP). Four representations of a deal in the input layer were tested leading to significant differences in achieved results. In order to test networks' abilities to extract knowledge from sample deals, experiments with additional inputs representing estimators of hand's strength used by humans were also performed. The superior network trained solely on sample deals outperformed all other architectures, including those using explicit human knowledge of the game of bridge. Considering the suit contracts, this network, in a sample of 100 000 testing deals, output a perfect answer in 53.11% of the cases and only in 3.52% of them was mistaken by more than one trick. The respective figures for notrump contracts were equal to 37.80% and 16.36%. The above results were compared with the ones obtained by 24 professional human bridge players-members of The Polish Bridge Union-on test sets of sizes between 27 and 864 deals per player (depending on player's time availability). In case of suit contracts, the perfect answer was obtained in 53.06% of the testing deals for ten upper-classified players and in 48.66% of them, for the remaining 14 participants of the experiment. For the notrump contracts, the respective figures were equal to 73.68% and 60.78%. Except for checking the ability of neural networks in solving the DDBP, the other goal of this research was to analyze connection weights in trained networks in a quest for weights' patterns that are explainable by experienced human bridge players. Quite surprisingly, several such patterns were discovered (e.g., preference for groups of honors, drawing special attention to Aces, favoring cards from a trump suit, gradual importance of cards in one suit-from two to the Ace, etc.). Both the numerical figures and weight patterns are stable and repeatable in a sample of neural architectures (differing only by randomly chosen initial weights). In summary, the piece of research described in this paper provides a detailed comparison between various data representations of the DDBP solved by neural networks. On a more general note, this approach can be extended to a certain class of binary classification problems.",
"title": ""
},
{
"docid": "22f61d8bab9ba3b89b9ce23d5ee2ef04",
"text": "Images of female scientists and engineers in popular$lms convey cultural and social assumptions about the role of women in science, engineering, and technology (SET). This study analyzed cultural representations of gender conveyed through images offemale scientists andengineers in popularjilms from 1991 to 2001. While many of these depictions of female scientists and engineers emphasized their appearance and focused on romance, most depictions also presented female scientists and engineers in professional positions of high status. Other images that showed the fernale scientists and engineers' interactions with male colleagues, ho~vevel; reinforced traditional social and cultural assumptions about the role of women in SET through overt and subtle forms of stereotyping. This article explores the sign$cance of thesejindings fordevelopingprograms to change girls'perceptions of scientists and engineers and attitudes toward SET careers.",
"title": ""
},
{
"docid": "a1fcf0d2b9a619c0a70b210c70cf4bfd",
"text": "This paper demonstrates a reliable navigation of a mobile robot in outdoor environment. We fuse differential GPS and odometry data using the framework of extended Kalman filter to localize a mobile robot. And also, we propose an algorithm to detect curbs through the laser range finder. An important feature of road environment is the existence of curbs. The mobile robot builds the map of the curbs of roads and the map is used for tracking and localization. The navigation system for the mobile robot consists of a mobile robot and a control station. The mobile robot sends the image data from a camera to the control station. The control station receives and displays the image data and the teleoperator commands the mobile robot based on the image data. Since the image data does not contain enough data for reliable navigation, a hybrid strategy for reliable mobile robot in outdoor environment is suggested. When the mobile robot is faced with unexpected obstacles or the situation that, if it follows the command, it can happen to collide, it sends a warning message to the teleoperator and changes the mode from teleoperated to autonomous to avoid the obstacles by itself. After avoiding the obstacles or the collision situation, the mode of the mobile robot is returned to teleoperated mode. We have been able to confirm that the appropriate change of navigation mode can help the teleoperator perform reliable navigation in outdoor environment through experiments in the road.",
"title": ""
},
{
"docid": "1bf462c3645458c0bd2e88c237a885f1",
"text": "OBJECTIVE\nUsing a new construct, job embeddedness, from the business management literature, this study first examines its value in predicting employee retention in a healthcare setting and second, assesses whether the factors that influence the retention of nurses are systematically different from those influencing other healthcare workers.\n\n\nBACKGROUND\nThe shortage of skilled healthcare workers makes it imperative that healthcare providers develop effective recruitment and retention plans. With nursing turnover averaging more than 20% a year and competition to hire new nurses fierce, many administrators rightly question whether they should develop specialized plans to recruit and retain nurses.\n\n\nMETHODS\nA longitudinal research design was employed to assess the predictive validity of the job embeddedness concept. At time 1, surveys were mailed to a random sample of 500 employees of a community-based hospital in the Northwest region of the United States. The survey assessed personal characteristics, job satisfaction, organizational commitment, job embeddedness, job search, perceived alternatives, and intent to leave. One year later (time 2) the organization provided data regarding voluntary leavers from the hospital.\n\n\nRESULTS\nHospital employees returned 232 surveys, yielding a response rate of 46.4 %. The results indicate that job embeddedness predicted turnover over and beyond a combination of perceived desirability of movement measures (job satisfaction, organizational commitment) and perceived ease of movement measures (job alternatives, job search). Thus, job embeddedness assesses new and meaningful variance in turnover in excess of that predicted by the major variables included in almost all the major models of turnover.\n\n\nCONCLUSIONS\nThe findings suggest that job embeddedness is a valuable lens through which to evaluate employee retention in healthcare organizations. Further, the levers for influencing retention are substantially similar for nurses and other healthcare workers. Implications of these findings and recommendations for recruitment and retention policy development are presented.",
"title": ""
},
{
"docid": "c694936a9b8f13654d06b72c077ed8f4",
"text": "Druid is an open source data store designed for real-time exploratory analytics on large data sets. The system combines a column-oriented storage layout, a distributed, shared-nothing architecture, and an advanced indexing structure to allow for the arbitrary exploration of billion-row tables with sub-second latencies. In this paper, we describe Druid’s architecture, and detail how it supports fast aggregations, flexible filters, and low latency data ingestion.",
"title": ""
},
{
"docid": "604362129b2ed5510750cc161cf54bbf",
"text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and speed are also important concerns. These are the two main characteristics that differentiate one encryption algorithm from another. This paper provides the performance comparison between four of the most commonly used encryption algorithms: DES(Data Encryption Standard), 3DES(Triple DES), BLOWFISH and AES (Rijndael). The comparison has been conducted by running several setting to process different sizes of data blocks to evaluate the algorithms encryption and decryption speed. Based on the performance analysis of these algorithms under different hardware and software platform, it has been concluded that the Blowfish is the best performing algorithm among the algorithms under the security against unauthorized attack and the speed is taken into consideration.",
"title": ""
},
{
"docid": "4d8335fa722e1851536182d5657ab738",
"text": "Location-aware mobile applications have become extremely common, with a recent wave of mobile dating applications that provide relatively sparse profiles to connect nearby individuals who may not know each other for immediate social or sexual encounters. These applications have become particularly popular among men who have sex with men (MSM) and raise a range of questions about self-presentation, visibility to others, and impression formation, as traditional geographic boundaries and social circles are crossed. In this paper we address two key questions around how people manage potentially stigmatized identities in using these apps and what types of information they use to self-present in the absence of a detailed profile or rich social cues. To do so, we draw on profile data observed in twelve locations on Grindr, a location-aware social application for MSM. Results suggest clear use of language to manage stigma associated with casual sex, and that users draw regularly on location information and other descriptive language to present concisely to others nearby.",
"title": ""
},
{
"docid": "81cc7e40bd2b2b13a026022148e3c7d1",
"text": "BACKGROUND\nThe long-term treatment of Parkinson disease (PD) may be complicated by the development of levodopa-induced dyskinesia. Clinical and animal model data support the view that modulation of cannabinoid function may exert an antidyskinetic effect. The authors conducted a randomized, double-blind, placebo-controlled crossover trial to examine the hypothesis that cannabis may have a beneficial effect on dyskinesia in PD.\n\n\nMETHODS\nA 4-week dose escalation study was performed to assess the safety and tolerability of cannabis in six PD patients with levodopa-induced dyskinesia. Then a randomized placebo-controlled crossover study (RCT) was performed, in which 19 PD patients were randomized to receive oral cannabis extract followed by placebo or vice versa. Each treatment phase lasted for 4 weeks with an intervening 2-week washout phase. The primary outcome measure was a change in Unified Parkinson's Disease Rating Scale (UPDRS) (items 32 to 34) dyskinesia score. Secondary outcome measures included the Rush scale, Bain scale, tablet arm drawing task, and total UPDRS score following a levodopa challenge, as well as patient-completed measures of a dyskinesia activities of daily living (ADL) scale, the PDQ-39, on-off diaries, and a range of category rating scales.\n\n\nRESULTS\nSeventeen patients completed the RCT. Cannabis was well tolerated, and had no pro- or antiparkinsonian action. There was no evidence for a treatment effect on levodopa-induced dyskinesia as assessed by the UPDRS, or any of the secondary outcome measures.\n\n\nCONCLUSIONS\nOrally administered cannabis extract resulted in no objective or subjective improvement in dyskinesias or parkinsonism.",
"title": ""
},
{
"docid": "349df0d3c48b6c1b6fcad1935f5e1e0a",
"text": "Automatic facial expression recognition has many potential applications in different areas of human computer interaction. However, they are not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry and analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in 8 directions at each pixel and encoding them into an 8 bit binary number using the relative strength of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost saving and classification accuracy. Two well-known machine learning methods, template matching and support vector machine, are used for classification using the Cohn-Kanade and Japanese female facial expression databases. Better classification accuracy shows the superiority of LDP descriptor against other appearance-based feature descriptors.",
"title": ""
},
{
"docid": "8056b29e7b39dee06f04b738807a53f9",
"text": "This paper proposes a novel topology of a multiport DC/DC converter composed of an H-bridge inverter, a high-frequency galvanic isolation transformer, and a combined circuit with a current-doubler and a buck chopper. The topology has lower conduction loss by multiple current paths and smaller output capacitors by means of an interleave operation. Results of computer simulations and experimental tests show proper operations and feasibility of the proposed strategy.",
"title": ""
},
{
"docid": "fe38de8c129845b86ee0ec4acf865c14",
"text": "0 7 4 0 7 4 5 9 / 0 2 / $ 1 7 . 0 0 © 2 0 0 2 I E E E McDonald’s develop product lines. But software product lines are a relatively new concept. They are rapidly emerging as a practical and important software development paradigm. A product line succeeds because companies can exploit their software products’ commonalities to achieve economies of production. The Software Engineering Institute’s (SEI) work has confirmed the benefits of pursuing this approach; it also found that doing so is both a technical and business decision. To succeed with software product lines, an organization must alter its technical practices, management practices, organizational structure and personnel, and business approach.",
"title": ""
},
{
"docid": "d1ad10c873fd5a02d1ce072b4ffc788c",
"text": "Zero-shot learning for visual recognition, e.g., object and action recognition, has recently attracted a lot of attention. However, it still remains challenging in bridging the semantic gap between visual features and their underlying semantics and transferring knowledge to semantic categories unseen during learning. Unlike most of the existing zero-shot visual recognition methods, we propose a stagewise bidirectional latent embedding framework of two subsequent learning stages for zero-shot visual recognition. In the bottom–up stage, a latent embedding space is first created by exploring the topological and labeling information underlying training data of known classes via a proper supervised subspace learning algorithm and the latent embedding of training data are used to form landmarks that guide embedding semantics underlying unseen classes into this learned latent space. In the top–down stage, semantic representations of unseen-class labels in a given label vocabulary are then embedded to the same latent space to preserve the semantic relatedness between all different classes via our proposed semi-supervised Sammon mapping with the guidance of landmarks. Thus, the resultant latent embedding space allows for predicting the label of a test instance with a simple nearest-neighbor rule. To evaluate the effectiveness of the proposed framework, we have conducted extensive experiments on four benchmark datasets in object and action recognition, i.e., AwA, CUB-200-2011, UCF101 and HMDB51. The experimental results under comparative studies demonstrate that our proposed approach yields the state-of-the-art performance under inductive and transductive settings.",
"title": ""
},
{
"docid": "223d5658dee7ba628b9746937aed9bb3",
"text": "A low-power receiver with a one-tap data and edge decision-feedback equalizer (DFE) and a clock recovery circuit is presented. The receiver employs analog adders for the tap-weight summation in both the data and the edge path to simultaneously optimize both the voltage and timing margins. A switched-capacitor input stage allows the receiver to be fully compatible with near-GND input levels without extra level conversion circuits. Furthermore, the critical path of the DFE is simplified to relax the timing margin. Fabricated in the 65-nm CMOS technology, a prototype DFE receiver shows that the data-path DFE extends the voltage and timing margins from 40 mVpp and 0.3 unit interval (UI), respectively, to 70 mVpp and 0.6 UI, respectively. Likewise, the edge-path equalizer reduces the uncertain sampling region (the edge region), which results in 17% reduction of the recovered clock jitter. The DFE core, including adders and samplers, consumes 1.1 mW from a 1.2-V supply while operating at 6.4 Gb/s.",
"title": ""
},
{
"docid": "54032bb625ea3c4bc8cd408c4f9f0324",
"text": "This study integrates an ecological perspective and trauma theory in proposing a model of the effects of domestic violence on women's parenting and children's adjustment. One hundred and twenty women and their children between the ages of 7 and 12 participated. Results supported an ecological model of the impact of domestic violence on women and children. The model predicted 40% of the variance in children's adjustment, 8% of parenting style, 43% of maternal psychological functioning, and 23% of marital satisfaction, using environmental factors such as social support, negative life events, and maternal history of child abuse. Overall, results support the ecological framework and trauma theory in understanding the effects of domestic violence on women and children. Rather than focusing on internal pathology, behavior is seen to exist on a continuum influenced heavily by the context in which the person is developing.",
"title": ""
},
{
"docid": "00b8207e783aed442fc56f7b350307f6",
"text": "A mathematical tool to build a fuzzy model of a system where fuzzy implications and reasoning are used is presented. The premise of an implication is the description of fuzzy subspace of inputs and its consequence is a linear input-output relation. The method of identification of a system using its input-output data is then shown. Two applications of the method to industrial processes are also discussed: a water cleaning process and a converter in a steel-making process.",
"title": ""
},
{
"docid": "5621d7df640dbe3d757ebb600486def9",
"text": "Dynamic spectrum access is the key to solving worldwide spectrum shortage. The open wireless medium subjects DSA systems to unauthorized spectrum use by illegitimate users. This paper presents SpecGuard, the first crowdsourced spectrum misuse detection framework for DSA systems. In SpecGuard, a transmitter is required to embed a spectrum permit into its physical-layer signals, which can be decoded and verified by ubiquitous mobile users. We propose three novel schemes for embedding and detecting a spectrum permit at the physical layer. Detailed theoretical analyses, MATLAB simulations, and USRP experiments confirm that our schemes can achieve correct, low-intrusive, and fast spectrum misuse detection.",
"title": ""
},
{
"docid": "37ba886ef73a8d35b4e9a4ae5dfa68bf",
"text": "Owe to the rapid development of deep neural network (DNN) techniques and the emergence of large scale face databases, face recognition has achieved a great success in recent years. During the training process of DNN, the face features and classification vectors to be learned will interact with each other, while the distribution of face features will largely affect the convergence status of network and the face similarity computing in test stage. In this work, we formulate jointly the learning of face features and classification vectors, and propose a simple yet effective centralized coordinate learning (CCL) method, which enforces the features to be dispersedly spanned in the coordinate space while ensuring the classification vectors to lie on a hypersphere. An adaptive angular margin is further proposed to enhance the discrimination capability of face features. Extensive experiments are conducted on six face benchmarks, including those have large age gap and hard negative samples. Trained only on the small-scale CASIA Webface dataset with 460K face images from about 10K subjects, our CCL model demonstrates high effectiveness and generality, showing consistently competitive performance across all the six benchmark databases.",
"title": ""
},
{
"docid": "0cf25d7f955a2eb7b015b4de91bb4524",
"text": "We describe the University of Maryland machine translation systems submitted to the IWSLT 2015 French-English and Vietnamese-English tasks. We built standard hierarchical phrase-based models, extended in two ways: (1) we applied novel data selection techniques to select relevant information from the large French-English training corpora, and (2) we experimented with neural language models. Our FrenchEnglish system compares favorably against the organizers’ baseline, while the Vietnamese-English one does not, indicating the difficulty of the translation scenario.",
"title": ""
}
] |
scidocsrr
|
2a67a80a255f0b73961353fcb760c567
|
A novel 24-GHz series-fed patch antenna array for radar system
|
[
{
"docid": "5e75a4ea83600736c601e46cb18aa2c9",
"text": "This paper deals with a low-cost 24GHz Doppler radar sensor for traffic surveillance. The basic building blocks of the transmit/receive chain, namely the antennas, the balanced power amplifier (PA), the dielectric resonator oscillator (DRO), the low noise amplifier (LNA) and the down conversion diode mixer are presented underlining the key technologies and manufacturing approaches by means the required performances can be attained while keeping industrial costs extremely low.",
"title": ""
}
] |
[
{
"docid": "e7811adcfb76f9d6ca458252909541fc",
"text": "Performing facial recognition between Near Infrared (NIR) and visible-light (VIS) images has been established as a common method of countering illumination variation problems in face recognition. In this paper we present a new database to enable the evaluation of cross-spectral face recognition. A series of preprocessing algorithms, followed by Local Binary Pattern Histogram (LBPH) representation and combinations with Linear Discriminant Analysis (LDA) are used for recognition. These experiments are conducted on both NIR→VIS and the less common VIS→NIR protocols, with permutations of uni-modal training sets. 12 individual baseline algorithms are presented. In addition, the best performing fusion approaches involving a subset of 12 algorithms are also described.",
"title": ""
},
{
"docid": "d33b6c231c9f8032b44581f0a14901df",
"text": "This article is focused on technical details for successfully reconstructing the nasal skin cover in parts or totally. Nasal reconstruction is based on the successful reconstruction of the inner lining and the nasal framework in three-layer defects. The details to be considered include planning the flap, subunit reconstruction and outline of margins, dealing with hair-bearing forehead skin, sequence of stages, intermediate debulking, details of pedicle dissection, brow reconstruction, forehead closure, forehead expansion, and complication management.",
"title": ""
},
{
"docid": "30eb03eca06dcc006a28b5e00431d9ed",
"text": "We present for the first time a μW-power convolutional neural network for seizure detection running on a low-power microcontroller. On a dataset of 22 patients a median sensitivity of 100% is achieved. With a false positive rate of 20.7 fp/h and a short detection delay of 3.4 s it is suitable for the application in an implantable closed-loop device.",
"title": ""
},
{
"docid": "39c0d4c998a81a5de43ff99646a67624",
"text": "Internet of Things (IoT) has recently emerged as an enabling technology for context-aware and interconnected “smart things.” Those smart things along with advanced power engineering and wireless communication technologies have realized the possibility of next generation electrical grid, smart grid, which allows users to deploy smart meters, monitoring their electric condition in real time. At the same time, increased environmental consciousness is driving electric companies to replace traditional generators with renewable energy sources which are already productive in user’s homes. One of the most incentive ways is for electric companies to institute electricity buying-back schemes to encourage end users to generate more renewable energy. Different from the previous works, we consider renewable energy buying-back schemes with dynamic pricing to achieve the goal of energy efficiency for smart grids. We formulate the dynamic pricing problem as a convex optimization dual problem and propose a day-ahead time-dependent pricing scheme in a distributed manner which provides increased user privacy. The proposed framework seeks to achieve maximum benefits for both users and electric companies. To our best knowledge, this is one of the first attempts to tackle the time-dependent problem for smart grids with consideration of environmental benefits of renewable energy. Numerical results show that our proposed framework can significantly reduce peak time loading and efficiently balance system energy distribution.",
"title": ""
},
{
"docid": "2f20bca0134eb1bd9d65c4791f94ddcc",
"text": "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.",
"title": ""
},
{
"docid": "40d46bc75d11b6d4139cb7a1267ac234",
"text": "10 Abstract This paper introduces the third generation of Pleated Pneumatic Artificial Muscles (PPAM), which has been developed to simplify the production over the first and second prototype. This type of artificial muscle was developed to overcome dry friction and material deformation, which is present in the widely used McKibben muscle. The essence of the PPAM is its pleated membrane structure which enables the 15 muscle to work at low pressures and at large contractions. In order to validate the new PPAM generation, it has been compared with the mathematical model and the previous generation. The new production process and the use of new materials introduce improvements such as 55% reduction in the actuator’s weight, a higher reliability, a 75% reduction in the production time and PPAMs can now be produced in all sizes from 4 to 50 cm. This opens the possibility to commercialize this type of muscles 20 so others can implement it. Furthermore, a comparison with experiments between PPAM and Festo McKibben muscles is discussed. Small PPAMs present similar force ranges and larger contractions than commercially available McKibben-like muscles. The use of series arrangements of PPAMs allows for large strokes and relatively small diameters at the same time and, since PPAM 3.0 is much more lightweight than the commong McKibben models made by Festo, it presents better force-to-mass and energy 25 to mass ratios than Festo models. 2012 Taylor & Francis and The Robotics Society of Japan",
"title": ""
},
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "7604fdb727d378f9a63e6c5f43772236",
"text": "In this paper, we propose a novel graph kernel specifically to address a challenging problem in the field of cyber-security, namely, malware detection. Previous research has revealed the following: (1) Graph representations of programs are ideally suited for malware detection as they are robust against several attacks, (2) Besides capturing topological neighbourhoods (i.e., structural information) from these graphs it is important to capture the context under which the neighbourhoods are reachable to accurately detect malicious neighbourhoods. We observe that state-of-the-art graph kernels, such as Weisfeiler-Lehman kernel (WLK) capture the structural information well but fail to capture contextual information. To address this, we develop the Contextual Weisfeiler-Lehman kernel (CWLK) which is capable of capturing both these types of information. We show that for the malware detection problem, CWLK is more expressive and hence more accurate than WLK while maintaining comparable efficiency. Through our largescale experiments with more than 50,000 real-world Android apps, we demonstrate that CWLK outperforms two state-of-the-art graph kernels (including WLK) and three malware detection techniques by more than 5.27% and 4.87% F-measure, respectively, while maintaining high efficiency. This high accuracy and efficiency make CWLK suitable for large-scale real-world malware detection.",
"title": ""
},
{
"docid": "f1325dd1350acf612dc1817db693a3d6",
"text": "Software for the measurement of genetic diversity (SMOGD) is a web-based application for the calculation of the recently proposed genetic diversity indices G'(ST) and D(est) . SMOGD includes bootstrapping functionality for estimating the variance, standard error and confidence intervals of estimated parameters, and SMOGD also generates genetic distance matrices from pairwise comparisons between populations. SMOGD accepts standard, multilocus Genepop and Arlequin formatted input files and produces HTML and tab-delimited output. This allows easy data submission, quick visualization, and rapid import of results into spreadsheet or database programs.",
"title": ""
},
{
"docid": "ac4b6ec32fe607e5e9981212152901f5",
"text": "As an important matrix factorization model, Nonnegative Matrix Factorization (NMF) has been widely used in information retrieval and data mining research. Standard Nonnegative Matrix Factorization is known to use the Frobenius norm to calculate the residual, making it sensitive to noises and outliers. It is desirable to use robust NMF models for practical applications, in which usually there are many data outliers. It has been studied that the 2,1, or 1-norm can be used for robust NMF formulations to deal with data outliers. However, these alternatives still suffer from the extreme data outliers. In this paper, we present a novel robust capped norm orthogonal Nonnegative Matrix Factorization model, which utilizes the capped norm for the objective to handle these extreme outliers. Meanwhile, we derive a new efficient optimization algorithm to solve the proposed non-convex non-smooth objective. Extensive experiments on both synthetic and real datasets show our proposed new robust NMF method consistently outperforms related approaches.",
"title": ""
},
{
"docid": "c58f8730f255bb7623fe8db5155ed93a",
"text": "In multimedia security, it is an important task to localize the tampered image regions. In this work, deep learning is used to solve this problem and the approach can be applied to multi-format images. Concretely, we use Stack Autoencoder to obtain the tampered image block features so that the forgery can be identified in a semi-automatic manner. Contextual information of image block is further integrated to improve the localization accuracy. The approach is tested on a benchmark dataset, with a 92.84% localization accuracy and a 0.9375 Area Under Curve (AUC) score. Compared to the state-of-the-art solutions for multi-format images, our solution has an over 40% AUC improvement and 5.7 times F1 improvement. The results also out-perform several approaches which are designed specifically for JPEG images by 41.12%∼63.08% in AUC and with a 4∼8 times better F1.",
"title": ""
},
{
"docid": "f7f6a525e69dd7a28bb7c4d056df8011",
"text": "This paper presents a 3D path planing algorithm for an unmanned aerial vehicle (UAV) operating in cluttered natural environments. The algorithm satisfies the upper bounded curvature constraint and the continuous curvature requirement. In this work greater attention is placed on the computational complexity in comparison with other path-planning considerations. The rapidly-exploring random trees (RRTs) algorithm is used for the generation of collision free waypoints. The unnecessary waypoints are removed by a simple path pruning algorithm generating a piecewise linear path. Then a path smoothing algorithm utilizing cubic Bezier spiral curves to generate a continuous curvature path that satisfies the minimum radius of curvature constraint of UAV is implemented. The angle between two waypoints is the only information required for the generation of the continuous curvature path. The result shows that the suggested algorithm is simple and easy to implement compared with the Clothoids method.",
"title": ""
},
{
"docid": "68a31c4830f71e7e94b90227d69b5a79",
"text": "For many primary storage customers, storage must balance the requirements for large capacity, high performance, and low cost. A well studied technique is to place a solid state drive (SSD) cache in front of hard disk drive (HDD) storage, which can achieve much of the performance benefit of SSDs and the cost per gigabyte efficiency of HDDs. To further lower the cost of SSD caches and increase effective capacity, we propose the addition of data reduction techniques. Our cache architecture, called Nitro, has three main contributions: (1) an SSD cache design with adjustable deduplication, compression, and large replacement units, (2) an evaluation of the trade-offs between data reduction, RAM requirements, SSD writes (reduced up to 53%, which improves lifespan), and storage performance, and (3) acceleration of two prototype storage systems with an increase in IOPS (up to 120%) and reduction of read response time (up to 55%) compared to an SSD cache without Nitro. Additional benefits of Nitro include improved random read performance, faster snapshot restore, and reduced writes to SSDs.",
"title": ""
},
{
"docid": "fc421a5ef2556b86c34d6f2bb4dc018e",
"text": "It's been over a decade now. We've forgotten how slow the adoption of consumer Internet commerce has been compared to other Internet growth metrics. And we're surprised when security scares like spyware and phishing result in lurches in consumer use.This paper re-visits an old theme, and finds that consumer marketing is still characterised by aggression and dominance, not sensitivity to customer needs. This conclusion is based on an examination of terms and privacy policy statements, which shows that businesses are confronting the people who buy from them with fixed, unyielding interfaces. Instead of generating trust, marketers prefer to wield power.These hard-headed approaches can work in a number of circumstances. Compelling content is one, but not everyone sells sex, gambling services, short-shelf-life news, and even shorter-shelf-life fashion goods. And, after decades of mass-media-conditioned consumer psychology research and experimentation, it's far from clear that advertising can convert everyone into salivating consumers who 'just have to have' products and services brand-linked to every new trend, especially if what you sell is groceries or handyman supplies.The thesis of this paper is that the one-dimensional, aggressive concept of B2C has long passed its use-by date. Trading is two-way -- consumers' attention, money and loyalty, in return for marketers' products and services, and vice versa.So B2C is conceptually wrong, and needs to be replaced by some buzzphrase that better conveys 'B-with-C' rather than 'to-C' and 'at-C'. Implementations of 'customised' services through 'portals' have to mature beyond data-mining-based manipulation to support two-sided relationships, and customer-managed profiles.It's all been said before, but now it's time to listen.",
"title": ""
},
{
"docid": "013174960fc2bd32ac658056f3a07e8a",
"text": "Early detection of ventricular fibrillation (VF) and rapid ventricular tachycardia (VT) is crucial for the success of the defibrillation therapy. A wide variety of detection algorithms have been proposed based on temporal, spectral, or complexity parameters extracted from the ECG. However, these algorithms are mostly constructed by considering each parameter individually. In this study, we present a novel life-threatening arrhythmias detection algorithm that combines a number of previously proposed ECG parameters by using support vector machines classifiers. A total of 13 parameters were computed accounting for temporal (morphological), spectral, and complexity features of the ECG signal. A filter-type feature selection (FS) procedure was proposed to analyze the relevance of the computed parameters and how they affect the detection performance. The proposed methodology was evaluated in two different binary detection scenarios: shockable (FV plus VT) versus nonshockable arrhythmias, and VF versus nonVF rhythms, using the information contained in the medical imaging technology database, the Creighton University ventricular tachycardia database, and the ventricular arrhythmia database. sensitivity (SE) and specificity (SP) analysis on the out of sample test data showed values of SE=95%, SP=99%, and SE=92% , SP=97% in the case of shockable and VF scenarios, respectively. Our algorithm was benchmarked against individual detection schemes, significantly improving their performance. Our results demonstrate that the combination of ECG parameters using statistical learning algorithms improves the efficiency for the detection of life-threatening arrhythmias.",
"title": ""
},
{
"docid": "29ccb1a24069d94c21b4e26b6ece0046",
"text": "The business market has been undergoing a paradigmatic change. The rise of the Internet, market fragmentation, and increasing global competition is changing the “value” that business marketers provide. This paradigmatic transformation requires changes in the way companies are organized to create and deliver value to their customers. Business marketers have to continuously increase their contribution to the value chain. If not, value migrates from a given business paradigm (e.g., minicomputers and DEC) to alternate business paradigms (e.g., flexibly manufactured PCs and Dell). This article focuses on ways in which business marketers are creating value in the Internet and digital age. Examples from business marketers are discussed and managerial implications are highlighted. © 2001 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "973334e5704c861bc917abf5c0f4d0a1",
"text": "Today e-commerce has become crucial element to transform some of the world countries into an information society. Business to consumer (B2C) in the developing countries is not yet a normalcy as compared to the developed countries. Consumer behaviour research has shown disappointing results regarding the overall use of the Web for online shopping, despite its considerable promise as a channel for commerce. As the use of the Internet continues to grow in all aspects of daily life, there is an increasing need to better understand what trends of internet usage and to study the barriers and problem of ecommerce adoption. Hence, the purpose of this research is to define how far Technology Acceptance Model (TAM) contributed in e-commerce adoption. Data for this study was collected by the means of a survey conducted in Malaysia in 2010. A total of 611 questionnaire forms were delivered to respondents. The location of respondents was within Penang state. By studying this sample, conclusions would be drawn to generalize the interests of the population.",
"title": ""
},
{
"docid": "258e931d5c8d94f73be41cbb0058f49b",
"text": "VerSum allows lightweight clients to outsource expensive computations over large and frequently changing data structures, such as the Bitcoin or Namecoin blockchains, or a Certificate Transparency log. VerSum clients ensure that the output is correct by comparing the outputs from multiple servers. VerSum assumes that at least one server is honest, and crucially, when servers disagree, VerSum uses an efficient conflict resolution protocol to determine which server(s) made a mistake and thus obtain the correct output.\n VerSum's contribution lies in achieving low server-side overhead for both incremental re-computation and conflict resolution, using three key ideas: (1) representing the computation as a functional program, which allows memoization of previous results; (2) recording the evaluation trace of the functional program in a carefully designed computation history to help clients determine which server made a mistake; and (3) introducing a new authenticated data structure for sequences, called SeqHash, that makes it efficient for servers to construct summaries of computation histories in the presence of incremental re-computation. Experimental results with an implementation of VerSum show that VerSum can be used for a variety of computations, that it can support many clients, and that it can easily keep up with Bitcoin's rate of new blocks with transactions.",
"title": ""
},
{
"docid": "a57e470ad16c025f6b0aae99de25f498",
"text": "Purpose To establish the efficacy and safety of botulinum toxin in the treatment of Crocodile Tear Syndrome and record any possible complications.Methods Four patients with unilateral aberrant VII cranial nerve regeneration following an episode of facial paralysis consented to be included in this study after a comprehensive explanation of the procedure and possible complications was given. On average, an injection of 20 units of botulinum toxin type A (Dysport®) was given to the affected lacrimal gland. The effect was assessed with a Schirmer’s test during taste stimulation. Careful recording of the duration of the effect and the presence of any local or systemic complications was made.Results All patients reported a partial or complete disappearance of the reflex hyperlacrimation following treatment. Schirmer’s tests during taste stimulation documented a significant decrease in tear secretion. The onset of effect of the botulinum toxin was typically 24–48 h after the initial injection and lasted 4–5 months. One patient had a mild increase in his preexisting upper lid ptosis, but no other local or systemic side effects were experienced.Conclusions The injection of botulinum toxin type A into the affected lacrimal glands of patients with gusto-lacrimal reflex is a simple, effective and safe treatment.",
"title": ""
},
{
"docid": "198bb9c6900396c2ba7678ed1635b5da",
"text": "In agriculture, detection and diagnosis of plant disease using digital image processing techniques focused on accurate segmentation of healthy and diseased tissue. Among various segmentation methods, the most widely used semiautomatic segmentation is based on gray scale histogram. In a novel semi-automatic segmentation process, the edges were removed along with pixels and then color conversion was done. After color conversion, pixel value adjustments and contrast enhancement of an image were performed to improve the image quality. Histogram with 100 bins was constructed for recognizing the diseased tissue from the healthier part of a leaf image. At last, segmentation of diseased leaf was found based on the histogram bins. Such bins were found manually which is not easy for all cases. Moreover, detection accuracy was reduced the quality by the influence of reflection light and distortion regions in an acquired image. Hence reflection light and distortion from image were removed using Quality Assessment Method Scheme (QAMS) algorithm. For automatic separation of diseased part from the healthier regions in a leaf image an optimization algorithm is required. To automatically define the histogram bins and separate diseased part from the healthier regions in a leaf image the Convolutional Neural Networks (CNN) algorithm used. After segmenting the diseased leaf image, the classification is done by Support Vector Machine (SVM) to detect the leaf diseases. The method provides better detection accuracy and computational time is reduced. Keywords-CNN, Diseases detection, Distortion removal, Reflection light, SVM.",
"title": ""
}
] |
scidocsrr
|
09ac51c093547175df6b553cc17f7670
|
Drivable Road Detection with 3D Point Clouds Based on the MRF for Intelligent Vehicle
|
[
{
"docid": "3bc9e621a0cfa7b8791ae3fb94eff738",
"text": "This paper deals with environment perception for automobile applications. Environment perception comprises measuring the surrounding field with onboard sensors such as cameras, radar, lidars, etc., and signal processing to extract relevant information for the planned safety or assistance function. Relevant information is primarily supplied using two well-known methods, namely, object based and grid based. In the introduction, we discuss the advantages and disadvantages of the two methods and subsequently present an approach that combines the two methods to achieve better results. The first part outlines how measurements from stereo sensors can be mapped onto an occupancy grid using an appropriate inverse sensor model. We employ the Dempster-Shafer theory to describe the occupancy grid, which has certain advantages over Bayes' theorem. Furthermore, we generate clusters of grid cells that potentially belong to separate obstacles in the field. These clusters serve as input for an object-tracking framework implemented with an interacting multiple-model estimator. Thereby, moving objects in the field can be identified, and this, in turn, helps update the occupancy grid more effectively. The first experimental results are illustrated, and the next possible research intentions are also discussed.",
"title": ""
}
] |
[
{
"docid": "be43b90cce9638b0af1c3143b6d65221",
"text": "Reasoning on provenance information and property propagation is of significant importance in e-science since it helps scientists manage derived metadata in order to understand the source of an object, reproduce results of processes and facilitate quality control of results and processes. In this paper we introduce a simple, yet powerful reasoning mechanism based on property propagation along the transitive part-of and derivation chains, in order to trace the provenance of an object and to carry useful inferences. We apply our reasoning in semantic repositories using the CIDOC-CRM conceptual schema and its extension CRMdig, which has been develop for representing the digital and empirical provenance of digi-",
"title": ""
},
{
"docid": "e85b761664a01273a10819566699bf4f",
"text": "Julius Bernstein belonged to the Berlin school of “organic physicists” who played a prominent role in creating modern physiology and biophysics during the second half of the nineteenth century. He trained under du Bois-Reymond in Berlin, worked with von Helmholtz in Heidelberg, and finally became Professor of Physiology at the University of Halle. Nowadays his name is primarily associated with two discoveries: (1) The first accurate description of the action potential in 1868. He developed a new instrument, a differential rheotome (= current slicer) that allowed him to resolve the exact time course of electrical activity in nerve and muscle and to measure its conduction velocity. (2) His ‘Membrane Theory of Electrical Potentials’ in biological cells and tissues. This theory, published by Bernstein in 1902, provided the first plausible physico-chemical model of bioelectric events; its fundamental concepts remain valid to this day. Bernstein pursued an intense and long-range program of research in which he achieved a new level of precision and refinement by formulating quantitative theories supported by exact measurements. The innovative design and application of his electromechanical instruments were milestones in the development of biomedical engineering techniques. His seminal work prepared the ground for hypotheses and experiments on the conduction of the nervous impulse and ultimately the transmission of information in the nervous system. Shortly after his retirement, Bernstein (1912) summarized his electrophysiological work and extended his theoretical concepts in a book Elektrobiologie that became a classic in its field. The Bernstein Centers for Computational Neuroscience recently established at several universities in Germany were named to honor the person and his work.",
"title": ""
},
{
"docid": "82a3fe6dfa81e425eb3aa3404799e72d",
"text": "ABSTRACT: Nonlinear control problem for a missile autopilot is quick adaptation and minimizing the desired acceleration to missile nonlinear model. For this several missile controllers are provided which are on the basis of nonlinear control or design of linear control for the linear missile system. In this paper a linear control of dynamic matrix type is proposed for the linear model of missile. In the first section, an approximate two degrees of freedom missile model, known as Horton model, is introduced. Then, the nonlinear model is converted into observable and controllable model base on the feedback linear rule of input-state mode type. Finally for design of control model, the dynamic matrix flight control, which is one of the linear predictive control design methods on the basis of system step response information, is used. This controller is a recursive method which calculates the development of system input by definition and optimization of a cost function and using system dynamic matrix. So based on the applied inputs and previous output information, the missile acceleration would be calculated. Unlike other controllers, this controller doesn’t require an interaction effect and accurate model. Although, it has predicting and controlling horizon, there isn’t such horizons in non-predictive methods.",
"title": ""
},
{
"docid": "c966c67c098e8178e6c05b6d446f6dd3",
"text": "Data are today an asset more critical than ever for all organizations we may think of. Recent advances and trends, such as sensor systems, IoT, cloud computing, and data analytics, are making possible to pervasively, efficiently, and effectively collect data. However for data to be used to their full power, data security and privacy are critical. Even though data security and privacy have been widely investigated over the past thirty years, today we face new difficult data security and privacy challenges. Some of those challenges arise from increasing privacy concerns with respect to the use of data and from the need of reconciling privacy with the use of data for security in applications such as homeland protection, counterterrorism, and health, food and water security. Other challenges arise because the deployments of new data collection and processing devices, such as those used in IoT systems, increase the data attack surface. In this paper, we discuss relevant concepts and approaches for data security and privacy, and identify research challenges that must be addressed by comprehensive solutions to data security and privacy.",
"title": ""
},
{
"docid": "c1a76ba2114ec856320651489ee9b28b",
"text": "The boost of available digital media has led to a significant increase in derivative work. With tools for manipulating objects becoming more and more mature, it can be very difficult to determine whether one piece of media was derived from another one or tampered with. As derivations can be done with malicious intent, there is an urgent need for reliable and easily usable tampering detection methods. However, even media considered semantically untampered by humans might have already undergone compression steps or light post-processing, making automated detection of tampering susceptible to false positives. In this paper, we present the PSBattles dataset which is gathered from a large community of image manipulation enthusiasts and provides a basis for media derivation and manipulation detection in the visual domain. The dataset consists of 102’028 images grouped into 11’142 subsets, each containing the original image as well as a varying number of manipulated derivatives.",
"title": ""
},
{
"docid": "e54c308623cb2a2f97e3075e572fdadb",
"text": "Augmented Reality is becoming increasingly popular. The success of a platform is typically observed by measuring the health of the software ecosystem surrounding it. In this paper, we take a closer look at the Vuforia ecosystem’s health by mining the Vuforia platform application repository. It is observed that the developer ecosystem is the strength of the platform. We also determine that Vuforia could be the biggest player in the market if they lay its focus on specific types of app",
"title": ""
},
{
"docid": "049a7164a973fb515ed033ba216ec344",
"text": "Modern vehicle fleets, e.g., for ridesharing platforms and taxi companies, can reduce passengers' waiting times by proactively dispatching vehicles to locations where pickup requests are anticipated in the future. Yet it is unclear how to best do this: optimal dispatching requires optimizing over several sources of uncertainty, including vehicles' travel times to their dispatched locations, as well as coordinating between vehicles so that they do not attempt to pick up the same passenger. While prior works have developed models for this uncertainty and used them to optimize dispatch policies, in this work we introduce a model-free approach. Specifically, we propose MOVI, a Deep Q-network (DQN)-based framework that directly learns the optimal vehicle dispatch policy. Since DQNs scale poorly with a large number of possible dispatches, we streamline our DQN training and suppose that each individual vehicle independently learns its own optimal policy, ensuring scalability at the cost of less coordination between vehicles. We then formulate a centralized receding-horizon control (RHC) policy to compare with our DQN policies. To compare these policies, we design and build MOVI as a large-scale realistic simulator based on 15 million taxi trip records that simulates policy-agnostic responses to dispatch decisions. We show that the DQN dispatch policy reduces the number of unserviced requests by 76% compared to without dispatch and 20% compared to the RHC approach, emphasizing the benefits of a model-free approach and suggesting that there is limited value to coordinating vehicle actions. This finding may help to explain the success of ridesharing platforms, for which drivers make individual decisions.",
"title": ""
},
{
"docid": "18d28769691fb87a6ebad5aae3eae078",
"text": "The current head Injury Assessment Reference Values (IARVs) for the child dummies are based in part on scaling adult and animal data and on reconstructions of real world accident scenarios. Reconstruction of well-documented accident scenarios provides critical data in the evaluation of proposed IARV values, but relatively few accidents are sufficiently documented to allow for accurate reconstructions. This reconstruction of a well documented fatal-fall involving a 23-month old child supplies additional data for IARV assessment. The videotaped fatal-fall resulted in a frontal head impact onto a carpet-covered cement floor. The child suffered an acute right temporal parietal subdural hematoma without skull fracture. The fall dynamics were reconstructed in the laboratory and the head linear and angular accelerations were quantified using the CRABI-18 Anthropomorphic Test Device (ATD). Peak linear acceleration was 125 ± 7 g (range 114-139), HIC15 was 335 ± 115 (Range 257-616), peak angular velocity was 57± 16 (Range 26-74), and peak angular acceleration was 32 ± 12 krad/s 2 (Range 15-56). The results of the CRABI-18 fatal fall reconstruction were consistent with the linear and rotational tolerances reported in the literature. This study investigates the usefulness of the CRABI-18 anthropomorphic testing device in forensic investigations of child head injury and aids in the evaluation of proposed IARVs for head injury. INTRODUCTION Defining the mechanisms of injury and the associated tolerance of the pediatric head to trauma has been the focus of a great deal of research and effort. In contrast to the multiple cadaver experimental studies of adult head trauma published in the literature, there exist only a few experimental studies of infant head injury using human pediatric cadaveric tissue [1-6]. While these few studies have been very informative, due to limitations in sample size, experimental equipment, and study objectives, current estimates of the tolerance of the pediatric head are based on relatively few pediatric cadaver data points combined with the use of scaled adult and animal data. In effort to assess and refine these tolerance estimates, a number of researchers have performed detailed accident reconstructions of well-documented injury scenarios [7-11] . The reliability of the reconstruction data are predicated on the ability to accurately reconstruct the actual accident and quantify the result in a useful injury metric(s). These resulting injury metrics can then be related to the injuries of the child and this, when combined with other reliable reconstructions, can form an important component in evaluating pediatric injury mechanisms and tolerance. Due to limitations in case identification, data collection, and resources, relatively few reconstructions of pediatric accidents have been performed. In this study, we report the results of the reconstruction of an uncharacteristically well documented fall resulting in a fatal head injury of a 23 month old child. The case study was previously reported as case #5 by Plunkett [12]. BACKGROUND As reported by Plunkett (2001), A 23-month-old was playing on a plastic gym set in the garage at her home with her older brother. She had climbed the attached ladder to the top rail above the platform and was straddling the rail, with her feet 0.70 meters (28 inches) above the floor. She lost her balance and fell headfirst onto a 1-cm (3⁄8-inch) thick piece of plush carpet remnant covering the concrete floor. 
She struck the carpet first with her outstretched hands, then with the right front side of her forehead, followed by her right shoulder. Her grandmother had been watching the children play and videotaped the fall. She cried after the fall but was alert",
"title": ""
},
{
"docid": "8d4288ddbdee91e934e6a98734285d1a",
"text": "Find loads of the designing social interfaces principles patterns and practices for improving the user experience book catalogues in this site as the choice of you visiting this page. You can also join to the website book library that will show you numerous books from any types. Literature, science, politics, and many more catalogues are presented to offer you the best book to find. The book that really makes you feels satisfied. Or that's the book that will save you from your job deadline.",
"title": ""
},
{
"docid": "160e06b33d6db64f38480c62989908fb",
"text": "A theoretical and experimental study has been performed on a low-profile, 2.4-GHz dipole antenna that uses a frequency-selective surface (FSS) with varactor-tuned unit cells. The tunable unit cell is a square patch with a small aperture on either side to accommodate the varactor diodes. The varactors are placed only along one dimension to avoid the use of vias and simplify the dc bias network. An analytical circuit model for this type of electrically asymmetric unit cell is shown. The measured data demonstrate tunability from 2.15 to 2.63 GHz with peak gains at broadside that range from 3.7- to 5-dBi and instantaneous bandwidths of 50 to 280 MHz within the tuning range. It is shown that tuning for optimum performance in the presence of a human-core body phantom can be achieved. The total antenna thickness is approximately λ/45.",
"title": ""
},
{
"docid": "572867885a16afc0af6a8ed92632a2a7",
"text": "We present an Efficient Log-based Troubleshooting(ELT) system for cloud computing infrastructures. ELT adopts a novel hybrid log mining approach that combines coarse-grained and fine-grained log features to achieve both high accuracy and low overhead. Moreover, ELT can automatically extract key log messages and perform invariant checking to greatly simplify the troubleshooting task for the system administrator. We have implemented a prototype of the ELT system and conducted an extensive experimental study using real management console logs of a production cloud system and a Hadoop cluster. Our experimental results show that ELT can achieve more efficient and powerful troubleshooting support than existing schemes. More importantly, ELT can find software bugs that cannot be detected by current cloud system management practice.",
"title": ""
},
{
"docid": "0c43c0dbeaff9afa0e73bddb31c7dac0",
"text": "A compact dual-band dielectric resonator antenna (DRA) using a parasitic c-slot fed by a microstrip line is proposed. In this configuration, the DR performs the functions of an effective radiator and the feeding structure of the parasitic c-slot in the ground plane. By optimizing the proposed structure parameters, the structure resonates at two different frequencies. One is from the DRA with the broadside patterns and the other from the c-slot with the dipole-like patterns. In order to determine the performance of varying design parameters on bandwidth and resonance frequency, the parametric study is carried out using simulation software High-Frequency Structure Simulator and experimental results. The measured and simulated results show excellent agreement.",
"title": ""
},
{
"docid": "1465b6c38296dfc46f8725dca5179cf1",
"text": "A brief introduction is given to the actual mechanics of simulated annealing, and a simple example from an IC layout is used to illustrate how these ideas can be applied. The complexities and tradeoffs involved in attacking a realistically complex design problem are illustrated by dissecting two very different annealing algorithms for VLSI chip floorplanning. Several current research problems aimed at determining more precisely how and why annealing algorithms work are examined. Some philosophical issues raised by the introduction of annealing are discussed.<<ETX>>",
"title": ""
},
{
"docid": "e72c88990ad5778eea9ce6dabace4326",
"text": "Studies in humans and rodents have suggested that behavior can at times be \"goal-directed\"-that is, planned, and purposeful-and at times \"habitual\"-that is, inflexible and automatically evoked by stimuli. This distinction is central to conceptions of pathological compulsion, as in drug abuse and obsessive-compulsive disorder. Evidence for the distinction has primarily come from outcome devaluation studies, in which the sensitivity of a previously learned behavior to motivational change is used to assay the dominance of habits versus goal-directed actions. However, little is known about how habits and goal-directed control arise. Specifically, in the present study we sought to reveal the trial-by-trial dynamics of instrumental learning that would promote, and protect against, developing habits. In two complementary experiments with independent samples, participants completed a sequential decision task that dissociated two computational-learning mechanisms, model-based and model-free. We then tested for habits by devaluing one of the rewards that had reinforced behavior. In each case, we found that individual differences in model-based learning predicted the participants' subsequent sensitivity to outcome devaluation, suggesting that an associative mechanism underlies a bias toward habit formation in healthy individuals.",
"title": ""
},
{
"docid": "cc5fae51afaac0119e3cac1cbdae722e",
"text": "The healthcare organization (hospitals, medical centers) should provide quality services at affordable costs. Quality of service implies diagnosing patients accurately and suggesting treatments that are effective. To achieve a correct and cost effective treatment, computer-based information and/or decision support Systems can be developed to full-fill the task. The generated information systems typically consist of large amount of data. Health care organizations must have ability to analyze these data. The Health care system includes data such as resource management, patient centric and transformed data. Data mining techniques are used to explore, analyze and extract these data using complex algorithms in order to discover unknown patterns. Many data mining techniques have been used in the diagnosis of heart disease with good accuracy. Neural Networks have shown great potential to be applied in the development of prediction system for various type of heart disease. This paper investigates the benefits and overhead of various neural network models for heart disease prediction.",
"title": ""
},
{
"docid": "a354f6c1d6411e4dec02031561c93ebd",
"text": "An operating system (OS) kernel is a critical software regarding to reliability and efficiency. Quality of modern OS kernels is already high enough. However, this is not the case for kernel modules, like, for example, device drivers that, due to various reasons, have a significantly lower level of quality. One of the most critical and widespread bugs in kernel modules are violations of rules for correct usage of a kernel API. One can find all such violations in modules or can prove their correctness using static verification tools that need contract specifications describing obligations of a kernel and modules relative to each other. This paper considers present methods and toolsets for static verification of kernel modules for different OSs. A new method for static verification of Linux kernel modules is proposed. This method allows one to configure the verification process at all its stages. It is shown how it can be adapted for checking kernel components of other OSs. An architecture of a configurable toolset for static verification of Linux kernel modules that implements the proposed method is described, and results of its practical application are presented. Directions for further development of the proposed method are discussed in conclusion.",
"title": ""
},
{
"docid": "8c29241ff4fd2f7c01043307a10c1726",
"text": "We are experiencing an abundance of Internet-of-Things (IoT) middleware solutions that provide connectivity for sensors and actuators to the Internet. To gain a widespread adoption, these middleware solutions, referred to as platforms, have to meet the expectations of different players in the IoT ecosystem, including device providers, application developers, and end-users, among others. In this article, we evaluate a representative sample of these platforms, both proprietary and open-source, on the basis of their ability to meet the expectations of different IoT users. The evaluation is thus more focused on how ready and usable these platforms are for IoT ecosystem players, rather than on the peculiarities of the underlying technological layers. The evaluation is carried out as a gap analysis of the current IoT landscape with respect to (i) the support for heterogeneous sensing and actuating technologies, (ii) the data ownership and its implications for security and privacy, (iii) data processing and data sharing capabilities, (iv) the support offered to application developers, (v) the completeness of an IoT ecosystem, and (vi) the availability of dedicated IoT marketplaces. The gap analysis aims to highlight the deficiencies of today’s solutions to improve their integration to tomorrow’s ecosystems. In order to strengthen the finding of our analysis, we conducted a survey among the partners of the Finnish IoT program, counting over 350 experts, to evaluate the most critical issues for the development of future IoT platforms. Based on the results of our analysis and our survey, we conclude this article with a list of recommendations for extending these IoT platforms in order to fill in the gaps.",
"title": ""
},
{
"docid": "9e4b7e87229dfb02c2600350899049be",
"text": "This paper presents an efficient and reliable swarm intelligence-based approach, namely elitist-mutated particle swarm optimization EMPSO technique, to derive reservoir operation policies for multipurpose reservoir systems. Particle swarm optimizers are inherently distributed algorithms, in which the solution for a problem emerges from the interactions between many simple individuals called particles. In this study the standard particle swarm optimization PSO algorithm is further improved by incorporating a new strategic mechanism called elitist-mutation to improve its performance. The proposed approach is first tested on a hypothetical multireservoir system, used by earlier researchers. EMPSO showed promising results, when compared with other techniques. To show practical utility, EMPSO is then applied to a realistic case study, the Bhadra reservoir system in India, which serves multiple purposes, namely irrigation and hydropower generation. To handle multiple objectives of the problem, a weighted approach is adopted. The results obtained demonstrate that EMPSO is consistently performing better than the standard PSO and genetic algorithm techniques. It is seen that EMPSO is yielding better quality solutions with less number of function evaluations. DOI: 10.1061/ ASCE 0733-9496 2007 133:3 192 CE Database subject headings: Reservoir operation; Optimization; Irrigation; Hydroelectric power generation.",
"title": ""
},
{
"docid": "11355807aa6b24f2eade366f391f0338",
"text": "Object detectors have hugely profited from moving towards an end-to-end learning paradigm: proposals, fea tures, and the classifier becoming one neural network improved results two-fold on general object detection. One indispensable component is non-maximum suppression (NMS), a post-processing algorithm responsible for merging all detections that belong to the same object. The de facto standard NMS algorithm is still fully hand-crafted, suspiciously simple, and — being based on greedy clustering with a fixed distance threshold — forces a trade-off between recall and precision. We propose a new network architecture designed to perform NMS, using only boxes and their score. We report experiments for person detection on PETS and for general object categories on the COCO dataset. Our approach shows promise providing improved localization and occlusion handling.",
"title": ""
},
{
"docid": "d8fc658756c4dd826b90a7e126e2e44d",
"text": "Knowledge graph embedding refers to projecting entities and relations in knowledge graph into continuous vector spaces. State-of-the-art methods, such as TransE, TransH, and TransR build embeddings by treating relation as translation from head entity to tail entity. However, previous models can not deal with reflexive/one-to-many/manyto-one/many-to-many relations properly, or lack of scalability and efficiency. Thus, we propose a novel method, flexible translation, named TransF, to address the above issues. TransF regards relation as translation between head entity vector and tail entity vector with flexible magnitude. To evaluate the proposed model, we conduct link prediction and triple classification on benchmark datasets. Experimental results show that our method remarkably improve the performance compared with several state-of-the-art baselines.",
"title": ""
}
] |
scidocsrr
|
31e27d53a3fe6dfbe288783e4d26c06c
|
Enterprise Cloud Service Architecture
|
[
{
"docid": "84cb130679353dbdeff24100409f57fe",
"text": "Cloud computing has become another buzzword after Web 2.0. However, there are dozens of different definitions for cloud computing and there seems to be no consensus on what a cloud is. On the other hand, cloud computing is not a completely new concept; it has intricate connection to the relatively new but thirteen-year established grid computing paradigm, and other relevant technologies such as utility computing, cluster computing, and distributed systems in general. This paper strives to compare and contrast cloud computing with grid computing from various angles and give insights into the essential characteristics of both.",
"title": ""
}
] |
[
{
"docid": "6d6e3b9ae698aca9981dc3b6dfb11985",
"text": "Several recent papers have tried to address the genetic determination of eye colour via microsatellite linkage, testing of pigmentation candidate gene polymorphisms and the genome wide analysis of SNP markers that are informative for ancestry. These studies show that the OCA2 gene on chromosome 15 is the major determinant of brown and/or blue eye colour but also indicate that other loci will be involved in the broad range of hues seen in this trait in Europeans.",
"title": ""
},
{
"docid": "4c5dd43f350955b283f1a04ddab52d41",
"text": "This thesis deals with interaction design for a class of upcoming computer technologies for human use characterized by being different from traditional desktop computers in their physical appearance and the contexts in which they are used. These are typically referred to as emerging technologies. Emerging technologies often imply interaction dissimilar from how computers are usually operated. This challenges the scope and applicability of existing knowledge about human-computer interaction design. The thesis focuses on three specific technologies: virtual reality, augmented reality and mobile computer systems. For these technologies, five themes are addressed: current focus of research, concepts, interaction styles, methods and tools. These themes inform three research questions, which guide the conducted research. The thesis consists of five published research papers and a summary. In the summary, current focus of research is addressed from the perspective of research methods and research purpose. Furthermore, the notions of human-computer interaction design and emerging technologies are discussed and two central distinctions are introduced. Firstly, interaction design is divided into two categories with focus on systems and processes respectively. Secondly, the three studied emerging technologies are viewed in relation to immersion into virtual space and mobility in physical space. These distinctions are used to relate the five paper contributions, each addressing one of the three studied technologies with focus on properties of systems or the process of creating them respectively. Three empirical sources contribute to the results. Experiments with interaction design inform the development of concepts and interaction styles suitable for virtual reality, augmented reality and mobile computer systems. Experiments with designing interaction inform understanding of how methods and tools support design processes for these technologies. Finally, a literature survey informs a review of existing research, and identifies current focus, limitations and opportunities for future research. The primary results of the thesis are: 1) Current research within human-computer interaction design for the studied emerging technologies focuses on building systems ad-hoc and evaluating them in artificial settings. This limits the generation of cumulative theoretical knowledge. 2) Interaction design for the emerging technologies studied requires the development of new suitable concepts and interaction styles. Suitable concepts describe unique properties and challenges of a technology. Suitable interaction styles respond to these challenges by exploiting the technology’s unique properties. 3) Designing interaction for the studied emerging technologies involves new use situations, a distance between development and target platforms and complex programming. Elements of methods exist, which are useful for supporting the design of interaction, but they are fragmented and do not support the process as a whole. The studied tools do not support the design process as a whole either but support aspects of interaction design by bridging the gulf between development and target platforms and providing advanced programming environments. Menneske-maskine interaktionsdesign for opkommende teknologier Virtual Reality, Augmented Reality og Mobile Computersystemer",
"title": ""
},
{
"docid": "28fd803428e8f40a4627e05a9464e97b",
"text": "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.",
"title": ""
},
{
"docid": "ec6b1d26b06adc99092659b4a511da44",
"text": "Social identity threat is the notion that one of a person's many social identities may be at risk of being devalued in a particular context (C. M. Steele, S. J. Spencer, & J. Aronson, 2002). The authors suggest that in domains in which women are already negatively stereotyped, interacting with a sexist man can trigger social identity threat, undermining women's performance. In Study 1, male engineering students who scored highly on a subtle measure of sexism behaved in a dominant and sexually interested way toward an ostensible female classmate. In Studies 2 and 3, female engineering students who interacted with such sexist men, or with confederates trained to behave in the same way, performed worse on an engineering test than did women who interacted with nonsexist men. Study 4 replicated this finding and showed that women's underperformance did not extend to an English test, an area in which women are not negatively stereotyped. Study 5 showed that interacting with sexist men leads women to suppress concerns about gender stereotypes, an established mechanism of stereotype threat. Discussion addresses implications for social identity threat and for women's performance in school and at work.",
"title": ""
},
{
"docid": "4e530e55fffbf5e0bc465a7cf378d148",
"text": "We describe a project to link the Princeton WordNet to 3D representations of real objects and scenes. The goal is to establish a dataset that helps us to understand how people categorize everyday common objects via their parts, attributes, and context. This paper describes the annotation and data collection effort so far as well as ideas for future work.",
"title": ""
},
{
"docid": "7d84e574d2a6349a9fc2669fdbe08bba",
"text": "Domain-specific languages (DSLs) provide high-level and domain-specific abstractions that allow expressive and concise algorithm descriptions. Since the description in a DSL hides also the properties of the target hardware, DSLs are a promising path to target different parallel and heterogeneous hardware from the same algorithm description. In theory, the DSL description can capture all characteristics of the algorithm that are required to generate highly efficient parallel implementations. However, most frameworks do not make use of this knowledge and the performance cannot reach that of optimized library implementations. In this article, we present the HIPAcc framework, a DSL and source-to-source compiler for image processing. We show that domain knowledge can be captured in the language and that this knowledge enables us to generate tailored implementations for a given target architecture. Back ends for CUDA, OpenCL, and Renderscript allow us to target discrete graphics processing units (GPUs) as well as mobile, embedded GPUs. Exploiting the captured domain knowledge, we can generate specialized algorithm variants that reach the maximal achievable performance due to the peak memory bandwidth. These implementations outperform state-of-the-art domain-specific languages and libraries significantly.",
"title": ""
},
{
"docid": "b12619b74b84dcc48af3e07313771c8b",
"text": "Domain adaptation is important in sentiment analysis as sentiment-indicating words vary between domains. Recently, multi-domain adaptation has become more pervasive, but existing approaches train on all available source domains including dissimilar ones. However, the selection of appropriate training data is as important as the choice of algorithm. We undertake – to our knowledge for the first time – an extensive study of domain similarity metrics in the context of sentiment analysis and propose novel representations, metrics, and a new scope for data selection. We evaluate the proposed methods on two largescale multi-domain adaptation settings on tweets and reviews and demonstrate that they consistently outperform strong random and balanced baselines, while our proposed selection strategy outperforms instance-level selection and yields the best score on a large reviews corpus. All experiments are available at url_redacted1",
"title": ""
},
{
"docid": "08dbd88adb399721e0f5ee91534c9888",
"text": "Many theories of attention have proposed that visual working memory plays an important role in visual search tasks. The present study examined the involvement of visual working memory in search using a dual-task paradigm in which participants performed a visual search task while maintaining no, two, or four objects in visual working memory. The presence of a working memory load added a constant delay to the visual search reaction times, irrespective of the number of items in the visual search array. That is, there was no change in the slope of the function relating reaction time to the number of items in the search array, indicating that the search process itself was not slowed by the memory load. Moreover, the search task did not substantially impair the maintenance of information in visual working memory. These results suggest that visual search requires minimal visual working memory resources, a conclusion that is inconsistent with theories that propose a close link between attention and working memory.",
"title": ""
},
{
"docid": "3e845c9a82ef88c7a1f4447d57e35a3e",
"text": "Link prediction is a key problem for network-structured data. Link prediction heuristics use some score functions, such as common neighbors and Katz index, to measure the likelihood of links. They have obtained wide practical uses due to their simplicity, interpretability, and for some of them, scalability. However, every heuristic has a strong assumption on when two nodes are likely to link, which limits their effectiveness on networks where these assumptions fail. In this regard, a more reasonable way should be learning a suitable heuristic from a given network instead of using predefined ones. By extracting a local subgraph around each target link, we aim to learn a function mapping the subgraph patterns to link existence, thus automatically learning a “heuristic” that suits the current network. In this paper, we study this heuristic learning paradigm for link prediction. First, we develop a novel γ-decaying heuristic theory. The theory unifies a wide range of heuristics in a single framework, and proves that all these heuristics can be well approximated from local subgraphs. Our results show that local subgraphs reserve rich information related to link existence. Second, based on the γ-decaying theory, we propose a new method to learn heuristics from local subgraphs using a graph neural network (GNN). Its experimental results show unprecedented performance, working consistently well on a wide range of problems.",
"title": ""
},
{
"docid": "a425425658207587c079730a68599572",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstoLorg/aboutiterms.html. JSTOR's Terms and Conditions ofDse provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission. Operations Research is published by INFORMS. Please contact the publisher for further permissions regarding the use of this work. Publisher contact information may be obtained at http://www.jstor.org/jowllalslinforms.html.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "50d0ee6100a5678620d12217a2a72184",
"text": "1. Identify 5 feedback systems that you encounter in your everyday environment. For each system, identify the sensing mechanism, actuation mechanism, and control law. Describe the uncertainty that the feedback system provides robustness with respect to and/or the dynamics that are changed through the use of feedback. At least one example should correspond to a system that comes from your own discipline or research activities.",
"title": ""
},
{
"docid": "1f1958a2b1a83fecc4a3cc9223d151e5",
"text": "We present acoustic barcodes, structured patterns of physical notches that, when swiped with e.g., a fingernail, produce a complex sound that can be resolved to a binary ID. A single, inexpensive contact microphone attached to a surface or object is used to capture the waveform. We present our method for decoding sounds into IDs, which handles variations in swipe velocity and other factors. Acoustic barcodes could be used for information retrieval or to triggering interactive functions. They are passive, durable and inexpensive to produce. Further, they can be applied to a wide range of materials and objects, including plastic, wood, glass and stone. We conclude with several example applications that highlight the utility of our approach, and a user study that explores its feasibility.",
"title": ""
},
{
"docid": "4125dba64f9d693a8b89854ee712eca5",
"text": "Given two consecutive frames, video interpolation aims at generating intermediate frame(s) to form both spatially and temporally coherent video sequences. While most existing methods focus on single-frame interpolation, we propose an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled. We start by computing bi-directional optical flow between the input images using a U-Net architecture. These flows are then linearly combined at each time step to approximate the intermediate bi-directional optical flows. These approximate flows, however, only work well in locally smooth regions and produce artifacts around motion boundaries. To address this shortcoming, we employ another U-Net to refine the approximated flow and also predict soft visibility maps. Finally, the two input images are warped and linearly fused to form each intermediate frame. By applying the visibility maps to the warped images before fusion, we exclude the contribution of occluded pixels to the interpolated intermediate frame to avoid artifacts. Since none of our learned network parameters are time-dependent, our approach is able to produce as many intermediate frames as needed. To train our network, we use 1,132 240-fps video clips, containing 300K individual video frames. Experimental results on several datasets, predicting different numbers of interpolated frames, demonstrate that our approach performs consistently better than existing methods.",
"title": ""
},
{
"docid": "4434ad83cad1b8dc353f24fdf12a606c",
"text": "Open source tools have recently reached a level of maturity which makes them suitable for building large-scale real-world systems. At the same time, the field of machine learning has developed a large body of powerful learning algorithms for diverse applications. However, the true potential of these methods is not used, since existing implementations are not openly shared, resulting in software with low usability, and weak interoperability. We argue that this situation can be significantly improved by increasing incentives for researchers to publish their software under an open source model. Additionally, we outline the problems authors are faced with when trying to publish algorithmic implementations of machine learning methods. We believe that a resource of peer reviewed software accompanied by short articles would be highly valuable to both the machine learning and the general scientific community.",
"title": ""
},
{
"docid": "cf18799eeaf3c5f2b344c1bbbc15da7f",
"text": "This paper presents a machine-learning classifier where the computation is performed within a standard 6T SRAM array. This eliminates explicit memory operations, which otherwise pose energy/performance bottlenecks, especially for emerging algorithms (e.g., from machine learning) that result in high ratio of memory accesses. We present an algorithm and prototype IC (in 130nm CMOS), where a 128×128 SRAM array performs storage of classifier models and complete classifier computations. We demonstrate a real application, namely digit recognition from MNIST-database images. The accuracy is equal to a conventional (ideal) digital/SRAM system, yet with 113× lower energy. The approach achieves accuracy >95% with a full feature set (i.e., 28×28=784 image pixels), and 90% when reduced to 82 features (as demonstrated on the IC due to area limitations). The energy per 10-way digit classification is 633pJ at a speed of 50MHz.",
"title": ""
},
{
"docid": "f599668745fd60d907deca91026d48da",
"text": "While Bregman divergences have been used for clustering and embedding problems in recent years, the facts that they are asymmetric and do not satisfy triangle inequality have been a major concern. In this paper, we investigate the relationship between two families of symmetrized Bregman divergences and metrics, which satisfy the triangle inequality. The first family can be derived from any well-behaved convex function under clearly quantified conditions. The second family generalizes the Jensen-Shannon divergence, and can only be derived from convex functions with certain conditional positive definiteness structure. We interpret the required structure in terms of cumulants of infinitely divisible distributions, and related results in harmonic analysis. We investigate kmeans-type clustering problems using both families of symmetrized divergences, and give efficient algorithms for the same",
"title": ""
},
{
"docid": "c1c9730b191f2ac9186ac704fd5b929f",
"text": "This paper reports on the results of a survey of user interface programming. The survey was widely distributed, and we received 74 responses. The results show that in today's applications, an average of 48% of the code is devoted to the user interface portion. The average time spent on the user interface portion is 45% during the design phase, 50% during the implementation phase, and 37% during the maintenance phase. 34% of the systems were implemented using a toolkit, 27% used a UIMS, 14% used an interface builder, and 26% used no tools. This appears to be because the toolkit systems had more sophisticated user interfaces. The projects using UIMSs or interface builders spent the least percent of time and code on the user interface (around 41%) suggesting that these tools are effective. In general, people were happy with the tools they used, especially the graphical interface builders. The most common problems people reported when developing a user interface included getting users' requirements, writing help text, achieving consistency, learning how to use the tools, getting acceptable performance, and communicating among various parts of the program.",
"title": ""
},
{
"docid": "691a24c16b926378d5c586c7f2b1ce22",
"text": "Isolated 7p22.3p22.2 deletions are rarely described with only two reports in the literature. Most other reported cases either involve a much larger region of the 7p arm or have an additional copy number variation. Here, we report five patients with overlapping microdeletions at 7p22.3p22.2. The patients presented with variable developmental delays, exhibiting relative weaknesses in expressive language skills and relative strengths in gross, and fine motor skills. The most consistent facial features seen in these patients included a broad nasal root, a prominent forehead a prominent glabella and arched eyebrows. Additional variable features amongst the patients included microcephaly, metopic ridging or craniosynostosis, cleft palate, cardiac defects, and mild hypotonia. Although the patients' deletions varied in size, there was a 0.47 Mb region of overlap which contained 7 OMIM genes: EIP3B, CHST12, LFNG, BRAT1, TTYH3, AMZ1, and GNA12. We propose that monosomy of this region represents a novel microdeletion syndrome. We recommend that individuals with 7p22.3p22.2 deletions should receive a developmental assessment and a thorough cardiac exam, with consideration of an echocardiogram, as part of their initial evaluation.",
"title": ""
},
{
"docid": "b1dd830adf87c283ff58630eade75b3c",
"text": "Self-control is a central function of the self and an important key to success in life. The exertion of self-control appears to depend on a limited resource. Just as a muscle gets tired from exertion, acts of self-control cause short-term impairments (ego depletion) in subsequent self-control, even on unrelated tasks. Research has supported the strength model in the domains of eating, drinking, spending, sexuality, intelligent thought, making choices, and interpersonal behavior. Motivational or framing factors can temporarily block the deleterious effects of being in a state of ego depletion. Blood glucose is an important component of the energy. KEYWORDS—self-control; ego depletion; willpower; impulse; strength Every day, people resist impulses to go back to sleep, to eat fattening or forbidden foods, to say or do hurtful things to their relationship partners, to play instead of work, to engage in inappropriate sexual or violent acts, and to do countless other sorts of problematic behaviors—that is, ones that might feel good immediately or be easy but that carry long-term costs or violate the rules and guidelines of proper behavior. What enables the human animal to follow rules and norms prescribed by society and to resist doing what it selfishly wants? Self-control refers to the capacity for altering one’s own responses, especially to bring them into line with standards such as ideals, values, morals, and social expectations, and to support the pursuit of long-term goals. Many writers use the terms selfcontrol and self-regulation interchangeably, but those whomake a distinction typically consider self-control to be the deliberate, conscious, effortful subset of self-regulation. In contrast, homeostatic processes such as maintaining a constant body temperature may be called self-regulation but not self-control. Self-control enables a person to restrain or override one response, thereby making a different response possible. Self-control has attracted increasing attention from psychologists for two main reasons. At the theoretical level, self-control holds important keys to understanding the nature and functions of the self. Meanwhile, the practical applications of self-control have attracted study in many contexts. Inadequate self-control has been linked to behavioral and impulse-control problems, including overeating, alcohol and drug abuse, crime and violence, overspending, sexually impulsive behavior, unwanted pregnancy, and smoking (e.g., Baumeister, Heatherton, & Tice, 1994; Gottfredson & Hirschi, 1990; Tangney, Baumeister, & Boone, 2004; Vohs & Faber, 2007). It may also be linked to emotional problems, school underachievement, lack of persistence, various failures at task performance, relationship problems and dissolution, and more.",
"title": ""
}
] |
scidocsrr
|
84b7a1c3955565ab6fb959021dea9873
|
Tilt set-point correction system for balancing robot using PID controller
|
[
{
"docid": "749f79007256f570b73983b8d3f36302",
"text": "This paper addresses some of the potential benefits of using fuzzy logic controllers to control an inverted pendulum system. The stages of the development of a fuzzy logic controller using a four input Takagi-Sugeno fuzzy model were presented. The main idea of this paper is to implement and optimize fuzzy logic control algorithms in order to balance the inverted pendulum and at the same time reducing the computational time of the controller. In this work, the inverted pendulum system was modeled and constructed using Simulink and the performance of the proposed fuzzy logic controller is compared to the more commonly used PID controller through simulations using Matlab. Simulation results show that the fuzzy logic controllers are far more superior compared to PID controllers in terms of overshoot, settling time and response to parameter changes.",
"title": ""
},
{
"docid": "f043acf163d787c4a53924515b509aba",
"text": "A two-wheeled self-balancing robot is a special type of wheeled mobile robot, its balance problem is a hot research topic due to its unstable state for controlling. In this paper, human transporter model has been established. Kinematic and dynamic models are constructed and two control methods: Proportional-integral-derivative (PID) and Linear-quadratic regulator (LQR) are implemented to test the system model in which controls of two subsystems: self-balance (preventing system from falling down when it moves forward or backward) and yaw rotation (steering angle regulation when it turns left or right) are considered. PID is used to control both two subsystems, LQR is used to control self-balancing subsystem only. By using simulation in Matlab, two methods are compared and discussed. The theoretical investigations for controlling the dynamic behavior are meaningful for design and fabrication. Finally, the result shows that LQR has a better performance than PID for self-balancing subsystem control.",
"title": ""
}
] |
[
{
"docid": "89ead93b4f234e50b6d6e70ad4f54d67",
"text": "Clinical impressions of metabolic disease problems in dairy herds can be corroborated with herd-based metabolic testing. Ruminal pH should be evaluated in herds showing clinical signs associated with SARA (lame cows, thin cows, high herd removals or death loss across all stages of lactation, or milk fat depression). Testing a herd for the prevalence of SCK via blood BHB sampling in early lactation is useful in almost any dairy herd, and particularly if the herd is experiencing a high incidence of displaced abomasum or high removal rates of early lactation cows. If cows are experiencing SCK within the first 3 weeks of lactation, then consider NEFA testing of the prefresh cows to corroborate prefresh negative energy balance. Finally, monitoring cows on the day of calving for parturient hypocalcemia can provide early detection of diet-induced problems in calcium homeostasis. If hypocalcemia problems are present despite supplementing anionic salts before calving, then it may be helpful to evaluate mean urinary pH of a group of the prefresh cows. Quantitative testing strategies based on statistical analyses can be used to establish minimum sample sizes and interpretation guidelines for all of these tests.",
"title": ""
},
{
"docid": "462d93a89154fb67772bbbba5343399c",
"text": "In this paper, we proposed a DBSCAN-based clustering algorithm called NNDD-DBSCAN with the main focus of handling multi-density datasets and reducing parameter sensitivity. The NNDD-DBSCAN used a new distance measuring method called nearest neighbor density distance (NNDD) which makes the new algorithm can clustering properly in multi-density datasets. By analyzing the relationship between the threshold of nearest neighbor density distance and the threshold of nearest neighborcollection, we give a heuristic method to find the appropriate nearest neighbor density distance threshold and reducing parameter sensitivity. Experimental results show that the NNDD-DBSCAN has a good robustadaptation and can get the ideal clustering result both in single density datasets and multi-density datasets.",
"title": ""
},
{
"docid": "26da66e7f52458058e6a8552ceea234f",
"text": "17.",
"title": ""
},
{
"docid": "320c7c49dd4341cca532fa02965ef953",
"text": "During the last decade, anomaly detection has attracted the attention of many researchers to overcome the weakness of signature-based IDSs in detecting novel attacks, and KDDCUP'99 is the mostly widely used data set for the evaluation of these systems. Having conducted a statistical analysis on this data set, we found two important issues which highly affects the performance of evaluated systems, and results in a very poor evaluation of anomaly detection approaches. To solve these issues, we have proposed a new data set, NSL-KDD, which consists of selected records of the complete KDD data set and does not suffer from any of mentioned shortcomings.",
"title": ""
},
{
"docid": "4539b6dda3a8b85dfb1ba0f5da6e7c8c",
"text": "3D Printing promises to produce complex biomedical devices according to computer design using patient-specific anatomical data. Since its initial use as pre-surgical visualization models and tooling molds, 3D Printing has slowly evolved to create one-of-a-kind devices, implants, scaffolds for tissue engineering, diagnostic platforms, and drug delivery systems. Fueled by the recent explosion in public interest and access to affordable printers, there is renewed interest to combine stem cells with custom 3D scaffolds for personalized regenerative medicine. Before 3D Printing can be used routinely for the regeneration of complex tissues (e.g. bone, cartilage, muscles, vessels, nerves in the craniomaxillofacial complex), and complex organs with intricate 3D microarchitecture (e.g. liver, lymphoid organs), several technological limitations must be addressed. In this review, the major materials and technology advances within the last five years for each of the common 3D Printing technologies (Three Dimensional Printing, Fused Deposition Modeling, Selective Laser Sintering, Stereolithography, and 3D Plotting/Direct-Write/Bioprinting) are described. Examples are highlighted to illustrate progress of each technology in tissue engineering, and key limitations are identified to motivate future research and advance this fascinating field of advanced manufacturing.",
"title": ""
},
{
"docid": "2fdbe007690a844da8dc3cb306d077f8",
"text": "In this paper, we propose a structured image inpainting method employing an energy based model. In order to learn structural relationship between patterns observed in images and missing regions of the images, we employ an energy-based structured prediction method. The structural relationship is learned by minimizing an energy function which is defined by a simple convolutional neural network. The experimental results on various benchmark datasets show that our proposed method significantly outperforms the state-of-the-art methods which use Generative Adversarial Networks (GANs). We obtained 497.35 mean squared error (MSE) on the Olivetti face dataset compared to 833.0 MSE provided by the state-of-the-art method. Moreover, we obtained 28.4 dB peak signal to noise ratio (PSNR) on the SVHN dataset and 23.53 dB on the CelebA dataset, compared to 22.3 dB and 21.3 dB, provided by the state-of-the-art methods, respectively. The code is publicly available.11https:llgithub.com/cvlab-tohoku/DSEBImageInpainting.",
"title": ""
},
{
"docid": "c256283819014d79dd496a3183116b68",
"text": "For the 5th generation of terrestrial mobile communications, Multi-Carrier (MC) transmission based on non-orthogonal waveforms is a promising technology component compared to orthogonal frequency division multiplex (OFDM) in order to achieve higher throughput and enable flexible spectrum management. Coverage extension and service continuity can be provided considering satellites as additional components in future networks by allowing vertical handover to terrestrial radio interfaces. In this paper, the properties of Filter Bank Multicarrier (FBMC) as potential MC transmission scheme is discussed taking into account the requirements for the satellite-specific PHY-Layer like non-linear distortions due to High Power Amplifiers (HPAs). The performance for specific FBMC configurations is analyzed in terms of peak-to-average power ratio (PAPR), computational complexity, non-linear distortions as well as carrier frequency offsets sensitivity (CFOs). Even though FBMC and OFDM have similar PAPR and suffer comparable spectral regrowth at the output of the non linear amplifier, simulations on link level show that FBMC still outperforms OFDM in terms of CFO sensitivity and symbol error rate in the presence of non-linear distortions.",
"title": ""
},
{
"docid": "7b1782eb96134edda9bc5661b1ad4de6",
"text": "Quality inspection is an important aspect of modern industrial manufacturing. In textile industry production, automate fabric inspection is important for maintain the fabric quality. For a long time the fabric defects inspection process is still carried out with human visual inspection, and thus, insufficient and costly. Therefore, automatic fabric defect inspection is required to reduce the cost and time waste caused by defects. The investment in automated fabric defect detection is more than economical when reduction in labor cost and associated benefits are considered. The development of fully automated web inspection system requires robust and efficient fabric defect detection algorithms. Image analysis has great potential to provide reliable measurements for detecting defects in fabrics. In this paper, we are using the principles of image analysis, an automatic fabric evaluation system, which enables automatic computerized defect detection (analysis of fabrics) was developed. Online fabric defect detection was tested automatically by analyzing fabric images captured by a digital camera.",
"title": ""
},
{
"docid": "bd178b04fe57db1ce408452edeb8a6d4",
"text": "BACKGROUND\nIn 1998, the French Ministry of Environment revealed that of 71 French municipal solid waste incinerators processing more than 6 metric tons of material per hour, dioxin emission from 15 of them was above the 10 ng international toxic equivalency factor/m3 (including Besançon, emitting 16.3 ng international toxic equivalency factor/m3) which is substantially higher than the 0.1 international toxic equivalency factor/m3 prescribed by a European directive of 1994. In 2000, a macrospatial epidemiological study undertaken in the administrative district of Doubs, identified two significant clusters of soft-tissue sarcoma and non Hodgkin lymphoma in the vicinity of the municipal solid waste incinerator of Besançon. This microspatial study (at the Besançon city scale), was designed to test the association between the exposure to dioxins emitted by the municipal solid waste incinerator of Besançon and the risk of soft-tissue sarcoma.\n\n\nMETHODS\nGround-level concentrations of dioxin were modeled with a dispersion model (Air Pollution Control 3 software). Four increasing zones of exposure were defined. For each case of soft tissue sarcoma, ten controls were randomly selected from the 1990 census database and matched for gender and age. A geographic information system allowed the attribution of a dioxin concentration category to cases and controls, according to their place of residence.\n\n\nRESULTS\nThirty-seven cases of soft tissue sarcoma were identified by the Doubs cancer registry between 1980 and 1995, corresponding to a standardized incidence (French population) of 2.44 per 100,000 inhabitants. Compared with the least exposed zone, the risk of developing a soft tissue sarcoma was not significantly increased for people living in the more exposed zones.\n\n\nCONCLUSION\nBefore definitely concluding that there is no relationship between the exposure to dioxin released by a solid waste incinerator and soft tissue sarcoma, a nationwide investigation based on other registries should be conducted.",
"title": ""
},
{
"docid": "acc906c2129e26d169e0cdc5747027ee",
"text": "Intermolecular interactions within living organisms have been found to occur not as individual independent events but as a part of a collective array of interconnected events. The problem of the emergence of this collective dynamics and of the correlated biocommunication therefore arises. In the present paper we review the proposals given within the paradigm of modern molecular biology and those given by some holistic approaches to biology. In recent times, the collective behavior of ensembles of microscopic units (atoms/molecules) has been addressed in the conceptual framework of Quantum Field Theory. The possibility of producing physical states where all the components of the ensemble move in unison has been recognized. In such cases, electromagnetic fields trapped within the ensemble appear. In the present paper we present a scheme based on Quantum Field Theory where molecules are able to move in phase-correlated unison among them and with a self-produced electromagnetic field. Experimental corroboration of this scheme is presented. Some consequences for future biological developments are discussed.",
"title": ""
},
{
"docid": "b47b06f8548716e0ef01a0e113d48e5d",
"text": "This paper proposes a framework to automatically construct taxonomies from a corpus of text documents. This framework first extracts terms from documents using a part-of-speech parser. These terms are then filtered using domain pertinence, domain consensus, lexical cohesion, and structural relevance. The remaining terms represent concepts in the taxonomy. These concepts are arranged in a hierarchy with either the extended subsumption method that accounts for concept ancestors in determining the parent of a concept or a hierarchical clustering algorithm that uses various text-based window and document scopes for concept co-occurrences. Our evaluation in the field of management and economics indicates that a trade-off between taxonomy quality and depth must be made when choosing one of these methods. The subsumption method is preferable for shallow taxonomies, whereas the hierarchical clustering algorithm is recommended for deep taxonomies.",
"title": ""
},
{
"docid": "066e2671c23d73617639810075e184d0",
"text": "Bit is added to the left of the partial product using sign extension.products required to half that required by a simple add and shift method. A signed binary multiplication technique. Quarterly Journal of.Booths multiplication algorithm is a multiplication algorithm that multiplies two signed binary numbers in twos complement notation. The above mentioned technique is inadequate when the multiplicand is most negative number that can be represented e.g. Create a book Download as PDF Printable version.Lecture 8: Binary Multiplication Division. Sign-and-magnitude: the most significant bit represents.BINARY MULTIPLICATION. Division Method.per and pencil. This method adds the multiplicand X to itself Y times, where Y de. In the case of binary multiplication, since the digits are 0 and 1, each step of.implemented a Signed-Unsigned Booths Multiplier and a. defined as the multiplication performed on signed binary numbers and. While the second method.accept unsigned binary inputs, one bit at a time, least significant bit first, and produce. Method for the multiplication of signed numbers, based on their earlier.will also describe how to apply this new method to the familiar multipliers such as Booth and.",
"title": ""
},
{
"docid": "5e9dce428a2bcb6f7bc0074d9fe5162c",
"text": "This paper describes a real-time motion planning algorithm, based on the rapidly-exploring random tree (RRT) approach, applicable to autonomous vehicles operating in an urban environment. Extensions to the standard RRT are predominantly motivated by: 1) the need to generate dynamically feasible plans in real-time; 2) safety requirements; 3) the constraints dictated by the uncertain operating (urban) environment. The primary novelty is in the use of closed-loop prediction in the framework of RRT. The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete a 60 mile simulated military supply mission, while safely interacting with other autonomous and human driven vehicles.",
"title": ""
},
{
"docid": "fc9ee686c2a339f2f790074aeee5432b",
"text": "Recent work using auxiliary prediction task classifiers to investigate the properties of LSTM representations has begun to shed light on why pretrained representations, like ELMo (Peters et al., 2018) and CoVe (McCann et al., 2017), are so beneficial for neural language understanding models. We still, though, do not yet have a clear understanding of how the choice of pretraining objective affects the type of linguistic information that models learn. With this in mind, we compare four objectives—language modeling, translation, skip-thought, and autoencoding—on their ability to induce syntactic and part-of-speech information. We make a fair comparison between the tasks by holding constant the quantity and genre of the training data, as well as the LSTM architecture. We find that representations from language models consistently perform best on our syntactic auxiliary prediction tasks, even when trained on relatively small amounts of data. These results suggest that language modeling may be the best data-rich pretraining task for transfer learning applications requiring syntactic information. We also find that the representations from randomly-initialized, frozen LSTMs perform strikingly well on our syntactic auxiliary tasks, but this effect disappears when the amount of training data for the auxiliary tasks is reduced.",
"title": ""
},
{
"docid": "083f03665d2b802737a54f2cd811e27c",
"text": "This paper proposes a short-term water demand forecasting method based on the use of the Markov chain. This method provides estimates of future demands by calculating probabilities that the future demand value will fall within pre-assigned intervals covering the expected total variability. More specifically, two models based on homogeneous and non-homogeneous Markov chains were developed and presented. These models, together with two benchmark models (based on artificial neural network and naïve methods), were applied to three real-life case studies for the purpose of forecasting the respective water demands from 1 to 24 h ahead. The results obtained show that the model based on a homogeneous Markov chain provides more accurate short-term forecasts than the one based on a non-homogeneous Markov chain, which is in line with the artificial neural network model. Both Markov chain models enable probabilistic information regarding the stochastic demand forecast to be easily obtained.",
"title": ""
},
{
"docid": "b5475fb64673f6be82e430d307b31fa2",
"text": "We report a novel technique: a 1-stage transfer of 2 paddles of thoracodorsal artery perforator (TAP) flap with 1 pair of vascular anastomoses for simultaneous restoration of bilateral facial atrophy. A 47-year-old woman with a severe bilateral lipodystrophy of the face (Barraquer-Simons syndrome) was surgically treated using this procedure. Sufficient blood supply to each of the 2 flaps was confirmed with fluorescent angiography using the red-excited indocyanine green method. A good appearance was obtained, and the patient was satisfied with the result. Our procedure has advantages over conventional methods in that bilateral facial atrophy can be augmented simultaneously with only 1 donor site. Furthermore, our procedure requires only 1 pair of vascular anastomoses and the horizontal branch of the thoracodorsal nerve can be spared. To our knowledge, this procedure has not been reported to date. We consider that 2 paddles of TAP flap are safely elevated if the distal flap is designed on the descending branch, and this technique is useful for the reconstruction of bilateral facial atrophy or deformity.",
"title": ""
},
{
"docid": "3047a970a55f3a20660992143880bb52",
"text": "This paper defines money laundering in the context of international trade and builds analytic models to measure the contribution of transfer prices to international money laundering. Money laundry-related transfer pricing and transfer price-based money laundering are analyzed in detail. It argues that transfer price-based capital flight and tax evasion are variants of money laundering in nature to the extent that they all enable the apparently legal ownership of the property shifted illegally. Our main contribution lies in the identification of the artificial transfer pricing (ATP) paradigm and integrating capital flight and tax evasion into the models to estimate its contribution to global money laundering pool, helping anti-money laundering (AML) policy-makers better understand the nature of transfer pricing and its negative impact upon the economy. It concludes that effective audit and inspection systems should be established in order to better detect suspicious money laundering transactions and prevent money laundering crimes (MLCs)",
"title": ""
},
{
"docid": "811edf1cfc3a36c6a2e136b2d25f5027",
"text": "Success for many businesses depends on their information software systems. Keeping these systems operational is critical, as failure in these systems is costly. Such systems are in many cases sophisticated, distributed and dynamically composed. To ensure high availability and correct operation, it is essential that failures be detected promptly, their causes diagnosed and remedial actions taken. Although automated recovery approaches exists for specific problem domains, the problem-resolution process is in many cases manual and painstaking. Computer support personnel put a great deal of effort into resolving the reported failures. The growing size and complexity of these systems creates the need to automate this process. The primary focus of our research is on automated fault diagnosis and recovery using discrete monitoring data such as log files and notifications. Our goal is to quickly pinpoint the root-cause of a failure. Our contributions are: • Modelling discrete monitoring data for automated analysis, • Automatically leveraging common symptoms of failures from historic monitoring data using such models to pinpoint faults, and • Providing a model for decision-making under uncertainty such that appropriate recovery actions are chosen. Failures in such systems are caused by software defects, human error, hardware failures, environmental conditions and malicious behaviour. Our primary focus in this thesis is on software defects and misconfiguration.",
"title": ""
},
{
"docid": "bbbbe3f926de28d04328f1de9bf39d1a",
"text": "The detection of fraudulent financial statements (FFS) is an important and challenging issue that has served as the impetus for many academic studies over the past three decades. Although nonfinancial ratios are generally acknowledged as the key factor contributing to the FFS of a corporation, they are usually excluded from early detection models. The objective of this study is to increase the accuracy of FFS detection by integrating the rough set theory (RST) and support vector machines (SVM) approaches, while adopting both financial and nonfinancial ratios as predictive variables. The results showed that the proposed hybrid approach (RSTþSVM) has the best classification rate as well as the lowest occurrence of Types I and II errors, and that nonfinancial ratios are indeed valuable information in FFS detection.",
"title": ""
},
{
"docid": "710806a37de381a9894443157b0d84c2",
"text": "This paper presents the operation of a multiagent system (MAS) for the control of a Microgrid. The approach presented utilizes the advantages of using the MAS technology for controlling a Microgrid and a classical distributed algorithm based on the symmetrical assignment problem for the optimal energy exchange between the production units of the Microgrid and the local loads, as well the main grid.",
"title": ""
}
] |
scidocsrr
|
fee3cad6a022121bf6b4b82a54c5ac2b
|
An agile boot camp: Using a LEGO®-based active game to ground agile development principles
|
[
{
"docid": "be0ba5b90102aab7cbee08a29333be93",
"text": "Test-driven development (TDD) has been proposed as a solution to improve testing in Industry and in academia. The purpose of this poster is to outline the challenges of teaching a novel Test-First approach in a Level 8 course on Software Testing. Traditionally, introductory programming and software testing courses teach a test-last approach. After the introduction of the Extreme Programming version of AGILE, industry and academia have slowly shifted their focus to the Test-First approach. This poster paper is a pedagogical insight into this shift from the test-last to the test-first approach known as Test Driven Development (TDD).",
"title": ""
}
] |
[
{
"docid": "0406ef30ccc781558480458c225e7716",
"text": "The electrical parameters degradations of lateral double-diffused MOS with multiple floating poly-gate field plates under different stress conditions have been investigated experimentally. For the maximum substrate current (<inline-formula> <tex-math notation=\"LaTeX\">${I}_{{\\text {submax}}})$ </tex-math></inline-formula> stress, the increased interface states at the bird’s beak mainly result in an on-resistance (<inline-formula> <tex-math notation=\"LaTeX\">${R}_{ \\mathrm{\\scriptscriptstyle ON}})$ </tex-math></inline-formula> increase at the beginning of the stress, while hot holes injection and trapping into the oxide beneath the edge of real poly-gate turns out to be the dominating degradation mechanism after around 800-s stress, making the <inline-formula> <tex-math notation=\"LaTeX\">${R}_{{ \\mathrm{\\scriptscriptstyle ON}}}$ </tex-math></inline-formula> decrease. For the maximum operating gate voltage (<inline-formula> <tex-math notation=\"LaTeX\">${V}_{{\\text {gmax}}})$ </tex-math></inline-formula> stress, the trapped hot electrons in the channel region bring an increase in threshold voltage (<inline-formula> <tex-math notation=\"LaTeX\">${V}_{{\\text {th}}})$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${R}_{{ \\mathrm{\\scriptscriptstyle ON}}}$ </tex-math></inline-formula>, while the generation of large numbers of interface states at the bird’s beak further dramatically increases the <inline-formula> <tex-math notation=\"LaTeX\">${R}_{{ \\mathrm{\\scriptscriptstyle ON}}}$ </tex-math></inline-formula>. A novel device structurewith a poly-gate partly recessed into the field oxide has been presented to decrease the hot-carrier-induced degradations.",
"title": ""
},
{
"docid": "ef98966f79d5c725b33e227f86e610a2",
"text": "We introduce adaptive input representations for neural language modeling which extend the adaptive softmax of Grave et al. (2017) to input representations of variable capacity. There are several choices on how to factorize the input and output layers, and whether to model words, characters or sub-word units. We perform a systematic comparison of popular choices for a self-attentional architecture. Our experiments show that models equipped with adaptive embeddings are more than twice as fast to train than the popular character input CNN while having a lower number of parameters. We achieve a new state of the art on the WIKITEXT-103 benchmark of 20.51 perplexity, improving the next best known result by 8.7 perplexity. On the BILLION WORD benchmark, we achieve a state of the art of 24.14 perplexity.1",
"title": ""
},
{
"docid": "d4b9d294d60ef001bee3a872b17a75b1",
"text": "Real-time formative assessment of student learning has become the subject of increasing attention. Students' textual responses to short answer questions offer a rich source of data for formative assessment. However, automatically analyzing textual constructed responses poses significant computational challenges, and the difficulty of generating accurate assessments is exacerbated by the disfluencies that occur prominently in elementary students' writing. With robust text analytics, there is the potential to accurately analyze students' text responses and predict students' future success. In this paper, we present WriteEval, a hybrid text analytics method for analyzing student-composed text written in response to constructed response questions. Based on a model integrating a text similarity technique with a semantic analysis technique, WriteEval performs well on responses written by fourth graders in response to short-text science questions. Further, it was found that WriteEval's assessments correlate with summative analyses of student performance.",
"title": ""
},
{
"docid": "3d7fcc8b4715bdf2e54dfab4c989cf29",
"text": "All vertebrates, including humans, obtain most of their daily vitamin D requirement from casual exposure to sunlight. During exposure to sunlight, the solar ultraviolet B photons (290-315 nm) penetrate into the skin where they cause the photolysis of 7-dehydrocholesterol to precholecalciferol. Once formed, precholecalciferol undergoes a thermally induced rearrangement of its double bonds to form cholecalciferol. An increase in skin pigmentation, aging, and the topical application of a sunscreen diminishes the cutaneous production of cholecalciferol. Latitude, season, and time of day as well as ozone pollution in the atmosphere influence the number of solar ultraviolet B photons that reach the earth's surface, and thereby, alter the cutaneous production of cholecalciferol. In Boston, exposure to sunlight during the months of November through February will not produce any significant amounts of cholecalciferol in the skin. Because windowpane glass absorbs ultraviolet B radiation, exposure of sunlight through glass windows will not result in any production of cholecalciferol. It is now recognized that vitamin D insufficiency and vitamin D deficiency are common in elderly people, especially in those who are infirm and not exposed to sunlight or who live at latitudes that do not provide them with sunlight-mediated cholecalciferol during the winter months. Vitamin D insufficiency and deficiency exacerbate osteoporosis, cause osteomalacia, and increase the risk of skeletal fractures. Vitamin D insufficiency and deficiency can be prevented by encouraging responsible exposure to sunlight and/or consumption of a multivitamin tablet that contains 10 micrograms (400 IU) vitamin D.",
"title": ""
},
{
"docid": "9e20e4a12808a7947623cc23d84c9a6f",
"text": "In this paper we will present the new design of TEM double-ridged horn antenna, resulting in a better VSWR and improved gain of antenna. A cavity back and a new technique for tapering the flared section of the TEM horn antenna are introduced to improve the return loss and matching of the impedance, respectively. By tapering the ridges of antenna both laterally and longitudinally it is possible to extend the operating frequency band while decreasing the size of antenna. The proposed antenna is simulated with two commercially available packages, namely Ansoft HFSS and CST microwave studio. Stimulation results for the VSWR, radiation patterns, and gain of the designed TEM horn antenna over the frequency band 2–18 GHz are presented.",
"title": ""
},
{
"docid": "baa71f083831919a067322ab4b268db5",
"text": "– The theoretical analysis gives an overview of the functioning of DDS, especially with respect to noise and spurs. Different spur reduction techniques are studied in detail. Four ICs, which were the circuit implementations of the DDS, were designed. One programmable logic device implementation of the CORDIC based quadrature amplitude modulation (QAM) modulator was designed with a separate D/A converter IC. For the realization of these designs some new building blocks, e.g. a new tunable error feedback structure and a novel and more cost-effective digital power ramp generator, were developed. Implementing a DDS on an FPGA using Xilinx’s ISE software. IndexTerms—CORDIC, DDS, NCO, FPGA, SFDR. ________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "416a03dba8d76458d07a3e8d9303d4ac",
"text": "We introduce a unified optimization framework for geometry processing based on shape constraints. These constraints preserve or prescribe the shape of subsets of the points of a geometric data set, such as polygons, one-ring cells, volume elements, or feature curves. Our method is based on two key concepts: a shape proximity function and shape projection operators. The proximity function encodes the distance of a desired least-squares fitted elementary target shape to the corresponding vertices of the 3D model. Projection operators are employed to minimize the proximity function by relocating vertices in a minimal way to match the imposed shape constraints. We demonstrate that this approach leads to a simple, robust, and efficient algorithm that allows implementing a variety of geometry processing applications, simply by combining suitable projection operators. We show examples for computing planar and circular meshes, shape space exploration, mesh quality improvement, shape-preserving deformation, and conformal parametrization. Our optimization framework provides a systematic way of building new solvers for geometry processing and produces similar or better results than state-of-the-art methods.",
"title": ""
},
{
"docid": "44e7ba0be5275047587e9afd22f1de2a",
"text": "Dialogue state tracking plays an important role in statistical dialogue management. Domain-independent rule-based approaches are attractive due to their efficiency, portability and interpretability. However, recent rule-based models are still not quite competitive to statistical tracking approaches. In this paper, a novel framework is proposed to formulate rule-based models in a general way. In the framework, a rule is considered as a special kind of polynomial function satisfying certain linear constraints. Under some particular definitions and assumptions, rule-based models can be seen as feasible solutions of an integer linear programming problem. Experiments showed that the proposed approach can not only achieve competitive performance compared to statistical approaches, but also have good generalisation ability. It is one of the only two entries that outperformed all the four baselines in the third Dialog State Tracking Challenge.",
"title": ""
},
{
"docid": "373dfa09c3833d4d497fd79d7b0297cc",
"text": "This paper introduces a novel approach to battery management. In contrast to state-of-the-art solutions where a central Battery Management System (BMS) exists, we propose an Embedded Battery Management (EBM) that entirely decentralizes the monitoring and control of the battery pack. For this purpose, each cell of the pack is equipped with a Cell Management Unit (CMU) that monitors and controls local parameters of the respective cell, using its computational and communication resources. This combination of a battery cell and CMU forms the smart cell. Consequently, system-level functions are performed in a distributed fashion by the network of smart cells, applying concepts of self-organization to enable plug-and-play integration. This decentralized distributed architecture might offer significant advantages over centralized BMSs, resulting in higher modularity, easier integration and shorter time to market for battery packs. A development platform has been set up to design and analyze circuits, protocols and algorithms for EBM enabled by smart cells.",
"title": ""
},
{
"docid": "089c003534670cf6ab296828bf2604a3",
"text": "The development of ultra-low power LSIs is a promising area of research in microelectronics. Such LSIs would be suitable for use in power-aware LSI applications such as portable mobile devices, implantable medical devices, and smart sensor networks [1]. These devices have to operate with ultra-low power, i.e., a few microwatts or less, because they will probably be placed under conditions where they have to get the necessary energy from poor energy sources such as microbatteries or energy scavenging devices [2]. As a step toward such LSIs, we first need to develop voltage and current reference circuits that can operate with an ultra-low current, several tens of nanoamperes or less, i.e., sub-microwatt operation. To achieve such low-power operation, the circuits have to be operated in the subthreshold region, i.e., a region at which the gate-source voltage of MOSFETs is lower than the threshold voltage [3; 4]. Voltage and current reference circuits are important building blocks for analog, digital, and mixed-signal circuit systems in microelectronics, because the performance of these circuits is determined mainly by their bias voltages and currents. The circuits generate a constant reference voltage and current for various other components such as operational amplifiers, comparators, AD/DA converters, oscillators, and PLLs. For this purpose, bandgap reference circuits with CMOS-based vertical bipolar transistors are conventionally used in CMOS LSIs [5; 6]. However, they need resistors with a high resistance of several hundred megaohms to achieve low-current, subthreshold operation. Such a high resistance needs a large area to be implemented, and this makes conventional bandgap references unsuitable for use in ultra-low power LSIs. Therefore, modified voltage and current reference circuits for lowpower LSIs have been reported (see [7]-[12], [14]-[17]). However, these circuits have various problems. For example, their power dissipations are still large, their output voltages and currents are sensitive to supply voltage and temperature variations, and they have complex circuits with many MOSFETs; these problems are inconvenient for practical use in ultra-low power LSIs. Moreover, the effect of process variations on the reference signal has not been discussed in detail. To solve these problems, I and my colleagues reported new voltage and current reference circuits [13; 18] that can operate with sub-microwatt power dissipation and with low sensitivity to temperature and supply voltage. Our circuits consist of subthreshold MOSFET circuits and use no resistors.",
"title": ""
},
{
"docid": "e9698e55abb8cee0f3a5663517bd0037",
"text": "0377-2217/$ see front matter 2008 Elsevier B.V. A doi:10.1016/j.ejor.2008.06.027 * Corresponding author. Tel.: +32 16326817. E-mail address: Nicolas.Glady@econ.kuleuven.ac.b The definition and modeling of customer loyalty have been central issues in customer relationship management since many years. Recent papers propose solutions to detect customers that are becoming less loyal, also called churners. The churner status is then defined as a function of the volume of commercial transactions. In the context of a Belgian retail financial service company, our first contribution is to redefine the notion of customer loyalty by considering it from a customer-centric viewpoint instead of a product-centric one. We hereby use the customer lifetime value (CLV) defined as the discounted value of future marginal earnings, based on the customer’s activity. Hence, a churner is defined as someone whose CLV, thus the related marginal profit, is decreasing. As a second contribution, the loss incurred by the CLV decrease is used to appraise the cost to misclassify a customer by introducing a new loss function. In the empirical study, we compare the accuracy of various classification techniques commonly used in the domain of churn prediction, including two cost-sensitive classifiers. Our final conclusion is that since profit is what really matters in a commercial environment, standard statistical accuracy measures for prediction need to be revised and a more profit oriented focus may be desirable. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fb37da1dc9d95501e08d0a29623acdab",
"text": "This study evaluates various evolutionary search methods to direct neural controller evolution in company with policy (behavior) transfer across increasingly complex collective robotic (RoboCup keep-away) tasks. Robot behaviors are first evolved in a source task and then transferred for further evolution to more complex target tasks. Evolutionary search methods tested include objective-based search (fitness function), behavioral and genotypic diversity maintenance, and hybrids of such diversity maintenance and objective-based search. Evolved behavior quality is evaluated according to effectiveness and efficiency. Effectiveness is the average task performance of transferred and evolved behaviors, where task performance is the average time the ball is controlled by a keeper team. Efficiency is the average number of generations taken for the fittest evolved behaviors to reach a minimum task performance threshold given policy transfer. Results indicate that policy transfer coupled with hybridized evolution (behavioral diversity maintenance and objective-based search) addresses the bootstrapping problem for increasingly complex keep-away tasks. That is, this hybrid method (coupled with policy transfer) evolves behaviors that could not otherwise be evolved. Also, this hybrid evolutionary search was demonstrated as consistently evolving topologically simple neural controllers that elicited high-quality behaviors.",
"title": ""
},
{
"docid": "de83d02f5f120163ed86050ee6962f50",
"text": "Researchers have recently questioned the benefits associated with having high self-esteem. The authors propose that the importance of self-esteem lies more in how people strive for it rather than whether it is high or low. They argue that in domains in which their self-worth is invested, people adopt the goal to validate their abilities and qualities, and hence their self-worth. When people have self-validation goals, they react to threats in these domains in ways that undermine learning; relatedness; autonomy and self-regulation; and over time, mental and physical health. The short-term emotional benefits of pursuing self-esteem are often outweighed by long-term costs. Previous research on self-esteem is reinterpreted in terms of self-esteem striving. Cultural roots of the pursuit of self-esteem are considered. Finally, the alternatives to pursuing self-esteem, and ways of avoiding its costs, are discussed.",
"title": ""
},
{
"docid": "48a476d5100f2783455fabb6aa566eba",
"text": "Phylogenies are usually dated by calibrating interior nodes against the fossil record. This relies on indirect methods that, in the worst case, misrepresent the fossil information. Here, we contrast such node dating with an approach that includes fossils along with the extant taxa in a Bayesian total-evidence analysis. As a test case, we focus on the early radiation of the Hymenoptera, mostly documented by poorly preserved impression fossils that are difficult to place phylogenetically. Specifically, we compare node dating using nine calibration points derived from the fossil record with total-evidence dating based on 343 morphological characters scored for 45 fossil (4--20 complete) and 68 extant taxa. In both cases we use molecular data from seven markers (∼5 kb) for the extant taxa. Because it is difficult to model speciation, extinction, sampling, and fossil preservation realistically, we develop a simple uniform prior for clock trees with fossils, and we use relaxed clock models to accommodate rate variation across the tree. Despite considerable uncertainty in the placement of most fossils, we find that they contribute significantly to the estimation of divergence times in the total-evidence analysis. In particular, the posterior distributions on divergence times are less sensitive to prior assumptions and tend to be more precise than in node dating. The total-evidence analysis also shows that four of the seven Hymenoptera calibration points used in node dating are likely to be based on erroneous or doubtful assumptions about the fossil placement. With respect to the early radiation of Hymenoptera, our results suggest that the crown group dates back to the Carboniferous, ∼309 Ma (95% interval: 291--347 Ma), and diversified into major extant lineages much earlier than previously thought, well before the Triassic. [Bayesian inference; fossil dating; morphological evolution; relaxed clock; statistical phylogenetics.].",
"title": ""
},
{
"docid": "3dcfcaa97fcc1bce04ce515027e64927",
"text": "Abs t rac t . RoboCup is an attempt to foster AI and intelligent robotics research by providing a standard problem where wide range of technologies can be integrated and exaznined. The first R o b o C u p competition was held at IJCAI-97, Nagoya. In order for a robot team to actually perform a soccer game, various technologies must be incorporated including: design principles of autonomous agents, multi-agent collaboration, strategy acquisition, real-time reasoning, robotics, and sensorfllsion. RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. Although RoboCup's final target is a world cup with real robots, RoboCup offers a softwaxe platform for research on the software aspects of RoboCup. This paper describes technical chalhmges involw~d in RoboCup, rules, and simulation environment.",
"title": ""
},
{
"docid": "a4b123705dda7ae3ac7e9e88a50bd64a",
"text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.",
"title": ""
},
{
"docid": "bceb9f8cc1726017e564c6474618a238",
"text": "The modulators are the basic requirement of communication systems they are designed to reduce the channel distortion & to use in RF communication hence many type of carrier modulation techniques has been already proposed according to channel properties & data rate of the system. QPSK (Quadrature Phase Shift Keying) is one of the modulation schemes used in wireless communication system due to its ability to transmit twice the data rate for a given bandwidth. The QPSK is the most often used scheme since it does not suffer from BER (Bit Error rate) degradation while the bandwidth efficiency is increased. It is very popular in Satellite communication. As the design of complex mathematical models such as QPSK modulator in „pure HDL‟ is very difficult and costly; it requires from designer many additional skills and is time-consuming. To overcome these types of difficulties, the proposed QPSK modulator can be implemented on FPGA by using the concept of hardware co-simulation at Low power. In this process, QPSK modulator is simulated with Xilinx System Generator Simulink software and later on it is converted in Very high speed integrated circuit Hardware Descriptive Language to implement it on FPGA. Along with the co-simulation, power of the proposed QPSK modulator can be minimized than conventional QPSK modulator. As a conclusion, the proposed architecture will not only able to operate on co-simulation platform but at the same time it will significantly consume less operational power.",
"title": ""
},
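The abstract above states that QPSK carries two bits per symbol for a given bandwidth. The following minimal Python sketch (not taken from the paper, which targets a Xilinx FPGA co-simulation flow) illustrates that claim with a Gray-coded bit-pair-to-symbol mapping; the bit ordering and unit-energy scaling are illustrative assumptions.

```python
import numpy as np

def qpsk_modulate(bits: np.ndarray) -> np.ndarray:
    """Map an even-length array of 0/1 bits to unit-energy QPSK symbols (Gray mapping)."""
    assert bits.size % 2 == 0, "QPSK consumes bits two at a time"
    pairs = bits.reshape(-1, 2)
    # Gray mapping: 00 -> +1+1j, 01 -> +1-1j, 11 -> -1-1j, 10 -> -1+1j (then normalized)
    i = 1 - 2 * pairs[:, 0]   # in-phase component
    q = 1 - 2 * pairs[:, 1]   # quadrature component
    return (i + 1j * q) / np.sqrt(2)

# Example: 16 random bits become 8 complex symbols, i.e. two bits per symbol.
bits = np.random.default_rng(0).integers(0, 2, 16)
print(qpsk_modulate(bits))
```

A hardware realization would implement the same mapping with lookup tables or sign logic rather than floating-point arithmetic.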
{
"docid": "77c8a86fba0183e2b9183ba823e9d9cf",
"text": "The study of neural circuit reconstruction, i.e., connectomics, is a challenging problem in neuroscience. Automated and semi-automated electron microscopy (EM) image analysis can be tremendously helpful for connectomics research. In this paper, we propose a fully automatic approach for intra-section segmentation and inter-section reconstruction of neurons using EM images. A hierarchical merge tree structure is built to represent multiple region hypotheses and supervised classification techniques are used to evaluate their potentials, based on which we resolve the merge tree with consistency constraints to acquire final intra-section segmentation. Then, we use a supervised learning based linking procedure for the inter-section neuron reconstruction. Also, we develop a semi-automatic method that utilizes the intermediate outputs of our automatic algorithm and achieves intra-segmentation with minimal user intervention. The experimental results show that our automatic method can achieve close-to-human intra-segmentation accuracy and state-of-the-art inter-section reconstruction accuracy. We also show that our semi-automatic method can further improve the intra-segmentation accuracy.",
"title": ""
},
{
"docid": "7d59b5e4523a6ee280b758ad8a55d3eb",
"text": "A feasible pathway to scale germanium (Ge) FETs in future technology nodes has been proposed using the tunable diamond-shaped Ge nanowire (NW). The Ge NW was obtained through a simple top-down dry etching and blanket Ge epitaxy techniques readily available in mass production. The different etching selectivity of surface orientations for Cl2 and HBr was employed for the three-step isotropic/anisotropic/isotropic dry etching. The ratio of Cl2 and HBr, mask width, and Ge recess depth were crucial for forming the nearly defect-free suspended Ge channel through effective removal of dislocations near the Si/Ge interface. This technique could also be applied for forming diamond-shaped Si NWs. The suspended diamond-shaped NW gate-all-around NWFETs feature excellent electrostatics, the favorable {111} surfaces along the (110) direction with high carrier mobility, and the nearly defect-free Ge channel. The pFET with a high ION/IOFF ratio of 6 × 107 and promising nFET performance have been demonstrated successfully.",
"title": ""
},
{
"docid": "71e8c35e0f0b5756d14821622a8d0fc5",
"text": "Classic drugs of abuse lead to specific increases in cerebral functional activity and dopamine release in the shell of the nucleus accumbens (the key neural structure for reward, motivation, and addiction). In contrast, caffeine at doses reflecting daily human consumption does not induce a release of dopamine in the shell of the nucleus accumbens but leads to a release of dopamine in the prefrontal cortex, which is consistent with its reinforcing properties.",
"title": ""
}
] |
scidocsrr
|
c2bc157bdb2ea223963acb780bd5dc53
|
Ideal ratio mask estimation using deep neural networks for robust speech recognition
|
[
{
"docid": "1cd45a4f897ea6c473d00c4913440836",
"text": "What is the computational goal of auditory scene analysis? This is a key issue to address in the Marrian information-processing framework. It is also an important question for researchers in computational auditory scene analysis (CASA) because it bears directly on how a CASA system should be evaluated. In this chapter I discuss different objectives used in CASA. I suggest as a main CASA goal the use of the ideal time-frequency (T-F) binary mask whose value is one for a T-F unit where the target energy is greater than the interference energy and is zero otherwise. The notion of the ideal binary mask is motivated by the auditory masking phenomenon. Properties of the ideal binary mask are discussed, including their relationship to automatic speech recognition and human speech intelligibility. This CASA goal has led to algorithms that directly estimate the ideal binary mask in monaural and binaural conditions, and these algorithms have substantially advanced the state-of-the-art performance in speech separation.",
"title": ""
}
] |
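The positive passage above defines the ideal binary mask as 1 wherever the target energy exceeds the interference energy in a time-frequency unit. The sketch below is a minimal NumPy/SciPy illustration of that definition, plus a commonly used soft (ratio-mask) variant matching the query topic; the STFT parameters are assumptions, not values from the chapter.

```python
import numpy as np
from scipy.signal import stft

def ideal_binary_mask(target, interference, fs=16000, nperseg=512):
    """1 where the target energy exceeds the interference energy in a T-F unit, else 0."""
    _, _, S_t = stft(target, fs=fs, nperseg=nperseg)
    _, _, S_i = stft(interference, fs=fs, nperseg=nperseg)
    return (np.abs(S_t) ** 2 > np.abs(S_i) ** 2).astype(np.float32)

def ideal_ratio_mask(target, interference, fs=16000, nperseg=512, eps=1e-8):
    """A widely used soft variant: fraction of each T-F unit's energy belonging to the target."""
    _, _, S_t = stft(target, fs=fs, nperseg=nperseg)
    _, _, S_i = stft(interference, fs=fs, nperseg=nperseg)
    p_t, p_i = np.abs(S_t) ** 2, np.abs(S_i) ** 2
    return p_t / (p_t + p_i + eps)
```

Both masks assume oracle access to the separated target and interference signals; DNN-based systems such as the one in the query learn to estimate the mask from the noisy mixture alone.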
[
{
"docid": "8e06dbf42df12a34952cdd365b7f328b",
"text": "Data and theory from prism adaptation are reviewed for the purpose of identifying control methods in applications of the procedure. Prism exposure evokes three kinds of adaptive or compensatory processes: postural adjustments (visual capture and muscle potentiation), strategic control (including recalibration of target position), and spatial realignment of various sensory-motor reference frames. Muscle potentiation, recalibration, and realignment can all produce prism exposure aftereffects and can all contribute to adaptive performance during prism exposure. Control over these adaptive responses can be achieved by manipulating the locus of asymmetric exercise during exposure (muscle potentiation), the similarity between exposure and post-exposure tasks (calibration), and the timing of visual feedback availability during exposure (realignment).",
"title": ""
},
{
"docid": "309dee96492cf45ed2887701b27ad3ee",
"text": "The objective of a systematic review is to obtain empirical evidence about the topic under review and to allow moving forward the body of knowledge of a discipline. Therefore, systematic reviewing is a tool we can apply in Software Engineering to develop well founded guidelines with the final goal of improving the quality of the software systems. However, we still do not have as much experience in performing systematic reviews as in other disciplines like medicine, and therefore we need detailed guidance. This paper presents a proposal of a improved process to perform systematic reviews in software engineering. This process is the result of the tasks carried out in a first review and a subsequent update concerning the effectiveness of elicitation techniques.",
"title": ""
},
{
"docid": "cdc44e34c20a5fa88ee222e0ee8f8417",
"text": "We introduce a novel framework for interactive landscape authoring that supports bi-directional feedback between erosion and vegetation simulation. Vegetation and terrain erosion have strong mutual impact and their interplay influences the overall realism of virtual scenes. Despite their importance, these complex interactions have been neglected in computer graphics. Our framework overcomes this by simulating the effect of a variety of geomorphological agents and the mutual interaction between different material and vegetation layers, including rock, sand, humus, grass, shrubs, and trees. Users are able to exploit these interactions with an authoring interface that consistently shapes the terrain and populates it with details. Our method, validated through side-by-side comparison with real terrains, can be used not only to generate realistic static landscapes, but also to follow the temporal evolution of a landscape over a few centuries.",
"title": ""
},
{
"docid": "38fce364b69f543049ccffd1a20b064b",
"text": "Phishing attacks sap billions of dollars annually from unsuspecting individuals while compromising individual privacy. Companies and privacy advocates seek ways to better educate the populace against such attacks. Current approaches examining phishing include test-based techniques that ask subjects to classify content as phishing or not and inthe- wild techniques that directly observe subject behavior through distribution of faked phishing attacks. Both approaches have issues. Test-based techniques produce less reliable data since subjects may adjust their behavior with the expectation of seeing phishing stimuli, while in-the-wild studies can put subjects at risk through lack of consent or exposure of data. This paper examines a third approach that seeks to incorporate game-based learning techniques to combine the realism of in-thewild approaches with the training features of testing approaches. We propose a three phase experiment to test our approach on our CyberPhishing simulation platform, and present the results of phase one.",
"title": ""
},
{
"docid": "8e305c5682d2587b34f324dee394bf68",
"text": "This paper aescriDes a program, ca l led \"PRUW\", which w r i t es programs. PROW accepts the s p e c i f i ca t i on of the program in the language of predicate ca l cu lus , decides the a lgor i thm f o r the program and then produces a LISP program which is an im plementat ion of the a lgor i thm. Since the construc t i o n of the a lgor i thm is obtained by formal theorem-proving techniques, the programs tha t PROW wr i t es are f ree from l o g i c a l er rors and do not have to be debugged. The user of PROW can make PROW w r i t e programs in languages other than LISP by modifying the par t of PROW tha t t rans la tes an a lgor i thm to a LISP program. Thus PROW can be modi f ied to w r i t e programs in any language. In the end of t h i s paper, it is shown tha t PROW can a lso be used as a quest ion-answering program.",
"title": ""
},
{
"docid": "ccd663355ff6070b3668580150545cea",
"text": "In this paper, the user effects on mobile terminal antennas at 28 GHz are statistically investigated with the parameters of body loss, coverage efficiency, and power in the shadow. The data are obtained from the measurements of 12 users in data and talk modes, with the antenna placed on the top and bottom of the chassis. In the measurements, the users hold the phone naturally. The radiation patterns and shadowing regions are also studied. It is found that a significant amount of power can propagate into the shadow of the user by creeping waves and diffractions. A new metric is defined to characterize this phenomenon. A mean body loss of 3.2–4 dB is expected in talk mode, which is also similar to the data mode with the bottom antenna. A body loss of 1 dB is expected in data mode with the top antenna location. The variation of the body loss between the users at 28 GHz is less than 2 dB, which is much smaller than that of the conventional cellular bands below 3 GHz. The coverage efficiency is significantly reduced in talk mode, but only slightly affected in data mode.",
"title": ""
},
{
"docid": "d647410661f83652e2a1be51c7ec878b",
"text": "The objective of this study was to assess the effect of the probiotic Lactobacillus murinus native strain (LbP2) on general clinical parameters of dogs with distemper-associated diarrhea. Two groups of dogs over 60 d of age with distemper and diarrhea were used in the study, which was done at the Animal Hospital of the Veterinary Faculty of the University of Uruguay, Montevideo, Uruguay. The dogs were treated orally each day for 5 d with the probiotic or with a placebo (vehicle without bacteria). Clinical parameters were assessed and scored according to a system specially designed for this study. Blood parameters were also measured. Administration of the probiotic significantly improved the clinical score of the patients, whereas administration of the placebo did not. Stool output, fecal consistency, mental status, and appetite all improved in the probiotic-treated dogs. These results support previous findings of beneficial effects with the probiotic L. murinus LbP2 in dogs. Thus, combined with other therapeutic measures, probiotic treatment appears to be promising for the management of canine distemper-associated diarrhea.",
"title": ""
},
{
"docid": "50eaa44f8e89870750e279118a219d7a",
"text": "Fitbit fitness trackers record sensitive personal information, including daily step counts, heart rate profiles, and locations visited. By design, these devices gather and upload activity data to a cloud service, which provides aggregate statistics to mobile app users. The same principles govern numerous other Internet-of-Things (IoT) services that target different applications. As a market leader, Fitbit has developed perhaps the most secure wearables architecture that guards communication with end-to-end encryption. In this article, we analyze the complete Fitbit ecosystem and, despite the brand's continuous efforts to harden its products, we demonstrate a series of vulnerabilities with potentially severe implications to user privacy and device security. We employ a range of techniques, such as protocol analysis, software decompiling, and both static and dynamic embedded code analysis, to reverse engineer previously undocumented communication semantics, the official smartphone app, and the tracker firmware. Through this interplay and in-depth analysis, we reveal how attackers can exploit the Fitbit protocol to extract private information from victims without leaving a trace, and wirelessly flash malware without user consent. We demonstrate that users can tamper with both the app and firmware to selfishly manipulate records or circumvent Fitbit's walled garden business model, making the case for an independent, user-controlled, and more secure ecosystem. Finally, based on the insights gained, we make specific design recommendations that can not only mitigate the identified vulnerabilities, but are also broadly applicable to securing future wearable system architectures.",
"title": ""
},
{
"docid": "62ba312d26ffbbfdd52130c08031905f",
"text": "The effects of intravascular laser irradiation of blood (ILIB), with 405 and 632.8 nm on serum blood sugar (BS) level, were comparatively studied. Twenty-four diabetic type 2 patients received 14 sessions of ILIB with blue and red lights. BS was measured before and after therapy. Serum BS decreased highly significant after ILIB with both red and blue lights (p < 0.0001), but we did not find significant difference between red and blue lights. The ILIB effect would be of benefit in the clinical treatment of diabetic type 2 patients, irrespective of lasers (blue or red lights) that are used.",
"title": ""
},
{
"docid": "fc69f1c092bae3328ce9c5975929e92c",
"text": "In allusion to the “on-line beforehand decision-making, real time matching”, this paper proposes the stability control flow based on PMU for interconnected power system, which is a real-time stability control. In this scheme, preventive control, emergency control and corrective control are designed to a closed-loop rolling control process, it will protect the stability of power system. Then it ameliorates the corrective control process, and presents a new control method which is based on PMU and EEAC method. This scheme can ensure the real-time quality and advance the veracity for the corrective control.",
"title": ""
},
{
"docid": "8010361144a7bd9fc336aba88f6e8683",
"text": "Moving garments and other cloth objects exhibit dynamic, complex wrinkles. Generating such wrinkles in a virtual environment currently requires either a time-consuming manual design process, or a computationally expensive simulation, often combined with accurate parameter-tuning requiring specialized animator skills. Our work presents an alternative approach for wrinkle generation which combines coarse cloth animation with a post-processing step for efficient generation of realistic-looking fine dynamic wrinkles. Our method uses the stretch tensor of the coarse animation output as a guide for wrinkle placement. To ensure temporal coherence, the placement mechanism uses a space-time approach allowing not only for smooth wrinkle appearance and disappearance, but also for wrinkle motion, splitting, and merging over time. Our method generates believable wrinkle geometry using specialized curve-based implicit deformers. The method is fully automatic and has a single user control parameter that enables the user to mimic different fabrics.",
"title": ""
},
{
"docid": "fc96dea9865a5252b25addfc446e293b",
"text": "Over the last 20 years, the world of origami has been changed by the introduction of design algorithms that bear a close relationship to, if not outright ancestry from, computational geometry. One of the first robust algorithms for origami design was the circle/river method (also called the tree method) developed independently by Lang [7–9] and Meguro [12, 13]. This algorithm and its variants provide a systematic method for folding any structure that topologically resembles a graph theoretic weighted tree. Other algorithms followed, notably one by Tachi [15] that gives the crease pattern to fold an arbitrary 3D surface. Hopes of a general approach for efficiently solving all origami design problems were dashed early on, when Bern and Hayes showed in 1996 that the general problem of crease assignment — given an arbitrary crease pattern, determine whether each fold is mountain or valley — was NP-complete [1]. In fact, they showed more: given a complete crease assignment, simply determining the stacking order of the layers of paper was also NP-complete. Fortunately, while crease assignment in the general case is hard, the crease patterns generated by the various design algorithms carry with them significant extra information associated with each crease, enough extra information that the problem of crease assignment is typically only polynomial in difficulty. This is certainly the case for the tree method of design [3]. Designing a model using the tree method (or one of its variants) is a two-step process: the first step involves solving an optimization problem where one solves for certain key vertices of the crease pattern. The second step constructs creases following a geometric prescription and assigns their status as mountain, valley, or unfolded. The process of constructing the creases and assigning them is definitely polynomial in complexity; but, up to now, the computational complexity of the optimization was not established. There were reasons for believing that the optimization was, in principle, computationally intractable. The conditions on the vertex coordinates in the",
"title": ""
},
{
"docid": "98b6da9a1ab53b94c50a98b25cdf2da4",
"text": "There are many thousands of hereditary diseases in humans, each of which has a specific combination of phenotypic features, but computational analysis of phenotypic data has been hampered by lack of adequate computational data structures. Therefore, we have developed a Human Phenotype Ontology (HPO) with over 8000 terms representing individual phenotypic anomalies and have annotated all clinical entries in Online Mendelian Inheritance in Man with the terms of the HPO. We show that the HPO is able to capture phenotypic similarities between diseases in a useful and highly significant fashion.",
"title": ""
},
{
"docid": "8093219e7e2b4a7067f8d96118a5ea93",
"text": "We model knowledge graphs for their completion by encoding each entity and relation into a numerical space. All previous work including Trans(E, H, R, and D) ignore the heterogeneity (some relations link many entity pairs and others do not) and the imbalance (the number of head entities and that of tail entities in a relation could be different) of knowledge graphs. In this paper, we propose a novel approach TranSparse to deal with the two issues. In TranSparse, transfer matrices are replaced by adaptive sparse matrices, whose sparse degrees are determined by the number of entities (or entity pairs) linked by relations. In experiments, we design structured and unstructured sparse patterns for transfer matrices and analyze their advantages and disadvantages. We evaluate our approach on triplet classification and link prediction tasks. Experimental results show that TranSparse outperforms Trans(E, H, R, and D) significantly, and achieves state-ofthe-art performance.",
"title": ""
},
{
"docid": "9065a59c349b0bcf36c47b3d51f87461",
"text": "The goal of this work is to compare different versions of three-dimensional product-presentations with two-dimensional ones. Basically the usability of these technologies will be compared and other user related factors will be integrated into the test as well. These factors were determined via a literature research. In order to achieve a generalizable conclusion about 3D-web-applications in e-commerce sample products from miscellaneous product categories are chosen for the study. This paper starts with the summary of the literature research about the factors for the study. It continues by shortly introducing research methods and strategies and explaining why certain methods are selected for this kind of study. The conception of the study is described in detail in the following paragraph. With the help of the generalized results of the study, recommendations for the usage of 3D-product-presentations in practical e-commerce environments are given.",
"title": ""
},
{
"docid": "07e93064b1971a32b5c85b251f207348",
"text": "With the growing demand on automotive electronics for the advanced driver assistance systems and autonomous driving, the functional safety becomes one of the most important issues in the hardware development. Thus, the safety standard for automotive E/E system, ISO-26262, becomes state-of-the-art guideline to ensure that the required safety level can be achieved. In this study, we base on ISO-26262 to develop a FMEDA-based fault injection and data analysis framework. The main contribution of this study is to effectively reduce the effort for generating FMEDA report which is used to evaluate hardware's safety level based on ISO-26262 standard.",
"title": ""
},
{
"docid": "f02c2720bb61cb916643ca9708910c77",
"text": "This paper presents the NLP-TEA 2016 shared task for Chinese grammatical error diagnosis which seeks to identify grammatical error types and their range of occurrence within sentences written by learners of Chinese as foreign language. We describe the task definition, data preparation, performance metrics, and evaluation results. Of the 15 teams registered for this shared task, 9 teams developed the system and submitted a total of 36 runs. We expected this evaluation campaign could lead to the development of more advanced NLP techniques for educational applications, especially for Chinese error detection. All data sets with gold standards and scoring scripts are made publicly available to researchers.",
"title": ""
},
{
"docid": "8920515550badf76df21d46c12257c1b",
"text": "In order to provide a steady platform for scientific instruments, decrease the energy consumption, it is important to analyze and improve the mobility performance of lunar rovers. The mobility performance indexes are built based on four kinds of work conditions by summarizing the lunar surface terrain characteristics. The performance of overturning stability, load equalization of the wheels and trafficability are analyzed. The optimization mathematical model of the suspension parameters is established. Based on the sequence quadratic programming (SQP) algorithm, the optimal parameters of the suspension are obtained by taking the maximum energy consumption of motor and stationarity of mass center as the optimization objectives and overturning stability and trafficability parameters as constraints. The results indicate that the energy consumption of the motors, displacement and pitch angle of the mass center reduces significantly.",
"title": ""
},
{
"docid": "3aa0fe0c1fe501ec9df9178e11052c7f",
"text": "The most recent edition of the American Psychological Association (APA) Manual states that two spaces should follow the punctuation at the end of a sentence. This is in contrast to the one-space requirement from previous editions. However, to date, there has been no empirical support for either convention. In the current study, participants performed (1) a typing task to assess spacing usage and (2) an eye-tracking experiment to assess the effect that punctuation spacing has on reading performance. Although comprehension was not affected by punctuation spacing, the eye movement record suggested that initial processing of the text was facilitated when periods were followed by two spaces, supporting the change made to the APA Manual. Individuals' typing usage also influenced these effects such that those who use two spaces following a period showed the greatest overall facilitation from reading with two spaces.",
"title": ""
}
] |
scidocsrr
|
f250149f88277cefe25315d9e3a43548
|
Cooperative intersection collision-warning system based on vehicle-to-vehicle communication
|
[
{
"docid": "d962b838abea94f1a457c3ed767b5972",
"text": "This paper examines the potential effectiveness of the following three precollision system (PCS) algorithms: 1) forward collision warning only; 2) forward collision warning and precrash brake assist; and 3) forward collision warning, precrash brake assist, and autonomous precrash brake. Real-world rear-end crashes were extracted from a nationally representative sample of collisions in the United States. A sample of 1396 collisions, corresponding to 1.1 million crashes, were computationally simulated as if they occurred, with the driver operating a precollision-system-equipped vehicle. A probability-based framework was developed to account for the variable driver reaction to the warning system. As more components were added to the algorithms, greater benefits were realized. The results indicate that the exemplar PCS investigated in this paper could reduce the severity (i.e., ΔV) of the collision between 14% and 34%. The number of moderately to fatally injured drivers who wore their seat belts could have been reduced by 29% to 50%. These collision-mitigating algorithms could have prevented 3.2% to 7.7% of rear-end collisions. This paper shows the dramatic reductions in serious and fatal injuries that a PCS, which is one of the first intelligent vehicle technologies to be deployed in production cars, can bring to highway safety when available throughout the fleet. This paper also presents the framework of an innovative safety benefits methodology that, when adapted to other emerging active safety technologies, can be employed to estimate potential reductions in the frequency and severity of highway crashes.",
"title": ""
}
] |
[
{
"docid": "67deeb818dbd553afaa7ae3c21fc0dee",
"text": "Switched reluctance motors (SR Motors) attract attention as motor that use no rare earth materials. And it is a candidate technology for electric vehicle application. In addition, axial-gap structure has possibility of effective utilization of in-wheel flat motor space. This paper mainly discusses the design and the characteristics of axial-gap SR motors. This study focuses on the volumetric constrains of in-wheel drive system. First, the results of comparing the axialgap SR motors specification to the radial-gap one at same volume are shown that utilize the available active volume. By the results, a new flat volume axial-gap SR motor for in-wheel direct-drive EV is designed. Finally, a new support link structure is proposed for the large axial direction electromagnetic force of axial-gap SR motor.",
"title": ""
},
{
"docid": "d6995e9a0e97108095711c4dfb400022",
"text": "This paper investigates a novel method for the control of “morphing” aircraft. The concept consists of a pair of winglets with adjustable cant angle, independently actuated and mounted at the tips of a baseline flying wing. The general philosophybehind the conceptwas that for specificflight conditions such as a coordinated turn, the use of two control devices would be sufficient for adequate control. Computations with a vortex lattice model and subsequent wind-tunnel tests demonstrate the viability of the concept, with individual and/or dual winglet deflection producing multi-axis coupled control moments. Comparisons between the experimental and computational results showed reasonable to good agreement, with the major discrepancies thought to be due to wind-tunnel model aeroelastic effects.",
"title": ""
},
{
"docid": "959618d50b59ce316cebb24a18375cde",
"text": "Research experiences today are limited to a privileged few at select universities. Providing open access to research experiences would enable global upward mobility and increased diversity in the scientific workforce. How can we coordinate a crowd of diverse volunteers on open-ended research? How could a PI have enough visibility into each person's contributions to recommend them for further study? We present Crowd Research, a crowdsourcing technique that coordinates open-ended research through an iterative cycle of open contribution, synchronous collaboration, and peer assessment. To aid upward mobility and recognize contributions in publications, we introduce a decentralized credit system: participants allocate credits to each other, which a graph centrality algorithm translates into a collectively-created author order. Over 1,500 people from 62 countries have participated, 74% from institutions with low access to research. Over two years and three projects, this crowd has produced articles at top-tier Computer Science venues, and participants have gone on to leading graduate programs.",
"title": ""
},
{
"docid": "6c2a033b374b4318cd94f0a617ec705a",
"text": "In this paper, we propose to use Deep Neural Net (DNN), which has been recently shown to reduce speech recognition errors significantly, in Computer-Aided Language Learning (CALL) to evaluate English learners’ pronunciations. Multi-layer, stacked Restricted Boltzman Machines (RBMs), are first trained as nonlinear basis functions to represent speech signals succinctly, and the output layer is discriminatively trained to optimize the posterior probabilities of correct, sub-phonemic “senone” states. Three Goodness of Pronunciation (GOP) scores, including: the likelihood-based posterior probability, averaged framelevel posteriors of the DNN output layer “senone” nodes, and log likelihood ratio of correct and competing models, are tested with recordings of both native and non-native speakers, along with manual grading of pronunciation quality. The experimental results show that the GOP estimated by averaged frame-level posteriors of “senones” correlate with human scores the best. Comparing with GOPs estimated with non-DNN, i.e. GMMHMM, based models, the new approach can improve the correlations relatively by 22.0% or 15.6%, at word or sentence levels, respectively. In addition, the frame-level posteriors, which doesn’t need a decoding lattice and its corresponding forwardbackward computations, is suitable for supporting fast, on-line, multi-channel applications.",
"title": ""
},
{
"docid": "78eecb90bad21916621687d8eac0e557",
"text": "AIM\nThe aim of this paper is to present the Australian Spasticity Assessment Scale (ASAS) and to report studies of its interrater reliability. The ASAS identifies the presence of spasticity by confirming a velocity-dependent increased response to rapid passive movement and quantifies it using an ordinal scale.\n\n\nMETHOD\nThe rationale and procedure for the ASAS is described. Twenty-two participants with spastic CP (16 males; age range 1y 11mo-15y 3mo) who had not had botulinum neurotoxin-A within 4 months, or bony or soft tissue surgery within 12 months, were recruited from the spasticity management clinic of a tertiary paediatric teaching hospital. Fourteen muscles in each child were assessed by each of three experienced independent raters. ASAS was recorded for all muscles. Interrater reliability was calculated using the weighted kappa statistic (quadratic weighting; κqw) for individual muscles, for upper limbs, for lower limbs, and between raters.\n\n\nRESULTS\nThe weighted kappa ranged between 0.75 and 0.92 for individual muscle groups and was 0.87 between raters.\n\n\nINTERPRETATION\nThe ASAS complies with the definition of spasticity and is clinically feasible in paediatric settings. Our estimates of interrater reliability for the ASAS exceed that of the most commonly used spasticity scoring systems.",
"title": ""
},
{
"docid": "ceb6d99e16e2e93e57e65bf1ca89b44c",
"text": "The common use of smart devices encourages potential attackers to violate privacy. Sometimes taking control of one device allows the attacker to obtain secret data (such as password for home WiFi network) or tools to carry out DoS attack, and this, despite the limited resources of such devices. One of the solutions for gaining users’ confidence is to assign responsibility for detecting attacks to the service provider, particularly Internet Service Provider (ISP). It is possible, since ISP often provides also the Home Gateway (HG)—device that has multiple roles: residential router, entertainment center, and home’s “command and control” center which allows to manage the Smart Home entities. The ISP may extend this set of functionalities by implementing an intrusion detection software in HG provisioned to their customers. In this article we propose an Intrusion Detection System (IDS) distributed between devices residing at user’s and ISP’s premises. The Home Gateway IDS and the ISP’s IDS constitute together a distributed structure which allows spreading computations related to attacks against Smart Home ecosystem. On the other hand, it also leverages the operator’s knowledge of security incidents across the customer premises. This distributed structure is supported by the ISP’s expert system that helps to detect distributed attacks i.e., using botnets.",
"title": ""
},
{
"docid": "25cbc3f8f9ecbeb89c2c49c044e61c2a",
"text": "This study investigated lying behavior and the behavior of people who are deceived by using a deception game (Gneezy, 2005) in both anonymity and face-to-face treatments. Subjects consist of students and non-students (citizens) to investigate whether lying behavior is depended on socioeconomic backgrounds. To explore how liars feel about lying, we give senders a chance to confess their behaviors to their counter partner for the guilty aversion of lying. The following results are obtained: i) a frequency of lying behavior for students is significantly higher than that for non-students at a payoff in the anonymity treatment, but that is not significantly difference between the anonymity and face-to-face treatments; ii) lying behavior is not influenced by gender; iii) a frequency of confession is higher in the face-to-face treatment than in the anonymity treatment; and iv) the receivers who are deceived are more likely to believe a sender’s message to be true in the anonymity treatment. This study implies that the existence of the partner prompts liars to confess their behavior because they may feel remorse or guilt.",
"title": ""
},
{
"docid": "a696fd5e0328b27d8d952bdadfd6f58c",
"text": "Aiming at the problem of low speed of 3D reconstruction of indoor scenes with monocular vision, the color images and depth images of indoor scenes based on ASUS Xtion monocular vision sensor were used for 3D reconstruction. The image feature extraction using the ORB feature detection algorithm, and compared the efficiency of several kinds of classic feature detection algorithm in image matching, Ransac algorithm and ICP algorithm are used to point cloud fusion. Through experiments, a fast 3D reconstruction method for indoor, simple and small-scale static environment is realized. Have good accuracy, robustness, real-time and flexibility.",
"title": ""
},
{
"docid": "a673945eaa9b5a350f7d7421c45ac238",
"text": "The intention of this study was to identify the bacterial pathogens infecting Oreochromis niloticus (Nile tilapia) and Clarias gariepinus (African catfish), and to establish the antibiotic susceptibility of fish bacteria in Uganda. A total of 288 fish samples from 40 fish farms (ponds, cages, and tanks) and 8 wild water sites were aseptically collected and bacteria isolated from the head kidney, liver, brain and spleen. The isolates were identified by their morphological characteristics, conventional biochemical tests and Analytical Profile Index test kits. Antibiotic susceptibility of selected bacteria was determined by the Kirby-Bauer disc diffusion method. The following well-known fish pathogens were identified at a farm prevalence of; Aeromonas hydrophila (43.8%), Aeromonas sobria (20.8%), Edwardsiella tarda (8.3%), Flavobacterium spp. (4.2%) and Streptococcus spp. (6.3%). Other bacteria with varying significance as fish pathogens were also identified including Plesiomonas shigelloides (25.0%), Chryseobacterium indoligenes (12.5%), Pseudomonas fluorescens (10.4%), Pseudomonas aeruginosa (4.2%), Pseudomonas stutzeri (2.1%), Vibrio cholerae (10.4%), Proteus spp. (6.3%), Citrobacter spp. (4.2%), Klebsiella spp. (4.2%) Serratia marcescens (4.2%), Burkholderia cepacia (2.1%), Comamonas testosteroni (8.3%) and Ralstonia picketti (2.1%). Aeromonas spp., Edwardsiella tarda and Streptococcus spp. were commonly isolated from diseased fish. Aeromonas spp. (n = 82) and Plesiomonas shigelloides (n = 73) were evaluated for antibiotic susceptibility. All isolates tested were susceptible to at-least ten (10) of the fourteen antibiotics evaluated. High levels of resistance were however expressed by all isolates to penicillin, oxacillin and ampicillin. This observed resistance is most probably intrinsic to those bacteria, suggesting minimal levels of acquired antibiotic resistance in fish bacteria from the study area. To our knowledge, this is the first study to establish the occurrence of several bacteria species infecting fish; and to determine antibiotic susceptibility of fish bacteria in Uganda. The current study provides baseline information for future reference and fish disease management in the country.",
"title": ""
},
{
"docid": "cc34a912fb5e1fbb2a1b87d1c79ac01f",
"text": "Amyotrophic lateral sclerosis (ALS) is a devastating neurodegenerative disorder characterized by death of motor neurons leading to muscle wasting, paralysis, and death, usually within 2-3 years of symptom onset. The causes of ALS are not completely understood, and the neurodegenerative processes involved in disease progression are diverse and complex. There is substantial evidence implicating oxidative stress as a central mechanism by which motor neuron death occurs, including elevated markers of oxidative damage in ALS patient spinal cord and cerebrospinal fluid and mutations in the antioxidant enzyme superoxide dismutase 1 (SOD1) causing approximately 20% of familial ALS cases. However, the precise mechanism(s) by which mutant SOD1 leads to motor neuron degeneration has not been defined with certainty, and the ultimate trigger for increased oxidative stress in non-SOD1 cases remains unclear. Although some antioxidants have shown potential beneficial effects in animal models, human clinical trials of antioxidant therapies have so far been disappointing. Here, the evidence implicating oxidative stress in ALS pathogenesis is reviewed, along with how oxidative damage triggers or exacerbates other neurodegenerative processes, and we review the trials of a variety of antioxidants as potential therapies for ALS.",
"title": ""
},
{
"docid": "2b4e99635b4b360e16de5cd6430c8f37",
"text": "We review recent theoretical and experimental advances in the elucidation of the dynamics of light harvesting in photosynthesis, focusing on recent theoretical developments in structure-based modeling of electronic excitations in photosynthetic complexes and critically examining theoretical models for excitation energy transfer. We then briefly describe two-dimensional electronic spectroscopy and its application to the study of photosynthetic complexes, in particular the Fenna-Matthews-Olson complex from green sulfur bacteria. This review emphasizes recent experimental observations of long-lasting quantum coherence in photosynthetic systems and the implications of quantum coherence in natural photosynthesis.",
"title": ""
},
{
"docid": "8acfcaaa00cbfe275f6809fdaa3c6a78",
"text": "Internet usage has drastically shifted from host-centric end-to-end communication to receiver-driven content retrieval. In order to adapt to this change, a handful of innovative information/content centric networking (ICN) architectures have recently been proposed. One common and important feature of these architectures is to leverage built-in network caches to improve the transmission efficiency of content dissemination. Compared with traditional Web Caching and CDN Caching, ICN Cache takes on several new characteristics: cache is transparent to applications, cache is ubiquitous, and content to be cached is more ine-grained. These distinguished features pose new challenges to ICN caching technologies. This paper presents a comprehensive survey of state-of-art techniques aiming to address these issues, with particular focus on reducing cache redundancy and improving the availability of cached content. As a new research area, this paper also points out several interesting yet challenging research directions in this subject. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "526854ab5bf3c01f9e88dee8aeaa8dda",
"text": "Key establishment in sensor networks is a challenging problem because asymmetric key cryptosystems are unsuitable for use in resource constrained sensor nodes, and also because the nodes could be physically compromised by an adversary. We present three new mechanisms for key establishment using the framework of pre-distributing a random set of keys to each node. First, in the q-composite keys scheme, we trade off the unlikeliness of a large-scale network attack in order to significantly strengthen random key predistribution’s strength against smaller-scale attacks. Second, in the multipath-reinforcement scheme, we show how to strengthen the security between any two nodes by leveraging the security of other links. Finally, we present the random-pairwise keys scheme, which perfectly preserves the secrecy of the rest of the network when any node is captured, and also enables node-to-node authentication and quorum-based revocation.",
"title": ""
},
{
"docid": "b8fbc833251af14511192f51d7d692e1",
"text": "Elliptic curve cryptography (ECC) is an alternative to traditional techniques for public key cryptography. It offers smaller key size without sacrificing security level. In a typical elliptic curve cryptosystem, elliptic curve point multiplication is the most computationally expensive component. So it would be more attractive to implement this unit using hardware than using software. In this paper, we propose an efficient FPGA implementation of the elliptic curve point multiplication in GF(2). We have designed and synthesized the elliptic curve point multiplication with Xilinx’s FPGA. Experimental results demonstrate that the FPGA implementation can speedup the point multiplication by 31.6 times compared to a software based implementation.",
"title": ""
},
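For context on the abstract above: the computationally expensive component it accelerates is elliptic curve point multiplication. The sketch below shows the standard left-to-right double-and-add loop in software over a toy prime-field curve; it is only meant to illustrate the control structure, since the paper's FPGA design operates over GF(2^m) with hardware field arithmetic, and the curve parameters here are made up for readability.

```python
from typing import Optional, Tuple

Point = Optional[Tuple[int, int]]  # None represents the point at infinity

# Toy curve y^2 = x^3 + 2x + 3 over F_97 (assumed parameters for illustration only)
P_MOD, A, B = 97, 2, 3

def point_add(p: Point, q: Point) -> Point:
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                   # P + (-P) = O
    if p == q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD          # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k: int, p: Point) -> Point:
    """Left-to-right double-and-add: the core loop an ECC accelerator has to realize."""
    result: Point = None
    for bit in bin(k)[2:]:
        result = point_add(result, result)            # double
        if bit == "1":
            result = point_add(result, p)             # add
    return result

G = (3, 6)  # on the toy curve: 3^3 + 2*3 + 3 = 36 = 6^2 (mod 97)
print(scalar_mult(5, G))
```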
{
"docid": "dfc7a31461a382f0574fadf36a8fd211",
"text": "---------------------------------------------------------------------***--------------------------------------------------------------------Abstract Road Traffic Accident is very serious matter of life. The World Health Organization (WHO) reports that about 1.24 million people of the world die annually on the roads. The Institute for Health Metrics and Evaluation (IHME) estimated about 907,900, 1.3 million and 1.4 million deaths from road traffic injuries in 1990, 2010 and 2013, respectively. Uttar Pradesh in particular one of the state of India, experiences the highest rate of such accidents. Thus, methods to reduce accident severity are of great interest to traffic agencies and the public at large. In this paper, we applied data mining technologies to link recorded road characteristics to accident severity and developed a set of rules that could be used by the Indian Traffic Agency to improve safety and could help to save precious life.",
"title": ""
},
{
"docid": "09e07f66760c1216e6e01841af2e48b7",
"text": "Traditional approaches to rule-based information extraction (IE) have primarily been based on regular expression grammars. However, these grammar-based systems have difficulty scaling to large data sets and large numbers of rules. Inspired by traditional database research, we propose an algebraic approach to rule-based IE that addresses these scalability issues through query optimization. The operators of our algebra are motivated by our experience in building several rule-based extraction programs over diverse data sets. We present the operators of our algebra and propose several optimization strategies motivated by the text-specific characteristics of our operators. Finally we validate the potential benefits of our approach by extensive experiments over real-world blog data.",
"title": ""
},
{
"docid": "a85511bfaa47701350f4d97ec94453fd",
"text": "We propose a novel expression transfer method based on an analysis of the frequency of multi-expression facial images. We locate the facial features automatically and describe the shape deformations between a neutral expression and non-neutral expressions. The subtle expression changes are important visual clues to distinguish different expressions. These changes are more salient in the frequency domain than in the image domain. We extract the subtle local expression deformations for the source subject, coded in the wavelet decomposition. This information about expressions is transferred to a target subject. The resulting synthesized image preserves both the facial appearance of the target subject and the expression details of the source subject. This method is extended to dynamic expression transfer to allow a more precise interpretation of facial expressions. Experiments on Japanese Female Facial Expression (JAFFE), the extended Cohn-Kanade (CK+) and PIE facial expression databases show the superiority of our method over the state-of-the-art method.",
"title": ""
},
{
"docid": "c9ee0f9d3a8fb12eadfe177b8552eab8",
"text": "In rock climbing, discussing climbing techniques with others to master a specific route and getting practical advice from more experienced climbers is an inherent part of the culture and tradition of the sport. Spatial information, such as the position of holds, as well as learning complex body postures plays a major role in this process. A typical problem that occurs during advising is an alignment effect when trying to picture orientation-specific knowledge, e.g. explaining how to perform a certain self-climbed move to others. We propose betaCube, a self-calibrating camera-projection unit that features 3D tracking and distortion-free projection. The system enables a life-sized video replay and climbing route creation using augmented reality. We contribute an interface for automatic setup of mobile distortion-free projection, blob detection for climbing holds, as well as an automatic method for extracting planar trackables from artificial climbing walls.",
"title": ""
},
{
"docid": "8d584bb017f9cfd386b1bbc9fd0fd557",
"text": "Parking Assistance System (PAS) provides useful help to beginners or less experienced drivers in complicated urban parking scenarios. In recent years, ultrasonic sensor based PAS and rear-view camera based PAS have been proposed from different car manufacturers. However, ultrasonic sensors detection distance is less than 3 meters and results cannot be used to extract further information like obstacle recognition. Rear-view camera based systems cannot provide assistance to the circumstances like parallel parking which need a wider view. In this paper, we proposed a surround view based parking lot detection algorithm. An efficient tracking algorithm was proposed to solve the tracking problem when detected parking slots were falling out of the surround view. Experimental results on simulation and real outdoor environment showed the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "0ce4a0dfe5ea87fb87f5d39b13196e94",
"text": "Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector QuantisedVariational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” -— where the latents are ignored when they are paired with a powerful autoregressive decoder -— typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.",
"title": ""
}
] |
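As a side note on the last passage above: the discrete bottleneck of VQ-VAE replaces each encoder output vector with its nearest codebook entry. The NumPy sketch below shows only that quantisation step; the array shapes and codebook size are assumptions, and the training details (straight-through gradients, codebook and commitment losses) are summarized in a comment rather than implemented.

```python
import numpy as np

def vector_quantize(z_e: np.ndarray, codebook: np.ndarray):
    """
    z_e:      (N, D) continuous encoder outputs
    codebook: (K, D) learned embedding vectors
    Returns the discrete code indices and the quantised vectors z_q.
    """
    # Squared Euclidean distance from every encoder vector to every codebook entry
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # (N, K)
    codes = d.argmin(axis=1)                                      # (N,)
    z_q = codebook[codes]                                         # (N, D)
    return codes, z_q

# In training, the non-differentiable argmin is bypassed with a straight-through
# estimator (gradients copied from z_q back to z_e), plus codebook and commitment losses.
rng = np.random.default_rng(0)
codes, z_q = vector_quantize(rng.normal(size=(4, 8)), rng.normal(size=(16, 8)))
print(codes, z_q.shape)
```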
scidocsrr
|
39707751b2f6aaea677ef953aee4ed47
|
Bridgeless SEPIC PFC Rectifier With Reduced Components and Conduction Losses
|
[
{
"docid": "b79110b1145fc8a35f20efdf0029fbac",
"text": "In this paper, a new bridgeless single-phase AC-DC converter with an automatic power factor correction (PFC) is proposed. The proposed rectifier is based on the single-ended primary inductance converter (SEPIC) topology and it utilizes a bidirectional switch and two fast diodes. The absence of an input diode bridge and the presence of only one diode in the flowing-current path during each switching cycle result in less conduction loss and improved thermal management compared to existing PFC rectifiers. Other advantages include simple control circuitry, reduced switch voltage stress, and low electromagnetic-interference noise. Performance comparison between the proposed and the conventional SEPIC PFC rectifier is performed. Simulation and experimental results are presented to demonstrate the feasibility of the proposed technique.",
"title": ""
}
] |
[
{
"docid": "5d6c2580602945084d5a643c335c40f2",
"text": "Probabilistic topic models are a suite of algorithms whose aim is to discover the hidden thematic structure in large archives of documents. In this article, we review the main ideas of this field, survey the current state-of-the-art, and describe some promising future directions. We first describe latent Dirichlet allocation (LDA) [8], which is the simplest kind of topic model. We discuss its connections to probabilistic modeling, and describe two kinds of algorithms for topic discovery. We then survey the growing body of research that extends and applies topic models in interesting ways. These extensions have been developed by relaxing some of the statistical assumptions of LDA, incorporating meta-data into the analysis of the documents, and using similar kinds of models on a diversity of data types such as social networks, images and genetics. Finally, we give our thoughts as to some of the important unexplored directions for topic modeling. These include rigorous methods for checking models built for data exploration, new approaches to visualizing text and other high dimensional data, and moving beyond traditional information engineering applications towards using topic models for more scientific ends.",
"title": ""
},
{
"docid": "14f7eb98dc3d24c94eb733a438127893",
"text": "Web users exhibit a variety of navigational interests through clicking a sequence of Web pages. Analysis of Web usage data will lead to discover Web user access pattern and facilitate users locate more preferable Web pages via collaborative recommending technique. Meanwhile, latent semantic analysis techniques provide a powerful means to capture user access pattern and associated task space. In this paper, we propose a collaborative Web recommendation framework, which employs Latent Dirichlet Allocation (LDA) to model underlying topic-simplex space and discover the associations between user sessions and multiple topics via probability inference. Experiments conducted on real Website usage dataset show that this approach can achieve better recommendation accuracy in comparison to existing techniques. The discovered topic-simplex expression can also provide a better interpretation of user navigational preference",
"title": ""
},
{
"docid": "996ed1bfadc4363d4717c6bd4da6ab89",
"text": "The recognition of dyslexia as a neurodevelopmental disorder has been hampered by the belief that it is not a specific diagnostic entity because it has variable and culture-specific manifestations. In line with this belief, we found that Italian dyslexics, using a shallow orthography which facilitates reading, performed better on reading tasks than did English and French dyslexics. However, all dyslexics were equally impaired relative to their controls on reading and phonological tasks. Positron emission tomography scans during explicit and implicit reading showed the same reduced activity in a region of the left hemisphere in dyslexics from all three countries, with the maximum peak in the middle temporal gyrus and additional peaks in the inferior and superior temporal gyri and middle occipital gyrus. We conclude that there is a universal neurocognitive basis for dyslexia and that differences in reading performance among dyslexics of different countries are due to different orthographies.",
"title": ""
},
{
"docid": "4345ed089e019402a5a4e30497bccc8a",
"text": "BACKGROUND\nFluridil, a novel topical antiandrogen, suppresses the human androgen receptor. While highly hydrophobic and hydrolytically degradable, it is systemically nonresorbable. In animals, fluridil demonstrated high local and general tolerance.\n\n\nOBJECTIVE\nTo evaluate the safety and efficacy of a topical anti- androgen, fluridil, in male androgenetic alopecia.\n\n\nMETHODS\nIn 20 men, for 21 days, occlusive forearm patches with 2, 4, and 6% fluridil, isopropanol, and/or vaseline were applied. In 43 men with androgenetic alopecia (AGA), Norwood grade II-Va, 2% fluridil was evaluated in a double-blind, placebo-controlled study after 3 months clinically by phototrichograms, hematology, and blood chemistry including analysis for fluridil, and at 9 months by phototrichograms.\n\n\nRESULTS\nNeither fluridil nor isopropanol showed sensitization/irritation potential, unlike vaseline. In all AGA subjects, baseline anagen/telogen counts were equal. After 3 months, the average anagen percentage did not change in placebo subjects, but increased in fluridil subjects from 76% to 85%, and at 9 months to 87%. In former placebo subjects, fluridil increased the anagen percentage after 6 months from 76% to 85%. Sexual functions, libido, hematology, and blood chemistry values were normal throughout, except that at 3 months, in the spring, serum testosterone increased within the normal range equally in placebo and fluridil groups. No fluridil or its decomposition product, BP-34, was detectable in the serum at 0, 3, or 90 days.\n\n\nCONCLUSION\nTopical fluridil is nonirritating, nonsensitizing, nonresorbable, devoid of systemic activity, and anagen promoting after daily use in most AGA males.",
"title": ""
},
{
"docid": "962003dc153dcb7cce754be8846ad62b",
"text": "Though the growing popularity of software-based middleboxes raises new requirements for network stack functionality, existing network stack have fundamental challenges in supporting the development of high-performance middlebox applications in a fast and flexible manner. In this work, we design and implement an enriched, programmable, and extensible network stack and its API to support the various requirements of middlebox applications. mOS supports proxy and monitoring function as well as traditional end TCP stack function. Further, we allow applications extend TCP functionality by hooking in middle of TCP processing and define user-level events on TCP state. Meanwhile, Epoll-like API allows applications manipulate read/write from/to byte stream buffers in an efficient way. To support an efficient consolidation of multiple middlebox applications in a single machine, mOS will allow multiple middlebox applications share the same TCP processing context without duplicated IP/TCP processing. We show that mOS can support various middlebox applications in an easy and efficient way without building TCP functionality from scratch.",
"title": ""
},
{
"docid": "3e357c91292ba1e1055fc3a493aba4eb",
"text": "The study of online social networks has attracted increasing interest. However, concerns are raised for the privacy risks of user data since they have been frequently shared among researchers, advertisers, and application developers. To solve this problem, a number of anonymization algorithms have been recently developed for protecting the privacy of social graphs. In this article, we proposed a graph node similarity measurement in consideration with both graph structure and descriptive information, and a deanonymization algorithm based on the measurement. Using the proposed algorithm, we evaluated the privacy risks of several typical anonymization algorithms on social graphs with thousands of nodes from Microsoft Academic Search, LiveJournal, and the Enron email dataset, and a social graph with millions of nodes from Tencent Weibo. Our results showed that the proposed algorithm was efficient and effective to deanonymize social graphs without any initial seed mappings. Based on the experiments, we also pointed out suggestions on how to better maintain the data utility while preserving privacy.",
"title": ""
},
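The passage above describes combining structural and descriptive information into a node similarity score. As a rough illustration only — the paper's actual measure, weighting, and matching procedure are not specified here — a minimal Python sketch might look like this, with `alpha` as an assumed weighting parameter:

```python
# Illustrative sketch (not the paper's exact measure): combine structural
# similarity (Jaccard overlap of neighbor sets) with descriptive similarity
# (fraction of matching attribute values) into one weighted score.

def node_similarity(neighbors_a, neighbors_b, attrs_a, attrs_b, alpha=0.5):
    """Similarity in [0, 1] between one node from each graph."""
    union = neighbors_a | neighbors_b
    structural = len(neighbors_a & neighbors_b) / len(union) if union else 0.0

    keys = set(attrs_a) | set(attrs_b)
    descriptive = (
        sum(1 for k in keys if attrs_a.get(k) == attrs_b.get(k)) / len(keys)
        if keys else 0.0
    )
    return alpha * structural + (1 - alpha) * descriptive

# A seed-free deanonymizer could then greedily match the highest-scoring pairs.
print(node_similarity({1, 2, 3}, {2, 3, 4}, {"city": "Boston"}, {"city": "Boston"}))
```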
{
"docid": "554a0628270978757eda989c67ac3416",
"text": "An accurate rainfall forecasting is very important for agriculture dependent countries like India. For analyzing the crop productivity, use of water resources and pre-planning of water resources, rainfall prediction is important. Statistical techniques for rainfall forecasting cannot perform well for long-term rainfall forecasting due to the dynamic nature of climate phenomena. Artificial Neural Networks (ANNs) have become very popular, and prediction using ANN is one of the most widely used techniques for rainfall forecasting. This paper provides a detailed survey and comparison of different neural network architectures used by researchers for rainfall forecasting. The paper also discusses the issues while applying different neural networks for yearly/monthly/daily rainfall forecasting. Moreover, the paper also presents different accuracy measures used by researchers for evaluating performance of ANN.",
"title": ""
},
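To make the surveyed setup concrete, here is a minimal sketch of one common configuration: a feed-forward network predicting next-month rainfall from lagged monthly values. The synthetic series, lag length, and hyperparameters are placeholders, not values taken from any particular study in the survey.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
monthly_rainfall = rng.gamma(shape=2.0, scale=50.0, size=240)  # synthetic series (mm)

k = 12  # use one year of lagged values as input features
X = np.array([monthly_rainfall[i:i + k] for i in range(len(monthly_rainfall) - k)])
y = monthly_rainfall[k:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:-12], y[:-12])                      # hold out the final year for testing
print("MAE (mm):", np.abs(model.predict(X[-12:]) - y[-12:]).mean())
```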
{
"docid": "7b1a6768cc6bb975925a754343dc093c",
"text": "In response to the increasing volume of trajectory data obtained, e.g., from tracking athletes, animals, or meteorological phenomena, we present a new space-efficient algorithm for the analysis of trajectory data. The algorithm combines techniques from computational geometry, data mining, and string processing and offers a modular design that allows for a user-guided exploration of trajectory data incorporating domain-specific constraints and objectives.",
"title": ""
},
{
"docid": "3c1db6405945425c61495dd578afd83f",
"text": "This paper describes a novel driver-support system that helps to maintain the correct speed and headway (distance) with respect to lane curvature and other vehicles ahead. The system has been developed as part of the Integrating Project PReVENT under the European Framework Programme 6, which is named SAfe SPEed and safe distaNCE (SASPENCE). The application uses a detailed description of the situation ahead of the vehicle. Many sensors [radar, video camera, Global Positioning System (GPS) and accelerometers, digital maps, and vehicle-to-vehicle wireless local area network (WLAN) connections] are used, and state-of-the-art data fusion provides a model of the environment. The system then computes a feasible maneuver and compares it with the driver's behavior to detect possible mistakes. The warning strategies are based on this comparison. The system “talks” to the driver mainly via a haptic pedal or seat belt and “listens” to the driver mainly via the vehicle acceleration. This kind of operation, i.e., the comparison between what the system thinks is possible and what the driver appears to be doing, and the consequent dialog can be regarded as simple implementations of the rider-horse metaphor (H-metaphor). The system has been tested in several situations (driving simulator, hardware in the loop, and real road tests). Objective and subjective data have been collected, revealing good acceptance and effectiveness, particularly in awakening distracted drivers. The system intervenes only when a problem is actually detected in the headway and/or speed (approaching curves or objects) and has been shown to cause prompt reactions and significant speed correction before getting into really dangerous situations.",
"title": ""
},
{
"docid": "0fca0826e166ddbd4c26fe16086ff7ec",
"text": "Enteric redmouth disease (ERM) is a serious septicemic bacterial disease of salmonid fish species. It is caused by Yersinia ruckeri, a Gram-negative rod-shaped enterobacterium. It has a wide host range, broad geographical distribution, and causes significant economic losses in the fish aquaculture industry. The disease gets its name from the subcutaneous hemorrhages, it can cause at the corners of the mouth and in gums and tongue. Other clinical signs include exophthalmia, darkening of the skin, splenomegaly and inflammation of the lower intestine with accumulation of thick yellow fluid. The bacterium enters the fish via the secondary gill lamellae and from there it spreads to the blood and internal organs. Y. ruckeri can be detected by conventional biochemical, serological and molecular methods. Its genome is 3.7 Mb with 3406-3530 coding sequences. Several important virulence factors of Y. ruckeri have been discovered, including haemolyin YhlA and metalloprotease Yrp1. Both non-specific and specific immune responses of fish during the course of Y. ruckeri infection have been well characterized. Several methods of vaccination have been developed for controlling both biotype 1 and biotype 2 Y. ruckeri strains in fish. This review summarizes the current state of knowledge regarding enteric redmouth disease and Y. ruckeri: diagnosis, genome, virulence factors, interaction with the host immune responses, and the development of vaccines against this pathogen.",
"title": ""
},
{
"docid": "77ac1b0810b308cf9e957189c832f421",
"text": "We describe TensorFlow-Serving, a system to serve machine learning models inside Google which is also available in the cloud and via open-source. It is extremely flexible in terms of the types of ML platforms it supports, and ways to integrate with systems that convey new models and updated versions from training to serving. At the same time, the core code paths around model lookup and inference have been carefully optimized to avoid performance pitfalls observed in naive implementations. Google uses it in many production deployments, including a multi-tenant model hosting service called TFS2.",
"title": ""
},
{
"docid": "fa471f49367e03e57e7739d253385eaf",
"text": "■ Abstract The literature on effects of habitat fragmentation on biodiversity is huge. It is also very diverse, with different authors measuring fragmentation in different ways and, as a consequence, drawing different conclusions regarding both the magnitude and direction of its effects. Habitat fragmentation is usually defined as a landscape-scale process involving both habitat loss and the breaking apart of habitat. Results of empirical studies of habitat fragmentation are often difficult to interpret because ( a) many researchers measure fragmentation at the patch scale, not the landscape scale and ( b) most researchers measure fragmentation in ways that do not distinguish between habitat loss and habitat fragmentation per se, i.e., the breaking apart of habitat after controlling for habitat loss. Empirical studies to date suggest that habitat loss has large, consistently negative effects on biodiversity. Habitat fragmentation per se has much weaker effects on biodiversity that are at least as likely to be positive as negative. Therefore, to correctly interpret the influence of habitat fragmentation on biodiversity, the effects of these two components of fragmentation must be measured independently. More studies of the independent effects of habitat loss and fragmentation per se are needed to determine the factors that lead to positive versus negative effects of fragmentation per se. I suggest that the term “fragmentation” should be reserved for the breaking apart of habitat, independent of habitat loss.",
"title": ""
},
{
"docid": "ff8fd8bebb7e86b8d636ae528901b57f",
"text": "The ICH quality vision introduced the concept of quality by design (QbD), which requires a greater understanding of the raw material attributes, of process parameters, of their variability and their interactions. Microcrystalline cellulose (MCC) is one of the most important tableting excipients thanks to its outstanding dry binding properties, enabling the manufacture of tablets by direct compression (DC). DC remains the most economical technique to produce large batches of tablets, however its efficacy is directly impacted by the raw material attributes. Therefore excipients' variability and their impact on drug product performance need to be thoroughly understood. To help with this process, this review article gathers prior knowledge on MCC, focuses on its use in DC and lists some of its potential critical material attributes (CMAs).",
"title": ""
},
{
"docid": "38a7f57900474553f6979131e7f39e5d",
"text": "A cascade switched-capacitor ΔΣ analog-to-digital converter, suitable for WLANs, is presented. It uses a double-sampling scheme with single set of DAC capacitors, and an improved low-distortion architecture with an embedded-adder integrator. The proposed architecture eliminates one active stage, and reduces the output swings in the loop-filter and hence the non-linearity. It was fabricated with a 0.18um CMOS process. The prototype chip achieves 75.5 dB DR, 74 dB SNR, 73.8 dB SNDR, −88.1 dB THD, and 90.2 dB SFDR over a 10 MHz signal band with an FoM of 0.27 pJ/conv-step.",
"title": ""
},
{
"docid": "187127dd1ab5f97b1158a77a25ddce91",
"text": "We introduce stochastic variational inference for Gaussian process models. This enables the application of Gaussian process (GP) models to data sets containing millions of data points. We show how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables which factorize the model in the necessary manner to perform variational inference. Our approach is readily extended to models with non-Gaussian likelihoods and latent variable models based around Gaussian processes. We demonstrate the approach on a simple toy problem and two real world data sets.",
"title": ""
},
{
"docid": "b4c8ebb06c527c81e568c82afb2d4b6d",
"text": "Kriging or Gaussian Process Regression is applied in many fields as a non-linear regression model as well as a surrogate model in the field of evolutionary computation. However, the computational and space complexity of Kriging, that is cubic and quadratic in the number of data points respectively, becomes a major bottleneck with more and more data available nowadays. In this paper, we propose a general methodology for the complexity reduction, called cluster Kriging, where the whole data set is partitioned into smaller clusters and multiple Kriging models are built on top of them. In addition, four Kriging approximation algorithms are proposed as candidate algorithms within the new framework. Each of these algorithms can be applied to much larger data sets while maintaining the advantages and power of Kriging. The proposed algorithms are explained in detail and compared empirically against a broad set of existing state-of-the-art Kriging approximation methods on a welldefined testing framework. According to the empirical study, the proposed algorithms consistently outperform the existing algorithms. Moreover, some practical suggestions are provided for using the proposed algorithms.",
"title": ""
},
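The core idea in the passage above — partition the data, fit one Kriging (Gaussian process) model per cluster, and route queries to a local model — can be sketched in a few lines. This is a deliberately simplified variant (nearest-centroid routing rather than any of the four combination schemes proposed in the paper):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X = rng.uniform(-5, 5, size=(3000, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(len(X))

k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=1).fit(X)
models = []
for c in range(k):                      # the cubic GP cost now applies per cluster only
    idx = km.labels_ == c
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
    models.append(gp.fit(X[idx], y[idx]))

X_test = np.linspace(-5, 5, 5).reshape(-1, 1)
nearest = km.predict(X_test)            # route each query to its cluster's model
print([float(models[c].predict(x.reshape(1, -1))[0]) for c, x in zip(nearest, X_test)])
```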
{
"docid": "2a422c6047bca5a997d5c3d0ee080437",
"text": "Connecting mathematical logic and computation, it ensures that some aspects of programming are absolute.",
"title": ""
},
{
"docid": "626e4d90b16a4e874c391d79b3ec39fe",
"text": "We propose novel neural temporal models for predicting and synthesizing human motion, achieving state-of-theart in modeling long-term motion trajectories while being competitive with prior work in short-term prediction, with significantly less required computation. Key aspects of our proposed system include: 1) a novel, two-level processing architecture that aids in generating planned trajectories, 2) a simple set of easily computable features that integrate derivative information into the model, and 3) a novel multi-objective loss function that helps the model to slowly progress from the simpler task of next-step prediction to the harder task of multi-step closed-loop prediction. Our results demonstrate that these innovations facilitate improved modeling of long-term motion trajectories. Finally, we propose a novel metric, called Normalized Power Spectrum Similarity (NPSS), to evaluate the long-term predictive ability of motion synthesis models, complementing the popular mean-squared error (MSE) measure of the Euler joint angles over time. We conduct a user study to determine if the proposed NPSS correlates with human evaluation of longterm motion more strongly than MSE and find that it indeed does.",
"title": ""
}
] |
scidocsrr
|
881dccbf7e1eb78c1904275e03904671
|
Multimodal Prediction of Affective Dimensions and Depression in Human-Computer Interactions
|
[
{
"docid": "80bf80719a1751b16be2420635d34455",
"text": "Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence, arousal and dominance. In addition to structured self-report questionnaires, psychologists and psychiatrists use in their evaluation of a patient's level of depression the observation of facial expressions and vocal cues. It is in this context that we present the fourth Audio-Visual Emotion recognition Challenge (AVEC 2014). This edition of the challenge uses a subset of the tasks used in a previous challenge, allowing for more focussed studies. In addition, labels for a third dimension (Dominance) have been added and the number of annotators per clip has been increased to a minimum of three, with most clips annotated by 5. The challenge has two goals logically organised as sub-challenges: the first is to predict the continuous values of the affective dimensions valence, arousal and dominance at each moment in time. The second is to predict the value of a single self-reported severity of depression indicator for each recording in the dataset. This paper presents the challenge guidelines, the common data used, and the performance of the baseline system on the two tasks.",
"title": ""
}
] |
[
{
"docid": "d12d5344268cd0f1ff05608009b88c2f",
"text": "Guidelines, directives, and policy statements are usually presented in “linear” text form - word after word, page after page. However necessary, this practice impedes full understanding, obscures feedback dynamics, hides mutual dependencies and cascading effects and the like, — even when augmented with tables and diagrams. The net result is often a checklist response as an end in itself. All this creates barriers to intended realization of guidelines and undermines potential effectiveness. We present a solution strategy using text as “data”, transforming text into a structured model, and generate a network views of the text(s), that we then can use for vulnerability mapping, risk assessments and control point analysis. We apply this approach using two NIST reports on cybersecurity of smart grid, more than 600 pages of text. Here we provide a synopsis of approach, methods, and tools. (Elsewhere we consider (a) system-wide level, (b) aviation e-landscape, (c) electric vehicles, and (d) SCADA for smart grid).",
"title": ""
},
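As a generic illustration of the "text as data" step described above — not the NIST-specific model used in the paper — guideline sentences can be turned into a term co-occurrence network whose most central nodes are candidates for control-point analysis:

```python
import itertools
import networkx as nx

sentences = [  # hypothetical guideline fragments, for illustration only
    "utilities shall log access to substation controllers",
    "substation controllers report anomalies to the operations center",
    "the operations center shall review access logs weekly",
]

G = nx.Graph()
for s in sentences:
    for u, v in itertools.combinations(sorted(set(s.split())), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1          # strengthen repeated co-occurrence
        else:
            G.add_edge(u, v, weight=1)

centrality = nx.degree_centrality(G)
print(sorted(centrality, key=centrality.get, reverse=True)[:5])  # candidate control points
```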
{
"docid": "ca7e7fa988bf2ed1635e957ea6cd810d",
"text": "Knowledge graph (KG) is known to be helpful for the task of question answering (QA), since it provides well-structured relational information between entities, and allows one to further infer indirect facts. However, it is challenging to build QA systems which can learn to reason over knowledge graphs based on question-answer pairs alone. First, when people ask questions, their expressions are noisy (for example, typos in texts, or variations in pronunciations), which is non-trivial for the QA system to match those mentioned entities to the knowledge graph. Second, many questions require multi-hop logic reasoning over the knowledge graph to retrieve the answers. To address these challenges, we propose a novel and unified deep learning architecture, and an end-to-end variational learning algorithm which can handle noise in questions, and learn multi-hop reasoning simultaneously. Our method achieves state-of-the-art performance on a recent benchmark dataset in the literature. We also derive a series of new benchmark datasets, including questions for multi-hop reasoning, questions paraphrased by neural translation model, and questions in human voice. Our method yields very promising results on all these challenging datasets.",
"title": ""
},
{
"docid": "64acb2d16c23f2f26140c0bce1785c9b",
"text": "Physical forces of gravity, hemodynamic stresses, and movement play a critical role in tissue development. Yet, little is known about how cells convert these mechanical signals into a chemical response. This review attempts to place the potential molecular mediators of mechanotransduction (e.g. stretch-sensitive ion channels, signaling molecules, cytoskeleton, integrins) within the context of the structural complexity of living cells. The model presented relies on recent experimental findings, which suggests that cells use tensegrity architecture for their organization. Tensegrity predicts that cells are hard-wired to respond immediately to mechanical stresses transmitted over cell surface receptors that physically couple the cytoskeleton to extracellular matrix (e.g. integrins) or to other cells (cadherins, selectins, CAMs). Many signal transducing molecules that are activated by cell binding to growth factors and extracellular matrix associate with cytoskeletal scaffolds within focal adhesion complexes. Mechanical signals, therefore, may be integrated with other environmental signals and transduced into a biochemical response through force-dependent changes in scaffold geometry or molecular mechanics. Tensegrity also provides a mechanism to focus mechanical energy on molecular transducers and to orchestrate and tune the cellular response.",
"title": ""
},
{
"docid": "9175794d83b5f110fb9f08dc25a264b8",
"text": "We describe an investigation into e-mail content mining for author identification, or authorship attribution, for the purpose of forensic investigation. We focus our discussion on the ability to discriminate between authors for the case of both aggregated e-mail topics as well as across different e-mail topics. An extended set of e-mail document features including structural characteristics and linguistic patterns were derived and, together with a Support Vector Machine learning algorithm, were used for mining the e-mail content. Experiments using a number of e-mail documents generated by different authors on a set of topics gave promising results for both aggregated and multi-topic author categorisation.",
"title": ""
},
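A heavily simplified sketch of the classification step follows: the paper uses an extended set of structural and linguistic features with a Support Vector Machine, whereas this example falls back to plain character n-grams over a made-up toy corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

emails = [  # fabricated examples, not real e-mail data
    "Hi team, please find the minutes attached. Regards, A.",
    "hey - running late again, start w/o me!!",
    "Dear all, kindly note the revised schedule below. Regards, A.",
    "cya at 5, grab me a coffee pls",
]
authors = ["author_A", "author_B", "author_A", "author_B"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # stylistic character n-grams
    LinearSVC(),
)
clf.fit(emails, authors)
print(clf.predict(["Kindly see the attached agenda. Regards, A."]))
```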
{
"docid": "18e75ca50be98af1d5a6a2fd22b610d3",
"text": "We propose a new type of saliency—context-aware saliency—which aims at detecting the image regions that represent the scene. This definition differs from previous definitions whose goal is to either identify fixation points or detect the dominant object. In accordance with our saliency definition, we present a detection algorithm which is based on four principles observed in the psychological literature. The benefits of the proposed approach are evaluated in two applications where the context of the dominant objects is just as essential as the objects themselves. In image retargeting, we demonstrate that using our saliency prevents distortions in the important regions. In summarization, we show that our saliency helps to produce compact, appealing, and informative summaries.",
"title": ""
},
{
"docid": "8bdc4b79e71f8bb9f001c99ec3b5e039",
"text": "The \"tragedy of the commons\" metaphor helps explain why people overuse shared resources. However, the recent proliferation of intellectual property rights in biomedical research suggests a different tragedy, an \"anticommons\" in which people underuse scarce resources because too many owners can block each other. Privatization of biomedical research must be more carefully deployed to sustain both upstream research and downstream product development. Otherwise, more intellectual property rights may lead paradoxically to fewer useful products for improving human health.",
"title": ""
},
{
"docid": "14e0664fcbc2e29778a1ccf8744f4ca5",
"text": "Mobile offloading migrates heavy computation from mobile devices to cloud servers using one or more communication network channels. Communication interfaces vary in speed, energy consumption and degree of availability. We assume two interfaces: WiFi, which is fast with low energy demand but not always present and cellular, which is slightly slower has higher energy consumption but is present at all times. We study two different communication strategies: one that selects the best available interface for each transmitted packet and the other multiplexes data across available communication channels. Since the latter may experience interrupts in the WiFi connection packets can be delayed. We call it interrupted strategy as opposed to the uninterrupted strategy that transmits packets only over currently available networks. Two key concerns of mobile offloading are the energy use of the mobile terminal and the response time experienced by the user of the mobile device. In this context, we investigate three different metrics that express the energy-performance tradeoff, the known Energy-Response time Weighted Sum (EWRS), the Energy-Response time Product (ERP) and the Energy-Response time Weighted Product (ERWP) metric. We apply the metrics to the two different offloading strategies and find that the conclusions drawn from the analysis depend on the considered metric. In particular, while an additive metric is not normalised, which implies that the term using smaller scale is always favoured, the ERWP metric, which is new in this paper, allows to assign importance to both aspects without being misled by different scales. It combines the advantages of an additive metric and a product. The interrupted strategy can save energy especially if the focus in the tradeoff metric lies on the energy aspect. In general one can say that the uninterrupted strategy is faster, while the interrupted strategy uses less energy. A fast connection improves the response time much more than the fast repair of a failed connection. In conclusion, a short down-time of the transmission channel can mostly be tolerated.",
"title": ""
},
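The three metric families named above can be written down directly from their names; the exact definitions and weight conventions used in the paper may differ, so the formulas below are assumptions for illustration only.

```python
def ewrs(energy, time, w=0.5):   # Energy-Response time Weighted Sum (additive, scale-sensitive)
    return w * energy + (1 - w) * time

def erp(energy, time):           # Energy-Response time Product
    return energy * time

def erwp(energy, time, w=0.5):   # Energy-Response time Weighted Product (assumed form)
    return energy ** w * time ** (1 - w)

# Hypothetical numbers: energy in joules, response time in seconds.
uninterrupted = (12.0, 3.0)      # faster, more energy
interrupted = (9.0, 4.5)         # slower, less energy
for name, metric in [("EWRS", ewrs), ("ERP", erp), ("ERWP", erwp)]:
    print(name, metric(*uninterrupted), metric(*interrupted))
```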
{
"docid": "8b863cd49dfe5edc2d27a0e9e9db0429",
"text": "This paper presents an annotation scheme for adding entity and event target annotations to the MPQA corpus, a rich span-annotated opinion corpus. The new corpus promises to be a valuable new resource for developing systems for entity/event-level sentiment analysis. Such systems, in turn, would be valuable in NLP applications such as Automatic Question Answering. We introduce the idea of entity and event targets (eTargets), describe the annotation scheme, and present the results of an agreement study.",
"title": ""
},
{
"docid": "5679a329a132125d697369ca4d39b93e",
"text": "This paper proposes a method to explore the design space of FinFETs with double fin heights. Our study shows that if one fin height is sufficiently larger than the other and the greatest common divisor of their equivalent transistor widths is small, the fin height pair will incur less width quantization effect and lead to better area efficiency. We design a standard cell library based on this technology using a tailored FreePDK15. With respect to a standard cell library designed with FreePDK15, about 86% of the cells designed with FinFETs of double fin heights have a smaller delay and 54% of the cells take a smaller area. We also demonstrate the advantages of FinFETs with double fin heights through chip designs using our cell library.",
"title": ""
},
{
"docid": "80a61f27dab6a8f71a5c27437254778b",
"text": "5G will have to cope with a high degree of heterogeneity in terms of services and requirements. Among these latter, the flexible and efficient use of non-contiguous unused spectrum for different network deployment scenarios is considered a key challenge for 5G systems. To maximize spectrum efficiency, the 5G air interface technology will also need to be flexible and capable of mapping various services to the best suitable combinations of frequency and radio resources. In this work, we propose a comparison of several 5G waveform candidates (OFDM, UFMC, FBMC and GFDM) under a common framework. We assess spectral efficiency, power spectral density, peak-to-average power ratio and robustness to asynchronous multi-user uplink transmission. Moreover, we evaluate and compare the complexity of the different waveforms. In addition to the complexity analysis, in this work, we also demonstrate the suitability of FBMC for specific 5G use cases via two experimental implementations. The benefits of these new waveforms for the foreseen 5G use cases are clearly highlighted on representative criteria and experiments.",
"title": ""
},
{
"docid": "4e23bf1c89373abaf5dc096f76c893f3",
"text": "Clock and data recovery (CDR) circuit plays a vital role for wired serial link communication in multi mode based system on chip (SOC). In wire linked communication systems, when data flows without any accompanying clock over a single wire, the receiver of the system is required to recover this data synchronously without losing the information. Therefore there exists a need for CDR circuits in the receiver of the system for recovering the clock or timing information from these data. The existing Octa-rate CDR circuit is not compatible to real time data, such a data is unpredictable, non periodic and has different arrival times and phase widths. Thus the proposed PRN based Octa-rate Clock and Data Recovery circuit is made compatible to real time data by introducing a Random Sequence Generator. The proposed PRN based Octa-rate Clock and Data Recovery circuit consists of PRN Sequence Generator, 16-Phase Generator, Early Late Phase Detector and Delay Line Controller. The FSM based Delay Line Controller controls the delay length and introduces the required delay in the input data. The PRN based Octa-rate CDR circuit has been realized using Xilinx ISE 13.2 and implemented on Vertex-5 FPGA target device for real time verification. The delay between the input and the generation of output is measured and analyzed using Logic Analyzer AGILENT 1962 A.",
"title": ""
},
{
"docid": "7b627fa766382ead588c14e22541b766",
"text": "This book highlights the importance of anchoring education in an evidence base derived from neuroscience. For far too long has the brain been neglected in discussions on education and often information about neuroscientific research is not easy to access. Our aim was to provide a source book that conveys the excitement of neuroscience research that is relevant to learning and education. This research has largely, but not exclusively, been carried out using neuroimaging methods in the past decade or so, ranging from investigations of brain structure and function in dyslexia and dyscalculia to investigations of the changes in the hippocampus of London taxi drivers. To speak to teachers who might not have scientific backgrounds, we have tried to use nontechnical language as far as possible and have provided an appendix illustrating the main methods and techniques currently used and a glossary, defining terms from Acetylcholine, Action Potentials and ADHD to White Matter, Word Form Area and Working Memory. We start with the idea that the brain has evolved to educate and to be educated, often instinctively and effortlessly. We believe that understanding the brain mechanisms that underlie learning and teaching could transform educational strategies and enable us to design educational programmes that optimize learning for people of all ages and of all needs. For this reason the first two-thirds of the book follows a developmental framework. The rest of the book focuses on learning in the brain at all ages. There is a vast amount brain research of direct relevance to education practice and policy. And yet neuroscience has had little impact on education. This might in part be due to a lack of interaction between educators and brain scientists. This in turn might be because of difficulties of translating the neuroscience knowledge of how learning takes place in the brain into information of value to teachers. It is here where we try to fill a gap. Interdisciplinary dialogue needs a mediator to prevent one or other discipline dominating, and, notwithstanding John Bruer’s remarks that it is cognitive psychology that ‘bridges the gap’ between neuroscience and education (Bruer, 1997), we feel that now is the time to explore the implications of brain science itself for education.",
"title": ""
},
{
"docid": "b866e7e4d8522d820bd4fccc1a8fb0c0",
"text": "The domain of smart home environments is viewed as a key element of the future Internet, and many homes are becoming “smarter” by using Internet of Things (IoT) technology to improve home security, energy efficiency and comfort. At the same time, enforcing privacy in IoT environments has been identified as one of the main barriers for realizing the vision of the smart home. Based on the results of a risk analysis of a smart home automation system developed in collaboration with leading industrial actors, we outline the first steps towards a general model of privacy and security for smart homes. As such, it is envisioned as support for enforcing system security and enhancing user privacy, and it can thus help to further realize the potential in smart home environments.",
"title": ""
},
{
"docid": "147e0eecf649f96209056112269c2a73",
"text": "Due to the fast evolution of the information on the Internet, update summarization has received much attention in recent years. It is to summarize an evolutionary document collection at current time supposing the users have read some related previous documents. In this paper, we propose a graph-ranking-based method. It performs constrained reinforcements on a sentence graph, which unifies previous and current documents, to determine the salience of the sentences. The constraints ensure that the most salient sentences in current documents are updates to previous documents. Since this method is NP-hard, we then propose its approximate method, which is polynomial time solvable. Experiments on the TAC 2008 and 2009 benchmark data sets show the effectiveness and efficiency of our method.",
"title": ""
},
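A rough sketch of the graph-ranking idea follows: rank sentences of the current documents by a damped power iteration over their similarity graph, biased toward sentences that differ from the previously read documents. The novelty bias is a simple stand-in for the paper's constrained reinforcement, not its exact formulation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

previous = ["The storm made landfall on Monday near the coast."]
current = [
    "The storm made landfall on Monday.",
    "Rescue teams evacuated two thousand residents overnight.",
    "Power has been restored to most districts.",
]

tfidf = TfidfVectorizer().fit(previous + current)
C, P = tfidf.transform(current), tfidf.transform(previous)

W = cosine_similarity(C)                          # sentence graph over current documents
np.fill_diagonal(W, 0.0)
W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)

novelty = 1.0 - cosine_similarity(C, P).max(axis=1)   # distance from already-read content
scores = np.full(len(current), 1.0 / len(current))
for _ in range(50):                               # damped, novelty-biased power iteration
    scores = 0.15 * novelty / novelty.sum() + 0.85 * W.T @ scores

print(sorted(zip(scores, current), reverse=True)[0][1])  # highest-ranked update sentence
```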
{
"docid": "dd9b6b67f19622bfffbad427b93a1829",
"text": "Low-resolution face recognition (LRFR) has received increasing attention over the past few years. Its applications lie widely in the real-world environment when highresolution or high-quality images are hard to capture. One of the biggest demands for LRFR technologies is video surveillance. As the the number of surveillance cameras in the city increases, the videos that captured will need to be processed automatically. However, those videos or images are usually captured with large standoffs, arbitrary illumination condition, and diverse angles of view. Faces in these images are generally small in size. Several studies addressed this problem employed techniques like super resolution, deblurring, or learning a relationship between different resolution domains. In this paper, we provide a comprehensive review of approaches to low-resolution face recognition in the past five years. First, a general problem definition is given. Later, systematically analysis of the works on this topic is presented by catogory. In addition to describing the methods, we also focus on datasets and experiment settings. We further address the related works on unconstrained lowresolution face recognition and compare them with the result that use synthetic low-resolution data. Finally, we summarized the general limitations and speculate a priorities for the future effort.",
"title": ""
},
{
"docid": "9a12ec03e4521a33a7e76c0c538b6b43",
"text": "Sparse representation of information provides a powerful means to perform feature extraction on high-dimensional data and is of broad interest for applications in signal processing, computer vision, object recognition and neurobiology. Sparse coding is also believed to be a key mechanism by which biological neural systems can efficiently process a large amount of complex sensory data while consuming very little power. Here, we report the experimental implementation of sparse coding algorithms in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors. This network enables efficient implementation of pattern matching and lateral neuron inhibition and allows input data to be sparsely encoded using neuron activities and stored dictionary elements. Different dictionary sets can be trained and stored in the same system, depending on the nature of the input signals. Using the sparse coding algorithm, we also perform natural image processing based on a learned dictionary.",
"title": ""
},
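In software, the sparse-coding computation that the crossbar performs in place can be sketched with ISTA (iterative soft-thresholding): the matrix-vector products below are exactly the operations the memristor array accelerates, while the thresholding plays the role of neuron inhibition. This is a generic algorithmic sketch, not a model of the reported hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_atoms = 64, 128
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)                      # unit-norm dictionary atoms

# Synthesize an input that truly is sparse in the dictionary.
x = D[:, rng.choice(n_atoms, size=4, replace=False)] @ rng.uniform(0.5, 1.5, 4)

lam = 0.1
step = 1.0 / np.linalg.norm(D, ord=2) ** 2          # 1 / Lipschitz constant of the gradient
a = np.zeros(n_atoms)
for _ in range(200):
    a = a - step * (D.T @ (D @ a - x))              # update driven by reconstruction error
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold (inhibition)

print("non-zero coefficients:", np.count_nonzero(np.round(a, 3)))
```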
{
"docid": "e3eb4019846f9add4e464462e1065119",
"text": "The internet – specifically its graphic interface, the world wide web – has had a major impact on all levels of (information) societies throughout the world. Specifically for journalism as it is practiced online, we can now identify the effect that this has had on the profession and its culture(s). This article defines four particular types of online journalism and discusses them in terms of key characteristics of online publishing – hypertextuality, interactivity, multimediality – and considers the current and potential impacts that these online journalisms can have on the ways in which one can define journalism as it functions in elective democracies worldwide. It is argued that the application of particular online characteristics not only has consequences for the type of journalism produced on the web, but that these characteristics and online journalisms indeed connect to broader and more profound changes and redefinitions of professional journalism and its (news) culture as a whole.",
"title": ""
},
{
"docid": "f1e03d9f810409cd470ae65683553a0d",
"text": "Emergency departments (ED) face significant challenges in delivering high quality and timely patient care on an ever-present background of increasing patient numbers and limited hospital resources. A mismatch between patient demand and the ED's capacity to deliver care often leads to poor patient flow and departmental crowding. These are associated with reduction in the quality of the care delivered and poor patient outcomes. A literature review was performed to identify evidence-based strategies to reduce the amount of time patients spend in the ED in order to improve patient flow and reduce crowding in the ED. The use of doctor triage, rapid assessment, streaming and the co-location of a primary care clinician in the ED have all been shown to improve patient flow. In addition, when used effectively point of care testing has been shown to reduce patient time in the ED. Patient flow and departmental crowding can be improved by implementing new patterns of working and introducing new technologies such as point of care testing in the ED.",
"title": ""
},
{
"docid": "aaf8f3e2eaf6487b9284ed54803bd889",
"text": "Intra- and subcorneal hematoma, a skin alteration seen palmar and plantar after trauma or physical exercise, can be challenging to distinguish from in situ or invasive acral lentiginous melanoma. Thus, careful examination including dermoscopic and histologic assessment may be necessary to make the correct diagnosis. We here present a case of a 67-year-old healthy female patient who presented with a pigmented plantar skin alteration. Differential diagnoses included benign skin lesions, for example, hematoma or melanocytic nevus, and also acral lentiginous melanoma or melanoma in situ. Since clinical and dermoscopic examinations did not rule out a malignant skin lesion, surgical excision was performed and confirmed an intracorneal hematoma. In summary, without adequate physical trigger, it may be clinically and dermoscopically challenging to make the correct diagnosis in pigmented palmar and plantar skin alterations. Thus, biopsy or surgical excision of the skin alteration may be necessary to rule out melanoma.",
"title": ""
}
] |
scidocsrr
|
eb47e0953346f2a60fb0486508773e87
|
Mobile Cloud Computing: A Comparison of Application Models
|
[
{
"docid": "31f838fb0c7db7e8b58fb1788d5554c8",
"text": "Today’s smartphones operate independently of each other, using only local computing, sensing, networking, and storage capabilities and functions provided by remote Internet services. It is generally difficult or expensive for one smartphone to share data and computing resources with another. Data is shared through centralized services, requiring expensive uploads and downloads that strain wireless data networks. Collaborative computing is only achieved using ad hoc approaches. Coordinating smartphone data and computing would allow mobile applications to utilize the capabilities of an entire smartphone cloud while avoiding global network bottlenecks. In many cases, processing mobile data in-place and transferring it directly between smartphones would be more efficient and less susceptible to network limitations than offloading data and processing to remote servers. We have developed Hyrax, a platform derived from Hadoop that supports cloud computing on Android smartphones. Hyrax allows client applications to conveniently utilize data and execute computing jobs on networks of smartphones and heterogeneous networks of phones and servers. By scaling with the number of devices and tolerating node departure, Hyrax allows applications to use distributed resources abstractly, oblivious to the physical nature of the cloud. The design and implementation of Hyrax is described, including experiences in porting Hadoop to the Android platform and the design of mobilespecific customizations. The scalability of Hyrax is evaluated experimentally and compared to that of Hadoop. Although the performance of Hyrax is poor for CPU-bound tasks, it is shown to tolerate node-departure and offer reasonable performance in data sharing. A distributed multimedia search and sharing application is implemented to qualitatively evaluate Hyrax from an application development perspective.",
"title": ""
}
] |
[
{
"docid": "a52ae731397db5fb56bf6b65882ccc77",
"text": "This paper presents a class@cation of intrusions with respect to technique as well as to result. The taxonomy is intended to be a step on the road to an established taxonomy of intrusions for use in incident reporting, statistics, warning bulletins, intrusion detection systems etc. Unlike previous schemes, it takes the viewpoint of the system owner and should therefore be suitable to a wider community than that of system developers and vendors only. It is based on data from a tzalistic intrusion experiment, a fact that supports the practical applicability of the scheme. The paper also discusses general aspects of classification, and introduces a concept called dimension. After having made a broad survey of previous work in thejield, we decided to base our classification of intrusion techniques on a scheme proposed by Neumann and Parker in I989 and to further refine relevant parts of their scheme. Our classification of intrusion results is derived from the traditional three aspects of computer security: confidentiality, availability and integrity.",
"title": ""
},
{
"docid": "7c1be047bbb4fe3f988aaccfd0add70f",
"text": "We reviewed scientific literature pertaining to known and putative disease agents associated with the lone star tick, Amblyomma americanum. Reports in the literature concerning the role of the lone star tick in the transmission of pathogens of human and animal diseases have sometimes been unclear and even contradictory. This overview has indicated that A. americanum is involved in the ecology of several disease agents of humans and other animals, and the role of this tick as a vector of these diseases ranges from incidental to significant. Probably the clearest relationship is that of Ehrlichia chaffeensis and A. americanum. Also, there is a definite association between A. americanum and tularemia, as well as between the lone star tick and Theileria cervi to white-tailed deer. Evidence of Babesia cervi (= odocoilei) being transmitted to deer by A. americanum is largely circumstantial at this time. The role of A. americanum in cases of southern tick-associated rash illness (STARI) is currently a subject of intensive investigations with important implications. The lone star tick has been historically reported to be a vector of Rocky Mountain spotted fever rickettsiae, but current opinions are to the contrary. Evidence incriminated A. americanum as the vector of Bullis fever in the 1940s, but the disease apparently has disappeared. Q fever virus has been found in unfed A. americanum, but the vector potential, if any, is poorly understood at this time. Typhus fever and toxoplasmosis have been studied in the lone star tick, and several non-pathogenic organisms have been recovered. Implications of these tick-disease relationships are discussed.",
"title": ""
},
{
"docid": "e3e8ef3239fb6a7565a177cbceb1bee8",
"text": "A large number of studies analyse object detection and pose estimation at visual level in 2D, discussing the effects of challenges such as occlusion, clutter, texture, etc., on the performances of the methods, which work in the context of RGB modality. Interpreting the depth data, the study in this paper presents thorough multi-modal analyses. It discusses the above-mentioned challenges for full 6D object pose estimation in RGB-D images comparing the performances of several 6D detectors in order to answer the following questions: What is the current position of the computer vision community for maintaining “automation” in robotic manipulation? What next steps should the community take for improving “autonomy” in robotics while handling objects? Direct comparison of the detectors is difficult, since they are tested on multiple datasets with different characteristics and are evaluated using widely varying evaluation protocols. To deal with these issues, we follow a threefold strategy: five representative object datasets, mainly differing from the point of challenges that they involve, are collected. Then, two classes of detectors are tested on the collected datasets. Lastly, the baselines’ performances are evaluated using two different evaluation metrics under uniform scoring criteria. Regarding the experiments conducted, we analyse our observations on the baselines along with the challenges involved in the interested datasets, and we suggest a number of insights for the next steps to be taken, for improving the autonomy in robotics.",
"title": ""
},
{
"docid": "9eb0d79f9c13f30f53fb7214b337880d",
"text": "Many real world problems can be solved with Artificial Neural Networks in the areas of pattern recognition, signal processing and medical diagnosis. Most of the medical data set is seldom complete. Artificial Neural Networks require complete set of data for an accurate classification. This paper dwells on the various missing value techniques to improve the classification accuracy. The proposed system also investigates the impact on preprocessing during the classification. A classifier was applied to Pima Indian Diabetes Dataset and the results were improved tremendously when using certain combination of preprocessing techniques. The experimental system achieves an excellent classification accuracy of 99% which is best than before.",
"title": ""
},
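A hedged sketch of the kind of pipeline the passage describes — impute missing values, standardise, then classify with a small neural network — is shown below on synthetic placeholder data (not the Pima Indian Diabetes set), so the 99% figure quoted above should not be expected here.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_full = rng.normal(size=(768, 8))
y = (X_full[:, 1] + 0.5 * X_full[:, 5] > 0).astype(int)   # placeholder label rule
X = X_full.copy()
X[rng.random(X.shape) < 0.1] = np.nan                     # simulate incomplete records

clf = make_pipeline(
    SimpleImputer(strategy="median"),      # one of several possible missing-value techniques
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```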
{
"docid": "7de29b042513aaf1a3b12e71bee6a338",
"text": "The widespread use of deception in online sources has motivated the need for methods to automatically profile and identify deceivers. This work explores deception, gender and age detection in short texts using a machine learning approach. First, we collect a new open domain deception dataset also containing demographic data such as gender and age. Second, we extract feature sets including n-grams, shallow and deep syntactic features, semantic features, and syntactic complexity and readability metrics. Third, we build classifiers that aim to predict deception, gender, and age. Our findings show that while deception detection can be performed in short texts even in the absence of a predetermined domain, gender and age prediction in deceptive texts is a challenging task. We further explore the linguistic differences in deceptive content that relate to deceivers gender and age and find evidence that both age and gender play an important role in people’s word choices when fabricating lies.",
"title": ""
},
{
"docid": "64de73be55c4b594934b0d1bd6f47183",
"text": "Smart grid has emerged as the next-generation power grid via the convergence of power system engineering and information and communication technology. In this article, we describe smart grid goals and tactics, and present a threelayer smart grid network architecture. Following a brief discussion about major challenges in smart grid development, we elaborate on smart grid cyber security issues. We define a taxonomy of basic cyber attacks, upon which sophisticated attack behaviors may be built. We then introduce fundamental security techniques, whose integration is essential for achieving full protection against existing and future sophisticated security attacks. By discussing some interesting open problems, we finally expect to trigger more research efforts in this emerging area.",
"title": ""
},
{
"docid": "93afb696fa395a7f7c2a4f3fc2ac690d",
"text": "We present a framework for recognizing isolated and continuous American Sign Language (ASL) sentences from three-dimensional data. The data are obtained by using physics-based three-dimensional tracking methods and then presented as input to Hidden Markov Models (HMMs) for recognition. To improve recognition performance, we model context-dependent HMMs and present a novel method of coupling three-dimensional computer vision methods and HMMs by temporally segmenting the data stream with vision methods. We then use the geometric properties of the segments to constrain the HMM framework for recognition. We show in experiments with a 53 sign vocabulary that three-dimensional features outperform two-dimensional features in recognition performance. Furthermore, we demonstrate that contextdependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.",
"title": ""
},
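A much-simplified sketch of the HMM stage follows: train one Gaussian HMM per sign on its three-dimensional feature sequences and label a new sequence by the highest-scoring model. The features here are synthetic, and the original system additionally uses context-dependent models and vision-based segmentation.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def make_sequences(offset, n_seq=20, length=30, dim=3):
    # Synthetic stand-ins for 3-D hand-tracking features.
    return [offset + np.cumsum(rng.normal(size=(length, dim)), axis=0) for _ in range(n_seq)]

train = {"HELLO": make_sequences(0.0), "THANKS": make_sequences(5.0)}

models = {}
for sign, seqs in train.items():
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=25, random_state=0)
    models[sign] = m.fit(X, lengths)

test_seq = make_sequences(5.0, n_seq=1)[0]
print(max(models, key=lambda s: models[s].score(test_seq)))   # predicted sign label
```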
{
"docid": "f7d30db4b04b33676d386953aebf503c",
"text": "Microvascular free flap transfer currently represents one of the most popular methods for mandibularreconstruction. With the various free flap options nowavailable, there is a general consensus that no single kindof osseous or osteocutaneous flap can resolve the entire spectrum of mandibular defects. A suitable flap, therefore, should be selected according to the specific type of bone and soft tissue defect. We have developed an algorithm for mandibular reconstruction, in which the bony defect is termed as either “lateral” or “anterior” and the soft-tissue defect is classified as “none,” “skin or mucosal,” or “through-and-through.” For proper flap selection, the bony defect condition should be considered first, followed by the soft-tissue defect condition. When the bony defect is “lateral” and the soft tissue is not defective, the ilium is the best choice. When the bony defect is “lateral” and a small “skin or mucosal” soft-tissue defect is present, the fibula represents the optimal choice. When the bony defect is “lateral” and an extensive “skin or mucosal” or “through-and-through” soft-tissue defect exists, the scapula should be selected. When the bony defect is “anterior,” the fibula should always be selected. However, when an “anterior” bone defect also displays an “extensive” or “through-and-through” soft-tissue defect, the fibula should be usedwith other soft-tissue flaps. Flaps such as a forearm flap, anterior thigh flap, or rectus abdominis musculocutaneous flap are suitable, depending on the size of the soft-tissue defect.",
"title": ""
},
{
"docid": "130efef512294d14094a900693efebfd",
"text": "Metaphor comprehension involves an interaction between the meaning of the topic and the vehicle terms of the metaphor. Meaning is represented by vectors in a high-dimensional semantic space. Predication modifies the topic vector by merging it with selected features of the vehicle vector. The resulting metaphor vector can be evaluated by comparing it with known landmarks in the semantic space. Thus, metaphorical prediction is treated in the present model in exactly the same way as literal predication. Some experimental results concerning metaphor comprehension are simulated within this framework, such as the nonreversibility of metaphors, priming of metaphors with literal statements, and priming of literal statements with metaphors.",
"title": ""
},
{
"docid": "3c8ac7bd31d133b4d43c0d3a0f08e842",
"text": "How we teach and learn is undergoing a revolution, due to changes in technology and connectivity. Education may be one of the best application areas for advanced NLP techniques, and NLP researchers have much to contribute to this problem, especially in the areas of learning to write, mastery learning, and peer learning. In this paper I consider what happens when we convert natural language processors into natural language coaches. 1 Why Should You Care, NLP Researcher? There is a revolution in learning underway. Students are taking Massive Open Online Courses as well as online tutorials and paid online courses. Technology and connectivity makes it possible for students to learn from anywhere in the world, at any time, to fit their schedules. And in today’s knowledge-based economy, going to school only in one’s early years is no longer enough; in future most people are going to need continuous, lifelong education. Students are changing too — they expect to interact with information and technology. Fortunately, pedagogical research shows significant benefits of active learning over passive methods. The modern view of teaching means students work actively in class, talk with peers, and are coached more than graded by their instructors. In this new world of education, there is a great need for NLP research to step in and help. I hope in this paper to excite colleagues about the possibilities and suggest a few new ways of looking at them. I do not attempt to cover the field of language and learning comprehensively, nor do I claim there is no work in the field. In fact there is quite a bit, such as a recent special issue on language learning resources (Sharoff et al., 2014), the long running ACL workshops on Building Educational Applications using NLP (Tetreault et al., 2015), and a recent shared task competition on grammatical error detection for second language learners (Ng et al., 2014). But I hope I am casting a few interesting thoughts in this direction for those colleagues who are not focused on this particular topic.",
"title": ""
},
{
"docid": "34913781debe37f36befc853d57eba0c",
"text": "Michael R. Benjamin Naval Undersea Warfare Center, Newport, Rhode Island 02841, and Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 e-mail: michael.r.benjamin@navy.mil Henrik Schmidt Department of Mechanical Engineering, Laboratory for Autonomous Marine Sensing Systems, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 e-mail: henrik@mit.edu Paul M. Newman Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, United Kingdom e-mail: pnewman@robots.ox.ac.uk John J. Leonard Department of Mechanical Engineering, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 e-mail: jleonard@mit.edu",
"title": ""
},
{
"docid": "a9e30e02bcbac0f117820d21bf9941da",
"text": "The question of how identity is affected when diagnosed with dementia is explored in this capstone thesis. With the rise of dementia diagnoses (Goldstein-Levitas, 2016) there is a need for understanding effective approaches to care as emotional components remain intact. The literature highlights the essence of personhood and how person-centered care (PCC) is essential to preventing isolation and impacting a sense of self and well-being (Killick, 2004). Meeting spiritual needs in the sense of hope and purpose may also improve quality of life and delay symptoms. Dance/movement therapy (DMT) is specifically highlighted as an effective approach as sessions incorporate the components to physically, emotionally, and spiritually stimulate the individual with dementia. A DMT intervention was developed and implemented at an assisted living facility in the Boston area within a specific unit dedicated to the care of residents who had a primary diagnosis of mild to severe dementia. A Chacian framework is used with sensory stimulation techniques to address physiological needs. Results indicated positive experiences from observations and merited the need to conduct more research to credit DMT’s effectiveness with geriatric populations.",
"title": ""
},
{
"docid": "e171be9168fc94527980e767742555d3",
"text": "OBJECTIVE\nRelatively minor abusive injuries can precede severe physical abuse in infants. Our objective was to determine how often abused infants have a previous history of \"sentinel\" injuries, compared with infants who were not abused.\n\n\nMETHODS\nCase-control, retrospective study of 401, <12-month-old infants evaluated for abuse in a hospital-based setting and found to have definite, intermediate concern for, or no abuse after evaluation by the hospital-based Child Protection Team. A sentinel injury was defined as a previous injury reported in the medical history that was suspicious for abuse because the infant could not cruise, or the explanation was implausible.\n\n\nRESULTS\nOf the 200 definitely abused infants, 27.5% had a previous sentinel injury compared with 8% of the 100 infants with intermediate concern for abuse (odds ratio: 4.4, 95% confidence interval: 2.0-9.6; P < .001). None of the 101 nonabused infants (controls) had a previous sentinel injury (P < .001). The type of sentinel injury in the definitely abused cohort was bruising (80%), intraoral injury (11%), and other injury (7%). Sentinel injuries occurred in early infancy: 66% at <3 months of age and 95% at or before the age of 7 months. Medical providers were reportedly aware of the sentinel injury in 41.9% of cases.\n\n\nCONCLUSIONS\nPrevious sentinel injuries are common in infants with severe physical abuse and rare in infants evaluated for abuse and found to not be abused. Detection of sentinel injuries with appropriate interventions could prevent many cases of abuse.",
"title": ""
},
{
"docid": "a753be5a5f81ae77bfcb997a2748d723",
"text": "The design of electromagnetic (EM) interference filters for converter systems is usually based on measurements with a prototype during the final stages of the design process. Predicting the conducted EM noise spectrum of a converter by simulation in an early stage has the potential to save time/cost and to investigate different noise reduction methods, which could, for example, influence the layout or the design of the control integrated circuit. Therefore, the main sources of conducted differential-mode (DM) and common-mode (CM) noise of electronic ballasts for fluorescent lamps are identified in this paper. For each source, the noise spectrum is calculated and a noise propagation model is presented. The influence of the line impedance stabilizing network (LISN) and the test receiver is also included. Based on the presented models, noise spectrums are calculated and validated by measurements.",
"title": ""
},
{
"docid": "eb5208a4793fa5c5723b20da0421af26",
"text": "High-level synthesis promises a significant shortening of the FPGA design cycle when compared with design entry using register transfer level (RTL) languages. Recent evaluations report that C-to-RTL flows can produce results with a quality close to hand-crafted designs [1]. Algorithms which use dynamic, pointer-based data structures, which are common in software, remain difficult to implement well. In this paper, we describe a comparative case study using Xilinx Vivado HLS as an exemplary state-of-the-art high-level synthesis tool. Our test cases are two alternative algorithms for the same compute-intensive machine learning technique (clustering) with significantly different computational properties. We compare a data-flow centric implementation to a recursive tree traversal implementation which incorporates complex data-dependent control flow and makes use of pointer-linked data structures and dynamic memory allocation. The outcome of this case study is twofold: We confirm similar performance between the hand-written and automatically generated RTL designs for the first test case. The second case reveals a degradation in latency by a factor greater than 30× if the source code is not altered prior to high-level synthesis. We identify the reasons for this shortcoming and present code transformations that narrow the performance gap to a factor of four. We generalise our source-to-source transformations whose automation motivates research directions to improve high-level synthesis of dynamic data structures in the future.",
"title": ""
},
{
"docid": "39d4375dd9b8353241482bff577ee812",
"text": "Cellulose constitutes the most abundant renewable polymer resource available today. As a chemical raw material, it is generally well-known that it has been used in the form of fibers or derivatives for nearly 150 years for a wide spectrum of products and materials in daily life. What has not been known until relatively recently is that when cellulose fibers are subjected to acid hydrolysis, the fibers yield defect-free, rod-like crystalline residues. Cellulose nanocrystals (CNs) have garnered in the materials community a tremendous level of attention that does not appear to be relenting. These biopolymeric assemblies warrant such attention not only because of their unsurpassed quintessential physical and chemical properties (as will become evident in the review) but also because of their inherent renewability and sustainability in addition to their abundance. They have been the subject of a wide array of research efforts as reinforcing agents in nanocomposites due to their low cost, availability, renewability, light weight, nanoscale dimension, and unique morphology. Indeed, CNs are the fundamental constitutive polymeric motifs of macroscopic cellulosic-based fibers whose sheer volume dwarfs any known natural or synthetic biomaterial. Biopolymers such as cellulose and lignin and † North Carolina State University. ‡ Helsinki University of Technology. Dr. Youssef Habibi is a research assistant professor at the Department of Forest Biomaterials at North Carolina State University. He received his Ph.D. in 2004 in organic chemistry from Joseph Fourier University (Grenoble, France) jointly with CERMAV (Centre de Recherche sur les Macromolécules Végétales) and Cadi Ayyad University (Marrakesh, Morocco). During his Ph.D., he worked on the structural characterization of cell wall polysaccharides and also performed surface chemical modification, mainly TEMPO-mediated oxidation, of crystalline polysaccharides, as well as their nanocrystals. Prior to joining NCSU, he worked as assistant professor at the French Engineering School of Paper, Printing and Biomaterials (PAGORA, Grenoble Institute of Technology, France) on the development of biodegradable nanocomposites based on nanocrystalline polysaccharides. He also spent two years as postdoctoral fellow at the French Institute for Agricultural Research, INRA, where he developed new nanostructured thin films based on cellulose nanowiskers. Dr. Habibi’s research interests include the sustainable production of materials from biomass, development of high performance nanocomposites from lignocellulosic materials, biomass conversion technologies, and the application of novel analytical tools in biomass research. Chem. Rev. 2010, 110, 3479–3500 3479",
"title": ""
},
{
"docid": "77af12d87cd5827f35d92968d1888162",
"text": "Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.",
"title": ""
},
{
"docid": "46b5f32b9f08dd5d1fbe2d6c2fe532ee",
"text": "As more recombinant human proteins become available on the market, the incidence of immunogenicity problems is rising. The antibodies formed against a therapeutic protein can result in serious clinical effects, such as loss of efficacy and neutralization of the endogenous protein with essential biological functions. Here we review the literature on the relations between the immunogenicity of the therapeutic proteins and their structural properties. The mechanisms by which protein therapeutics can induce antibodies as well as the models used to study immunogenicity are discussed. Examples of how the chemical structure (including amino acid sequence, glycosylation, and pegylation) can influence the incidence and level of antibody formation are given. Moreover, it is shown that physical degradation (especially aggregation) of the proteins as well as chemical decomposition (e.g., oxidation) may enhance the immune response. To what extent the presence of degradation products in protein formulations influences their immunogenicity still needs further investigation. Immunization of transgenic animals, tolerant for the human protein, with well-defined, artificially prepared degradation products of therapeutic proteins may shed more light on the structure-immunogenicity relationships of recombinant human proteins.",
"title": ""
},
{
"docid": "22cb0a390087efcb9fa2048c74e9845f",
"text": "This paper describes the early conception and latest developments of electroactive polymer (EAP)-based sensors, actuators, electronic components, and power sources, implemented as wearable devices for smart electronic textiles (e-textiles). Such textiles, functioning as multifunctional wearable human interfaces, are today considered relevant promoters of progress and useful tools in several biomedical fields, such as biomonitoring, rehabilitation, and telemedicine. After a brief outline on ongoing research and the first products on e-textiles under commercial development, this paper presents the most highly performing EAP-based devices developed by our lab and other research groups for sensing, actuation, electronics, and energy generation/storage, with reference to their already demonstrated or potential applicability to electronic textiles",
"title": ""
},
{
"docid": "d6e093ecc3325fcdd2e29b0b961b9b21",
"text": "[Context and motivation] Natural language is the main representation means of industrial requirements documents, which implies that requirements documents are inherently ambiguous. There exist guidelines for ambiguity detection, such as the Ambiguity Handbook [1]. In order to detect ambiguities according to the existing guidelines, it is necessary to train analysts. [Question/problem] Although ambiguity detection guidelines were extensively discussed in literature, ambiguity detection has not been automated yet. Automation of ambiguity detection is one of the goals of the presented paper. More precisely, the approach and tool presented in this paper have three goals: (1) to automate ambiguity detection, (2) to make plausible for the analyst that ambiguities detected by the tool represent genuine problems of the analyzed document, and (3) to educate the analyst by explaining the sources of the detected ambiguities. [Principal ideas/results] The presented tool provides reliable ambiguity detection, in the sense that it detects four times as many genuine ambiguities as than an average human analyst. Furthermore, the tool offers high precision ambiguity detection and does not present too many false positives to the human analyst. [Contribution] The presented tool is able both to detect the ambiguities and to explain ambiguity sources. Thus, besides pure ambiguity detection, it can be used to educate analysts, too. Furthermore, it provides a significant potential for considerable time and cost savings and at the same time quality improvements in the industrial requirements engineering.",
"title": ""
}
] |
scidocsrr
|
661c94e1afa6ce0abebe959556284d31
|
An information theoretic approach for extracting and tracing non-functional requirements
|
[
{
"docid": "221cd488d735c194e07722b1d9b3ee2a",
"text": "HURTS HELPS HURTS HELPS Data Type [Target System] Implicit HELPS HURTS HURTS BREAKS ? Invocation [Target System] Pipe & HELPS BREAKS BREAKS HELPS Filter WHEN [Target condl System] condl: size of data in domain is huge Figure 13.4. A generic Correlation Catalogue, based on [Garlan93]. Figure 13.3 shows a method which decomposes the topic on process, including algorithms as used in [Garlan93]. Decomposition methods for processes are also described in [Nixon93, 94a, 97a], drawing on implementations of processes [Chung84, 88]. These two method definitions are unparameterized. A fuller catalogue would include parameterized definitions too. Operationalization methods, which organize knowledge about satisficing NFR softgoals, are embedded in architectural designs when selected. For example, an ImplicitFunctionlnvocationRegime (based on [Garlan93]' architecture 3) can be used to hide implementation details in order to make an architectural 358 NON-FUNCTIONAL REQUIREMENTS IN SOFTWARE ENGINEERING design more extensible, thus contributing to one of the softgoals in the above decomposition. Argumentation methods and templates are used to organize principles and guidelines for making design rationale for or against design decisions (Cf. [J. Lee91]).",
"title": ""
},
{
"docid": "d95ee6cd088919de0df4087f5413eda5",
"text": "Wikipedia provides a knowledge base for computing word relatedness in a more structured fashion than a search engine and with more coverage than WordNet. In this work we present experiments on using Wikipedia for computing semantic relatedness and compare it to WordNet on various benchmarking datasets. Existing relatedness measures perform better using Wikipedia than a baseline given by Google counts, and we show that Wikipedia outperforms WordNet when applied to the largest available dataset designed for that purpose. The best results on this dataset are obtained by integrating Google, WordNet and Wikipedia based measures. We also show that including Wikipedia improves the performance of an NLP application processing naturally occurring texts.",
"title": ""
}
] |
[
{
"docid": "244c79d374bdbe44406fc514610e4ee7",
"text": "This article surveys some theoretical aspects of cellular automata CA research. In particular, we discuss classical and new results on reversibility, conservation laws, limit sets, decidability questions, universality and topological dynamics of CA. The selection of topics is by no means comprehensive and reflects the research interests of the author. The main goal is to provide a tutorial of CA theory to researchers in other branches of natural computing, to give a compact collection of known results with references to their proofs, and to suggest some open problems. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "356dbb5e8e576cfa49153962a6e3be93",
"text": "Knowing how many people occupy a building, and where they are located, is a key component of smart building services. Commercial, industrial and residential buildings often incorporate systems used to determine occupancy. However, relatively simple sensor technology and control algorithms limit the effectiveness of smart building services. In this paper we propose to replace sensor technology with time series models that can predict the number of occupants at a given location and time. We use Wi-Fi datasets readily available in abundance for smart building services and train Auto Regression Integrating Moving Average (ARIMA) models and Long Short-Term Memory (LSTM) time series models. As a use case scenario of smart building services, these models allow forecasting of the number of people at a given time and location in 15, 30 and 60 minutes time intervals at building as well as Access Point (AP) level. For LSTM, we build our models in two ways: a separate model for every time scale, and a combined model for the three time scales. Our experiments show that LSTM combined model reduced the computational resources with respect to the number of neurons by 74.48 % for the AP level, and by 67.13 % for the building level. Further, the root mean square error (RMSE) was reduced by 88.2%–93.4% for LSTM in comparison to ARIMA for the building levels models and by 80.9 %–87% for the AP level models.",
"title": ""
},
{
"docid": "cd068158b6bebadfb8242b6412ec5bbb",
"text": "artefacts, 65–67 built environments and, 67–69 object artefacts, 65–66 structuralism and, 66–67 See also Non–discursive technique Asymmetry, 88–89, 91 Asynchronous systems, 187 Autonomous architecture, 336–338",
"title": ""
},
{
"docid": "fb162c94248297f35825ff1022ad2c59",
"text": "This article traces the evolution of ambulance location and relocation models proposed over the past 30 years. The models are classified in two main categories. Deterministic models are used at the planning stage and ignore stochastic considerations regarding the availability of ambulances. Probabilistic models reflect the fact that ambulances operate as servers in a queueing system and cannot always answer a call. In addition, dynamic models have been developed to repeatedly relocate ambulances throughout the day. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "6c2a033b374b4318cd94f0a617ec705a",
"text": "In this paper, we propose to use Deep Neural Net (DNN), which has been recently shown to reduce speech recognition errors significantly, in Computer-Aided Language Learning (CALL) to evaluate English learners’ pronunciations. Multi-layer, stacked Restricted Boltzman Machines (RBMs), are first trained as nonlinear basis functions to represent speech signals succinctly, and the output layer is discriminatively trained to optimize the posterior probabilities of correct, sub-phonemic “senone” states. Three Goodness of Pronunciation (GOP) scores, including: the likelihood-based posterior probability, averaged framelevel posteriors of the DNN output layer “senone” nodes, and log likelihood ratio of correct and competing models, are tested with recordings of both native and non-native speakers, along with manual grading of pronunciation quality. The experimental results show that the GOP estimated by averaged frame-level posteriors of “senones” correlate with human scores the best. Comparing with GOPs estimated with non-DNN, i.e. GMMHMM, based models, the new approach can improve the correlations relatively by 22.0% or 15.6%, at word or sentence levels, respectively. In addition, the frame-level posteriors, which doesn’t need a decoding lattice and its corresponding forwardbackward computations, is suitable for supporting fast, on-line, multi-channel applications.",
"title": ""
},
{
"docid": "024e9600707203ffcf35ca96dff42a87",
"text": "The blockchain technology is gaining momentum because of its possible application to other systems than the cryptocurrency one. Indeed, blockchain, as a de-centralized system based on a distributed digital ledger, can be utilized to securely manage any kind of assets, constructing a system that is independent of any authorization entity. In this paper, we briefly present blockchain and our work in progress, the VMOA blockchain, to secure virtual machine orchestration operations for cloud computing and network functions virtualization systems. Using tutorial examples, we describe our design choices and draw implementation plans.",
"title": ""
},
{
"docid": "92a112d7b6f668ece433e62a7fe4054c",
"text": "A new technique for stabilizing nonholonomic systems to trajectories is presented. It is well known (see [2]) that such systems cannot be stabilized to a point using smooth static-state feedback. In this note, we suggest the use of control laws for stabilizing a system about a trajectory, instead of a point. Given a nonlinear system and a desired (nominal) feasible trajectory, the note gives an explicit control law which will locally exponentially stabilize the system to the desired trajectory. The theory is applied to several examples, including a car-like robot.",
"title": ""
},
{
"docid": "26508379e41da5e3b38dd944fc9e4783",
"text": "We describe the Photobook system, which is a set of interactive tools for browsing and searching images and image sequences. These tools differ from those used in standard image databases in that they make direct use of the image content rather than relying on annotations. Direct search on image content is made possible by use of semantics-preserving image compression, which reduces images to a small set of perceptually-significant coefficients. We describe three Photobook tools in particular: one that allows search based on grey-level appearance, one that uses 2-D shape, and a third that allows search based on textural properties.",
"title": ""
},
{
"docid": "b987f831f4174ad5d06882040769b1ac",
"text": "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. 1 Summary Application trends, device technologies and the architecture of systems drive progress in information technologies. However,",
"title": ""
},
{
"docid": "9827fa3952b7ba4e5e777793cc241148",
"text": "We address the problem of segmenting a sequence of images of natural scenes into disjoint regions that are characterized by constant spatio-temporal statistics. We model the spatio-temporal dynamics in each region by Gauss-Markov models, and infer the model parameters as well as the boundary of the regions in a variational optimization framework. Numerical results demonstrate that – in contrast to purely texture-based segmentation schemes – our method is effective in segmenting regions that differ in their dynamics even when spatial statistics are identical.",
"title": ""
},
{
"docid": "cfd0cadbdf58ee01095aea668f0da4fe",
"text": "A unique and miniaturized dual-band coplanar waveguide (CPW)-fed antenna is presented. The proposed antenna comprises a rectangular patch that is surrounded by upper and lower ground-plane sections that are interconnected by a high-impedance microstrip line. The proposed antenna structure generates two separate impedance bandwidths to cover frequency bands of GSM and Wi-Fi/WLAN. The antenna realized is relatively small in size $(17\\times 20\\ {\\hbox{mm}}^{2})$ and operates over frequency ranges 1.60–1.85 and 4.95–5.80 GHz, making it suitable for GSM and Wi-Fi/WLAN applications. In addition, the antenna is circularly polarized in the GSM band. Experimental results show the antenna exhibits monopole-like radiation characteristics and a good antenna gain over its operating bands. The measured and simulated results presented show good agreement.",
"title": ""
},
{
"docid": "f782af034ef46a15d89637a43ad2849c",
"text": "Introduction: Evidence-based treatment of abdominal hernias involves the use of prosthetic mesh. However, the most commonly used method of treatment of diastasis of the recti involves plication with non-absorbable sutures as part of an abdominoplasty procedure. This case report describes single-port laparoscopic repair of diastasis of recti and umbilical hernia with prosthetic mesh after plication with slowly absorbable sutures combined with abdominoplasty. Technique Description: Our patient is a 36-year-old woman with severe diastasis of the recti, umbilical hernia and an excessive amount of redundant skin after two previous pregnancies and caesarean sections. After raising the upper abdominal flap, a single-port was placed in the left upper quadrant and the ligamenturn teres was divided. The diastasis of the recti and umbilical hernia were plicated under direct vision with continuous and interrupted slowly absorbable sutures before an antiadhesive mesh was placed behind the repair with 6 cm overlap, transfixed in 4 quadrants and tacked in place with non-absorbable tacks in a double-crown technique. The left upper quadrant wound was closed with slowly absorbable sutures. The excess skin was removed and fibrin sealant was sprayed in the subcutaneous space to minimize the risk of serorna formation without using drains. Discussion: Combining single-port laparoscopic repair of diastasis of recti and umbilical hemia repair minimizes inadvertent suturing of abdominal contents during plication, the risks of port site hernias associated with conventional multipart repair and permanently reinforced the midline weakness while achieving “scarless” surgery.",
"title": ""
},
{
"docid": "6e67329e4f678ae9dc04395ae0a5b832",
"text": "This review covers recent developments in the social influence literature, focusing primarily on compliance and conformity research published between 1997 and 2002. The principles and processes underlying a target's susceptibility to outside influences are considered in light of three goals fundamental to rewarding human functioning. Specifically, targets are motivated to form accurate perceptions of reality and react accordingly, to develop and preserve meaningful social relationships, and to maintain a favorable self-concept. Consistent with the current movement in compliance and conformity research, this review emphasizes the ways in which these goals interact with external forces to engender social influence processes that are subtle, indirect, and outside of awareness.",
"title": ""
},
{
"docid": "2d59fe09633ee41c60e9e951986e56a6",
"text": "Face alignment and 3D face reconstruction are traditionally accomplished as separated tasks. By exploring the strong correlation between 2D landmarks and 3D shapes, in contrast, we propose a joint face alignment and 3D face reconstruction method to simultaneously solve these two problems for 2D face images of arbitrary poses and expressions. This method, based on a summation model of 3D face shapes and cascaded regression in 2D and 3D face shape spaces, iteratively and alternately applies two cascaded regressors, one for updating 2D landmarks and the other for 3D face shape. The 3D face shape and the landmarks are correlated via a 3D-to-2D mapping matrix. Unlike existing methods, the proposed method can fully automatically generate both pose-and-expression-normalized (PEN) and expressive 3D face shapes and localize both visible and invisible 2D landmarks. Based on the PEN 3D face shapes, we devise a method to enhance face recognition accuracy across poses and expressions. Both linear and nonlinear implementations of the proposed method are presented and evaluated in this paper. Extensive experiments show that the proposed method can achieve the state-of-the-art accuracy in both face alignment and 3D face reconstruction, and benefit face recognition owing to its reconstructed PEN 3D face shapes.",
"title": ""
},
{
"docid": "b12f1b1ff7618c1f54462c18c768dae8",
"text": "Retrieval is the key process for understanding learning and for promoting learning, yet retrieval is not often granted the central role it deserves. Learning is typically identified with the encoding or construction of knowledge, and retrieval is considered merely the assessment of learning that occurred in a prior experience. The retrieval-based learning perspective outlined here is grounded in the fact that all expressions of knowledge involve retrieval and depend on the retrieval cues available in a given context. Further, every time a person retrieves knowledge, that knowledge is changed, because retrieving knowledge improves one’s ability to retrieve it again in the future. Practicing retrieval does not merely produce rote, transient learning; it produces meaningful, long-term learning. Yet retrieval practice is a tool many students lack metacognitive awareness of and do not use as often as they should. Active retrieval is an effective but undervalued strategy for promoting meaningful learning.",
"title": ""
},
{
"docid": "37fcf6201c168e87d6ef218ecb71c211",
"text": "NASA-TLX is a multi-dimensional scale designed to obtain workload estimates from one or more operators while they are performing a task or immediately afterwards. The years of research that preceded subscale selection and the weighted averaging approach resulted in a tool that has proven to be reasonably easy to use and reliably sensitive to experimentally important manipulations over the past 20 years. Its use has spread far beyond its original application (aviation), focus (crew complement), and language (English). This survey of 550 studies in which NASA-TLX was used or reviewed was undertaken to provide a resource for a new generation of users. The goal was to summarize the environments in which it has been applied, the types of activities the raters performed, other variables that were measured that did (or did not) covary, methodological issues, and lessons learned",
"title": ""
},
{
"docid": "bb8ca605a714d71be903d46bf6e1fa40",
"text": "Several methods have been proposed for automatic and objective monitoring of food intake, but their performance suffers in the presence of speech and motion artifacts. This paper presents a novel sensor system and algorithms for detection and characterization of chewing bouts from a piezoelectric strain sensor placed on the temporalis muscle. The proposed data acquisition device was incorporated into the temple of eyeglasses. The system was tested by ten participants in two part experiments, one under controlled laboratory conditions and the other in unrestricted free-living. The proposed food intake recognition method first performed an energy-based segmentation to isolate candidate chewing segments (instead of using epochs of fixed duration commonly reported in research literature), with the subsequent classification of the segments by linear support vector machine models. On participant level (combining data from both laboratory and free-living experiments), with ten-fold leave-one-out cross-validation, chewing were recognized with average F-score of 96.28% and the resultant area under the curve was 0.97, which are higher than any of the previously reported results. A multivariate regression model was used to estimate chew counts from segments classified as chewing with an average mean absolute error of 3.83% on participant level. These results suggest that the proposed system is able to identify chewing segments in the presence of speech and motion artifacts, as well as automatically and accurately quantify chewing behavior, both under controlled laboratory conditions and unrestricted free-living.",
"title": ""
},
{
"docid": "901debd94cb5749a9a1f06b0fd0cb155",
"text": "• Business process reengineering-the redesign of an organization's business processes to make them more efficient. • Coordination technology-an aid to managing dependencies among the agents within a business process, and provides automated support for the most routinized component processes. * Process-driven software development environments-an automated system for integrating the work of all software related management and staff; it provides embedded support for an orderly and defined software development process. These three applications share a growing requirement to represent the processes through which work is accomplished. To the extent that automation is involved, process representation becomes a vital issue in redesigning work and allocating responsibilities between humans and computers. This requirement reflects the growing use of distributed , networked systems to link the interacting agents responsible for executing a business process. To establish process modeling as a unique area, researchers must identify conceptual boundaries that distinguish their work from model-ing in other areas of information science. Process modeling is distinguished from other types of model-ing in computer science because many of the phenomena modeled must be enacted by a human rather than a machine. At least some mod-eling, however, in the area of human-machine system integration or information systems design has this 'human-executable' attribute. Rather than focusing solely on the user's behavior at the interface or the flow and transformation of data within the system, process model-ing also focuses on interacting behaviors among agents, regardless of whether a computer is involved in the transactions. Much of the research on process modeling has been conducted on software development organizations , since the software engineering community is already accustomed to formal modeling. Software process modeling, in particular , explicitly focuses on phenomena that occur during software creation and evolution, a domain different from that usually mod-eled in human-machine integration or information systems design. Software development is a challenging focus for process modeling because of the creative problem-solving involved in requirements analysis and design, and the coordination of team interactions during the development of a complex intellectual artifact. In this article, software process modeling will be used as an example application for describing the current status of process modeling, issues for practical use, and the research questions that remain ahead. Most software organizations possess several yards of software life cycle description, enough to wrap endlessly around the walls of project rooms. Often these descriptions do not correspond to the processes actually performed during software …",
"title": ""
},
{
"docid": "333e2df79425177f0ce2686bd5edbfbe",
"text": "The current paper proposes a novel variational Bayes predictive coding RNN model, which can learn to generate fluctuated temporal patterns from exemplars. The model learns to maximize the lower bound of the weighted sum of the regularization and reconstruction error terms. We examined how this weighting can affect development of different types of information processing while learning fluctuated temporal patterns. Simulation results show that strong weighting of the reconstruction term causes the development of deterministic chaos for imitating the randomness observed in target sequences, while strong weighting of the regularization term causes the development of stochastic dynamics imitating probabilistic processes observed in targets. Moreover, results indicate that the most generalized learning emerges between these two extremes. The paper concludes with implications in terms of the underlying neuronal mechanisms for autism spectrum disorder and for free action.",
"title": ""
}
] |
scidocsrr
|
ba2d6e33064b61517dfb0593665c3c47
|
Graph Frequency Analysis of Brain Signals
|
[
{
"docid": "97490d6458ba9870ce22b3418c558c58",
"text": "The brain is expensive, incurring high material and metabolic costs for its size — relative to the size of the body — and many aspects of brain network organization can be mostly explained by a parsimonious drive to minimize these costs. However, brain networks or connectomes also have high topological efficiency, robustness, modularity and a 'rich club' of connector hubs. Many of these and other advantageous topological properties will probably entail a wiring-cost premium. We propose that brain organization is shaped by an economic trade-off between minimizing costs and allowing the emergence of adaptively valuable topological patterns of anatomical or functional connectivity between multiple neuronal populations. This process of negotiating, and re-negotiating, trade-offs between wiring cost and topological value continues over long (decades) and short (millisecond) timescales as brain networks evolve, grow and adapt to changing cognitive demands. An economical analysis of neuropsychiatric disorders highlights the vulnerability of the more costly elements of brain networks to pathological attack or abnormal development.",
"title": ""
},
{
"docid": "e94afab2ce61d7426510a5bcc88f7ca8",
"text": "Community detection is an important task in network analysis, in which we aim to learn a network partition that groups together vertices with similar community-level connectivity patterns. By finding such groups of vertices with similar structural roles, we extract a compact representation of the network’s large-scale structure, which can facilitate its scientific interpretation and the prediction of unknown or future interactions. Popular approaches, including the stochastic block model, assume edges are unweighted, which limits their utility by discarding potentially useful information. We introduce the weighted stochastic block model (WSBM), which generalizes the stochastic block model to networks with edge weights drawn from any exponential family distribution. This model learns from both the presence and weight of edges, allowing it to discover structure that would otherwise be hidden when weights are discarded or thresholded. We describe a Bayesian variational algorithm for efficiently approximating this model’s posterior distribution over latent block structures. We then evaluate the WSBM’s performance on both edge-existence and edge-weight prediction tasks for a set of real-world weighted networks. In all cases, the WSBM performs as well or better than the best alternatives on these tasks. community detection, weighted relational data, block models, exponential family, variational Bayes.",
"title": ""
}
] |
[
{
"docid": "846ae985f61a0dcdb1ff3a2226c1b41a",
"text": "OBJECTIVE\nThis article provides an overview of tactile displays. Its goal is to assist human factors practitioners in deciding when and how to employ the sense of touch for the purpose of information representation. The article also identifies important research needs in this area.\n\n\nBACKGROUND\nFirst attempts to utilize the sense of touch as a medium for communication date back to the late 1950s. For the next 35 years progress in this area was relatively slow, but recent years have seen a surge in the interest and development of tactile displays and the integration of tactile signals in multimodal interfaces. A thorough understanding of the properties of this sensory channel and its interaction with other modalities is needed to ensure the effective and robust use of tactile displays.\n\n\nMETHODS\nFirst, an overview of vibrotactile perception is provided. Next, the design of tactile displays is discussed with respect to available technologies. The potential benefit of including tactile cues in multimodal interfaces is discussed. Finally, research needs in the area of tactile information presentation are highlighted.\n\n\nRESULTS\nThis review provides human factors researchers and interface designers with the requisite knowledge for creating effective tactile interfaces. It describes both potential benefits and limitations of this approach to information presentation.\n\n\nCONCLUSION\nThe sense of touch represents a promising means of supporting communication and coordination in human-human and human-machine systems.\n\n\nAPPLICATION\nTactile interfaces can support numerous functions, including spatial orientation and guidance, attention management, and sensory substitution, in a wide range of domains.",
"title": ""
},
{
"docid": "942be0aa4dab5904139919351d6d63d4",
"text": "Since Hinton and Salakhutdinov published their landmark science paper in 2006 ending the previous neural-network winter, research in neural networks has increased dramatically. Researchers have applied neural networks seemingly successfully to various topics in the field of computer science. However, there is a risk that we overlook other methods. Therefore, we take a recent end-to-end neural-network-based work (Dhingra et al., 2018) as a starting point and contrast this work with more classical techniques. This prior work focuses on the LAMBADA word prediction task, where broad context is used to predict the last word of a sentence. It is often assumed that neural networks are good at such tasks where feature extraction is important. We show that with simpler syntactic and semantic features (e.g. Across Sentence Boundary (ASB) N-grams) a state-ofthe-art neural network can be outperformed. Our discriminative language-model-based approach improves the word prediction accuracy from 55.6% to 58.9% on the LAMBADA task. As a next step, we plan to extend this work to other language modeling tasks.",
"title": ""
},
{
"docid": "d647fc2b5635a3dfcebf7843fef3434c",
"text": "Touch is our primary non-verbal communication channel for conveying intimate emotions and as such essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience with focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to more effectively convey affective information. However, this research field on the crossroads of ICT and psychology is still embryonic and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better researchmethodologies, developing basic social touch building blocks, and solving specific ICT challenges.",
"title": ""
},
{
"docid": "55dbe73527f91af939e068a76d0200b7",
"text": "With an ageing population in an industrialised world, the global burden of stroke is staggering millions of strokes a year. Hemiparesis is one of the most.Lancet. Rehabilitation of hemiparesis after stroke with a mirror. Altschuler EL, Wisdom SB, Stone L, Foster C, Galasko D.Rehabilitation of the severely affected paretic arm after stroke represents a major challenge, especially in the presence of sensory impairment. Objective.in patients after stroke. This article reviews the evidence for motor imagery or.",
"title": ""
},
{
"docid": "652366f6feab8f3792c0fcb74318472d",
"text": "OBJECTIVE\nTo evaluate the prefrontal space ratio (PFSR) in second- and third-trimester euploid fetuses and fetuses with trisomy 21.\n\n\nMETHODS\nThis was a retrospective study utilizing stored mid-sagittal two-dimensional images of second- and third-trimester fetal faces that were recorded during prenatal ultrasound examinations at the Department of Prenatal Medicine at the University of Tuebingen, Germany and at a private center for prenatal medicine in Nuremberg, Germany. For the normal range, 279 euploid pregnancies between 15 and 40 weeks' gestation were included. The results were compared with 91 cases with trisomy 21 between 15 and 40 weeks. For the ratio measurement, a line was drawn between the leading edge of the mandible and the maxilla (MM line) and extended in front of the forehead. The ratio of the distance between the leading edge of the skull and the leading edge of the skin (d1) to the distance between the skin and the point where the MM line was intercepted (d2) was calculated. The PFSR was determined by dividing d2 by d1.\n\n\nRESULTS\nIn the euploid and trisomy 21 groups, the median gestational age at the time of ultrasound examination was 21.1 (range, 15.0-40.0) and 21.4 (range, 15.0-40.3) weeks, respectively. Multiple regression analysis showed that PFSR was independent of maternal and gestational age. In the euploid group, the mean PFSR was 0.97 ± 0.29. In fetuses with trisomy 21, the mean PFSR was 0.2 ± 0.38 (P < 0.0001). The PFSR was below the 5(th) centile in 14 (5.0%) euploid fetuses and in 72 (79.1%) fetuses with trisomy 21.\n\n\nCONCLUSION\nThe PFSR is a simple and effective marker in second- and third-trimester screening for trisomy 21.",
"title": ""
},
{
"docid": "3dd238bc2b51b3aaf9b8b6900fc82d12",
"text": "Nowadays many applications are generating streaming data for an example real-time surveillance, internet traffic, sensor data, health monitoring systems, communication networks, online transactions in the financial market and so on. Data Streams are temporally ordered, fast changing, massive, and potentially infinite sequence of data. Data Stream mining is a very challenging problem. This is due to the fact that data streams are of tremendous volume and flows at very high speed which makes it impossible to store and scan streaming data multiple time. Concept evolution in streaming data further magnifies the challenge of working with streaming data. Clustering is a data stream mining task which is very useful to gain insight of data and data characteristics. Clustering is also used as a pre-processing step in over all mining process for an example clustering is used for outlier detection and for building classification model. In this paper we will focus on the challenges and necessary features of data stream clustering techniques, review and compare the literature for data stream clustering by example and variable, describe some real world applications of data stream clustering, and tools for data stream clustering.",
"title": ""
},
{
"docid": "ce1d25b3d2e32f903ce29470514abcce",
"text": "We present a method to generate a robot control strategy that maximizes the probability to accomplish a task. The task is given as a Linear Temporal Logic (LTL) formula over a set of properties that can be satisfied at the regions of a partitioned environment. We assume that the probabilities with which the properties are satisfied at the regions are known, and the robot can determine the truth value of a proposition only at the current region. Motivated by several results on partitioned-based abstractions, we assume that the motion is performed on a graph. To account for noisy sensors and actuators, we assume that a control action enables several transitions with known probabilities. We show that this problem can be reduced to the problem of generating a control policy for a Markov Decision Process (MDP) such that the probability of satisfying an LTL formula over its states is maximized. We provide a complete solution for the latter problem that builds on existing results from probabilistic model checking. We include an illustrative case study.",
"title": ""
},
{
"docid": "284c52c29b5a5c2d3fbd0a7141353e35",
"text": "This paper presents results of patient experiments using a new gait-phase detection sensor (GPDS) together with a programmable functional electrical stimulation (FES) system for subjects with a dropped-foot walking dysfunction. The GPDS (sensors and processing unit) is entirely embedded in a shoe insole and detects in real time four phases (events) during the gait cycle: stance, heel off, swing, and heel strike. The instrumented GPDS insole consists of a miniature gyroscope that measures the angular velocity of the foot and three force sensitive resistors that measure the force load on the shoe insole at the heel and the metatarsal bones. The extracted gait-phase signal is transmitted from the embedded microcontroller to the electrical stimulator and used in a finite state control scheme to time the electrical stimulation sequences. The electrical stimulations induce muscle contractions in the paralyzed muscles leading to a more physiological motion of the affected leg. The experimental results of the quantitative motion analysis during walking of the affected and nonaffected sides showed that the use of the combined insole and FES system led to a significant improvement in the gait-kinematics of the affected leg. This combined sensor and stimulation system has the potential to serve as a walking aid for rehabilitation training or permanent use in a wide range of gait disabilities after brain stroke, spinal-cord injury, or neurological diseases.",
"title": ""
},
{
"docid": "5275184686a8453a1922cec7a236b66d",
"text": "Children’s sense of relatedness is vital to their academic motivation from 3rd to 6th grade. Children’s (n 641) reports of relatedness predicted changes in classroom engagement over the school year and contributed over and above the effects of perceived control. Regression and cumulative risk analyses revealed that relatedness to parents, teachers, and peers each uniquely contributed to students’ engagement, especially emotional engagement. Girls reported higher relatedness than boys, but relatedness to teachers was a more salient predictor of engagement for boys. Feelings of relatedness to teachers dropped from 5th to 6th grade, but the effects of relatedness on engagement were stronger for 6th graders. Discussion examines theoretical, empirical, and practical implications of relatedness as a key predictor of children’s academic motivation and performance.",
"title": ""
},
{
"docid": "0b8c51f823cb55cbccfae098e98f28b3",
"text": "In this study, we investigate whether the “out of body” vibrotactile illusion known as funneling could be applied to enrich and thereby improve the interaction performance on a tablet-sized media device. First, a series of pilot tests was taken to determine the appropriate operational conditions and parameters (such as the tablet size, holding position, minimal required vibration amplitude, and the effect of matching visual feedback) for a two-dimensional (2D) illusory tactile rendering method. Two main experiments were then conducted to validate the basic applicability and effectiveness of the rendering method, and to further demonstrate how the illusory tactile feedback could be deployed in an interactive application and actually improve user performance. Our results showed that for a tablet-sized device (e.g., iPad mini and iPad), illusory perception was possible (localization performance of up to 85%) using a rectilinear grid with a resolution of 5 $$\\times $$ × 7 (grid size: 2.5 cm) with matching visual feedback. Furthermore, the illusory feedback was found to be a significant factor in improving the user performance in a 2D object search/attention task.",
"title": ""
},
{
"docid": "77df82cf7a9ddca2038433fa96a43cef",
"text": "In this study, new algorithms are proposed for exposing forgeries in soccer images. We propose a new and automatic algorithm to extract the soccer field, field side and the lines of field in order to generate an image of real lines for forensic analysis. By comparing the image of real lines and the lines in the input image, the forensic analyzer can easily detect line displacements of the soccer field. To expose forgery in the location of a player, we measure the height of the player using the geometric information in the soccer image and use the inconsistency of the measured height with the true height of the player as a clue for detecting the displacement of the player. In this study, two novel approaches are proposed to measure the height of a player. In the first approach, the intersections of white lines in the soccer field are employed for automatic calibration of the camera. We derive a closed-form solution to calculate different camera parameters. Then the calculated parameters of the camera are used to measure the height of a player using an interactive approach. In the second approach, the geometry of vanishing lines and the dimensions of soccer gate are used to measure a player height. Various experiments using real and synthetic soccer images show the efficiency of the proposed algorithms.",
"title": ""
},
{
"docid": "8b84dc47c6a9d39ef1d094aa173a954c",
"text": "Named entity recognition (NER) is a subtask of information extraction that seeks to locate and classify atomic elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc. We use the JavaNLP repository(http://nlp.stanford.edu/javanlp/ ) for its implementation of a Conditional Random Field(CRF) and a Conditional Markov Model(CMM), also called a Maximum Entropy Markov Model. We have obtained results on majority voting with different labeling schemes, with backward and forward parsing of the CMM, and also some results when we trained a decision tree to take a decision based on the outputs of the different labeling schemes. We have also tried to solve the problem of label inconsistency issue by attempting the naive approach of enforcing hard label-consistency by choosing the majority entity for a sequence of tokens, in the specific test document, as well as the whole test corpus, and managed to get reasonable gains. We also attempted soft label consistency in the following way. We use a portion of the training data to train a CRF to make predictions on the rest of the train data and on the test data. We then train a second CRF with the majority label predictions as additional input features.",
"title": ""
},
{
"docid": "2c1f93d4e517fe56a5ebf668e8a0bc12",
"text": "The Internet was designed with the end-to-end principle where the network layer provided merely the best-effort forwarding service. This design makes it challenging to add new services into the Internet infrastructure. However, as the Internet connectivity becomes a commodity, users and applications increasingly demand new in-network services. This paper proposes PacketCloud, a cloudlet-based open platform to host in-network services. Different from standalone, specialized middleboxes, cloudlets can efficiently share a set of commodity servers among different services, and serve the network traffic in an elastic way. PacketCloud can help both Internet Service Providers (ISPs) and emerging application/content providers deploy their services at strategic network locations. We have implemented a proof-of-concept prototype of PacketCloud. PacketCloud introduces a small additional delay, and can scale well to handle high-throughput data traffic. We have evaluated PacketCloud in both a fully functional emulated environment, and the real Internet.",
"title": ""
},
{
"docid": "4f2ebb2640a36651fd8c01f3eeb0e13e",
"text": "This paper addresses pixel-level segmentation of a human body from a single image. The problem is formulated as a multi-region segmentation where the human body is constrained to be a collection of geometrically linked regions and the background is split into a small number of distinct zones. We solve this problem in a Bayesian framework for jointly estimating articulated body pose and the pixel-level segmentation of each body part. Using an image likelihood function that simultaneously generates and evaluates the image segmentation corresponding to a given pose, we robustly explore the posterior body shape distribution using a data-driven, coarse-to-fine Metropolis Hastings sampling scheme that includes a strongly data-driven proposal term.",
"title": ""
},
{
"docid": "6bc611936d412dde15999b2eb179c9e2",
"text": "Smith-Lemli-Opitz syndrome, a severe developmental disorder associated with multiple congenital anomalies, is caused by a defect of cholesterol biosynthesis. Low cholesterol and high concentrations of its direct precursor, 7-dehydrocholesterol, in plasma and tissues are the diagnostic biochemical hallmarks of the syndrome. The plasma sterol concentrations correlate with severity and disease outcome. Mutations in the DHCR7 gene lead to deficient activity of 7-dehydrocholesterol reductase (DHCR7), the final enzyme of the cholesterol biosynthetic pathway. The human DHCR7 gene is localised on chromosome 11q13 and its structure has been characterized. Ninetyone different mutations in the DHCR7 gene have been published to date. This paper is a review of the clinical, biochemical and molecular genetic aspects.",
"title": ""
},
{
"docid": "9825e8a24aba301c4c7be3b8b4c4dde5",
"text": "Being a cross-camera retrieval task, person re-identification suffers from image style variations caused by different cameras. The art implicitly addresses this problem by learning a camera-invariant descriptor subspace. In this paper, we explicitly consider this challenge by introducing camera style (CamStyle) adaptation. CamStyle can serve as a data augmentation approach that smooths the camera style disparities. Specifically, with CycleGAN, labeled training images can be style-transferred to each camera, and, along with the original training samples, form the augmented training set. This method, while increasing data diversity against over-fitting, also incurs a considerable level of noise. In the effort to alleviate the impact of noise, the label smooth regularization (LSR) is adopted. The vanilla version of our method (without LSR) performs reasonably well on few-camera systems in which over-fitting often occurs. With LSR, we demonstrate consistent improvement in all systems regardless of the extent of over-fitting. We also report competitive accuracy compared with the state of the art. Code is available at: https://github.com/zhunzhong07/CamStyle",
"title": ""
},
{
"docid": "b876e62db8a45ab17d3a9d217e223eb7",
"text": "A study was conducted to evaluate user performance andsatisfaction in completion of a set of text creation tasks usingthree commercially available continuous speech recognition systems.The study also compared user performance on similar tasks usingkeyboard input. One part of the study (Initial Use) involved 24users who enrolled, received training and carried out practicetasks, and then completed a set of transcription and compositiontasks in a single session. In a parallel effort (Extended Use),four researchers used speech recognition to carry out real worktasks over 10 sessions with each of the three speech recognitionsoftware products. This paper presents results from the Initial Usephase of the study along with some preliminary results from theExtended Use phase. We present details of the kinds of usabilityand system design problems likely in current systems and severalcommon patterns of error correction that we found.",
"title": ""
},
{
"docid": "88fa70ef8c6dfdef7d1c154438ff53c2",
"text": "There has been substantial progress in the field of text based sentiment analysis but little effort has been made to incorporate other modalities. Previous work in sentiment analysis has shown that using multimodal data yields to more accurate models of sentiment. Efforts have been made towards expressing sentiment as a spectrum of intensity rather than just positive or negative. Such models are useful not only for detection of positivity or negativity, but also giving out a score of how positive or negative a statement is. Based on the state of the art studies in sentiment analysis, prediction in terms of sentiment score is still far from accurate, even in large datasets [27]. Another challenge in sentiment analysis is dealing with small segments or micro opinions as they carry less context than large segments thus making analysis of the sentiment harder. This paper presents a Ph.D. thesis shaped towards comprehensive studies in multimodal micro-opinion sentiment intensity analysis.",
"title": ""
},
{
"docid": "9924e44d94d00a7a3dbd313409f5006a",
"text": "Multiple-instance problems arise from the situations where training class labels are attached to sets of samples (named bags), instead of individual samples within each bag (called instances). Most previous multiple-instance learning (MIL) algorithms are developed based on the assumption that a bag is positive if and only if at least one of its instances is positive. Although the assumption works well in a drug activity prediction problem, it is rather restrictive for other applications, especially those in the computer vision area. We propose a learning method, MILES (multiple-instance learning via embedded instance selection), which converts the multiple-instance learning problem to a standard supervised learning problem that does not impose the assumption relating instance labels to bag labels. MILES maps each bag into a feature space defined by the instances in the training bags via an instance similarity measure. This feature mapping often provides a large number of redundant or irrelevant features. Hence, 1-norm SVM is applied to select important features as well as construct classifiers simultaneously. We have performed extensive experiments. In comparison with other methods, MILES demonstrates competitive classification accuracy, high computation efficiency, and robustness to labeling uncertainty",
"title": ""
},
{
"docid": "08d8e372c5ae4eef9848552ee87fbd64",
"text": "What chiefly distinguishes cerebral cortex from other parts of the central nervous system is the great diversity of its cell types and inter-connexions. It would be astonishing if such a structure did not profoundly modify the response patterns of fibres coming into it. In the cat's visual cortex, the receptive field arrangements of single cells suggest that there is indeed a degree of complexity far exceeding anything yet seen at lower levels in the visual system. In a previous paper we described receptive fields of single cortical cells, observing responses to spots of light shone on one or both retinas (Hubel & Wiesel, 1959). In the present work this method is used to examine receptive fields of a more complex type (Part I) and to make additional observations on binocular interaction (Part II). This approach is necessary in order to understand the behaviour of individual cells, but it fails to deal with the problem of the relationship of one cell to its neighbours. In the past, the technique of recording evoked slow waves has been used with great success in studies of functional anatomy. It was employed by Talbot & Marshall (1941) and by Thompson, Woolsey & Talbot (1950) for mapping out the visual cortex in the rabbit, cat, and monkey. Daniel & Whitteiidge (1959) have recently extended this work in the primate. Most of our present knowledge of retinotopic projections, binocular overlap, and the second visual area is based on these investigations. Yet the method of evoked potentials is valuable mainly for detecting behaviour common to large populations of neighbouring cells; it cannot differentiate functionally between areas of cortex smaller than about 1 mm2. To overcome this difficulty a method has in recent years been developed for studying cells separately or in small groups during long micro-electrode penetrations through nervous tissue. Responses are correlated with cell location by reconstructing the electrode tracks from histological material. These techniques have been applied to CAT VISUAL CORTEX 107 the somatic sensory cortex of the cat and monkey in a remarkable series of studies by Mountcastle (1957) and Powell & Mountcastle (1959). Their results show that the approach is a powerful one, capable of revealing systems of organization not hinted at by the known morphology. In Part III of the present paper we use this method in studying the functional architecture of the visual cortex. It helped us attempt to explain on anatomical …",
"title": ""
}
] |
scidocsrr
|
eeff77ed1001e391788287a6cca55ea0
|
On the Dynamics of Social Media Popularity: A YouTube Case Study
|
[
{
"docid": "64c6012d2e97a1059161c295ae3b9cdb",
"text": "One of the most popular user activities on the Web is watching videos. Services like YouTube, Vimeo, and Hulu host and stream millions of videos, providing content that is on par with TV. While some of this content is popular all over the globe, some videos might be only watched in a confined, local region.\n In this work we study the relationship between popularity and locality of online YouTube videos. We investigate whether YouTube videos exhibit geographic locality of interest, with views arising from a confined spatial area rather than from a global one. Our analysis is done on a corpus of more than 20 millions YouTube videos, uploaded over one year from different regions. We find that about 50% of the videos have more than 70% of their views in a single region. By relating locality to viralness we show that social sharing generally widens the geographic reach of a video. If, however, a video cannot carry its social impulse over to other means of discovery, it gets stuck in a more confined geographic region. Finally, we analyze how the geographic properties of a video's views evolve on a daily basis during its lifetime, providing new insights on how the geographic reach of a video changes as its popularity peaks and then fades away.\n Our results demonstrate how, despite the global nature of the Web, online video consumption appears constrained by geographic locality of interest: this has a potential impact on a wide range of systems and applications, spanning from delivery networks to recommendation and discovery engines, providing new directions for future research.",
"title": ""
},
{
"docid": "b45d1003afac487dd3d5477621a85f74",
"text": "Creating, placing, and presenting social media content is a difficult problem. In addition to the quality of the content itself, several factors such as the way the content is presented (the title), the community it is posted to, whether it has been seen before, and the time it is posted determine its success. There are also interesting interactions between these factors. For example, the language of the title should be targeted to the community where the content is submitted, yet it should also highlight the distinctive nature of the content. In this paper, we examine how these factors interact to determine the popularity of social media content. We do so by studying resubmissions, i.e., content that has been submitted multiple times, with multiple titles, to multiple different communities. Such data allows us to ‘tease apart’ the extent to which each factor influences the success of that content. The models we develop help us understand how to better target social media content: by using the right title, for the right community, at the right time.",
"title": ""
},
{
"docid": "3d45de7d6ef9e162552698839550a6ee",
"text": "The queries people issue to a search engine and the results clicked following a query change over time. For example, after the earthquake in Japan in March 2011, the query japan spiked in popularity and people issuing the query were more likely to click government-related results than they would prior to the earthquake. We explore the modeling and prediction of such temporal patterns in Web search behavior. We develop a temporal modeling framework adapted from physics and signal processing and harness it to predict temporal patterns in search behavior using smoothing, trends, periodicities, and surprises. Using current and past behavioral data, we develop a learning procedure that can be used to construct models of users' Web search activities. We also develop a novel methodology that learns to select the best prediction model from a family of predictive models for a given query or a class of queries. Experimental results indicate that the predictive models significantly outperform baseline models that weight historical evidence the same for all queries. We present two applications where new methods introduced for the temporal modeling of user behavior significantly improve upon the state of the art. Finally, we discuss opportunities for using models of temporal dynamics to enhance other areas of Web search and information retrieval.",
"title": ""
},
{
"docid": "0d56b30aef52bfdf2cb6426a834126e5",
"text": "The wide adoption of social media has increased the competition among ideas for our finite attention. We employ a parsimonious agent-based model to study whether such a competition may affect the popularity of different memes, the diversity of information we are exposed to, and the fading of our collective interests for specific topics. Agents share messages on a social network but can only pay attention to a portion of the information they receive. In the emerging dynamics of information diffusion, a few memes go viral while most do not. The predictions of our model are consistent with empirical data from Twitter, a popular microblogging platform. Surprisingly, we can explain the massive heterogeneity in the popularity and persistence of memes as deriving from a combination of the competition for our limited attention and the structure of the social network, without the need to assume different intrinsic values among ideas.",
"title": ""
}
] |
[
{
"docid": "0d6e5e20d6a909a6450671feeb4ac261",
"text": "Rita bakalu, a new species, is described from the Godavari river system in peninsular India. With this finding, the genus Rita is enlarged to include seven species, comprising six species found in South Asia, R. rita, R. macracanthus, R. gogra, R. chrysea, R. kuturnee, R. bakalu, and one species R. sacerdotum from Southeast Asia. R. bakalu is distinguished from its congeners by a combination of the following characters: eye diameter 28–39% HL and 20–22 caudal fin rays; teeth in upper jaw uniformly villiform in two patches, interrupted at the midline; palatal teeth well-developed villiform, in two distinct patches located at the edge of the palate. The mtDNA cytochrome C oxidase I sequence analysis confirmed that the R. bakalu is distinct from the other congeners of Rita. Superficially, R. bakalu resembles R. kuturnee, reported from the Godavari and Krishna river systems; however, the two species are discriminated due to differences in the structure of their teeth patches on upper jaw and palate, anal fin originating before the origin of adipose fin, comparatively larger eye diameter, longer mandibular barbels, and vertebral count. The results conclude that the river Godavari harbors a different species of Rita, R. bakalu which is new to science.",
"title": ""
},
{
"docid": "4f68e4859a717833d214a431b8d796ad",
"text": "Time domain synchronous OFDM (TDS-OFDM) has higher spectral efficiency than cyclic prefix OFDM (CP-OFDM), but suffers from severe performance loss over fast fading channels. In this paper, a novel transmission scheme called time-frequency training OFDM (TFT-OFDM) is proposed. The time-frequency joint channel estimation for TFT-OFDM utilizes the time-domain training sequence without interference cancellation to merely acquire the time delay profile of the channel, while the path coefficients are estimated by using the frequency-domain group pilots. The redundant group pilots only occupy about 1% of the useful subcarriers, thus TFT-OFDM still has much higher spectral efficiency than CP-OFDM by about 10%. Simulation results also demonstrate that TFT-OFDM outperforms CP-OFDM and TDS-OFDM over time-varying channels.",
"title": ""
},
{
"docid": "2944000757568f330b495ba2a446b0a0",
"text": "In this paper, we propose Deep Alignment Network (DAN), a robust face alignment method based on a deep neural network architecture. DAN consists of multiple stages, where each stage improves the locations of the facial landmarks estimated by the previous stage. Our method uses entire face images at all stages, contrary to the recently proposed face alignment methods that rely on local patches. This is possible thanks to the use of landmark heatmaps which provide visual information about landmark locations estimated at the previous stages of the algorithm. The use of entire face images rather than patches allows DAN to handle face images with large variation in head pose and difficult initializations. An extensive evaluation on two publicly available datasets shows that DAN reduces the state-of-the-art failure rate by up to 70%. Our method has also been submitted for evaluation as part of the Menpo challenge.",
"title": ""
},
{
"docid": "a05d87b064ab71549d373599700cfcbf",
"text": "We provide sets of parameters for multiplicative linear congruential generators (MLCGs) of different sizes and good performance with respect to the spectral test. For ` = 8, 9, . . . , 64, 127, 128, we take as a modulus m the largest prime smaller than 2`, and provide a list of multipliers a such that the MLCG with modulus m and multiplier a has a good lattice structure in dimensions 2 to 32. We provide similar lists for power-of-two moduli m = 2`, for multiplicative and non-multiplicative LCGs.",
"title": ""
},
{
"docid": "86cc0465767c9e079465df61c52c8398",
"text": "Songbirds learn their songs by trial-and-error experimentation, producing highly variable vocal output as juveniles. By comparing their own sounds to the song of a tutor, young songbirds gradually converge to a stable song that can be a remarkably good copy of the tutor song. Here we show that vocal variability in the learning songbird is induced by a basal-ganglia-related circuit, the output of which projects to the motor pathway via the lateral magnocellular nucleus of the nidopallium (LMAN). We found that pharmacological inactivation of LMAN dramatically reduced acoustic and sequence variability in the songs of juvenile zebra finches, doing so in a rapid and reversible manner. In addition, recordings from LMAN neurons projecting to the motor pathway revealed highly variable spiking activity across song renditions, showing that LMAN may act as a source of variability. Lastly, pharmacological blockade of synaptic inputs from LMAN to its target premotor area also reduced song variability. Our results establish that, in the juvenile songbird, the exploratory motor behavior required to learn a complex motor sequence is dependent on a dedicated neural circuit homologous to cortico-basal ganglia circuits in mammals.",
"title": ""
},
{
"docid": "d2d16580335dcff2f0d05ca8a43438ef",
"text": "Evolutionary adaptation can be rapid and potentially help species counter stressful conditions or realize ecological opportunities arising from climate change. The challenges are to understand when evolution will occur and to identify potential evolutionary winners as well as losers, such as species lacking adaptive capacity living near physiological limits. Evolutionary processes also need to be incorporated into management programmes designed to minimize biodiversity loss under rapid climate change. These challenges can be met through realistic models of evolutionary change linked to experimental data across a range of taxa.",
"title": ""
},
{
"docid": "9a2b8f7e82647a7e63af839bff2412aa",
"text": "The user's understanding of information needs and the information available in the data collection can evolve during an exploratory search session. Search systems tailored for well-defined narrow search tasks may be suboptimal for exploratory search where the user can sequentially refine the expressions of her information needs and explore alternative search directions. A major challenge for exploratory search systems design is how to support such behavior and expose the user to relevant yet novel information that can be difficult to discover by using conventional query formulation techniques. We introduce IntentStreams, a system for exploratory search that provides interactive query refinement mechanisms and parallel visualization of search streams. The system models each search stream via an intent model allowing rapid user feedback. The user interface allows swift initiation of alternative and parallel search streams by direct manipulation that does not require typing. A study with 13 participants shows that IntentStreams provides better support for branching behavior compared to a conventional search system.",
"title": ""
},
{
"docid": "74d84a74edd2a18387d6ac73f2c2b8d5",
"text": "The continued increase in the atmospheric concentration of carbon dioxide due to anthropogenic emissions is predicted to lead to significant changes in climate. About half of the current emissions are being absorbed by the ocean and by land ecosystems, but this absorption is sensitive to climate as well as to atmospheric carbon dioxide concentrations, creating a feedback loop. General circulation models have generally excluded the feedback between climate and the biosphere, using static vegetation distributions and CO2 concentrations from simple carbon-cycle models that do not include climate change. Here we present results from a fully coupled, three-dimensional carbon–climate model, indicating that carbon-cycle feedbacks could significantly accelerate climate change over the twenty-first century. We find that under a ‘business as usual’ scenario, the terrestrial biosphere acts as an overall carbon sink until about 2050, but turns into a source thereafter. By 2100, the ocean uptake rate of 5 Gt C yr-1 is balanced by the terrestrial carbon source, and atmospheric CO2 concentrations are 250 p.p.m.v. higher in our fully coupled simulation than in uncoupled carbon models, resulting in a global-mean warming of 5.5 K, as compared to 4 K without the carbon-cycle feedback.",
"title": ""
},
{
"docid": "f6a19d26df9acabe9185c4c167520422",
"text": "OBJECTIVE Benign enlargement of the subarachnoid spaces (BESS) is a common finding on imaging studies indicated by macrocephaly in infancy. This finding has been associated with the presence of subdural fluid collections that are sometimes construed as suggestive of abusive head injury. The prevalence of BESS among infants with macrocephaly and the prevalence of subdural collections among infants with BESS are both poorly defined. The goal of this study was to determine the relative frequencies of BESS, hydrocephalus, and subdural collections in a large consecutive series of imaging studies performed for macrocephaly and to determine the prevalence of subdural fluid collections among patients with BESS. METHODS A text search of radiology requisitions identified studies performed for macrocephaly in patients ≤ 2 years of age. Studies of patients with hydrocephalus or acute trauma were excluded. Studies that demonstrated hydrocephalus or chronic subdural hematoma not previously recognized but responsible for macrocephaly were noted but not investigated further. The remaining studies were reviewed for the presence of incidental subdural collections and for measurement of the depth of the subarachnoid space. A 3-point scale was used to grade BESS: Grade 0, < 5 mm; Grade 1, 5-9 mm; and Grade 2, ≥ 10 mm. RESULTS After exclusions, there were 538 studies, including 7 cases of hydrocephalus (1.3%) and 1 large, bilateral chronic subdural hematoma (0.2%). There were incidental subdural collections in 21 cases (3.9%). Two hundred sixty-five studies (49.2%) exhibited Grade 1 BESS, and 46 studies (8.6%) exhibited Grade 2 BESS. The prevalence of incidental subdural collections among studies with BESS was 18 of 311 (5.8%). The presence of BESS was associated with a greater prevalence of subdural collections, and higher grades of BESS were associated with increasing prevalence of subdural collections. After controlling for imaging modality, the odds ratio of the association of BESS with subdural collections was 3.68 (95% CI 1.12-12.1, p = 0.0115). There was no association of race, sex, or insurance status with subdural collections. Patients with BESS had larger head circumference Z-scores, but there was no association of head circumference or age with subdural collections. Interrater reliability in the diagnosis and grading of BESS was only fair. CONCLUSIONS The current study confirms the association of BESS with incidental subdural collections and suggests that greater depth of the subarachnoid space is associated with increased prevalence of such collections. These observations support the theory that infants with BESS have a predisposition to subdural collections on an anatomical basis. Incidental subdural collections in the setting of BESS are not necessarily indicative of abusive head injury.",
"title": ""
},
{
"docid": "2d845ef6552b77fb4dd0d784233aa734",
"text": "The timing of the origin of arthropods in relation to the Cambrian explosion is still controversial, as are the timing of other arthropod macroevolutionary events such as the colonization of land and the evolution of flight. Here we assess the power of a phylogenomic approach to shed light on these major events in the evolutionary history of life on earth. Analyzing a large phylogenomic dataset (122 taxa, 62 genes) with a Bayesian-relaxed molecular clock, we simultaneously reconstructed the phylogenetic relationships and the absolute times of divergences among the arthropods. Simulations were used to test whether our analysis could distinguish between alternative Cambrian explosion scenarios with increasing levels of autocorrelated rate variation. Our analyses support previous phylogenomic hypotheses and simulations indicate a Precambrian origin of the arthropods. Our results provide insights into the 3 independent colonizations of land by arthropods and suggest that evolution of insect wings happened much earlier than the fossil record indicates, with flight evolving during a period of increasing oxygen levels and impressively large forests. These and other findings provide a foundation for macroevolutionary and comparative genomic study of Arthropoda.",
"title": ""
},
{
"docid": "bea1aab100753e782527f631c1b110c1",
"text": "The great content diversity of real-world digital images poses a grand challenge to image quality assessment (IQA) models, which are traditionally designed and validated on a handful of commonly used IQA databases with very limited content variation. To test the generalization capability and to facilitate the wide usage of IQA techniques in real-world applications, we establish a large-scale database named the Waterloo Exploration Database, which in its current state contains 4744 pristine natural images and 94 880 distorted images created from them. Instead of collecting the mean opinion score for each image via subjective testing, which is extremely difficult if not impossible, we present three alternative test criteria to evaluate the performance of IQA models, namely, the pristine/distorted image discriminability test, the listwise ranking consistency test, and the pairwise preference consistency test (P-test). We compare 20 well-known IQA models using the proposed criteria, which not only provide a stronger test in a more challenging testing environment for existing models, but also demonstrate the additional benefits of using the proposed database. For example, in the P-test, even for the best performing no-reference IQA model, more than 6 million failure cases against the model are “discovered” automatically out of over 1 billion test pairs. Furthermore, we discuss how the new database may be exploited using innovative approaches in the future, to reveal the weaknesses of existing IQA models, to provide insights on how to improve the models, and to shed light on how the next-generation IQA models may be developed. The database and codes are made publicly available at: https://ece.uwaterloo.ca/~k29ma/exploration/.",
"title": ""
},
{
"docid": "dbdda952c63b7b7a4f8ce68f806e5238",
"text": "This paper examines how real-time information gathered as part of intelligent transportation systems can be used to predict link travel times for one through five time periods ahead (of 5-min duration). The study employed a spectral basis artificial neural network (SNN) that utilizes a sinusoidal transformation technique to increase the linear separability of the input features. Link travel times from Houston that had been collected as part of the automatic vehicle identification system of the TranStar system were used as a test bed. It was found that the SNN outperformed a conventional artificial neural network and gave similar results to that of modular neural networks. However, the SNN requires significantly less effort on the part of the modeler than modular neural networks. The results of the best SNN were compared with conventional link travel time prediction techniques including a Kalman filtering model, exponential smoothing model, historical profile, and realtime profile. It was found that the SNN gave the best overall results.",
"title": ""
},
{
"docid": "14077e87744089bb731085590be99a75",
"text": "The Vehicle Routing Problem (VRP) is an important problem occurring in many logistics systems. The objective of VRP is to serve a set of customers at minimum cost, such that every node is visited by exactly one vehicle only once. In this paper, we consider the Dynamic Vehicle Routing Problem (DVRP) which new customer demands are received along the day. Hence, they must be serviced at their locations by a set of vehicles in real time minimizing the total travel distance. The main goal of this research is to find a solution of DVRP using genetic algorithm. However we used some heuristics in addition during generation of the initial population and crossover for tuning the system to obtain better result. The computational experiments were applied to 22 benchmarks instances with up to 385 customers and the effectiveness of the proposed approach is validated by comparing the computational results with those previously presented in the literature.",
"title": ""
},
{
"docid": "b387476c4ff2b2b5ed92a23c7f065026",
"text": "In this article, I review the diagnostic criteria for Gender Identity Disorder (GID) in children as they were formulated in the DSM-III, DSM-III-R, and DSM-IV. The article focuses on the cumulative evidence for diagnostic reliability and validity. It does not address the broader conceptual discussion regarding GID as \"disorder,\" as this issue is addressed in a companion article by Meyer-Bahlburg (2009). This article addresses criticisms of the GID criteria for children which, in my view, can be addressed by extant empirical data. Based in part on reanalysis of data, I conclude that the persistent desire to be of the other gender should, in contrast to DSM-IV, be a necessary symptom for the diagnosis. If anything, this would result in a tightening of the diagnostic criteria and may result in a better separation of children with GID from children who display marked gender variance, but without the desire to be of the other gender.",
"title": ""
},
{
"docid": "fdf95905dd8d3d8dcb4388ac921b3eaa",
"text": "Relation classification is associated with many potential applications in the artificial intelligence area. Recent approaches usually leverage neural networks based on structure features such as syntactic or dependency features to solve this problem. However, high-cost structure features make such approaches inconvenient to be directly used. In addition, structure features are probably domaindependent. Therefore, this paper proposes a bidirectional long-short-term-memory recurrent-neuralnetwork (Bi-LSTM-RNN) model based on low-cost sequence features to address relation classification. This model divides a sentence or text segment into five parts, namely two target entities and their three contexts. It learns the representations of entities and their contexts, and uses them to classify relations. We evaluate our model on two standard benchmark datasets in different domains, namely SemEval-2010 Task 8 and BioNLP-ST 2016 Task BB3. In the former dataset, our model achieves comparable performance compared with other models using sequence features. In the latter dataset, our model obtains the third best results compared with other models in the official evaluation. Moreover, we find that the context between two target entities plays the most important role in relation classification. Furthermore, statistic experiments show that the context between two target entities can be used as an approximate replacement of the shortest dependency path when dependency parsing is not used.",
"title": ""
},
{
"docid": "c7351e8ce6d32b281d5bd33b245939c6",
"text": "In TREC 2002 the Berkeley group participated only in the English-Arabic cross-language retrieval (CLIR) track. One Arabic monolingual run and three English-Arabic cross-language runs were submitted. Our approach to the crosslanguage retrieval was to translate the English topics into Arabic using online English-Arabic machine translation systems. The four official runs are named as BKYMON, BKYCL1, BKYCL2, and BKYCL3. The BKYMON is the Arabic monolingual run, and the other three runs are English-to-Arabic cross-language runs. This paper reports on the construction of an Arabic stoplist and two Arabic stemmers, and the experiments on Arabic monolingual retrieval, English-to-Arabic cross-language retrieval.",
"title": ""
},
{
"docid": "8b2c83868c16536910e7665998b2d87e",
"text": "Nowadays organizations turn to any standard procedure to gain a competitive advantage. If sustainable, competitive advantage can bring about benefit to the organization. The aim of the present study was to introduce competitive advantage as well as to assess the impacts of the balanced scorecard as a means to measure the performance of organizations. The population under study included employees of organizations affiliated to the Social Security Department in North Khorasan Province, of whom a total number of 120 employees were selected as the participants in the research sample. Two researcher-made questionnaires with a 5-point Likert scale were used to measure the competitive advantage and the balanced scorecard. Besides, Cronbach's alpha coefficient was used to measure the reliability of the instruments that was equal to 0.74 and 0.79 for competitive advantage and the balanced scorecard, respectively. The data analysis was performed using the structural equation modeling and the results indicated the significant and positive impact of the implementation of the balanced scorecard on the sustainable competitive advantage. © 2015 AESS Publications. All Rights Reserved.",
"title": ""
},
{
"docid": "8308fe89676df668e66287a44103980b",
"text": "Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.",
"title": ""
},
{
"docid": "d3765112295d9a4591b438130df59a25",
"text": "This paper presents the design and mathematical model of a lower extremity exoskeleton device used to make paralyzed people walk again. The design takes into account the anatomy of standard human leg with a total of 11 Degrees of freedom (DoF). A CAD model in SolidWorks is presented along with its fabrication and a mathematical model in MATLAB.",
"title": ""
},
{
"docid": "cbc81f267b98cc3f3986552515657b0f",
"text": "Multivariate quantitative traits arise naturally in recent neuroimaging genetics studies, in which both structural and functional variability of the human brain is measured non-invasively through techniques such as magnetic resonance imaging (MRI). There is growing interest in detecting genetic variants associated with such multivariate traits, especially in genome-wide studies. Random forests (RFs) classifiers, which are ensembles of decision trees, are amongst the best performing machine learning algorithms and have been successfully employed for the prioritisation of genetic variants in case-control studies. RFs can also be applied to produce gene rankings in association studies with multivariate quantitative traits, and to estimate genetic similarities measures that are predictive of the trait. However, in studies involving hundreds of thousands of SNPs and high-dimensional traits, a very large ensemble of trees must be inferred from the data in order to obtain reliable rankings, which makes the application of these algorithms computationally prohibitive. We have developed a parallel version of the RF algorithm for regression and genetic similarity learning tasks in large-scale population genetic association studies involving multivariate traits, called PaRFR (Parallel Random Forest Regression). Our implementation takes advantage of the MapReduce programming model and is deployed on Hadoop, an open-source software framework that supports data-intensive distributed applications. Notable speed-ups are obtained by introducing a distance-based criterion for node splitting in the tree estimation process. PaRFR has been applied to a genome-wide association study on Alzheimer's disease (AD) in which the quantitative trait consists of a high-dimensional neuroimaging phenotype describing longitudinal changes in the human brain structure. PaRFR provides a ranking of SNPs associated to this trait, and produces pair-wise measures of genetic proximity that can be directly compared to pair-wise measures of phenotypic proximity. Several known AD-related variants have been identified, including APOE4 and TOMM40. We also present experimental evidence supporting the hypothesis of a linear relationship between the number of top-ranked mutated states, or frequent mutation patterns, and an indicator of disease severity. The Java codes are freely available at http://www2.imperial.ac.uk/~gmontana .",
"title": ""
}
] |
scidocsrr
|
8c108461114f056041167732a0fced25
|
Evolving Deep Recurrent Neural Networks Using Ant Colony Optimization
|
[
{
"docid": "83cace7cc84332bc30eeb6bc957ea899",
"text": "Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing decision makers in many areas. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in combination are quite different. Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of forecasting problems with a high degree of accuracy. However, using ANNs to model linear problems have yielded mixed results, and hence; it is not wise to apply ANNs blindly to any type of data. Autoregressive integrated moving average (ARIMA) models are one of the most popular linear models in time series forecasting, which have been widely applied in order to construct more accurate hybrid models during the past decade. Although, hybrid techniques, which decompose a time series into its linear and nonlinear components, have recently been shown to be successful for single models, these models have some disadvantages. In this paper, a novel hybridization of artificial neural networks and ARIMA model is proposed in order to overcome mentioned limitation of ANNs and yield more general and more accurate forecasting model than traditional hybrid ARIMA-ANNs models. In our proposed model, the unique advantages of ARIMA models in linear modeling are used in order to identify and magnify the existing linear structure in data, and then a neural network is used in order to determine a model to capture the underlying data generating process and predict, using preprocessed data. Empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy ybrid achieved by traditional h",
"title": ""
}
] |
[
{
"docid": "d3049fee1ed622515f5332bcfa3bdd7b",
"text": "PURPOSE\nTo prospectively analyze, using validated outcome measures, symptom improvement in patients with mild to moderate cubital tunnel syndrome treated with rigid night splinting and activity modifications.\n\n\nMETHODS\nNineteen patients (25 extremities) were enrolled prospectively between August 2009 and January 2011 following a diagnosis of idiopathic cubital tunnel syndrome. Patients were treated with activity modifications as well as a 3-month course of rigid night splinting maintaining 45° of elbow flexion. Treatment failure was defined as progression to operative management. Outcome measures included patient-reported splinting compliance as well as the Quick Disabilities of the Arm, Shoulder, and Hand questionnaire and the Short Form-12. Follow-up included a standardized physical examination. Subgroup analysis included an examination of the association between splinting success and ulnar nerve hypermobility.\n\n\nRESULTS\nTwenty-four of 25 extremities were available at mean follow-up of 2 years (range, 15-32 mo). Twenty-one of 24 (88%) extremities were successfully treated without surgery. We observed a high compliance rate with the splinting protocol during the 3-month treatment period. Quick Disabilities of the Arm, Shoulder, and Hand scores improved significantly from 29 to 11, Short Form-12 physical component summary score improved significantly from 45 to 54, and Short Form-12 mental component summary score improved significantly from 54 to 62. Average grip strength increased significantly from 32 kg to 35 kg, and ulnar nerve provocative testing resolved in 82% of patients available for follow-up examination.\n\n\nCONCLUSIONS\nRigid night splinting when combined with activity modification appears to be a successful, well-tolerated, and durable treatment modality in the management of cubital tunnel syndrome. We recommend that patients presenting with mild to moderate symptoms consider initial treatment with activity modification and rigid night splinting for 3 months based on a high likelihood of avoiding surgical intervention.\n\n\nTYPE OF STUDY/LEVEL OF EVIDENCE\nTherapeutic II.",
"title": ""
},
{
"docid": "0c9fa24357cb09cea566b7b2493390c4",
"text": "Conflict is a common phenomenon in interactions both between individuals, and between groups of individuals. As CSCW is concerned with the design of systems to support such interactions, an examination of conflict, and the various ways of dealing with it, would clearly be of benefit. This chapter surveys the literature that is most relevant to the CSCW community, covering many disciplines that have addressed particular aspects of conflict. The chapter is organised around a series of assertions, representing both commonly held beliefs about conflict, and hypotheses and theories drawn from the literature. In many cases no definitive statement can be made about the truth or falsity of an assertion: the empirical evidence both supporting and opposing is examined, and pointers are provided to further discussion in the literature. One advantage of organising the survey in this way is that it need not be read in order. Each assertion forms a self-contained essay, with cross-references to related assertions. Hence, treat the chapter as a resource to be dipped into rather than read in sequence. This introduction sets the scene by defining conflict, and providing a rationale for studying conflict in relation to CSCW. The assertions are presented in section 2, and form the main body of the chapter. Finally, section 3 relates the assertions to current work on CSCW systems.",
"title": ""
},
{
"docid": "fde0f116dfc929bf756d80e2ce69b1c7",
"text": "The particle swarm optimization (PSO), new to the electromagnetics community, is a robust stochastic evolutionary computation technique based on the movement and intelligence of swarms. This paper introduces a conceptual overview and detailed explanation of the PSO algorithm, as well as how it can be used for electromagnetic optimizations. This paper also presents several results illustrating the swarm behavior in a PSO algorithm developed by the authors at UCLA specifically for engineering optimizations (UCLA-PSO). Also discussed is recent progress in the development of the PSO and the special considerations needed for engineering implementation including suggestions for the selection of parameter values. Additionally, a study of boundary conditions is presented indicating the invisible wall technique outperforms absorbing and reflecting wall techniques. These concepts are then integrated into a representative example of optimization of a profiled corrugated horn antenna.",
"title": ""
},
{
"docid": "13daec7c27db2b174502d358b3c19f43",
"text": "The QRS complex of the ECG signal is the reference point for the most ECG applications. In this paper, we aim to describe the design and the implementation of an embedded system for detection of the QRS complexes in real-time. The design is based on the notorious algorithm of Pan & Tompkins, with a novel simple idea for the decision stage of this algorithm. The implementation uses a circuit of the current trend, i.e. the FPGA, and it is developed with the Xilinx design tool, System Generator for DSP. In the authors’ view, the specific feature, i.e. authenticity and simplicity of the proposed model, is that the threshold value is updated from the previous smallest peak; in addition, the model is entirely designed simply with MCode blocks. The hardware design is tested with five 30 minutes data records obtained from the MIT-BIH Arrhythmia database. Its accuracy exceeds 96%, knowing that four records among the five represent the worst cases in the database. In terms of the resources utilization, our implementation occupies around 30% of the used FPGA device, namely the Xilinx Spartan 3E XC3S500.",
"title": ""
},
{
"docid": "fa0c62b91643a45a5eff7c1b1fa918f1",
"text": "This paper presents outdoor field experimental results to clarify the 4x4 MIMO throughput performance from applying multi-point transmission in the 15 GHz frequency band in the downlink of 5G cellular radio access system. The experimental results in large-cell scenario shows that up to 30 % throughput gain compared to non-multi-point transmission is achieved although the difference for the RSRP of two TPs is over 10 dB, so that the improvement for the antenna correlation is achievable and important aspect for the multi-point transmission in the 15 GHz frequency band as well as the improvement of the RSRP. Furthermore in small-cell scenario, the throughput gain of 70% and over 5 Gbps are achieved applying multi-point transmission in the condition of two different MIMO streams transmission from a single TP as distributed MIMO instead of four MIMO streams transmission from a single TP.",
"title": ""
},
{
"docid": "512d29a398f51041466884f4decec84a",
"text": "Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.2",
"title": ""
},
{
"docid": "876e56a4c859e5fc7fa0038845317da4",
"text": "The rise of Web 2.0 with its increasingly popular social sites like Twitter, Facebook, blogs and review sites has motivated people to express their opinions publicly and more frequently than ever before. This has fueled the emerging field known as sentiment analysis whose goal is to translate the vagaries of human emotion into hard data. LCI is a social channel analysis platform that taps into what is being said to understand the sentiment with the particular ability of doing so in near real-time. LCI integrates novel algorithms for sentiment analysis and a configurable dashboard with different kinds of charts including dynamic ones that change as new data is ingested. LCI has been researched and prototyped at HP Labs in close interaction with the Business Intelligence Solutions (BIS) Division and a few customers. This paper presents an overview of the architecture and some of its key components and algorithms, focusing in particular on how LCI deals with Twitter and illustrating its capabilities with selected use cases.",
"title": ""
},
{
"docid": "cd5a267c1dac92e68ba677c4a2e06422",
"text": "Person re-identification aims to robustly measure similarities between person images. The significant variation of person poses and viewing angles challenges for accurate person re-identification. The spatial layout and correspondences between query person images are vital information for tackling this problem but are ignored by most state-of-the-art methods. In this paper, we propose a novel Kronecker Product Matching module to match feature maps of different persons in an end-to-end trainable deep neural network. A novel feature soft warping scheme is designed for aligning the feature maps based on matching results, which is shown to be crucial for achieving superior accuracy. The multi-scale features based on hourglass-like networks and self residual attention are also exploited to further boost the re-identification performance. The proposed approach outperforms state-of-the-art methods on the Market-1501, CUHK03, and DukeMTMC datasets, which demonstrates the effectiveness and generalization ability of our proposed approach.",
"title": ""
},
{
"docid": "51487a368a572dc415a5a4c0d4621d4b",
"text": "Wireless sensor networks (WSNs) are an emerging technology for monitoring physical world. Different from the traditional wireless networks and ad hoc networks, the energy constraint of WSNs makes energy saving become the most important goal of various routing algorithms. For this purpose, a cluster based routing algorithm LEACH (low energy adaptive clustering hierarchy) has been proposed to organize a sensor network into a set of clusters so that the energy consumption can be evenly distributed among all the sensor nodes. Periodical cluster head voting in LEACH, however, consumes non-negligible energy and other resources. While another chain-based algorithm PEGASIS (powerefficient gathering in sensor information systems) can reduce such energy consumption, it causes a longer delay for data transmission. In this paper, we propose a routing algorithm called CCM (Chain-Cluster based Mixed routing), which makes full use of the advantages of LEACH and PEGASIS, and provide improved performance. It divides a WSN into a few chains and runs in two stages. In the first stage, sensor nodes in each chain transmit data to their own chain head node in parallel, using an improved chain routing protocol. In the second stage, all chain head nodes group as a cluster in a selforganized manner, where they transmit fused data to a voted cluster head using the cluster based routing. Experimental F. Tang (B) · M. Guo · Y. Ma Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China e-mail: tang-fl@cs.sjtu.edu.cn I. You School of Information Science, Korean Bible University, Seoul, South Korea F. Tang · S. Guo School of Computer Science and Engineering, The University of Aizu, Fukushima 965-8580, Japan results demonstrate that our CCM algorithm outperforms both LEACH and PEGASIS in terms of the product of consumed energy and delay, weighting the overall performance of both energy consumption and transmission delay.",
"title": ""
},
{
"docid": "eccae386c0b8c053abda46537efbd792",
"text": "Software Defined Networking (SDN) has recently emerged as a new network management platform. The centralized control architecture presents many new opportunities. Among the network management tasks, measurement is one of the most important and challenging one. Researchers have proposed many solutions to better utilize SDN for network measurement. Among them, how to detect Distributed Denial-of-Services (DDoS) quickly and precisely is a very challenging problem. In this paper, we propose methods to detect DDoS attacks leveraging on SDN's flow monitoring capability. Our methods utilize measurement resources available in the whole SDN network to adaptively balance the coverage and granularity of attack detection. Through simulations we demonstrate that our methods can quickly locate potential DDoS victims and attackers by using a constrained number of flow monitoring rules.",
"title": ""
},
{
"docid": "27237bf03da7f6aea13c137668def5f0",
"text": "In deep learning community, gradient based methods are typically employed to train the proposed models. These methods generally operate in a mini-batch training manner wherein a small fraction of the training data is invoked to compute an approximative gradient. It is reported that models trained with large batch are prone to generalize worse than those trained with small batch. Several inspiring works are conducted to figure out the underlying reason of this phenomenon, but almost all of them focus on classification tasks. In this paper, we investigate the influence of batch size on regression task. More specifically, we tested the generalizability of deep auto-encoder trained with varying batch size and checked some well-known measures relating to model generalization. Our experimental results lead to three conclusions. First, there exist no obvious generalization gap in regression model such as auto-encoders. Second, with a same train loss as target, small batch generally lead to solutions closer to the starting point than large batch. Third, spectral norm of weight matrices is closely related to generalizability of the model, but different layers contribute variously to the generalization performance.",
"title": ""
},
{
"docid": "fc2a7c789f742dfed24599997845b604",
"text": "An axially symmetric power combiner, which utilizes a tapered conical impedance matching network to transform ten 50-Omega inputs to a central coaxial line over the X-band, is presented. The use of a conical line allows standard transverse electromagnetic design theory to be used, including tapered impedance matching networks. This, in turn, alleviates the problem of very low impedance levels at the common port of conical line combiners, which normally requires very high-precision manufacturing and assembly. The tapered conical line is joined to a tapered coaxial line for a completely smooth transmission line structure. Very few full-wave analyses are needed in the design process since circuit models are optimized to achieve a wide operating bandwidth. A ten-way prototype was developed at X-band with a 47% bandwidth, very low losses, and excellent agreement between simulated and measured results.",
"title": ""
},
{
"docid": "3cc6d54cb7a8507473f623a149c3c64b",
"text": "The measurement of loyalty is a topic of great interest for the marketing academic literature. The relation that loyalty has with the results of organizations has been tested by numerous studies and the search to retain profitable customers has become a maxim in firm management. Tourist destinations have not remained oblivious to this trend. However, the difficulty involved in measuring the loyalty of a tourist destination is a brake on its adoption by those in charge of destination management. The usefulness of measuring loyalty lies in being able to apply strategies which enable improving it, but that also impact on the enhancement of the organization’s results. The study of tourists’ loyalty to a destination is considered relevant for the literature and from the point of view of the management of the multiple actors involved in the tourist activity. Based on these considerations, this work proposes a synthetic indictor that allows the simple measurement of the tourist’s loyalty. To do so, we used as a starting point Best’s (2007) customer loyalty index adapted to the case of tourist destinations. We also employed a variable of results – the tourist’s overnight stays in the destination – to create a typology of customers according to their levels of loyalty and the number of their overnight stays. The data were obtained from a survey carried out with 2373 tourists of the city of Seville. In accordance with the results attained, the proposal of the synthetic indicator to measure tourist loyalty is viable, as it is a question of a simple index constructed from easily obtainable data. Furthermore, four groups of tourists have been identified, according to their degree of loyalty and profitability, using the number of overnight stays of the tourists in their visit to the destination. The study’s main contribution stems from the possibility of simply measuring loyalty and from establishing four profiles of tourists for which marketing strategies of differentiated relations can be put into practice and that contribute to the improvement of the destination’s results. © 2018 Journal of Innovation & Knowledge. Published by Elsevier España, S.L.U. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/",
"title": ""
},
{
"docid": "14b0f4542d34a114fd84f14d1f0b53e8",
"text": "Selection the ideal mate is the most confusing process in the life of most people. To explore these issues to examine differences under graduates socio-economic status have on their preference of marriage partner selection in terms of their personality traits, socio-economic status and physical attractiveness. A total of 770 respondents participated in this study. The respondents were mainly college students studying in final year degree in professional and non professional courses. The result revealed that the respondents socio-economic status significantly influence preferences in marriage partners selection in terms of personality traits, socio-economic status and physical attractiveness.",
"title": ""
},
{
"docid": "69b831bb25e5ad0f18054d533c313b53",
"text": "In recent years, indoor positioning has emerged as a critical function in many end-user applications; including military, civilian, disaster relief and peacekeeping missions. In comparison with outdoor environments, sensing location information in indoor environments requires a higher precision and is a more challenging task in part because various objects reflect and disperse signals. Ultra WideBand (UWB) is an emerging technology in the field of indoor positioning that has shown better performance compared to others. In order to set the stage for this work, we provide a survey of the state-of-the-art technologies in indoor positioning, followed by a detailed comparative analysis of UWB positioning technologies. We also provide an analysis of strengths, weaknesses, opportunities, and threats (SWOT) to analyze the present state of UWB positioning technologies. While SWOT is not a quantitative approach, it helps in assessing the real status and in revealing the potential of UWB positioning to effectively address the indoor positioning problem. Unlike previous studies, this paper presents new taxonomies, reviews some major recent advances, and argues for further exploration by the research community of this challenging problem space.",
"title": ""
},
{
"docid": "148af36df5a403b33113ee5b9a7ad1d3",
"text": "The experience of interacting with a robot has been shown to be very different in comparison to people’s interaction experience with other technologies and artifacts, and often has a strong social or emotional component – a fact that raises concerns related to evaluation. In this paper we outline how this difference is due in part to the general complexity of robots’ overall context of interaction, related to their dynamic presence in the real world and their tendency to invoke a sense of agency. A growing body of work in Human-Robot Interaction (HRI) focuses on exploring this overall context and tries to unpack what exactly is unique about interaction with robots, often through leveraging evaluation methods and frameworks designed for more-traditional HCI. We raise the concern that, due to these differences, HCI evaluation methods should be applied to HRI with care, and we present a survey of HCI evaluation techJames E. Young University of Calgary, Canada, The University of Tokyo, Japan E-mail: jim.young@ucalgary.ca JaYoung Sung Georgia Institute of Technology, GA, U.S.A. E-mail: jsung@cc.gatech.edu Amy Voida University of Calgary, Canada E-mail: avoida@ucalgary.ca Ehud Sharlin University of Calgary, Canada E-mail: ehud@cpsc.ucalgary.ca Takeo Igarashi The University of Tokyo, Japan, JST ERATO, Japan E-mail: takeo@acm.org Henrik I. Christensen Georgia Institute of Technology, GA, U.S.A. E-mail: hic@cc.gatech.edu Rebecca E. Grinter Georgia Institute of Technology, GA, U.S.A. E-mail: beki@cc.gatech.edu niques from the perspective of the unique challenges of robots. Further, we have developed a new set of tools to aid evaluators in targeting and unpacking the holistic human-robot interaction experience. Our technique surrounds the development of a map of interaction experience possibilities and, as part of this, we present a set of three perspectives for targeting specific components of interaction experience, and demonstrate how these tools can be practically used in evaluation. CR Subject Classification H.1.2 [Models and principles]: user/machine systems–software psychology",
"title": ""
},
{
"docid": "00639757a1a60fe8e56b868bd6e2ff62",
"text": "Giant congenital melanocytic nevus is usually defined as a melanocytic lesion present at birth that will reach a diameter ≥ 20 cm in adulthood. Its incidence is estimated in <1:20,000 newborns. Despite its rarity, this lesion is important because it may associate with severe complications such as malignant melanoma, affect the central nervous system (neurocutaneous melanosis), and have major psychosocial impact on the patient and his family due to its unsightly appearance. Giant congenital melanocytic nevus generally presents as a brown lesion, with flat or mammilated surface, well-demarcated borders and hypertrichosis. Congenital melanocytic nevus is primarily a clinical diagnosis. However, congenital nevi are histologically distinguished from acquired nevi mainly by their larger size, the spread of the nevus cells to the deep layers of the skin and by their more varied architecture and morphology. Although giant congenital melanocytic nevus is recognized as a risk factor for the development of melanoma, the precise magnitude of this risk is still controversial. The estimated lifetime risk of developing melanoma varies from 5 to 10%. On account of these uncertainties and the size of the lesions, the management of giant congenital melanocytic nevus needs individualization. Treatment may include surgical and non-surgical procedures, psychological intervention and/or clinical follow-up, with special attention to changes in color, texture or on the surface of the lesion. The only absolute indication for surgery in giant congenital melanocytic nevus is the development of a malignant neoplasm on the lesion.",
"title": ""
},
{
"docid": "922c0a315751c90a11b018547f8027b2",
"text": "We propose a model for the recently discovered Θ+ exotic KN resonance as a novel kind of a pentaquark with an unusual color structure: a 3c ud diquark, coupled to 3c uds̄ triquark in a relative P -wave. The state has J P = 1/2+, I = 0 and is an antidecuplet of SU(3)f . A rough mass estimate of this pentaquark is close to experiment.",
"title": ""
},
{
"docid": "9b19f343a879430283881a69e3f9cb78",
"text": "Effective analysis of applications (shortly apps) is essential to understanding apps' behavior. Two analysis approaches, i.e., static and dynamic, are widely used; although, both have well known limitations. Static analysis suffers from obfuscation and dynamic code updates. Whereas, it is extremely hard for dynamic analysis to guarantee the execution of all the code paths in an app and thereby, suffers from the code coverage problem. However, from a security point of view, executing all paths in an app might be less interesting than executing certain potentially malicious paths in the app. In this work, we use a hybrid approach that combines static and dynamic analysis in an iterative manner to cover their shortcomings. We use targeted execution of interesting code paths to solve the issues of obfuscation and dynamic code updates. Our targeted execution leverages a slicing-based analysis for the generation of data-dependent slices for arbitrary methods of interest (MOI) and on execution of the extracted slices for capturing their dynamic behavior. Motivated by the fact that malicious apps use Inter Component Communications (ICC) to exchange data [19], our main contribution is the automatic targeted triggering of MOI that use ICC for passing data between components. We implement a proof of concept, TelCC, and report the results of our evaluation.",
"title": ""
}
] |
scidocsrr
|
ed58c8deffe1a3e40fe788fb4977fe11
|
An Authoring Tool for Location-Based Mobile Games with Augmented Reality Features
|
[
{
"docid": "0d5b33ce7e1a1af17751559c96fdcf0a",
"text": "Urban-related data and geographic information are becoming mainstream in the Linked Data community due also to the popularity of Location-based Services. In this paper, we introduce the UrbanMatch game, a mobile gaming application that joins data linkage and data quality/trustworthiness assessment in an urban environment. By putting together Linked Data and Human Computation, we create a new interaction paradigm to consume and produce location-specific linked data by involving and engaging the final user. The UrbanMatch game is also offered as an example of value proposition and business model of a new family of linked data applications based on gaming in Smart Cities.",
"title": ""
}
] |
[
{
"docid": "199d2f3d640fbb976ef27c8d129922ef",
"text": "Federated learning enables resource-constrained edge compute devices, such as mobile phones and IoT devices, to learn a shared model for prediction, while keeping the training data local. This decentralized approach to train models provides privacy, security, regulatory and economic benefits. In this work, we focus on the statistical challenge of federated learning when local data is non-IID. We first show that the accuracy of federated learning reduces significantly, by up to ~55% for neural networks trained for highly skewed non-IID data, where each client device trains only on a single class of data. We further show that this accuracy reduction can be explained by the weight divergence, which can be quantified by the earth mover’s distance (EMD) between the distribution over classes on each device and the population distribution. As a solution, we propose a strategy to improve training on non-IID data by creating a small subset of data which is globally shared between all the edge devices. Experiments show that accuracy can be increased by ~30% for the CIFAR-10 dataset with only 5% globally shared data.",
"title": ""
},
{
"docid": "b0b90289271b471697b30bf16f22ceb4",
"text": "A general problem in acoustic scene classification task is the mismatched conditions between training and testing data, which significantly reduces the performance of the developed methods on classification accuracy. As a countermeasure, we present the first method of unsupervised adversarial domain adaptation for acoustic scene classification. We employ a model pre-trained on data from one set of conditions and by using data from other set of conditions, we adapt the model in order that its output cannot be used for classifying the set of conditions that input data belong to. We use a freely available dataset from the DCASE 2018 challenge Task 1, subtask B, that contains data from mismatched recording devices. We consider the scenario where the annotations are available for the data recorded from one device, but not for the rest. Our results show that with our model agnostic method we can achieve ∼ 10% increase at the accuracy on an unseen and unlabeled dataset, while keeping almost the same performance on the labeled dataset.",
"title": ""
},
{
"docid": "7a52fecf868040da5db3bd6fcbdcc0b2",
"text": "Mobile edge computing (MEC) is a promising paradigm to provide cloud-computing capabilities in close proximity to mobile devices in fifth-generation (5G) networks. In this paper, we study energy-efficient computation offloading (EECO) mechanisms for MEC in 5G heterogeneous networks. We formulate an optimization problem to minimize the energy consumption of the offloading system, where the energy cost of both task computing and file transmission are taken into consideration. Incorporating the multi-access characteristics of the 5G heterogeneous network, we then design an EECO scheme, which jointly optimizes offloading and radio resource allocation to obtain the minimal energy consumption under the latency constraints. Numerical results demonstrate energy efficiency improvement of our proposed EECO scheme.",
"title": ""
},
{
"docid": "e3c77ede3d63708b138b6aa240fea57b",
"text": "We numerically investigated 3-dimensional (3D) sub-wavelength structured metallic nanohole films with various thicknesses using wavelength interrogation technique. The reflectivity and full-width at half maximum (FWHM) of the localized surface plasmon resonance (LSPR) spectra was calculated using finite-difference time domain (FDTD) method. Results showed that a 100nm-thick silver nanohole gave higher reflectivity of 92% at the resonance wavelength of 644nm. Silver, copper and aluminum structured thin films showed only a small difference in the reflectivity spectra for various metallic film thicknesses whereas gold thin films showed a reflectivity decrease as the film thickness was increased. However, all four types of metallic nanohole films exhibited increment in FWHM (broader curve) and the resonance wavelength was red-shifted as the film thicknesses were decreased.",
"title": ""
},
{
"docid": "9dfef5bc76b78e7577b9eb377b830a9e",
"text": "Patients with Parkinson's disease may have difficulties in speaking because of the reduced coordination of the muscles that control breathing, phonation, articulation and prosody. Symptoms that may occur because of changes are weakening of the volume of the voice, voice monotony, changes in the quality of the voice, speed of speech, uncontrolled repetition of words. The evaluation of some of the disorders mentioned can be achieved through measuring the variation of parameters in an objective manner. It may be done to evaluate the response to the treatments with intra-daily frequency pre / post-treatment, as well as in the long term. Software systems allow these measurements also by recording the patient's voice. This allows to carry out a large number of tests by means of a larger number of patients and a higher frequency of the measurements. The main goal of our work was to design and realize Voxtester, an effective and simple to use software system useful to measure whether changes in voice emission are sensitive to pharmacologic treatments. Doctors and speech therapists can easily use it without going into the technical details, and we think that this goal is reached only by Voxtester, up to date.",
"title": ""
},
{
"docid": "990c1e569bf489d23182d5778a3c1b3f",
"text": "The future Internet is an IPv6 network interconnecting traditional computers and a large number of smart objects. This Internet of Things (IoT) will be the foundation of many services and our daily life will depend on its availability and reliable operation. Therefore, among many other issues, the challenge of implementing secure communication in the IoT must be addressed. In the traditional Internet, IPsec is the established and tested way of securing networks. It is therefore reasonable to explore the option of using IPsec as a security mechanism for the IoT. Smart objects are generally added to the Internet using IPv6 over Low-power Wireless Personal Area Networks (6LoWPAN), which defines IP communication for resource-constrained networks. Thus, to provide security for the IoT based on the trusted and tested IPsec mechanism, it is necessary to define an IPsec extension of 6LoWPAN. In this paper, we present such a 6LoWPAN/IPsec extension and show the viability of this approach. We describe our 6LoWPAN/IPsec implementation, which we evaluate and compare with our implementation of IEEE 802.15.4 link-layer security. We also show that it is possible to reuse crypto hardware within existing IEEE 802.15.4 transceivers for 6LoWPAN/IPsec. The evaluation results show that IPsec is a feasible option for securing the IoT in terms of packet size, energy consumption, memory usage, and processing time. Furthermore, we demonstrate that in contrast to common belief, IPsec scales better than link-layer security as the data size and the number of hops grow, resulting in time and energy savings. Copyright © 2012 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "9049805c56c9b7fc212fdb4c7f85dfe1",
"text": "Intentions (6) Do all the important errands",
"title": ""
},
{
"docid": "429f6a87ceebf0bd2b852c1a1ab91eb2",
"text": "BACKGROUND\nIn some countries extracts of the plant Hypericum perforatum L. (popularly called St. John's wort) are widely used for treating patients with depressive symptoms.\n\n\nOBJECTIVES\nTo investigate whether extracts of hypericum are more effective than placebo and as effective as standard antidepressants in the treatment of major depression; and whether they have fewer adverse effects than standard antidepressant drugs.\n\n\nSEARCH STRATEGY\nTrials were searched in computerised databases, by checking bibliographies of relevant articles, and by contacting manufacturers and researchers.\n\n\nSELECTION CRITERIA\nTrials were included if they: (1) were randomised and double-blind; (2) included patients with major depression; (3) compared extracts of St. John's wort with placebo or standard antidepressants; (4) included clinical outcomes assessing depressive symptoms.\n\n\nDATA COLLECTION AND ANALYSIS\nAt least two independent reviewers extracted information from study reports. The main outcome measure for assessing effectiveness was the responder rate ratio (the relative risk of having a response to treatment). The main outcome measure for adverse effects was the number of patients dropping out due to adverse effects.\n\n\nMAIN RESULTS\nA total of 29 trials (5489 patients) including 18 comparisons with placebo and 17 comparisons with synthetic standard antidepressants met the inclusion criteria. Results of placebo-controlled trials showed marked heterogeneity. In nine larger trials the combined response rate ratio (RR) for hypericum extracts compared with placebo was 1.28 (95% confidence interval (CI), 1.10 to 1.49) and from nine smaller trials was 1.87 (95% CI, 1.22 to 2.87). Results of trials comparing hypericum extracts and standard antidepressants were statistically homogeneous. Compared with tri- or tetracyclic antidepressants and selective serotonin reuptake inhibitors (SSRIs), respectively, RRs were 1.02 (95% CI, 0.90 to 1.15; 5 trials) and 1.00 (95% CI, 0.90 to 1.11; 12 trials). Both in placebo-controlled trials and in comparisons with standard antidepressants, trials from German-speaking countries reported findings more favourable to hypericum. Patients given hypericum extracts dropped out of trials due to adverse effects less frequently than those given older antidepressants (odds ratio (OR) 0.24; 95% CI, 0.13 to 0.46) or SSRIs (OR 0.53, 95% CI, 0.34-0.83).\n\n\nAUTHORS' CONCLUSIONS\nThe available evidence suggests that the hypericum extracts tested in the included trials a) are superior to placebo in patients with major depression; b) are similarly effective as standard antidepressants; c) and have fewer side effects than standard antidepressants. The association of country of origin and precision with effects sizes complicates the interpretation.",
"title": ""
},
{
"docid": "c2205d9998d8292f459816f75cef442c",
"text": "Interpersonal ties are responsible for the structure of social networks and the transmission of information through these networks. Different types of social ties have essentially different influences on people. Awareness of the types of social ties can benefit many applications, such as recommendation and community detection. For example, our close friends tend to move in the same circles that we do, while our classmates may be distributed into different communities. Though a bulk of research has focused on inferring particular types of relationships in a specific social network, few publications systematically study the generalization of the problem of predicting social ties across multiple heterogeneous networks.\n In this work, we develop a framework referred to as TranFG for classifying the type of social relationships by learning across heterogeneous networks. The framework incorporates social theories into a factor graph model, which effectively improves the accuracy of predicting the types of social relationships in a target network by borrowing knowledge from a different source network. We also present several active learning strategies to further enhance the inferring performance. To scale up the model to handle really large networks, we design a distributed learning algorithm for the proposed model.\n We evaluate the proposed framework (TranFG) on six different networks and compare with several existing methods. TranFG clearly outperforms the existing methods on multiple metrics. For example, by leveraging information from a coauthor network with labeled advisor-advisee relationships, TranFG is able to obtain an F1-score of 90% (8%--28% improvements over alternative methods) for predicting manager-subordinate relationships in an enterprise email network. The proposed model is efficient. It takes only a few minutes to train the proposed transfer model on large networks containing tens of thousands of nodes.",
"title": ""
},
{
"docid": "5b7ff78bc563c351642e5f316a6d895b",
"text": "OBJECTIVE\nTo determine an albino population's expectations from an outreach albino clinic, understanding of skin cancer risk, and attitudes toward sun protection behavior.\n\n\nDESIGN\nSurvey, June 1, 1997, to September 30, 1997.\n\n\nSETTING\nOutreach albino clinics in Tanzania.\n\n\nPARTICIPANTS\nAll albinos 13 years and older and accompanying adults of younger children attending clinics. Unaccompanied children younger than 13 years and those too sick to answer questions were excluded. Ninety-four questionnaires were completed in 5 villages, with a 100% response rate.\n\n\nINTERVENTIONS\nInterview-based questionnaire with scoring system for pictures depicting poorly sun-protected albinos.\n\n\nRESULTS\nThe most common reasons for attending the clinic were health education and skin examination. Thirteen respondents (14%) believed albinism was inherited; it was more common to believe in superstitious causes of albinism than inheritance. Seventy-three respondents (78%) believed skin cancer was preventable, and 60 (63%) believed skin cancer was related to the sun. Seventy-two subjects (77%) thought sunscreen provided protection from the sun; 9 (10%) also applied it at night. Reasons for not wearing sun-protective clothing included fashion, culture, and heat. The hats provided were thought to have too soft a brim, to shrink, and to be ridiculed. Suggestions for additional clinic services centered on education and employment. Albinos who had read the educational booklet had no better understanding of sun avoidance than those who had not (P =.49).\n\n\nCONCLUSIONS\nThere was a reasonable understanding of risks of skin cancer and sun-avoidance methods. Clinical advice was often not followed for cultural reasons. The hats provided were unsuitable, and there was some confusion about the use of sunscreen. A lack of understanding of the cause of albinism led to many superstitions.",
"title": ""
},
{
"docid": "790de0f792c81b9e26676f800e766759",
"text": "The ubiquity of online fashion shopping demands effective recommendation services for customers. In this paper, we study two types of fashion recommendation: (i) suggesting an item that matches existing components in a set to form a stylish outfit (a collection of fashion items), and (ii) generating an outfit with multimodal (images/text) specifications from a user. To this end, we propose to jointly learn a visual-semantic embedding and the compatibility relationships among fashion items in an end-to-end fashion. More specifically, we consider a fashion outfit to be a sequence (usually from top to bottom and then accessories) and each item in the outfit as a time step. Given the fashion items in an outfit, we train a bidirectional LSTM (Bi-LSTM) model to sequentially predict the next item conditioned on previous ones to learn their compatibility relationships. Further, we learn a visual-semantic space by regressing image features to their semantic representations aiming to inject attribute and category information as a regularization for training the LSTM. The trained network can not only perform the aforementioned recommendations effectively but also predict the compatibility of a given outfit. We conduct extensive experiments on our newly collected Polyvore dataset, and the results provide strong qualitative and quantitative evidence that our framework outperforms alternative methods.",
"title": ""
},
{
"docid": "ef1bc2fc31f465300ed74863c350298a",
"text": "Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018). This paper contributes the first large-scale systematic study comparing different pretraining tasks in this context, both as complements to language modeling and as potential alternatives. The primary results of the study support the use of language modeling as a pretraining task and set a new state of the art among comparable models using multitask learning with language models. However, a closer look at these results reveals worryingly strong baselines and strikingly varied results across target tasks, suggesting that the widely-used paradigm of pretraining and freezing sentence encoders may not be an ideal platform for further work.",
"title": ""
},
{
"docid": "387827eae5fb528506c83d5fb161cd63",
"text": "Distinction work task power-matching control strategy was adapted to excavator for improving fuel efficiency; the accuracy of rotate engine speed at each work task was core to excavator for saving energy. 21t model excavator ZG3210-9 was taken as the study object to analyze the rotate speed setting and control method, linear position feedback throttle motor was employed to control the governor of engine to adjust rotate speed. Improved double closed loop PID method was adapted to control the engine, feedback of rotate speed and throttle position was taken as the input of the PID control mode. Control system was designed in CoDeSys platform with G16 controller, throttle motor control experiment and engine auto control experiment were carried on the excavator for tuning PID parameters. The result indicated that the double closed-loop PID method can take control and set the engine rotate speed automatically with the maximum error of 8 rpm. The linear model between throttle feedback position and rotate speed is established, which provides the control basis for dynamic energy saving of excavator.",
"title": ""
},
{
"docid": "aa4f5fe7af1b3a25bf4ba206fdd62fb0",
"text": "We study active object tracking, where a tracker takes visual observations (i.e., frame sequences) as inputs and produces the corresponding camera control signals as outputs (e.g., move forward, turn left, etc.). Conventional methods tackle tracking and camera control tasks separately, and the resulting system is difficult to tune jointly. Such an approach also requires significant human efforts for image labeling and expensive trial-and-error system tuning in real-world. To address these issues, we propose, in this paper, an end-to-end solution via deep reinforcement learning. A ConvNet-LSTM function approximator is adopted for the direct frame-to-action prediction. We further propose environment augmentation techniques and a customized reward function which are crucial for successful training. The tracker trained in simulators (ViZDoom, Unreal Engine) demonstrates good generalization behaviors in the case of unseen object moving paths, unseen object appearances, unseen backgrounds, and distracting objects. The system is robust and can restore tracking after occasional lost of the target being tracked. We also find that the tracking ability, obtained solely from simulators, can potentially transfer to real-world scenarios. We demonstrate successful examples of such transfer, via experiments over the VOT dataset and the deployment of a real-world robot using the proposed active tracker trained in simulation.",
"title": ""
},
{
"docid": "7e3bee96d9f3ce9cd46a8d70f9db9b3b",
"text": "In the modern era game of professional and amateur basketball, automated statistical positional marking, referral in actual time and video footage analysis for the same have become a part and parcel for the game. Every pre-match discussion is dominated by extensive study of the opponent's defensive and offensive formation, plays and metrics. A computerized video analysis is required for this reason as it will provide a concise guide for analysis. In addition there is a serious impact on the game due to dubious call by referees on real time judgemental calls. A video analysis will make real time video referrals a possibility. Thus in a nutshell a player positional marking system can generate statistical data for strategic planning and achieve proper rule enforcement. This research presents survey on available sports system which tries to place the point of sportsperson.",
"title": ""
},
{
"docid": "1b820143d38afa66e3ccf9da80654200",
"text": "Through virtualization, single physical data planes can logically support multiple networking contexts. We propose HyPer4 as a portable virtualization solution. HyPer4 provides a general purpose program, written in the P4 dataplane programming language, that may be dynamically configured to adopt behavior that is functionally equivalent to other P4 programs. HyPer4 extends, through software, the following features to diverse P4-capable devices: the ability to logically store multiple programs and either run them in parallel (network slicing) or as hot-swappable snapshots; and virtual networking between programs (supporting program composition or multi-tenant service interaction). HyPer4 permits modifying the set of programs, as well as the virtual network connecting them, at runtime, without disrupting currently active programs. We show that realistic ASICs-based hardware would be capable of running HyPer4 today.",
"title": ""
},
{
"docid": "00b73790bb0bb2b828e1d443d3e13cf4",
"text": "Grippers and robotic hands are an important field in robotics. Recently, the combination of grasping devices and haptic feedback has been a promising avenue for many applications such as laparoscopic surgery and spatial telemanipulation. This paper presents the work behind a new selfadaptive, a.k.a. underactuated, gripper with a proprioceptive haptic feedback in which the apparent stiffness of the gripper as seen by its actuator is used to estimate contact location. This system combines many technologies and concepts in an integrated mechatronic tool. Among them, underactuated grasping, haptic feedback, compliant joints and a differential seesaw mechanism are used. Following a theoretical modeling of the gripper based on the virtual work principle, the authors present numerical data used to validate this model. Then, a presentation of the practical prototype is given, discussing the sensors, controllers, and mechanical architecture. Finally, the control law and the experimental validation of the haptic feedback are presented.",
"title": ""
},
{
"docid": "cf506587f2699d88e4a2e0be36ccac41",
"text": "A complete list of the titles in this series appears at the end of this volume. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.",
"title": ""
},
{
"docid": "338a998da4a1d3cd8b491c893f51bd18",
"text": "Class imbalance (i.e., scenarios in which classes are unequally represented in the training data) occurs in many real-world learning tasks. Yet despite its practical importance, there is no established theory of class imbalance, and existing methods for handling it are therefore not well motivated. In this work, we approach the problem of imbalance from a probabilistic perspective, and from this vantage identify dataset characteristics (such as dimensionality, sparsity, etc.) that exacerbate the problem. Motivated by this theory, we advocate the approach of bagging an ensemble of classifiers induced over balanced bootstrap training samples, arguing that this strategy will often succeed where others fail. Thus in addition to providing a theoretical understanding of class imbalance, corroborated by our experiments on both simulated and real datasets, we provide practical guidance for the data mining practitioner working with imbalanced data.",
"title": ""
},
{
"docid": "06654ef57e96d2e7cd969d271240371d",
"text": "The construction industry has been facing a paradigm shift to (i) increase; productivity, efficiency, infrastructure value, quality and sustainability, (ii) reduce; lifecycle costs, lead times and duplications, via effective collaboration and communication of stakeholders in construction projects. Digital construction is a political initiative to address low productivity in the sector. This seeks to integrate processes throughout the entire lifecycle by utilising building information modelling (BIM) systems. The focus is to create and reuse consistent digital information by the stakeholders throughout the lifecycle. However, implementation and use of BIM systems requires dramatic changes in the current business practices, bring new challenges for stakeholders e.g., the emerging knowledge and skill gap. This paper reviews and discusses the status of implementation of the BIM systems around the globe and their implications to the industry. Moreover, based on the lessons learnt, it will provide a guide to tackle these challenges and to facilitate successful transition towards utilizing BIM systems in construction projects.",
"title": ""
}
] |
scidocsrr
|
d08dcc782dee5f9474939925134c4e18
|
Evaluation of hierarchical clustering algorithms for document datasets
|
[
{
"docid": "2e2960942966d92ac636fa0be2e9410e",
"text": "Clustering is a powerful technique for large-scale topic discovery from text. It involves two phases: first, feature extraction maps each document or record to a point in high-dimensional space, then clustering algorithms automatically group the points into a hierarchy of clusters. We describe an unsupervised, near-linear time text clustering system that offers a number of algorithm choices for each phase. We introduce a methodology for measuring the quality of a cluster hierarchy in terms of FMeasure, and present the results of experiments comparing different algorithms. The evaluation considers some feature selection parameters (tfidfand feature vector length) but focuses on the clustering algorithms, namely techniques from Scatter/Gather (buckshot, fractionation, and split/join) and kmeans. Our experiments suggest that continuous center adjustment contributes more to cluster quality than seed selection does. It follows that using a simpler seed selection algorithm gives a better time/quality tradeoff. We describe a refinement to center adjustment, “vector average damping,” that further improves cluster quality. We also compare the near-linear time algorithms to a group average greedy agglomerative clustering algorithm to demonstrate the time/quality tradeoff quantitatively.",
"title": ""
}
] |
[
{
"docid": "a11030c2031f96608eb3c2836c91a599",
"text": "Existing deep learning methods of video recognition usually require a large number of labeled videos for training. But for a new task, videos are often unlabeled and it is also time-consuming and labor-intensive to annotate them. Instead of human annotation, we try to make use of existing fully labeled images to help recognize those videos. However, due to the problem of domain shifts and heterogeneous feature representations, the performance of classifiers trained on images may be dramatically degraded for video recognition tasks. In this paper, we propose a novel method, called Hierarchical Generative Adversarial Networks (HiGAN), to enhance recognition in videos (i.e., target domain) by transferring knowledge from images (i.e., source domain). The HiGAN model consists of a low-level conditional GAN and a high-level conditional GAN. By taking advantage of these two-level adversarial learning, our method is capable of learning a domaininvariant feature representation of source images and target videos. Comprehensive experiments on two challenging video recognition datasets (i.e. UCF101 and HMDB51) demonstrate the effectiveness of the proposed method when compared with the existing state-of-the-art domain adaptation methods.",
"title": ""
},
{
"docid": "fa88546c3bbdc8de012ed7cadc552533",
"text": "The aim of this paper is to discuss new solutions in the design of insulated gate bipolar transistor (IGBT) gate drivers with advanced protections such as two-level turn-on to reduce peak current when turning on the device, two-level turn-off to limit over-voltage when the device is turned off, and an active Miller clamp function that acts against cross conduction phenomena. Afterwards, we describe a new circuit which includes a two-level turn-off driver and an active Miller clamp function. Tests and results for these advanced functions are discussed, with particular emphasis on the influence of an intermediate level in a two-level turn-off driver on overshoot across the IGBT.",
"title": ""
},
{
"docid": "f0a7f1f36c10cdd84f88f5e1c266f78d",
"text": "We connect a broad class of generative models through their shared reliance on sequential decision making. Motivated by this view, we develop extensions to an existing model, and then explore the idea further in the context of data imputation – perhaps the simplest setting in which to investigate the relation between unconditional and conditional generative modelling. We formulate data imputation as an MDP and develop models capable of representing effective policies for it. We construct the models using neural networks and train them using a form of guided policy search [11]. Our models generate predictions through an iterative process of feedback and refinement. We show that this approach can learn effective policies for imputation problems of varying difficulty and across multiple datasets.",
"title": ""
},
{
"docid": "483c3e0bd9406baef7040cdc3399442d",
"text": "Composite resins have been shown to be susceptible to discolouration on exposure to oral environment over a period of time. Discolouration of composite resins can be broadly classified as intrinsic or extrinsic. Intrinsic discolouration involves physico-chemical alteration within the material, while extrinsic stains are a result of surface discolouration by extrinsic compounds. Although the effects of various substances on the colour stability of composite resins have been extensively investigated, little has been published on the methods of removing the composite resins staining. The purpose of this paper is to provide a brief literature review on the colour stability of composite resins and clinical approaches in the stain removal.",
"title": ""
},
{
"docid": "d1072bc9960fc3697416c9d982ed5a9c",
"text": "We compared face identification by humans and machines using images taken under a variety of uncontrolled illumination conditions in both indoor and outdoor settings. Natural variations in a person's day-to-day appearance (e.g., hair style, facial expression, hats, glasses, etc.) contributed to the difficulty of the task. Both humans and machines matched the identity of people (same or different) in pairs of frontal view face images. The degree of difficulty introduced by photometric and appearance-based variability was estimated using a face recognition algorithm created by fusing three top-performing algorithms from a recent international competition. The algorithm computed similarity scores for a constant set of same-identity and different-identity pairings from multiple images. Image pairs were assigned to good, moderate, and poor accuracy groups by ranking the similarity scores for each identity pairing, and dividing these rankings into three strata. This procedure isolated the role of photometric variables from the effects of the distinctiveness of particular identities. Algorithm performance for these constant identity pairings varied dramatically across the groups. In a series of experiments, humans matched image pairs from the good, moderate, and poor conditions, rating the likelihood that the images were of the same person (1: sure same - 5: sure different). Algorithms were more accurate than humans in the good and moderate conditions, but were comparable to humans in the poor accuracy condition. To date, these are the most variable illumination- and appearance-based recognition conditions on which humans and machines have been compared. The finding that machines were never less accurate than humans on these challenging frontal images suggests that face recognition systems may be ready for applications with comparable difficulty. We speculate that the superiority of algorithms over humans in the less challenging conditions may be due to the algorithms' use of detailed, view-specific identity information. Humans may consider this information less important due to its limited potential for robust generalization in suboptimal viewing conditions.",
"title": ""
},
{
"docid": "7e68ac0eee3ab3610b7c68b69c27f3b6",
"text": "When digitizing a document into an image, it is common to include a surrounding border region to visually indicate that the entire document is present in the image. However, this border should be removed prior to automated processing. In this work, we present a deep learning system, PageNet, which identifies the main page region in an image in order to segment content from both textual and non-textual border noise. In PageNet, a Fully Convolutional Network obtains a pixel-wise segmentation which is post-processed into a quadrilateral region. We evaluate PageNet on 4 collections of historical handwritten documents and obtain over 94% mean intersection over union on all datasets and approach human performance on 2 collections. Additionally, we show that PageNet can segment documents that are overlayed on top of other documents.",
"title": ""
},
{
"docid": "72fa771855a178d8901d29c72acf5300",
"text": "Aspect extraction identifies relevant features of an entity from a textual description and is typically targeted to product reviews, and other types of short text, as an enabling task for, e.g., opinion mining and information retrieval. Current aspect extraction methods mostly focus on aspect terms, often neglecting associated modifiers or embedding them in the aspect terms without proper distinction. Moreover, flat syntactic structures are often assumed, resulting in inaccurate extractions of complex aspects. This paper studies the problem of structured aspect extraction, a variant of traditional aspect extraction aiming at a fine-grained extraction of complex (i.e., hierarchical) aspects. We propose an unsupervised and scalable method for structured aspect extraction consisting of statistical noun phrase clustering, cPMI-based noun phrase segmentation, and hierarchical pattern induction. Our evaluation shows a substantial improvement over existing methods in terms of both quality and computational efficiency.",
"title": ""
},
{
"docid": "b8f50ba62325ffddcefda7030515fd22",
"text": "The following statement is intended to provide an understanding of the governance and legal structure of the University of Sheffield. The University is an independent corporation whose legal status derives from a Royal Charter granted in 1905. It is an educational charity, with exempt status, regulated by the Office for Students in its capacity as Principal Regulator. The University has charitable purposes and applies them for the public benefit. It must comply with the general law of charity. The University’s objectives, powers and governance framework are set out in its Charter and supporting Statutes and Regulations.",
"title": ""
},
{
"docid": "3394eb51b71e5def4e4637963da347ab",
"text": "In this paper we present a model of e-learning suitable for teacher training sessions. The main purpose of our work is to define the components of the educational system which influences the successful adoption of e-learning in the field of education. We also present the factors of the readiness of e-learning mentioned in the literature available and classifies them into the 3 major categories that constitute the components of every organization and consequently that of education. Finally, we present an implementation model of e-learning through the use of virtual private networks, which lends an added value to the realization of e-learning.",
"title": ""
},
{
"docid": "2cd2a85598c0c10176a34c0bd768e533",
"text": "BACKGROUND\nApart from skills, and knowledge, self-efficacy is an important factor in the students' preparation for clinical work. The Physiotherapist Self-Efficacy (PSE) questionnaire was developed to measure physical therapy (TP) students' self-efficacy in the cardiorespiratory, musculoskeletal, and neurological clinical areas. The aim of this study was to establish the measurement properties of the Dutch PSE questionnaire, and to explore whether self-efficacy beliefs in students are clinical area specific.\n\n\nMETHODS\nMethodological quality of the PSE was studied using COSMIN guidelines. Item analysis, structural validity, and internal consistency of the PSE were determined in 207 students. Test-retest reliability was established in another sample of 60 students completing the PSE twice. Responsiveness of the scales was determined in 80 students completing the PSE at the start and the end of the second year. Hypothesis testing was used to determine construct validity of the PSE.\n\n\nRESULTS\nExploratory factor analysis resulted in three meaningful components explaining similar proportions of variance (25%, 21%, and 20%), reflecting the three clinical areas. Internal consistency of each of the three subscales was excellent (Cronbach's alpha > .90). Intra Class Correlation Coefficient was good (.80). Hypothesis testing confirmed construct validity of the PSE.\n\n\nCONCLUSION\nThe PSE shows excellent measurement properties. The component structure of the PSE suggests that self-efficacy about physiotherapy in PT students is not generic, but specific for a clinical area. As self-efficacy is considered a predictor of performance in clinical settings, enhancing self-efficacy is an explicit goal of educational interventions. Further research is needed to determine if the scale is specific enough to assess the effect of educational interventions on student self-efficacy.",
"title": ""
},
{
"docid": "45ea01d82897401058492bc2f88369b3",
"text": "Reduction in greenhouse gas emissions from transportation is essential in combating global warming and climate change. Eco-routing enables drivers to use the most eco-friendly routes and is effective in reducing vehicle emissions. The EcoTour system assigns eco-weights to a road network based on GPS and fuel consumption data collected from vehicles to enable ecorouting. Given an arbitrary source-destination pair in Denmark, EcoTour returns the shortest route, the fastest route, and the eco-route, along with statistics for the three routes. EcoTour also serves as a testbed for exploring advanced solutions to a range of challenges related to eco-routing.",
"title": ""
},
{
"docid": "41353a12a579f72816f1adf3cba154dd",
"text": "The crux of our initialization technique is n-gram selection, which assists neural networks to extract important n-gram features at the beginning of the training process. In the following tables, we illustrate those selected n-grams of different classes and datasets to understand our technique intuitively. Since all of MR, SST-1, SST-2, CR, and MPQA are sentiment classification datasets, we only report the selected n-grams of SST-1 (Table 1). N-grams selected by our method in SUBJ and TREC are shown in Table 2 and Table 3.",
"title": ""
},
{
"docid": "79a8281500227799d18d4f841af08795",
"text": "Fluctuating power is of serious concern in grid connected wind systems and energy storage systems are being developed to help alleviate this. This paper describes how additional energy storage can be provided within the existing wind turbine system by allowing the turbine speed to vary over a wider range. It also addresses the stability issue due to the modified control requirements. A control algorithm is proposed for a typical doubly fed induction generator (DFIG) arrangement and a simulation model is used to assess the ability of the method to smooth the output power. The disadvantage of the method is that there is a reduction in energy capture relative to a maximum power tracking algorithm. This aspect is evaluated using a typical turbine characteristic and wind profile and is shown to decrease by less than 1%. In contrast power fluctuations at intermediate frequency are reduced by typically 90%.",
"title": ""
},
{
"docid": "c0a1b48688cd0269b787a17fa5d15eda",
"text": "Animating human character has become an active research area in computer graphics. It is really important for development of virtual environment applications such as computer games and virtual reality. One of the popular methods to animate the character is by using motion graph. Since motion graph is the main focus of this research, we investigate the preliminary work of motion graph and discuss about the main components of motion graph like distance metrics and motion transition. These two components will be taken into consideration during the process of development of motion graph. In this paper, we will also present a general framework and future plan of this study.",
"title": ""
},
{
"docid": "8418c151e724d5e23662a9d70c050df1",
"text": "The issuing of pseudonyms is an established approach for protecting the privacy of users while limiting access and preventing sybil attacks. To prevent pseudonym deanonymization through continuous observation and correlation, frequent and unlinkable pseudonym changes must be enabled. Existing approaches for realizing sybil-resistant pseudonymization and pseudonym change (PPC) are either inherently dependent on trusted third parties (TTPs) or involve significant computation overhead at end-user devices. In this paper, we investigate a novel, TTP-independent approach towards sybil-resistant PPC. Our proposal is based on the use of cryptocurrency block chains as general-purpose, append-only bulletin boards. We present a general approach as well as BitNym, a specific design based on the unmodified Bitcoin network. We discuss and propose TTP-independent mechanisms for realizing sybil-free initial access control, pseudonym validation and pseudonym mixing. Evaluation results demonstrate the practical feasibility of our approach and show that anonymity sets encompassing nearly the complete user population are easily achievable.",
"title": ""
},
{
"docid": "4d5e8e1c8942256088f1c5ef0e122c9f",
"text": "Cybercrime and cybercriminal activities continue to impact communities as the steady growth of electronic information systems enables more online business. The collective views of sixty-six computer users and organizations, that have an exposure to cybercrime, were analyzed using concept analysis and mapping techniques in order to identify the major issues and areas of concern, and provide useful advice. The findings of the study show that a range of computing stakeholders have genuine concerns about the frequency of information security breaches and malware incursions (including the emergence of dangerous security and detection avoiding malware), the need for e-security awareness and education, the roles played by law and law enforcement, and the installation of current security software and systems. While not necessarily criminal in nature, some stakeholders also expressed deep concerns over the use of computers for cyberbullying, particularly where younger and school aged users are involved. The government’s future directions and recommendations for the technical and administrative management of cybercriminal activity were generally observed to be consistent with stakeholder concerns, with some users also taking practical steps to reduce cybercrime risks. a 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a8aa8c24c794bc6187257d264e2586a0",
"text": "Bayesian optimization is a powerful framework for minimizing expensive objective functions while using very few function evaluations. It has been successfully applied to a variety of problems, including hyperparameter tuning and experimental design. However, this framework has not been extended to the inequality-constrained optimization setting, particularly the setting in which evaluating feasibility is just as expensive as evaluating the objective. Here we present constrained Bayesian optimization, which places a prior distribution on both the objective and the constraint functions. We evaluate our method on simulated and real data, demonstrating that constrained Bayesian optimization can quickly find optimal and feasible points, even when small feasible regions cause standard methods to fail.",
"title": ""
},
{
"docid": "b84fc12cfc3de65109f789d2a871a38a",
"text": "OBJECTIVE\nTo describe studies evaluating 3 generations of three-dimensional (3D) displays over the course of 20 years.\n\n\nSUMMARY BACKGROUND DATA\nMost previous studies have analyzed performance differences during 3D and two-dimensional (2D) laparoscopy without using appropriate controls that equated conditions in all respects except for 3D or 2D viewing.\n\n\nMETHODS\nDatabases search consisted of MEDLINE and PubMed. The reference lists for all relevant articles were also reviewed for additional articles. The search strategy employed the use of keywords \"3D,\" \"Laparoscopic,\" \"Laparoscopy,\" \"Performance,\" \"Education,\" \"Learning,\" and \"Surgery\" in appropriate combinations.\n\n\nRESULTS\nOur current understanding of the performance metrics between 3D and 2D laparoscopy is mostly from the research with flawed study designs. This review has been written in a qualitative style to explain in detail how prior research has underestimated the potential benefit of 3D displays and the improvements that must be made in future experiments comparing 3D and 2D displays to better determine any advantage of using one display or the other.\n\n\nCONCLUSIONS\nIndividual laparoscopic performance in 3D may be affected by a multitude of factors. It is crucial for studies to measure participant stereoscopic ability, control for system crosstalk, and use validated measures of performance.",
"title": ""
},
{
"docid": "fad4ff82e9b11f28a70749d04dfbf8ca",
"text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org. Enterprise architecture (EA) is the definition and representation of a high-level view of an enterprise's business processes and IT systems, their interrelationships, and the extent to which these processes and systems are shared by different parts of the enterprise. EA aims to define a suitable operating platform to support an organisation's future goals and the roadmap for moving towards this vision. Despite significant practitioner interest in the domain, understanding the value of EA remains a challenge. Although many studies make EA benefit claims, the explanations of why and how EA leads to these benefits are fragmented, incomplete, and not grounded in theory. This article aims to address this knowledge gap by focusing on the question: How does EA lead to organisational benefits? Through a careful review of EA literature, the paper consolidates the fragmented knowledge on EA benefits and presents the EA Benefits Model (EABM). The EABM proposes that EA leads to organisational benefits through its impact on four benefit enablers: Organisational Alignment, Information Availability, Resource Portfolio Optimisation, and Resource Complementarity. The article concludes with a discussion of a number of potential avenues for future research, which could build on the findings of this study.",
"title": ""
},
{
"docid": "e7eb15df383c92fcd5a4edc7e27b5265",
"text": "This article presents a new model for word sense disambiguation formulated in terms of evolutionary game theory, where each word to be disambiguated is represented as a node on a graph whose edges represent word relations and senses are represented as classes. The words simultaneously update their class membership preferences according to the senses that neighboring words are likely to choose. We use distributional information to weigh the influence that each word has on the decisions of the others and semantic similarity information to measure the strength of compatibility among the choices. With this information we can formulate the word sense disambiguation problem as a constraint satisfaction problem and solve it using tools derived from game theory, maintaining the textual coherence. The model is based on two ideas: Similar words should be assigned to similar classes and the meaning of a word does not depend on all the words in a text but just on some of them. The article provides an in-depth motivation of the idea of modeling the word sense disambiguation problem in terms of game theory, which is illustrated by an example. The conclusion presents an extensive analysis on the combination of similarity measures to use in the framework and a comparison with state-of-the-art systems. The results show that our model outperforms state-of-the-art algorithms and can be applied to different tasks and in different scenarios.",
"title": ""
}
] |
scidocsrr
|
c4fecb931da091a5614c02f88718a6a7
|
Major Traits / Qualities of Leadership
|
[
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "ecbdb56c52a59f26cf8e33fc533d608f",
"text": "The ethical nature of transformational leadership has been hotly debated. This debate is demonstrated in the range of descriptors that have been used to label transformational leaders including narcissistic, manipulative, and self-centred, but also ethical, just and effective. Therefore, the purpose of the present research was to address this issue directly by assessing the statistical relationship between perceived leader integrity and transformational leadership using the Perceived Leader Integrity Scale (PLIS) and the Multi-Factor Leadership Questionnaire (MLQ). In a national sample of 1354 managers a moderate to strong positive relationship was found between perceived integrity and the demonstration of transformational leadership behaviours. A similar relationship was found between perceived integrity and developmental exchange leadership. A systematic leniency bias was identified when respondents rated subordinates vis-à-vis peer ratings. In support of previous findings, perceived integrity was also found to correlate positively with leader and organisational effectiveness measures.",
"title": ""
}
] |
[
{
"docid": "5fe1fa98c953d778ee27a104802e5f2b",
"text": "We describe two general approaches to creating document-level maps of science. To create a local map one defines and directly maps a sample of data, such as all literature published in a set of information science journals. To create a global map of a research field one maps ‘all of science’ and then locates a literature sample within that full context. We provide a deductive argument that global mapping should create more accurate partitions of a research field than local mapping, followed by practical reasons why this may not be so. The field of information science is then mapped at the document level using both local and global methods to provide a case illustration of the differences between the methods. Textual coherence is used to assess the accuracies of both maps. We find that document clusters in the global map have significantly higher coherence than those in the local map, and that the global map provides unique insights into the field of information science that cannot be discerned from the local map. Specifically, we show that information science and computer science have a large interface and that computer science is the more progressive discipline at that interface. We also show that research communities in temporally linked threads have a much higher coherence than isolated communities, and that this feature can be used to predict which threads will persist into a subsequent year. Methods that could increase the accuracy of both local and global maps in the future are also discussed.",
"title": ""
},
{
"docid": "b252aea38a537a22ab34fdf44e9443d2",
"text": "The objective of this study is to describe the case of a patient presenting advanced epidermoid carcinoma of the penis associated to myiasis. A 41-year-old patient presenting with a necrotic lesion of the distal third of the penis infested with myiasis was attended in the emergency room of our hospital and was submitted to an urgent penectomy. This is the first case of penile cancer associated to myiasis described in the literature. This case reinforces the need for educative campaigns to reduce the incidence of this disease in developing countries.",
"title": ""
},
{
"docid": "e6db8cbbb3f7bac211f672ffdef44fb6",
"text": "This paper aims to develop a benchmarking framework that evaluates the cold chain performance of a company, reveals its strengths and weaknesses and finally identifies and prioritizes potential alternatives for continuous improvement. A Delphi-AHP-TOPSIS based methodology has divided the whole benchmarking into three stages. The first stage is Delphi method, where identification, synthesis and prioritization of key performance factors and sub-factors are done and a novel consistent measurement scale is developed. The second stage is Analytic Hierarchy Process (AHP) based cold chain performance evaluation of a selected company against its competitors, so as to observe cold chain performance of individual factors and sub-factors, as well as overall performance index. And, the third stage is Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) based assessment of possible alternatives for the continuous improvement of the company’s cold chain performance. Finally a demonstration of proposed methodology in a retail industry is presented for better understanding. The proposed framework can assist managers to comprehend the present strengths and weaknesses of their cold. They can identify good practices from the market leader and can benchmark them for improving weaknesses keeping in view the current operational conditions and strategies of the company. This framework also facilitates the decision makers to better understand the complex relationships of the relevant cold chain performance factors in decision-making. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "72420289372499b50e658ef0957a3ad9",
"text": "A ripple current cancellation technique injects AC current into the output voltage bus of a converter that is equal and opposite to the normal converter ripple current. The output current ripple is ideally zero, leading to ultra-low noise converter output voltages. The circuit requires few additional components, no active circuits are required. Only an additional filter inductor winding, an auxiliary inductor, and small capacitor are required. The circuit utilizes leakage inductance of the modified filter inductor as all or part of the required auxiliary inductance. Ripple cancellation is independent of switching frequency, duty cycle, and other converter parameters. The circuit eliminates ripple current in both continuous conduction mode and discontinuous conduction mode. Experimental results provide better than an 80/spl times/ ripple current reduction.",
"title": ""
},
{
"docid": "19f1a6c9c5faf73b8868164e8bb310c6",
"text": "Holoprosencephaly refers to a spectrum of craniofacial malformations including cyclopia, ethmocephaly, cebocephaly, and premaxillary agenesis. Etiologic heterogeneity is well documented. Chromosomal, genetic, and teratogenic factors have been implicated. Recognition of holoprosencephaly as a developmental field defect stresses the importance of close scrutiny of relatives for mild forms such as single median incisor, hypotelorism, bifid uvula, or pituitary deficiency.",
"title": ""
},
{
"docid": "c0b40058d003cdaa80d54aa190e48bc2",
"text": "Visual tracking plays an important role in many computer vision tasks. A common assumption in previous methods is that the video frames are blur free. In reality, motion blurs are pervasive in the real videos. In this paper we present a novel BLUr-driven Tracker (BLUT) framework for tracking motion-blurred targets. BLUT actively uses the information from blurs without performing debluring. Specifically, we integrate the tracking problem with the motion-from-blur problem under a unified sparse approximation framework. We further use the motion information inferred by blurs to guide the sampling process in the particle filter based tracking. To evaluate our method, we have collected a large number of video sequences with significatcant motion blurs and compared BLUT with state-of-the-art trackers. Experimental results show that, while many previous methods are sensitive to motion blurs, BLUT can robustly and reliably track severely blurred targets.",
"title": ""
},
{
"docid": "ea42c551841cc53c84c63f72ee9be0ae",
"text": "Phishing is a prevalent issue of today’s Internet. Previous approaches to counter phishing do not draw on a crucial factor to combat the threat the users themselves. We believe user education about the dangers of the Internet is a further key strategy to combat phishing. For this reason, we developed an Android app, a game called –NoPhish–, which educates the user in the detection of phishing URLs. It is crucial to evaluate NoPhish with respect to its effectiveness and the users’ knowledge retention. Therefore, we conducted a lab study as well as a retention study (five months later). The outcomes of the studies show that NoPhish helps users make better decisions with regard to the legitimacy of URLs immediately after playing NoPhish as well as after some time has passed. The focus of this paper is on the description and the evaluation of both studies. This includes findings regarding those types of URLs that are most difficult to decide on as well as ideas to further improve NoPhish.",
"title": ""
},
{
"docid": "b468726c2901146f1ca02df13936e968",
"text": "Chinchillas have been successfully maintained in captivity for almost a century. They have only recently been recognized as excellent, long-lived, and robust pets. Most of the literature on diseases of chinchillas comes from farmed chinchillas, whereas reports of pet chinchilla diseases continue to be sparse. This review aims to provide information on current, poorly reported disorders of pet chinchillas, such as penile problems, urolithiasis, periodontal disease, otitis media, cardiac disease, pseudomonadal infections, and giardiasis. This review is intended to serve as a complement to current veterinary literature while providing valuable and clinically relevant information for veterinarians treating chinchillas.",
"title": ""
},
{
"docid": "872370f375d779435eb098571f3ab763",
"text": "The aim of this study was to explore the potential of fused-deposition 3-dimensional printing (FDM 3DP) to produce modified-release drug loaded tablets. Two aminosalicylate isomers used in the treatment of inflammatory bowel disease (IBD), 5-aminosalicylic acid (5-ASA, mesalazine) and 4-aminosalicylic acid (4-ASA), were selected as model drugs. Commercially produced polyvinyl alcohol (PVA) filaments were loaded with the drugs in an ethanolic drug solution. A final drug-loading of 0.06% w/w and 0.25% w/w was achieved for the 5-ASA and 4-ASA strands, respectively. 10.5mm diameter tablets of both PVA/4-ASA and PVA/5-ASA were subsequently printed using an FDM 3D printer, and varying the weight and densities of the printed tablets was achieved by selecting the infill percentage in the printer software. The tablets were mechanically strong, and the FDM 3D printing was shown to be an effective process for the manufacture of the drug, 5-ASA. Significant thermal degradation of the active 4-ASA (50%) occurred during printing, however, indicating that the method may not be appropriate for drugs when printing at high temperatures exceeding those of the degradation point. Differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) of the formulated blends confirmed these findings while highlighting the potential of thermal analytical techniques to anticipate drug degradation issues in the 3D printing process. The results of the dissolution tests conducted in modified Hank's bicarbonate buffer showed that release profiles for both drugs were dependent on both the drug itself and on the infill percentage of the tablet. Our work here demonstrates the potential role of FDM 3DP as an efficient and low-cost alternative method of manufacturing individually tailored oral drug dosage, and also for production of modified-release formulations.",
"title": ""
},
{
"docid": "1b30c14536db1161b77258b1ce213fbb",
"text": "Click-through rate (CTR) prediction and relevance ranking are two fundamental problems in web advertising. In this study, we address the problem of modeling the relationship between CTR and relevance for sponsored search. We used normalized relevance scores comparable across all queries to represent relevance when modeling with CTR, instead of directly using human judgment labels or relevance scores valid only within same query. We classified clicks by identifying their relevance quality using dwell time and session information, and compared all clicks versus selective clicks effects when modeling relevance.\n Our results showed that the cleaned click signal outperforms raw click signal and others we explored, in terms of relevance score fitting. The cleaned clicks include clicks with dwell time greater than 5 seconds and last clicks in session. Besides traditional thoughts that there is no linear relation between click and relevance, we showed that the cleaned click based CTR can be fitted well with the normalized relevance scores using a quadratic regression model. This relevance-click model could help to train ranking models using processed click feedback to complement expensive human editorial relevance labels, or better leverage relevance signals in CTR prediction.",
"title": ""
},
{
"docid": "ae800ced5663d320fcaca2df6f6bf793",
"text": "Stowage planning for container vessels concerns the core competence of the shipping lines. As such, automated stowage planning has attracted much research in the past two decades, but with few documented successes. In an ongoing project, we are developing a prototype stowage planning system aiming for large containerships. The system consists of three modules: the stowage plan generator, the stability adjustment module, and the optimization engine. This paper mainly focuses on the stability adjustment module. The objective of the stability adjustment module is to check the global ship stability of the stowage plan produced by the stowage plan generator and resolve the stability issues by applying a heuristic algorithm to search for alternative feasible locations for containers that violate some of the stability criteria. We demonstrate that the procedure proposed is capable of solving the stability problems for a large containership with more than 5000 TEUs. Keywords— Automation, Stowage Planning, Local Search, Heuristic algorithm, Stability Optimization",
"title": ""
},
{
"docid": "f289b58d16bf0b3a017a9b1c173cbeb6",
"text": "All hospitalisations for pulmonary arterial hypertension (PAH) in the Scottish population were examined to determine the epidemiological features of PAH. These data were compared with expert data from the Scottish Pulmonary Vascular Unit (SPVU). Using the linked Scottish Morbidity Record scheme, data from all adults aged 16-65 yrs admitted with PAH (idiopathic PAH, pulmonary hypertension associated with congenital heart abnormalities and pulmonary hypertension associated with connective tissue disorders) during the period 1986-2001 were identified. These data were compared with the most recent data in the SPVU database (2005). Overall, 374 Scottish males and females aged 16-65 yrs were hospitalised with incident PAH during 1986-2001. The annual incidence of PAH was 7.1 cases per million population. On December 31, 2002, there were 165 surviving cases, giving a prevalence of PAH of 52 cases per million population. Data from the SPVU were available for 1997-2006. In 2005, the last year with a complete data set, the incidence of PAH was 7.6 cases per million population and the corresponding prevalence was 26 cases per million population. Hospitalisation data from the Scottish Morbidity Record scheme gave higher prevalences of pulmonary arterial hypertension than data from the expert centres (Scotland and France). The hospitalisation data may overestimate the true frequency of pulmonary arterial hypertension in the population, but it is also possible that the expert centres underestimate the true frequency.",
"title": ""
},
{
"docid": "99dcde334931eeb8e20ce7aa3c7982d5",
"text": "We describe a framework for multiscale image analysis in which line segments play a role analogous to the role played by points in wavelet analysis. The framework has five key components. The beamlet dictionary is a dyadicallyorganized collection of line segments, occupying a range of dyadic locations and scales, and occurring at a range of orientations. The beamlet transform of an image f(x, y) is the collection of integrals of f over each segment in the beamlet dictionary; the resulting information is stored in a beamlet pyramid. The beamlet graph is the graph structure with pixel corners as vertices and beamlets as edges; a path through this graph corresponds to a polygon in the original image. By exploiting the first four components of the beamlet framework, we can formulate beamlet-based algorithms which are able to identify and extract beamlets and chains of beamlets with special properties. In this paper we describe a four-level hierarchy of beamlet algorithms. The first level consists of simple procedures which ignore the structure of the beamlet pyramid and beamlet graph; the second level exploits only the parent-child dependence between scales; the third level incorporates collinearity and co-curvity relationships; and the fourth level allows global optimization over the full space of polygons in an image. These algorithms can be shown in practice to have suprisingly powerful and apparently unprecedented capabilities, for example in detection of very faint curves in very noisy data. We compare this framework with important antecedents in image processing (Brandt and Dym; Horn and collaborators; Götze and Druckenmiller) and in geometric measure theory (Jones; David and Semmes; and Lerman).",
"title": ""
},
{
"docid": "faa1a49f949d5ba997f4285ef2e708b2",
"text": "Appendiceal mucinous neoplasms sometimes present with peritoneal dissemination, which was previously a lethal condition with a median survival of about 3 years. Traditionally, surgical treatment consisted of debulking that was repeated until no further benefit could be achieved; systemic chemotherapy was sometimes used as a palliative option. Now, visible disease tends to be removed through visceral resections and peritonectomy. To avoid entrapment of tumour cells at operative sites and to destroy small residual mucinous tumour nodules, cytoreductive surgery is combined with intraperitoneal chemotherapy with mitomycin at 42 degrees C. Fluorouracil is then given postoperatively for 5 days. If the mucinous neoplasm is minimally invasive and cytoreduction complete, these treatments result in a 20-year survival of 70%. In the absence of a phase III study, this new combined treatment should be regarded as the standard of care for epithelial appendiceal neoplasms and pseudomyxoma peritonei syndrome.",
"title": ""
},
{
"docid": "981e88bd1f4187972f8a3d04960dd2dd",
"text": "The purpose of this study is to examine the appropriateness and effectiveness of the assistive use of robot projector based augmented reality (AR) to children’s dramatic activity. A system that employ a mobile robot mounted with a projector-camera is used to help manage children’s dramatic activity by projecting backdrops and creating a synthetic video imagery, where e.g. children’s faces is replaced with graphic characters. In this Delphi based study, a panel consist of 33 professionals include 11children education experts (college professors majoring in early childhood education), children field educators (kindergarten teachers and principals), and 11 AR and robot technology experts. The experts view the excerpts from the video taken from the actual usage situation. In the first stage of survey, we collect the panel's perspectives on applying the latest new technologies for instructing dramatic activity to children using an open ended questionnaire. Based on the results of the preliminary survey, the subsequent questionnaires (with 5 point Likert scales) are developed for the second and third in-depth surveys. In the second survey, 36 questions is categorized into 5 areas: (1) developmental and educational values, (2) impact on the teacher's role, (3) applicability and special considerations in the kindergarten, (4) external environment and required support, and (5) criteria for the selection of the story in the drama activity. The third survey mainly investigate how AR or robots can be of use in children’s dramatic activity in other ways (than as originally given) and to other educational domains. The surveys show that experts most appreciated the use of AR and robot for positive educational and developmental effects due to the children’s keen interests and in turn enhanced immersion into the dramatic activity. Consequently, the experts recommended that proper stories, scenes and technological realizations need to be selected carefully, in the light of children’s development, while lever aging on strengths of the technologies used.",
"title": ""
},
{
"docid": "26dc59c30371f1d0b2ff2e62a96f9b0f",
"text": "Hindi is very complex language with large number of phonemes and being used with various ascents in different regions in India. In this manuscript, speaker dependent and independent isolated Hindi word recognizers using the Hidden Markov Model (HMM) is implemented, under noisy environment. For this study, a set of 10 Hindi names has been chosen as a test set for which the training and testing is performed. The scheme instigated here implements the Mel Frequency Cepstral Coefficients (MFCC) in order to compute the acoustic features of the speech signal. Then, K-means algorithm is used for the codebook generation by performing clustering over the obtained feature space. Baum Welch algorithm is used for re-estimating the parameters, and finally for deciding the recognized Hindi word whose model likelihood is highest, Viterbi algorithm has been implemented; for the given HMM. This work resulted in successful recognition with 98. 6% recognition rate for speaker dependent recognition, for total of 10 speakers (6 male, 4 female) and 97. 5% for speaker independent isolated word recognizer for 10 speakers (male).",
"title": ""
},
{
"docid": "58702f835df43337692f855f35a9f903",
"text": "A dual-mode wide-band transformer based VCO is proposed. The two port impedance of the transformer based resonator is analyzed to derive the optimum primary to secondary capacitor load ratio, for robust mode selectivity and minimum power consumption. Fabricated in a 16nm FinFET technology, the design achieves 2.6× continuous tuning range spanning 7-to-18.3 GHz using a coil area of 120×150 μm2. The absence of lossy switches helps in maintaining phase noise of -112 to -100 dBc/Hz at 1 MHz offset, across the entire tuning range. The VCO consumes 3-4.4 mW and realizes power frequency tuning normalized figure of merit of 12.8 and 2.4 dB at 7 and 18.3 GHz respectively.",
"title": ""
},
{
"docid": "4d8c869c9d6e1d7ba38f56a124b84412",
"text": "We propose a novel reversible jump Markov chain Monte Carlo (MCMC) simulated an nealing algorithm to optimize radial basis function (RBF) networks. This algorithm enables us to maximize the joint posterior distribution of the network parameters and the number of basis functions. It performs a global search in the joint space of the pa rameters and number of parameters, thereby surmounting the problem of local minima. We also show that by calibrating a Bayesian model, we can obtain the classical AIC, BIC and MDL model selection criteria within a penalized likelihood framework. Finally, we show theoretically and empirically that the algorithm converges to the modes of the full posterior distribution in an efficient way.",
"title": ""
},
{
"docid": "ceb59133deb7828edaf602308cb3450a",
"text": "Abstract While there has been a great deal of interest in the modelling of non-linearities and regime shifts in economic time series, there is no clear consensus regarding the forecasting abilities of these models. In this paper we develop a general approach to predict multiple time series subject to Markovian shifts in the regime. The feasibility of the proposed forecasting techniques in empirical research is demonstrated and their forecast accuracy is evaluated.",
"title": ""
},
{
"docid": "55ffe87f74194ab3de60fea9d888d9ad",
"text": "A new priority queue implementation for the future event set problem is described in this article. The new implementation is shown experimentally to be O(1) in queue size for the priority increment distributions recently considered by Jones in his review article. It displays hold times three times shorter than splay trees for a queue size of 10,000 events. The new implementation, called a calendar queue, is a very simple structure of the multiple list variety using a novel solution to the overflow problem.",
"title": ""
}
] |
scidocsrr
|
833d74bce7f77e03444634e6d7bb835d
|
Explore Multi-Step Reasoning in Video Question Answering
|
[
{
"docid": "20b203f50a14a150703bfa8279d2ed54",
"text": "We introduce the MovieQA dataset which aims to evaluate automatic story comprehension from both video and text. The dataset consists of 14,944 questions about 408 movies with high semantic diversity. The questions range from simpler \"Who\" did \"What\" to \"Whom\", to \"Why\" and \"How\" certain events occurred. Each question comes with a set of five possible answers, a correct one and four deceiving answers provided by human annotators. Our dataset is unique in that it contains multiple sources of information - video clips, plots, subtitles, scripts, and DVS. We analyze our data through various statistics and methods. We further extend existing QA techniques to show that question-answering with such open-ended semantics is hard. We make this data set public along with an evaluation benchmark to encourage inspiring work in this challenging domain.",
"title": ""
},
{
"docid": "352c61af854ffc6dab438e7a1be56fcb",
"text": "Question-answering (QA) on video contents is a significant challenge for achieving human-level intelligence as it involves both vision and language in real-world settings. Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large amount of cartoon videos. We develop a video-story learning model, i.e. Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of children’s cartoon video series, Pororo. The dataset contains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models. This is mainly due to 1) the reconstruction of video stories in a scene-dialogue combined form that utilize the latent embedding and 2) attention. DEMN also achieved state-of-the-art results on the MovieQA benchmark.",
"title": ""
},
{
"docid": "4fbc692a4291a92c6fa77dc78913e587",
"text": "Achieving artificial visual reasoning — the ability to answer image-related questions which require a multi-step, high-level process — is an important step towards artificial general intelligence. This multi-modal task requires learning a questiondependent, structured reasoning process over images from language. Standard deep learning approaches tend to exploit biases in the data rather than learn this underlying structure, while leading methods learn to visually reason successfully but are hand-crafted for reasoning. We show that a general-purpose, Conditional Batch Normalization approach achieves state-ofthe-art results on the CLEVR Visual Reasoning benchmark with a 2.4% error rate. We outperform the next best end-to-end method (4.5%) and even methods that use extra supervision (3.1%). We probe our model to shed light on how it reasons, showing it has learned a question-dependent, multi-step process. Previous work has operated under the assumption that visual reasoning calls for a specialized architecture, but we show that a general architecture with proper conditioning can learn to visually reason effectively.",
"title": ""
}
] |
[
{
"docid": "9d775637b3ed678a6de2e41a53a0a19a",
"text": "Research Article Kevin K.Y. Kuan The University of Sydney kevin.kuan@sydney.edu.au Kai-Lung Hui Hong Kong University of Science and Technology klhui@ust.hk Many online review systems adopt a voluntary voting mechanism to identify helpful reviews to support consumer purchase decisions. While several studies have looked at what makes an online review helpful (review helpfulness), little is known on what makes an online review receive votes (review voting). Drawing on information processing theories and the related literature, we investigated the effects of a select set of review characteristics, including review length and readability, review valence, review extremity, and reviewer credibility on two outcomes—review voting and review helpfulness. We examined and analyzed a large set of review data from Amazon with the sample selection model. Our results indicate that there are systematic differences between voted and non-voted reviews, suggesting that helpful reviews with certain characteristics are more likely to be observed and identified in an online review system than reviews without the characteristics. Furthermore, when review characteristics had opposite effects on the two outcomes (i.e. review voting and review helpfulness), ignoring the selection effects due to review voting would result in the effects on review helpfulness being over-estimated, which increases the risk of committing a type I error. Even when the effects on the two outcomes are in the same direction, ignoring the selection effects due to review voting would increase the risk of committing type II error that cannot be mitigated with a larger sample. We discuss the implications of the findings on research and practice.",
"title": ""
},
{
"docid": "2de75d4b75d2215a55538d71cc618dde",
"text": "Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the amount of their interactions, are increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve the classification performance, it is also worthwhile to design an ensemble learning framework to enhance the performance for drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as an evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction.",
"title": ""
},
{
"docid": "d13ecf582ac820cdb8ea6353c44c535f",
"text": "We have previously shown that, while the intrinsic quality of the oocyte is the main factor affecting blastocyst yield during bovine embryo development in vitro, the main factor affecting the quality of the blastocyst is the postfertilization culture conditions. Therefore, any improvement in the quality of blastocysts produced in vitro is likely to derive from the modification of the postfertilization culture conditions. The objective of this study was to examine the effect of the presence or absence of serum and the concentration of BSA during the period of embryo culture in vitro on 1) cleavage rate, 2) the kinetics of embryo development, 3) blastocyst yield, and 4) blastocyst quality, as assessed by cryotolerance and gene expression patterns. The quantification of all gene transcripts was carried out by real-time quantitative reverse transcription-polymerase chain reaction. Bovine blastocysts from four sources were used: 1) in vitro culture in synthetic oviduct fluid (SOF) supplemented with 3 mg/ml BSA and 10% fetal calf serum (FCS), 2) in vitro culture in SOF + 3 mg/ml BSA in the absence of serum, 3) in vitro culture in SOF + 16 mg/ml BSA in the absence of serum, and 4) in vivo blastocysts. There was no difference in overall blastocyst yield at Day 9 between the groups. However, significantly more blastocysts were present by Day 6 in the presence of 10% serum (20.0%) compared with 3 mg/ml BSA (4.6%, P < 0.001) or 16 mg/ml BSA (11.6%, P < 0.01). By Day 7, however, this difference had disappeared. Following vitrification, there was no difference in survival between blastocysts produced in the presence of 16 mg/ml BSA or those produced in the presence of 10% FCS; the survival of both groups was significantly lower than the in vivo controls at all time points and in terms of hatching rate. In contrast, survival of blastocysts produced in SOF + 3 mg/ml BSA in the absence of serum was intermediate, with no difference remaining at 72 h when compared with in vivo embryos. Differences in relative mRNA abundance among the two groups of blastocysts analyzed were found for genes related to apoptosis (Bax), oxidative stress (MnSOD, CuZnSOD, and SOX), communication through gap junctions (Cx31 and Cx43), maternal recognition of pregnancy (IFN-tau), and differentiation and implantation (LIF and LR-beta). The presence of serum during the culture period resulted in a significant increase in the level of expression of MnSOD, SOX, Bax, LIF, and LR-beta. The level of expression of Cx31 and Cu/ZnSOD also tended to be increased, although the difference was not significant. In contrast, the level of expression of Cx43 and IFN-tau was decreased in the presence of serum. In conclusion, using a combination of measures of developmental competence (cleavage and blastocyst rates) and qualitative measures such as cryotolerance and relative mRNA abundance to give a more complete picture of the consequences of modifying medium composition on the embryo, we have shown that conditions of postfertilization culture, in particular, the presence of serum in the medium, can affect the speed of embryo development and the quality of the resulting blastocysts. The reduced cryotolerance of blastocysts generated in the presence of serum is accompanied by deviations in the relative abundance of developmentally important gene transcripts. 
Omission of serum during the postfertilization culture period can significantly improve the cryotolerance of the blastocysts to a level intermediate between serum-generated blastocysts and those derived in vivo. The challenge now is to try and bridge this gap.",
"title": ""
},
{
"docid": "e85cf5b993cc4d82a1dea47f9ce5d18b",
"text": "We recently proposed an approach inspired by Sparse Component Analysis for real-time localisation of multiple sound sources using a circular microphone array. The method was based on identifying time-frequency zones where only one source is active, reducing the problem to single-source localisation in these zones. A histogram of estimated Directions of Arrival (DOAs) was formed and then processed to obtain improved DOA estimates, assuming that the number of sources was known. In this paper, we extend our previous work by proposing a new method for the final DOA estimations, that outperforms our previous method at lower SNRs and in the case of six simultaneous speakers. In keeping with the spirit of our previous work, the new method is very computationally efficient, facilitating its use in real-time systems.",
"title": ""
},
{
"docid": "b0bfa683c37ad25600c414c7c082962b",
"text": "Complexity in modern vehicles has increased dramatically during the last years due to new features and applications. Modern vehicles are connected to the Internet as well as to other vehicles in close proximity and to the environment for different novel comfort services and safety-related applications. Enabling such services and applications requires wireless interfaces to the vehicle and therefore leads to open interfaces to the outside world. Attackers can use those interfaces to impair the privacy of the vehicle owner or to take control (of parts of) the vehicle, which strongly endangers the safety of the passengers as well as other road users. To avoid such attacks and to ensure the safety of modern vehicles, sophisticated structured processes and methods are needed. In this paper we propose a security metric to analyse cyberphysical systems (CPS) in a structured way. Its application leads to a secure system configuration with comparable as well as reusable results. Additionally, the security metric can be used to support the conceptual phase for the development of CPS specified in the new SAE security standard SAE J3061. A case study has been carried out to illustrate the application of the security metric.",
"title": ""
},
{
"docid": "1075d158d34ebbe2999fb2436a908fc8",
"text": "Magnetic Resonance Imaging (MRI) is a powerful visualization tool that permits to acquire images of internal anatomy of human body in a secure and non-invasive manner. The important task in the diagnosis of brain tumor is to determine the exact location, orientation and area of the abnormal tissues. This paper presents a performance analysis of image segmentation techniques, viz., Genetic algorithm, K-Means Clustering and Fuzzy C-Means clustering for detection of brain tumor from brain MRI images. The performance evaluation of these techniques is carried out on the real time database on the basis of error percentage compared to ground truth.",
"title": ""
},
{
"docid": "8e3cc3937f91c12bb5d515f781928f8b",
"text": "As the size of data set in cloud increases rapidly, how to process large amount of data efficiently has become a critical issue. MapReduce provides a framework for large data processing and is shown to be scalable and fault-tolerant on commondity machines. However, it has higher learning curve than SQL-like language and the codes are hard to maintain and reuse. On the other hand, traditional SQL-based data processing is familiar to user but is limited in scalability. In this paper, we propose a hybrid approach to fill the gap between SQL-based and MapReduce data processing. We develop a data management system for cloud, named SQLMR. SQLMR complies SQL-like queries to a sequence of MapReduce jobs. Existing SQL-based applications are compatible seamlessly with SQLMR and users can manage Tera to PataByte scale of data with SQL-like queries instead of writing MapReduce codes. We also devise a number of optimization techniques to improve the performance of SQLMR. The experiment results demonstrate both performance and scalability advantage of SQLMR compared to MySQL and two NoSQL data processing systems, Hive and HadoopDB.",
"title": ""
},
{
"docid": "6841f2fb1dbe8246f184affed49fe6c3",
"text": "Instructional designers and educators recognize the potential of mobile technologies as a learning tool for students and have incorporated them into the distance learning environment. However, little research has been done to categorize the numerous examples of mobile learning in the context of distance education, and few instructional design guidelines based on a solid theoretical framework for mobile learning exist. In this paper I compare mobile learning (m-learning) with electronic learning (e-learning) and ubiquitous learning (u-learning) and describe the technological attributes and pedagogical affordances of mobile learning presented in previous studies. I modify transactional distance (TD) theory and adopt it as a relevant theoretical framework for mobile learning in distance education. Furthermore, I attempt to position previous studies into four types of mobile learning: 1) high transactional distance socialized m-learning, 2) high transactional distance individualized m-learning, 3) low transactional distance socialized mlearning and 4) low transactional distance individualized m-learning. As a result, this paper can be used by instructional designers of open and distance learning to learn about the concepts of mobile learning and how mobile technologies can be incorporated into their teaching and learning more effectively.",
"title": ""
},
{
"docid": "319f681b2956c058bd7777f0372c7e2c",
"text": "We present the data model, architecture, and evaluation of LightDB, a database management system designed to efficiently manage virtual, augmented, and mixed reality (VAMR) video content. VAMR video differs from its two-dimensional counterpart in that it is spherical with periodic angular dimensions, is nonuniformly and continuously sampled, and applications that consume such videos often have demanding latency and throughput requirements. To address these challenges, LightDB treats VAMR video data as a logically-continuous six-dimensional light field. Furthermore, LightDB supports a rich set of operations over light fields, and automatically transforms declarative queries into executable physical plans. We have implemented a prototype of LightDB and, through experiments with VAMR applications in the literature, we find that LightDB offers up to 4× throughput improvements compared with prior work. PVLDB Reference Format: Brandon Haynes, Amrita Mazumdar, Armin Alaghi, Magdalena Balazinska, Luis Ceze, Alvin Cheung. LightDB: A DBMS for Virtual Reality Video. PVLDB, 11 (10): 1192-1205, 2018. DOI: https://doi.org/10.14778/3231751.3231768",
"title": ""
},
{
"docid": "c697ce69b5ba77cce6dce93adaba7ee0",
"text": "Online social networks play a major role in modern societies, and they have shaped the way social relationships evolve. Link prediction in social networks has many potential applications such as recommending new items to users, friendship suggestion and discovering spurious connections. Many real social networks evolve the connections in multiple layers (e.g. multiple social networking platforms). In this article, we study the link prediction problem in multiplex networks. As an example, we consider a multiplex network of Twitter (as a microblogging service) and Foursquare (as a location-based social network). We consider social networks of the same users in these two platforms and develop a meta-path-based algorithm for predicting the links. The connectivity information of the two layers is used to predict the links in Foursquare network. Three classical classifiers (naive Bayes, support vector machines (SVM) and K-nearest neighbour) are used for the classification task. Although the networks are not highly correlated in the layers, our experiments show that including the cross-layer information significantly improves the prediction performance. The SVM classifier results in the best performance with an average accuracy of 89%.",
"title": ""
},
{
"docid": "135f4008d9c7edc3d7ab8c7f9eb0c85e",
"text": "Organizations deploy gamification in CSCW systems to enhance motivation and behavioral outcomes of users. However, gamification approaches often cause competition between users, which might be inappropriate for working environments that seek cooperation. Drawing on the social interdependence theory, this paper provides a classification for gamification features and insights about the design of cooperative gamification. Using the example of an innova-tion community of a German engineering company, we present the design of a cooperative gamification approach and results from a first experimental evaluation. The findings indicate that the developed gamification approach has positive effects on perceived enjoyment and the intention towards knowledge sharing in the considered innovation community. Besides our conceptual contribu-tion, our findings suggest that cooperative gamification may be beneficial for cooperative working environments and represents a promising field for future research.",
"title": ""
},
{
"docid": "3bba459c9f4cae50db1aa4e8104891f1",
"text": "Unstructured human environments present a substantial challenge to effective robotic operation. Mobile manipulation in typical human environments requires dealing with novel unknown objects, cluttered workspaces, and noisy sensor data. We present an approach to mobile pick and place in such environments using a combination of 2D and 3D visual processing, tactile and proprioceptive sensor data, fast motion planning, reactive control and monitoring, and reactive grasping. We demonstrate our approach by using a two-arm mobile manipulation system to pick and place objects. Reactive components allow our system to account for uncertainty arising from noisy sensors, inaccurate perception (e.g. object detection or registration) or dynamic changes in the environment. We also present a set of tools that allow our system to be easily configured within a short time for a new robotic system.",
"title": ""
},
{
"docid": "598a45d251ae032d97db0162a9de347f",
"text": "In this paper, a 2×2 broadside array of 3D printed half-wave dipole antennas is presented. The array design leverages direct digital manufacturing (DDM) technology to realize a shaped substrate structure that is used to control the array beamwidth. The non-planar substrate allows the element spacing to be changed without affecting the length of the feed network or the distance to the underlying ground plane. The 4-element array has a broadside gain that varies between 7.0–8.5 dBi depending on the out-of-plane angle of the substrate. Acrylonitrile Butadiene Styrene (ABS) is deposited using fused deposition modeling to form the array structure (relative permittivity of 2.7 and loss tangent of 0.008) and Dupont CB028 silver paste is used to form the conductive traces.",
"title": ""
},
{
"docid": "f2a1e5d8e99977c53de9f2a82576db69",
"text": "During the last years, several masking schemes for AES have been proposed to secure hardware implementations against DPA attacks. In order to investigate the effectiveness of these countermeasures in practice, we have designed and manufactured an ASIC. The chip features an unmasked and two masked AES-128 encryption engines that can be attacked independently. In addition to conventional DPA attacks on the output of registers, we have also mounted attacks on the output of logic gates. Based on simulations and physical measurements we show that the unmasked and masked implementations leak side-channel information due to glitches at the output of logic gates. It turns out that masking the AES S-Boxes does not prevent DPA attacks, if glitches occur in the circuit.",
"title": ""
},
{
"docid": "e60d699411055bf31316d468226b7914",
"text": "Tabular data is difficult to analyze and to search through, yielding for new tools and interfaces that would allow even non tech-savvy users to gain insights from open datasets without resorting to specialized data analysis tools and without having to fully understand the dataset structure. The goal of our demonstration is to showcase answering natural language questions from tabular data, and to discuss related system configuration and model training aspects. Our prototype is publicly available and open-sourced (see demo )",
"title": ""
},
{
"docid": "8c2c54207fa24358552bc30548bec5bc",
"text": "This paper proposes an edge bundling approach applied on parallel coordinates to improve the visualization of cluster information directly from the overview. Lines belonging to a cluster are bundled into a single curve between axes, where the horizontal and vertical positioning of the bundling intersection (known as bundling control points) to encode pertinent information about the cluster in a given dimension, such as variance, standard deviation, mean, median, and so on. The hypothesis is that adding this information to the overview improves the visualization overview at the same that it does not prejudice the understanding in other aspects. We have performed tests with participants to compare our approach with classic parallel coordinates and other consolidated bundling technique. The results showed most of the initially proposed hypotheses to be confirmed at the end of the study, as the tasks were performed successfully in the majority of tasks maintaining a low response time in average, as well as having more aesthetic pleasing according to participants' opinion.",
"title": ""
},
{
"docid": "6206968905f6e211b07e896f49ecdc57",
"text": "We present here a new algorithm for segmentation of intensity images which is robust, rapid, and free of tuning parameters. The method, however, requires the input of a number of seeds, either individual pixels or regions, which will control the formation of regions into which the image will be segmented. In this correspondence, we present the algorithm, discuss briefly its properties, and suggest two ways in which it can be employed, namely, by using manual seed selection or by automated procedures.",
"title": ""
},
{
"docid": "d6146614330de1da7ae1a4842e2768c1",
"text": "Series-connected power switch provides a viable solution to implement high voltage and high frequency converters. By using the commercially available 1200V Silicon Carbide (SiC) Junction Field Effect Transistor (JFET) and Metal Oxide semiconductor Filed-effect Transistor (MOSFET), a 6 kV SiC hybrid power switch concept and its application are demonstrated. To solve the parameter deviation issue in the series device structure, an optimized voltage control method is introduced, which can guarantee the equal voltage sharing under both static and dynamic state. Without Zener diode arrays, this strategy can significantly reduce the turn-off switching loss. Moreover, this hybrid MOSFET-JFETs concept is also presented to suppress the silicon MOSFET parasitic capacitance effect. In addition, the positive gate drive voltage greatly accelerates turn-on speed and decreases the switching loss. Compared with the conventional super-JFETs, the proposed scheme is suitable for series-connected device, and can achieve better performance. The effectiveness of this method is validated by simulations and experiments, and promising results are obtained.",
"title": ""
},
{
"docid": "60a5897f0cad1812d7ce03641db3b2ee",
"text": "This paper describes a high power (2W) distributed amplifier (DA) MMIC. DA MMIC was fabricated using an Lg=0.25μm GaAs PHEMT process. DA MMIC contains an impedance transformer and heavily tapered gate periphery design for constant output power performance over 0.1 to 22GHz operational frequency. To obtain high voltage operation, the DA MMIC employed a three stacked FET topology. A 7-section DA demonstrated 2 W saturated output power and 12 dB small signal gain from 0.1 GHz to 22 GHz with peak output power of 3.5 W with power added efficiency (PAE) of 27%. Those test results exceeded recently reported GaN based power DA performance [4] with large margins.",
"title": ""
},
{
"docid": "aaec79a58537f180aba451ea825ed013",
"text": "In my March 2006 CACM article I used the term \" computational thinking \" to articulate a vision that everyone, not just those who major in computer science, can benefit from thinking like a computer scientist [Wing06]. So, what is computational thinking? Here is a definition that Jan use; it is inspired by an email exchange I had with Al Aho of Columbia University: Computational Thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent [CunySnyderWing10] Informally, computational thinking describes the mental activity in formulating a problem to admit a computational solution. The solution can be carried out by a human or machine, or more generally, by combinations of humans and machines. When I use the term computational thinking, my interpretation of the words \" problem \" and \" solution \" is broad; in particular, I mean not just mathematically well-defined problems whose solutions are completely analyzable, e.g., a proof, an algorithm, or a program, but also real-world problems whose solutions might be in the form of large, complex software systems. Thus, computational thinking overlaps with logical thinking and systems thinking. It includes algorithmic thinking and parallel thinking, which in turn engage other kinds of thought processes, e.g., compositional reasoning, pattern matching, procedural thinking, and recursive thinking. Computational thinking is used in the design and analysis of problems and their solutions, broadly interpreted. The most important and high-level thought process in computational thinking is the abstraction process. Abstraction is used in defining patterns, generalizing from instances, and parameterization. It is used to let one object stand for many. It is used to capture essential properties common to a set of objects while hiding irrelevant distinctions among them. For example, an algorithm is an abstraction of a process that takes inputs, executes a sequence of steps, and produces outputs to satisfy a desired goal. An abstract data type defines an abstract set of values and operations for manipulating those values, hiding the actual representation of the values from the user of the abstract data type. Designing efficient algorithms inherently involves designing abstract data types. Abstraction gives us the power to scale and deal with complexity. Recursively applying abstraction gives us the ability to build larger and larger systems, with the base case (at least for computer science) being bits (0's …",
"title": ""
}
] |
scidocsrr
|
c24da78b14df6173474ad114a6163879
|
First-Person Action-Object Detection with EgoNet
|
[
{
"docid": "c2402cea6e52ee98bc0c3de084580194",
"text": "We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video sub shots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between sub shots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subs hot summary. Whereas traditional methods optimize a summary's diversity or representative ness, ours explicitly accounts for how one sub-event \"leads to\" another-which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.",
"title": ""
},
{
"docid": "4b8bc69ff0edde314efbe626e334ea12",
"text": "We present a novel dataset and novel algorithms for the problem of detecting activities of daily living (ADL) in firstperson camera views. We have collected a dataset of 1 million frames of dozens of people performing unscripted, everyday activities. The dataset is annotated with activities, object tracks, hand positions, and interaction events. ADLs differ from typical actions in that they can involve long-scale temporal structure (making tea can take a few minutes) and complex object interactions (a fridge looks different when its door is open). We develop novel representations including (1) temporal pyramids, which generalize the well-known spatial pyramid to approximate temporal correspondence when scoring a model and (2) composite object models that exploit the fact that objects look different when being interacted with. We perform an extensive empirical evaluation and demonstrate that our novel representations produce a two-fold improvement over traditional approaches. Our analysis suggests that real-world ADL recognition is “all about the objects,” and in particular, “all about the objects being interacted with.”",
"title": ""
}
] |
[
{
"docid": "b5e19f1609aaaf1ad1c91eb3a846609c",
"text": "In this paper, Radial Basis Function (RBF) neural Network has been implemented on eight directional values of gradient features for handwritten Hindi character recognition. The character recognition system was trained by using different samples in different handwritings collected of various people of different age groups. The Radial Basis Function network with one input and one output layer has been used for the training of RBF Network. Experiment has been performed to study the recognition accuracy, training time and classification time of RBF neural network. The recognition accuracy, training time and classification time achieved by implementing the RBF network have been compared with the result achieved in previous related work i.e. Back propagation Neural Network. Comparative result shows that the RBF with directional feature provides slightly less recognition accuracy, reduced training and classification time.",
"title": ""
},
{
"docid": "83c9945f61900f4f15c09ff20eee09bc",
"text": "Rendering the user's body in virtual reality increases immersion and presence the illusion of \"being there\". Recent technology enables determining the pose and position of the hands to render them accordingly while interacting within the virtual environment. Virtual reality applications often use realistic male or female hands, mimic robotic hands, or cartoon hands. However, it is unclear how users perceive different hand styles. We conducted a study with 14 male and 14 female participants in virtual reality to investigate the effect of gender on the perception of six different hands. Quantitative and qualitative results show that women perceive lower levels of presence while using male avatar hands and male perceive lower levels of presence using non-human avatar hands. While women dislike male hands, men accept and feel presence with avatar hands of both genders. Our results highlight the importance of considering the users' diversity when designing virtual reality experiences.",
"title": ""
},
{
"docid": "6fb416991c80cb94ad09bc1bb09f81c7",
"text": "Children with Autism Spectrum Disorder often require therapeutic interventions to support engagement in effective social interactions. In this paper, we present the results of a study conducted in three public schools that use an educational and behavioral intervention for the instruction of social skills in changing situational contexts. The results of this study led to the concept of interaction immediacy to help children maintain appropriate spatial boundaries, reply to conversation initiators, disengage appropriately at the end of an interaction, and identify potential communication partners. We describe design principles for Ubicomp technologies to support interaction immediacy and present an example design. The contribution of this work is twofold. First, we present an understanding of social skills in mobile and dynamic contexts. Second, we introduce the concept of interaction immediacy and show its effectiveness as a guiding principle for the design of Ubicomp applications.",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "dc8b19649f217d7fde46bb458d186923",
"text": "Sophisticated technology is increasingly replacing human minds to perform complicated tasks in domains ranging from medicine to education to transportation. We investigated an important theoretical determinant of people's willingness to trust such technology to perform competently—the extent to which a nonhuman agent is anthropomorphized with a humanlike mind—in a domain of practical importance, autonomous driving. Participants using a driving simulator drove either a normal car, an autonomous vehicle able to control steering and speed, or a comparable autonomous vehicle augmented with additional anthropomorphic features—name, gender, and voice. Behavioral, physiological, and self-report measures revealed that participants trusted that the vehicle would perform more competently as it acquired more anthropomorphic features. Technology appears better able to perform its intended design when it seems to have a humanlike mind. These results suggest meaningful consequences of humanizing technology, and also offer insights into the inverse process of objectifying humans. Word Count: 148 Anthropomorphism Increases Trust, 3 Technology is an increasingly common substitute for humanity. Sophisticated machines now perform tasks that once required a thoughtful human mind, from grading essays to diagnosing cancer to driving a car. As engineers overcome design barriers to creating such technology, important psychological barriers that users will face when using this technology emerge. Perhaps most important, will people be willing to trust competent technology to replace a human mind, such as a teacher’s mind when grading essays, or a doctor’s mind when diagnosing cancer, or their own mind when driving a car? Our research tests one important theoretical determinant of trust in any nonhuman agent: anthropomorphism (Waytz, Cacioppo, & Epley, 2010). Anthropomorphism is a process of inductive inference whereby people attribute to nonhumans distinctively human characteristics, particularly the capacity for rational thought (agency) and conscious feeling (experience; Gray, Gray, & Wegner, 2007). Philosophical definitions of personhood focus on these mental capacities as essential to being human (Dennett, 1978; Locke, 1841/1997). Furthermore, studies examining people’s lay theories of humanness show that people define humanness in terms of emotions that implicate higher order mental process such as self-awareness and memory (e.g., humiliation, nostalgia; Leyens et al., 2000) and traits that involve cognition and emotion (e.g., analytic, insecure; Haslam, 2006). Anthropomorphizing a nonhuman does not simply involve attributing superficial human characteristics (e.g., a humanlike face or body) to it, but rather attributing essential human characteristics to the agent (namely a humanlike mind, capable of thinking and feeling). Trust is a multifaceted concept that can refer to belief that another will behave with benevolence, integrity, predictability, or competence (McKnight & Chervany, 2001). Our prediction that anthropomorphism will increase trust centers on this last component of trust in another's competence (akin to confidence) (Siegrist, Earle, & Gutscher, 2003; Twyman, Harvey, & Harries, Anthropomorphism Increases Trust, 4 2008). 
Just as a patient would trust a thoughtful doctor to diagnose cancer more than a thoughtless one, or would rely on mindful cab driver to navigate through rush hour traffic more than a mindless cab driver, this conceptualization of anthropomorphism predicts that people would trust easily anthropomorphized technology to perform its intended function more than seemingly mindless technology. An autonomous vehicle (one that that drives itself) for instance, should seem better able to navigate through traffic when it seems able to think and sense its surroundings than when it seems to be simply mindless machinery. Or a “warbot” intended to kill should seem more lethal and sinister when it appears capable of thinking and planning than when it seems to be simply a computer mindlessly following an operator’s instructions. The more technology seems to have humanlike mental capacities, the more people should trust it to perform its intended function competently, regardless of the valence of its intended function (Epley, Caruso, & Bazerman, 2006; Pierce, Kilduff, Galinsky, & Sivanathan, 2013). This prediction builds on the common association between people’s perceptions of others’ mental states and of competent action. Because mindful agents appear capable of controlling their own actions, people judge others to be more responsible for successful actions they perform with conscious awareness, foresight, and planning (Cushman, 2008; Malle & Knobe, 1997) than for actions they perform mindlessly (see Alicke, 2000; Shaver, 1985; Weiner, 1995). Attributing a humanlike mind to a nonhuman agent should therefore more make the agent seem better able to control its own actions, and therefore better able to perform its intended functions competently. Our prediction also advances existing research on the consequences of anthropomorphism by articulating the psychological processes by which anthropomorphism could affect trust in technology (Nass & Moon, 2000), and by both experimentally manipulating anthropomorphism as well as measuring it as a critical mediator. Some experiments have manipulated the humanlike appearance of robots and Anthropomorphism Increases Trust, 5 assessed measures indirectly related to trust. However, such studies have not measured whether such superficial manipulations actually increases the attribution of essential humanlike qualities to that agent (the attribution we predict is critical for trust in technology; Hancock, Billings, Schaeffer, Chen, De Visser, 2011), and therefore cannot explain factors found ad-hoc to moderate the apparent effect of anthropomorphism on trust (Pak, Fink, Price, Bass, & Sturre, 2012). Another study found that individual differences in anthropomorphism predicted differences in willingness to trust technology in hypothetical scenarios (Waytz et al., 2010), but did not manipulate anthropomorphism experimentally. Our experiment is therefore the first to test our theoretical model of how anthropomorphism affects trust in technology. We conducted our experiment in a domain of practical relevance: people’s willingness to trust an autonomous vehicle. Autonomous vehicles—cars that control their own steering and speed—are expected to account for 75% of vehicles on the road by 2040 (Newcomb, 2012). Employing these autonomous features means surrendering personal control of the vehicle and trusting technology to drive safely. 
We manipulated the ease with which a vehicle, approximated by a driving simulator, could be anthropomorphized by merely giving it independent agency, or by also giving it a name, gender, and a human voice. We predicted that independent agency alone would make the car seem more mindful than a normal car, and that adding further anthropomorphic qualities would make the vehicle seem even more mindful. More important, we predicted that these relative increases in anthropomorphism would increase physiological, behavioral, and psychological measures of trust in the vehicle’s ability to drive effectively. Because anthropomorphism increases trust in the agent’s ability to perform its job, we also predicted that increased anthropomorphism of an autonomous agent would mitigate blame for an agent’s involvement in an undesirable outcome. To test this, we implemented a virtually unavoidable Anthropomorphism Increases Trust, 6 accident during the driving simulation in which participants were struck by an oncoming car, an accident clearly caused by the other driver. We implemented this to maintain experimental control over participants’ experience because everyone in the autonomous vehicle conditions would get into the same accident, one clearly caused by the other driver. Indeed, when two people are potentially responsible for an outcome, the agent seen to be more competent tends to be credited for a success whereas the agent seen to be less competent tends to be blamed for a failure (Beckman, 1970; Wetzel, 1972). Because we predicted that anthropomorphism would increase trust in the vehicle’s competence, we also predicted that it would reduce blame for an accident clear caused by another vehicle. Experiment Method One hundred participants (52 female, Mage=26.39) completed this experiment using a National Advanced Driving Simulator. Once in the simulator, the experimenter attached physiological equipment to participants and randomly assigned them to condition: Normal, Agentic, or Anthropomorphic. Participants in the Normal condition drove the vehicle themselves, without autonomous features. Participants in the Agentic condition drove a vehicle capable of controlling its steering and speed (an “autonomous vehicle”). The experimenter followed a script describing the vehicle's features, suggesting when to use the autonomous features, and describing what was about to happen. Participants in the Anthropomorphic condition drove the same autonomous vehicle, but with additional anthropomorphic features beyond mere agency—the vehicle was referred to by name (Iris), was given a gender (female), and was given a voice through human audio files played at predetermined times throughout the course. The voice files followed the same script used by the experimenter in the Agentic condition, modified where necessary (See Supplemental Online Material [SOM]). Anthropomorphism Increases Trust, 7 All participants first completed a driving history questionnaire and a measure of dispositional anthropomorphism (Waytz et al., 2010). Scores on this measure did not vary significantly by condition, so we do not discuss them further. Participants in the Agentic and Anthropomorphic conditions then drove a short practice course to familiarize themselves with the car’s autonomous features. Participants coul",
"title": ""
},
{
"docid": "b4a5ebf335cc97db3790c9e2208e319d",
"text": "We examine whether conservative white males are more likely than are other adults in the U.S. general public to endorse climate change denial. We draw theoretical and analytical guidance from the identityprotective cognition thesis explaining the white male effect and from recent political psychology scholarship documenting the heightened system-justification tendencies of political conservatives. We utilize public opinion data from ten Gallup surveys from 2001 to 2010, focusing specifically on five indicators of climate change denial. We find that conservative white males are significantly more likely than are other Americans to endorse denialist views on all five items, and that these differences are even greater for those conservative white males who self-report understanding global warming very well. Furthermore, the results of our multivariate logistic regression models reveal that the conservative white male effect remains significant when controlling for the direct effects of political ideology, race, and gender as well as the effects of nine control variables. We thus conclude that the unique views of conservative white males contribute significantly to the high level of climate change denial in the United States. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "93c2ed30659e6b9c2020866cd3670705",
"text": "Longitudinal melanonychia (LM) is a pigmented longitudinal band of the nail unit, which results from pigment deposition, generally melanin, in the nail plate. Such lesion is frequently observed in specific ethnic groups, such as Asians and African Americans, typically affecting multiple nails. When LM involves a single nail plate, it may be the sign of a benign lesion within the matrix, such as a melanocytic nevus, simple lentigo, or nail matrix melanocyte activation. However, the possibility of melanoma must be considered. Nail melanoma in children is exceptionally rare and only 2 cases have been reported in fairskinned Caucasian individuals.",
"title": ""
},
{
"docid": "a4037343fa0df586946d8034b0bf8a5b",
"text": "Security researchers are applying software reliability models to vulnerability data, in an attempt to model the vulnerability discovery process. I show that most current work on these vulnerability discovery models (VDMs) is theoretically unsound. I propose a standard set of definitions relevant to measuring characteristics of vulnerabilities and their discovery process. I then describe the theoretical requirements of VDMs and highlight the shortcomings of existing work, particularly the assumption that vulnerability discovery is an independent process.",
"title": ""
},
{
"docid": "821b6ce6e6d51e9713bb44c4c9bf8cf0",
"text": "Rapidly destructive arthritis (RDA) of the shoulder is a rare disease. Here, we report two cases, with different destruction patterns, which were most probably due to subchondral insufficiency fractures (SIFs). Case 1 involved a 77-year-old woman with right shoulder pain. Rapid destruction of both the humeral head and glenoid was seen within 1 month of the onset of shoulder pain. We diagnosed shoulder RDA and performed a hemiarthroplasty. Case 2 involved a 74-year-old woman with left shoulder pain. Humeral head collapse was seen within 5 months of pain onset, without glenoid destruction. Magnetic resonance imaging showed a bone marrow edema pattern with an associated subchondral low-intensity band, typical of SIF. Total shoulder arthroplasty was performed in this case. Shoulder RDA occurs as a result of SIF in elderly women; the progression of the joint destruction is more rapid in cases with SIFs of both the humeral head and the glenoid. Although shoulder RDA is rare, this disease should be included in the differential diagnosis of acute onset shoulder pain in elderly female patients with osteoporosis and persistent joint effusion.",
"title": ""
},
{
"docid": "91b49384769b178b300f2e3a4bd0b265",
"text": "The recently proposed self-ensembling methods have achieved promising results in deep semi-supervised learning, which penalize inconsistent predictions of unlabeled data under different perturbations. However, they only consider adding perturbations to each single data point, while ignoring the connections between data samples. In this paper, we propose a novel method, called Smooth Neighbors on Teacher Graphs (SNTG). In SNTG, a graph is constructed based on the predictions of the teacher model, i.e., the implicit self-ensemble of models. Then the graph serves as a similarity measure with respect to which the representations of \"similar\" neighboring points are learned to be smooth on the low-dimensional manifold. We achieve state-of-the-art results on semi-supervised learning benchmarks. The error rates are 9.89%, 3.99% for CIFAR-10 with 4000 labels, SVHN with 500 labels, respectively. In particular, the improvements are significant when the labels are fewer. For the non-augmented MNIST with only 20 labels, the error rate is reduced from previous 4.81% to 1.36%. Our method also shows robustness to noisy labels.",
"title": ""
},
{
"docid": "e28ab50c2d03402686cc9a465e1231e7",
"text": "Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.",
"title": ""
},
{
"docid": "392fc4decf7a474277ec0fe596e19145",
"text": "This paper proposes an approach to establish cooperative behavior within traffic scenarios involving only autonomously driving vehicles. The main idea is to employ principles of auction-based control to determine driving strategies by which the vehicles reach their driving goals, while adjusting their paths to each other and adhering to imposed constraints like traffic rules. Driving plans (bids) are repetitively negotiated among the control units of the vehicles (the auction) to obtain a compromise between separate (local) vehicle goals and the global objective to resolve the considered traffic scenario. The agreed driving plans serve as reference trajectories for local model-predictive controllers of the vehicles to realize the driving behavior. The approach is illustrated for a cooperative overtaking scenario comprising three vehicles.",
"title": ""
},
{
"docid": "695264db0ca1251ab0f63b04d41c68cd",
"text": "Reading comprehension tasks test the ability of models to process long-term context and remember salient information. Recent work has shown that relatively simple neural methods such as the Attention Sum-Reader can perform well on these tasks; however, these systems still significantly trail human performance. Analysis suggests that many of the remaining hard instances are related to the inability to track entity-references throughout documents. This work focuses on these hard entity tracking cases with two extensions: (1) additional entity features, and (2) training with a multi-task tracking objective. We show that these simple modifications improve performance both independently and in combination, and we outperform the previous state of the art on the LAMBADA dataset, particularly on difficult entity examples.",
"title": ""
},
{
"docid": "e9858b151a3f042f198184cda0917639",
"text": "Semantic parsing aims at mapping natural language to machine interpretable meaning representations. Traditional approaches rely on high-quality lexicons, manually-built templates, and linguistic features which are either domainor representation-specific. In this paper we present a general method based on an attention-enhanced encoder-decoder model. We encode input utterances into vector representations, and generate their logical forms by conditioning the output sequences or trees on the encoding vectors. Experimental results on four datasets show that our approach performs competitively without using hand-engineered features and is easy to adapt across domains and meaning representations.",
"title": ""
},
{
"docid": "d9605c1cde4c40d69c2faaea15eb466c",
"text": "A magnetically tunable ferrite-loaded substrate integrated waveguide (SIW) cavity resonator is presented and demonstrated. X-band cavity resonator is operated in the dominant mode and the ferrite slabs are loaded onto the side walls of the cavity where the value of magnetic field is highest. Measured results for single and double ferrite-loaded SIW cavity resonators are presented. Frequency tuning range of more than 6% and 10% for single and double ferrite slabs are obtained. Unloaded Q -factor of more than 200 is achieved.",
"title": ""
},
{
"docid": "28533f1b8aa1e6191efb818d4e93fb66",
"text": "Pelvic tilt is often quantified using the angle between the horizontal and a line connecting the anterior superior iliac spine (ASIS) and the posterior superior iliac spine (PSIS). Although this angle is determined by the balance of muscular and ligamentous forces acting between the pelvis and adjacent segments, it could also be influenced by variations in pelvic morphology. The primary objective of this anatomical study was to establish how such variation may affect the ASIS-PSIS measure of pelvic tilt. In addition, we also investigated how variability in pelvic landmarks may influence measures of innominate rotational asymmetry and measures of pelvic height. Thirty cadaver pelves were used for the study. Each specimen was positioned in a fixed anatomical reference position and the angle between the ASIS and PSIS measured bilaterally. In addition, side-to-side differences in the height of the innominate bone were recorded. The study found a range of values for the ASIS-PSIS of 0-23 degrees, with a mean of 13 and standard deviation of 5 degrees. Asymmetry of pelvic landmarks resulted in side-to-side differences of up to 11 degrees in ASIS-PSIS tilt and 16 millimeters in innominate height. These results suggest that variations in pelvic morphology may significantly influence measures of pelvic tilt and innominate rotational asymmetry.",
"title": ""
},
{
"docid": "f98b1b9808b3eb41f3d60f207854ec79",
"text": "The newly emerging event-based social networks (EBSNs) connect online and offline social interactions, offering a great opportunity to understand behaviors in the cyber-physical space. While existing efforts have mainly focused on investigating user behaviors in traditional social network services (SNS), this paper aims to exploit individual behaviors in EBSNs, which remains an unsolved problem. In particular, our method predicts activity attendance by discovering a set of factors that connect the physical and cyber spaces and influence individual's attendance of activities in EBSNs. These factors, including content preference, context (spatial and temporal) and social influence, are extracted using different models and techniques. We further propose a novel Singular Value Decomposition with Multi-Factor Neighborhood (SVD-MFN) algorithm to predict activity attendance by integrating the discovered heterogeneous factors into a single framework, in which these factors are fused through a neighborhood set. Experiments based on real-world data from Douban Events demonstrate that the proposed SVD-MFN algorithm outperforms the state-of-the-art prediction methods.",
"title": ""
},
{
"docid": "7a1f244aae5f28cd9fb2d5ba54113c28",
"text": "Next generation sequencing (NGS) technology has revolutionized genomic and genetic research. The pace of change in this area is rapid with three major new sequencing platforms having been released in 2011: Ion Torrent’s PGM, Pacific Biosciences’ RS and the Illumina MiSeq. Here we compare the results obtained with those platforms to the performance of the Illumina HiSeq, the current market leader. In order to compare these platforms, and get sufficient coverage depth to allow meaningful analysis, we have sequenced a set of 4 microbial genomes with mean GC content ranging from 19.3 to 67.7%. Together, these represent a comprehensive range of genome content. Here we report our analysis of that sequence data in terms of coverage distribution, bias, GC distribution, variant detection and accuracy. Sequence generated by Ion Torrent, MiSeq and Pacific Biosciences technologies displays near perfect coverage behaviour on GC-rich, neutral and moderately AT-rich genomes, but a profound bias was observed upon sequencing the extremely AT-rich genome of Plasmodium falciparum on the PGM, resulting in no coverage for approximately 30% of the genome. We analysed the ability to call variants from each platform and found that we could call slightly more variants from Ion Torrent data compared to MiSeq data, but at the expense of a higher false positive rate. Variant calling from Pacific Biosciences data was possible but higher coverage depth was required. Context specific errors were observed in both PGM and MiSeq data, but not in that from the Pacific Biosciences platform. All three fast turnaround sequencers evaluated here were able to generate usable sequence. However there are key differences between the quality of that data and the applications it will support.",
"title": ""
}
] |
scidocsrr
|
94057608623a7644e71b477a75cdfeda
|
Exponentiated Gradient Exploration for Active Learning
|
[
{
"docid": "cce513c48e630ab3f072f334d00b67dc",
"text": "We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG has a much smaller loss if only few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data. ] 1997 Academic Press",
"title": ""
}
] |
[
{
"docid": "ac96b284847f58c7683df92e13157f40",
"text": "Falls are dangerous for the aged population as they can adversely affect health. Therefore, many fall detection systems have been developed. However, prevalent methods only use accelerometers to isolate falls from activities of daily living (ADL). This makes it difficult to distinguish real falls from certain fall-like activities such as sitting down quickly and jumping, resulting in many false positives. Body orientation is also used as a means of detecting falls, but it is not very useful when the ending position is not horizontal, e.g. falls happen on stairs. In this paper we present a novel fall detection system using both accelerometers and gyroscopes. We divide human activities into two categories: static postures and dynamic transitions. By using two tri-axial accelerometers at separate body locations, our system can recognize four kinds of static postures: standing, bending, sitting, and lying. Motions between these static postures are considered as dynamic transitions. Linear acceleration and angular velocity are measured to determine whether motion transitions are intentional. If the transition before a lying posture is not intentional, a fall event is detected. Our algorithm, coupled with accelerometers and gyroscopes, reduces both false positives and false negatives, while improving fall detection accuracy. In addition, our solution features low computational cost and real-time response.",
"title": ""
},
{
"docid": "6cbd51bbef3b56df6d97ec7b4348cd94",
"text": "This study reviews human clinical experience to date with several synthetic cannabinoids, including nabilone, levonantradol, ajulemic acid (CT3), dexanabinol (HU-211), HU-308, and SR141716 (Rimonabant®). Additionally, the concept of “clinical endogenous cannabinoid deficiency” is explored as a possible factor in migraine, idiopathic bowel disease, fibromyalgia and other clinical pain states. The concept of analgesic synergy of cannabinoids and opioids is addressed. A cannabinoid-mediated improvement in night vision at the retinal level is discussed, as well as its potential application to treatment of retinitis pigmentosa and other conditions. Additionally noted is the role of cannabinoid treatment in neuroprotection and its application to closed head injury, cerebrovascular accidents, and CNS degenerative diseases including Alzheimer, Huntington, Parkinson diseases and ALS. Excellent clinical results employing cannabis based medicine extracts (CBME) in spasticity and spasms of MS suggests extension of such treatment to other spasmodic and dystonic conditions. Finally, controversial areas of cannabinoid treatment in obstetrics, gynecology and pediatrics are addressed along with a rationale for such interventions. [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-HAWORTH. E-mail address: <docdelivery@haworthpress. com> Website: <http://www.HaworthPress.com> 2003 by The Haworth Press, Inc. All rights reserved.]",
"title": ""
},
{
"docid": "56a7243414824a2e4ab3993dc3a90fbe",
"text": "The primary objectives of periodontal therapy are to maintain and to obtain health and integrity of the insertion apparatus and to re-establish esthetics by means of the quantitative and qualitative restoration of the gingival margin. Esthetics can be considered essential to the success of any dental procedure. However, in cleft lip and palate patients gingival esthetics do not play a relevant role, since most patients present little gingiva exposure (Mikami, 1990). The treatment protocol for cleft palate patients is complex and often requires a myriad of surgical and rehabilitative procedures that last until adulthood. In order to rehabilitate these patients and provide them with adequate physical and psychological conditions for a good quality of life, plastic surgery has been taking place since the 19th century, with the development of new techniques. By the age of six months the patients have undergone lip repair procedures (Bill, 1956; Jolleys, 1954), followed by palatoplasty at the age of 1218 months. As a consequence of these surgical interventions, the formation of innumerous scars and fibrous tissue in the anterior region may cause some sequels, such as orofacial growth alterations (Quarta and Koch, 1989; Ozawa, 2001), a shallow vestibule with lack of attached gingiva and gingival margin mobility (Falcone, 1966). A shallow vestibule in the cleft lip and palate patient is associated with the contraction of the upper lip during healing (Iino et al, 2001), which causes deleterious effects on growth, facial expression, speech, orthodontic and prosthetic treatment problems, diminished keratinized gingiva, bone graft resorption and changes in the upper lip muscle pattern. The surgical protocol at the Hospital for Rehabilitation of Craniofacial Anomalies (HRCA) in Bauru consists of carrying out primary surgeries (cheiloplasty and palatoplasty) during the first months of Periodontal Health Re-Establishment in Cleft Lip and Palate Patients through Vestibuloplasty Associated with Free Gingival Graft",
"title": ""
},
{
"docid": "ec58915a7fd321bcebc748a369153509",
"text": "For wireless charging of electric vehicle (EV) batteries, high-frequency magnetic fields are generated from magnetically coupled coils. The large air-gap between two coils may cause high leakage of magnetic fields and it may also lower the power transfer efficiency (PTE). For the first time, in this paper, we propose a new set of coil design formulas for high-efficiency and low harmonic currents and a new design procedure for low leakage of magnetic fields for high-power wireless power transfer (WPT) system. Based on the proposed design procedure, a pair of magnetically coupled coils with magnetic field shielding for a 1-kW-class golf-cart WPT system is optimized via finite-element simulation and the proposed design formulas. We built a 1-kW-class wireless EV charging system for practical measurements of the PTE, the magnetic field strength around the golf cart, and voltage/current spectrums. The fabricated system has achieved a PTE of 96% at the operating frequency of 20.15 kHz with a 156-mm air gap between the coils. At the same time, the highest magnetic field strength measured around the golf cart is 19.8 mG, which is far below the relevant electromagnetic field safety guidelines (ICNIRP 1998/2010). In addition, the third harmonic component of the measured magnetic field is 39 dB lower than the fundamental component. These practical measurement results prove the effectiveness of the proposed coil design formulas and procedure of a WPT system for high-efficiency and low magnetic field leakage.",
"title": ""
},
{
"docid": "a5001e03007f3fd166e15db37dcd3bc7",
"text": "Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models.",
"title": ""
},
{
"docid": "6300f94dbfa58583e15741e5c86aa372",
"text": "In this paper, we study the problem of retrieving a ranked list of top-N items to a target user in recommender systems. We first develop a novel preference model by distinguishing different rating patterns of users, and then apply it to existing collaborative filtering (CF) algorithms. Our preference model, which is inspired by a voting method, is well-suited for representing qualitative user preferences. In particular, it can be easily implemented with less than 100 lines of codes on top of existing CF algorithms such as user-based, item-based, and matrix-factorizationbased algorithms. When our preference model is combined to three kinds of CF algorithms, experimental results demonstrate that the preference model can improve the accuracy of all existing CF algorithms such as ATOP and NDCG@25 by 3%–24% and 6%–98%, respectively.",
"title": ""
},
{
"docid": "cd42f9eba7e1018f8a21c8830400af59",
"text": "This chapter proposes a conception of lexical meaning as use-potential, in contrast to prevailing atomistic and reificational views. The issues are illustrated on the example of spatial expressions, pre-eminently prepositions. It is argued that the dichotomy between polysemy and semantic generality is a false one, with expressions occupying points on a continuum from full homonymy to full monosemy, and with typical cases of polysemy falling in between. The notion of use-potential is explored in connectionist models of spatial categorization. Some possible objections to the use-potential approach are also addressed.",
"title": ""
},
{
"docid": "c91cb54598965e1111020ab70f9fbe94",
"text": "This paper proposes a parameter estimation method for doubly-fed induction generators (DFIGs) in variable-speed wind turbine systems (WTS). The proposed method employs an extended Kalman filter (EKF) for estimation of all electrical parameters of the DFIG, i.e., the stator and rotor resistances, the leakage inductances of stator and rotor, and the mutual inductance. The nonlinear state space model of the DFIG is derived and the design procedure of the EKF is described. The observability matrix of the linearized DFIG model is computed and the observability is checked online for different operation conditions. The estimation performance of the EKF is illustrated by simulation results. The estimated parameters are plotted against their actual values. The estimation performance of the EKF is also tested under variations of the DFIG parameters to investigate the estimation accuracy for changing parameters.",
"title": ""
},
{
"docid": "7f9b9bef62aed80a918ef78dcd15fb2a",
"text": "Transferring image-based object detectors to domain of videos remains a challenging problem. Previous efforts mostly exploit optical flow to propagate features across frames, aiming to achieve a good trade-off between performance and computational complexity. However, introducing an extra model to estimate optical flow would significantly increase the overall model size. The gap between optical flow and high-level features can hinder it from establishing the spatial correspondence accurately. Instead of relying on optical flow, this paper proposes a novel module called Progressive Sparse Local Attention (PSLA), which establishes the spatial correspondence between features across frames in a local region with progressive sparse strides and uses the correspondence to propagate features. Based on PSLA, Recursive Feature Updating (RFU) and Dense feature Transforming (DFT) are introduced to model temporal appearance and enrich feature representation respectively. Finally, a novel framework for video object detection is proposed. Experiments on ImageNet VID are conducted. Our framework achieves a state-of-the-art speedaccuracy trade-off with significantly reduced model capacity.",
"title": ""
},
{
"docid": "4dd0d34f6b67edee60f2e6fae5bd8dd9",
"text": "Virtual learning environments facilitate online learning, generating and storing large amounts of data during the learning/teaching process. This stored data enables extraction of valuable information using data mining. In this article, we present a systematic mapping, containing 42 papers, where data mining techniques are applied to predict students performance using Moodle data. Results show that decision trees are the most used classification approach. Furthermore, students interactions in forums are the main Moodle attribute analyzed by researchers.",
"title": ""
},
{
"docid": "67755a3dd06b09f458d1ee013e18c8ef",
"text": "Spiking neural networks are naturally asynchronous and use pulses to carry information. In this paper, we consider implementing such networks on a digital chip. We used an event-based simulator and we started from a previously established simulation, which emulates an analog spiking neural network, that can extract complex and overlapping, temporally correlated features. We modified this simulation to allow an easier integration in an embedded digital implementation. We first show that a four bits synaptic weight resolution is enough to achieve the best performance, although the network remains functional down to a 2 bits weight resolution. Then we show that a linear leak could be implemented to simplify the neurons leakage calculation. Finally, we demonstrate that an order-based STDP with a fixed number of potentiated synapses as low as 200 is efficient for features extraction. A simulation including these modifications, which lighten and increase the efficiency of digital spiking neural network implementation shows that the learning behavior is not affected, with a recognition rate of 98% in a cars trajectories detection application.",
"title": ""
},
{
"docid": "af98839cc3e28820c8d79403d58d903a",
"text": "Annotating the increasing amounts of user-contributed images in a personalized manner is in great demand. However, this demand is largely ignored by the mainstream of automated image annotation research. In this paper we aim for personalizing automated image annotation by jointly exploiting personalized tag statistics and content-based image annotation. We propose a cross-entropy based learning algorithm which personalizes a generic annotation model by learning from a user's multimedia tagging history. Using cross-entropy-minimization based Monte Carlo sampling, the proposed algorithm optimizes the personalization process in terms of a performance measurement which can be flexibly chosen. Automatic image annotation experiments with 5,315 realistic users in the social web show that the proposed method compares favorably to a generic image annotation method and a method using personalized tag statistics only. For 4,442 users the performance improves, where for 1,088 users the absolute performance gain is at least 0.05 in terms of average precision. The results show the value of the proposed method.",
"title": ""
},
{
"docid": "e4ce06c8e1dba5f9ec537dc137acf3ec",
"text": "Hemangiomas are relatively common benign proliferative lesion of vascular tissue origin. They are often present at birth and may become more apparent throughout life. They are seen on facial skin, tongue, lips, buccal mucosa and palate as well as muscles. Hemangiomas occur more common in females than males. This case report presents a case of capillary hemangioma in maxillary anterior region in a 10-year-old boy. How to cite this article: Satish V, Bhat M, Maganur PC, Shah P, Biradar V. Capillary Hemangioma in Maxillary Anterior Region: A Case Report. Int J Clin Pediatr Dent 2014;7(2):144-147.",
"title": ""
},
{
"docid": "a6f9dc745682efb871e338b63c0cbbc4",
"text": "Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On one hand, learning overcomplete dictionaries that facilitate a sparse representation of the data as a liner combination of a few atoms from such dictionary leads to state-of-the-art results in image and video restoration and classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far less samples than those required by the classical Shannon-Nyquist Theorem. The samples used in CS correspond to linear projections obtained by a sensing projection matrix. It has been shown that, for example, a nonadaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. On the other hand, a projection sensing matrix that is optimally designed for a certain class of signals can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this paper, we introduce a framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and those matrices that are optimized independently of the learning of the dictionary. Particular cases of the proposed framework include the optimization of the sensing matrix for a given dictionary as well as the optimization of the dictionary for a predefined sensing environment. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.",
"title": ""
},
{
"docid": "ffa974993a412ddba571e65f8b87f7df",
"text": "Synthetic gene switches are basic building blocks for the construction of complex gene circuits that transform mammalian cells into useful cell-based machines for next-generation biotechnological and biomedical applications. Ligand-responsive gene switches are cellular sensors that are able to process specific signals to generate gene product responses. Their involvement in complex gene circuits results in sophisticated circuit topologies that are reminiscent of electronics and that are capable of providing engineered cells with the ability to memorize events, oscillate protein production, and perform complex information-processing tasks. Microencapsulated mammalian cells that are engineered with closed-loop gene networks can be implanted into mice to sense disease-related input signals and to process this information to produce a custom, fine-tuned therapeutic response that rebalances animal metabolism. Progress in gene circuit design, in combination with recent breakthroughs in genome engineering, may result in tailored engineered mammalian cells with great potential for future cell-based therapies.",
"title": ""
},
{
"docid": "cd977d0e24fd9e26e90f2cf449141842",
"text": "Several leadership and ethics scholars suggest that the transformational leadership process is predicated on a divergent set of ethical values compared to transactional leadership. Theoretical accounts declare that deontological ethics should be associated with transformational leadership while transactional leadership is likely related to teleological ethics. However, very little empirical research supports these claims. Furthermore, despite calls for increasing attention as to how leaders influence their followers’ perceptions of the importance of ethics and corporate social responsibility (CSR) for organizational effectiveness, no empirical study to date has assessed the comparative impact of transformational and transactional leadership styles on follower CSR attitudes. Data from 122 organizational leaders and 458 of their followers indicated that leader deontological ethical values (altruism, universal rights, Kantian principles, etc.) were strongly associated with follower ratings of transformational leadership, while leader teleological ethical values (utilitarianism) were related to follower ratings of transactional leadership. As predicted, only transformational leadership was associated with follower beliefs in the stakeholder view of CSR. Implications for the study and practice of ethical leadership, future research directions, and management education are discussed.",
"title": ""
},
{
"docid": "9078698db240725e1eb9d1f088fb05f4",
"text": "Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, a straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, to which we call the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate timing of rebroadcasts to alleviate this problem. Simulation results are presented, which show different levels of improvement over the basic flooding approach.",
"title": ""
},
{
"docid": "e541ae262655b7f5affefb32ce9267ee",
"text": "Internet of Things (IoT) is a revolutionary technology for the modern society. IoT can connect every surrounding objects for various applications like security, medical fields, monitoring and other industrial applications. This paper considers the application of IoT in the field of medicine. IoT in E-medicine can take the advantage of emerging technologies to provide immediate treatment to the patient as well as monitors and keeps track of health record for healthy person. IoT then performs complex computations on these collected data and can provide health related advice. Though IoT can provide a cost effective medical services to any people of all age groups, there are several key issues that need to be addressed. System security, IoT interoperability, dynamic storage facility and unified access mechanisms are some of the many fundamental issues associated with IoT. This paper proposes a system level design solution for security and flexibility aspect of IoT. In this paper, the functional components are bound in security function group which ensures the management of privacy and secure operation of the system. The security function group comprises of components which offers secure communication using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Since CP-ABE are delegated to unconstrained devices with the assumption that these devices are trusted, the producer encrypts data using AES and the ABE scheme is protected through symmetric key solutions.",
"title": ""
},
{
"docid": "eaa2ed7e15a3b0a3ada381a8149a8214",
"text": "This paper describes a new robust regular polygon detector. The regular polygon transform is posed as a mixture of regular polygons in a five dimensional space. Given the edge structure of an image, we derive the a posteriori probability for a mixture of regular polygons, and thus the probability density function for the appearance of a mixture of regular polygons. Likely regular polygons can be isolated quickly by discretising and collapsing the search space into three dimensions. The remaining dimensions may be efficiently recovered subsequently using maximum likelihood at the locations of the most likely polygons in the subspace. This leads to an efficient algorithm. Also the a posteriori formulation facilitates inclusion of additional a priori information leading to real-time application to road sign detection. The use of gradient information also reduces noise compared to existing approaches such as the generalised Hough transform. Results are presented for images with noise to show stability. The detector is also applied to two separate applications: real-time road sign detection for on-line driver assistance; and feature detection, recovering stable features in rectilinear environments.",
"title": ""
},
{
"docid": "171e9eef8a23f5fdf05ba61a56415130",
"text": "Human moral judgment depends critically on “theory of mind,” the capacity to represent the mental states of agents. Recent studies suggest that the right TPJ (RTPJ) and, to lesser extent, the left TPJ (LTPJ), the precuneus (PC), and the medial pFC (MPFC) are robustly recruited when participants read explicit statements of an agent's beliefs and then judge the moral status of the agent's action. Real-world interactions, by contrast, often require social partners to infer each other's mental states. The current study uses fMRI to probe the role of these brain regions in supporting spontaneous mental state inference in the service of moral judgment. Participants read descriptions of a protagonist's action and then either (i) “moral” facts about the action's effect on another person or (ii) “nonmoral” facts about the situation. The RTPJ, PC, and MPFC were recruited selectively for moral over nonmoral facts, suggesting that processing moral stimuli elicits spontaneous mental state inference. In a second experiment, participants read the same scenarios, but explicit statements of belief preceded the facts: Protagonists believed their actions would cause harm or not. The response in the RTPJ, PC, and LTPJ was again higher for moral facts but also distinguished between neutral and negative outcomes. Together, the results illuminate two aspects of theory of mind in moral judgment: (1) spontaneous belief inference and (2) stimulus-driven belief integration.",
"title": ""
}
] |
scidocsrr
|
9bab0aaed546b4de3631a5dd8761a7e5
|
A Novel Symmetric Double-Slot Structure for Antipodal Vivaldi Antenna to Lower Cross-Polarization Level
|
[
{
"docid": "b48af90f2f8497ff5b1eb284036c5cb3",
"text": "A miniaturized coplanar waveguide (CPW)-fed antipodal Vivaldi antenna (AVA) is presented in this letter. Elliptically shaped strip conductors are used to lower the antenna operating frequency. With two pairs of tapered slots and circularly shaped loads, the radiation performance in lower operating band can be greatly enhanced. The fabricated prototype of this proposed antenna with a size of 90 ×93.5 ×0.8 mm3 was measured to confirm the simulated results. The wide measured impedance band from 1.32 GHz to more than 17 GHz for can be obtained. Meanwhile, the good measured radiation patterns with gain 3.5- 9.3 dBi in end-fire direction achieve in the whole measured working band.",
"title": ""
},
{
"docid": "6acc820f32c74ff30730faca2eff9f8f",
"text": "The conventional Vivaldi antenna is known for its ultrawideband characteristic, but low directivity. In order to improve the directivity, a double-slot structure is proposed to design a new Vivaldi antenna. The two slots are excited in uniform amplitude and phase by using a T-junction power divider. The double-slot structure can generate plane-like waves in the E-plane of the antenna. As a result, directivity of the double-slot Vivaldi antenna is significantly improved by comparison to a conventional Vivaldi antenna of the same size. The measured results show that impedance bandwidth of the double-slot Vivaldi antenna is from 2.5 to 15 GHz. Gain and directivity of the proposed antenna is considerably improved at the frequencies above 6 GHz. Furthermore, the main beam splitting at high frequencies of the conventional Vivaldi antenna on thick dielectric substrates is eliminated by the double-slot structure.",
"title": ""
},
{
"docid": "d7065dccb396b0a47526fc14e0a9e796",
"text": "A modified compact antipodal Vivaldi antenna is proposed with good performance for different applications including microwave and millimeter wave imaging. A step-by-step procedure is applied in this design including conventional antipodal Vivaldi antenna (AVA), AVA with a periodic slit edge, and AVA with a trapezoid-shaped dielectric lens to feature performances including wide bandwidth, small size, high gain, front-to-back ratio and directivity, modification on E-plane beam tilt, and small sidelobe levels. By adding periodic slit edge at the outer brim of the antenna radiators, lower-end limitation of the conventional AVA extended twice without changing the overall dimensions of the antenna. The optimized antenna is fabricated and tested, and the results show that S11 <; -10 dB frequency band is from 3.4 to 40 GHz, and it is in good agreement with simulation one. Gain of the antenna has been elevated by the periodic slit edge and the trapezoid dielectric lens at lower frequencies up to 8 dB and at higher frequencies up to 15 dB, respectively. The E-plane beam tilts and sidelobe levels are reduced by the lens.",
"title": ""
}
] |
[
{
"docid": "d848e146b4dbf78a6c629c5963c92f50",
"text": "Agile development is getting more and more used, also in the development of safety-critical software. For the sake of certification, it is necessary to comply with relevant standards – in this case IEC 61508 and EN 50128. In this paper we focus on two aspects of the need for configuration management and SafeScrum. First and foremost we need to adapt SafeScrum to the standards’ needs for configuration management. We show that this can be achieved by relative simple amendments to SafeScrum. In addition – in order to keep up with a rapidly changing set of development paradigms it is necessary to move the standards’ requirement in a goal based direction – more focus on what and not so much focus on how.",
"title": ""
},
{
"docid": "10a0f370ad3e9c3d652e397860114f90",
"text": "Statistical data associated with geographic regions is nowadays globally available in large amounts and hence automated methods to visually display these data are in high demand. There are several well-established thematic map types for quantitative data on the ratio-scale associated with regions: choropleth maps, cartograms, and proportional symbol maps. However, all these maps suffer from limitations, especially if large data values are associated with small regions. To overcome these limitations, we propose a novel type of quantitative thematic map, the necklace map. In a necklace map, the regions of the underlying two-dimensional map are projected onto intervals on a one-dimensional curve (the necklace) that surrounds the map regions. Symbols are scaled such that their area corresponds to the data of their region and placed without overlap inside the corresponding interval on the necklace. Necklace maps appear clear and uncluttered and allow for comparatively large symbol sizes. They visualize data sets well which are not proportional to region sizes. The linear ordering of the symbols along the necklace facilitates an easy comparison of symbol sizes. One map can contain several nested or disjoint necklaces to visualize clustered data. The advantages of necklace maps come at a price: the association between a symbol and its region is weaker than with other types of maps. Interactivity can help to strengthen this association if necessary. We present an automated approach to generate necklace maps which allows the user to interactively control the final symbol placement. We validate our approach with experiments using various data sets and maps.",
"title": ""
},
{
"docid": "ad6d10ad2165bbfd664e366d47c3ab89",
"text": "This paper presents a novel boundary based semiautomatic tool, ByLabel, for accurate image annotation. Given an image, ByLabel first detects its edge features and computes high quality boundary fragments. Current labeling tools require the human to accurately click on numerous boundary points. ByLabel simplifies this to just selecting among the boundary fragment proposals that ByLabel automatically generates. To evaluate the performance of By-Label, 10 volunteers, with no experiences of annotation, labeled both synthetic and real images. Compared to the commonly used tool LabelMe, ByLabel reduces image-clicks and time by 73% and 56% respectively, while improving the accuracy by 73% (from 1.1 pixel average boundary error to 0.3 pixel). The results show that our ByLabel outperforms the state-of-the-art annotation tool in terms of efficiency, accuracy and user experience. The tool is publicly available: http://webdocs.cs.ualberta.ca/~vis/ bylabel/.",
"title": ""
},
{
"docid": "23e548d5cada98f16033de2a6dfe04d5",
"text": "The distinction between positive and negative emotions is fundamental in emotion models. Intriguingly, neurobiological work suggests shared mechanisms across positive and negative emotions. We tested whether similar overlap occurs in real-life facial expressions. During peak intensities of emotion, positive and negative situations were successfully discriminated from isolated bodies but not faces. Nevertheless, viewers perceived illusory positivity or negativity in the nondiagnostic faces when seen with bodies. To reveal the underlying mechanisms, we created compounds of intense negative faces combined with positive bodies, and vice versa. Perceived affect and mimicry of the faces shifted systematically as a function of their contextual body emotion. These findings challenge standard models of emotion expression and highlight the role of the body in expressing and perceiving emotions.",
"title": ""
},
{
"docid": "1065c331b4a9ae5209ee3f35e5a2041b",
"text": "Recent acts of extreme violence involving teens and associated links to violent video games have led to an increased interest in video game violence. Research suggests that violent video games influence aggressive behavior, aggressive affect, aggressive cognition, and physiological arousal. Anderson and Bushman [Annu. Rev. Psychol. 53 (2002) 27.] have posited a General Aggression Model (GAM) to explain the mechanism behind the link between violent video games and aggressive behavior. However, the influence of violent video games as a function of developmental changes across adolescence has yet to be addressed. The purpose of this review is to integrate the GAM with developmental changes that occur across adolescence. D 2002 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "12a6a40af43d0543771e584b0735a826",
"text": "Purpose Early intervention and support for workers with mental health problems may be influenced by the mental health literacy of the worker, their colleagues and their supervisor. There are gaps, however, in our understanding of how to develop and evaluate mental health literacy within the context of the workplace. The purpose of this study was to evaluate the psychometric properties of a new Mental Health Literacy tool for the Workplace (MHL-W). Methods The MHL-W is a 16-question, vignette-based tool specifically tailored for the workplace context. It includes four vignettes featuring different manifestations of mental ill-health in the workplace, with parallel questions that explore each of the four dimensions of mental health literacy. In order to establish reliability and construct validity, data were collected from 192 healthcare workers who were participating in a mental health training project. Baseline data was used to examine the scale’s internal consistency, factor structure and correlations with general knowledge ratings, confidence ratings, attitudes towards people with mental illness, and attitudes towards seeking help. Paired t-tests were used to examine pre and post intervention scores in order to establish responsiveness of the scale. Results There was strong support for internal consistency of the tool and a one-factor solution. As predicted, the scores correlated highly with an overall rating of knowledge and confidence in addressing mental health issues, and moderately with attitudes towards seeking professional help and (decreased) stigmatized beliefs. It also appears to be responsive to change. Conclusions The MHL-W scale is promising tool to track the need for and impact of mental health education in the workplace.",
"title": ""
},
{
"docid": "827d7c359eadf40e8103c6c534b6e73f",
"text": "Making accurate recommendations for users has become an important function of e-commerce system with the rapid growth of WWW. Conventional recommendation systems usually recommend similar objects, which are of the same type with the query object without exploring the semantics of different similarity measures. In this paper, we organize objects in the recommendation system as a heterogeneous network. Through employing a path-based relevance measure to evaluate the relatedness between any-typed objects and capture the subtle semantic containing in each path, we implement a prototype system (called HeteRecom) for semantic based recommendation. HeteRecom has the following unique properties: (1) It provides the semantic-based recommendation function according to the path specified by users. (2) It recommends the similar objects of the same type as well as related objects of different types. We demonstrate the effectiveness of our system with a real-world movie data set.",
"title": ""
},
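The HeteRecom entry above relies on a path-based relevance measure over a heterogeneous network. As a minimal, illustrative sketch of that family of measures (not necessarily the exact measure HeteRecom uses), the snippet below computes a PathSim-style similarity from a commuting matrix built along a meta-path; the toy user-movie matrix is an assumption for demonstration only.

```python
import numpy as np

def pathsim(commuting_matrix: np.ndarray) -> np.ndarray:
    """Symmetric meta-path similarity from a commuting matrix M,
    where M[i, j] counts meta-path instances between objects i and j."""
    diag = np.diag(commuting_matrix)          # path counts from each object to itself
    denom = diag[:, None] + diag[None, :]     # |paths(i,i)| + |paths(j,j)|
    with np.errstate(divide="ignore", invalid="ignore"):
        sim = np.where(denom > 0, 2.0 * commuting_matrix / denom, 0.0)
    return sim

# Toy example: meta-path User-Movie-User from a hypothetical user-movie adjacency matrix.
user_movie = np.array([[1, 1, 0],
                       [1, 0, 1],
                       [0, 1, 1]])
M = user_movie @ user_movie.T                 # commuting matrix along U-M-U
print(pathsim(M))
```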
{
"docid": "68a6edfafb8e7dab899f8ce1f76d311c",
"text": "Networks such as social networks, airplane networks, and citation networks are ubiquitous. The adjacency matrix is often adopted to represent a network, which is usually high dimensional and sparse. However, to apply advanced machine learning algorithms to network data, low-dimensional and continuous representations are desired. To achieve this goal, many network embedding methods have been proposed recently. The majority of existing methods facilitate the local information i.e. local connections between nodes, to learn the representations, while completely neglecting global information (or node status), which has been proven to boost numerous network mining tasks such as link prediction and social recommendation. Hence, it also has potential to advance network embedding. In this paper, we study the problem of preserving local and global information for network embedding. In particular, we introduce an approach to capture global information and propose a network embedding framework LOG, which can coherently model LOcal and Global information. Experimental results demonstrate the ability to preserve global information of the proposed framework. Further experiments are conducted to demonstrate the effectiveness of learned representations of the proposed framework.",
"title": ""
},
{
"docid": "c90082e3eeca2c6c19d45aa21a034079",
"text": "In das umfassende Gebiet Wissensmanagement (WM) wird durch eine allgemeine Einleitung Definitionen, und Grundlagen eingeführt. Die konzeptuelle Ebene im Wissensmanagement mit dem Schwerpunkt Geschäftsprozessorientierte WM-Ansätze wird anschließend näher erläutert. Zur Umsetzung der WM-Ansätze werden Technologien des WMs beschrieben und der Nutzen von Web-Technologien im WM aufgezeigt. Abschließend werden allgemeine Trends sowie der Forschungsschwerpunkt der Abteilung Knowledge Engineering im WM diskutiert.",
"title": ""
},
{
"docid": "945ef1e5215666f6eb13eccaf24e8a56",
"text": "DNA mismatch repair (MMR) proteins are ubiquitous players in a diverse array of important cellular functions. In its role in post-replication repair, MMR safeguards the genome correcting base mispairs arising as a result of replication errors. Loss of MMR results in greatly increased rates of spontaneous mutation in organisms ranging from bacteria to humans. Mutations in MMR genes cause hereditary nonpolyposis colorectal cancer, and loss of MMR is associated with a significant fraction of sporadic cancers. Given its prominence in mutation avoidance and its ability to target a range of DNA lesions, MMR has been under investigation in studies of ageing mechanisms. This review summarizes what is known about the molecular details of the MMR pathway and the role of MMR proteins in cancer susceptibility and ageing.",
"title": ""
},
{
"docid": "9d6add9e5ea5b60adfb3612716159d5b",
"text": "BACKGROUND\nBasal insulin therapy does not stop loss of β-cell function, which is the hallmark of type 2 diabetes mellitus, and thus diabetes control inevitably deteriorates. Insulin degludec is a new, ultra-longacting basal insulin. We aimed to assess efficacy and safety of insulin degludec compared with insulin glargine in patients with type 2 diabetes mellitus.\n\n\nMETHODS\nIn this 52 week, phase 3, open-label, treat-to-target, non-inferiority trial, undertaken at 123 sites in 12 countries, we enrolled adults (aged ≥18 years) with type 2 diabetes mellitus and a glycated haemoglobin (HbA(1c)) of 7·0-10·0% after 3 months or more of any insulin regimen (with or without oral antidiabetic drugs). We randomly allocated eligible participants in a 3:1 ratio to receive once-daily subcutaneous insulin degludec or glargine, stratified by previous insulin regimen, via a central interactive response system. Basal insulin was titrated to a target plasma glucose concentration of 3·9-<5·0 mmol/L self-measured before breakfast. The primary outcome was non-inferiority of degludec to glargine measured by change in HbA(1c) from baseline to week 52 (non-inferiority limit of 0·4%) by ANOVA in the full analysis set. We assessed rates of hypoglycaemia in all treated patients. This study is registered with ClinicalTrials.gov, number NCT00972283.\n\n\nFINDINGS\n744 (99%) of 755 participants randomly allocated degludec and 248 (99%) of 251 allocated glargine were included in the full analysis set (mean age 58·9 years [SD 9·3], diabetes duration 13·5 years [7·3], HbA(1c) 8·3% [0·8], and fasting plasma glucose 9·2 mmol/L [3·1]); 618 (82%) and 211 (84%) participants completed the trial. After 1 year, HbA(1c) decreased by 1·1% in the degludec group and 1·2% in the glargine group (estimated treatment difference [degludec-glargine] 0·08%, 95% CI -0·05 to 0·21), confirming non-inferiority. Rates of overall confirmed hypoglycaemia (plasma glucose <3·1 mmol/L or severe episodes requiring assistance) were lower with degludec than glargine (11·1 vs 13·6 episodes per patient-year of exposure; estimated rate ratio 0·82, 95% CI 0·69 to 0·99; p=0·0359), as were rates of nocturnal confirmed hypoglycaemia (1·4 vs 1·8 episodes per patient-year of exposure; 0·75, 0·58 to 0·99; p=0·0399). Rates of severe hypoglycaemia seemed similar (0·06 vs 0·05 episodes per patient-year of exposure for degludec and glargine) but were too low for assessment of differences. Rates of other adverse events did not differ between groups.\n\n\nINTERPRETATION\nA policy of suboptimum diabetes control to reduce the risk of hypoglycaemia and its consequences in advanced type 2 diabetes mellitus might be unwarranted with newer basal insulins such as degludec, which are associated with lower risks of hypoglycaemia than insulin glargine.\n\n\nFUNDING\nNovo Nordisk.",
"title": ""
},
{
"docid": "60b3460f1ae554c6d24b9b982484d0c1",
"text": "Archaeological remote sensing is not a novel discipline. Indeed, there is already a suite of geoscientific techniques that are regularly used by practitioners in the field, according to standards and best practice guidelines. However, (i) the technological development of sensors for data capture; (ii) the accessibility of new remote sensing and Earth Observation data; and (iii) the awareness that a combination of different techniques can lead to retrieval of diverse and complementary information to characterize landscapes and objects of archaeological value and significance, are currently three triggers stimulating advances in methodologies for data acquisition, signal processing, and the integration and fusion of extracted information. The Special Issue “Remote Sensing and Geosciences for Archaeology” therefore presents a collection of scientific contributions that provides a sample of the state-of-the-art and forefront research in this field. Site discovery, understanding of cultural landscapes, augmented knowledge of heritage, condition assessment, and conservation are the main research and practice targets that the papers published in this Special Issue aim to address.",
"title": ""
},
{
"docid": "753a4af9741cd3fec4e0e5effaf5fc67",
"text": "With the growing volume of online information, recommender systems have been an effective strategy to overcome information overload. The utility of recommender systems cannot be overstated, given their widespread adoption in many web applications, along with their potential impact to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing, owing not only to stellar performance but also to the attractive property of learning feature representations from scratch. The influence of deep learning is also pervasive, recently demonstrating its effectiveness when applied to information retrieval and recommender systems research. The field of deep learning in recommender system is flourishing. This article aims to provide a comprehensive review of recent research efforts on deep learning-based recommender systems. More concretely, we provide and devise a taxonomy of deep learning-based recommendation models, along with a comprehensive summary of the state of the art. Finally, we expand on current trends and provide new perspectives pertaining to this new and exciting development of the field.",
"title": ""
},
{
"docid": "a3ea6fad86fe124aa68e0865b432ab32",
"text": "This paper mainly addressed the kinematics and dynamics simulation of the Slider-Crank mechanism. After proposing a mathematical model for the forward displacement of the slider-crank mechanism, the mathematical models for the forward velocity and acceleration of the slider-crank mechanism are constructed, respectively. According to the theory of statical equilibrium, the mathematical model for the forward dynamics of the slider-crank mechanism is constituted as well based on the acceleration analysis of each component part of this mechanism under consideration. Taking into account of mathematical models for the forward kinematics and dynamics of the slider-crank mechanism, simulation models for the forward kinematics and dynamics of the slider-crank mechanism are constituted in the Matlab/Simulink simulation platform and the forward kinematics and dynamics simulation of the slider-crank mechanism was successfully accomplished based on Matlab/Simulink by which an arduous and complicated mathematical manipulation can be avoided and a lot of computation time can be saved. Examples of the simulation for the forward kinematics and dynamics of a slider-crank mechanism are given to demonstrate the above-mentioned theoretical results.",
"title": ""
},
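To make the slider-crank models above concrete, here is a small numerical sketch of the forward kinematics: the standard displacement formula for a centric slider-crank, with velocity and acceleration obtained by numerical differentiation rather than the paper's closed-form models or its Matlab/Simulink implementation. The crank radius, rod length, and crank speed are arbitrary assumptions.

```python
import numpy as np

def slider_position(theta, r, l):
    """Slider displacement x(theta) for a centric slider-crank:
    crank radius r, connecting-rod length l (l > r)."""
    return r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta))**2)

# Crank driven at constant angular speed omega, so theta = omega * t.
r, l, omega = 0.05, 0.20, 2 * np.pi * 10        # 50 mm crank, 200 mm rod, 10 rev/s
t = np.linspace(0.0, 0.1, 2001)                 # one revolution
x = slider_position(omega * t, r, l)

# Forward velocity and acceleration via numerical differentiation of x(t).
v = np.gradient(x, t)
a = np.gradient(v, t)
print(f"stroke = {x.max() - x.min():.4f} m (expected 2*r = {2*r:.4f} m)")
print(f"peak speed = {np.abs(v).max():.3f} m/s, peak accel = {np.abs(a).max():.1f} m/s^2")
```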
{
"docid": "5bbc5bde4ac8eedfcd2b330584974fbd",
"text": "Clusters, constellations, formations, or `swarms' of small satellites are fast becoming a way to perform scientific and technological missions more affordably. As objectives of these missions become more ambitious, there are still problems in increasing the number of communication windows, supporting multiple signals, and increasing data rates over reliable intersatellite and ground links to Earth. Also, there is a shortage of available frequencies in the 2 m and 70 cm bands due to rapid increase in the number of CubeSats orbiting the Earth - leading to further regulatory issues. Existing communication systems and radio signal processing Intellectual Property (IP) cores cannot fully address these challenges. One of the possible strategies to solve these issues is by equipping satellites with a Software Defined Radio (SDR). SDR is a key area to realise various software implementations which enable an adaptive and reconfigurable communication system without changing any hardware device or feature. This paper proposes a new SDR architecture which utilises a combination of Field Programmable Gate Array (FPGA) and field programmable Radio Frequency (RF) transceiver to solve back-end and front- end challenges and thereby enabling reception of multiple signals or satellites using single user equipment.",
"title": ""
},
{
"docid": "bbb4f7b90ade0ffbf7ba3e598c18a78f",
"text": "In this paper, an analysis of the resistance of multi-track coils in printed circuit board (PCB) implementations, where the conductors have rectangular cross-section, for spiral planar coils is carried out. For this purpose, different analytical losses models for the mentioned conductors have been reviewed. From this review, we conclude that for the range of frequencies, the coil dimensions and the planar configuration typically used in domestic induction heating, the application in which we focus, these analysis are unsatisfactory. Therefore, in this work the resistance of multi-track winding has been calculated by means of finite element analysis (FEA) tool. These simulations provide us some design guidelines that allow us to optimize the design of multi-track coils for domestic induction heating. Furthermore, several prototypes are used to verify the simulated results, both single-turn coils and multi-turn coils.",
"title": ""
},
{
"docid": "9c637dff0539c6a80ecceb8e9fa9d567",
"text": "Learning the stress patterns of English words presents a challenge for L1 speakers from syllable-timed and/or tone languages. Realization of stress contrasts in previous studies has been measured in a variety of ways. This study adapts and extends Pairwise Variability Index (PVI), a method generally used to measure duration as a property of speech rhythm, to compare F0 and amplitude contrasts across L1 and L2 production of stressed and unstressed syllables in English multisyllabic words. L1 North American English and L1 Taiwan-Mandarin English speech data were extracted from the AESOP-ILAS corpus. Results of acoustic analysis show that overall, stress contrasts were realized most robustly by L1 English speakers. A general pattern of contrast underdifferentiation was found in L2 speakers with respect to F0, duration and intensity, with the most striking difference found in F0. These results corroborate our earlier findings on L1 Mandarin speakers’ production of on-focus/post-focus contrasts in their realization of English narrow focus. Taken together, these results demonstrate that underdifferentiation of prosodic contrasts at both the lexical and phrase levels is a major prosodic feature of Taiwan English; future research will determine whether it can also be found in the L2 English of other syllable-timed or tone language speakers.",
"title": ""
},
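Since the study above adapts the Pairwise Variability Index to F0 and amplitude as well as duration, the following sketch shows the usual normalized PVI computation over a sequence of per-syllable measurements. The example syllable values are invented for illustration and do not come from the AESOP-ILAS corpus.

```python
import numpy as np

def npvi(values):
    """Normalized Pairwise Variability Index over a sequence of per-syllable
    measurements (duration, mean F0, or intensity)."""
    v = np.asarray(values, dtype=float)
    pair_means = (v[:-1] + v[1:]) / 2.0
    return 100.0 * np.mean(np.abs(v[:-1] - v[1:]) / pair_means)

# Alternating stressed/unstressed syllable durations (s): large contrast.
print(npvi([0.22, 0.09, 0.25, 0.08, 0.21, 0.10]))
# Near-equal syllable durations: small contrast (underdifferentiation).
print(npvi([0.15, 0.14, 0.16, 0.15, 0.14, 0.15]))
```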
{
"docid": "72e6d897e8852fca481d39237cf04e36",
"text": "CONTEXT\nPrimary care physicians report high levels of distress, which is linked to burnout, attrition, and poorer quality of care. Programs to reduce burnout before it results in impairment are rare; data on these programs are scarce.\n\n\nOBJECTIVE\nTo determine whether an intensive educational program in mindfulness, communication, and self-awareness is associated with improvement in primary care physicians' well-being, psychological distress, burnout, and capacity for relating to patients.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nBefore-and-after study of 70 primary care physicians in Rochester, New York, in a continuing medical education (CME) course in 2007-2008. The course included mindfulness meditation, self-awareness exercises, narratives about meaningful clinical experiences, appreciative interviews, didactic material, and discussion. An 8-week intensive phase (2.5 h/wk, 7-hour retreat) was followed by a 10-month maintenance phase (2.5 h/mo).\n\n\nMAIN OUTCOME MEASURES\nMindfulness (2 subscales), burnout (3 subscales), empathy (3 subscales), psychosocial orientation, personality (5 factors), and mood (6 subscales) measured at baseline and at 2, 12, and 15 months.\n\n\nRESULTS\nOver the course of the program and follow-up, participants demonstrated improvements in mindfulness (raw score, 45.2 to 54.1; raw score change [Delta], 8.9; 95% confidence interval [CI], 7.0 to 10.8); burnout (emotional exhaustion, 26.8 to 20.0; Delta = -6.8; 95% CI, -4.8 to -8.8; depersonalization, 8.4 to 5.9; Delta = -2.5; 95% CI, -1.4 to -3.6; and personal accomplishment, 40.2 to 42.6; Delta = 2.4; 95% CI, 1.2 to 3.6); empathy (116.6 to 121.2; Delta = 4.6; 95% CI, 2.2 to 7.0); physician belief scale (76.7 to 72.6; Delta = -4.1; 95% CI, -1.8 to -6.4); total mood disturbance (33.2 to 16.1; Delta = -17.1; 95% CI, -11 to -23.2), and personality (conscientiousness, 6.5 to 6.8; Delta = 0.3; 95% CI, 0.1 to 5 and emotional stability, 6.1 to 6.6; Delta = 0.5; 95% CI, 0.3 to 0.7). Improvements in mindfulness were correlated with improvements in total mood disturbance (r = -0.39, P < .001), perspective taking subscale of physician empathy (r = 0.31, P < .001), burnout (emotional exhaustion and personal accomplishment subscales, r = -0.32 and 0.33, respectively; P < .001), and personality factors (conscientiousness and emotional stability, r = 0.29 and 0.25, respectively; P < .001).\n\n\nCONCLUSIONS\nParticipation in a mindful communication program was associated with short-term and sustained improvements in well-being and attitudes associated with patient-centered care. Because before-and-after designs limit inferences about intervention effects, these findings warrant randomized trials involving a variety of practicing physicians.",
"title": ""
},
{
"docid": "cc8c46399664594cdaa1bfc6c480a455",
"text": "INTRODUCTION\nPatients will typically undergo awake surgery for permanent implantation of spinal cord stimulation (SCS) in an attempt to optimize electrode placement using patient feedback about the distribution of stimulation-induced paresthesia. The present study compared efficacy of first-time electrode placement under awake conditions with that of neurophysiologically guided placement under general anesthesia.\n\n\nMETHODS\nA retrospective review was performed of 387 SCS surgeries among 259 patients which included 167 new stimulator implantation to determine whether first time awake surgery for placement of spinal cord stimulators is preferable to non-awake placement.\n\n\nRESULTS\nThe incidence of device failure for patients implanted using neurophysiologically guided placement under general anesthesia was one-half that for patients implanted awake (14.94% vs. 29.7%).\n\n\nCONCLUSION\nNon-awake surgery is associated with fewer failure rates and therefore fewer re-operations, making it a viable alternative. Any benefits of awake implantation should carefully be considered in the future.",
"title": ""
},
{
"docid": "43f1cc712b3803ef7ac8273136dbe75d",
"text": "Improved understanding of the anatomy and physiology of the aging face has laid the foundation for adopting an earlier and more comprehensive approach to facial rejuvenation, shifting the focus from individual wrinkle treatment and lift procedures to a holistic paradigm that considers the entire face and its structural framework. This article presents an overview of a comprehensive method to address facial aging. The key components to the reported strategy for improving facial cosmesis include, in addition to augmentation of volume loss, protection with sunscreens and antioxidants; promotion of epidermal cell turnover with techniques such as superficial chemical peels; microlaser peels and microdermabrasion; collagen stimulation and remodeling via light, ultrasound, or radiofrequency (RF)-based methods; and muscle control with botulinum toxin. For the treatment of wrinkles and for the augmentation of pan-facial dermal lipoatrophy, several types of fillers and volumizers including hyaluronic acid (HA), autologous fat, and calcium hydroxylapatite (CaHA) or injectable poly-l-lactic acid (PLLA) are available. A novel bimodal, trivector technique to restore structural facial volume loss that combines supraperiosteal depot injections of volume-depleted fat pads and dermal/subcutaneous injections for panfacial lipoatrophy with PLLA is presented. The combination of treatments with fillers; toxins; light-, sound-, and RF-based technologies; and surgical procedures may help to forestall the facial aging process and provide more natural results than are possible with any of these techniques alone. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .",
"title": ""
}
] |
scidocsrr
|
71db204eb214c9b2070918b5ebcbec69
|
Illustrative Language Understanding: Large-Scale Visual Grounding with Image Search
|
[
{
"docid": "3355c37593ee9ef1b2ab29823ca8c1d4",
"text": "The paper overviews the 11th evaluation campaign organized by the IWSLT workshop. The 2014 evaluation offered multiple tracks on lecture transcription and translation based on the TED Talks corpus. In particular, this year IWSLT included three automatic speech recognition tracks, on English, German and Italian, five speech translation tracks, from English to French, English to German, German to English, English to Italian, and Italian to English, and five text translation track, also from English to French, English to German, German to English, English to Italian, and Italian to English. In addition to the official tracks, speech and text translation optional tracks were offered, globally involving 12 other languages: Arabic, Spanish, Portuguese (B), Hebrew, Chinese, Polish, Persian, Slovenian, Turkish, Dutch, Romanian, Russian. Overall, 21 teams participated in the evaluation, for a total of 76 primary runs submitted. Participants were also asked to submit runs on the 2013 test set (progress test set), in order to measure the progress of systems with respect to the previous year. All runs were evaluated with objective metrics, and submissions for two of the official text translation tracks were also evaluated with human post-editing.",
"title": ""
}
] |
[
{
"docid": "57104614eb2ff83893f05fbb2ff65a7d",
"text": "We have developed a novel assembly task partner robot to support workers in their task. This system, PaDY (in-time Parts/tools Delivery to You robot), delivers parts and tools to a worker by recognizing the worker's behavior in the car production line; thus, improving the efficiency of the work by reducing the worker's physical workload for picking parts and tools. For this purpose, it is necessary to plan the trajectory of the robot before the worker moves to the next location for another assembling task. First a prediction method for the worker's trajectory using a Markov model for a discretized work space into cells is proposed, then motion planning method is proposed using the predicted worker's trajectory and a mixture Gaussian distribution for each area corresponding to each procedure of the work process in the automobile coordinate system. Experimental results illustrate the validity of the proposed motion planning method.",
"title": ""
},
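The PaDY entry above predicts the worker's next location with a Markov model over a discretized workspace. A minimal sketch of that idea, assuming integer cell labels and first-order transitions (the paper's actual model and motion planner are more elaborate), might look like this:

```python
import numpy as np

def fit_transition_matrix(cell_sequences, n_cells):
    """Estimate a first-order Markov transition matrix from observed
    sequences of discretized workspace cells (integers in [0, n_cells))."""
    counts = np.ones((n_cells, n_cells))          # add-one smoothing
    for seq in cell_sequences:
        for cur, nxt in zip(seq[:-1], seq[1:]):
            counts[cur, nxt] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next_cell(transition, current_cell):
    """Most likely next cell given the current cell."""
    return int(np.argmax(transition[current_cell]))

# Toy workspace with 4 cells and a few observed worker trajectories.
trajectories = [[0, 1, 2, 3], [0, 1, 2, 2, 3], [0, 1, 3]]
P = fit_transition_matrix(trajectories, n_cells=4)
print(predict_next_cell(P, current_cell=1))       # most likely cell after cell 1
```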
{
"docid": "24a1d68957279ec9120b4f2a24e9d887",
"text": "The idea of applying machine learning(ML) to solve problems in security domains is almost 3 decades old. As information and communications grow more ubiquitous and more data become available, many security risks arise as well as appetite to manage and mitigate such risks. Consequently, research on applying and designing ML algorithms and systems for security has grown fast, ranging from intrusion detection systems(IDS) and malware classification to security policy management(SPM) and information leak checking. In this paper, we systematically study the methods, algorithms, and system designs in academic publications from 2008-2015 that applied ML in security domains. 98% of the surveyed papers appeared in the 6 highest-ranked academic security conferences and 1 conference known for pioneering ML applications in security. We examine the generalized system designs, underlying assumptions, measurements, and use cases in active research. Our examinations lead to 1) a taxonomy on ML paradigms and security domains for future exploration and exploitation, and 2) an agenda detailing open and upcoming challenges. Based on our survey, we also suggest a point of view that treats security as a game theory problem instead of a batch-trained ML problem.",
"title": ""
},
{
"docid": "068e4db45998b1b99c36ffb8684c66e8",
"text": "f ( HE NUMBER OF people worldwide with heart failure (HF) is increasing at an alarming pace. In the United States lone, there are approximately 5.3 million people who have HF, ith a prevalence estimated at 10 per 1,000 in people over the age f 65.1 It is now estimated that there are 660,000 new cases of HF iagnosed every year for people over 45 years of age. In 2008, here were more than 1 million hospital admissions for HF at a ost of $34.8 billion. Currently, preventative measures, optimal edical therapy, and heart transplantation are not effectively reucing the overall morbidity and mortality of this syndrome. The American College of Cardiology/American Heart Assocition (ACC/AHA) have classified HF in 4 stages based on the rogression of the disease (Table 1).2,3 Early in the course of the isease (stages A and B), symptoms are absent or mild, but the atients are at high risk of developing symptomatic or refractory isease. As the disease progresses through stage C, ventricular unction is maintained by adrenergic stimulation, activation of enin-angiotensin-aldosterone, and other neurohumoral and cytoine systems.4,5 These compensatory mechanisms become less ffective over time, and cardiac function deteriorates to the point here patients have marked symptoms at rest (stage D). The CC/AHA-recommended therapeutic options for patients with tage D symptoms are continuous inotropic support, heart translantation, mechanical circulatory support, or hospice care. Standard HF medical therapies such as angiotensin-convertng enzyme inhibitors, -blockers, diuretics, inotropic agents, nd antiarrhythmics may relieve symptoms, but the mortality ate remains unaffected. Optimal medical therapy does not halt he progression toward stage D HF symptoms, and when this ccurs, there is a greater than 75% 2-year mortality risk, with urgical intervention being the only effective treatment. Cariac transplantation is an effective therapy for terminal HF and s associated with excellent 1-year survival (93%), 5-year surival (88%), and functional capacity.6 However, there are aproximately 2,200 donors available for as many as 100,000 atients with advanced-stage HF.7 Moreover, donor hearts are",
"title": ""
},
{
"docid": "578e8c5d2ed1fd41bd2c869eb842f305",
"text": "We are investigating the magnetic resonance imaging characteristics of magnetic nanoparticles (MNPs) that consist of an iron-oxide magnetic core coated with oleic acid (OA), then stabilized with a pluronic or tetronic block copolymer. Since pluronics and tetronics vary structurally, and also in the ratio of hydrophobic (poly[propylene oxide]) and hydrophilic (poly[ethylene oxide]) segments in the polymer chain and in molecular weight, it was hypothesized that their anchoring to the OA coating around the magnetic core could significantly influence the physical properties of MNPs, their interactions with biological environment following intravenous administration, and ability to localize to tumors. The amount of block copolymer associated with MNPs was seen to depend upon their molecular structures and influence the characteristics of MNPs. Pluronic F127-modified MNPs demonstrated sustained and enhanced contrast in the whole tumor, whereas that of Feridex IV was transient and confined to the tumor periphery. In conclusion, our pluronic F127-coated MNPs, which can also be loaded with anticancer agents for drug delivery, can be developed as an effective cancer theranostic agent, i.e. an agent with combined drug delivery and imaging properties.",
"title": ""
},
{
"docid": "cb59c880b3848b7518264f305cfea32a",
"text": "Leakage current reduction is crucial for the transformerless photovoltaic inverters. The conventional three-phase current source H6 inverter suffers from the large leakage current, which restricts its application to transformerless PV systems. In order to overcome the limitations, a new three-phase current source H7 (CH7) inverter is proposed in this paper. Only one additional Insulated Gate Bipolar Transistor is needed, but the leakage current can be effectively suppressed with a new space vector modulation (SVM). Finally, the experimental tests are carried out on the proposed CH7 inverter, and the experimental results verify the effectiveness of the proposed topology and SVM method.",
"title": ""
},
{
"docid": "69f2773d7901ac9d477604a85fb6a591",
"text": "We propose an expert-augmented actor-critic algorithm, which we evaluate on two environments with sparse rewards: Montezuma’s Revenge and a demanding maze from the ViZDoom suite. In the case of Montezuma’s Revenge, an agent trained with our method achieves very good results consistently scoring above 27,000 points (in many experiments beating the first world). With an appropriate choice of hyperparameters, our algorithm surpasses the performance of the expert data. In a number of experiments, we have observed an unreported bug in Montezuma’s Revenge which allowed the agent to score more than 800, 000 points.",
"title": ""
},
{
"docid": "92d3bb6142eafc9dc9f82ce6a766941a",
"text": "The classical Rough Set Theory (RST) always generates too many rules, making it difficult for decision makers to choose a suitable rule. In this study, we use two processes (pre process and post process) to select suitable rules and to explore the relationship among attributes. In pre process, we propose a pruning process to select suitable rules by setting up a threshold on the support object of decision rules, to thereby solve the problem of too many rules. The post process used the formal concept analysis from these suitable rules to explore the attribute relationship and the most important factors affecting decision making for choosing behaviours of personal investment portfolios. In this study, we explored the main concepts (characteristics) for the conservative portfolio: the stable job, less than 4 working years, and the gender is male; the moderate portfolio: high school education, the monthly salary between NT$30,001 (US$1000) and NT$80,000 (US$2667), the gender is male; and the aggressive portfolio: the monthly salary between NT$30,001 (US$1000) and NT$80,000 (US$2667), less than 4 working years, and a stable job. The study result successfully explored the most important factors affecting the personal investment portfolios and the suitable rules that can help decision makers. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0bdab8a45e8e2cf3c0d47cd94a0cc52c",
"text": "In this paper we study the suitability and performance of conventional time delay of arrival estimation for humanoid robots. Moving away from simulated environments, we look at the influence of real-world robot's shape on the sound source localization. We present a TDOA/GCC based sound source localization method that successfully addresses this influence by utilizing a pre-measured set of TDOAs. The measuring methodology and important aspects of the implementation are thoroughly presented. Finally, an evaluation is made with the humanoid robot Nao. The experimental results are presented and discussed. Key-Words: Microphone arrays, time delay of arrival, sound source localization, generalized cross correlation.",
"title": ""
},
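For the TDOA/GCC-based localization described above, the PHAT-weighted generalized cross-correlation is the usual way to estimate the inter-microphone delay. Below is an illustrative NumPy sketch of GCC-PHAT delay estimation, not the authors' implementation; the sampling rate and synthetic signals are assumptions used only for the self-check.

```python
import numpy as np

def gcc_phat_delay(sig, ref, fs, max_tau=None):
    """Estimate the time delay between two microphone channels with the
    PHAT-weighted generalized cross-correlation (positive if sig lags ref)."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12              # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(fs)

# Self-check: the second channel lags the first by 25 samples.
fs = 16000
x = np.random.randn(4096)
y = np.concatenate((np.zeros(25), x))[:4096]
print(gcc_phat_delay(y, x, fs) * fs)            # close to +25
```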
{
"docid": "899422014472e5b31f3935bd3d5452fd",
"text": "The subject-oriented modelling approach [5] significally differs from the classic Petri net based approach of many business process modeling languages like EPC [9], Business Process Model and Notation (BPMN) [11], and also Yet Another Workflow Language (YAWL) [10]. In this work, we compare the two approaches by modeling a case study called \"Procure to Pay\"[3], a typical business process where some equipment for a construction site is rented and finally paid. The case study is not only modelled but also automated using the Metasonic Suite for the subject-oriented and YAWL for the Petri net based approach.",
"title": ""
},
{
"docid": "1404323d435b1b7999feda249f817f36",
"text": "The Process of Encryption and Decryption is performed by using Symmetric key cryptography and public key cryptography for Secure Communication. In this paper, we studied that how the process of Encryption and Decryption is perform in case of Symmetric key and public key cryptography using AES and DES algorithms and modified RSA algorithm.",
"title": ""
},
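To make the public-key half of the abstract above concrete, here is a textbook RSA key generation, encryption, and decryption sketch on tiny demo primes. This is not the paper's modified RSA algorithm, uses no padding, and is not secure for real use; it only illustrates the modular arithmetic involved.

```python
# Textbook RSA on tiny primes, for illustration only.
def egcd(a, b):
    """Extended Euclidean algorithm: returns (gcd, x, y) with a*x + b*y = gcd."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    """Modular inverse of a modulo m."""
    g, x, _ = egcd(a, m)
    if g != 1:
        raise ValueError("no modular inverse")
    return x % m

# Key generation with small demo primes.
p, q = 61, 53
n = p * q                       # public modulus
phi = (p - 1) * (q - 1)
e = 17                          # public exponent, coprime with phi
d = modinv(e, phi)              # private exponent

msg = 42
cipher = pow(msg, e, n)         # encryption: c = m^e mod n
plain = pow(cipher, d, n)       # decryption: m = c^d mod n
print(cipher, plain)            # plain == 42
```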
{
"docid": "9743b6452df2f5d5e2834c397076f7b7",
"text": "This paper deals with the application of a well-known neural network technique, multi-layer back-propagation (BP) neural network, in financial data mining. A modified neural network forecasting model is presented, and an intelligent mining system is developed. The system can forecast the buying and selling signs according to the prediction of future trends to stock market, and provide decision-making for stock investors. The simulation result of seven years to Shanghai Composite Index shows that the return achieved by this mining sys-tem is about three times as large as that achieved by the buy and hold strategy, so it is advantageous to apply neural networks to forecast financial time series, the different investors could benefit from it. Keywords—data mining, neural network, stock forecasting.",
"title": ""
},
{
"docid": "cf68e7b27b45c3e0f779471880d07846",
"text": "This paper presents a new switching strategy for pulse width modulation (PWM) power converters. Since the proposed strategy uses independent on/off switching action of the upper or lower arm according to the polarity of the current, the dead time is not needed except instant of current polarity change. Therefore, it is not necessary to compensate the dead time effect and the possibility of arm short is strongly eliminated. The current control of PWM power converters can easily adopt the proposed switching strategy by using the polarity information of the reference current instead of the real current, thus eliminating the problems that commonly arise from real current detection. In order to confirm the usefulness of the proposed switching strategy, experimental tests were done using a single-phase inverter with passive loads, a three-phase inverter for induction motor drives, a three-phase ac/dc PWM converter, a three-phase active power filter, and a class-D amplifier, the results of which are presented in this paper",
"title": ""
},
{
"docid": "d82c85205acaabab61ff720675418a20",
"text": "We introduce a new system for automatic image content removal and inpainting. Unlike traditional inpainting algorithms, which require advance knowledge of the region to be filled in, our system automatically detects the area to be removed and infilled. Region segmentation and inpainting are performed jointly in a single pass. In this way, potential segmentation errors are more naturally alleviated by the inpainting module. The system is implemented as an encoder-decoder architecture, with two decoder branches, one tasked with segmentation of the foreground region, the other with inpainting. The encoder and the two decoder branches are linked via neglect nodes, which guide the inpainting process in selecting which areas need reconstruction. The whole model is trained using a conditional GAN strategy. Comparative experiments show that our algorithm outperforms state-of-the-art inpainting techniques (which, unlike our system, do not segment the input image and thus must be aided by an external segmentation module.)",
"title": ""
},
{
"docid": "740b783d840a706992dc6977a918f1f1",
"text": "Inadequate curriculum for software engineering is considered to be one of the most common software risks. A number of solutions, on improving Software Engineering Education (SEE) have been reported in literature but there is a need to collectively present these solutions at one place. We have performed a mapping study to present a broad view of literature; published on improving the current state of SEE. Our aim is to give academicians, practitioners and researchers an international view of the current state of SEE. Our study has identified 70 primary studies that met our selection criteria, which we further classified and categorized in a well-defined Software Engineering educational framework. We found that the most researched category within the SE educational framework is Innovative Teaching Methods whereas the least amount of research was found in Student Learning and Assessment category. Our future work is to conduct a Systematic Literature Review on SEE. Keywords—Mapping Study, Software Engineering, Software Engineering Education, Literature Survey.",
"title": ""
},
{
"docid": "19f1f1156ca9464759169dd2d4005bf6",
"text": "We first consider the problem of partitioning the edges of a graph ~ into bipartite cliques such that the total order of the cliques is minimized, where the order of a clique is the number of vertices in it. It is shown that the problem is NP-complete. We then prove the existence of a partition of small total order in a sufficiently dense graph and devise an efilcient algorithm to compute such a partition. It turns out that our algorithm exhibits a trade-off between the total order of the partition and the running time. Next, we define the notion of a compression of a graph ~ and use the result on graph partitioning to efficiently compute an optimal compression for graphs of a given size. An interesting application of the graph compression result arises from the fact that several graph algorithms can be adapted to work with the compressed rep~esentation of the input graph, thereby improving the bound on their running times particularly on dense graphs. This makes use of the trade-off result we obtain from our partitioning algorithm. The algorithms analyzed include those for matchings, vertex connectivity, edge connectivity and shortest paths. In each case, we improve upon the running times of the best-known algorithms for these problems.",
"title": ""
},
{
"docid": "790895861cb5bba78513d26c1eb30e4c",
"text": "This paper develops an integrated approach, combining quality function deployment (QFD), fuzzy set theory, and analytic hierarchy process (AHP) approach, to evaluate and select the optimal third-party logistics service providers (3PLs). In the approach, multiple evaluating criteria are derived from the requirements of company stakeholders using a series of house of quality (HOQ). The importance of evaluating criteria is prioritized with respect to the degree of achieving the stakeholder requirements using fuzzy AHP. Based on the ranked criteria, alternative 3PLs are evaluated and compared with each other using fuzzy AHP again to make an optimal selection. The effectiveness of proposed approach is demonstrated by applying it to a Hong Kong based enterprise that supplies hard disk components. The proposed integrated approach outperforms the existing approaches because the outsourcing strategy and 3PLs selection are derived from the corporate/business strategy.",
"title": ""
},
{
"docid": "b24fe8a5357af646dd2706c62a46eb25",
"text": "This paper presents an intelligent adaptive system for the integration of haptic output in graphical user interfaces. The system observes the user’s actions, extracts meaningful features, and generates a user and application specific model. When the model is sufficiently detailled, it is used to predict the widget which is most likely to be used next by the user. Upon entering this widget, two magnets in a specialized mouse are activated to stop the movement, so target acquisition becomes easier and more comfortable. Besides the intelligent control system, we will present several methods to generate haptic cues which might be integrated in multimodal user interfaces in the future.",
"title": ""
},
{
"docid": "462a0746875e35116f669b16d851f360",
"text": "We previously have applied deep autoencoder (DAE) for noise reduction and speech enhancement. However, the DAE was trained using only clean speech. In this study, by using noisyclean training pairs, we further introduce a denoising process in learning the DAE. In training the DAE, we still adopt greedy layer-wised pretraining plus fine tuning strategy. In pretraining, each layer is trained as a one-hidden-layer neural autoencoder (AE) using noisy-clean speech pairs as input and output (or transformed noisy-clean speech pairs by preceding AEs). Fine tuning was done by stacking all AEs with pretrained parameters for initialization. The trained DAE is used as a filter for speech estimation when noisy speech is given. Speech enhancement experiments were done to examine the performance of the trained denoising DAE. Noise reduction, speech distortion, and perceptual evaluation of speech quality (PESQ) criteria are used in the performance evaluations. Experimental results show that adding depth of the DAE consistently increase the performance when a large training data set is given. In addition, compared with a minimum mean square error based speech enhancement algorithm, our proposed denoising DAE provided superior performance on the three objective evaluations.",
"title": ""
},
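As a minimal sketch of the denoising-autoencoder idea described above, the PyTorch snippet below trains a single noisy-to-clean layer on random stand-in spectra. The paper stacks several such layers with greedy pretraining and fine-tuning; the feature dimension, hidden size, and hyperparameters here are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """One-hidden-layer denoising autoencoder trained on (noisy, clean) feature pairs."""
    def __init__(self, dim_in=257, dim_hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(dim_hidden, dim_in)

    def forward(self, noisy):
        return self.decoder(self.encoder(noisy))

model = DenoisingAE()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data: random "clean" spectra corrupted with additive noise.
clean = torch.rand(64, 257)
noisy = clean + 0.1 * torch.randn(64, 257)

for _ in range(100):                      # minimal training loop
    optim.zero_grad()
    loss = loss_fn(model(noisy), clean)   # reconstruct clean from noisy input
    loss.backward()
    optim.step()
print(float(loss))
```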
{
"docid": "1d3b2a5906d7db650db042db9ececed1",
"text": "Music consists of precisely patterned sequences of both movement and sound that engage the mind in a multitude of experiences. We move in response to music and we move in order to make music. Because of the intimate coupling between perception and action, music provides a panoramic window through which we can examine the neural organization of complex behaviors that are at the core of human nature. Although the cognitive neuroscience of music is still in its infancy, a considerable behavioral and neuroimaging literature has amassed that pertains to neural mechanisms that underlie musical experience. Here we review neuroimaging studies of explicit sequence learning and temporal production—findings that ultimately lay the groundwork for understanding how more complex musical sequences are represented and produced by the brain. These studies are also brought into an existing framework concerning the interaction of attention and time-keeping mechanisms in perceiving complex patterns of information that are distributed in time, such as those that occur in music.",
"title": ""
}
] |
scidocsrr
|
92a4cd0463da8ba8b11b8ddc5e4576c6
|
Project management and IT governance. Integrating PRINCE2 and ISO 38500
|
[
{
"docid": "70b9aad14b2fc75dccab0dd98b3d8814",
"text": "This paper describes the first phase of an ongoing program of research into theory and practice of IT governance. It conceptually explores existing IT governance literature and reveals diverse definitions of IT governance, that acknowledge its structures, control frameworks and/or processes. The definitions applied within the literature and the nature and breadth of discussion demonstrate a lack of a clear shared understanding of the term IT governance. This lack of clarity has the potential to confuse and possibly impede useful research in the field and limit valid cross-study comparisons of results. Using a content analysis approach, a number of existing diverse definitions are moulded into a \"definitive\" definition of IT governance and its usefulness is critically examined. It is hoped that this exercise will heighten awareness of the \"broad reach\" of the IT governance concept to assist researchers in the development of research projects and more effectively guide practitioners in the overall assessment of IT governance.",
"title": ""
},
{
"docid": "2eff84064f1d9d183eddc7e048efa8e6",
"text": "Rupinder Kaur, Dr. Jyotsna Sengupta Abstract— The software process model consists of a set of activities undertaken to design, develop and maintain software systems. A variety of software process models have been designed to structure, describe and prescribe the software development process. The software process models play a very important role in software development, so it forms the core of the software product. Software project failure is often devastating to an organization. Schedule slips, buggy releases and missing features can mean the end of the project or even financial ruin for a company. Oddly, there is disagreement over what it means for a project to fail. In this paper, discussion is done on current process models and analysis on failure of software development, which shows the need of new research.",
"title": ""
}
] |
[
{
"docid": "bc49930fa967b93ed1e39b3a45237652",
"text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).",
"title": ""
},
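The biclustering entry above looks for subsets of genes showing consistent patterns over subsets of conditions and compares against the algorithm of Cheng and Church (2000). The sketch below computes the mean squared residue score used in that baseline, to make "consistent pattern" concrete; it is not the graph-theoretic and statistical method proposed in the paper, and the toy matrices are assumptions.

```python
import numpy as np

def mean_squared_residue(expr, rows, cols):
    """Cheng & Church (2000) homogeneity score of a candidate bicluster:
    lower values mean the submatrix is closer to a consistent (additive) pattern."""
    sub = expr[np.ix_(rows, cols)]
    row_means = sub.mean(axis=1, keepdims=True)
    col_means = sub.mean(axis=0, keepdims=True)
    overall = sub.mean()
    residue = sub - row_means - col_means + overall
    return float((residue ** 2).mean())

# A perfectly additive (shifted) pattern scores 0; random data scores higher.
rng = np.random.default_rng(0)
additive = (np.arange(4)[:, None] + np.arange(5)[None, :]).astype(float)
print(mean_squared_residue(additive, np.arange(4), np.arange(5)))     # 0.0
print(mean_squared_residue(rng.random((20, 30)), np.arange(5), np.arange(6)))
```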
{
"docid": "d029ce85b17e37abc93ab704fbef3a98",
"text": "Video super-resolution (SR) aims to generate a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts. The generation of accurate correspondence plays a significant role in video SR. It is demonstrated by traditional video SR methods that simultaneous SR of both images and optical flows can provide accurate correspondences and better SR results. However, LR optical flows are used in existing deep learning based methods for correspondence generation. In this paper, we propose an endto-end trainable video SR framework to super-resolve both images and optical flows. Specifically, we first propose an optical flow reconstruction network (OFRnet) to infer HR optical flows in a coarse-to-fine manner. Then, motion compensation is performed according to the HR optical flows. Finally, compensated LR inputs are fed to a superresolution network (SRnet) to generate the SR results. Extensive experiments demonstrate that HR optical flows provide more accurate correspondences than their LR counterparts and improve both accuracy and consistency performance. Comparative results on the Vid4 and DAVIS10 datasets show that our framework achieves the stateof-the-art performance. The codes will be released soon at: https://github.com/LongguangWang/SOF-VSR-SuperResolving-Optical-Flow-for-Video-Super-Resolution-.",
"title": ""
},
{
"docid": "9b1a4e27c5d387ef091fdb9140eb8795",
"text": "In this study I investigated the relation between normal heterosexual attraction and autogynephilia (a man's propensity to be sexually aroused by the thought or image of himself as a woman). The subjects were 427 adult male outpatients who reported histories of dressing in women's garments, of feeling like women, or both. The data were questionnaire measures of autogynephilia, heterosexual interest, and other psychosexual variables. As predicted, the highest levels of autogynephilia were observed at intermediate rather than high levels of heterosexual interest; that is, the function relating these variables took the form of an inverted U. This finding supports the hypothesis that autogynephilia is a misdirected type of heterosexual impulse, which arises in association with normal heterosexuality but also competes with it.",
"title": ""
},
{
"docid": "c3c3add0c42f3b98962c4682a72b1865",
"text": "This paper compares to investigate output characteristics according to a conventional and novel stator structure of axial flux permanent magnet (AFPM) motor for cooling fan drive system. Segmented core of stator has advantages such as easy winding and fast manufacture speed. However, a unit cost increase due to cutting off tooth tip to constant slot width. To solve the problem, this paper proposes a novel stator structure with three-step segmented core. The characteristics of AFPM were analyzed by time-stepping three dimensional finite element analysis (3D FEA) in two stator models, when stator cores are cutting off tooth tips from rectangular core and three step segmented core. Prototype motors were manufactured based on analysis results, and were tested as a motor.",
"title": ""
},
{
"docid": "3e5041c6883ce6ab59234ed2c8c995b7",
"text": "Self-amputation of the penis treated immediately: case report and review of the literature. Self-amputation of the penis is rare in urological practice. It occurs more often in a context psychotic disease. It can also be secondary to alcohol or drugs abuse. Treatment and care vary according on the severity of the injury, the delay of consultation and the patient's mental state. The authors report a case of self-amputation of the penis in an alcoholic context. The authors analyze the etiological and urological aspects of this trauma.",
"title": ""
},
{
"docid": "1fd51acb02bafb3ea8f5678581a873a4",
"text": "How often has this scenario happened? You are driving at night behind a car that has bright light-emitting diode (LED) taillights. When looking directly at the taillights, the light is not blurry, but when glancing at other objects, a trail of lights appears, known as a phantom array. The reason for this trail of lights might not be what you expected: it is not due to glare, degradation of eyesight, or astigmatism. The culprit may be the flickering of the LED lights caused by pulse-width modulating (PWM) drive circuitry. Actually, many LED taillights flicker on and off at frequencies between 200 and 500 Hz, which is too fast to notice when the eye is not in rapid motion. However, during a rapid eye movement (saccade), the images of the LED lights appear in different positions on the retina, causing a trail of images to be perceived (Figure 1). This disturbance of vision may not occur with all LED taillights because some taillights keep a constant current through the LEDs. However, when there is a PWM current through the LEDs, the biological effect of the light flicker may become noticeable during the eye saccade.",
"title": ""
},
{
"docid": "c60957f1bf90450eb947d2b0ab346ffb",
"text": "Hashing-based approximate nearest neighbor (ANN) search in huge databases has become popular due to its computational and memory efficiency. The popular hashing methods, e.g., Locality Sensitive Hashing and Spectral Hashing, construct hash functions based on random or principal projections. The resulting hashes are either not very accurate or are inefficient. Moreover, these methods are designed for a given metric similarity. On the contrary, semantic similarity is usually given in terms of pairwise labels of samples. There exist supervised hashing methods that can handle such semantic similarity, but they are prone to overfitting when labeled data are small or noisy. In this work, we propose a semi-supervised hashing (SSH) framework that minimizes empirical error over the labeled set and an information theoretic regularizer over both labeled and unlabeled sets. Based on this framework, we present three different semi-supervised hashing methods, including orthogonal hashing, nonorthogonal hashing, and sequential hashing. Particularly, the sequential hashing method generates robust codes in which each hash function is designed to correct the errors made by the previous ones. We further show that the sequential learning paradigm can be extended to unsupervised domains where no labeled pairs are available. Extensive experiments on four large datasets (up to 80 million samples) demonstrate the superior performance of the proposed SSH methods over state-of-the-art supervised and unsupervised hashing techniques.",
"title": ""
},
{
"docid": "f25c0b1fef38b7322197d61dd5dcac41",
"text": "Hepatocellular carcinoma (HCC) is one of the most common malignancies worldwide and one of the few malignancies with an increasing incidence in the USA. While the relationship between HCC and its inciting risk factors (e.g., hepatitis B, hepatitis C and alcohol liver disease) is well defined, driving genetic alterations are still yet to be identified. Clinically, HCC tends to be hypervascular and, for that reason, transarterial chemoembolization has proven to be effective in managing many patients with localized disease. More recently, angiogenesis has been targeted effectively with pharmacologic strategies, including monoclonal antibodies against VEGF and the VEGF receptor, as well as small-molecule kinase inhibitors of the VEGF receptor. Targeting angiogenesis with these approaches has been validated in several different solid tumors since the initial approval of bevacizumab for advanced colon cancer in 2004. In HCC, only sorafenib has been shown to extend survival in patients with advanced HCC and has opened the door for other anti-angiogenic strategies. Here, we will review the data supporting the targeting of the VEGF axis in HCC and the preclinical and early clinical development of bevacizumab.",
"title": ""
},
{
"docid": "291a1927343797d72f50134b97f73d88",
"text": "This paper proposes a half-rate single-loop reference-less binary CDR that operates from 8.5 Gb/s to 12.1 Gb/s (36% capture range). The high capture range is made possible by adding a novel frequency detection mechanism which limits the magnitude of the phase error between the input data and the VCO clock. The proposed frequency detector produces three phases of the data, and feeds into the phase detector the data phase that minimizes the CDR phase error. This frequency detector, implemented within a 10 Gb/s CDR in Fujitsu's 65 nm CMOS, consumes 11 mW and improves the capture range by up to 6 × when it is activated.",
"title": ""
},
{
"docid": "a6c3a4dfd33eb902f5338f7b8c7f78e5",
"text": "A grey wolf optimizer for modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural networks architectures to perform human recognition, and to prove its effectiveness benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists in finding optimal parameters of its architecture; these parameters are the number of subgranules, percentage of data for the training phase, learning algorithm, goal error, number of hidden layers, and their number of neurons. Nowadays, there is a great variety of approaches and new techniques within the evolutionary computing area, and these approaches and techniques have emerged to help find optimal solutions to problems or models and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to know which of these techniques provides better results when applied to human recognition.",
"title": ""
},
{
"docid": "a2b3cdf440dd6aa139ea51865d8f81cc",
"text": "Hyperspectral image (HSI) classification is a hot topic in the remote sensing community. This paper proposes a new framework of spectral-spatial feature extraction for HSI classification, in which for the first time the concept of deep learning is introduced. Specifically, the model of autoencoder is exploited in our framework to extract various kinds of features. First we verify the eligibility of autoencoder by following classical spectral information based classification and use autoencoders with different depth to classify hyperspectral image. Further in the proposed framework, we combine PCA on spectral dimension and autoencoder on the other two spatial dimensions to extract spectral-spatial information for classification. The experimental results show that this framework achieves the highest classification accuracy among all methods, and outperforms classical classifiers such as SVM and PCA-based SVM.",
"title": ""
},
{
"docid": "0d7586e443f265015beed6f8bdc15def",
"text": "With the rapid growth of E-Commerce on the Internet, online product search service has emerged as a popular and effective paradigm for customers to find desired products and select transactions. Most product search engines today are based on adaptations of relevance models devised for information retrieval. However, there is still a big gap between the mechanism of finding products that customers really desire to purchase and that of retrieving products of high relevance to customers' query. In this paper, we address this problem by proposing a new ranking framework for enhancing product search based on dynamic best-selling prediction in E-Commerce. Specifically, we first develop an effective algorithm to predict the dynamic best-selling, i.e. the volume of sales, for each product item based on its transaction history. By incorporating such best-selling prediction with relevance, we propose a new ranking model for product search, in which we rank higher the product items that are not only relevant to the customer's need but with higher probability to be purchased by the customer. Results of a large scale evaluation, conducted over the dataset from a commercial product search engine, demonstrate that our new ranking method is more effective for locating those product items that customers really desire to buy at higher rank positions without hurting the search relevance.",
"title": ""
},
{
"docid": "8bea1f9e107cfcebc080bc62d7ac600d",
"text": "The introduction of wireless transmissions into the data center has shown to be promising in improving cost effectiveness of data center networks DCNs. For high transmission flexibility and performance, a fundamental challenge is to increase the wireless availability and enable fully hybrid and seamless transmissions over both wired and wireless DCN components. Rather than limiting the number of wireless radios by the size of top-of-rack switches, we propose a novel DCN architecture, Diamond, which nests the wired DCN with radios equipped on all servers. To harvest the gain allowed by the rich reconfigurable wireless resources, we propose the low-cost deployment of scalable 3-D ring reflection spaces RRSs which are interconnected with streamlined wired herringbone to enable large number of concurrent wireless transmissions through high-performance multi-reflection of radio signals over metal. To increase the number of concurrent wireless transmissions within each RRS, we propose a precise reflection method to reduce the wireless interference. We build a 60-GHz-based testbed to demonstrate the function and transmission ability of our proposed architecture. We further perform extensive simulations to show the significant performance gain of diamond, in supporting up to five times higher server-to-server capacity, enabling network-wide load balancing, and ensuring high fault tolerance.",
"title": ""
},
{
"docid": "fec16344f8b726b9d232423424c101d3",
"text": "A triboelectric separator manufactured by PlasSep, Ltd., Canada was evaluated at MBA Polymers, Inc. as part of a project sponsored by the American Plastics Council (APC) to explore the potential of triboelectric methods for separating commingled plastics from end-oflife durables. The separator works on a very simple principle: that dissimilar materials will transfer electrical charge to one another when rubbed together, the resulting surface charge differences can then be used to separate these dissimilar materials from one another in an electric field. Various commingled plastics were tested under controlled operating conditions. The feed materials tested include commingled plastics derived from electronic shredder residue (ESR), automobile shredder residue (ASR), refrigerator liners, and water bottle plastics. The separation of ESR ABS and HIPS, and water bottle PC and PVC were very promising. However, this device did not efficiently separate many plastic mixtures, such as rubber and plastics; nylon and acetal; and PE and PP from ASR. All tests were carried out based on the standard operating conditions determined for ESR ABS and HIPS. There is the potential to improve the separation performance for many of the feed materials by individually optimizing their operating conditions. Cursory economics shows that the operation cost is very dependent upon assumed throughput, separation efficiency and requisite purity. Unit operation cost could range from $0.03/lb. to $0.05/lb. at capacities of 2000 lb./hr. and 1000 lb./hr.",
"title": ""
},
{
"docid": "532ded1b0cc25a21464996a15a976125",
"text": "Folded-plate structures provide an efficient design using thin laminated veneer lumber panels. Inspired by Japanese furniture joinery, the multiple tab-and-slot joint was developed for the multi-assembly of timber panels with non-parallel edges without adhesive or metal joints. Because the global analysis of our origami structures reveals that the rotational stiffness at ridges affects the global behaviour, we propose an experimental and numerical study of this linear interlocking connection. Its geometry is governed by three angles that orient the contact faces. Nine combinations of these angles were tested and the rotational slip was measured with two different bending set-ups: closing or opening the fold formed by two panels. The non-linear behaviour was conjointly reproduced numerically using the finite element method and continuum damage mechanics.",
"title": ""
},
{
"docid": "d83853692581644f3a86ad0e846c48d2",
"text": "This paper investigates cyber security issues with automatic dependent surveillance broadcast (ADS-B) based air traffic control. Before wide-scale deployment in civil aviation, any airborne or ground-based technology must be ensured to have no adverse impact on safe and profitable system operations, both under normal conditions and failures. With ADS-B, there is a lack of a clear understanding about vulnerabilities, how they can impact airworthiness and what failure conditions they can potentially induce. The proposed work streamlines a threat assessment methodology for security evaluation of ADS-B based surveillance. To the best of our knowledge, this work is the first to identify the need for mechanisms to secure ADS-B based airborne surveillance and propose a security solution. This paper presents preliminary findings and results of the ongoing investigation.12",
"title": ""
},
{
"docid": "1a5189a09df624d496b83470eed4cfb6",
"text": "Vol. 24, No. 1, 2012 103 Received January 5, 2011, Revised March 9, 2011, Accepted for publication April 6, 2011 Corresponding author: Gyong Moon Kim, M.D., Department of Dermatology, St. Vincent Hospital, College of Medicine, The Catholic University of Korea, 93-6 Ji-dong, Paldal-gu, Suwon 442-723, Korea. Tel: 82-31-249-7465, Fax: 82-31-253-8927, E-mail: gyongmoonkim@ catholic.ac.kr This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http:// creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. Ann Dermatol Vol. 24, No. 1, 2012 http://dx.doi.org/10.5021/ad.2012.24.1.103",
"title": ""
},
{
"docid": "9973de0dc30f8e8f7234819163a15db2",
"text": "Jennifer L. Docktor, Natalie E. Strand, José P. Mestre, and Brian H. Ross Department of Physics, University of Wisconsin–La Crosse, La Crosse, Wisconsin 54601, USA Department of Physics, University of Illinois, Urbana, Illinois 61801, USA Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana, Illinois 61801, USA Department of Educational Psychology, University of Illinois, Champaign, Illinois 61820, USA Department of Psychology, University of Illinois, Champaign, Illinois 61820, USA (Received 30 April 2015; published 1 September 2015)",
"title": ""
},
{
"docid": "39e6ddd04b7fab23dbbeb18f2696536e",
"text": "Moving IoT components from the cloud onto edge hosts helps in reducing overall network traffic and thus minimizes latency. However, provisioning IoT services on the IoT edge devices presents new challenges regarding system design and maintenance. One possible approach is the use of software-defined IoT components in the form of virtual IoT resources. This, in turn, allows exposing the thing/device layer and the core IoT service layer as collections of micro services that can be distributed to a broad range of hosts.\n This paper presents the idea and evaluation of using virtual resources in combination with a permission-based blockchain for provisioning IoT services on edge hosts.",
"title": ""
},
{
"docid": "55a798fd7ec96239251fce2a340ba1ba",
"text": "At EUROCRYPT’88, we introduced an interactive zero-howledge protocol ( G ~ O U and Quisquater [13]) fitted to the authentication of tamper-resistant devices (e.g. smart cads , Guillou and Ugon [14]). Each security device stores its secret authentication number, an RSA-like signature computed by an authority from the device identity. Any transaction between a tamperresistant security device and a verifier is limited to a unique interaction: the device sends its identity and a random test number; then the verifier teUs a random large question; and finally the device answers by a witness number. The transaction is successful when the test number is reconstructed from the witness number, the question and the identity according to numbers published by the authority and rules of redundancy possibly standardized. This protocol allows a cooperation between users in such a way that a group of cooperative users looks like a new entity, having a shadowed identity the product of the individual shadowed identities, while each member reveals nothing about its secret. In another scenario, the secret is partitioned between distinkt devices sharing the same identity. A group of cooperative users looks like a unique user having a larger public exponent which is the greater common multiple of each individual exponent. In this paper, additional features are introduced in order to provide: firstly, a mutual interactive authentication of both communicating entities and previously exchanged messages, and, secondly, a digital signature of messages, with a non-interactive zero-knowledge protocol. The problem of multiple signature is solved here in a very smart way due to the possibilities of cooperation between users. The only secret key is the factors of the composite number chosen by the authority delivering one authentication number to each smart card. This key is not known by the user. At the user level, such a scheme may be considered as a keyless identity-based integrity scheme. This integrity has a new and important property: it cannot be misused, i.e. derived into a confidentiality scheme.",
"title": ""
}
] |
scidocsrr
|
106748fc564850d4e56f05cd981460fd
|
My Computer Is an Honor Student - but How Intelligent Is It? Standardized Tests as a Measure of AI
|
[
{
"docid": "bf294a4c3af59162b2f401e2cdcb060b",
"text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.",
"title": ""
}
] |
[
{
"docid": "f5ba6ef8d99ccc57bf64f7e5c3c05f7e",
"text": "Applications of fuzzy logic (FL) to power electronics and drives are on the rise. The paper discusses some representative applications of FL in the area, preceded by an interpretative review of fuzzy logic controller (FLC) theory. A discussion on design and implementation aspects is presented, that also considers the interaction of neural networks and fuzzy logic techniques. Finally, strengths and limitations of FLC are considered, including possible applications in the area.",
"title": ""
},
{
"docid": "e58036f93195603cb7dc7265b9adeb25",
"text": "Pseudomonas aeruginosa thrives in many aqueous environments and is an opportunistic pathogen that can cause both acute and chronic infections. Environmental conditions and host defenses cause differing stresses on the bacteria, and to survive in vastly different environments, P. aeruginosa must be able to adapt to its surroundings. One strategy for bacterial adaptation is to self-encapsulate with matrix material, primarily composed of secreted extracellular polysaccharides. P. aeruginosa has the genetic capacity to produce at least three secreted polysaccharides; alginate, Psl, and Pel. These polysaccharides differ in chemical structure and in their biosynthetic mechanisms. Since alginate is often associated with chronic pulmonary infections, its biosynthetic pathway is the best characterized. However, alginate is only produced by a subset of P. aeruginosa strains. Most environmental and other clinical isolates secrete either Pel or Psl. Little information is available on the biosynthesis of these polysaccharides. Here, we review the literature on the alginate biosynthetic pathway, with emphasis on recent findings describing the structure of alginate biosynthetic proteins. This information combined with the characterization of the domain architecture of proteins encoded on the Psl and Pel operons allowed us to make predictive models for the biosynthesis of these two polysaccharides. The results indicate that alginate and Pel share certain features, including some biosynthetic proteins with structurally or functionally similar properties. In contrast, Psl biosynthesis resembles the EPS/CPS capsular biosynthesis pathway of Escherichia coli, where the Psl pentameric subunits are assembled in association with an isoprenoid lipid carrier. These models and the environmental cues that cause the cells to produce predominantly one polysaccharide over the others are subjects of current investigation.",
"title": ""
},
{
"docid": "f1255742f2b1851457dd92ad97db7c8e",
"text": "Model transformations are frequently applied in business process modeling to bridge between languages on a different level of abstraction and formality. In this paper, we define a transformation between BPMN which is developed to enable business user to develop readily understandable graphical representations of business processes and YAWL, a formal workflow language that is able to capture all of the 20 workflow patterns reported. We illustrate the transformation challenges and present a suitable transformation algorithm. The benefit of the transformation is threefold. Firstly, it clarifies the semantics of BPMN via a mapping to YAWL. Secondly, the deployment of BPMN business process models is simplified. Thirdly, BPMN models can be analyzed with YAWL verification tools.",
"title": ""
},
{
"docid": "bec66d4d576f2c5c5643ffe4b72ab353",
"text": "Many cities suffer from noise pollution, which compromises people's working efficiency and even mental health. New York City (NYC) has opened a platform, entitled 311, to allow people to complain about the city's issues by using a mobile app or making a phone call; noise is the third largest category of complaints in the 311 data. As each complaint about noises is associated with a location, a time stamp, and a fine-grained noise category, such as \"Loud Music\" or \"Construction\", the data is actually a result of \"human as a sensor\" and \"crowd sensing\", containing rich human intelligence that can help diagnose urban noises. In this paper we infer the fine-grained noise situation (consisting of a noise pollution indicator and the composition of noises) of different times of day for each region of NYC, by using the 311 complaint data together with social media, road network data, and Points of Interests (POIs). We model the noise situation of NYC with a three dimension tensor, where the three dimensions stand for regions, noise categories, and time slots, respectively. Supplementing the missing entries of the tensor through a context-aware tensor decomposition approach, we recover the noise situation throughout NYC. The information can inform people and officials' decision making. We evaluate our method with four real datasets, verifying the advantages of our method beyond four baselines, such as the interpolation-based approach.",
"title": ""
},
{
"docid": "6e60d6b878c35051ab939a03bdd09574",
"text": "We propose a new CNN-CRF end-to-end learning framework, which is based on joint stochastic optimization with respect to both Convolutional Neural Network (CNN) and Conditional Random Field (CRF) parameters. While stochastic gradient descent is a standard technique for CNN training, it was not used for joint models so far. We show that our learning method is (i) general, i.e. it applies to arbitrary CNN and CRF architectures and potential functions; (ii) scalable, i.e. it has a low memory footprint and straightforwardly parallelizes on GPUs; (iii) easy in implementation. Additionally, the unified CNN-CRF optimization approach simplifies a potential hardware implementation. We empirically evaluate our method on the task of semantic labeling of body parts in depth images and show that it compares favorably to competing techniques.",
"title": ""
},
{
"docid": "f1e23e1ed402a8ac6bb08bbf82c689a0",
"text": "Droop control has been widely applied in dc microgrids (MGs) due to its inherent modularity and ease of implementation. Among the different droop control methods that can be adopted in dc MGs, two options have been considered in this paper: I-V and V-I droop. I-V droop controls the dc current depending on the dc voltage while V-I droop regulates the dc voltage based on the output current. The paper proposes a comparative study of V-I/I-V droop control approaches in dc MGs focusing on steady-state power-sharing performance and stability. The paper presents the control scheme for current-mode (I-V droop) and voltage-mode ( V-I droop) systems, derives the corresponding output impedance of the source subsystem, including converters dynamics, and analyzes the stability of the power system when supplying constant power loads. The paper first investigates the impact on stability of the key parameters including droop gains, local control loop dynamics, and number of sources and then performs a comparison between current-mode and voltage-mode systems in terms of stability. In addition, a generalized analytical impedance model of a multisource, multiload power system is presented to investigate stability in a more realistic scenario. For this purpose, the paper proposes the concept of “global droop gain” as an important factor to determine the stability behaviour of a parallel sources based dc system. The theoretical analysis has been validated with experimental results from a laboratory-scale dc MG.",
"title": ""
},
{
"docid": "4eabc161187126a726a6b65f6fc6c685",
"text": "In this paper, we propose a new method to estimate synthetic aperture radar interferometry (InSAR) interferometric phase in the presence of large coregistration errors. The method takes advantage of the coherence information of neighboring pixel pairs to automatically coregister the SAR images and employs the projection of the joint signal subspace onto the corresponding joint noise subspace to estimate the terrain interferometric phase. The method can automatically coregister the SAR images and reduce the interferometric phase noise simultaneously. Theoretical analysis and computer simulation results show that the method can provide accurate estimate of the terrain interferometric phase (interferogram) as the coregistration error reaches one pixel. The effectiveness of the method is also verified with the real data from the Spaceborne Imaging Radar-C/X Band SAR and the European Remote Sensing 1 and 2 satellites.",
"title": ""
},
{
"docid": "a009fc320c5a61d8d8df33c19cd6037f",
"text": "Over the past decade, crowdsourcing has emerged as a cheap and efficient method of obtaining solutions to simple tasks that are difficult for computers to solve but possible for humans. The popularity and promise of crowdsourcing markets has led to both empirical and theoretical research on the design of algorithms to optimize various aspects of these markets, such as the pricing and assignment of tasks. Much of the existing theoretical work on crowdsourcing markets has focused on problems that fall into the broad category of online decision making; task requesters or the crowdsourcing platform itself make repeated decisions about prices to set, workers to filter out, problems to assign to specific workers, or other things. Often these decisions are complex, requiring algorithms that learn about the distribution of available tasks or workers over time and take into account the strategic (or sometimes irrational) behavior of workers.\n As human computation grows into its own field, the time is ripe to address these challenges in a principled way. However, it appears very difficult to capture all pertinent aspects of crowdsourcing markets in a single coherent model. In this paper, we reflect on the modeling issues that inhibit theoretical research on online decision making for crowdsourcing, and identify some steps forward. This paper grew out of the authors' own frustration with these issues, and we hope it will encourage the community to attempt to understand, debate, and ultimately address them.",
"title": ""
},
{
"docid": "96e5fcffc40efbe09e44b25a8865f89a",
"text": "We propose a general yet simple theorem describing the convergence of SGD under the arbitrary sampling paradigm. Our theorem describes the convergence of an infinite array of variants of SGD, each of which is associated with a specific probability law governing the data selection rule used to form minibatches. This is the first time such an analysis is performed, and most of our variants of SGD were never explicitly considered in the literature before. Our analysis relies on the recently introduced notion of expected smoothness and does not rely on a uniform bound on the variance of the stochastic gradients. By specializing our theorem to different mini-batching strategies, such as sampling with replacement and independent sampling, we derive exact expressions for the stepsize as a function of the mini-batch size. With this we can also determine the mini-batch size that optimizes the total complexity, and show explicitly that as the variance of the stochastic gradient evaluated at the minimum grows, so does the optimal mini-batch size. For zero variance, the optimal mini-batch size is one. Moreover, we prove insightful stepsize-switching rules which describe when one should switch from a constant to a decreasing stepsize regime.",
"title": ""
},
{
"docid": "c5cc7fc9651ff11d27e08e1910a3bd20",
"text": "An omnidirectional circularly polarized (OCP) antenna operating at 28 GHz is reported and has been found to be a promising candidate for device-to-device (D2D) communications in the next generation (5G) wireless systems. The OCP radiation is realized by systematically integrating electric and magnetic dipole elements into a compact disc-shaped configuration (9.23 mm $^{3} =0.008~\\lambda _{0}^{3}$ at 28 GHz) in such a manner that they are oriented in parallel and radiate with the proper phase difference. The entire antenna structure was printed on a single piece of dielectric substrate using standard PCB manufacturing technologies and, hence, is amenable to mass production. A prototype OCP antenna was fabricated on Rogers 5880 substrate and was tested. The measured results are in good agreement with their simulated values and confirm the reported design concepts. Good OCP radiation patterns were produced with a measured peak realized RHCP gain of 2.2 dBic. The measured OCP overlapped impedance and axial ratio bandwidth was 2.2 GHz, from 26.5 to 28.7 GHz, an 8 % fractional bandwidth, which completely covers the 27.5 to 28.35 GHz band proposed for 5G cellular systems.",
"title": ""
},
{
"docid": "9e536dcfbe4c37659da5aa9e018b34a9",
"text": "A major challenge in Bayesian Optimization is the boundary issue (Swersky, 2017) where an algorithm spends too many evaluations near the boundary of its search space. In this paper we propose BOCK, Bayesian Optimization with Cylindrical Kernels, whose basic idea is to transform the ball geometry of the search space using a cylindrical transformation. Because of the transformed geometry, the Gaussian Process-based surrogate model spends less budget searching near the boundary, while concentrating its efforts relatively more near the center of the search region, where we expect the solution to be located. We evaluate BOCK extensively, showing that it is not only more accurate and efficient, but it also scales successfully to problems with a dimensionality as high as 500. We show that the better accuracy and scalability of BOCK even allows optimizing modestly sized neural network layers, as well as neural network hyperparameters.",
"title": ""
},
{
"docid": "cdc1e3b629659bf342def1f262d7aa0b",
"text": "In educational contexts, understanding the student’s learning must take account of the student’s construction of reality. Reality as experienced by the student has an important additional value. This assumption also applies to a student’s perception of evaluation and assessment. Students’ study behaviour is not only determined by the examination or assessment modes that are used. Students’ perceptions about evaluation methods also play a significant role. This review aims to examine evaluation and assessment from the student’s point of view. Research findings reveal that students’ perceptions about assessment significantly influence their approaches to learning and studying. Conversely, students’ approaches to study influence the ways in which they perceive evaluation and assessment. Findings suggest that students hold strong views about different assessment and evaluation formats. In this respect students favour multiple-choice format exams to essay type questions. However, when compared with more innovative assessment methods, students call the ‘fairness’ of these well-known evaluation modes into question.",
"title": ""
},
{
"docid": "b038feb73551ec07809add57344b4f9d",
"text": "IMPORTANCE\nThe appropriate treatment target for systolic blood pressure (SBP) in older patients with hypertension remains uncertain.\n\n\nOBJECTIVE\nTo evaluate the effects of intensive (<120 mm Hg) compared with standard (<140 mm Hg) SBP targets in persons aged 75 years or older with hypertension but without diabetes.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nA multicenter, randomized clinical trial of patients aged 75 years or older who participated in the Systolic Blood Pressure Intervention Trial (SPRINT). Recruitment began on October 20, 2010, and follow-up ended on August 20, 2015.\n\n\nINTERVENTIONS\nParticipants were randomized to an SBP target of less than 120 mm Hg (intensive treatment group, n = 1317) or an SBP target of less than 140 mm Hg (standard treatment group, n = 1319).\n\n\nMAIN OUTCOMES AND MEASURES\nThe primary cardiovascular disease outcome was a composite of nonfatal myocardial infarction, acute coronary syndrome not resulting in a myocardial infarction, nonfatal stroke, nonfatal acute decompensated heart failure, and death from cardiovascular causes. All-cause mortality was a secondary outcome.\n\n\nRESULTS\nAmong 2636 participants (mean age, 79.9 years; 37.9% women), 2510 (95.2%) provided complete follow-up data. At a median follow-up of 3.14 years, there was a significantly lower rate of the primary composite outcome (102 events in the intensive treatment group vs 148 events in the standard treatment group; hazard ratio [HR], 0.66 [95% CI, 0.51-0.85]) and all-cause mortality (73 deaths vs 107 deaths, respectively; HR, 0.67 [95% CI, 0.49-0.91]). The overall rate of serious adverse events was not different between treatment groups (48.4% in the intensive treatment group vs 48.3% in the standard treatment group; HR, 0.99 [95% CI, 0.89-1.11]). Absolute rates of hypotension were 2.4% in the intensive treatment group vs 1.4% in the standard treatment group (HR, 1.71 [95% CI, 0.97-3.09]), 3.0% vs 2.4%, respectively, for syncope (HR, 1.23 [95% CI, 0.76-2.00]), 4.0% vs 2.7% for electrolyte abnormalities (HR, 1.51 [95% CI, 0.99-2.33]), 5.5% vs 4.0% for acute kidney injury (HR, 1.41 [95% CI, 0.98-2.04]), and 4.9% vs 5.5% for injurious falls (HR, 0.91 [95% CI, 0.65-1.29]).\n\n\nCONCLUSIONS AND RELEVANCE\nAmong ambulatory adults aged 75 years or older, treating to an SBP target of less than 120 mm Hg compared with an SBP target of less than 140 mm Hg resulted in significantly lower rates of fatal and nonfatal major cardiovascular events and death from any cause.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT01206062.",
"title": ""
},
{
"docid": "9d04b10ebe8a65777aacf20fe37b55cb",
"text": "Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. The Deep Artificial Neuron-Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods.",
"title": ""
},
{
"docid": "f5c6f4d125ebe557367bdb404c3094fb",
"text": "In this paper, we present a Chinese event extraction system. We point out a language specific issue in Chinese trigger labeling, and then commit to discussing the contributions of lexical, syntactic and semantic features applied in trigger labeling and argument labeling. As a result, we achieved competitive performance, specifically, F-measure of 59.9 in trigger labeling and F-measure of 43.8 in argument labeling.",
"title": ""
},
{
"docid": "d0df1484ea03e91489e8916130392506",
"text": "Most of the conventional face hallucination methods assume the input image is sufficiently large and aligned, and all require the input image to be noise-free. Their performance degrades drastically if the input image is tiny, unaligned, and contaminated by noise. In this paper, we introduce a novel transformative discriminative autoencoder to 8X super-resolve unaligned noisy and tiny (16X16) low-resolution face images. In contrast to encoder-decoder based autoencoders, our method uses decoder-encoder-decoder networks. We first employ a transformative discriminative decoder network to upsample and denoise simultaneously. Then we use a transformative encoder network to project the intermediate HR faces to aligned and noise-free LR faces. Finally, we use the second decoder to generate hallucinated HR images. Our extensive evaluations on a very large face dataset show that our method achieves superior hallucination results and outperforms the state-of-the-art by a large margin of 1.82dB PSNR.",
"title": ""
},
{
"docid": "c0f5abdba3aa843f4419f59c92ed14ea",
"text": "ROC and DET curves are often used in the field of person authentication to assess the quality of a model or even to compare several models. We argue in this paper that this measure can be misleading as it compares performance measures that cannot be reached simultaneously by all systems. We propose instead new curves, called Expected Performance Curves (EPC). These curves enable the comparison between several systems according to a criterion, decided by the application, which is used to set thresholds according to a separate validation set. A free sofware is available to compute these curves. A real case study is used throughout the paper to illustrate it. Finally, note that while this study was done on an authentication problem, it also applies to most 2-class classification tasks.",
"title": ""
},
{
"docid": "16b542b43bea770b40e30a568856d05c",
"text": "To investigate the relations between structure and function in both artificial and natural neural networks, we present a series of simulations and analyses with modular neural networks. We suggest a number of design principles in the form of explicit ways in which neural modules can cooperate in recognition tasks. These results may supplement recent accounts of the relation between structure and function in the brain. The networks used consist out of several modules, standard subnetworks that serve as higher-order units with a distinct structure and function. The simulations rely on a particular network module called CALM (Murre, Phaf, and Wolters, 1989, 1992). This module, developed mainly for unsupervised categorization and learning, is able to adjust its local learning dynamics. The way in which modules are interconnected is an important determinant of the learning and categorization behaviour of the network as a whole. Based on arguments derived from neuroscience, psychology, computational learning theory, and hardware implementation, a framework for the design of such modular networks is laid-out. A number of small-scale simulation studies shows how intermodule connectivity patterns implement ’neural assemblies’ (Hebb, 1949) that induce a particular category structure in the network. Learning and categorization improves as the induced categories are more compatible with the structure of the task domain. In addition to structural compatibility, two other principles of design are proposed that underlie information processing in interactive activation networks:replication and recurrence. Because a general theory for relating network architectures to specific neural functions does not exist, we extend the biological metaphor of neural networks, by applying ge etic algorithms(a biocomputing method for search and optimization based on natural selection and evolution) to search for optimal modular network architectures for learning a visual categorization task. The best performing network architectures seemed to have reproduced some of the overall characteristics of the natural visual system, such as the organization of coarse and fine processing of stimuli in separate pathways. A potentially important result is that a genetically defined initial architecture cannot only enhance learning and recognition performance, but it can also induce a system to better generalizeits learned behaviour to instances never encountered before. This may explain why for many vital learning tasks in organisms only a minimal exposure to relevant stimuli is necessary. 1 Leiden Connectionist Group, Unit of Experimental and Theoretical Psychology, P.O. Box 9555, 2300 RB Leiden, The Netherlands. E-mail: Happel@rulfsw.leidenuniv.nl. 2 The second author is presently at the MRC Applied Psychology Unit, 15 Chaucer Road, Cambridge CB2 2EF, United Kingdom. E-mail: Jaap.Murre@mrc-apu.cam.ac.uk. This work is sponsored in part by the Dutch Foundation for Neural Networks and by the Medical Research Council. Happel and Murre The design and evolution of modular neural networks2",
"title": ""
},
{
"docid": "94f040bf8f9bc6f30109b822b977c3b5",
"text": "Introduction: The tooth mobility due to periodontal bone loss can cause masticatory discomfort, mainly in protrusive movements in the region of the mandibular anterior teeth. Thus, the splinting is a viable alternative to keep them in function satisfactorily. Objective: This study aimed to demonstrate, through a clinical case with medium-term following-up, the clinical application of splinting with glass fiber-reinforced composite resin. Case report: Female patient, 73 years old, complained about masticatory discomfort related to the right mandibular lateral incisor. Clinical and radiographic evaluation showed grade 2 dental mobility, bone loss and increased periodontal ligament space. The proposed treatment was splinting with glass fiber-reinforced composite resin from the right mandibular canine to left mandibular canine. Results: Four-year follow-up showed favorable clinical and radiographic results with respect to periodontal health and maintenance of functional aspects. Conclusion: The splinting with glass fiber-reinforced composite resin is a viable technique and stable over time for the treatment of tooth mobility.",
"title": ""
}
] |
scidocsrr
|
85f485dcf3e48b71c7cea30aef5dfb8b
|
Speaker identification features extraction methods: A systematic review
|
[
{
"docid": "5099bf6bfc1c878d9d07b057a9918492",
"text": "Speaker identification attempts to determine the best possible match from a group of certain speakers, for any given input speech signal. The text-independent speaker identification system does the task to identify the person who speaks regardless of what is said. The first step in speaker identification is the extraction of features. In this proposed method, the Bessel features are used as an alternative to the popular techniques like MFCC and LPCC. The quasi-stationary nature of speech signal is more efficiently represented by damped sinusoidal basis function that is more natural for the voiced speech signal. Since Bessel functions have damped sinusoidal as basis function, it is more natural choice for the representation of speech signals. Here, Bessel features derived from the speech signal is used for creating the Gaussian mixture models for text independent speaker identification. A set of ten speakers is used for modelling using Gaussian mixtures. The proposed system is made to test over the Malayalam database obtaining an efficiency of 98% which is promising.",
"title": ""
}
] |
[
{
"docid": "ebc8966779ba3b9e6a768f4c462093f5",
"text": "Most state-of-the-art approaches for named-entity recognition (NER) use semi supervised information in the form of word clusters and lexicons. Recently neural network-based language models have been explored, as they as a byproduct generate highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003—significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data.",
"title": ""
},
{
"docid": "9164bd704cdb8ca76d0b5f7acda9d4ef",
"text": "In this paper we present a deep neural network topology that incorporates a simple to implement transformationinvariant pooling operator (TI-POOLING). This operator is able to efficiently handle prior knowledge on nuisance variations in the data, such as rotation or scale changes. Most current methods usually make use of dataset augmentation to address this issue, but this requires larger number of model parameters and more training data, and results in significantly increased training time and larger chance of under-or overfitting. The main reason for these drawbacks is that that the learned model needs to capture adequate features for all the possible transformations of the input. On the other hand, we formulate features in convolutional neural networks to be transformation-invariant. We achieve that using parallel siamese architectures for the considered transformation set and applying the TI-POOLING operator on their outputs before the fully-connected layers. We show that this topology internally finds the most optimal \"canonical\" instance of the input image for training and therefore limits the redundancy in learned features. This more efficient use of training data results in better performance on popular benchmark datasets with smaller number of parameters when comparing to standard convolutional neural networks with dataset augmentation and to other baselines.",
"title": ""
},
{
"docid": "17c6859c2ec80d4136cb8e76859e47a6",
"text": "This paper describes a complete and efficient vision system d eveloped for the robotic soccer team of the University of Aveiro, CAMB ADA (Cooperative Autonomous Mobile roBots with Advanced Distributed Ar chitecture). The system consists on a firewire camera mounted vertically on th e top of the robots. A hyperbolic mirror placed above the camera reflects the 360 d egrees of the field around the robot. The omnidirectional system is used to find t he ball, the goals, detect the presence of obstacles and the white lines, used by our localization algorithm. In this paper we present a set of algorithms to extract efficiently the color information of the acquired images and, in a second phase, ex tract the information of all objects of interest. Our vision system architect ure uses a distributed paradigm where the main tasks, namely image acquisition, co lor extraction, object detection and image visualization, are separated in se veral processes that can run at the same time. We developed an efficient color extracti on algorithm based on lookup tables and a radial model for object detection. Our participation in the last national robotic contest, ROBOTICA 2007, where we have obtained the first place in the Medium Size League of robotic soccer, shows the e ffectiveness of our algorithms. Moreover, our experiments show that the sys tem is fast and accurate having a maximum processing time independently of the r obot position and the number of objects found in the field.",
"title": ""
},
{
"docid": "b6b63aa72904f9b7e24e3750c0db12f0",
"text": "The explosion of the learning materials in personal learning environments has caused difficulties to locate appropriate learning materials to learners. Personalized recommendations have been used to support the activities of learners in personal learning environments and this technology can deliver suitable learning materials to learners. In order to improve the quality of recommendations, this research considers the multidimensional attributes of material, rating of learners, and the order and sequential patterns of the learner's accessed material in a unified model. The proposed approach has two modules. In the sequential-based recommendation module, latent patterns of accessing materials are discovered and presented in two formats including the weighted association rules and the compact tree structure (called Pattern-tree). In the attribute-based module, after clustering the learners using latent patterns by K-means algorithm, the learner preference tree (LPT) is introduced to consider the multidimensional attributes of materials, rating of learners, and also order of the accessed materials. The mixed, weighted, and cascade hybrid methods are employed to generate the final combined recommendations. The experiments show that the proposed approach outperforms the previous algorithms in terms of precision, recall, and intra-list similarity measure. The main contributions are improvement of the recommenda-tions' quality and alleviation of the sparsity problem by combining the contextual information, including order and sequential patterns of the accessed material, rating of learners, and the multidimensional attributes of materials. With the explosion of learning materials available on personal learning environments (PLEs), it is difficult for learners to discover the most appropriate materials according to keyword searching method. One way to address this challenge is the use of recom-mender systems [16]. In addition, up to very recent years, several researches have expressed the need for personalization in e-learning environments. In fact, one of the new forms of personalization in e-learning environments is to provide recommendations to learners to support and help them through the e-learning process [19]. According to the strategies applied, recommender systems can be segmented into three major categories: content-based, collabo-rative, and hybrid recommendation [1]. Hybrid recommendation mechanisms attempt to deal with some of the limitations and overcome the drawbacks of pure content-based approach and pure collaborative approach by combining the two approaches. The majority of the traditional recommendation algorithms have been developed for e-commerce applications, which are unable to cover the entire requirements of learning environments. One of these drawbacks is that they do not consider the learning process in their recommendation …",
"title": ""
},
{
"docid": "4569526ff0e03e01264a6e1e566a88c9",
"text": "Trust management is a fundamental and critical aspect of any serious application in ITS. However, only a few studies have addressed this important problem. In this paper, we present a survey on trust management for ITS. We first describe the properties of trust, trust metrics and potential attacks against trust management schemes. Existing related works are then reviewed based on the way in which trust management is implemented. Along with the review, we also identify some open research questions for future work, and consequently present a novel idea of trust management implementation.",
"title": ""
},
{
"docid": "19100853a7f0f4d519e0a5513a83aa08",
"text": "The authors explain how to perform software inspections to locate defects. They present metrics for inspection and examples of its effectiveness. The authors contend, on the basis of their experiences and those reported in the literature, that inspections can detect and eliminate faults more cheaply than testing.<<ETX>>",
"title": ""
},
{
"docid": "ac07682e0fa700a8f0c9df025feb2c53",
"text": "Today's web applications run inside a complex browser environment that is buggy, ill-specified, and implemented in different ways by different browsers. Thus, web applications that desire robustness must use a variety of conditional code paths and ugly hacks to deal with the vagaries of their runtime. Our new exokernel browser, called Atlantis, solves this problem by providing pages with an extensible execution environment. Atlantis defines a narrow API for basic services like collecting user input, exchanging network data, and rendering images. By composing these primitives, web pages can define custom, high-level execution environments. Thus, an application which does not want a dependence on Atlantis'predefined web stack can selectively redefine components of that stack, or define markup formats and scripting languages that look nothing like the current browser runtime. Unlike prior microkernel browsers like OP, and unlike compile-to-JavaScript frameworks like GWT, Atlantis is the first browsing system to truly minimize a web page's dependence on black box browser code. This makes it much easier to develop robust, secure web applications.",
"title": ""
},
{
"docid": "db657866610debb4c2f96c98c241b1f2",
"text": "Oxidative stress is viewed as an imbalance between the production of reactive oxygen species (ROS) and their elimination by protective mechanisms, which can lead to chronic inflammation. Oxidative stress can activate a variety of transcription factors, which lead to the differential expression of some genes involved in inflammatory pathways. The inflammation triggered by oxidative stress is the cause of many chronic diseases. Polyphenols have been proposed to be useful as adjuvant therapy for their potential anti-inflammatory effect, associated with antioxidant activity, and inhibition of enzymes involved in the production of eicosanoids. This review aims at exploring the properties of polyphenols in anti-inflammation and oxidation and the mechanisms of polyphenols inhibiting molecular signaling pathways which are activated by oxidative stress, as well as the possible roles of polyphenols in inflammation-mediated chronic disorders. Such data can be helpful for the development of future antioxidant therapeutics and new anti-inflammatory drugs.",
"title": ""
},
{
"docid": "72aaa1dc7bdffa25c884ebbe4acf671d",
"text": "BACKGROUND\nAnkylosing spondylitis (AS) can cause severe functional disorders that lead to loss of balance.\n\n\nOBJECTIVE\nThe aim of this study was to investigate the effects of balance and postural stability exercises on spa based rehabilitation programme in AS subjects.\n\n\nMETHODS\nTwenty-one participants were randomized to the study (n= 11) and control groups (n= 10). Patients balance and stability were assessed with the Berg Balance Scale (BBS), Timed Up and Go (TUG) Test, Single Leg Stance Test (SLST) and Functional Reach Test (FRT). AS spesicied measures were used for assessing to other parameters. The treatment plan for both groups consisted of conventional transcutaneous electrical nerve stimulation (TENS), spa and land-based exercises 5 days per week for 3 weeks. The study group performed exercises based on postural stability and balance with routine physiotherapy practice in thermal water and in exercise room.\n\n\nRESULTS\nThe TUG, SLST and FUT scores were significantly increased in the study group. In both groups, the BASMI, BASFI, BASDAI and ASQoL scores decreased significantly by the end of the treatment period (p< 0.05).\n\n\nCONCLUSIONS\nIn AS rehabilitation, performing balance and stability exercises in addition to spa based routine approaches can increase the duration of maintaining balance and can improve the benefits of physiotherapy.",
"title": ""
},
{
"docid": "da5311faac94c2d98e13bc6895519d86",
"text": "Several studies in Sweden have looked into railway electromagnetic interference (EMI) either to discover the source of the interference or to determine if the equipment in the system is performing properly. The movement of rolling stock along an electrified track produces certain EMI events. Transient electromagnetic fields are produced in the signalling system when the train leaves the neutral section of the overhead power line and enters the powered section. These transient EM fields are mainly produced by the engine. The track's infrastructure system has been tested for EMI events, but this phenomenon affects the surrounding environment as well, up to at least 10 meters from the track. The infrastructure is designed so that the return current from locomotives should go through the running rails, but occasionally the ground acts as a conductor, transmitting current to areas that are distant from the rail. The paper reviews the status of Swedish railways with respect to electromagnetic compatibility. This TREND project is a joint project with 7 FP EU.",
"title": ""
},
{
"docid": "000961818e2e0e619f1fc0464f69a496",
"text": "Database query languages can be intimidating to the non-expert, leading to the immense recent popularity for keyword based search in spite of its significant limitations. The holy grail has been the development of a natural language query interface. We present NaLIX, a generic interactive natural language query interface to an XML database. Our system can accept an arbitrary English language sentence as query input, which can include aggregation, nesting, and value joins, among other things. This query is translated, potentially after reformulation, into an XQuery expression that can be evaluated against an XML database. The translation is done through mapping grammatical proximity of natural language parsed tokens to proximity of corresponding elements in the result XML. In this demonstration, we show that NaLIX, while far from being able to pass the Turing test, is perfectly usable in practice, and able to handle even quite complex queries in a variety of application domains. In addition, we also demonstrate how carefully designed features in NaLIX facilitate the interactive query process and improve the usability of the interface.",
"title": ""
},
{
"docid": "9d37260c493c40523c268f6e54c8b4ea",
"text": "Social collaborative filtering recommender systems extend the traditional user-to-item interaction with explicit user-to-user relationships, thereby allowing for a wider exploration of correlations among users and items, that potentially lead to better recommendations. A number of methods have been proposed in the direction of exploring the social network, either locally (i.e. the vicinity of each user) or globally. In this paper, we propose a novel methodology for collaborative filtering social recommendation that tries to combine the merits of both the aforementioned approaches, based on the soft-clustering of the Friend-of-a-Friend (FoaF) network of each user. This task is accomplished by the non-negative factorization of the adjacency matrix of the FoaF graph, while the edge-centric logic of the factorization algorithm is ameliorated by incorporating more general structural properties of the graph, such as the number of edges and stars, through the introduction of the exponential random graph models. The preliminary results obtained reveal the potential of this idea.",
"title": ""
},
{
"docid": "472e9807c2f4ed6d1e763dd304f22c64",
"text": "Commercial analytical database systems suffer from a high \"time-to-first-analysis\": before data can be processed, it must be modeled and schematized (a human effort), transferred into the database's storage layer, and optionally clustered and indexed (a computational effort). For many types of structured data, this upfront effort is unjustifiable, so the data are processed directly over the file system using the Hadoop framework, despite the cumulative performance benefits of processing this data in an analytical database system. In this paper we describe a system that achieves the immediate gratification of running MapReduce jobs directly over a file system, while still making progress towards the long-term performance benefits of database systems. The basic idea is to piggyback on MapReduce jobs, leverage their parsing and tuple extraction operations to incrementally load and organize tuples into a database system, while simultaneously processing the file system data. We call this scheme Invisible Loading, as we load fractions of data at a time at almost no marginal cost in query latency, but still allow future queries to run much faster.",
"title": ""
},
{
"docid": "e5048285c2616e9bfb28accd91629187",
"text": "Hidden Markov Models (HMMs) are learning methods for pattern recognition. The probabilistic HMMs have been one of the most used techniques based on the Bayesian model. First-order probabilistic HMMs were adapted to the theory of belief functions such that Bayesian probabilities were replaced with mass functions. In this paper, we present a second-order Hidden Markov Model using belief functions. Previous works in belief HMMs have been focused on the first-order HMMs. We extend them to the second-order model.",
"title": ""
},
{
"docid": "d2e078d0e40b4be456c57f288c7aaa95",
"text": "This study examines the factors influencing online shopping behavior of urban consumers in the State of Andhra Pradesh, India and provides a better understanding of the potential of electronic marketing for both researchers and online retailers. Data from a sample of 1500 Internet users (distributed evenly among six selected major cities) was collected by a structured questionnaire covering demographic profile and the factors influencing online shopping. Factor analysis and multiple regression analysis are used to establish relationship between the factors influencing online shopping and online shopping behavior. The study identified that perceived risk and price positively influenced online shopping behavior. Results also indicated that positive attitude, product risk and financial risk affect negatively the online shopping behavior. Factors Influencing Online Shopping Behavior of Urban Consumers in India",
"title": ""
},
{
"docid": "69c9aa877b9416e2a884eaa5408eb890",
"text": "Integrating trust and automation in finance.",
"title": ""
},
{
"docid": "a07472c2f086332bf0f97806255cb9d5",
"text": "The Learning Analytics Dashboard (LAD) is an application to show students’ online behavior patterns in a virtual learning environment. This supporting tool works by tracking students’ log-files, mining massive amounts of data to find meaning, and visualizing the results so they can be comprehended at a glance. This paper reviews previously developed applications to analyze their features. Based on the implications from the review of previous studies as well as a preliminary investigation on the need for such tools, an early version of the LAD was designed and developed. Also, in order to improve the LAD, a usability test incorporating a stimulus recall interview was conducted with 38 college students in two blended learning classes. Evaluation of this tool was performed in an experimental research setting with a control group and additional surveys were conducted asking students’ about perceived usefulness, conformity, level of understanding of graphs, and their behavioral changes. The results indicated that this newly developed learning analytics tool did not significantly impact on their learning achievement. However, lessons learned from the usability and pilot tests support that visualized information impacts on students’ understanding level; and the overall satisfaction with dashboard plays as a covariant that impacts on both the degree of understanding and students’ perceived change of behavior. Taking in the results of the tests and students’ openended responses, a scaffolding strategy to help them understand the meaning of the information displayed was included in each sub section of the dashboard. Finally, this paper discusses future directions in regard to improving LAD so that it better supports students’ learning performance, which might be helpful for those who develop learning analytics applications for students.",
"title": ""
},
{
"docid": "9d615d361cb1a357ae1663d1fe581d24",
"text": "We report three patients with dissecting cellulitis of the scalp. Prolonged treatment with oral isotretinoin was highly effective in all three patients. Furthermore, long-term post-treatment follow-up in two of the patients has shown a sustained therapeutic benefit.",
"title": ""
},
{
"docid": "fee9c0dbf2cbe0107b7a999694f293ca",
"text": "In traditional approaches for clustering market basket type data, relations among transactions are modeled according to the items occurring in these transactions. However, an individual item might induce different relations in different contexts. Since such contexts might be captured by interesting patterns in the overall data, we represent each transaction as a set of patterns through modifying the conventional pattern semantics. By clustering the patterns in the dataset, we infer a clustering of the transactions represented this way. For this, we propose a novel hypergraph model to represent the relations among the patterns. Instead of a local measure that depends only on common items among patterns, we propose a global measure that is based on the cooccurences of these patterns in the overall data. The success of existing hypergraph partitioning based algorithms in other domains depends on sparsity of the hypergraph and explicit objective metrics. For this, we propose a two-phase clustering approach for the above hypergraph, which is expected to be dense. In the first phase, the vertices of the hypergraph are merged in a multilevel algorithm to obtain large number of high quality clusters. Here, we propose new quality metrics for merging decisions in hypergraph clustering specifically for this domain. In order to enable the use of existing metrics in the second phase, we introduce a vertex-to-cluster affinity concept to devise a method for constructing a sparse hypergraph based on the obtained clustering. The experiments we have performed show the effectiveness of the proposed framework.",
"title": ""
},
{
"docid": "61953281f4b568ad15e1f62be9d68070",
"text": "Most of the effort in today’s digital forensics community lies in the retrieval and analysis of existing information from computing systems. Little is being done to increase the quantity and quality of the forensic information on today’s computing systems. In this paper we pose the question of what kind of information is desired on a system by a forensic investigator. We give an overview of the information that exists on current systems and discuss its shortcomings. We then examine the role that file system metadata plays in digital forensics and analyze what kind of information is desirable for different types of forensic investigations, how feasible it is to obtain it, and discuss issues about storing the information.",
"title": ""
}
] |
scidocsrr
|
b3d61252436267694daa1f132f6726ca
|
Progress in Tourism Management Tourism supply chain management : A new research agenda
|
[
{
"docid": "5bd3cf8712d04b19226e53fca937e5a6",
"text": "This paper reviews the published studies on tourism demand modelling and forecasting since 2000. One of the key findings of this review is that the methods used in analysing and forecasting the demand for tourism have been more diverse than those identified by other review articles. In addition to the most popular time series and econometric models, a number of new techniques have emerged in the literature. However, as far as the forecasting accuracy is concerned, the study shows that there is no single model that consistently outperforms other models in all situations. Furthermore, this study identifies some new research directions, which include improving the forecasting accuracy through forecast combination; integrating both qualitative and quantitative forecasting approaches, tourism cycles and seasonality analysis, events’ impact assessment and risk forecasting.",
"title": ""
}
] |
[
{
"docid": "1274ab286b1e3c5701ebb73adc77109f",
"text": "In this paper, we propose the first real time rumor debunking algorithm for Twitter. We use cues from 'wisdom of the crowds', that is, the aggregate 'common sense' and investigative journalism of Twitter users. We concentrate on identification of a rumor as an event that may comprise of one or more conflicting microblogs. We continue monitoring the rumor event and generate real time updates dynamically based on any additional information received. We show using real streaming data that it is possible, using our approach, to debunk rumors accurately and efficiently, often much faster than manual verification by professionals.",
"title": ""
},
{
"docid": "5d23af3f778a723b97690f8bf54dfa41",
"text": "Software engineering techniques have been employed for many years to create software products. The selections of appropriate software development methodologies for a given project, and tailoring the methodologies to a specific requirement have been a challenge since the establishment of software development as a discipline. In the late 1990’s, the general trend in software development techniques has changed from traditional waterfall approaches to more iterative incremental development approaches with different combination of old concepts, new concepts, and metamorphosed old concepts. Nowadays, the aim of most software companies is to produce software in short time period with minimal costs, and within unstable, changing environments that inspired the birth of Agile. Agile software development practice have caught the attention of software development teams and software engineering researchers worldwide during the last decade but scientific research and published outcomes still remains quite scarce. Every agile approach has its own development cycle that results in technological, managerial and environmental changes in the software companies. This paper explains the values and principles of ten agile practices that are becoming more and more dominant in the software development industry. Agile processes are not always beneficial, they have some limitations as well, and this paper also discusses the advantages and disadvantages of Agile processes.",
"title": ""
},
{
"docid": "21e235169d37658afee28d5f3f7c831b",
"text": "Two studies assessed the effects of a training procedure (Goal Management Training, GMT), derived from Duncan's theory of goal neglect, on disorganized behavior following TBI. In Study 1, patients with traumatic brain injury (TBI) were randomly assigned to brief trials of GMT or motor skills training. GMT, but not motor skills training, was associated with significant gains on everyday paper-and-pencil tasks designed to mimic tasks that are problematic for patients with goal neglect. In Study 2, GMT was applied in a postencephalitic patient seeking to improve her meal-preparation abilities. Both naturalistic observation and self-report measures revealed improved meal preparation performance following GMT. These studies provide both experimental and clinical support for the efficacy of GMT toward the treatment of executive functioning deficits that compromise independence in patients with brain damage.",
"title": ""
},
{
"docid": "3f1ab17fb722d5a2612675673b200a82",
"text": "In this paper, we show that the recent integration of statistical models with deep recurrent neural networks provides a new way of formulating volatility (the degree of variation of time series) models that have been widely used in time series analysis and prediction in finance. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observables. Our focus here is on the formulation of temporal dynamics of volatility over time under a stochastic recurrent neural network framework. Experiments on real-world stock price datasets demonstrate that the proposed model generates a better volatility estimation and prediction that outperforms mainstream methods, e.g., deterministic models such as GARCH and its variants, and stochastic models namely the MCMC-based model stochvol as well as the Gaussian process volatility model GPVol, on average negative log-likelihood.",
"title": ""
},
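A minimal sketch related to the preceding passage on volatility modelling: it illustrates the deterministic GARCH(1,1) baseline that the paper compares against, not the paper's stochastic recurrent network. The parameter values and the toy return series are assumptions for illustration, not estimated quantities.

```python
import numpy as np

def garch11_filter(returns, omega=0.05, alpha=0.08, beta=0.90):
    """Filter conditional variances with a GARCH(1,1) recursion:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1].
    Parameters here are illustrative, not estimated."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()  # common initialization: unconditional sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)  # conditional volatility

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_returns = rng.normal(0.0, 1.0, size=500)  # stand-in for real stock returns
    vol = garch11_filter(toy_returns)
    print(vol[:5])
```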
{
"docid": "8b1bd5243d4512324e451a780c1ec7d3",
"text": "If you get the printed book in on-line book store, you may also find the same problem. So, you must move store to store and search for the available there. But, it will not happen here. The book that we will offer right here is the soft file concept. This is what make you can easily find and get this fundamentals of computer security by reading this site. We offer you the best product, always and always.",
"title": ""
},
{
"docid": "ed63ebf895f1f37ba9b788c36b8e6cfc",
"text": "Melanocyte stem cells (McSCs) and mouse models of hair graying serve as useful systems to uncover mechanisms involved in stem cell self-renewal and the maintenance of regenerating tissues. Interested in assessing genetic variants that influence McSC maintenance, we found previously that heterozygosity for the melanogenesis associated transcription factor, Mitf, exacerbates McSC differentiation and hair graying in mice that are predisposed for this phenotype. Based on transcriptome and molecular analyses of Mitfmi-vga9/+ mice, we report a novel role for MITF in the regulation of systemic innate immune gene expression. We also demonstrate that the viral mimic poly(I:C) is sufficient to expose genetic susceptibility to hair graying. These observations point to a critical suppressor of innate immunity, the consequences of innate immune dysregulation on pigmentation, both of which may have implications in the autoimmune, depigmenting disease, vitiligo.",
"title": ""
},
{
"docid": "cee3833160aa1cc513e96d49b72eeea9",
"text": "Spatial filtering (SF) constitutes an integral part of building EEG-based brain-computer interfaces (BCIs). Algorithms frequently used for SF, such as common spatial patterns (CSPs) and independent component analysis, require labeled training data for identifying filters that provide information on a subject's intention, which renders these algorithms susceptible to overfitting on artifactual EEG components. In this study, beamforming is employed to construct spatial filters that extract EEG sources originating within predefined regions of interest within the brain. In this way, neurophysiological knowledge on which brain regions are relevant for a certain experimental paradigm can be utilized to construct unsupervised spatial filters that are robust against artifactual EEG components. Beamforming is experimentally compared with CSP and Laplacian spatial filtering (LP) in a two-class motor-imagery paradigm. It is demonstrated that beamforming outperforms CSP and LP on noisy datasets, while CSP and beamforming perform almost equally well on datasets with few artifactual trials. It is concluded that beamforming constitutes an alternative method for SF that might be particularly useful for BCIs used in clinical settings, i.e., in an environment where artifact-free datasets are difficult to obtain.",
"title": ""
},
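Not the beamforming method described in the preceding passage, but a hedged sketch of the Common Spatial Patterns (CSP) baseline it is compared against: CSP filters can be obtained from a generalized eigendecomposition of the two class-covariance matrices. The array shapes, trial counts, and number of selected filters are assumptions chosen for illustration.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=6):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns an (n_filters, n_channels) spatial filter matrix."""
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))  # normalize per trial
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: ca w = lambda (ca + cb) w
    eigvals, eigvecs = eigh(ca, ca + cb)
    order = np.argsort(eigvals)  # ascending
    picks = np.r_[order[: n_filters // 2], order[-(n_filters // 2):]]
    return eigvecs[:, picks].T

# usage with random stand-in data (2 classes, 20 trials, 8 channels, 256 samples)
rng = np.random.default_rng(1)
a = rng.normal(size=(20, 8, 256))
b = rng.normal(size=(20, 8, 256))
W = csp_filters(a, b)
features = np.log(np.var(W @ a[0], axis=1))  # log-variance features of one trial
```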
{
"docid": "4af5b29ebda47240d51cd5e7765d990f",
"text": "In this paper, a Rectangular Waveguide (RW) to microstrip transition with Low-Temperature Co-fired Ceramic (LTCC) technology in Ka-band is designed, fabricated and measured. Compared to the traditional transition using a rectangular slot, the proposed Stepped-Impedance Resonator (SIR) slot enlarges the bandwidth of the transition. By introducing an additional design parameter, it generates multi-modes within the transition. To further improve the bandwidth and to adjust the performance of the transition, a resonant strip is embedded between the open microstrip line and its ground plane. Measured results agree well with that of the simulation, showing an effective bandwidth about 22% (from 28.5 GHz to 36.5GHz), an insertion loss approximately 3 dB and return loss better than 15 dB in the pass-band.",
"title": ""
},
{
"docid": "b7eb2c65c459c9d5776c1e2cba84706c",
"text": "Observers, searching for targets among distractor items, guide attention with a mix of top-down information--based on observers' knowledge--and bottom-up information--stimulus-based and largely independent of that knowledge. There are 2 types of top-down guidance: explicit information (e.g., verbal description) and implicit priming by preceding targets (top-down because it implies knowledge of previous searches). Experiments 1 and 2 separate bottom-up and top-down contributions to singleton search. Experiment 3 shows that priming effects are based more strongly on target than on distractor identity. Experiments 4 and 5 show that more difficult search for one type of target (color) can impair search for other types (size, orientation). Experiment 6 shows that priming guides attention and does not just modulate response.",
"title": ""
},
{
"docid": "44480b69d1f49703db82977d1e248946",
"text": "Civic crowdfunding is a sub-type of crowdfunding whereby citizens contribute to funding community-based projects ranging from physical structures to amenities. Though civic crowdfunding has great potential for impact, it remains a developing field in terms of project success and widespread adoption. To explore how technology shapes interactions and outcomes within civic projects, our research addresses two interrelated questions: how do offline communities engage online across civic crowdfunding projects, and, what purpose does this activity serve both projects and communities? These questions are explored through discussion of types of offline communities and description of online activity across civic crowdfunding projects. We conclude by considering the implications of this knowledge for civic crowdfunding and its continued research.",
"title": ""
},
{
"docid": "5efd5fb9caaeadb90a684d32491f0fec",
"text": "The ModelNiew/Controller design pattern is very useful for architecting interactive software systems. This design pattern is partition-independent, because it is expressed in terms of an interactive application running in a single address space. Applying the ModelNiew/Controller design pattern to web-applications is therefore complicated by the fact that current technologies encourage developers to partition the application as early as in the design phase. Subsequent changes to that partitioning require considerable changes to the application's implementation despite the fact that the application logic has not changed. This paper introduces the concept of Flexible Web-Application Partitioning, a programming model and implementation infrastructure, that allows developers to apply the ModeWViewKontroller design pattern in a partition-independent manner: Applications are developed and tested in a single address-space; they can then be deployed to various clientherver architectures without changing the application's source code. In addition, partitioning decisions can be changed without modifying the application.",
"title": ""
},
{
"docid": "a9372375af0500609b7721120181c280",
"text": "Copyright © 2014 Alicia Garcia-Falgueras. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In accordance of the Creative Commons Attribution License all Copyrights © 2014 are reserved for SCIRP and the owner of the intellectual property Alicia Garcia-Falgueras. All Copyright © 2014 are guarded by law and by SCIRP as a guardian.",
"title": ""
},
{
"docid": "b856ab3760ff0f762fda12cc852903da",
"text": "This paper presents a detection method of small-foreign-metal particles using a 400 kHz SiC-MOSFETs high-frequency inverter. A 400 kHz SiC-MOSFETs high-frequency inverter is developed and applied to the small-foreign-metal particles detection on high-performance chemical films (HPCFs). HPCFs are manufactured with continuous production lines in industries. A new arrangement of IH coils are proposed, which is applicable for the practical production-lines of HPCFs. A prototype experimental model is constructed and tested. Experimental results demonstrate that the newly proposed IH coils with the constructed 400 kHz SiC-MOSFETs can heat small-foreign-metal particles and the heated small-foreign-metal particles can be detected by a thermographic camera. Experimental results with a new arrangement of IH coils also demonstrate that the proposed detection method of small-foreign-metal particles using 400 kHz SiC-MOSFETs high-frequency inverter can be applicable for the practical production lines of HPCFs.",
"title": ""
},
{
"docid": "8f4b873cab626dbf0ebfc79397086545",
"text": "R emote-sensing techniques have transformed ecological research by providing both spatial and temporal perspectives on ecological phenomena that would otherwise be difficult to study (eg Kerr and Ostrovsky 2003; Running et al. 2004; Vierling et al. 2008). In particular, a strong focus has been placed on the use of data obtained from space-borne remote-sensing instruments because these provide regional-to global-scale observations and repeat time-series sampling of ecological indicators (eg Gould 2000). The main limitation of most of the research-focused satellite missions is the mismatch between the pixel resolution of many regional-extent sensors (eg Landsat [spatial resolution of ~30 m] to the Moderate Resolution Imaging Spectro-radiometer [spatial resolution of ~1 km]), the revisit period (eg 18 days for Landsat), and the scale of many ecological processes. Indeed, data provided by these platforms are often \" too general to meet regional or local objectives \" in ecology (Wulder et al. 2004). To address this limitation, a range of new (largely commercially operated) satellite sensors have become operational over the past decade, offering data at finer than 10-m spatial resolution with more responsive capabilities (eg Quickbird, IKONOS, GeoEye-1, OrbView-3, WorldView-2). Such data are useful for ecological studies (Fretwell et al. 2012), but there remain three operational constraints: (1) a high cost per scene; (2) suitable repeat times are often only possible if oblique view angles are used, distorting geometric and radiometric pixel properties; and (3) cloud contamination, which can obscure features of interest (Loarie et al. 2007). Imaging sensors on board civilian aircraft platforms may also be used; these can provide more scale-appropriate data for fine-scale ecological studies, including data from light detection and ranging (LiDAR) sensors (Vierling et al. 2008). In theory, these surveys can be made on demand, but in practice data acquisition is costly, meaning that regular time-series monitoring is operationally constrained. A new method for fine-scale remote sensing is now emerging that could address all of these operational issues and thus potentially revolutionize spatial ecology and environmental science. Unmanned aerial vehicles (UAVs) are lightweight, low-cost aircraft platforms operated from the ground that can carry imaging or non-imaging payloads. UAVs offer ecologists a promising route to responsive, timely, and cost-effective monitoring of environmental phenomena at spatial and temporal resolutions that are appropriate to the scales of many ecologically relevant variables. Emerging from a military background, there are now a growing number of civilian agencies and organizations that have recognized the …",
"title": ""
},
{
"docid": "72d75ebfc728d3b287bcaf429a6b2ee5",
"text": "We present a fully integrated 7nm CMOS platform featuring a 3rd generation finFET architecture, SAQP for fin formation, and SADP for BEOL metallization. This technology reflects an improvement of 2.8X routed logic density and >40% performance over the 14nm reference technology described in [1-3]. A full range of Vts is enabled on-chip through a unique multi-workfunction process. This enables both excellent low voltage SRAM response and highly scaled memory area simultaneously. The HD 6-T bitcell size is 0.0269um2. This 7nm technology is fully enabled by immersion lithography and advanced optical patterning techniques (like SAQP and SADP). However, the technology platform is also designed to leverage EUV insertion for specific multi-patterned (MP) levels for cycle time benefit and manufacturing efficiency. A complete set of foundation and complex IP is available in this advanced CMOS platform to enable both High Performance Compute (HPC) and mobile applications.",
"title": ""
},
{
"docid": "caf5b727bfc59efc9f60697321796920",
"text": "As humans start to spend more time in collaborative virtual environments (CVEs) it becomes important to study their interactions in such environments. One aspect of such interactions is personal space. To begin to address this, we have conducted empirical investigations in a non immersive virtual environment: an experiment to investigate the influence on personal space of avatar gender, and an observational study to further explore the existence of personal space. Experimental results give some evidence to suggest that avatar gender has an influence on personal space although the participants did not register high personal space invasion anxiety, contrary to what one might expect from personal space invasion in the physical world. The observational study suggests that personal space does exist in CVEs, as the users tend to maintain, in a similar way to the physical world, a distance when they are interacting with each other. Our studies provide an improved understanding of personal space in CVEs and the results can be used to further enhance the usability of these environments.",
"title": ""
},
{
"docid": "2b97e03fa089cdee0bf504dd85e5e4bb",
"text": "One of the most severe threats to revenue and quality of service in telecom providers is fraud. The advent of new technologies has provided fraudsters new techniques to commit fraud. SIM box fraud is one of such fraud that has emerged with the use of VOIP technologies. In this work, a total of nine features found to be useful in identifying SIM box fraud subscriber are derived from the attributes of the Customer Database Record (CDR). Artificial Neural Networks (ANN) has shown promising solutions in classification problems due to their generalization capabilities. Therefore, supervised learning method was applied using Multi layer perceptron (MLP) as a classifier. Dataset obtained from real mobile communication company was used for the experiments. ANN had shown classification accuracy of 98.71 %.",
"title": ""
},
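A hedged sketch of the kind of supervised MLP classifier the preceding passage describes, using scikit-learn. The nine CDR-derived features and the labels are random stand-ins, since the paper's actual feature definitions and network configuration are not given here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data: 9 CDR-derived features per subscriber, binary fraud label.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 9))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```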
{
"docid": "54b4726650b3afcddafb120ff99c9951",
"text": "Online harassment has been a problem to a greater or lesser extent since the early days of the internet. Previous work has applied anti-spam techniques like machine-learning based text classification (Reynolds, 2011) to detecting harassing messages. However, existing public datasets are limited in size, with labels of varying quality. The #HackHarassment initiative (an alliance of 1 tech companies and NGOs devoted to fighting bullying on the internet) has begun to address this issue by creating a new dataset superior to its predecssors in terms of both size and quality. As we (#HackHarassment) complete further rounds of labelling, later iterations of this dataset will increase the available samples by at least an order of magnitude, enabling corresponding improvements in the quality of machine learning models for harassment detection. In this paper, we introduce the first models built on the #HackHarassment dataset v1.0 (a new open dataset, which we are delighted to share with any interested researcherss) as a benchmark for future research.",
"title": ""
},
{
"docid": "4418a2cfd7216ecdd277bde2d7799e4d",
"text": "Most of legacy systems use nowadays were modeled and documented using structured approach. Expansion of these systems in terms of functionality and maintainability requires shift towards object-oriented documentation and design, which has been widely accepted by the industry. In this paper, we present a survey of the existing Data Flow Diagram (DFD) to Unified Modeling language (UML) transformation techniques. We analyze transformation techniques using a set of parameters, identified in the survey. Based on identified parameters, we present an analysis matrix, which describes the strengths and weaknesses of transformation techniques. It is observed that most of the transformation approaches are rule based, which are incomplete and defined at abstract level that does not cover in depth transformation and automation issues. Transformation approaches are data centric, which focuses on datastore for class diagram generation. Very few of the transformation techniques have been applied on case study as a proof of concept, which are not comprehensive and majority of them are partially automated. Keywords-Unified Modeling Language (UML); Data Flow Diagram (DFD); Class Diagram; Model Transformation.",
"title": ""
},
{
"docid": "6ae289d7da3e923c1288f39fd7a162f6",
"text": "The usage of digital evidence from electronic devices has been rapidly expanding within litigation, and along with this increased usage, the reliance upon forensic computer examiners to acquire, analyze, and report upon this evidence is also rapidly growing. This growing demand for forensic computer examiners raises questions concerning the selection of individuals qualified to perform this work. While courts have mechanisms for qualifying witnesses that provide testimony based on scientific data, such as digital data, the qualifying criteria covers a wide variety of characteristics including, education, experience, training, professional certifications, or other special skills. In this study, we compare task performance responses from forensic computer examiners with an expert review panel and measure the relationship with the characteristics of the examiners to their quality responses. The results of this analysis provide insight into identifying forensic computer examiners that provide high-quality responses.",
"title": ""
}
] |
scidocsrr
|
559ef0e503a2f0105a82e10adb2edb56
|
Policy search for learning robot control using sparse data
|
[
{
"docid": "e52c40a4fcb6cdb3d9b177e371127185",
"text": "Over the last years, there has been substantial progress in robust manipulation in unstructured environments. The long-term goal of our work is to get away from precise, but very expensive robotic systems and to develop affordable, potentially imprecise, self-adaptive manipulator systems that can interactively perform tasks such as playing with children. In this paper, we demonstrate how a low-cost off-the-shelf robotic system can learn closed-loop policies for a stacking task in only a handful of trials—from scratch. Our manipulator is inaccurate and provides no pose feedback. For learning a controller in the work space of a Kinect-style depth camera, we use a model-based reinforcement learning technique. Our learning method is data efficient, reduces model bias, and deals with several noise sources in a principled way during long-term planning. We present a way of incorporating state-space constraints into the learning process and analyze the learning gain by exploiting the sequential structure of the stacking task.",
"title": ""
}
] |
[
{
"docid": "029687097e06ed2d0132ca2fce393129",
"text": "The V-band systems have been widely used in the aerospace industry for securing spacecraft inside the launch vehicle payload fairing. Separation is initiated by firing pyro-devices to rapidly release the tension bands. A significant shock transient is expected as a result of the band separation. The shock environment is defined with the assumption that the shock events due to the band separation are associated with the rapid release of the strain energy from the preload tension of the restraining band.",
"title": ""
},
{
"docid": "b77bf3a4cfba0033a7fcdf777c803da4",
"text": "Argumentation mining involves automatically identifying the premises, conclusion, and type of each argument as well as relationships between pairs of arguments in a document. We describe our plan to create a corpus from the biomedical genetics research literature, annotated to support argumentation mining research. We discuss the argumentation elements to be annotated, theoretical challenges, and practical issues in creating such a corpus.",
"title": ""
},
{
"docid": "685a9dfa265a6c2ce5a9c56e1e193800",
"text": "It has been postulated that bilingualism may act as a cognitive reserve and recent behavioral evidence shows that bilinguals are diagnosed with dementia about 4-5 years later compared to monolinguals. In the present study, we investigated the neural basis of these putative protective effects in a group of aging bilinguals as compared to a matched monolingual control group. For this purpose, participants completed the Erikson Flanker task and their performance was correlated to gray matter (GM) volume in order to investigate if cognitive performance predicts GM volume specifically in areas affected by aging. We performed an ex-Gaussian analysis on the resulting RTs and report that aging bilinguals performed better than aging monolinguals on the Flanker task. Bilingualism was overall associated with increased GM in the ACC. Likewise, aging induced effects upon performance correlated only for monolinguals to decreased gray matter in the DLPFC. Taken together, these neural regions might underlie the benefits of bilingualism and act as a neural reserve that protects against the cognitive decline that occurs during aging.",
"title": ""
},
{
"docid": "1fd51acb02bafb3ea8f5678581a873a4",
"text": "How often has this scenario happened? You are driving at night behind a car that has bright light-emitting diode (LED) taillights. When looking directly at the taillights, the light is not blurry, but when glancing at other objects, a trail of lights appears, known as a phantom array. The reason for this trail of lights might not be what you expected: it is not due to glare, degradation of eyesight, or astigmatism. The culprit may be the flickering of the LED lights caused by pulse-width modulating (PWM) drive circuitry. Actually, many LED taillights flicker on and off at frequencies between 200 and 500 Hz, which is too fast to notice when the eye is not in rapid motion. However, during a rapid eye movement (saccade), the images of the LED lights appear in different positions on the retina, causing a trail of images to be perceived (Figure 1). This disturbance of vision may not occur with all LED taillights because some taillights keep a constant current through the LEDs. However, when there is a PWM current through the LEDs, the biological effect of the light flicker may become noticeable during the eye saccade.",
"title": ""
},
{
"docid": "71501d914671c4467c422df38b1bc71d",
"text": "In this project, we investigate potential factors that may affect business performance on Yelp. We use a mix of features already available in the Yelp dataset as well as generating our own features using location clustering and sentiment analysis of reviews for businesses in Phoenix, AZ. After preprocessing the data to handle missing values, we ran various featureselection techniques to evaluate which features might have the greatest importance. Multi-class classification (logistic regression, SVM, tree and random forest classifiers, Naive Bayes, GDA) was then run on these feature subsets with an accuracy of ∼ 45%, significantly higher than random chance (16.7% for 6-class classification). Regression models (linear regression, SVR) were also tested but achieved lower accuracy. In addition, the accuracy was approximately the same across different feature sets. We found that across feature selection techniques, the important features included positive/negative sentiment of reviews, number of reviews, location, whether a restaurant takes reservations, and cluster size (which represents the number of businesses in the surrounding neighborhood). However, sentiment seemed to have the largest predictive power. Thus, in order to improve business performance, a future step would be to conduct additional analysis of review text to determine new features that might help the business to achieve a higher number of positive reviews.",
"title": ""
},
{
"docid": "7fb2348fbde9dbef88357cc79ff394c5",
"text": "This paper presents a measurement system with capacitive sensor connected to an open-source electronic platform Arduino Uno. A simple code was modified in the project, which ensures that the platform works as interface for the sensor. The code can be modified and upgraded at any time to fulfill other specific applications. The simulations were carried out in the platform's own environment and the collected data are represented in graphical form. Accuracy of developed measurement platform is 0.1 pF.",
"title": ""
},
{
"docid": "e790824ac08ceb82000c3cda024dc329",
"text": "Cellulolytic bacteria were isolated from manure wastes (cow dung) and degrading soil (municipal solid waste). Nine bacterial strains were screened the cellulolytic activities. Six strains showed clear zone formation on Berg’s medium. CMC (carboxyl methyl cellulose) and cellulose were used as substrates for cellulase activities. Among six strains, cd3 and mw7 were observed in quantitative measurement determined by dinitrosalicylic acid (DNS) method. Maximum enzyme producing activity showed 1.702mg/ml and 1.677mg/ml from cd3 and mw7 for 1% CMC substrate. On the other hand, it was expressed 0.563mg/ml and 0.415mg/ml for 1% cellulose substrate respectively. It was also studied for cellulase enzyme producing activity optimizing with kinetic growth parameters such as different carbon source including various concentration of cellulose, incubation time, temperature, and pH. Starch substrate showed 0.909mg/ml and 0.851mg/ml in enzyme producing activity. The optimum substrate concentration of cellulose was 0.25% for cd3 but 1% for mw7 showing the amount of reducing sugar formation 0.628mg/ml and 0.669mg/ml. The optimum incubation parameters for cd3 were 84 hours, 40C and pH 6. Mw7 also had optimum parameters 60 hours, 40 C and pH6.",
"title": ""
},
{
"docid": "79339679226cc161cb84be73b45d2df5",
"text": "We introduce an algorithm, called KarmaLego, for the discovery of frequent symbolic time interval-related patterns (TIRPs). The mined symbolic time intervals can be part of the input, or can be generated by a temporal-abstraction process from raw time-stamped data. The algorithm includes a data structure for TIRP-candidate generation and a novel method for efficient candidate-TIRP generation, by exploiting the transitivity property of Allen’s temporal relations. Additionally, since the non-ambiguous definition of TIRPs does not specify the duration of the time intervals, we propose to pre-cluster the time intervals based on their duration to decrease the variance of the supporting instances. Our experimental comparison of the KarmaLego algorithm’s runtime performance with several existing state of the art time intervals pattern mining methods demonstrated a significant speed-up, especially with large datasets and low levels of minimal vertical support. Furthermore, pre-clustering by time interval duration led to an increase in the homogeneity of the duration of the discovered TIRP’s supporting instances’ time intervals components, accompanied, however, by a corresponding decrease in the number of discovered TIRPs.",
"title": ""
},
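A small illustrative helper (an assumption, not KarmaLego itself) showing the basic building block the preceding passage relies on: classifying the Allen temporal relation between two symbolic time intervals, which candidate-TIRP generation then composes using transitivity.

```python
from typing import Tuple

Interval = Tuple[float, float]  # (start, end), with start < end

def allen_relation(a: Interval, b: Interval) -> str:
    """Return the Allen relation between intervals a and b, assuming the
    lexicographic ordering commonly used in TIRP mining: a.start < b.start,
    or a.start == b.start and a.end <= b.end."""
    (a_s, a_e), (b_s, b_e) = a, b
    if a_s == b_s:
        return "equals" if a_e == b_e else "starts"
    if a_e < b_s:
        return "before"
    if a_e == b_s:
        return "meets"
    if a_e < b_e:
        return "overlaps"
    if a_e == b_e:
        return "finished-by"
    return "contains"

# example: a fever episode overlapping a medication interval
print(allen_relation((0, 5), (3, 9)))   # -> "overlaps"
print(allen_relation((0, 5), (5, 9)))   # -> "meets"
```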
{
"docid": "5d102aec00c21891f97c8c083045e0c3",
"text": "A simple method for detecting salient regions in images is proposed. It requires only edge detection, threshold decomposition, the distance transform, and thresholding. Moreover, it avoids the need for setting any parameter values. Experiments show that the resulting regions are relatively coarse, but overall the method is surprisingly effective, and has the benefit of easy implementation. Quantitative tests were carried out on Liu et al.’s dataset of 5000 images. Although the ratings of our simple method were not as good as their approach which involved an extensive training stage, they were comparable to several other popular methods from the literature. Further tests on Kootstra and Schomaker’s dataset of 99 images also showed promising results.",
"title": ""
},
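A hedged OpenCV sketch in the spirit of the pipeline described in the preceding passage (edge detection, distance transform, thresholding). It omits the threshold-decomposition step and the parameter-free settings of the paper, so it should be read as an approximation rather than a reimplementation; the file path is a placeholder.

```python
import cv2
import numpy as np

def rough_saliency_mask(image_bgr):
    """Approximate salient-region mask: Canny edges -> distance transform
    from the edges -> Otsu threshold on the inverted distance map."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Distance to the nearest edge pixel (edge pixels must be the zeros).
    dist = cv2.distanceTransform(cv2.bitwise_not(edges), cv2.DIST_L2, 5)
    # Pixels close to edges get high "saliency"; invert and normalize to 8 bits.
    sal = cv2.normalize(dist.max() - dist, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(sal, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

# usage (path is a placeholder)
# img = cv2.imread("example.jpg")
# mask = rough_saliency_mask(img)
```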
{
"docid": "f5d1c11e3b802bfacd52c2c9d7388a6c",
"text": "Knowledge graphs (KGs) model facts about the world; they consist of nodes (entities such as companies and people) that are connected by edges (relations such as founderOf ). Facts encoded in KGs are frequently used by search applications to augment result pages. When presenting a KG fact to the user, providing other facts that are pertinent to that main fact can enrich the user experience and support exploratory information needs. \\em KG fact contextualization is the task of augmenting a given KG fact with additional and useful KG facts. The task is challenging because of the large size of KGs; discovering other relevant facts even in a small neighborhood of the given fact results in an enormous amount of candidates. We introduce a neural fact contextualization method (\\em NFCM ) to address the KG fact contextualization task. NFCM first generates a set of candidate facts in the neighborhood of a given fact and then ranks the candidate facts using a supervised learning to rank model. The ranking model combines features that we automatically learn from data and that represent the query-candidate facts with a set of hand-crafted features we devised or adjusted for this task. In order to obtain the annotations required to train the learning to rank model at scale, we generate training data automatically using distant supervision on a large entity-tagged text corpus. We show that ranking functions learned on this data are effective at contextualizing KG facts. Evaluation using human assessors shows that it significantly outperforms several competitive baselines.",
"title": ""
},
{
"docid": "76ecd4ba20333333af4d09b894ff29fc",
"text": "This study is an application of social identity theory to feminist consciousness and activism. For women, strong gender identifications may enhance support for equality struggles, whereas for men, they may contribute to backlashes against feminism. University students (N � 276), primarily Euroamerican, completed a measure of gender self-esteem (GSE, that part of one’s selfconcept derived from one’s gender), and two measures of feminism. High GSE in women and low GSE in men were related to support for feminism. Consistent with past research, women were more supportive of feminism than men, and in both genders, support for feminist ideas was greater than self-identification as a feminist.",
"title": ""
},
{
"docid": "45edb31b3eb041fdfc5b18bb6dcd14e8",
"text": "The professional discourse on social justice suggests that more critical work is needed to sufficiently address the societal issues that affect occupational therapy practitioners' ability to advocate for and with clients. Occupational therapy offers unique opportunities for the scholarly discussion of social justice and for clinical practice to address these issues. This article discusses the importance of incorporating a social justice perspective into occupational therapy by using an example from the author's research program. The experiences of adolescents in foster care were documented in an ongoing qualitative participatory study. An overview of adolescents' (N = 40) perceived independent living and vocational service needs is provided, and several barriers that affect adolescents' ability to develop the skills needed to achieve independent adulthood are described. The article concludes with a discussion of social justice implications as they relate to the myriad issues in the foster care system, occupational therapy research, and practice.",
"title": ""
},
{
"docid": "1b3b105f2970e6c403c8d5ced2af9f45",
"text": "We study the problem of estimating multiple linear regressi on equations for the purpose of both prediction and variable selection. Following recen t work on multi-task learning Argyriou et al. [2008], we assume that the regression vector s share the same sparsity pattern. This means that the set of relevant predictor variable s is the same across the different equations. This assumption leads us to consider the Group La sso s a candidate estimation method. We show that this estimator enjoys nice sparsit y oracle inequalities and variable selection properties. The results hold under a certain restricted eigenvalue condition and a coherence condition on the design matrix, which natura lly extend recent work in Bickel et al. [2007], Lounici [2008]. In particular, in the m ulti-task learning scenario, in which the number of tasks can grow, we are able to remove compl etely the effect of the number of predictor variables in the bounds. Finally, we sho w w our results can be extended to more general noise distributions, of which we only require the variance to be finite.",
"title": ""
},
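For readers who want a concrete starting point, a shared-sparsity estimator of this kind is available off the shelf. The sketch below uses scikit-learn's MultiTaskLasso (an L1/L2 block penalty across tasks) on synthetic data; it illustrates the shared-support idea from the preceding passage, not the exact estimator or theory analyzed there, and the data dimensions and alpha are assumptions.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n, p, tasks = 200, 50, 4

# True coefficients share the same support (first 5 features) across tasks.
W_true = np.zeros((p, tasks))
W_true[:5, :] = rng.normal(size=(5, tasks))

X = rng.normal(size=(n, p))
Y = X @ W_true + 0.1 * rng.normal(size=(n, tasks))

model = MultiTaskLasso(alpha=0.05).fit(X, Y)
support = np.any(np.abs(model.coef_) > 1e-6, axis=0)  # coef_ has shape (tasks, p)
print("recovered support:", np.flatnonzero(support))
```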
{
"docid": "c14da39ea48b06bfb01c6193658df163",
"text": "We present FingerPad, a nail-mounted device that turns the tip of the index finger into a touchpad, allowing private and subtle interaction while on the move. FingerPad enables touch input using magnetic tracking, by adding a Hall sensor grid on the index fingernail, and a magnet on the thumbnail. Since it permits input through the pinch gesture, FingerPad is suitable for private use because the movements of the fingers in a pinch are subtle and are naturally hidden by the hand. Functionally, FingerPad resembles a touchpad, and also allows for eyes-free use. Additionally, since the necessary devices are attached to the nails, FingerPad preserves natural haptic feedback without affecting the native function of the fingertips. Through user study, we analyze the three design factors, namely posture, commitment method and target size, to assess the design of the FingerPad. Though the results show some trade-off among the factors, generally participants achieve 93% accuracy for very small targets (1.2mm-width) in the seated condition, and 92% accuracy for 2.5mm-width targets in the walking condition.",
"title": ""
},
{
"docid": "466c0d9436e1f1878aaafa2297022321",
"text": "Acetic acid was used topically at concentrations of between 0.5% and 5% to eliminate Pseudomonas aeruginosa from the burn wounds or soft tissue wounds of 16 patients. In-vitro studies indicated the susceptibility of P. aeruginosa to acetic acid; all strains exhibited a minimum inhibitory concentration of 2 per cent. P. aeruginosa was eliminated from the wounds of 14 of the 16 patients within two weeks of treatment. Acetic acid was shown to be an inexpensive and efficient agent for the elimination of P. aeruginosa from burn and soft tissue wounds.",
"title": ""
},
{
"docid": "92ae940687b50d16ecfa9df0a4f15abc",
"text": "In this paper, we propose to bring together the semantic web experience and statistical natural language semantic parsing modeling. The idea is that, the process for populating knowledgebases by semantically parsing structured web pages may provide very valuable implicit annotation for language understanding tasks. We mine search queries hitting to these web pages in order to semantically annotate them for building statistical unsupervised slot filling models, without even a need for a semantic annotation guideline. We present promising results demonstrating this idea for building an unsupervised slot filling model for the movies domain with some representative slots. Furthermore, we also employ unsupervised model adaptation for cases when there are some in-domain unannotated sentences available. Another key contribution of this work is using implicitly annotated natural-language-like queries for testing the performance of the models, in a totally unsupervised fashion. We believe, such an approach also ensures consistent semantic representation between the semantic parser and the backend knowledge-base.",
"title": ""
},
{
"docid": "346ad12b1e2155e0ec86f9e04b19ea00",
"text": "3D face reconstruction from Internet photos has recently produced exciting results. A person’s face, e.g., Tom Hanks, can be modeled and animated in 3D from a completely uncalibrated photo collection. Most methods, however, focus solely on face area and mask out the rest of the head. This paper proposes that head modeling from the Internet is a problem we can solve. We target reconstruction of the rough shape of the head. Our method is to gradually “grow” the head mesh starting from the frontal face and extending to the rest of views using photometric stereo constraints. We call our method boundary-value growing algorithm. Results on photos of celebrities downloaded from the Internet are presented.",
"title": ""
},
{
"docid": "1389e232bef9499c301fa4f4bbcb3e56",
"text": "PURPOSE\nTo review studies of healing touch and its implications for practice and research.\n\n\nDESIGN\nA review of the literature from published works, abstracts from conference proceedings, theses, and dissertations was conducted to synthesize information on healing touch. Works available until June 2003 were referenced.\n\n\nMETHODS\nThe studies were categorized by target of interventions and outcomes were evaluated.\n\n\nFINDINGS AND CONCLUSIONS\nOver 30 studies have been conducted with healing touch as the independent variable. Although no generalizable results were found, a foundation exists for further research to test its benefits.",
"title": ""
},
{
"docid": "ee05d558786b4bfbe60ba68b6a210120",
"text": "The capability of dealing with context sensitive knowledge is recognized as a crucial aspect in the management of massive amounts of Semantic Web (SW) data. Contextual knowledge can be modelled either by adopting the primitives from RDF/OWL based SW languages or by extending such languages with new specific constructs for context representation. In this paper, we show the benefits of the context-based solution by comparing modelling and reasoning in the two approaches on the paradigmatic use case of FIFA World Cup. The comparison considers the three key aspects of engineering and exploiting knowledge: (i) simplicity and expressivity of the (formal) language; (ii) compactness of the representation; and (iii) efficiency of reasoning. As for (i), we show that the context-based language enables the construction of simpler and more intuitive models while the RDF/OWL \"flat\" model presents practical limitations in modelling cross-contextual knowledge. For (ii), we show that the contextualized model is more compact than the OWL based model. Finally for (iii), query answering in the context-based model outperforms in most of the cases performances on the flat model.",
"title": ""
}
] |
scidocsrr
|
0c00ccb5f363f28347e55517cfb78f95
|
A Measure of Similarity of Time Series Containing Missing Data Using the Mahalanobis Distance
|
[
{
"docid": "d4f1cdfe13fda841edfb31ced34a4ee8",
"text": "ÐMissing data are often encountered in data sets used to construct effort prediction models. Thus far, the common practice has been to ignore observations with missing data. This may result in biased prediction models. In this paper, we evaluate four missing data techniques (MDTs) in the context of software cost modeling: listwise deletion (LD), mean imputation (MI), similar response pattern imputation (SRPI), and full information maximum likelihood (FIML). We apply the MDTs to an ERP data set, and thereafter construct regression-based prediction models using the resulting data sets. The evaluation suggests that only FIML is appropriate when the data are not missing completely at random (MCAR). Unlike FIML, prediction models constructed on LD, MI and SRPI data sets will be biased unless the data are MCAR. Furthermore, compared to LD, MI and SRPI seem appropriate only if the resulting LD data set is too small to enable the construction of a meaningful regression-based prediction model.",
"title": ""
},
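A hedged sketch of two of the simpler techniques named in the preceding passage, listwise deletion and mean imputation, using pandas and scikit-learn. The toy effort data set is an assumption, and FIML is not shown because it is not available as a one-liner in these libraries.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy effort data with missing predictor values.
df = pd.DataFrame({
    "team_size": [4, 7, np.nan, 5, 9, np.nan],
    "duration_months": [6, 12, 9, np.nan, 14, 8],
    "effort_pm": [20, 80, 45, 30, 110, 35],  # target: person-months
})

# Listwise deletion (LD): drop any row with a missing value.
ld = df.dropna()

# Mean imputation (MI): replace missing predictor values with column means.
predictors = ["team_size", "duration_months"]
mi = df.copy()
mi[predictors] = SimpleImputer(strategy="mean").fit_transform(df[predictors])

print(len(ld), "rows after listwise deletion;", mi.isna().sum().sum(), "missing cells after MI")
```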
{
"docid": "b9b85e8e4824b7f0cb6443d70ef38b38",
"text": "This paper presents methods for analyzing and manipulating unevenly spaced time series without a transformation to equally spaced data. Processing and analyzing such data in its unaltered form avoids the biases and information loss caused by resampling. Care is taken to develop a framework consistent with a traditional analysis of equally spaced data, as in Brockwell and Davis (1991), Hamilton (1994) and Box, Jenkins, and Reinsel (2004).",
"title": ""
}
] |
[
{
"docid": "00527294606231986ba34d68e847e01a",
"text": "In this paper, we describe a new scheme to learn dynamic user's interests in an automated information filtering and gathering system running on the Internet. Our scheme is aimed to handle multiple domains of long-term and short-term user's interests simultaneously, which is learned through positive and negative user's relevance feedback. We developed a 3-descriptor approach to represent the user's interest categories. Using a learning algorithm derived for this representation, our scheme adapts quickly to significant changes in user interest, and is also able to learn exceptions to interest categories.",
"title": ""
},
{
"docid": "d029ce85b17e37abc93ab704fbef3a98",
"text": "Video super-resolution (SR) aims to generate a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts. The generation of accurate correspondence plays a significant role in video SR. It is demonstrated by traditional video SR methods that simultaneous SR of both images and optical flows can provide accurate correspondences and better SR results. However, LR optical flows are used in existing deep learning based methods for correspondence generation. In this paper, we propose an endto-end trainable video SR framework to super-resolve both images and optical flows. Specifically, we first propose an optical flow reconstruction network (OFRnet) to infer HR optical flows in a coarse-to-fine manner. Then, motion compensation is performed according to the HR optical flows. Finally, compensated LR inputs are fed to a superresolution network (SRnet) to generate the SR results. Extensive experiments demonstrate that HR optical flows provide more accurate correspondences than their LR counterparts and improve both accuracy and consistency performance. Comparative results on the Vid4 and DAVIS10 datasets show that our framework achieves the stateof-the-art performance. The codes will be released soon at: https://github.com/LongguangWang/SOF-VSR-SuperResolving-Optical-Flow-for-Video-Super-Resolution-.",
"title": ""
},
{
"docid": "62e7974231c091845f908a50f5365d7f",
"text": "Sequentiality of access is an inherent characteristic of many database systems. We use this observation to develop an algorithm which selectively prefetches data blocks ahead of the point of reference. The number of blocks prefetched is chosen by using the empirical run length distribution and conditioning on the observed number of sequential block references immediately preceding reference to the current block. The optimal number of blocks to prefetch is estimated as a function of a number of “costs,” including the cost of accessing a block not resident in the buffer (a miss), the cost of fetching additional data blocks at fault times, and the cost of fetching blocks that are never referenced. We estimate this latter cost, described as memory pollution, in two ways. We consider the treatment (in the replacement algorithm) of prefetched blocks, whether they are treated as referenced or not, and find that it makes very little difference. Trace data taken from an operational IMS database system is analyzed and the results are presented. We show how to determine optimal block sizes. We find that anticipatory fetching of data can lead to significant improvements in system operation.",
"title": ""
},
{
"docid": "11cf4c50ced7ceafe7176a597f0f983d",
"text": "All mature hemopoietic lineage cells, with exclusion of platelets and mature erythrocytes, share the surface expression of a transmembrane phosphatase, the CD45 molecule. It is also present on hemopoietic stem cells and most leukemic clones and therefore presents as an appropriate target for immunotherapy with anti-CD45 antibodies. This short review details the biology of CD45 and its recent targeting for both treatment of malignant disorders and tolerance induction. In particular, the question of potential stem cell depletion for induction of central tolerance or depletion of malignant hemopoietic cells is addressed. Mechanisms underlying the effects downstream of CD45 binding to the cell surface are discussed.",
"title": ""
},
{
"docid": "62d63c1177b2426e133daca0ead7e50f",
"text": "⎯The problem of how to plan coal fuel blending and distribution from overseas coal sources to domestic power plants through some possible seaports by certain types of fleet in order to meet operational and environmental requirements is a complex task. The aspects under consideration includes each coal source contract’s supply, quality and price, each power plant’s demand, environmental requirements and limit on maximum number of different coal sources that can supply it, installation of blending facilities, selection of fleet types, and transient seaport’s capacity limit on fleet types. A coal blending and inter-model transportation model is explored to find optimal blending and distribution decisions for coal fuel from overseas contracts to domestic power plants. The objective in this study is to minimize total logistics costs, including procurement cost, shipping cost, and inland delivery cost. The developed model is one type of mix-integer zero-one programming problems. A real-world case problem is presented using the coal logistics system of a local electric utility company to demonstrate the benefit of the proposed approach. A well-known optimization package, AMPL-CPLEX, is utilized to solve this problem. Results from this study suggest that the obtained solution is better than the rule-of-thumb solution and the developed model provides a tool for management to conduct capacity expansion planning and power generation options. Keywords⎯Blending and inter-modal transportation model, Integer programming, Coal fuel. ∗ Corresponding author’s email: cmliu@fcu.edu.tw International Journal of Operations Research",
"title": ""
},
{
"docid": "8583702b48549c5bbf1553fa0e39a882",
"text": "A critical task for question answering is the final answer selection stage, which has to combine multiple signals available about each answer candidate. This paper proposes EviNets: a novel neural network architecture for factoid question answering. EviNets scores candidate answer entities by combining the available supporting evidence, e.g., structured knowledge bases and unstructured text documents. EviNets represents each piece of evidence with a dense embeddings vector, scores their relevance to the question, and aggregates the support for each candidate to predict their final scores. Each of the components is generic and allows plugging in a variety of models for semantic similarity scoring and information aggregation. We demonstrate the effectiveness of EviNets in experiments on the existing TREC QA and WikiMovies benchmarks, and on the new Yahoo! Answers dataset introduced in this paper. EviNets can be extended to other information types and could facilitate future work on combining evidence signals for joint reasoning in question answering.",
"title": ""
},
{
"docid": "493748a07dbf457e191487fe7459ee7e",
"text": "60 Computer T he Web is a hypertext body of approximately 300 million pages that continues to grow at roughly a million pages per day. Page variation is more prodigious than the data's raw scale: Taken as a whole, the set of Web pages lacks a unifying structure and shows far more author-ing style and content variation than that seen in traditional text-document collections. This level of complexity makes an \" off-the-shelf \" database-management and information-retrieval solution impossible. To date, index-based search engines for the Web have been the primary tool by which users search for information. The largest such search engines exploit technology's ability to store and index much of the Web. Such engines can therefore build giant indices that let you quickly retrieve the set of all Web pages containing a given word or string. Experienced users can make effective use of such engines for tasks that can be solved by searching for tightly constrained keywords and phrases. These search engines are, however, unsuited for a wide range of equally important tasks. In particular, a topic of any breadth will typically contain several thousand or million relevant Web pages. Yet a user will be willing, typically , to look at only a few of these pages. How then, from this sea of pages, should a search engine select the correct ones—those of most value to the user? AUTHORITATIVE WEB PAGES First, to distill a large Web search topic to a size that makes sense to a human user, we need a means of identifying the topic's most definitive or authoritative Web pages. The notion of authority adds a crucial second dimension to the concept of relevance: We wish to locate not only a set of relevant pages, but also those relevant pages of the highest quality. Second, the Web consists not only of pages, but hyperlinks that connect one page to another. This hyperlink structure contains an enormous amount of latent human annotation that can help automatically infer notions of authority. Specifically, the creation of a hyperlink by the author of a Web page represents an implicit endorsement of the page being pointed to; by mining the collective judgment contained in the set of such endorsements, we can gain a richer understanding of the relevance and quality of the Web's contents. To address both these parameters, we began development of the Clever system 1-3 three years ago. Clever …",
"title": ""
},
{
"docid": "8cbfb79df2516bb8a06a5ae9399e3685",
"text": "We consider the problem of approximate set similarity search under Braun-Blanquet similarity <i>B</i>(<i>x</i>, <i>y</i>) = |<i>x</i> â© <i>y</i>| / max(|<i>x</i>|, |<i>y</i>|). The (<i>b</i><sub>1</sub>, <i>b</i><sub>2</sub>)-approximate Braun-Blanquet similarity search problem is to preprocess a collection of sets <i>P</i> such that, given a query set <i>q</i>, if there exists <i>x</i> â <i>P</i> with <i>B</i>(<i>q</i>, <i>x</i>) ⥠<i>b</i><sub>1</sub>, then we can efficiently return <i>x</i>â² â <i>P</i> with <i>B</i>(<i>q</i>, <i>x</i>â²) > <i>b</i><sub>2</sub>. \nWe present a simple data structure that solves this problem with space usage <i>O</i>(<i>n</i><sup>1+Ï</sup>log<i>n</i> + â<sub><i>x</i> â <i>P</i></sub>|<i>x</i>|) and query time <i>O</i>(|<i>q</i>|<i>n</i><sup>Ï</sup> log<i>n</i>) where <i>n</i> = |<i>P</i>| and Ï = log(1/<i>b</i><sub>1</sub>)/log(1/<i>b</i><sub>2</sub>). Making use of existing lower bounds for locality-sensitive hashing by OâDonnell et al. (TOCT 2014) we show that this value of Ï is tight across the parameter space, i.e., for every choice of constants 0 < <i>b</i><sub>2</sub> < <i>b</i><sub>1</sub> < 1. \nIn the case where all sets have the same size our solution strictly improves upon the value of Ï that can be obtained through the use of state-of-the-art data-independent techniques in the Indyk-Motwani locality-sensitive hashing framework (STOC 1998) such as Broderâs MinHash (CCS 1997) for Jaccard similarity and Andoni et al.âs cross-polytope LSH (NIPS 2015) for cosine similarity. Surprisingly, even though our solution is data-independent, for a large part of the parameter space we outperform the currently best data-<em>dependent</em> method by Andoni and Razenshteyn (STOC 2015).",
"title": ""
},
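The abstract above states both the Braun-Blanquet similarity B(x, y) = |x ∩ y| / max(|x|, |y|) and the exponent ρ = log(1/b1)/log(1/b2) explicitly, so the two quantities can be computed directly. The following is a small Python sketch of just these two formulas, not of the paper's data structure; the function names are made up for illustration.

```python
import math

def braun_blanquet(x: set, y: set) -> float:
    """Braun-Blanquet similarity: |x intersect y| / max(|x|, |y|)."""
    if not x or not y:
        return 0.0
    return len(x & y) / max(len(x), len(y))

def rho(b1: float, b2: float) -> float:
    """Exponent rho = log(1/b1) / log(1/b2), defined for 0 < b2 < b1 < 1."""
    assert 0.0 < b2 < b1 < 1.0
    return math.log(1.0 / b1) / math.log(1.0 / b2)

# Example: two small sets, and the exponent for thresholds (b1, b2) = (0.8, 0.5).
print(braun_blanquet({1, 2, 3, 4}, {3, 4, 5}))  # 2 / 4 = 0.5
print(rho(0.8, 0.5))                            # ~0.32
```

The exponent ρ is what governs the claimed space usage O(n^(1+ρ) log n + Σ|x|) and query time O(|q| n^ρ log n), so smaller ρ means a cheaper index and faster queries.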
{
"docid": "608bf85fa593c7ddff211c5bcc7dd20a",
"text": "We introduce a composite deep neural network architecture for supervised and language independent context sensitive lemmatization. The proposed method considers the task as to identify the correct edit tree representing the transformation between a word-lemma pair. To find the lemma of a surface word, we exploit two successive bidirectional gated recurrent structures the first one is used to extract the character level dependencies and the next one captures the contextual information of the given word. The key advantages of our model compared to the state-of-the-art lemmatizers such as Lemming and Morfette are (i) it is independent of human decided features (ii) except the gold lemma, no other expensive morphological attribute is required for joint learning. We evaluate the lemmatizer on nine languages Bengali, Catalan, Dutch, Hindi, Hungarian, Italian, Latin, Romanian and Spanish. It is found that except Bengali, the proposed method outperforms Lemming and Morfette on the other languages. To train the model on Bengali, we develop a gold lemma annotated dataset1 (having 1, 702 sentences with a total of 20, 257 word tokens), which is an additional contribution of this work.",
"title": ""
},
{
"docid": "ac1b28346ae9df1dd3b455d113551caf",
"text": "The new IEEE 802.11 standard, IEEE 802.11ax, has the challenging goal of serving more Uplink (UL) traffic and users as compared with his predecessor IEEE 802.11ac, enabling consistent and reliable streams of data (average throughput) per station. In this paper we explore several new IEEE 802.11ax UL scheduling mechanisms and compare between the maximum throughputs of unidirectional UDP Multi Users (MU) triadic. The evaluation is conducted based on Multiple-Input-Multiple-Output (MIMO) and Orthogonal Frequency Division Multiple Access (OFDMA) transmission multiplexing format in IEEE 802.11ax vs. the CSMA/CA MAC in IEEE 802.11ac in the Single User (SU) and MU modes for 1, 4, 8, 16, 32 and 64 stations scenario in reliable and unreliable channels. The comparison is conducted as a function of the Modulation and Coding Schemes (MCS) in use. In IEEE 802.11ax we consider two new flavors of acknowledgment operation settings, where the maximum acknowledgment windows are 64 or 256 respectively. In SU scenario the throughputs of IEEE 802.11ax are larger than those of IEEE 802.11ac by 64% and 85% in reliable and unreliable channels respectively. In MU-MIMO scenario the throughputs of IEEE 802.11ax are larger than those of IEEE 802.11ac by 263% and 270% in reliable and unreliable channels respectively. Also, as the number of stations increases, the advantage of IEEE 802.11ax in terms of the access delay also increases.",
"title": ""
},
{
"docid": "2383c90591822bc0c8cec2b1b2309b7a",
"text": "Apple's iPad has attracted a lot of attention since its release in 2010 and one area in which it has been adopted is the education sector. The iPad's large multi-touch screen, sleek profile and the ability to easily download and purchase a huge variety of educational applications make it attractive to educators. This paper presents a case study of the iPad's adoption in a primary school, one of the first in the world to adopt it. From interviews with teachers and IT staff, we conclude that the iPad's main strengths are the way in which it provides quick and easy access to information for students and the support it provides for collaboration. However, staff need to carefully manage both the teaching and the administrative environment in which the iPad is used, and we provide some lessons learned that can help other schools considering adopting the iPad in the classroom.",
"title": ""
},
{
"docid": "c7fb516fbba3293c92a00beaced3e95e",
"text": "Latent Dirichlet Allocation (LDA) is a generative model describing the observed data as being composed of a mixture of underlying unobserved topics, as introduced by Blei et al. (2003). A key hyperparameter of LDA is the number of underlying topics k, which must be estimated empirically in practice. Selecting the appropriate value of k is essentially selecting the correct model to represent the data; an important issue concerning the goodness of fit. We examine in the current work a series of metrics from literature on a quantitative basis by performing benchmarks against a generated dataset with a known value of k and evaluate the ability of each metric to recover the true value, varying over multiple levels of topic resolution in the Dirichlet prior distributions. Finally, we introduce a new metric and heuristic for estimating k and demonstrate improved performance over existing metrics from the literature on several benchmarks.",
"title": ""
},
{
"docid": "f03cc92b0bc69845b9f2b6c0c6f3168b",
"text": "Relational database management systems (RDBMSs) are powerful because they are able to optimize and answer queries against any relational database. A natural language interface (NLI) for a database, on the other hand, is tailored to support that specific database. In this work, we introduce a general purpose transfer-learnable NLI with the goal of learning one model that can be used as NLI for any relational database. We adopt the data management principle of separating data and its schema, but with the additional support for the idiosyncrasy and complexity of natural languages. Specifically, we introduce an automatic annotation mechanism that separates the schema and the data, where the schema also covers knowledge about natural language. Furthermore, we propose a customized sequence model that translates annotated natural language queries to SQL statements. We show in experiments that our approach outperforms previous NLI methods on the WikiSQL dataset and the model we learned can be applied to another benchmark dataset OVERNIGHT without retraining.",
"title": ""
},
{
"docid": "6c7172b5c91601646a7cdc502c88d22f",
"text": "In this paper, a number of options and issues are illustrated which companies and organizations seeking to incorporate environmental issues in product design and realization should consider. A brief overview and classification of a number of approaches for reducing the environmental impact is given, as well as their organizational impact. General characteristics, representative examples, and integration and information management issues of design tools supporting environmentally conscious product design are provided as well. 1 From Design for Manufacture to Design for the Life Cycle and Beyond One can argue that the “good old days” where a product was being designed, manufactured and sold to the customer with little or no subsequent concern are over. In the seventies, with the emergence of life-cycle engineering and concurrent engineering in the United States, companies became more aware of the need to include serviceability and maintenance issues in their design processes. A formal definition for Concurrent Engineering is given in (Winner, et al., 1988), as “a systematic approach to the integrated, concurrent design of products and their related processes, including manufacturing and support. This approach is intended to cause the developers, from the outset, to consider all elements of the product life cycle from conception through disposal, including quality, cost, schedule, and user requirements.” Although concurrent engineering seems to span the entire life-cycle of a product according to the preceding definition, its traditional focus has been on design, manufacturing, and maintenance. Perhaps one of the most striking areas where companies now have to be concerned is with the environment. The concern regarding environmental impact stems from the fact that, whether we want it or not, all our products affect in some way our environment during their life-span. In Figure 1, a schematic representation of a system’s life-cycle is given. Materials are mined from the earth, air and sea, processed into products, and distributed to consumers for usage, as represented by the flow from left to right in the top half of Figure 1.",
"title": ""
},
{
"docid": "962a653490e8afbcf13c47426c85ecec",
"text": "Alzheimer’s disease (AD) and mild cognitive impairment (MCI) are the most prevalent neurodegenerative brain diseases in elderly population. Recent studies on medical imaging and biological data have shown morphological alterations of subcortical structures in patients with these pathologies. In this work, we take advantage of these structural deformations for classification purposes. First, triangulated surface meshes are extracted from segmented hippocampus structures in MRI and point-to-point correspondences are established among population of surfaces using a spectral matching method. Then, a deep learning variational auto-encoder is applied on the vertex coordinates of the mesh models to learn the low dimensional feature representation. A multi-layer perceptrons using softmax activation is trained simultaneously to classify Alzheimer’s patients from normal subjects. Experiments on ADNI dataset demonstrate the potential of the proposed method in classification of normal individuals from early MCI (EMCI), late MCI (LMCI), and AD subjects with classification rates outperforming standard SVM based approach.",
"title": ""
},
{
"docid": "7ab232fbbda235c42e0dabb2b128ed59",
"text": "Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.",
"title": ""
},
{
"docid": "4b012d1dc18f18118a73488e934eff4d",
"text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: s u m m a r y Current drought information is based on indices that do not capture the joint behaviors of hydrologic variables. To address this limitation, the potential of copulas in characterizing droughts from multiple variables is explored in this study. Starting from the standardized index (SI) algorithm, a modified index accounting for seasonality is proposed for precipitation and streamflow marginals. Utilizing Indiana stations with long-term observations (a minimum of 80 years for precipitation and 50 years for streamflow), the dependence structures of precipitation and streamflow marginals with various window sizes from 1-to 12-months are constructed from empirical copulas. A joint deficit index (JDI) is defined by using the distribution function of copulas. This index provides a probability-based description of the overall drought status. Not only is the proposed JDI able to reflect both emerging and prolonged droughts in a timely manner, it also allows a month-by-month drought assessment such that the required amount of precipitation for achieving normal conditions in future can be computed. The use of JDI is generalizable to other hydrologic variables as evidenced by similar drought severities gleaned from JDIs constructed separately from precipitation and streamflow data. JDI further allows the construction of an inter-variable drought index, where the entire dependence structure of precipitation and streamflow marginals is preserved. Introduction Drought, as a prolonged status of water deficit, has been a challenging topic in water resources management. It is perceived as one of the most expensive and least understood natural disasters. In monetary terms, a typical drought costs American farmers and businesses $6–8 billion each year (WGA, 2004), more than damages incurred from floods and hurricanes. The consequences tend to be more severe in areas such as the mid-western part of the United States, where agriculture is the major economic driver. Unfortunately , though there is a strong need to develop an algorithm for characterizing and predicting droughts, it cannot be achieved easily either through physical or statistical analyses. The main obstacles are identification of complex drought-causing mechanisms, and lack of a precise (universal) scientific definition for droughts. When a drought event occurs, moisture deficits are observed in many hydrologic variables, such as precipitation, …",
"title": ""
},
{
"docid": "8ed247a04a8e5ab201807e0d300135a3",
"text": "We reproduce the Structurally Constrained Recurrent Network (SCRN) model, and then regularize it using the existing widespread techniques, such as naïve dropout, variational dropout, and weight tying. We show that when regularized and optimized appropriately the SCRN model can achieve performance comparable with the ubiquitous LSTMmodel in language modeling task on English data, while outperforming it on non-English data. Title and Abstract in Russian Воспроизведение и регуляризация SCRN модели Мы воспроизводим структурно ограниченную рекуррентную сеть (SCRN), а затем добавляем регуляризацию, используя существующие широко распространенные методы, такие как исключение (дропаут), вариационное исключение и связка параметров. Мы показываем, что при правильной регуляризации и оптимизации показатели SCRN сопоставимы с показателями вездесущей LSTM в задаче языкового моделирования на английских текстах, а также превосходят их на неанглийских данных.",
"title": ""
},
{
"docid": "b518deb76d6a59f6b88d58b563100f4b",
"text": "As part of the 50th anniversary of the Canadian Operational Research Society, we reviewed queueing applications by Canadian researchers and practitioners. We concentrated on finding real applications, but also considered theoretical contributions to applied areas that have been developed by the authors based on real applications. There were a surprising number of applications, many not well documented. Thus, this paper features examples of queueing theory applications over a spectrum of areas, years and types. One conclusion is that some of the successful queueing applications were achieved and ameliorated by using simple principles gained from studying queues and not by complex mathematical models.",
"title": ""
},
{
"docid": "f9692d0410cb97fd9c2ecf6f7b043b9f",
"text": "This paper develops and analyzes four energy scenarios for California that are both exploratory and quantitative. The businessas-usual scenario represents a pathway guided by outcomes and expectations emerging from California’s energy crisis. Three alternative scenarios represent contexts where clean energy plays a greater role in California’s energy system: Split Public is driven by local and individual activities; Golden State gives importance to integrated state planning; Patriotic Energy represents a national drive to increase energy independence. Future energy consumption, composition of electricity generation, energy diversity, and greenhouse gas emissions are analyzed for each scenario through 2035. Energy savings, renewable energy, and transportation activities are identified as promising opportunities for achieving alternative energy pathways in California. A combined approach that brings together individual and community activities with state and national policies leads to the largest energy savings, increases in energy diversity, and reductions in greenhouse gas emissions. Critical challenges in California’s energy pathway over the next decades identified by the scenario analysis include dominance of the transportation sector, dependence on fossil fuels, emissions of greenhouse gases, accounting for electricity imports, and diversity of the electricity sector. The paper concludes with a set of policy lessons revealed from the California energy scenarios. r 2003 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
fe8fcd0de803e1e871c46dae2508eb8d
|
Experiments with SVM to classify opinions in different domains
|
[
{
"docid": "095dbdc1ac804487235cdd0aeffe8233",
"text": "Sentiment analysis is the task of identifying whether the opinion expressed in a document is positive or negative about a given topic. Unfortunately, many of the potential applications of sentiment analysis are currently infeasible due to the huge number of features found in standard corpora. In this paper we systematically evaluate a range of feature selectors and feature weights with both Naı̈ve Bayes and Support Vector Machine classifiers. This includes the introduction of two new feature selection methods and three new feature weighting methods. Our results show that it is possible to maintain a state-of-the art classification accuracy of 87.15% while using less than 36% of the features.",
"title": ""
},
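The abstract above evaluates feature selectors and feature weights combined with Naïve Bayes and SVM classifiers. As a generic illustration only (not the paper's specific selectors, weights, or corpus), a scikit-learn pipeline with TF-IDF weighting, chi-squared feature selection, and a Naïve Bayes classifier might be sketched as follows; the toy reviews and the value of k are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Toy labelled reviews (1 = positive, 0 = negative); real corpora are far larger.
texts = ["great movie, loved it", "terrible plot and acting",
         "wonderful performances", "boring and far too long"]
labels = [1, 0, 1, 0]

# Feature weighting (TF-IDF) -> feature selection (chi-squared) -> classifier.
clf = Pipeline([
    ("tfidf",  TfidfVectorizer()),
    ("select", SelectKBest(chi2, k=5)),  # keep only the k highest-scoring features
    ("nb",     MultinomialNB()),         # swap in sklearn.svm.LinearSVC() for the SVM variant
])
clf.fit(texts, labels)
print(clf.predict(["loved the wonderful acting"]))
```

The point of the selection step is the one made in the abstract: a small fraction of the features can be enough to preserve classification accuracy while shrinking the model.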
{
"docid": "8a7ea746acbfd004d03d4918953d283a",
"text": "Sentiment analysis is an important current research area. This paper combines rule-based classification, supervised learning andmachine learning into a new combinedmethod. Thismethod is tested onmovie reviews, product reviews and MySpace comments. The results show that a hybrid classification can improve the classification effectiveness in terms of microand macro-averaged F1. F1 is a measure that takes both the precision and recall of a classifier’s effectiveness into account. In addition, we propose a semi-automatic, complementary approach in which each classifier can contribute to other classifiers to achieve a good level of effectiveness.",
"title": ""
}
] |
[
{
"docid": "e5d107b5f81d9cd1b6d5ac58339cc427",
"text": "While one of the first steps in many NLP systems is selecting what embeddings to use, we argue that such a step is better left for neural networks to figure out by themselves. To that end, we introduce a novel, straightforward yet highly effective method for combining multiple types of word embeddings in a single model, leading to state-of-the-art performance within the same model class on a variety of tasks. We subsequently show how the technique can be used to shed new insight into the usage of word embeddings in NLP systems.",
"title": ""
},
{
"docid": "258655a00ea8acde4e2bde42376c1ead",
"text": "A main puzzle of deep networks revolves around the absence of overfitting despite large overparametrization and despite the large capacity demonstrated by zero training error on randomly labeled data. In this note, we show that the dynamics associated to gradient descent minimization of nonlinear networks is topologically equivalent, near the asymptotically stable minima of the empirical error, to linear gradient system in a quadratic potential with a degenerate (for square loss) or almost degenerate (for logistic or crossentropy loss) Hessian. The proposition depends on the qualitative theory of dynamical systems and is supported by numerical results. Our main propositions extend to deep nonlinear networks two properties of gradient descent for linear networks, that have been recently established (1) to be key to their generalization properties: 1. Gradient descent enforces a form of implicit regularization controlled by the number of iterations, and asymptotically converges to the minimum norm solution for appropriate initial conditions of gradient descent. This implies that there is usually an optimum early stopping that avoids overfitting of the loss. This property, valid for the square loss and many other loss functions, is relevant especially for regression. 2. For classification, the asymptotic convergence to the minimum norm solution implies convergence to the maximum margin solution which guarantees good classification error for “low noise” datasets. This property holds for loss functions such as the logistic and cross-entropy loss independently of the initial conditions. The robustness to overparametrization has suggestive implications for the robustness of the architecture of deep convolutional networks with respect to the curse of dimensionality. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 1231216. 1 ar X iv :1 80 1. 00 17 3v 2 [ cs .L G ] 1 6 Ja n 20 18",
"title": ""
},
{
"docid": "7c171e744df03df658c02e899e197bd4",
"text": "In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous rate nerve fibers, important for encoding acoustic information at supra-threshold levels and in noise, are more susceptible to degeneration than high-spontaneous rate fibers. The change in auditory brainstem response (ABR) wave-V latency with noise level has been shown to be associated with auditory nerve deafferentation. Here, we measured ABR in a forward masking paradigm and evaluated wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low-spontaneous rate fibers results in a faster recovery of wave-V latency as the slow contribution of these fibers is reduced. Results showed that in young audiometrically normal listeners, a larger change in wave-V latency with increasing masker-to-probe interval was related to a greater effect of a preceding masker behaviorally. Further, the amount of wave-V latency change with masker-to-probe interval was positively correlated with the rate of change in forward masking detection thresholds. Although we cannot rule out central contributions, these findings are consistent with the hypothesis that auditory nerve fiber deafferentation occurs in humans and may predict how well individuals can hear in noisy environments.",
"title": ""
},
{
"docid": "c12d27988e70e9b3e6987ca2f0ca8bca",
"text": "In this tutorial, we introduce the basic theory behind Stega nography and Steganalysis, and present some recent algorithms and devel opm nts of these fields. We show how the existing techniques used nowadays are relate d to Image Processing and Computer Vision, point out several trendy applicati ons of Steganography and Steganalysis, and list a few great research opportunities j ust waiting to be addressed.",
"title": ""
},
{
"docid": "305f877227516eded75819bdf48ab26d",
"text": "Deep generative models have been successfully applied to many applications. However, existing works experience limitations when generating large images (the literature usually generates small images, e.g. 32× 32 or 128× 128). In this paper, we propose a novel scheme, called deep tensor adversarial generative nets (TGAN), that generates large high-quality images by exploring tensor structures. Essentially, the adversarial process of TGAN takes place in a tensor space. First, we impose tensor structures for concise image representation, which is superior in capturing the pixel proximity information and the spatial patterns of elementary objects in images, over the vectorization preprocess in existing works. Secondly, we propose TGAN that integrates deep convolutional generative adversarial networks and tensor super-resolution in a cascading manner, to generate high-quality images from random distributions. More specifically, we design a tensor super-resolution process that consists of tensor dictionary learning and tensor coefficients learning. Finally, on three datasets, the proposed TGAN generates images with more realistic textures, compared with state-of-the-art adversarial autoencoders. The size of the generated images is increased by over 8.5 times, namely 374× 374 in PASCAL2.",
"title": ""
},
{
"docid": "353500d18d56c0bf6dc13627b0517f41",
"text": "In order to accelerate the learning process in high dimensional reinforcement learning problems, TD methods such as Q-learning and Sarsa are usually combined with eligibility traces. The recently introduced DQN (Deep Q-Network) algorithm, which is a combination of Q-learning with a deep neural network, has achieved good performance on several games in the Atari 2600 domain. However, the DQN training is very slow and requires too many time steps to converge. In this paper, we use the eligibility traces mechanism and propose the deep Q(λ) network algorithm. The proposed method provides faster learning in comparison with the DQN method. Empirical results on a range of games show that the deep Q(λ) network significantly reduces learning time.",
"title": ""
},
{
"docid": "62f5640954e5b731f82599fb52ea816f",
"text": "This paper presents an energy-balance control strategy for a cascaded single-phase grid-connected H-bridge multilevel inverter linking n independent photovoltaic (PV) arrays to the grid. The control scheme is based on an energy-sampled data model of the PV system and enables the design of a voltage loop linear discrete controller for each array, ensuring the stability of the system for the whole range of PV array operating conditions. The control design is adapted to phase-shifted and level-shifted carrier pulsewidth modulations to share the control action among the cascade-connected bridges in order to concurrently synthesize a multilevel waveform and to keep each of the PV arrays at its maximum power operating point. Experimental results carried out on a seven-level inverter are included to validate the proposed approach.",
"title": ""
},
{
"docid": "0d0fd1c837b5e45b83ee590017716021",
"text": "General intelligence and personality traits from the Five-Factor model were studied as predictors of academic achievement in a large sample of Estonian schoolchildren from elementary to secondary school. A total of 3618 students (1746 boys and 1872 girls) from all over Estonia attending Grades 2, 3, 4, 6, 8, 10, and 12 participated in this study. Intelligence, as measured by the Raven’s Standard Progressive Matrices, was found to be the best predictor of students’ grade point average (GPA) in all grades. Among personality traits (measured by self-reports on the Estonian Big Five Questionnaire for Children in Grades 2 to 4 and by the NEO Five Factor Inventory in Grades 6 to 12), Openness, Agreeableness, and Conscientiousness correlated positively and Neuroticism correlated negatively with GPA in almost every grade. When all measured variables were entered together into a regression model, intelligence was still the strongest predictor of GPA, being followed by Agreeableness in Grades 2 to 4 and Conscientiousness in Grades 6 to 12. Interactions between predictor variables and age accounted for only a small percentage of variance in GPA, suggesting that academic achievement relies basically on the same mechanisms through the school years. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e9cc899155bd5f88ae1a3d5b88de52af",
"text": "This article reviews research evidence showing to what extent the chronic care model can improve the management of chronic conditions (using diabetes as an example) and reduce health care costs. Thirty-two of 39 studies found that interventions based on chronic care model components improved at least 1 process or outcome measure for diabetic patients. Regarding whether chronic care model interventions can reduce costs, 18 of 27 studies concerned with 3 examples of chronic conditions (congestive heart failure, asthma, and diabetes) demonstrated reduced health care costs or lower use of health care services. Even though the chronic care model has the potential to improve care and reduce costs, several obstacles hinder its widespread adoption.",
"title": ""
},
{
"docid": "2d998d0e0966acf04dfe377cde35aafa",
"text": "This paper proposes a generalization of the multi- Bernoulli filter called the labeled multi-Bernoulli filter that outputs target tracks. Moreover, the labeled multi-Bernoulli filter does not exhibit a cardinality bias due to a more accurate update approximation compared to the multi-Bernoulli filter by exploiting the conjugate prior form for labeled Random Finite Sets. The proposed filter can be interpreted as an efficient approximation of the δ-Generalized Labeled Multi-Bernoulli filter. It inherits the advantages of the multi-Bernoulli filter in regards to particle implementation and state estimation. It also inherits advantages of the δ-Generalized Labeled Multi-Bernoulli filter in that it outputs (labeled) target tracks and achieves better performance.",
"title": ""
},
{
"docid": "855a8cfdd9d01cd65fe32d18b9be4fdf",
"text": "Interest in business intelligence and analytics education has begun to attract IS scholars’ attention. In order to discover new research questions, there is a need for conducting a literature review of extant studies on BI&A education. This study identified 44 research papers through using Google Scholar related to BI&A education. This research contributes to the field of BI&A education by (a) categorizing the existing studies on BI&A education into the key five research foci, and (b) identifying the research gaps and providing the guide for future BI&A and IS research.",
"title": ""
},
{
"docid": "d8c40ed2d2b2970412cc8404576d0c80",
"text": "In this paper an adaptive control technique combined with the so-called IDA-PBC (Interconnexion Damping Assignment, Passivity Based Control) controller is proposed for the stabilization of a class of underactuated mechanical systems, namely, the Inertia Wheel Inverted Pendulum (IWIP). It has two degrees of freedom with one actuator. The IDA-PBC stabilizes for all initial conditions (except a set of zeros measure) the upward position of the IWIP. The efficiency of this controller depends on the tuning of several gains. Motivated by this issue we propose to automatically adapt some of these gains in order to regain performance rapidly. The effectiveness of the proposed adaptive scheme is demonstrated through numerical simulations and experimental results.",
"title": ""
},
{
"docid": "1073c1f4013f6c57259502391d75d356",
"text": "A long-standing dream of Artificial Intelligence (AI) has pursued to enrich computer programs with commonsense knowledge enabling machines to reason about our world. This paper offers a new practical insight towards the automation of commonsense reasoning with first-order logic (FOL) ontologies. We propose a new black-box testing methodology of FOL SUMO-based ontologies by exploiting WordNet and its mapping into SUMO. Our proposal includes a method for the (semi-)automatic creation of a very large set of tests and a procedure for its automated evaluation by using automated theorem provers (ATPs). Applying our testing proposal, we are able to successfully evaluate a) the competency of several translations of SUMO into FOL and b) the performance of various automated ATPs. In addition, we are also able to evaluate the resulting set of tests according to different quality criteria.",
"title": ""
},
{
"docid": "1053359e8374c47d4645c5609ffafaee",
"text": "In this paper, we derive a new infinite series representation for the trivariate non-central chi-squared distribution when the underlying correlated Gaussian variables have tridiagonal form of inverse covariance matrix. We make use of the Miller's approach and the Dougall's identity to derive the joint density function. Moreover, the trivariate cumulative distribution function (cdf) and characteristic function (chf) are also derived. Finally, bivariate noncentral chi-squared distribution and some known forms are shown to be special cases of the more general distribution. However, non-central chi-squared distribution for an arbitrary covariance matrix seems intractable with the Miller's approach.",
"title": ""
},
{
"docid": "31e8d60af8a1f9576d28c4c1e0a3db86",
"text": "Management of bulk sensor data is one of the challenging problems in the development of Internet of Things (IoT) applications. High volume of sensor data induces for optimal implementation of appropriate sensor data compression technique to deal with the problem of energy-efficient transmission, storage space optimization for tiny sensor devices, and cost-effective sensor analytics. The compression performance to realize significant gain in processing high volume sensor data cannot be attained by conventional lossy compression methods, which are less likely to exploit the intrinsic unique contextual characteristics of sensor data. In this paper, we propose SensCompr, a dynamic lossy compression method specific for sensor datasets and it is easily realizable with standard compression methods. Senscompr leverages robust statistical and information theoretic techniques and does not require specific physical modeling. It is an information-centric approach that exhaustively analyzes the inherent properties of sensor data for extracting the embedded useful information content and accordingly adapts the parameters of compression scheme to maximize compression gain while optimizing information loss. Senscompr is successfully applied to compress large sets of heterogeneous real sensor datasets like ECG, EEG, smart meter, accelerometer. To the best of our knowledge, for the first time 'sensor information content'-centric dynamic compression technique is proposed and implemented particularly for IoT-applications and this method is independent to sensor data types.",
"title": ""
},
{
"docid": "c69a480600fea74dab84290e6c0e2204",
"text": "Mobile cloud computing is computing of Mobile application through cloud. As we know market of mobile phones is growing rapidly. According to IDC, the premier global market intelligence firm, the worldwide Smartphone market grew 42. 5% year over year in the first quarter of 2012. With the growing demand of Smartphone the demand for fast computation is also growing. Inspite of comparatively more processing power and storage capability of Smartphone's, they still lag behind Personal Computers in meeting processing and storage demands of high end applications like speech recognition, security software, gaming, health services etc. Mobile cloud computing is an answer to intensive processing and storage demand of real-time and high end applications. Being in nascent stage, Mobile Cloud Computing has privacy and security issues which deter the users from adopting this technology. This review paper throws light on privacy and security issues of Mobile Cloud Computing.",
"title": ""
},
{
"docid": "83f1fc22d029b3a424afcda770a5af23",
"text": "Three species of Xerolycosa: Xerolycosa nemoralis (Westring, 1861), Xerolycosa miniata (C.L. Koch, 1834) and Xerolycosa mongolica (Schenkel, 1963), occurring in the Palaearctic Region are surveyed, illustrated and redescribed. Arctosa mongolica Schenkel, 1963 is removed from synonymy with Xerolycosa nemoralis and transferred to Xerolycosa, and the new combination Xerolycosa mongolica (Schenkel, 1963) comb. n. is established. One new synonymy, Xerolycosa undulata Chen, Song et Kim, 1998 syn.n. from Heilongjiang = Xerolycosa mongolica (Schenkel, 1963), is proposed. In addition, one more new combination is established, Trochosa pelengena (Roewer, 1960) comb. n., ex Xerolycosa.",
"title": ""
},
{
"docid": "e9bd226d50c9a6633c32b9162cbd14f4",
"text": "PURPOSE\nTo report clinical features and treatment outcomes of ocular juvenile xanthogranuloma (JXG).\n\n\nDESIGN\nRetrospective case series.\n\n\nPARTICIPANTS\nThere were 32 tumors in 31 eyes of 30 patients with ocular JXG.\n\n\nMETHODS\nReview of medical records.\n\n\nMAIN OUTCOME MEASURES\nTumor control, intraocular pressure (IOP), and visual acuity.\n\n\nRESULTS\nThe mean patient age at presentation was 51 months (median, 15 months; range, 1-443 months). Eye redness (12/30, 40%) and hyphema (4/30, 13%) were the most common presenting symptoms. Cutaneous JXG was concurrently present in 3 patients (3/30, 10%), and spinal JXG was present in 1 patient (1/30, 3%). The ocular tissue affected by JXG included the iris (21/31, 68%), conjunctiva (6/31, 19%), eyelid (2/31, 6%), choroid (2/31, 6%), and orbit (1/31, 3%). Those with iris JXG presented at a median age of 13 months compared with 30 months for those with conjunctival JXG. In the iris JXG group, mean IOP was 19 mmHg (median, 18 mmHg; range, 11-30 mmHg) and hyphema was noted in 8 eyes (8/21, 38%). The iris tumor was nodular (16/21, 76%) or diffuse (5/21, 24%). Fine-needle aspiration biopsy was used in 10 cases and confirmed JXG cytologically in all cases. The iris lesion was treated with topical (18/21, 86%) and/or periocular (4/21, 19%) corticosteroids. The eyelid, conjunctiva, and orbital JXG were treated with excisional biopsy in 5 patients (5/9, 56%), topical corticosteroids in 2 patients (2/9, 22%), and observation in 2 patients (2/9, 22%). Of 28 patients with a mean follow-up of 15 months (median, 6 months; range, 1-68 months), tumor regression was achieved in all cases, without recurrence. Two patients were lost to follow-up. Upon follow-up of the iris JXG group, visual acuity was stable or improved (18/19 patients, 95%) and IOP was controlled long-term without medication (14/21 patients, 74%). No eyes were managed with enucleation.\n\n\nCONCLUSIONS\nOcular JXG preferentially affects the iris and is often isolated without cutaneous involvement. Iris JXG responds to topical or periocular corticosteroids, often with stabilization or improvement of vision and IOP.",
"title": ""
},
{
"docid": "1e7721225d84896a72f2ea790570ecbd",
"text": "We have developed a Blumlein line pulse generator which utilizes the superposition of electrical pulses launched from two individually switched pulse forming lines. By using a fast power MOSFET as a switch on each end of the Blumlein line, we were able to generate pulses with amplitudes of 1 kV across a 100-Omega load. Pulse duration and polarity can be controlled by the temporal delay in the triggering of the two switches. In addition, the use of identical switches allows us to overcome pulse distortions arising from the use of non-ideal switches in the traditional Blumlein configuration. With this pulse generator, pulses with durations between 8 and 300 ns were applied to Jurkat cells (a leukemia cell line) to investigate the pulse dependent increase in calcium levels. The development of the calcium levels in individual cells was studied by spinning-disc confocal fluorescent microscopy with the calcium indicator, fluo-4. With this fast imaging system, fluorescence changes, representing calcium mobilization, could be resolved with an exposure of 5 ms every 18 ms. For a 60-ns pulse duration, each rise in intracellular calcium was greater as the electric field strength was increased from 25 kV/cm to 100 kV/cm. Only for the highest electric field strength is the response dependent on the presence of extracellular calcium. The results complement ion-exchange mechanisms previously observed during the charging of cellular membranes, which were suggested by observations of membrane potential changes during exposure.",
"title": ""
},
{
"docid": "3348e5aaa5f610f47e11f58aa1094d4d",
"text": "Accountability has emerged as a critical concept related to data protection in cloud ecosystems. It is necessary to maintain chains of accountability across cloud ecosystems. This is to enhance the confidence in the trust that cloud actors have while operating in the cloud. This paper is concerned with accountability in the cloud. It presents a conceptual model, consisting of attributes, practices and mechanisms for accountability in the cloud. The proposed model allows us to explain, in terms of accountability attributes, cloud-mediated interactions between actors. This forms the basis for characterizing accountability relationships between cloud actors, and hence chains of accountability in cloud ecosystems.",
"title": ""
}
] |
scidocsrr
|
afa4b96604b51dfd4b8c09d1433f174b
|
ACOUSTIC SCENE CLASSIFICATION USING PARALLEL COMBINATION OF LSTM AND
|
[
{
"docid": "afee419227629f8044b5eb0addd65ce3",
"text": "Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4-6% relative improvement in WER over an LSTM, the strongest of the three individual models.",
"title": ""
},
{
"docid": "29c91c8d6f7faed5d23126482a2f553b",
"text": "In this article, we present an account of the state of the art in acoustic scene classification (ASC), the task of classifying environments from the sounds they produce. Starting from a historical review of previous research in this area, we define a general framework for ASC and present different implementations of its components. We then describe a range of different algorithms submitted for a data challenge that was held to provide a general and fair benchmark for ASC techniques. The data set recorded for this purpose is presented along with the performance metrics that are used to evaluate the algorithms and statistical significance tests to compare the submitted methods.",
"title": ""
},
{
"docid": "927afdfa9f14c96a034d78be03936ff8",
"text": "Multimedia event detection (MED) is the task of detecting given events (e.g. birthday party, making a sandwich) in a large collection of video clips. While visual features and automatic speech recognition typically provide the best features for this task, nonspeech audio can also contribute useful information, such as crowds cheering, engine noises, or animal sounds. MED is typically formulated as a two-stage process: the first stage generates clip-level feature representations, often by aggregating frame-level features; the second stage performs binary or multi-class classification to decide whether a given event occurs in a video clip. Both stages are usually performed \"statically\", i.e. using only local temporal information, or bag-of-words models. In this paper, we introduce longer-range temporal information with deep recurrent neural networks (RNNs) for both stages. We classify each audio frame among a set of semantic units called \"noisemes\" the sequence of frame-level confidence distributions is used as a variable-length clip-level representation. Such confidence vector sequences are then fed into long short-term memory (LSTM) networks for clip-level classification. We observe improvements in both frame-level and clip-level performance compared to SVM and feed-forward neural network baselines.",
"title": ""
},
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
}
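Since the last abstract above describes ADADELTA, a per-dimension learning rate method, a brief NumPy sketch of its update rule (accumulate a decaying average of squared gradients, scale the step by the ratio of the two RMS terms, then accumulate the squared update) is given below. It is applied to a toy quadratic purely for illustration; the hyperparameter values are arbitrary and this is not the paper's reference implementation.

```python
import numpy as np

def adadelta_step(x, grad, state, rho=0.95, eps=1e-6):
    """One ADADELTA update: accumulate E[g^2], scale by RMS[dx]/RMS[g], accumulate E[dx^2]."""
    Eg2, Edx2 = state
    Eg2 = rho * Eg2 + (1.0 - rho) * grad ** 2
    dx = -np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps) * grad
    Edx2 = rho * Edx2 + (1.0 - rho) * dx ** 2
    return x + dx, (Eg2, Edx2)

# Toy problem: minimise f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x = np.array([3.0, -2.0])
state = (np.zeros_like(x), np.zeros_like(x))
for _ in range(500):
    x, state = adadelta_step(x, x, state)
print(x)  # should have moved towards the minimum at the origin
```

Note that no learning rate appears anywhere in the update, which is the property the abstract emphasizes: the step size is derived entirely from the two running averages.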
] |
[
{
"docid": "02156199912027e9230b3c000bcbe87b",
"text": "Voice conversion (VC) using sequence-to-sequence learning of context posterior probabilities is proposed. Conventional VC using shared context posterior probabilities predicts target speech parameters from the context posterior probabilities estimated from the source speech parameters. Although conventional VC can be built from non-parallel data, it is difficult to convert speaker individuality such as phonetic property and speaking rate contained in the posterior probabilities because the source posterior probabilities are directly used for predicting target speech parameters. In this work, we assume that the training data partly include parallel speech data and propose sequence-to-sequence learning between the source and target posterior probabilities. The conversion models perform non-linear and variable-length transformation from the source probability sequence to the target one. Further, we propose a joint training algorithm for the modules. In contrast to conventional VC, which separately trains the speech recognition that estimates posterior probabilities and the speech synthesis that predicts target speech parameters, our proposed method jointly trains these modules along with the proposed probability conversion modules. Experimental results demonstrate that our approach outperforms the conventional VC.",
"title": ""
},
{
"docid": "e648fb690dae270c4e63442a49aacaa9",
"text": "It is argued that the concept of free will, like the concept of truth in formal languages, requires a separation between an object level and a meta-level for being consistently defined. The Jamesian two-stage model, which deconstructs free will into the causally open “free” stage with its closure in the “will” stage, is implicitly a move in this direction. However, to avoid the dilemma of determinism, free will additionally requires an infinite regress of causal meta-stages, making free choice a hypertask. We use this model to define free will of the rationalist-compatibilist type. This is shown to provide a natural three-way distinction between quantum indeterminism, freedom and free will, applicable respectively to artificial intelligence (AI), animal agents and human agents. We propose that the causal hierarchy in our model corresponds to a hierarchy of Turing uncomputability. Possible neurobiological and behavioral tests to demonstrate free will experimentally are suggested. Ramifications of the model for physics, evolutionary biology, neuroscience, neuropathological medicine and moral philosophy are briefly outlined.",
"title": ""
},
{
"docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21",
"text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.",
"title": ""
},
{
"docid": "12ee85d0fa899e4e864bc1c30dedcd22",
"text": "An object-oriented simulation (OOS) consists of a set of objects that interact with each other over time. This paper provides a thorough introduction to OOS, addresses the important issue of composition versus inheritance, describes frames and frameworks for OOS, and presents an example of a network simulation language as an illustration of OOS.",
"title": ""
},
{
"docid": "62ee277e32395dd9d5883e3160d2cf7a",
"text": "Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available here1.",
"title": ""
},
{
"docid": "3003c878b36fa5c7be329cd3bb226dea",
"text": "We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the β-TCVAE (Total Correlation Variational Autoencoder) algorithm, a refinement and plug-in replacement of the β-VAE for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the model is trained using our framework. Learning disentangled representations without supervision is a difficult open problem. Disentangled variables are generally considered to contain interpretable semantic information and reflect separate factors of variation in the data. While the definition of disentanglement is open to debate, many believe a factorial representation, one with statistically independent variables, is a good starting point [1, 2, 3]. Such representations distill information into a compact form which is oftentimes semantically meaningful and useful for a variety of tasks [2, 4]. For instance, it is found that such representations are more generalizable and robust against adversarial attacks [5]. Many state-of-the-art methods for learning disentangled representations are based on re-weighting parts of an existing objective. For instance, it is claimed that mutual information between latent variables and the observed data can encourage the latents into becoming more interpretable [6]. It is also argued that encouraging independence between latent variables induces disentanglement [7]. However, there is no strong evidence linking factorial representations to disentanglement. In part, this can be attributed to weak qualitative evaluation procedures. While traversals in the latent representation can qualitatively illustrate disentanglement, quantitative measures of disentanglement are in their infancy. In this paper, we: • show a decomposition of the variational lower bound that can be used to explain the success of the β-VAE [7] in learning disentangled representations. • propose a simple method based on weighted minibatches to stochastically train with arbitrary weights on the terms of our decomposition without any additional hyperparameters. • introduce the β-TCVAE, which can be used as a plug-in replacement for the β-VAE with no extra hyperparameters. Empirical evaluations suggest that the β-TCVAE discovers more interpretable representations than existing methods, while also being fairly robust to random initialization. • propose a new information-theoretic disentanglement metric, which is classifier-free and generalizable to arbitrarily-distributed and non-scalar latent variables. While Kim & Mnih [8] have independently proposed augmenting VAEs with an equivalent total correlation penalty to the β-TCVAE, their proposed training method differs from ours and requires an auxiliary discriminator network. 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada.",
"title": ""
},
{
"docid": "32fd7a91091f74a5ea55226aa44403d3",
"text": "Previous research has shown that patients with schizophrenia are impaired in reinforcement learning tasks. However, behavioral learning curves in such tasks originate from the interaction of multiple neural processes, including the basal ganglia- and dopamine-dependent reinforcement learning (RL) system, but also prefrontal cortex-dependent cognitive strategies involving working memory (WM). Thus, it is unclear which specific system induces impairments in schizophrenia. We recently developed a task and computational model allowing us to separately assess the roles of RL (slow, cumulative learning) mechanisms versus WM (fast but capacity-limited) mechanisms in healthy adult human subjects. Here, we used this task to assess patients' specific sources of impairments in learning. In 15 separate blocks, subjects learned to pick one of three actions for stimuli. The number of stimuli to learn in each block varied from two to six, allowing us to separate influences of capacity-limited WM from the incremental RL system. As expected, both patients (n = 49) and healthy controls (n = 36) showed effects of set size and delay between stimulus repetitions, confirming the presence of working memory effects. Patients performed significantly worse than controls overall, but computational model fits and behavioral analyses indicate that these deficits could be entirely accounted for by changes in WM parameters (capacity and reliability), whereas RL processes were spared. These results suggest that the working memory system contributes strongly to learning impairments in schizophrenia.",
"title": ""
},
{
"docid": "3d6886b96d1a6fdf1339ce4c2e2b76af",
"text": "Crisis informatics is a field of research that investigates the use of computer-mediated communication— including social media—by members of the public and other entities during times of mass emergency. Supporting this type of research is challenging because large amounts of ephemeral event data can be generated very quickly and so must then be just as rapidly captured. Such data sets are challenging to analyze because of their heterogeneity and size. We have been designing, developing, and deploying software infrastructure to enable the large-scale collection and analysis of social media data during crisis events. We report on the challenges encountered when working in this space, the desired characteristics of such infrastructure, and the techniques, technology, and architectures that have been most useful in providing both scalability and flexibility. We also discuss the types of analytics this infrastructure supports and implications for future crisis informatics research.",
"title": ""
},
{
"docid": "1940721177615adccce0906e7c93cd28",
"text": "Pattern Matching is a computationally intensive task used in many research fields and real world applications. Due to the ever-growing volume of data to be processed, and increasing link speeds, the number of patterns to be matched has risen significantly. In this paper we explore the parallel capabilities of modern General Purpose Graphics Processing Units (GPGPU) applications for high speed pattern matching. A highly compressed failure-less Aho-Corasick algorithm is presented for Intrusion Detection Systems on off-the-shelf hardware. This approach maximises the bandwidth for data transfers between the host and the Graphics Processing Unit (GPU). Experiments are performed on multiple alphabet sizes, demonstrating the capabilities of the library to be used in different research fields, while sustaining an adequate throughput for intrusion detection systems or DNA sequencing. The work also explores the performance impact of adequate prefix matching for alphabet sizes and varying pattern numbers achieving speeds up to 8Gbps and low memory consumption for intrusion detection systems.",
"title": ""
},
{
"docid": "e8e1bf877e45de0d955d8736c342ec76",
"text": "Parking guidance and information (PGI) systems are becoming important parts of intelligent transportation systems due to the fact that cars and infrastructure are becoming more and more connected. One major challenge in developing efficient PGI systems is the uncertain nature of parking availability in parking facilities (both on-street and off-street). A reliable PGI system should have the capability of predicting the availability of parking at the arrival time with reliable accuracy. In this paper, we study the nature of the parking availability data in a big city and propose a multivariate autoregressive model that takes into account both temporal and spatial correlations of parking availability. The model is used to predict parking availability with high accuracy. The prediction errors are used to recommend the parking location with the highest probability of having at least one parking spot available at the estimated arrival time. The results are demonstrated using real-time parking data in the areas of San Francisco and Los Angeles.",
"title": ""
},
{
"docid": "be3e02812e35000b39e4608afc61f229",
"text": "The growing use of control access systems based on face recognition shed light over the need for even more accurate systems to detect face spoofing attacks. In this paper, an extensive analysis on face spoofing detection works published in the last decade is presented. The analyzed works are categorized by their fundamental parts, i.e., descriptors and classifiers. This structured survey also brings a comparative performance analysis of the works considering the most important public data sets in the field. The methodology followed in this work is particularly relevant to observe temporal evolution of the field, trends in the existing approaches, Corresponding author: Luciano Oliveira, tel. +55 71 3283-9472 Email addresses: luiz.otavio@ufba.br (Luiz Souza), lrebouca@ufba.br (Luciano Oliveira), mauricio@dcc.ufba.br (Mauricio Pamplona), papa@fc.unesp.br (Joao Papa) to discuss still opened issues, and to propose new perspectives for the future of face spoofing detection.",
"title": ""
},
{
"docid": "75ed4cabbb53d4c75fda3a291ea0ab67",
"text": "Optimization of energy consumption in future intelligent energy networks (or Smart Grids) will be based on grid-integrated near-real-time communications between various grid elements in generation, transmission, distribution and loads. This paper discusses some of the challenges and opportunities of communications research in the areas of smart grid and smart metering. In particular, we focus on some of the key communications challenges for realizing interoperable and future-proof smart grid/metering networks, smart grid security and privacy, and how some of the existing networking technologies can be applied to energy management. Finally, we also discuss the coordinated standardization efforts in Europe to harmonize communications standards and protocols.",
"title": ""
},
{
"docid": "b75336a7470fe2b002e742dbb6bfa8d5",
"text": "In Intelligent Tutoring System (ITS), tracing the student's knowledge state during learning has been studied for several decades in order to provide more supportive learning instructions. In this paper, we propose a novel model for knowledge tracing that i) captures students' learning ability and dynamically assigns students into distinct groups with similar ability at regular time intervals, and ii) combines this information with a Recurrent Neural Network architecture known as Deep Knowledge Tracing. Experimental results confirm that the proposed model is significantly better at predicting student performance than well known state-of-the-art techniques for student modelling.",
"title": ""
},
{
"docid": "14b6af9d7199f724112021f81694c7ea",
"text": "Much research indicates that East Asians, more than Americans, explain events with reference to the context. The authors examined whether East Asians also attend to the context more than Americans do. In Study 1, Japanese and Americans watched animated vignettes of underwater scenes and reported the contents. In a subsequent recognition test, they were shown previously seen objects as well as new objects, either in their original setting or in novel settings, and then were asked to judge whether they had seen the objects. Study 2 replicated the recognition task using photographs of wildlife. The results showed that the Japanese (a) made more statements about contextual information and relationships than Americans did and (b) recognized previously seen objects more accurately when they saw them in their original settings rather than in the novel settings, whereas this manipulation had relatively little effect on Americans.",
"title": ""
},
{
"docid": "adad5599122e63cde59322b7ba46461b",
"text": "Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance i.e. they respond systematically to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning system significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in a disjoint domain.",
"title": ""
},
{
"docid": "050c701f2663f4fa85aadd65a5dc96f2",
"text": "The availability of multiple, essentially complete genome sequences of prokaryotes and eukaryotes spurred both the demand and the opportunity for the construction of an evolutionary classification of genes from these genomes. Such a classification system based on orthologous relationships between genes appears to be a natural framework for comparative genomics and should facilitate both functional annotation of genomes and large-scale evolutionary studies. We describe here a major update of the previously developed system for delineation of Clusters of Orthologous Groups of proteins (COGs) from the sequenced genomes of prokaryotes and unicellular eukaryotes and the construction of clusters of predicted orthologs for 7 eukaryotic genomes, which we named KOGs after euk aryotic o rthologous g roups. The COG collection currently consists of 138,458 proteins, which form 4873 COGs and comprise 75% of the 185,505 (predicted) proteins encoded in 66 genomes of unicellular organisms. The euk aryotic o rthologous g roups (KOGs) include proteins from 7 eukaryotic genomes: three animals (the nematode Caenorhabditis elegans, the fruit fly Drosophila melanogaster and Homo sapiens), one plant, Arabidopsis thaliana, two fungi (Saccharomyces cerevisiae and Schizosaccharomyces pombe), and the intracellular microsporidian parasite Encephalitozoon cuniculi. The current KOG set consists of 4852 clusters of orthologs, which include 59,838 proteins, or ~54% of the analyzed eukaryotic 110,655 gene products. Compared to the coverage of the prokaryotic genomes with COGs, a considerably smaller fraction of eukaryotic genes could be included into the KOGs; addition of new eukaryotic genomes is expected to result in substantial increase in the coverage of eukaryotic genomes with KOGs. Examination of the phyletic patterns of KOGs reveals a conserved core represented in all analyzed species and consisting of ~20% of the KOG set. This conserved portion of the KOG set is much greater than the ubiquitous portion of the COG set (~1% of the COGs). In part, this difference is probably due to the small number of included eukaryotic genomes, but it could also reflect the relative compactness of eukaryotes as a clade and the greater evolutionary stability of eukaryotic genomes. The updated collection of orthologous protein sets for prokaryotes and eukaryotes is expected to be a useful platform for functional annotation of newly sequenced genomes, including those of complex eukaryotes, and genome-wide evolutionary studies.",
"title": ""
},
{
"docid": "1377bac68319fcc57fbafe6c21e89107",
"text": "In recent years, robotics in agriculture sector with its implementation based on precision agriculture concept is the newly emerging technology. The main reason behind automation of farming processes are saving the time and energy required for performing repetitive farming tasks and increasing the productivity of yield by treating every crop individually using precision farming concept. Designing of such robots is modeled based on particular approach and certain considerations of agriculture environment in which it is going to work. These considerations and different approaches are discussed in this paper. Also, prototype of an autonomous Agriculture Robot is presented which is specifically designed for seed sowing task only. It is a four wheeled vehicle which is controlled by LPC2148 microcontroller. Its working is based on the precision agriculture which enables efficient seed sowing at optimal depth and at optimal distances between crops and their rows, specific for each crop type.",
"title": ""
},
{
"docid": "116fd1ecd65f7ddfdfad6dca09c12876",
"text": "Malicious hardware Trojan circuitry inserted in safety-critical applications is a major threat to national security. In this work, we propose a novel application of a key-based obfuscation technique to achieve security against hardware Trojans. The obfuscation scheme is based on modifying the state transition function of a given circuit by expanding its reachable state space and enabling it to operate in two distinct modes -- the normal mode and the obfuscated mode. Such a modification obfuscates the rareness of the internal circuit nodes, thus making it difficult for an adversary to insert hard-to-detect Trojans. It also makes some inserted Trojans benign by making them activate only in the obfuscated mode. The combined effect leads to higher Trojan detectability and higher level of protection against such attack. Simulation results for a set of benchmark circuits show that the scheme is capable of achieving high levels of security at modest design overhead.",
"title": ""
},
{
"docid": "789fe916396c5a57a0327618d5efc74d",
"text": "In object detection, an intersection over union (IoU) threshold is required to define positives and negatives. An object detector, trained with low IoU threshold, e.g. 0.5, usually produces noisy detections. However, detection performance tends to degrade with increasing the IoU thresholds. Two main factors are responsible for this: 1) overfitting during training, due to exponentially vanishing positive samples, and 2) inference-time mismatch between the IoUs for which the detector is optimal and those of the input hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, is proposed to address these problems. It consists of a sequence of detectors trained with increasing IoU thresholds, to be sequentially more selective against close false positives. The detectors are trained stage by stage, leveraging the observation that the output of a detector is a good distribution for training the next higher quality detector. The resampling of progressively improved hypotheses guarantees that all detectors have a positive set of examples of equivalent size, reducing the overfitting problem. The same cascade procedure is applied at inference, enabling a closer match between the hypotheses and the detector quality of each stage. A simple implementation of the Cascade R-CNN is shown to surpass all single-model object detectors on the challenging COCO dataset. Experiments also show that the Cascade R-CNN is widely applicable across detector architectures, achieving consistent gains independently of the baseline detector strength. The code is available at https://github.com/zhaoweicai/cascade-rcnn.",
"title": ""
},
{
"docid": "956541e525760ae663028d7b73d6fb46",
"text": "Regression testing is an important activity but can get expensive for large test suites. Test-suite reduction speeds up regression testing by identifying and removing redundant tests based on a given set of requirements. Traditional research on test-suite reduction is rather diverse but most commonly shares three properties: (1) requirements are defined by a coverage criterion such as statement coverage; (2) the reduced test suite has to satisfy all the requirements as the original test suite; and (3) the quality of the reduced test suites is measured on the software version on which the reduction is performed. These properties make it hard for test engineers to decide how to use reduced test suites. We address all three properties of traditional test-suite reduction: (1) we evaluate test-suite reduction with requirements defined by killed mutants; (2) we evaluate inadequate reduction that does not require reduced test suites to satisfy all the requirements; and (3) we propose evolution-aware metrics that evaluate the quality of the reduced test suites across multiple software versions. Our evaluations allow a more thorough exploration of trade-offs in test-suite reduction, and our evolution-aware metrics show how the quality of reduced test suites can change after the version where the reduction is performed. We compare the trade-offs among various reductions on 18 projects with a total of 261,235 tests over 3,590 commits and a cumulative history spanning 35 years of development. Our results help test engineers make a more informed decision about balancing size, coverage, and fault-detection loss of reduced test suites.",
"title": ""
}
] |
scidocsrr
|
bd6507e6373311ec32b58755a49880a5
|
An Infinite Hidden Markov Model for Short-term Interest Rates
|
[
{
"docid": "c9cc65bb205cd758654245d69e467d45",
"text": "Résumé This study considers the time series behavior of the U.S. real interest rate from 1961 to 1986. We provide a statistical characterization of the series using the methodology of Hamilton (1989), by allowing three possible regimes affecting both the mean and variance of the series. The results suggest that the ex-post real interest rate is essentially random around a mean that is different for the periods 1961-1973, 1973-1980 and 1980-1986. The variance of the process is also different in these episodes being higher in both the 1973-1980 and 1980-1986 sub-periods. The inflation rate series is also analyzed using a three regime framework and again our results show interesting patterns with shifts in both mean and variance. Various model selection tests are run and both an ex-ante real interest rate and an expected inflation series are constructed. Finally, we make clear how our results can explain some recent findings in the literature. Cette étude s'intéresse au comportement des séries du taux d'intérêt réel américain de 1961 à 1986. En utilisant la méthodologie d'Hamilton (1989), la modélisation statistique des séries se fait en postulant trois régimes possibles affectant la moyenne et la variance de celles-ci. Les résultats suggèrent que le taux d'intérêt réel ex-post est essentiellement un processus non corrélé et centré sur une moyenne qui diffère sur les périodes 1961-1973, 1973-1980 et 1980-1986. La variance du processus est aussi différente pour chacune de ces périodes, étant plus élevée dans les sous périodes 1973-1980 et 1980-1986. Les séries du taux d'inflation sont aussi analysées à la lumière de ce modèle à trois régimes et les résultats traduisent encore un comportement intéressant de celles-ci, avec des changements dans la moyenne et la variance. Différents tests de spécification sont utilisés et des séries, à la fois du taux d'intérêt réel ex-ante et de l'inflation anticipée, sont construites. Enfin, Il est montré comment ces résultats peuvent expliquer certaines conclusion récentes de la littérature.",
"title": ""
}
] |
[
{
"docid": "8eb15b09807c1c26b7fbd8b73e11ab2b",
"text": "The work of managers in small and medium-sized enterprises is very information-intensive and the environment in which it is done is very information rich. But are managers able to exploit the wealth of information which surrounds them? And how can information be managed in organisations so that its potential for improving business performance and enhancing the competitiveness of these enterprises can be realised? Answers to these questions lie in clarifying the context of the practice of information management by exploring aspects of organisations and managerial work and in exploring the nature of information at the level of the organisation and the individual manager. From these answers it is possible to suggest some guidelines for managing the integration of business strategy and information, the adoption of a broadly-based definition of information and the development of information capabilities.",
"title": ""
},
{
"docid": "89835907e8212f7980c35ae12d711339",
"text": "In this letter, a novel ultra-wideband (UWB) bandpass filter with compact size and improved upper-stopband performance has been studied and implemented using multiple-mode resonator (MMR). The MMR is formed by attaching three pairs of circular impedance-stepped stubs in shunt to a high impedance microstrip line. By simply adjusting the radius of the circles of the stubs, the resonant modes of the MMR can be roughly allocated within the 3.1-10.6 GHz UWB band while suppressing the spurious harmonics in the upper-stopband. In order to enhance the coupling degree, two interdigital coupled-lines are used in the input and output sides. Thus, a predicted UWB passband is realized. Meanwhile, the insertion loss is higher than 30.0 dB in the upper-stopband from 12.1 to 27.8 GHz. Finally, the filter is successfully designed and fabricated. The EM-simulated and the measured results are presented in this work where excellent agreement between them is obtained.",
"title": ""
},
{
"docid": "81e9b0223d1f5ca74738646ca1f31ca9",
"text": "Limit studies on Dynamic Voltage and Frequency Scaling (DVFS) provide apparently contradictory conclusions. On the one hand early limit studies report that DVFS is effective at large timescales (on the order of million(s) of cycles) with large scaling overheads (on the order of tens of microseconds), and they conclude that there is no need for small overhead DVFS at small timescales. Recent work on the other hand—motivated by the surge of on-chip voltage regulator research—explores the potential of fine-grained DVFS and reports substantial energy savings at timescales of hundreds of cycles (while assuming no scaling overhead).\n This article unifies these apparently contradictory conclusions through a DVFS limit study that simultaneously explores timescale and scaling speed. We find that coarse-grained DVFS is unaffected by timescale and scaling speed, however, fine-grained DVFS may lead to substantial energy savings for memory-intensive workloads. Inspired by these insights, we subsequently propose a fine-grained microarchitecture-driven DVFS mechanism that scales down voltage and frequency upon individual off-chip memory accesses using on-chip regulators. Fine-grained DVFS reduces energy consumption by 12% on average and up to 23% over a collection of memory-intensive workloads for an aggressively clock-gated processor, while incurring an average 0.08% performance degradation (and at most 0.14%). We also demonstrate that the proposed fine-grained DVFS mechanism is orthogonal to existing coarse-grained DVFS policies, and further reduces energy by 6% on average and up to 11% for memory-intensive applications with limited performance impact (at most 0.7%).",
"title": ""
},
{
"docid": "c94001a32f92f5f9125f3118b0640644",
"text": "Traditional remote-server-exploiting malware is quickly evolving and adapting to the new web-centric computing paradigm. By leveraging the large population of (insecure) web sites and exploiting the vulnerabilities at client-side modern (complex) browsers (and their extensions), web-based malware becomes one of the most severe and common infection vectors nowadays. While traditional malware collection and analysis are mainly focusing on binaries, it is important to develop new techniques and tools for collecting and analyzing web-based malware, which should include a complete web-based malicious logic to reflect the dynamic, distributed, multi-step, and multi-path web infection trails, instead of just the binaries executed at end hosts. This paper is a first attempt in this direction to automatically collect web-based malware scenarios (including complete web infection trails) to enable fine-grained analysis. Based on the collections, we provide the capability for offline \"live\" replay, i.e., an end user (e.g., an analyst) can faithfully experience the original infection trail based on her current client environment, even when the original malicious web pages are not available or already cleaned. Our evaluation shows that WebPatrol can collect/cover much more complete infection trails than state-of-the-art honeypot systems such as PHoneyC [11] and Capture-HPC [1]. We also provide several case studies on the analysis of web-based malware scenarios we have collected from a large national education and research network, which contains around 35,000 web sites.",
"title": ""
},
{
"docid": "05db9a684a537fdf1234e92047618e18",
"text": "Globally the internet is been accessed by enormous people within their restricted domains. When the client and server exchange messages among each other, there is an activity that can be observed in log files. Log files give a detailed description of the activities that occur in a network that shows the IP address, login and logout durations, the user's behavior etc. There are several types of attacks occurring from the internet. Our focus of research in this paper is Denial of Service (DoS) attacks with the help of pattern recognition techniques in data mining. Through which the Denial of Service attack is identified. Denial of service is a very dangerous attack that jeopardizes the IT resources of an organization by overloading with imitation messages or multiple requests from unauthorized users.",
"title": ""
},
{
"docid": "86e2873956b79e6bc9826763096e639c",
"text": "ever do anything that is a waste of time – and be prepared to wage long, tedious wars over this principle, \" said Michael O'Connor, project manager at Trimble Navigation in Christchurch, New Zealand. This product group at Trimble is typical of the homegrown approach to agile software development methodologies. While interest in agile methodologies has blossomed in the past two years, its roots go back more than a decade. Teams using early versions of Scrum, Dynamic Systems Development Methodology (DSDM), and adaptive software development (ASD) were delivering successful projects in the early-to mid-1990s. This article attempts to answer the question, \" What constitutes agile software development? \" Because of the breadth of agile approaches and the people who practice them, this is not as easy a question to answer as one might expect. I will try to answer this question by first focusing on the sweet-spot problem domain for agile approaches. Then I will delve into the three dimensions that I refer to as agile ecosystems: barely sufficient methodology, collaborative values, and chaordic perspective. Finally, I will examine several of these agile ecosystems. All problems are different and require different strategies. While battlefield commanders plan extensively, they realize that plans are just a beginning; probing enemy defenses (creating change) and responding to enemy actions (responding to change) are more important. Battlefield commanders succeed by defeating the enemy (the mission), not conforming to a plan. I cannot imagine a battlefield commander saying, \" We lost the battle, but by golly, we were successful because we followed our plan to the letter. \" Battlefields are messy, turbulent, uncertain, and full of change. No battlefield commander would say, \" If we just plan this battle long and hard enough, and put repeatable processes in place, we can eliminate change early in the battle and not have to deal with it later on. \" A growing number of software projects operate in the equivalent of a battle zone – they are extreme projects. This is where agile approaches shine. Project teams operating in this zone attempt to utilize leading or bleeding-edge technologies , respond to erratic requirements changes, and deliver products quickly. Projects may have a relatively clear mission , but the specific requirements can be volatile and evolving as customers and development teams alike explore the unknown. These projects, which I call high-exploration factor projects, do not succumb to rigorous, plan-driven methods. …",
"title": ""
},
{
"docid": "e60d699411055bf31316d468226b7914",
"text": "Tabular data is difficult to analyze and to search through, yielding for new tools and interfaces that would allow even non tech-savvy users to gain insights from open datasets without resorting to specialized data analysis tools and without having to fully understand the dataset structure. The goal of our demonstration is to showcase answering natural language questions from tabular data, and to discuss related system configuration and model training aspects. Our prototype is publicly available and open-sourced (see demo )",
"title": ""
},
{
"docid": "5b2a088f0f53b2a960c1ebad0f9e7251",
"text": "The detailed balance method for calculating the radiative recombination limit to the performance of solar cells has been extended to include free carrier absorption and Auger recombination in addition to radiative losses. This method has been applied to crystalline silicon solar cells where the limiting efficiency is found to be 29.8 percent under AM1.5, based on the measured optical absorption spectrum and published values of the Auger and free carrier absorption coefficients. The silicon is assumed to be textured for maximum benefit from light-trapping effects.",
"title": ""
},
{
"docid": "612cd1b5883fdb09dd9ace00174eb4fa",
"text": "Localization in indoor environment poses a fundamental challenge in ubiquitous computing compared to its well-established GPS-based outdoor environment counterpart. This study investigated the feasibility of a WiFi-based indoor positioning system to localize elderly in an elderly center focusing on their orientation. The fingerprinting method of Received Signal Strength Indication (RSSI) from WiFi Access Points (AP) has been employed to discriminate and uniquely identify a position. The discrimination process of the reference points with its orientation have been analyzed with 0.9, 1.8, and 2.7 meter resolution. The experimental result shows that the WiFi-based RSSI fingerprinting method can discriminate the location and orientation of a user within 1.8 meter resolution.",
"title": ""
},
{
"docid": "560b1d80377210ae6f60d375fa97560e",
"text": "We present the design and evaluation of a multi-articular soft exosuit that is portable, fully autonomous, and provides assistive torques to the wearer at the ankle and hip during walking. Traditional rigid exoskeletons can be challenging to perfectly align with a wearer’s biological joints and can have large inertias, which can lead to the wearer altering their natural motion patterns. Exosuits, in comparison, use textiles to create tensile forces over the body in parallel with the muscles, enabling them to be light and not restrict the wearer’s kinematics. We describe the biologically inspired design and function of our exosuit, including a simplified model of the suit’s architecture and its interaction with the body. A key feature of the exosuit is that it can generate forces passively due to the body’s motion, similar to the body’s ligaments and tendons. These passively-generated forces can be supplemented by actively contracting Bowden cables using geared electric motors, to create peak forces in the suit of up to 200N. We define the suit-human series stiffness as an important parameter in the design of the exosuit and measure it on several subjects, and we perform human subjects testing to determine the biomechanical and physiological effects of the suit. Results from a five-subject study showed a minimal effect on gait kinematics and an average best-case metabolic reduction of 6.4%, comparing suit worn unpowered vs powered, during loaded walking with 34.6kg of carried mass including the exosuit and actuators (2.0kg on both legs, 10.1kg total).",
"title": ""
},
{
"docid": "c514eb87b60db16abd139207d7d24a9d",
"text": "A technique called Time Hopping is proposed for speeding up reinforcement learning algorithms. It is applicable to continuous optimization problems running in computer simulations. Making shortcuts in time by hopping between distant states combined with off-policy reinforcement learning allows the technique to maintain higher learning rate. Experiments on a simulated biped crawling robot confirm that Time Hopping can accelerate the learning process more than seven times.",
"title": ""
},
{
"docid": "e1050f3c38f0b49893da4dd7722aff71",
"text": "The Berkeley lower extremity exoskeleton (BLEEX) is a load-carrying and energetically autonomous human exoskeleton that, in this first generation prototype, carries up to a 34 kg (75 Ib) payload for the pilot and allows the pilot to walk at up to 1.3 m/s (2.9 mph). This article focuses on the human-in-the-loop control scheme and the novel ring-based networked control architecture (ExoNET) that together enable BLEEX to support payload while safely moving in concert with the human pilot. The BLEEX sensitivity amplification control algorithm proposed here increases the closed loop system sensitivity to its wearer's forces and torques without any measurement from the wearer (such as force, position, or electromyogram signal). The tradeoffs between not having sensors to measure human variables, the need for dynamic model accuracy, and robustness to parameter uncertainty are described. ExoNET provides the physical network on which the BLEEX control algorithm runs. The ExoNET control network guarantees strict determinism, optimized data transfer for small data sizes, and flexibility in configuration. Its features and application on BLEEX are described",
"title": ""
},
{
"docid": "695c396f27ba31f15f7823511473925c",
"text": "Design and experimental analysis of beam steering in microstrip patch antenna array using dumbbell shaped Defected Ground Structure (DGS) for S-band (5.2 GHz) application was carried out in this study. The Phase shifting in antenna has been achieved using different size and position of dumbbell shape DGS. DGS has characteristics of slow wave, wide stop band and compact size. The obtained radiation pattern has provided steerable main lobe and nulls at predefined direction. The radiation pattern for different size and position of dumbbell structure in microstrip patch antenna array was measured and comparative study has been carried out.",
"title": ""
},
{
"docid": "90ef8ff57b2dac74a0e58c43c222b6c8",
"text": "The paper presents an overview of the research on teaching culture and describes effective pedagogical practices that can be integrated into the second language curriculum. Particularly, this overview tries to advance an approach for teaching culture and language through the theoretical construct of the 3Ps (Products, Practices, Perspectives), combined with an inquiry-based teaching approach utilizing instructional technology. This approach promotes student motivation and engagement that can help overcome past issues of stereotyping and lack of intercultural awareness. The authors summarize the research articles illustrating how teachers successfully integrate digital media together with inquiry learning into instruction to create a rich and meaningful environment in which students interact with authentic data and build their own understanding of a foreign culture’s products, practices, and perspectives. In addition, the authors review the articles that describe more traditional methods of teaching culture and demonstrate how they can be enhanced with technology. “The digital revolution is far more significant than the invention of writing or even of printing. It offers the potential for humans to learn new ways of thinking and organizing social structures.” Douglas Engelbard (1997) The advent of the Standards for Foreign Language Learning in the 21st Century (National Standards in Foreign Language Education Project, 1999) drew attention to the vital role of culture in language classrooms and defined culture as a fundamental part of the second language (L2) learning 5",
"title": ""
},
{
"docid": "b630a6b346edfb073c120cb70169b884",
"text": "Image tracing is a foundational component of the workflow in graphic design, engineering, and computer animation, linking hand-drawn concept images to collections of smooth curves needed for geometry processing and editing. Even for clean line drawings, modern algorithms often fail to faithfully vectorize junctions, or points at which curves meet; this produces vector drawings with incorrect connectivity. This subtle issue undermines the practical application of vectorization tools and accounts for hesitance among artists and engineers to use automatic vectorization software. To address this issue, we propose a novel image vectorization method based on state-of-the-art mathematical algorithms for frame field processing. Our algorithm is tailored specifically to disambiguate junctions without sacrificing quality.",
"title": ""
},
{
"docid": "19a538b6a49be54b153b0a41b6226d1f",
"text": "This paper presents a robot aimed to assist the shoulder movements of stroke patients during their rehabilitation process. This robot has the general form of an exoskeleton, but is characterized by an action principle on the patient no longer requiring a tedious and accurate alignment of the robot and patient's joints. It is constituted of a poly-articulated structure whose actuation is deported and transmission is ensured by Bowden cables. It manages two of the three rotational degrees of freedom (DOFs) of the shoulder. Quite light and compact, its proximal end can be rigidly fixed to the patient's back on a rucksack structure. As for its distal end, it is connected to the arm through passive joints and a splint guaranteeing the robot action principle, i.e. exert a force perpendicular to the patient's arm, whatever its configuration. This paper also presents a first prototype of this robot and some experimental results such as the arm angular excursions reached with the robot in the three joint planes.",
"title": ""
},
{
"docid": "c6ad70b8b213239b0dd424854af194e2",
"text": "The neural mechanisms underlying the processing of conventional and novel conceptual metaphorical sentences were examined with event-related potentials (ERPs). Conventional metaphors were created based on the Contemporary Theory of Metaphor and were operationally defined as familiar and readily interpretable. Novel metaphors were unfamiliar and harder to interpret. Using a sensicality judgment task, we compared ERPs elicited by the same target word when it was used to end anomalous, novel metaphorical, conventional metaphorical and literal sentences. Amplitudes of the N400 ERP component (320-440 ms) were more negative for anomalous sentences, novel metaphors, and conventional metaphors compared with literal sentences. Within a later window (440-560 ms), ERPs associated with conventional metaphors converged to the same level as literal sentences while the novel metaphors stayed anomalous throughout. The reported results were compatible with models assuming an initial stage for metaphor mappings from one concept to another and that these mappings are cognitively taxing.",
"title": ""
},
{
"docid": "03f0614b2479fd470eea5ef39c5a93f9",
"text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: a r t i c l e i n f o a b s t r a c t Detailed land use/land cover classification at ecotope level is important for environmental evaluation. In this study, we investigate the possibility of using airborne hyperspectral imagery for the classification of ecotopes. In particular, we assess two tree-based ensemble classification algorithms: Adaboost and Random Forest, based on standard classification accuracy, training time and classification stability. Our results show that Adaboost and Random Forest attain almost the same overall accuracy (close to 70%) with less than 1% difference, and both outperform a neural network classifier (63.7%). Random Forest, however, is faster in training and more stable. Both ensemble classifiers are considered effective in dealing with hyperspectral data. Furthermore, two feature selection methods, the out-of-bag strategy and a wrapper approach feature subset selection using the best-first search method are applied. A majority of bands chosen by both methods concentrate between 1.4 and 1.8 μm at the early shortwave infrared region. Our band subset analyses also include the 22 optimal bands between 0.4 and 2.5 μm suggested in Thenkabail et al. (2004). Accuracy assessments of hyperspectral waveband performance for vegetation analysis applications. Remote Sensing of Environment, 91, 354–376.] due to similarity of the target classes. All of the three band subsets considered in this study work well with both classifiers as in most cases the overall accuracy dropped only by less than 1%. A subset of 53 bands is created by combining all feature subsets and comparing to using the entire set the overall accuracy is the same with Adaboost, and with Random Forest, a 0.2% improvement. The strategy to use a basket of band selection methods works better. Ecotopes belonging to the tree classes are in general classified better than the grass classes. Small adaptations of the classification scheme are recommended to improve the applicability of remote sensing method for detailed ecotope mapping. 1. Introduction Land use/land cover classification is a generic tool for environmental monitoring. To measure subtle changes in the ecosystem, a land use/land cover classification at ecotope level with definitive biological and ecological characteristics is needed. Ecotopes are distinct ecological landscape features …",
"title": ""
},
{
"docid": "5fb0931dafbb024663f2d68faca2f552",
"text": "The instrumentation and control (I&C) systems in nuclear power plants (NPPs) collect signals from sensors measuring plant parameters, integrate and evaluate sensor information, monitor plant performance, and generate signals to control plant devices for a safe operation of NPPs. Although the application of digital technology in industrial control systems (ICS) started a few decades ago, I&C systems in NPPs have utilized analog technology longer than any other industries. The reason for this stems from the fact that NPPs require strong assurance for safety and reliability. In recent years, however, digital I&C systems have been developed and installed in new and operating NPPs. This application of digital computers, and communication system and network technologies in NPP I&C systems accompanies cyber security concerns, similar to other critical infrastructures based on digital technologies. The Stuxnet case in 2010 evoked enormous concern regarding cyber security in NPPs. Thus, performing appropriate cyber security risk assessment for the digital I&C systems of NPPs, and applying security measures to the systems, has become more important nowadays. In general, approaches to assure cyber security in NPPs may be compatible with those for ICS and/or supervisory control and data acquisition (SCADA) systems in many aspects. Cyber security requirements and the risk assessment methodologies for ICS and SCADA systems are adopted from those for information technology (IT) systems. Many standards and guidance documents have been published for these areas [1~10]. Among them NIST SP 800-30 [4], NIST SP 800-37 [5], and NIST 800-39 [6] describe the risk assessment methods, NIST SP 800-53 [7] and NIST SP 800-53A [8] address security controls for IT systems. NIST SP 800-82 [10] describes the differences between IT systems and ICS and provides guidance for securing ICS, including SCADA systems, distributed control systems (DCS), and other systems performing control functions. As NIST SP 800-82 noted the differences between IT The applications of computers and communication system and network technologies in nuclear power plants have expanded recently. This application of digital technologies to the instrumentation and control systems of nuclear power plants brings with it the cyber security concerns similar to other critical infrastructures. Cyber security risk assessments for digital instrumentation and control systems have become more crucial in the development of new systems and in the operation of existing systems. Although the instrumentation and control systems of nuclear power plants are similar to industrial control systems, the former have specifications that differ from the latter in terms of architecture and function, in order to satisfy nuclear safety requirements, which need different methods for the application of cyber security risk assessment. In this paper, the characteristics of nuclear power plant instrumentation and control systems are described, and the considerations needed when conducting cyber security risk assessments in accordance with the lifecycle process of instrumentation and control systems are discussed. 
For cyber security risk assessments of instrumentation and control systems, the activities and considerations necessary for assessments during the system design phase or component design and equipment supply phase are presented in the following 6 steps: 1) System Identification and Cyber Security Modeling, 2) Asset and Impact Analysis, 3) Threat Analysis, 4) Vulnerability Analysis, 5) Security Control Design, and 6) Penetration test. The results from an application of the method to a digital reactor protection system are described.",
"title": ""
}
] |
scidocsrr
|
bdb9c22b8c10276efd9058a53444655d
|
Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks
|
[
{
"docid": "90eaa74b7d1955136c1d79435c89e44e",
"text": "Pedestrians follow different trajectories to avoid obstacles and accommodate fellow pedestrians. Any autonomous vehicle navigating such a scene should be able to foresee the future positions of pedestrians and accordingly adjust its path to avoid collisions. This problem of trajectory prediction can be viewed as a sequence generation task, where we are interested in predicting the future trajectory of people based on their past positions. Following the recent success of Recurrent Neural Network (RNN) models for sequence prediction tasks, we propose an LSTM model which can learn general human movement and predict their future trajectories. This is in contrast to traditional approaches which use hand-crafted functions such as Social forces. We demonstrate the performance of our method on several public datasets. Our model outperforms state-of-the-art methods on some of these datasets. We also analyze the trajectories predicted by our model to demonstrate the motion behaviour learned by our model.",
"title": ""
}
] |
[
{
"docid": "b8be5a7904829b247436fa9c544110a6",
"text": "Realization of Randomness had always been a controversial concept with great importance both from theoretical and practical Perspectives. This realization has been revolutionized in the light of recent studies especially in the realms of Chaos Theory, Algorithmic Information Theory and Emergent behavior in complex systems. We briefly discuss different definitions of Randomness and also different methods for generating it. The connection between all these approaches and the notion of Normality as the necessary condition of being unpredictable would be discussed. Then a complex-system-based Random Number Generator would be introduced. We will analyze its paradoxical features (Conservative Nature and reversibility in spite of having considerable variation) by using information theoretic measures in connection with other measures. The evolution of this Random Generator is equivalent to the evolution of its probabilistic description in terms of probability distribution over blocks of different lengths. By getting the aid of simulations we will show the ability of this system to preserve normality during the process of coarse graining. Keywords—Random number generators; entropy; correlation information; elementary cellular automata; reversibility",
"title": ""
},
{
"docid": "32a45d3c08e24d29ad5f9693253c0e9e",
"text": "This paper presents comparative study of high-speed, low-power and low voltage full adder circuits. Our approach is based on XOR-XNOR design full adder circuits in a single unit. A low power and high performance 9T full adder cell using a design style called “XOR (3T)” is discussed. The designed circuit commands a high degree of regularity and symmetric higher density than the conventional CMOS design style as well as it lowers power consumption by using XOR (3T) logic circuits. Gate Diffusion Input (GDI) technique of low-power digital combinatorial circuit design is also described. This technique helps in reducing the power consumption and the area of digital circuits while maintaining low complexity of logic design. This paper analyses, evaluates and compares the performance of various adder circuits. Several simulations conducted using different voltage supplies, load capacitors and temperature variation demonstrate the superiority of the XOR (3T) based full adder designs in term of delay, power and power delay product (PDP) compared to the other full adder circuits. Simulation results illustrate the superiority of the designed adder circuits against the conventional CMOS, TG and Hybrid full adder circuits in terms of power, delay and power delay product (PDP). .",
"title": ""
},
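The compared circuits all realize the same XOR-based full adder logic; before worrying about transistor counts (9T, GDI), it can help to see the behavioural equations they implement. The short check below exhaustively verifies sum = a ⊕ b ⊕ cin and cout = a·b + cin·(a ⊕ b); it says nothing about the transistor-level designs themselves.

```python
# Behavioural check of the XOR-based full adder equations that the compared
# circuits implement: sum = a ^ b ^ cin, cout = a&b | cin&(a^b).
from itertools import product

def full_adder(a, b, cin):
    p = a ^ b                       # "propagate" term from the first XOR stage
    return p ^ cin, (a & b) | (cin & p)

for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert (a + b + cin) == (cout << 1) | s
print("full-adder truth table verified")
```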
{
"docid": "f26cc4afade8625576ff631e1ff4f3b4",
"text": "Electromigration and voltage drop (IR-drop) are two major reliability issues in modern IC design. Electromigration gradually creates permanently open or short circuits due to excessive current densities; IR-drop causes insufficient power supply, thus degrading performance or even inducing functional errors because of nonzero wire resistance. Both types of failure can be triggered by insufficient wire widths. Although expanding the wire width alleviates electromigration and IR-drop, unlimited expansion not only increases the routing cost, but may also be infeasible due to the limited routing resource. In addition, electromigration and IR-drop manifest mainly in the power/ground (P/G) network. Therefore, taking wire widths into consideration is desirable to prevent electromigration and IR-drop at P/G routing. Unlike mature digital IC designs, P/G routing in analog ICs has not yet been well studied. In a conventional design, analog designers manually route P/G networks by implementing greedy strategies. However, the growing scale of analog ICs renders manual routing inefficient, and the greedy strategies may be ineffective when electromigration and IR-drop are considered. This study distances itself from conventional manual design and proposes an automatic analog P/G router that considers electromigration and IR-drops. First, employing transportation formulation, this article constructs an electromigration-aware rectilinear Steiner tree with the minimum routing cost. Second, without changing the solution quality, wires are bundled to release routing space for enhancing routability and relaxing congestion. A wire width extension method is subsequently adopted to reduce wire resistance for IR-drop safety. Compared with high-tech designs, the proposed approach achieves equally optimal solutions for electromigration avoidance, with superior efficiencies. Furthermore, via industrial design, experimental results also show the effectiveness and efficiency of the proposed algorithm for electromigration prevention and IR-drop reduction.",
"title": ""
},
{
"docid": "305679866d219b0856ed48230f30c549",
"text": "The contingency table is a work horse of official statistics, the format of reported data for the US Census, Bureau of Labor Statistics, and the Internal Revenue Service. In many settings such as these privacy is not only ethically mandated, but frequently legally as well. Consequently there is an extensive and diverse literature dedicated to the problems of statistical disclosure control in contingency table release. However, all current techniques for reporting contingency tables fall short on at leas one of privacy, accuracy, and consistency (among multiple released tables). We propose a solution that provides strong guarantees for all three desiderata simultaneously.\n Our approach can be viewed as a special case of a more general approach for producing synthetic data: Any privacy-preserving mechanism for contingency table release begins with raw data and produces a (possibly inconsistent) privacy-preserving set of marginals. From these tables alone-and hence without weakening privacy--we will find and output the \"nearest\" consistent set of marginals. Interestingly, this set is no farther than the tables of the raw data, and consequently the additional error introduced by the imposition of consistency is no more than the error introduced by the privacy mechanism itself.\n The privacy mechanism of [20] gives the strongest known privacy guarantees, with very little error. Combined with the techniques of the current paper, we therefore obtain excellent privacy, accuracy, and consistency among the tables. Moreover, our techniques are surprisingly efficient. Our techniques apply equally well to the logical cousin of the contingency table, the OLAP cube.",
"title": ""
},
{
"docid": "75efe03f1d16a72712674e4494aca633",
"text": "Three bodies of research that have developed in relative isolation center on each of three kinds of phonological processing: phonological awareness, awareness of the sound structure of language; phonological receding in lexical access, receding written symbols into a sound-based representational system to get from the written word to its lexical referent; and phonetic receding in working memory, recoding written symbols into a sound-based representational system to maintain them efficiently in working memory. In this review we integrate these bodies of research and address the interdependent issues of the nature of phonological abilities and their causal roles in the acquisition of reading skills. Phonological ability seems to be general across tasks that purport to measure the three kinds of phonological processing, and this generality apparently is independent of general cognitive ability. However, the generality of phonological ability is not complete, and there is an empirical basis for distinguishing phonological awareness and phonetic recoding in working memory. Our review supports a causal role for phonological awareness in learning to read, and suggests the possibility of similar causal roles for phonological recoding in lexical access and phonetic recoding in working memory. Most researchers have neglected the probable causal role of learning to read in the development of phonological skills. It is no longer enough to ask whether phonological skills play a causal role in the acquisition of reading skills. The question now is which aspects of phonological processing (e.g., awareness, recoding in lexical access, recoding in working memory) are causally related to which aspects of reading (e.g., word recognition, word analysis, sentence comprehension), at which point in their codevelopment, and what are the directions of these causal relations?",
"title": ""
},
{
"docid": "e09594fce400df1297c5c32afac85fee",
"text": "Results: Of the 74 ears tested, 45 (61%) had effusion on direct inspection. The effusion was purulent in 8 ears (18%), serous in 9 ears (20%), and mucoid in 28 ears (62%). Ultrasound identified the presence or absence of effusion in 71 cases (96%) (P=.04). Ultrasound distinguished between serous and mucoid effusion with 100% accuracy (P=.04). The probe did not distinguish between mucoid and purulent effusion.",
"title": ""
},
{
"docid": "31975dad000fa4dabf2b922876298aca",
"text": "We introduce DeepNAT, a 3D Deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we do not only predict the center voxel of the patch but also neighbors, which is formulated as multi-task learning. To address a class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for the adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future.",
"title": ""
},
{
"docid": "9423718cce01b45c688066f322b2c2aa",
"text": "Currently there are many techniques based on information technology and communication aimed at assessing the performance of students. Data mining applied in the educational field (educational data mining) is one of the most popular techniques that are used to provide feedback with regard to the teaching-learning process. In recent years there have been a large number of open source applications in the area of educational data mining. These tools have facilitated the implementation of complex algorithms for identifying hidden patterns of information in academic databases. The main objective of this paper is to compare the technical features of three open source tools (RapidMiner, Knime and Weka) as used in educational data mining. These features have been compared in a practical case study on the academic records of three engineering programs in an Ecuadorian university. This comparison has allowed us to determine which tool is most effective in terms of predicting student performance.",
"title": ""
},
{
"docid": "795d4e73b3236a2b968609c39ce8f417",
"text": "In this paper, we are introducing an intelligent valet parking management system that guides the cars to autonomously park within a parking lot. The IPLMS for Intelligent Parking Lot Management System, consists of two modules: 1) a model car with a set of micro-controllers and sensors which can scan the environment for suitable parking spot and avoid collision to obstacles, and a Parking Lot Management System (IPLMS) which screens the parking spaces within the parking lot and offers guidelines to the car. The model car has the capability to autonomously maneuver within the parking lot using a fuzzy logic algorithm, and execute parking in the spot determined by the IPLMS, using a parking algorithm. The car receives the instructions from the IPLMS through a wireless communication link. The IPLMS has the flexibility to be adopted by any parking management system, and can potentially save the clients time to look for a parking spot, and/or to stroll from an inaccessible parking space. Moreover, the IPLMS can decrease the financial burden from the parking lot management by offering an easy-to-install system for self-guided valet parking.",
"title": ""
},
{
"docid": "cff671af6a7a170fac2daf6acd9d1e3e",
"text": "We show how to learn a deep graphical model of the word-count vectors obtained from a large set of documents. The values of the latent variables in the deepest layer are easy to infer and gi ve a much better representation of each document than Latent Sem antic Analysis. When the deepest layer is forced to use a small numb er of binary variables (e.g. 32), the graphical model performs “semantic hashing”: Documents are mapped to memory addresses in such a way that semantically similar documents are located at near by ddresses. Documents similar to a query document can then be fo und by simply accessing all the addresses that differ by only a fe w bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much fa ster than locality sensitive hashing, which is the fastest curre nt method. By using semantic hashing to filter the documents given to TFID , we achieve higher accuracy than applying TF-IDF to the entir document set.",
"title": ""
},
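The passage's key retrieval trick is that similar documents land at nearby memory addresses, so a query is answered by probing every address within a small Hamming radius of its code. The toy sketch below uses a random-hyperplane hash in place of the learned deep model (an assumption; the paper derives the codes from a deep network) purely to show the address-probing mechanics.

```python
# Toy "semantic hashing" retrieval: documents map to 32-bit codes and a query is
# answered by probing all addresses within a small Hamming radius.
import numpy as np
from collections import defaultdict
from itertools import combinations

rng = np.random.default_rng(0)
n_bits, n_docs, n_terms = 32, 1000, 500
docs = rng.poisson(0.1, size=(n_docs, n_terms)).astype(float)   # word-count vectors
planes = rng.standard_normal((n_terms, n_bits))                 # stand-in for the learned codes

def code(x):
    return tuple((x @ planes > 0).astype(int))

table = defaultdict(list)
for i, d in enumerate(docs):
    table[code(d)].append(i)

def search(query, radius=2):
    c = np.array(code(query))
    hits = list(table.get(tuple(c), []))
    for r in range(1, radius + 1):
        for flips in combinations(range(n_bits), r):             # probe nearby addresses
            c2 = c.copy()
            c2[list(flips)] ^= 1
            hits += table.get(tuple(c2), [])
    return hits

print(len(search(docs[0])))   # the query document itself is always retrieved
```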
{
"docid": "c3838ee9c296364d2bea785556dfd2fb",
"text": "Empirical validation of software metrics suites to predict fault proneness in object-oriented (OO) components is essential to ensure their practical use in industrial settings. In this paper, we empirically validate three OO metrics suites for their ability to predict software quality in terms of fault-proneness: the Chidamber and Kemerer (CK) metrics, Abreu's Metrics for Object-Oriented Design (MOOD), and Bansiya and Davis' Quality Metrics for Object-Oriented Design (QMOOD). Some CK class metrics have previously been shown to be good predictors of initial OO software quality. However, the other two suites have not been heavily validated except by their original proposers. Here, we explore the ability of these three metrics suites to predict fault-prone classes using defect data for six versions of Rhino, an open-source implementation of JavaScript written in Java. We conclude that the CK and QMOOD suites contain similar components and produce statistical models that are effective in detecting error-prone classes. We also conclude that the class components in the MOOD metrics suite are not good class fault-proneness predictors. Analyzing multivariate binary logistic regression models across six Rhino versions indicates these models may be useful in assessing quality in OO classes produced using modern highly iterative or agile software development processes.",
"title": ""
},
{
"docid": "6091fa15d92c79eb646d5a160f4dbf61",
"text": "PURPOSE\nTo describe the normal anatomy of the finger flexor tendon pulley system, with anatomic correlation, and to define criteria to diagnose pulley abnormalities with different imaging modalities.\n\n\nMATERIALS AND METHODS\nThree groups of cadaveric fingers underwent computed tomography (CT), magnetic resonance (MR) imaging, and ultrasonography (US). The normal anatomy of the pulley system was studied at extension and flexion without and with MR tenography. Pulley lengths were measured, and anatomic correlation was performed. Pulley lesions were created and studied at flexion, extension, and forced flexion. Two radiologists reviewed the studies in blinded fashion.\n\n\nRESULTS\nMR imaging demonstrated A2 (proximal phalanx) and A4 (middle phalanx) pulleys in 12 (100%) of 12 cases, without and with tenography. MR tenography showed the A3 (proximal interphalangeal) and A5 (distal interphalangeal) pulleys in 10 (83%) and nine (75%) cases, respectively. US showed the A2 pulley in all cases and the A4 pulley in eight (67%). CT did not allow direct pulley visualization. No significant differences in pulley lengths were measured at MR, US, or pathologic examination (P: =.512). Direct lesion diagnosis was possible with MR imaging and US in 79%-100% of cases, depending on lesion type. Indirect diagnosis was successful with all methods with forced flexion.\n\n\nCONCLUSION\nMR imaging and US provide means of direct finger pulley system evaluation.",
"title": ""
},
{
"docid": "ab5963208b0c5a513ceca6e926e8aab9",
"text": "This paper presents a large-scale corpus for non-task-oriented dialogue response selection, which contains over 27K distinct prompts more than 82K responses collected from social media.1 To annotate this corpus, we define a 5-grade rating scheme: bad, mediocre, acceptable, good, and excellent, according to the relevance, coherence, informativeness, interestingness, and the potential to move a conversation forward. To test the validity and usefulness of the produced corpus, we compare various unsupervised and supervised models for response selection. Experimental results confirm that the proposed corpus is helpful in training response selection models.",
"title": ""
},
{
"docid": "2f793fb05d0dbe43f20f2b73119aa402",
"text": "Dark Web analysis is an important aspect in field of counter terrorism (CT). In the present scenario terrorist attacks are biggest problem for the mankind and whole world is under constant threat from these well-planned, sophisticated and coordinated terrorist operations. Terrorists anonymously set up various web sites embedded in the public Internet, exchanging ideology, spreading propaganda, and recruiting new members. Dark web is a hotspot where terrorists are communicating and spreading their messages. Now every country is focusing for CT. Dark web analysis can be an efficient proactive method for CT by detecting and avoiding terrorist threats/attacks. In this paper we have proposed dark web analysis model that analyzes dark web forums for CT and connecting the dots to prevent the country from terrorist attacks.",
"title": ""
},
{
"docid": "c34e2227c97f71fbe3d2514e1e77e6e6",
"text": "A major difficulty in a recommendation system for groups is to use a group aggregation strategy to ensure, among other things, the maximization of the average satisfaction of group members. This paper presents an approach based on the theory of noncooperative games to solve this problem. While group members can be seen as game players, the items for potential recommendation for the group comprise the set of possible actions. Achieving group satisfaction as a whole becomes, then, a problem of finding the Nash equilibrium. Experiments with a MovieLens dataset and a function of arithmetic mean to compute the prediction of group satisfaction for the generated recommendation have shown statistically significant results when compared to state-of-the-art aggregation strategies, in particular, when evaluation among group members are more heterogeneous. The feasibility of this unique approach is shown by the development of an application for Facebook, which recommends movies to groups of friends.",
"title": ""
},
{
"docid": "1c3b044d572509e14b11d2ec7cb6a566",
"text": "Animal models point towards a key role of brain-derived neurotrophic factor (BDNF), insulin-like growth factor-I (IGF-I) and vascular endothelial growth factor (VEGF) in mediating exercise-induced structural and functional changes in the hippocampus. Recently, also platelet derived growth factor-C (PDGF-C) has been shown to promote blood vessel growth and neuronal survival. Moreover, reductions of these neurotrophic and angiogenic factors in old age have been related to hippocampal atrophy, decreased vascularization and cognitive decline. In a 3-month aerobic exercise study, forty healthy older humans (60 to 77years) were pseudo-randomly assigned to either an aerobic exercise group (indoor treadmill, n=21) or to a control group (indoor progressive-muscle relaxation/stretching, n=19). As reported recently, we found evidence for fitness-related perfusion changes of the aged human hippocampus that were closely linked to changes in episodic memory function. Here, we test whether peripheral levels of BDNF, IGF-I, VEGF or PDGF-C are related to changes in hippocampal blood flow, volume and memory performance. Growth factor levels were not significantly affected by exercise, and their changes were not related to changes in fitness or perfusion. However, changes in IGF-I levels were positively correlated with hippocampal volume changes (derived by manual volumetry and voxel-based morphometry) and late verbal recall performance, a relationship that seemed to be independent of fitness, perfusion or their changes over time. These preliminary findings link IGF-I levels to hippocampal volume changes and putatively hippocampus-dependent memory changes that seem to occur over time independently of exercise. We discuss methodological shortcomings of our study and potential differences in the temporal dynamics of how IGF-1, VEGF and BDNF may be affected by exercise and to what extent these differences may have led to the negative findings reported here.",
"title": ""
},
{
"docid": "bb1a10dc8ad5bc953b6fbc2c1c3e0b59",
"text": "A Ka-band traveling-wave power divider/combiner, which is based on double ridge-waveguide couplers, is presented. The in-phase output, which is a challenge of the waveguide-based traveling-wave power divider, is achieved by optimizing the equivalent circuit of the proposed structure. The novel ridge-waveguide coupler has advantages of low loss, high power capability, and easy assembly. Finally, the proposed power divider/combiner is simulated, fabricated, and measured. A 15-dB measured return-loss bandwidth at the center frequency of 35 GHz is over 28%, a maximum transmission coefficients amplitude imbalance of ±1 dB is achieved, and the phase deviation is less than ± 12° from 32 to 39 GHz.",
"title": ""
},
{
"docid": "8cecac2a619701d7a7a16d706beadc0a",
"text": "Machine learning relies on the assumption that unseen test instances of a classification problem follow the same distribution as observed training data. However, this principle can break down when machine learning is used to make important decisions about the welfare (employment, education, health) of strategic individuals. Knowing information about the classifier, such individuals may manipulate their attributes in order to obtain a better classification outcome. As a result of this behavior -- often referred to as gaming -- the performance of the classifier may deteriorate sharply. Indeed, gaming is a well-known obstacle for using machine learning methods in practice; in financial policy-making, the problem is widely known as Goodhart's law. In this paper, we formalize the problem, and pursue algorithms for learning classifiers that are robust to gaming.\n We model classification as a sequential game between a player named \"Jury\" and a player named \"Contestant.\" Jury designs a classifier, and Contestant receives an input to the classifier drawn from a distribution. Before being classified, Contestant may change his input based on Jury's classifier. However, Contestant incurs a cost for these changes according to a cost function. Jury's goal is to achieve high classification accuracy with respect to Contestant's original input and some underlying target classification function, assuming Contestant plays best response. Contestant's goal is to achieve a favorable classification outcome while taking into account the cost of achieving it.\n For a natural class of \"separable\" cost functions, and certain generalizations, we obtain computationally efficient learning algorithms which are near optimal, achieving a classification error that is arbitrarily close to the theoretical minimum. Surprisingly, our algorithms are efficient even on concept classes that are computationally hard to learn. For general cost functions, designing an approximately optimal strategy-proof classifier, for inverse-polynomial approximation, is NP-hard.",
"title": ""
},
{
"docid": "41ebdf724580830ce2c106ec0415912f",
"text": "Standard Multi-Armed Bandit (MAB) problems assume that the arms are independent. However, in many application scenarios, the information obtained by playing an arm provides information about the remainder of the arms. Hence, in such applications, this informativeness can and should be exploited to enable faster convergence to the optimal solution. In this paper, formalize a new class of multi-armed bandit methods, Global Multi-armed Bandit (GMAB), in which arms are globally informative through a global parameter, i.e., choosing an arm reveals information about all the arms. We propose a greedy policy for the GMAB which always selects the arm with the highest estimated expected reward, and prove that it achieves bounded parameter-dependent regret. Hence, this policy selects suboptimal arms only finitely many times, and after a finite number of initial time steps, the optimal arm is selected in all of the remaining time steps with probability one. In addition, we also study how the informativeness of the arms about each other’s rewards affects the speed of learning. Specifically, we prove that the parameter-free (worst-case) regret is sublinear in time, and decreases with the informativeness of the arms. We also prove a sublinear in time Bayesian risk bound for the GMAB which reduces to the well-known Bayesian risk bound for linearly parameterized bandits when the arms are fully informative. GMABs have applications ranging from drug dosage control to dynamic pricing. Appearing in Proceedings of the 18 International Conference on Artificial Intelligence and Statistics (AISTATS) 2015, San Diego, CA, USA. JMLR: W&CP volume 38. Copyright 2015 by the authors.",
"title": ""
},
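The proposed greedy policy always pulls the arm with the highest estimated expected reward, where every pull also refines a single shared (global) parameter estimate. The sketch below assumes linear reward curves and Gaussian noise purely for illustration; the paper's setting is more general, and the variable names are my own.

```python
# Greedy policy sketch for a Global MAB: all arms share one unknown parameter theta,
# each arm k has a known reward curve mu_k(theta), and every pull refines a single
# shared estimate of theta. Linear curves and Gaussian noise are assumed here.
import numpy as np

rng = np.random.default_rng(1)
theta_true = 0.7
slopes, offsets = np.array([0.2, 1.0, -0.5]), np.array([0.5, 0.0, 0.9])
mu = lambda th: slopes * th + offsets            # expected reward of each arm

theta_hat, n_obs = 0.5, 0
for t in range(1000):
    k = int(np.argmax(mu(theta_hat)))            # greedy: best arm under current estimate
    reward = mu(theta_true)[k] + 0.1 * rng.standard_normal()
    if slopes[k] != 0:                           # invert this arm's curve, then average
        n_obs += 1
        theta_hat += ((reward - offsets[k]) / slopes[k] - theta_hat) / n_obs

print(theta_hat, int(np.argmax(mu(theta_hat))) == int(np.argmax(mu(theta_true))))
```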
{
"docid": "1c9c30e3e007c2d11c6f5ebd0092050b",
"text": "Fatty acids are essential components of the dynamic lipid metabolism in cells. Fatty acids can also signal to intracellular pathways to trigger a broad range of cellular responses. Oleic acid is an abundant monounsaturated omega-9 fatty acid that impinges on different biological processes, but the mechanisms of action are not completely understood. Here, we report that oleic acid stimulates the cAMP/protein kinase A pathway and activates the SIRT1-PGC1α transcriptional complex to modulate rates of fatty acid oxidation. In skeletal muscle cells, oleic acid treatment increased intracellular levels of cyclic adenosine monophosphate (cAMP) that turned on protein kinase A activity. This resulted in SIRT1 phosphorylation at Ser-434 and elevation of its catalytic deacetylase activity. A direct SIRT1 substrate is the transcriptional coactivator peroxisome proliferator-activated receptor γ coactivator 1-α (PGC1α), which became deacetylated and hyperactive after oleic acid treatment. Importantly, oleic acid, but not other long chain fatty acids such as palmitate, increased the expression of genes linked to fatty acid oxidation pathway in a SIRT1-PGC1α-dependent mechanism. As a result, oleic acid potently accelerated rates of complete fatty acid oxidation in skeletal muscle cells. These results illustrate how a single long chain fatty acid specifically controls lipid oxidation through a signaling/transcriptional pathway. Pharmacological manipulation of this lipid signaling pathway might provide therapeutic possibilities to treat metabolic diseases associated with lipid dysregulation.",
"title": ""
}
] |
scidocsrr
|
1dbaf7c92cceefc110c73c346c2875b2
|
3D printed soft actuators for a legged robot capable of navigating unstructured terrain
|
[
{
"docid": "6c5969169086a3b412e27f630c054c60",
"text": "Soft continuum manipulators have the advantage of being more compliant and having more degrees of freedom than rigid redundant manipulators. This attribute should allow soft manipulators to autonomously execute highly dexterous tasks. However, current approaches to motion planning, inverse kinematics, and even design limit the capacity of soft manipulators to take full advantage of their inherent compliance. We provide a computational approach to whole arm planning for a soft planar manipulator that advances the arm's end effector pose in task space while simultaneously considering the arm's entire envelope in proximity to a confined environment. The algorithm solves a series of constrained optimization problems to determine locally optimal inverse kinematics. Due to inherent limitations in modeling the kinematics of a highly compliant soft robot and the local optimality of the planner's solutions, we also rely on the increased softness of our newly designed manipulator to accomplish the whole arm task, namely the arm's ability to harmlessly collide with the environment. We detail the design and fabrication of the new modular manipulator as well as the planner's central algorithm. We experimentally validate our approach by showing that the robotic system is capable of autonomously advancing the soft arm through a pipe-like environment in order to reach distinct goal states.",
"title": ""
}
] |
[
{
"docid": "55b3fe6f2b93fd958d0857b485927bc9",
"text": "In this paper, in order to satisfy multiple closed-loop performance specifications simultaneously while improving tracking accuracy during high-speed, high-acceleration tracking motions of a 3-degree-of-freedom (3-DOF) planar parallel manipulator, we propose a new control approach, termed convex synchronized (C-S) control. This control strategy is based on the so-called convex combination method, in which the synchronized control method is adopted. Through the adoption of a set of n synchronized controllers, each of which is tuned to satisfy at least one of a set of n closed-loop performance specifications, the resultant set of n closed-loop transfer functions are combined in a convex manner, from which a C-S controller is solved algebraically. Significantly, the resultant C-S controller simultaneously satisfies all n closed-loop performance specifications. Since each synchronized controller is only required to satisfy at least one of the n closed-loop performance specifications, the convex combination method is more efficient than trial-and-error methods, where the gains of a single controller are tuned to satisfy all n closed-loop performance specifications simultaneously. Furthermore, during the design of each synchronized controller, a feedback signal, termed the synchronization error, is employed. Different from the traditional tracking errors, this synchronization error represents the degree of coordination of the active joints in the parallel manipulator based on the manipulator kinematics. As a result, the trajectory tracking accuracy of each active joint and that of the manipulator end-effector is improved. Thus, possessing both the advantages of the convex combination method and synchronized control, the proposed C-S control method can satisfy multiple closed-loop performance specifications simultaneously while improving tracking accuracy. In addition, unavoidable dynamic modeling errors are addressed through the introduction of a robust performance specification, which ensures that all performance specifications are satisfied despite allowable variations in dynamic parameters, or modeling errors. Experiments conducted on a 3-DOF P-R-R-type planar parallel manipulator demonstrate the aforementioned claims.",
"title": ""
},
{
"docid": "f4b92c53dc001d06489093ff302384b2",
"text": "Computational topology has recently known an important development toward data analysis, giving birth to the field of topological data analysis. Topological persistence, or persistent homology, appears as a fundamental tool in this field. In this paper, we study topological persistence in general metric spaces, with a statistical approach. We show that the use of persistent homology can be naturally considered in general statistical frameworks and persistence diagrams can be used as statistics with interesting convergence properties. Some numerical experiments are performed in various contexts to illustrate our results.",
"title": ""
},
{
"docid": "1f62fab7d2d88ab3c048e0c620f3842b",
"text": "Being able to locate the origin of a sound is important for our capability to interact with the environment. Humans can locate a sound source in both the horizontal and vertical plane with only two ears, using the head related transfer function HRTF, or more specifically features like interaural time difference ITD, interaural level difference ILD, and notches in the frequency spectra. In robotics notches have been left out since they are considered complex and difficult to use. As they are the main cue for humans' ability to estimate the elevation of the sound source this have to be compensated by adding more microphones or very large and asymmetric ears. In this paper, we present a novel method to extract the notches that makes it possible to accurately estimate the location of a sound source in both the horizontal and vertical plane using only two microphones and human-like ears. We suggest the use of simple spiral-shaped ears that has similar properties to the human ears and make it easy to calculate the position of the notches. Finally we show how the robot can learn its HRTF and build audiomotor maps using supervised learning and how it automatically can update its map using vision and compensate for changes in the HRTF due to changes to the ears or the environment",
"title": ""
},
{
"docid": "0af4eddf70691a7bff675d42a39f96ae",
"text": "How do we know which grammatical error correction (GEC) system is best? A number of metrics have been proposed over the years, each motivated by weaknesses of previous metrics; however, the metrics themselves have not been compared to an empirical gold standard grounded in human judgments. We conducted the first human evaluation of GEC system outputs, and show that the rankings produced by metrics such as MaxMatch and I-measure do not correlate well with this ground truth. As a step towards better metrics, we also propose GLEU, a simple variant of BLEU, modified to account for both the source and the reference, and show that it hews much more closely to human judgments.",
"title": ""
},
{
"docid": "b7dcd24f098965ff757b7ce5f183662b",
"text": "We give an overview of a complex systems approach to large blackouts of electric power transmission systems caused by cascading failure. Instead of looking at the details of particular blackouts, we study the statistics and dynamics of series of blackouts with approximate global models. Blackout data from several countries suggest that the frequency of large blackouts is governed by a power law. The power law makes the risk of large blackouts consequential and is consistent with the power system being a complex system designed and operated near a critical point. Power system overall loading or stress relative to operating limits is a key factor affecting the risk of cascading failure. Power system blackout models and abstract models of cascading failure show critical points with power law behavior as load is increased. To explain why the power system is operated near these critical points and inspired by concepts from self-organized criticality, we suggest that power system operating margins evolve slowly to near a critical point and confirm this idea using a power system model. The slow evolution of the power system is driven by a steady increase in electric loading, economic pressures to maximize the use of the grid, and the engineering responses to blackouts that upgrade the system. Mitigation of blackout risk should account for dynamical effects in complex self-organized critical systems. For example, some methods of suppressing small blackouts could ultimately increase the risk of large blackouts.",
"title": ""
},
{
"docid": "46ab85859bd3966b243db79696a236f0",
"text": "The general purpose optimization method known as Particle Swarm Optimization (PSO) has received much attention in past years, with many attempts to find the variant that performs best on a wide variety of optimization problems. The focus of past research has been with making the PSO method more complex, as this is frequently believed to increase its adaptability to other optimization problems. This study takes the opposite approach and simplifies the PSO method. To compare the efficacy of the original PSO and the simplified variant here, an easy technique is presented for efficiently tuning their behavioural parameters. The technique works by employing an overlaid meta-optimizer, which is capable of simultaneously tuning parameters with regard to multiple optimization problems, whereas previous approaches to meta-optimization have tuned behavioural parameters to work well on just a single optimization problem. It is then found that the PSO method and its simplified variant not only have comparable performance for optimizing a number of Artificial Neural Network problems, but the simplified variant appears to offer a small improvement in some cases.",
"title": ""
},
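For reference, a bare-bones global-best PSO looks like the following; the behavioural parameters w, c1, c2 are ordinary textbook defaults, not the meta-optimized values the passage studies, and the sphere function stands in for the neural-network training problems used in the paper.

```python
# Minimal global-best PSO on the sphere function.
import numpy as np

def pso(f, dim=10, n_particles=30, iters=500, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()                  # global best
    return g, f(g)

best, best_val = pso(lambda z: float(np.sum(z * z)))
print(best_val)   # should approach 0
```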
{
"docid": "9c5711c68c7a9c7a4a8fc4d9dbcf145d",
"text": "Approximate set membership data structures (ASMDSs) are ubiquitous in computing. They trade a tunable, often small, error rate ( ) for large space savings. The canonical ASMDS is the Bloom filter, which supports lookups and insertions but not deletions in its simplest form. Cuckoo filters (CFs), a recently proposed class of ASMDSs, add deletion support and often use fewer bits per item for equal . This work introduces the Morton filter (MF), a novel ASMDS that introduces several key improvements to CFs. Like CFs, MFs support lookups, insertions, and deletions, but improve their respective throughputs by 1.3× to 2.5×, 0.9× to 15.5×, and 1.3× to 1.6×. MFs achieve these improvements by (1) introducing a compressed format that permits a logically sparse filter to be stored compactly in memory, (2) leveraging succinct embedded metadata to prune unnecessary memory accesses, and (3) heavily biasing insertions to use a single hash function. With these optimizations, lookups, insertions, and deletions often only require accessing a single hardware cache line from the filter. These improvements are not at a loss in space efficiency, as MFs typically use comparable to slightly less space than CFs for the same . PVLDB Reference Format: Alex D. Breslow and Nuwan S. Jayasena. Morton Filters: Faster, Space-Efficient Cuckoo Filters via Biasing, Compression, and Decoupled Logical Sparsity. PVLDB, 11(9): 1041-1055, 2018. DOI: https://doi.org/10.14778/3213880.3213884",
"title": ""
},
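The Morton filter's block compression and biased cuckoo hashing are hard to condense into a few lines, but the ε-versus-space trade-off the passage opens with is easy to see on the canonical baseline ASMDS, a plain Bloom filter. The sketch below is that baseline only; it supports no deletions and is not the Morton filter.

```python
# Baseline approximate-set-membership sketch (a plain Bloom filter), shown only to
# illustrate the epsilon-vs-space trade-off; it is NOT the Morton filter.
import math, hashlib

class BloomFilter:
    def __init__(self, n_items, eps):
        self.m = math.ceil(-n_items * math.log(eps) / math.log(2) ** 2)  # bits
        self.k = max(1, round(self.m / n_items * math.log(2)))           # hash count
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.blake2b(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "little") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))

bf = BloomFilter(n_items=10_000, eps=0.01)
bf.add("hello")
print("hello" in bf, "world" in bf)   # True, almost surely False
```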
{
"docid": "907b8a8a8529b09114ae60e401bec1bd",
"text": "Studies of information seeking and workplace collaboration often find that social relationships are a strong factor in determining who collaborates with whom. Social networks provide one means of visualizing existing and potential interaction in organizational settings. Groupware designers are using social networks to make systems more sensitive to social situations and guide users toward effective collaborations. Yet, the implications of embedding social networks in systems have not been systematically studied. This paper details an evaluation of two different social networks used in a system to recommend individuals for possible collaboration. The system matches people looking for expertise with individuals likely to have expertise. The effectiveness of social networks for matching individuals is evaluated and compared. One finding is that social networks embedded into systems do not match individuals' perceptions of their personal social network. This finding and others raise issues for the use of social networks in groupware. Based on the evaluation results, several design considerations are discussed.",
"title": ""
},
{
"docid": "5afdbb9c705ad379227a46958addc8f2",
"text": "In this paper we present a novel experiment to explore the impact of avatar realism on the illusion of virtual body ownership (IVBO) in immersive virtual environments, with full-body avatar embodiment and freedom of movement. We evaluated four distinct avatars (a humanoid robot, a block-man, and both male and female human adult) presenting an increasing level of anthropomorphism in their detailed compositions Our results revealed that each avatar elicited a relatively high level of illusion. However both machine-like and cartoon-like avatars elicited an equivalent IVBO, slightly superior to the human-ones. A realistic human appearance is therefore not a critical top-down factor of IVBO, and could lead to an Uncanney Valley effect.",
"title": ""
},
{
"docid": "4e7122172cb7c37416381c251b510948",
"text": "Anatomic and physiologic data are used to analyze the energy expenditure on different components of excitatory signaling in the grey matter of rodent brain. Action potentials and postsynaptic effects of glutamate are predicted to consume much of the energy (47% and 34%, respectively), with the resting potential consuming a smaller amount (13%), and glutamate recycling using only 3%. Energy usage depends strongly on action potential rate--an increase in activity of 1 action potential/cortical neuron/s will raise oxygen consumption by 145 mL/100 g grey matter/h. The energy expended on signaling is a large fraction of the total energy used by the brain; this favors the use of energy efficient neural codes and wiring patterns. Our estimates of energy usage predict the use of distributed codes, with <or=15% of neurons simultaneously active, to reduce energy consumption and allow greater computing power from a fixed number of neurons. Functional magnetic resonance imaging signals are likely to be dominated by changes in energy usage associated with synaptic currents and action potential propagation.",
"title": ""
},
{
"docid": "5db123f7b584b268f908186c67d3edcb",
"text": "From the point of view of a programmer, the robopsychology is a synonym for the activity is done by developers to implement their machine learning applications. This robopsychological approach raises some fundamental theoretical questions of machine learning. Our discussion of these questions is constrained to Turing machines. Alan Turing had given an algorithm (aka the Turing Machine) to describe algorithms. If it has been applied to describe itself then this brings us to Turing’s notion of the universal machine. In the present paper, we investigate algorithms to write algorithms. From a pedagogy point of view, this way of writing programs can be considered as a combination of learning by listening and learning by doing due to it is based on applying agent technology and machine learning. As the main result we introduce the problem of learning and then we show that it cannot easily be handled in reality therefore it is reasonable to use machine learning algorithm for learning Turing machines.",
"title": ""
},
{
"docid": "84f2072f32d2a29d372eef0f4622ddce",
"text": "This paper presents a new methodology for synthesis of broadband equivalent circuits for multi-port high speed interconnect systems from numerically obtained and/or measured frequency-domain and time-domain response data. The equivalent circuit synthesis is based on the rational function fitting of admittance matrix, which combines the frequency-domain vector fitting process, VECTFIT with its time-domain analog, TDVF to yield a robust and versatile fitting algorithm. The generated rational fit is directly converted into a SPICE-compatible circuit after passivity enforcement. The accuracy of the resulting algorithm is demonstrated through its application to the fitting of the admittance matrix of a power/ground plane structure",
"title": ""
},
{
"docid": "822b3d69fd4c55f45a30ff866c78c2b1",
"text": "Orthogonal frequency-division multiplexing (OFDM) modulation is a promising technique for achieving the high bit rates required for a wireless multimedia service. Without channel estimation and tracking, OFDM systems have to use differential phase-shift keying (DPSK), which has a 3-dB signalto-noise ratio (SNR) loss compared with coherent phase-shift keying (PSK). To improve the performance of OFDM systems by using coherent PSK, we investigate robust channel estimation for OFDM systems. We derive a minimum mean-square-error (MMSE) channel estimator, which makes full use of the timeand frequency-domain correlations of the frequency response of time-varying dispersive fading channels. Since the channel statistics are usually unknown, we also analyze the mismatch of the estimator-to-channel statistics and propose a robust channel estimator that is insensitive to the channel statistics. The robust channel estimator can significantly improve the performance of OFDM systems in a rapid dispersive fading channel.",
"title": ""
},
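The core of the estimator described above is linear-MMSE smoothing of least-squares pilot estimates using the channel's frequency-domain correlation, h_mmse = R_hh (R_hh + σ²I)⁻¹ h_ls. The sketch below assumes unit-power pilots and a simple exponential correlation model for R_hh; the paper's robust variant instead designs the smoother for a worst-case delay profile.

```python
# Linear-MMSE smoothing of a least-squares OFDM channel estimate:
#   h_mmse = R_hh @ inv(R_hh + sigma^2 * I) @ h_ls
# An exponential frequency-correlation model for R_hh is assumed here.
import numpy as np

rng = np.random.default_rng(0)
n_sub, sigma2, rho = 64, 0.05, 0.95

k = np.arange(n_sub)
R = rho ** np.abs(k[:, None] - k[None, :])           # assumed subcarrier correlation

# draw a correlated channel realisation and its noisy LS estimate (unit pilots)
L = np.linalg.cholesky(R + 1e-9 * np.eye(n_sub))
h = L @ (rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub)) / np.sqrt(2)
h_ls = h + np.sqrt(sigma2 / 2) * (rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub))

W = R @ np.linalg.inv(R + sigma2 * np.eye(n_sub))    # MMSE smoothing matrix
h_mmse = W @ h_ls

print(np.mean(np.abs(h_ls - h) ** 2), np.mean(np.abs(h_mmse - h) ** 2))  # MMSE error is lower
```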
{
"docid": "a3e1eb38273f67a283063bce79b20b9d",
"text": "In this article, we examine the impact of digital screen devices, including television, on cognitive development. Although we know that young infants and toddlers are using touch screen devices, we know little about their comprehension of the content that they encounter on them. In contrast, research suggests that children begin to comprehend child-directed television starting at ∼2 years of age. The cognitive impact of these media depends on the age of the child, the kind of programming (educational programming versus programming produced for adults), the social context of viewing, as well the particular kind of interactive media (eg, computer games). For children <2 years old, television viewing has mostly negative associations, especially for language and executive function. For preschool-aged children, television viewing has been found to have both positive and negative outcomes, and a large body of research suggests that educational television has a positive impact on cognitive development. Beyond the preschool years, children mostly consume entertainment programming, and cognitive outcomes are not well explored in research. The use of computer games as well as educational computer programs can lead to gains in academically relevant content and other cognitive skills. This article concludes by identifying topics and goals for future research and provides recommendations based on current research-based knowledge.",
"title": ""
},
{
"docid": "2a81d56c89436b3379c7dec082d19b17",
"text": "We present a fast, efficient, and automatic method for extracting vessels from retinal images. The proposed method is based on the second local entropy and on the gray-level co-occurrence matrix (GLCM). The algorithm is designed to have flexibility in the definition of the blood vessel contours. Using information from the GLCM, a statistic feature is calculated to act as a threshold value. The performance of the proposed approach was evaluated in terms of its sensitivity, specificity, and accuracy. The results obtained for these metrics were 0.9648, 0.9480, and 0.9759, respectively. These results show the high performance and accuracy that the proposed method offers. Another aspect evaluated in this method is the elapsed time to carry out the segmentation. The average time required by the proposed method is 3 s for images of size 565 9 584 pixels. To assess the ability and speed of the proposed method, the experimental results are compared with those obtained using other existing methods.",
"title": ""
},
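The passage derives its threshold from GLCM statistics. The snippet below only shows the mechanics of building a gray-level co-occurrence matrix and taking its entropy; the paper's specific "second local entropy" statistic is not reproduced, so treat this as illustrative rather than a reimplementation.

```python
# Build a GLCM for a horizontal (0, 1) offset and compute its Shannon entropy,
# a scalar that could act as a threshold. The random image is a placeholder.
import numpy as np

def glcm(img, levels=256):
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()   # horizontal neighbour pairs
    m = np.zeros((levels, levels))
    np.add.at(m, (left, right), 1)
    return m / m.sum()

def glcm_entropy(p):
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

img = np.random.default_rng(0).integers(0, 256, (64, 64))
print(glcm_entropy(glcm(img)))
```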
{
"docid": "a287e289fcf2d7e56069fabd90227c7a",
"text": "The mixing of audio signals has been at the foundation of audio production since the advent of electrical recording in the 1920’s, yet the mathematical and psychological bases for this activity are relatively under-studied. This paper investigates how the process of mixing music is conducted. We introduce a method of transformation from a “gainspace” to a “mix-space”, using a novel representation of the individual track gains. An experiment is conducted in order to obtain time-series data of mix engineers exploration of this space as they adjust levels within a multitrack session to create their desired mixture. It is observed that, while the exploration of the space is influenced by the initial configuration of track gains, there is agreement between individuals on the appropriate gain settings required to create a balanced mixture. Implications for the design of intelligent music production systems are discussed.",
"title": ""
},
{
"docid": "2e7bc1cc2f4be94ad0e4bce072a9f98a",
"text": "Glycosylation plays an important role in ensuring the proper structure and function of most biotherapeutic proteins. Even small changes in glycan composition, structure, or location can have a drastic impact on drug safety and efficacy. Recently, glycosylation has become the subject of increased focus as biopharmaceutical companies rush to create not only biosimilars, but also biobetters based on existing biotherapeutic proteins. Against this backdrop of ongoing biopharmaceutical innovation, updated methods for accurate and detailed analysis of protein glycosylation are critical for biopharmaceutical companies and government regulatory agencies alike. This review summarizes current methods of characterizing biopharmaceutical glycosylation, including compositional mass profiling, isomer-specific profiling and structural elucidation by MS and hyphenated techniques.",
"title": ""
},
{
"docid": "44b5fbb00aa1c4f9700cd06b59410d4c",
"text": "This paper presents insights from two case studies of Toyota Motor Corporation and its way of strategic global knowledge creation. We will show how Toyota’s knowledge creation has moved from merely transferring knowledge from Japan to subsidiaries abroad to a focus of creating knowledge in foreign markets by local staff. Toyota’s new strategy of ‘learn local, act global’ for international business development proved successful for tapping rich local knowledge bases, thus ensuring competitive edge. In fact, this strategy finally turned Toyota from simply being a global projector to a truly metanational company.",
"title": ""
},
{
"docid": "c7b7ca49ea887c25b05485e346b5b537",
"text": "I n our last article 1 we described the external features which characterize the cranial and facial structures of the cranial strains known as hyperflexion and hyperextension. To understand how these strains develop we have to examine the anatomical relations underlying all cranial patterns. Each strain represent a variation on a theme. By studying the features in common, it is possible to account for the facial and dental consequences of these variations. The key is the spheno-basilar symphysis and the displacements which can take place between the occiput and the sphenoid at that suture. In hyperflexion there is shortening of the cranium in an antero-posterior direction with a subsequent upward buckling of the spheno-basilar symphysis (Figure 1). In children, where the cartilage of the joint has not ossified, a v-shaped wedge can be seen occasionally on the lateral skull radiograph (Figure 2). Figure (3a) is of the cranial base seen from a vertex viewpoint. By leaving out the temporal bones the connection between the centrally placed spheno-basilar symphysis and the peripheral structures of the cranium can be seen more easily. Sutherland realized that the cranium could be divided into quadrants (Figure 3b) centered on the spheno-basilar symphysis and that what happens in each quadrant is directly influenced by the spheno-basilar symphysis. He noted that accompanying the vertical changes at the symphysis there are various lateral displacements. As the peripheral structures move laterally, this is known as external rotation. If they move closer to the midline, this is called internal rotation. It is not unusual to have one side of the face externally rotated and the other side internally rotated (Figure 4a). This can have a significant effect in the mouth, giving rise to asymmetries (Figure 4b). This shows a palatal view of the maxilla with the left posterior dentition externally rotated and the right buccal posterior segment internally rotated, reflecting the internal rotation of the whole right side of the face. This can be seen in hyperflexion but also other strains. With this background, it is now appropriate to examine in detail the cranial strain known as hyperflexion. As its name implies, it is brought about by an exaggeration of the flexion/ extension movement of the cranium into flexion. Rhythmic movement of the cranium continues despite the displacement into flexion, but it does so more readily into flexion than extension. As the skull is shortened in an antero-posterior plane, it is widened laterally. Figures 3a and 3b. 3a: cranial base from a vertex view (temporal bones left out). 3b: Sutherland’s quadrants imposed on cranial base. Figure 2. Lateral Skull Radiograph of Hyperflexion patient. Note V-shaped wedge at superior border of the spheno-basillar symphysis. Figure 1. Movement of Occiput and Sphenold in Hyperflexion. Reprinted from Orthopedic Gnathology, Hockel, J., Ed. 1983. With permission from Quintessence Publishing Co.",
"title": ""
}
] |
scidocsrr
|
7e6124467c00fc3c57c9e210e73d401d
|
Anti-pattern Detection as a Knowledge Utilization
|
[
{
"docid": "56ff8aa7934ed264908f42025d4c175b",
"text": "The identification of design patterns as part of the reengineering process can convey important information to the designer. However, existing pattern detection methodologies generally have problems in dealing with one or more of the following issues: identification of modified pattern versions, search space explosion for large systems and extensibility to novel patterns. In this paper, a design pattern detection methodology is proposed that is based on similarity scoring between graph vertices. Due to the nature of the underlying graph algorithm, this approach has the ability to also recognize patterns that are modified from their standard representation. Moreover, the approach exploits the fact that patterns reside in one or more inheritance hierarchies, reducing the size of the graphs to which the algorithm is applied. Finally, the algorithm does not rely on any pattern-specific heuristic, facilitating the extension to novel design structures. Evaluation on three open-source projects demonstrated the accuracy and the efficiency of the proposed method",
"title": ""
}
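The positive passage's detector rests on a vertex-similarity computation between the graph of a pattern and the graph of the system under analysis. A compact sketch of that fixed-point iteration (in the style of Blondel et al.'s similarity scores, which is an assumption about the exact update used) is shown below on toy adjacency matrices; real inputs would encode class relationships such as associations and generalisations.

```python
# Vertex-similarity iteration for graph-based pattern detection: given adjacency
# matrices A (pattern graph) and B (system graph), repeat
#   S <- (B S A^T + B^T S A) / ||.||_F
# S[i, j] scores how similar system vertex i is to pattern vertex j.
import numpy as np

def similarity_scores(A, B, iters=100):
    S = np.ones((B.shape[0], A.shape[0]))
    for _ in range(iters):
        S = B @ S @ A.T + B.T @ S @ A
        S /= np.linalg.norm(S)          # Frobenius normalisation
    return S

A = np.array([[0, 1], [0, 0]])                      # pattern: v0 -> v1
B = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])     # system: u0 -> u1 -> u2
print(np.round(similarity_scores(A, B), 3))
```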
] |
[
{
"docid": "7b5be6623ad250bea3b84c86c6fb0000",
"text": "HTTP video streaming, employed by most of the video-sharing websites, allows users to control the video playback using, for example, pausing and switching the bit rate. These user-viewing activities can be used to mitigate the temporal structure impairments of the video quality. On the other hand, other activities, such as mouse movement, do not help reduce the impairment level. In this paper, we have performed subjective experiments to analyze user-viewing activities and correlate them with network path performance and user quality of experience. The results show that network measurement alone may miss important information about user dissatisfaction with the video quality. Moreover, video impairments can trigger user-viewing activities, notably pausing and reducing the screen size. By including the pause events into the prediction model, we can increase its explanatory power.",
"title": ""
},
{
"docid": "7663ad8e4f8307e8bb31b0dc92457502",
"text": "Computerized clinical decision support (CDS) aims to aid decision making of health care providers and the public by providing easily accessible health-related information at the point and time it is needed. natural language processing (NLP) is instrumental in using free-text information to drive CDS, representing clinical knowledge and CDS interventions in standardized formats, and leveraging clinical narrative. The early innovative NLP research of clinical narrative was followed by a period of stable research conducted at the major clinical centers and a shift of mainstream interest to biomedical NLP. This review primarily focuses on the recently renewed interest in development of fundamental NLP methods and advances in the NLP systems for CDS. The current solutions to challenges posed by distinct sublanguages, intended user groups, and support goals are discussed.",
"title": ""
},
{
"docid": "c8a2ba8f47266d0a63281a5abb5fa47f",
"text": "Hair plays an important role in human appearance. However, hair segmentation is still a challenging problem partially due to the lack of an effective model to handle its arbitrary shape variations. In this paper, we present a part-based model robust to hair shape and environment variations. The key idea of our method is to identify local parts by promoting the effectiveness of the part-based model. To this end, we propose a measurable statistic, called Subspace Clustering Dependency (SC-Dependency), to estimate the co-occurrence probabilities between local shapes. SC-Dependency guarantees output reasonability and allows us to evaluate the effectiveness of part-wise constraints in an information-theoretic way. Then we formulate the part identification problem as an MRF that aims to optimize the effectiveness of the potential functions. Experiments are performed on a set of consumer images and show our algorithm's capability and robustness to handle hair shape variations and extreme environment conditions.",
"title": ""
},
{
"docid": "23e8e6e44e63f6ed986ee6a68add2cba",
"text": "This paper presents a high-input impedance analog front-end (AFE) for low-power bio-potential acquisition. In order to boost input impedance, parasitic capacitance from PCB, pad, and amplifier gate input were cancelled using shielding buffer and positive feedback. To maximize the amount of positive feedback while guaranteeing stability, a self-calibration scheme is proposed for the positive feedback. A prototype IC fabricated in 0.18μm CMOS consumes 255nW from 0.8V supply. Measured input impedance with the proposed calibration is 50GΩ at 50Hz which is equivalent to an input capacitance of 60fF. It is also verified that the proposed scheme is resilient to supply and temperature variations and applicable to non-contact ECG monitoring.",
"title": ""
},
{
"docid": "193aaba6fb6584304fc3183dacd8d527",
"text": "We discuss the rationale and design of a Generic Memory management Interface, for a family of scalable operating systems. It consists of a general interface for managing virtual memory, independently of the underlying hardware architecture (e.g. paged versus segmented memory), and independently of the operating system kernel in which it is to be integrated. In particular, this interface provides abstractions for support of a single, consistent cache for both mapped objects and explicit I/O, and control of data caching in real memory. Data management policies are delegated to external managers.\nA portable implementation of the Generic Memory management Interface for paged architectures, the Paged Virtual Memory manager, is detailed. The PVM uses the novel history object technique for efficient deferred copying. The GMI is used by the Chorus Nucleus, in particular to support a distributed version of Unix. Performance measurements compare favorably with other systems.",
"title": ""
},
{
"docid": "65e297211555a88647eb23a65698531c",
"text": "Game theoretical techniques have recently become prevalen t in many engineering applications, notably in communications. With the emergence of cooperation as a new communicat ion paradigm, and the need for self-organizing, decentrali zed, and autonomic networks, it has become imperative to seek sui table game theoretical tools that allow to analyze and study the behavior and interactions of the nodes in future communi cation networks. In this context, this tutorial introduces the concepts of cooperative game theory, namely coalitiona l games, and their potential applications in communication and wireless networks. For this purpose, we classify coalit i nal games into three categories: Canonical coalitional g ames, coalition formation games, and coalitional graph games. Th is new classification represents an application-oriented a pproach for understanding and analyzing coalitional games. For eac h class of coalitional games, we present the fundamental components, introduce the key properties, mathematical te hniques, and solution concepts, and describe the methodol ogies for applying these games in several applications drawn from the state-of-the-art research in communications. In a nuts hell, this article constitutes a unified treatment of coalitional g me theory tailored to the demands of communications and",
"title": ""
},
{
"docid": "94a35547a45c06a90f5f50246968b77e",
"text": "In this paper we present a process called color transfer which can borrow one image's color characteristics from another. Recently Reinhard and his colleagues reported a pioneering work of color transfer. Their technology can produce very believable results, but has to transform pixel values from RGB to lαβ. Inspired by their work, we advise an approach which can directly deal with the color transfer in any 3D space.From the view of statistics, we consider pixel's value as a three-dimension stochastic variable and an image as a set of samples, so the correlations between three components can be measured by covariance. Our method imports covariance between three components of pixel values while calculate the mean along each of the three axes. Then we decompose the covariance matrix using SVD algorithm and get a rotation matrix. Finally we can scale, rotate and shift pixel data of target image to fit data points' cluster of source image in the current color space and get resultant image which takes on source image's look and feel. Besides the global processing, a swatch-based method is introduced in order to manipulate images' color more elaborately. Experimental results confirm the validity and usefulness of our method.",
"title": ""
},
{
"docid": "f87ad429b5ff40811b1f840d32543719",
"text": "Electrical balance between the antenna and the balance network impedances is crucial for achieving high isolation in a hybrid transformer duplexer. In this paper an auto calibration loop for tuning a novel integrated balance network to track the antenna impedance variations is introduced. It achieves an isolation of more than 50 dB in the transmit and receive bands with an antenna VSWR within 2:1 and between 1.7 and 2.2 GHz. The duplexer along with a cascaded direct-conversion receiver achieves a noise figure of 5.3 dB a conversion gain of 45 dB and consumes 34 mA. The insertion loss in the transmit path was less than 3.8 dB. Implemented in a 65-nm CMOS process the chip occupies an active area of 2.2 mm2.",
"title": ""
},
{
"docid": "4523c880e099da9bbade4870da04f0c4",
"text": "Despite the hype about blockchains and distributed ledgers, formal abstractions of these objects are scarce1. To face this issue, in this paper we provide a proper formulation of a distributed ledger object. In brief, we de ne a ledger object as a sequence of records, and we provide the operations and the properties that such an object should support. Implemen- tation of a ledger object on top of multiple (possibly geographically dispersed) computing devices gives rise to the distributed ledger object. In contrast to the centralized object, dis- tribution allows operations to be applied concurrently on the ledger, introducing challenges on the consistency of the ledger in each participant. We provide the de nitions of three well known consistency guarantees in terms of the operations supported by the ledger object: (1) atomic consistency (linearizability), (2) sequential consistency, and (3) eventual consistency. We then provide implementations of distributed ledgers on asynchronous message passing crash- prone systems using an Atomic Broadcast service, and show that they provide eventual, sequen- tial or atomic consistency semantics respectively. We conclude with a variation of the ledger the validated ledger which requires that each record in the ledger satis es a particular validation rule.",
"title": ""
},
{
"docid": "1600d4662fc5939c5f737756e2d3e823",
"text": "Predicate encryption is a new paradigm for public-key encryption that generalizes identity-based encryption and more. In predicate encryption, secret keys correspond to predicates and ciphertexts are associated with attributes; the secret key SK f corresponding to a predicate f can be used to decrypt a ciphertext associated with attribute I if and only if f(I)=1. Constructions of such schemes are currently known only for certain classes of predicates. We construct a scheme for predicates corresponding to the evaluation of inner products over ℤ N (for some large integer N). This, in turn, enables constructions in which predicates correspond to the evaluation of disjunctions, polynomials, CNF/DNF formulas, thresholds, and more. Besides serving as a significant step forward in the theory of predicate encryption, our results lead to a number of applications that are interesting in their own right.",
"title": ""
},
{
"docid": "d83d672642531e1744afe77ed8379b90",
"text": "Customer churn prediction in Telecom Industry is a core research topic in recent years. A huge amount of data is generated in Telecom Industry every minute. On the other hand, there is lots of development in data mining techniques. Customer churn has emerged as one of the major issues in Telecom Industry. Telecom research indicates that it is more expensive to gain a new customer than to retain an existing one. In order to retain existing customers, Telecom providers need to know the reasons of churn, which can be realized through the knowledge extracted from Telecom data. This paper surveys the commonly used data mining techniques to identify customer churn patterns. The recent literature in the area of predictive data mining techniques in customer churn behavior is reviewed and a discussion on the future research directions is offered.",
"title": ""
},
{
"docid": "f958c7d3d27ee79c9dee944716139025",
"text": "We present a tunable flipflop-based frequency divider and a fully differential push-push VCO designed in a 200GHz fT SiGe BiCMOS technology. A new technique for tuning the sensitivity of the divider in the frequency range of interest is presented. The chip works from 60GHz up to 113GHz. The VCO is based on a new topology which allows generating differential push-push outputs. The VCO shows a tuning range larger than 7GHz. The phase noise is 75dBc/Hz at 100kHz offset. The chip shows a frequency drift of 12.3MHz/C. The fundamental signal suppression is larger than 50dB. The output power is 2×5dBm. At a 3.3V supply, the circuits consume 35mA and 65mA, respectively.",
"title": ""
},
{
"docid": "43d5075b65758780d50fc038763241ce",
"text": "Friendship is an important part of normal social functioning, yet there are precious few instruments for measuring individual differences in this domain. In this article, we report a new self-report questionnaire, the Friendship Questionnaire (FQ), for use with adults of normal intelligence. A high score on the FQ is achieved by the respondent reporting that they enjoy close, empathic, supportive, caring friendships that are important to them; that they like and are interested in people; and that they enjoy interacting with others for its own sake. The FQ has a maximum score of 135 and a minimum of zero. In Study 1, we carried out a study of n = 76 (27 males and 49 females) adults from a general population, to test for previously reported sex differences in friendships. This confirmed that women scored significantly higher than men. In Study 2, we employed the FQ with n = 68 adults (51 males, 17 females) with Asperger Syndrome or high-functioning autism to test the theory that autism is an extreme form of the male brain. The adults with Asperger Syndrome or high-functioning autism scored significantly lower on the FQ than both the male and female controls from Study 1. The FQ thus reveals both a sex difference in the style of friendship in the general population, and provides support for the extreme male brain theory of autism.",
"title": ""
},
{
"docid": "674477f1d9ed9699ad582967c5bac290",
"text": "We know very little about how neural language models (LM) use prior linguistic context. In this paper, we investigate the role of context in an LSTM LM, through ablation studies. Specifically, we analyze the increase in perplexity when prior context words are shuffled, replaced, or dropped. On two standard datasets, Penn Treebank and WikiText-2, we find that the model is capable of using about 200 tokens of context on average, but sharply distinguishes nearby context (recent 50 tokens) from the distant history. The model is highly sensitive to the order of words within the most recent sentence, but ignores word order in the long-range context (beyond 50 tokens), suggesting the distant past is modeled only as a rough semantic field or topic. We further find that the neural caching model (Grave et al., 2017b) especially helps the LSTM to copy words from within this distant context. Overall, our analysis not only provides a better understanding of how neural LMs use their context, but also sheds light on recent success from cache-based models.",
"title": ""
},
{
"docid": "0ecb65da4effb562bfa29d06769b1a4c",
"text": "A new algorithm for testing primality is presented. The algorithm is distinguishable from the lovely algorithms of Solvay and Strassen [36], Miller [27] and Rabin [32] in that its assertions of primality are certain (i.e., provable from Peano's axioms) rather than dependent on unproven hypothesis (Miller) or probability (Solovay-Strassen, Rabin). An argument is presented which suggests that the algorithm runs within time c1ln(n)c2ln(ln(ln(n))) where n is the input, and C1, c2 constants independent of n. Unfortunately no rigorous proof of this running time is yet available.",
"title": ""
},
{
"docid": "6ea91574db57616682cf2a9608b0ac0b",
"text": "METHODOLOGY AND PRINCIPAL FINDINGS\nOleuropein promoted cultured human follicle dermal papilla cell proliferation and induced LEF1 and Cyc-D1 mRNA expression and β-catenin protein expression in dermal papilla cells. Nuclear accumulation of β-catenin in dermal papilla cells was observed after oleuropein treatment. Topical application of oleuropein (0.4 mg/mouse/day) to C57BL/6N mice accelerated the hair-growth induction and increased the size of hair follicles in telogenic mouse skin. The oleuropein-treated mouse skin showed substantial upregulation of Wnt10b, FZDR1, LRP5, LEF1, Cyc-D1, IGF-1, KGF, HGF, and VEGF mRNA expression and β-catenin protein expression.\n\n\nCONCLUSIONS AND SIGNIFICANCE\nThese results demonstrate that topical oleuroepin administration induced anagenic hair growth in telogenic C57BL/6N mouse skin. The hair-growth promoting effect of oleuropein in mice appeared to be associated with the stimulation of the Wnt10b/β-catenin signaling pathway and the upregulation of IGF-1, KGF, HGF, and VEGF gene expression in mouse skin tissue.",
"title": ""
},
{
"docid": "dfbf5c12d8e5a8e5e81de5d51f382185",
"text": "Demand response (DR) is very important in the future smart grid, aiming to encourage consumers to reduce their demand during peak load hours. However, if binary decision variables are needed to specify start-up time of a particular appliance, the resulting mixed integer combinatorial problem is in general difficult to solve. In this paper, we study a versatile convex programming (CP) DR optimization framework for the automatic load management of various household appliances in a smart home. In particular, an L1 regularization technique is proposed to deal with schedule-based appliances (SAs), for which their on/off statuses are governed by binary decision variables. By relaxing these variables from integer to continuous values, the problem is reformulated as a new CP problem with an additional L1 regularization term in the objective. This allows us to transform the original mixed integer problem into a standard CP problem. Its major advantage is that the overall DR optimization problem remains to be convex and therefore the solution can be found efficiently. Moreover, a wide variety of appliances with different characteristics can be flexibly incorporated. Simulation result shows that the energy scheduling of SAs and other appliances can be determined simultaneously using the proposed CP formulation.",
"title": ""
},
{
"docid": "5660cc1d24b6ca4e2c62e5742817eaa8",
"text": "This paper presents a gain-scheduled design for a missile longitudinal autopilot. The gain-scheduled design is novel in that it does not involve linearizations about trim conditions of the missile dynamics. Rather, the missile dynamics are brought to a quasilinear parameter varying (LPV) form via a state transformation. An LPV system is defined as a linear system whose dynamics depend on an exogenous variable whose values are unknown a priori but can be measured upon system operation. In this case, the variable is the angle of attack. This is actually an endogenous variable, hence the expression \"quasi-LPV.\" Once in a quasi-LPV form, a robust controller using H synthesis is designed to achieve angle-of-attack control via fin deflections. The final design is an inner/outerloop structure, with angle-of-attack control being the inner loop and normal acceleration control being the outer loop.",
"title": ""
},
{
"docid": "805b3ad67a562c5b73bdc02c146e09ae",
"text": "Wireless sensor networks for environmental monitoring and distributed control will be deployed on a large scale in the near future. Due to the low per-node cost, these networks are expected to be both large and dense. However, because of the limited computation, storage, and power available to each node, conventional ad-hoc routing techniques are not feasible in this domain, and more novel routing algorithms are required. Despite the need for a simpler approach, routing in sensor networks still needs to be both robust to failures and secure against compromised and malicious nodes. We propose ARRIVE, a probabilistic algorithm that leverages the high node density and the inherent broadcast medium found in sensor networks to achieve routing robust to both link failures and patterned node failures without resorting to periodic flooding of the network. Our algorithm is based on a tree-like topology rooted at the sink of the network, and nodes use localized observed behavior of the surrounding nodes to make probabilistic decisions for forwarding packets. We have found that ARRIVE adapts to large patterned failures within a relatively short period of time at the cost of only moderate increases in overall power consumption and source-to-sink latency.",
"title": ""
},
{
"docid": "c13a9dae58064b9aeab7bf8918edce72",
"text": "The conventional classification schemes — notably multinomial logistic regression — used in conjunction with convolutional networks (convnets) are classical in statistics, designed without consideration for the usual coupling with convnets, stochastic gradient descent, and backpropagation. In the specific application to supervised learning for convnets, a simple scale-invariant classification stage turns out to be more robust than multinomial logistic regression, appears to result in slightly lower errors on several standard test sets, has similar computational costs, and features precise control over the actual rate of learning. “Scale-invariant” means that multiplying the input values by any nonzero scalar leaves the output unchanged.",
"title": ""
}
] |
scidocsrr
|
494618e843cad4d38743b862d5b3d3a7
|
Measuring the Lifetime Value of Customers Acquired from Google Search Advertising
|
[
{
"docid": "bfe762fc6e174778458b005be75d8285",
"text": "The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed istribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a randomeffects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.",
"title": ""
}
] |
[
{
"docid": "5b9488755fb3146adf5b6d8d767b7c8f",
"text": "This paper presents an overview of our activities for spoken and written language resources for Vietnamese implemented at CLIPSIMAG Laboratory and International Research Center MICA. A new methodology for fast text corpora acquisition for minority languages which has been applied to Vietnamese is proposed. The first results of a process of building a large Vietnamese speech database (VNSpeechCorpus) and a phonetic dictionary, which is used for automatic alignment process, are also presented.",
"title": ""
},
{
"docid": "bda892eb6cdcc818284f56b74c932072",
"text": "In this paper, a low power and low jitter 12-bit CMOS digitally controlled oscillator (DCO) design is presented. The CMOS DCO is designed based on a ring oscillator implemented with Schmitt trigger based inverters. Simulations of the proposed DCO using 32 nm CMOS predictive transistor model (PTM) achieves controllable frequency range of 570 MHz~850 MHz with a wide linearity. Monte Carlo simulation demonstrates that the time-period jitter due to random power supply fluctuation is under 75 ps and the power consumption is 2.3 mW at 800 MHz with 0.9 V power supply.",
"title": ""
},
{
"docid": "24d0d2a384b2f9cefc6e5162cdc52c45",
"text": "Food classification from images is a fine-grained classification problem. Manual curation of food images is cost, time and scalability prohibitive. On the other hand, web data is available freely but contains noise. In this paper, we address the problem of classifying food images with minimal data curation. We also tackle a key problems with food images from the web where they often have multiple cooccuring food types but are weakly labeled with a single label. We first demonstrate that by sequentially adding a few manually curated samples to a larger uncurated dataset from two web sources, the top-1 classification accuracy increases from 50.3% to 72.8%. To tackle the issue of weak labels, we augment the deep model with Weakly Supervised learning (WSL) that results in an increase in performance to 76.2%. Finally, we show some qualitative results to provide insights into the performance improvements using the proposed ideas.",
"title": ""
},
{
"docid": "723f7d157cacfcad4523f7544a9d1c77",
"text": "The superiority of deeply learned pedestrian representations has been reported in very recent literature of person re-identification (re-ID). In this article, we consider the more pragmatic issue of learning a deep feature with no or only a few labels. We propose a progressive unsupervised learning (PUL) method to transfer pretrained deep representations to unseen domains. Our method is easy to implement and can be viewed as an effective baseline for unsupervised re-ID feature learning. Specifically, PUL iterates between (1) pedestrian clustering and (2) fine-tuning of the convolutional neural network (CNN) to improve the initialization model trained on the irrelevant labeled dataset. Since the clustering results can be very noisy, we add a selection operation between the clustering and fine-tuning. At the beginning, when the model is weak, CNN is fine-tuned on a small amount of reliable examples that locate near to cluster centroids in the feature space. As the model becomes stronger, in subsequent iterations, more images are being adaptively selected as CNN training samples. Progressively, pedestrian clustering and the CNN model are improved simultaneously until algorithm convergence. This process is naturally formulated as self-paced learning. We then point out promising directions that may lead to further improvement. Extensive experiments on three large-scale re-ID datasets demonstrate that PUL outputs discriminative features that improve the re-ID accuracy. Our code has been released at https://github.com/hehefan/Unsupervised-Person-Re-identification-Clustering-and-Fine-tuning.",
"title": ""
},
{
"docid": "faf83822de9f583bebc120aecbcd107a",
"text": "Relapsed B-cell lymphomas are incurable with conventional chemotherapy and radiation therapy, although a fraction of patients can be cured with high-dose chemoradiotherapy and autologous stemcell transplantation (ASCT). We conducted a phase I/II trial to estimate the maximum tolerated dose (MTD) of iodine 131 (131I)–tositumomab (anti-CD20 antibody) that could be combined with etoposide and cyclophosphamide followed by ASCT in patients with relapsed B-cell lymphomas. Fifty-two patients received a trace-labeled infusion of 1.7 mg/kg 131Itositumomab (185-370 MBq) followed by serial quantitative gamma-camera imaging and estimation of absorbed doses of radiation to tumor sites and normal organs. Ten days later, patients received a therapeutic infusion of 1.7 mg/kg tositumomab labeled with an amount of 131I calculated to deliver the target dose of radiation (20-27 Gy) to critical normal organs (liver, kidneys, and lungs). Patients were maintained in radiation isolation until their total-body radioactivity was less than 0.07 mSv/h at 1 m. They were then given etoposide and cyclophosphamide followed by ASCT. The MTD of 131Itositumomab that could be safely combined with 60 mg/kg etoposide and 100 mg/kg cyclophosphamide delivered 25 Gy to critical normal organs. The estimated overall survival (OS) and progressionfree survival (PFS) of all treated patients at 2 years was 83% and 68%, respectively. These findings compare favorably with those in a nonrandomized control group of patients who underwent transplantation, external-beam total-body irradiation, and etoposide and cyclophosphamide therapy during the same period (OS of 53% and PFS of 36% at 2 years), even after adjustment for confounding variables in a multivariable analysis. (Blood. 2000;96:2934-2942)",
"title": ""
},
{
"docid": "6838d497f81c594cb1760c075b0f5d48",
"text": "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson $x^{2}$ divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We train LSGANs on several datasets, and the experimental results show that the images generated by LSGANs are of better quality than regular GANs. Furthermore, we evaluate the stability of LSGANs in two groups. One is to compare between LSGANs and regular GANs without gradient penalty. The other one is to compare between LSGANs with gradient penalty and WGANs with gradient penalty. We conduct four experiments to illustrate the stability of LSGANs. The other one is to compare between LSGANs with gradient penalty (LSGANs-GP) and WGANs with gradient penalty (WGANs-GP). The experimental results show that LSGANs-GP succeed in training for all the difficult architectures used in WGANs-GP, including 101-layer ResNet.",
"title": ""
},
{
"docid": "ea1d408c4e4bfe69c099412da30949b0",
"text": "The amount of scientific papers in the Molecular Biology field has experienced an enormous growth in the last years, prompting the need of developing automatic Information Extraction (IE) systems. This work is a first step towards the ontology-based domain-independent generalization of a system that identifies Escherichia coli regulatory networks. First, a domain ontology based on the RegulonDB database was designed and populated. After that, the steps of the existing IE system were generalized to use the knowledge contained in the ontology, so that it could be potentially applied to other domains. The resulting system has been tested both with abstract and full articles that describe regulatory interactions for E. coli, obtaining satisfactory results. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4b94082787aed8e947ae798b74bdd552",
"text": "AIM\nThe aim of the study was to determine the prevalence of high anxiety and substance use among university students in the Republic of Macedonia.\n\n\nMATERIAL AND METHODS\nThe sample comprised 742 students, aged 18-22 years, who attended the first (188 students) and second year studies at the Medical Faculty (257), Faculty of Dentistry (242), and Faculty of Law (55) within Ss. Cyril and Methodius University in Skopje. As a psychometric test the Beck Anxiety Inventory (BAI) was used. It is a self-rating questionnaire used for measuring the severity of anxiety. A psychiatric interview was performed with students with BAI scores > 25. A self-administered questionnaire consisted of questions on the habits of substance (alcohol, nicotine, sedative-hypnotics, and illicit drugs) use and abuse was also used. For statistical evaluation Statistica 7 software was used.\n\n\nRESULTS\nThe highest mean BAI scores were obtained by first year medical students (16.8 ± 9.8). Fifteen percent of all students and 20% of first year medical students showed high levels of anxiety. Law students showed the highest prevalence of substance use and abuse.\n\n\nCONCLUSION\nHigh anxiety and substance use as maladaptive behaviours among university students are not systematically investigated in our country. The study showed that students show these types of unhealthy reactions, regardless of the curriculum of education. More attention should be paid to students in the early stages of their education. A student counselling service which offers mental health assistance needs to be established within University facilities in R. Macedonia alongside the existing services in our health system.",
"title": ""
},
{
"docid": "d1525fdab295a16d5610210e80fb8104",
"text": "The analysis of big data requires powerful, scalable, and accurate data analytics techniques that the traditional data mining and machine learning do not have as a whole. Therefore, new data analytics frameworks are needed to deal with the big data challenges such as volumes, velocity, veracity, variety of the data. Distributed data mining constitutes a promising approach for big data sets, as they are usually produced in distributed locations, and processing them on their local sites will reduce significantly the response times, communications, etc. In this paper, we propose to study the performance of a distributed clustering, called Dynamic Distributed Clustering (DDC). DDC has the ability to remotely generate clusters and then aggregate them using an efficient aggregation algorithm. The technique is developed for spatial datasets. We evaluated the DDC using two types of communications (synchronous and asynchronous), and tested using various load distributions. The experimental results show that the approach has super-linear speed-up, scales up very well, and can take advantage of the recent programming models, such as MapReduce model, as its results are not affected by the types of communications.",
"title": ""
},
{
"docid": "7884c51de6f53d379edccac50fd55caa",
"text": "Objective. We analyze the process of changing ethical attitudes over time by focusing on a specific set of ‘‘natural experiments’’ that occurred over an 18-month period, namely, the accounting scandals that occurred involving Enron/Arthur Andersen and insider-trader allegations related to ImClone. Methods. Given the amount of media attention devoted to these ethical scandals, we test whether respondents in a cross-sectional sample taken over 18 months become less accepting of ethically charged vignettes dealing with ‘‘accounting tricks’’ and ‘‘insider trading’’ over time. Results. We find a significant and gradual decline in the acceptance of the vignettes over the 18-month period. Conclusions. Findings presented here may provide valuable insight into potential triggers of changing ethical attitudes. An intriguing implication of these results is that recent highly publicized ethical breaches may not be only a symptom, but also a cause of changing attitudes.",
"title": ""
},
{
"docid": "8d208bb5318dcbc5d941df24906e121f",
"text": "Applications based on eye-blink detection have increased, as a result of which it is essential for eye-blink detection to be robust and non-intrusive irrespective of the changes in the user's facial pose. However, most previous studies on camera-based blink detection have the disadvantage that their performances were affected by the facial pose. They also focused on blink detection using only frontal facial images. To overcome these disadvantages, we developed a new method for blink detection, which maintains its accuracy despite changes in the facial pose of the subject. This research is novel in the following four ways. First, the face and eye regions are detected by using both the AdaBoost face detector and a Lucas-Kanade-Tomasi (LKT)-based method, in order to achieve robustness to facial pose. Secondly, the determination of the state of the eye (being open or closed), needed for blink detection, is based on two features: the ratio of height to width of the eye region in a still image, and the cumulative difference of the number of black pixels of the eye region using an adaptive threshold in successive images. These two features are robustly extracted irrespective of the lighting variations by using illumination normalization. Thirdly, the accuracy of determining the eye state - open or closed - is increased by combining the above two features on the basis of the support vector machine (SVM). Finally, the SVM classifier for determining the eye state is adaptively selected according to the facial rotation. Experimental results using various databases showed that the blink detection by the proposed method is robust to various facial poses.",
"title": ""
},
{
"docid": "584de328ade02c34e36e2006f3e66332",
"text": "The HP-ASD technology has experienced a huge development in the last decade. This can be appreciated by the large number of recently introduced drive configurations on the market. In addition, many industrial applications are reaching MV operation and megawatt range or have experienced changes in requirements on efficiency, performance, and power quality, making the use of HP-ASDs more attractive. It can be concluded that, HP-ASDs is an enabling technology ready to continue powering the future of industry for the decades to come.",
"title": ""
},
{
"docid": "7aa6b9cb3a7a78ec26aff130a1c9015a",
"text": "As critical infrastructures in the Internet, data centers have evolved to include hundreds of thousands of servers in a single facility to support dataand/or computing-intensive applications. For such large-scale systems, it becomes a great challenge to design an interconnection network that provides high capacity, low complexity, low latency and low power consumption. The traditional approach is to build a hierarchical packet network using switches and routers. This approach suffers from limited scalability in the aspects of power consumption, wiring and control complexity, and delay caused by multi-hop store-andforwarding. In this paper we tackle the challenge by designing a novel switch architecture that supports direct interconnection of huge number of server racks and provides switching capacity at the level of Petabit/s. Our design combines the best features of electronics and optics. Exploiting recent advances in optics, we propose to build a bufferless optical switch fabric that includes interconnected arrayed waveguide grating routers (AWGRs) and tunable wavelength converters (TWCs). The optical fabric is integrated with electronic buffering and control to perform highspeed switching with nanosecond-level reconfiguration overhead. In particular, our architecture reduces the wiring complexity from O(N) to O(sqrt(N)). We design a practical and scalable scheduling algorithm to achieve high throughput under various traffic load. We also discuss implementation issues to justify the feasibility of this design. Simulation shows that our design achieves good throughput and delay performance.",
"title": ""
},
{
"docid": "ef9cea211dfdc79f5044a0da606bafb5",
"text": "Gender identity disorder (GID) refers to transsexual individuals who feel that their assigned biological gender is incongruent with their gender identity and this cannot be explained by any physical intersex condition. There is growing scientific interest in the last decades in studying the neuroanatomy and brain functions of transsexual individuals to better understand both the neuroanatomical features of transsexualism and the background of gender identity. So far, results are inconclusive but in general, transsexualism has been associated with a distinct neuroanatomical pattern. Studies mainly focused on male to female (MTF) transsexuals and there is scarcity of data acquired on female to male (FTM) transsexuals. Thus, our aim was to analyze structural MRI data with voxel based morphometry (VBM) obtained from both FTM and MTF transsexuals (n = 17) and compare them to the data of 18 age matched healthy control subjects (both males and females). We found differences in the regional grey matter (GM) structure of transsexual compared with control subjects, independent from their biological gender, in the cerebellum, the left angular gyrus and in the left inferior parietal lobule. Additionally, our findings showed that in several brain areas, regarding their GM volume, transsexual subjects did not differ significantly from controls sharing their gender identity but were different from those sharing their biological gender (areas in the left and right precentral gyri, the left postcentral gyrus, the left posterior cingulate, precuneus and calcarinus, the right cuneus, the right fusiform, lingual, middle and inferior occipital, and inferior temporal gyri). These results support the notion that structural brain differences exist between transsexual and healthy control subjects and that majority of these structural differences are dependent on the biological gender.",
"title": ""
},
{
"docid": "459a3bc8f54b8f7ece09d5800af7c37b",
"text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org. As companies are increasingly exposed to information security threats, decision makers are permanently forced to pay attention to security issues. Information security risk management provides an approach for measuring the security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them if their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. This work involves the conception, design, and implementation of the methodology into a software solution. The results from two qualitative case studies show the advantages of this methodology in comparison to established methodologies.",
"title": ""
},
{
"docid": "f740191f7c6d27811bb09bf40e8da021",
"text": "Collaboration Engineering is an approach for the design and deployment of repeatable collaboration processes that can be executed by practitioners without the support of collaboration professionals such as facilitators. A critical challenge in Collaboration Engineering concerns how the design activities have to be executed and which design choices have to be made to create a process design. We report on a four year design science study, in which we developed a design approach for Collaboration Engineering that",
"title": ""
},
{
"docid": "af1ddb07f08ad6065c004edae74a3f94",
"text": "Human decisions are prone to biases, and this is no less true for decisions made within data visualizations. Bias mitigation strategies often focus on the person, by educating people about their biases, typically with little success. We focus instead on the system, presenting the first evidence that altering the design of an interactive visualization tool can mitigate a strong bias – the attraction effect. Participants viewed 2D scatterplots where choices between superior alternatives were affected by the placement of other suboptimal points. We found that highlighting the superior alternatives weakened the bias, but did not eliminate it. We then tested an interactive approach where participants completely removed locally dominated points from the view, inspired by the elimination by aspects strategy in the decision-making literature. This approach strongly decreased the bias, leading to a counterintuitive suggestion: tools that allow removing inappropriately salient or distracting data from a view may help lead users to make more rational decisions.",
"title": ""
},
{
"docid": "b141c5a1b7a92856b9dc3e3958a91579",
"text": "Field-programmable analog arrays (FPAAs) provide a method for rapidly prototyping analog systems. Currently available commercial and academic FPAAs are typically based on operational amplifiers (or other similar analog primitives) with only a few computational elements per chip. While their specific architectures vary, their small sizes and often restrictive interconnect designs leave current FPAAs limited in functionality and flexibility. For FPAAs to enter the realm of large-scale reconfigurable devices such as modern field-programmable gate arrays (FPGAs), new technologies must be explored to provide area-efficient accurately programmable analog circuitry that can be easily integrated into a larger digital/mixed-signal system. Recent advances in the area of floating-gate transistors have led to a core technology that exhibits many of these qualities, and current research promises a digitally controllable analog technology that can be directly mated to commercial FPGAs. By leveraging these advances, a new generation of FPAAs is introduced in this paper that will dramatically advance the current state of the art in terms of size, functionality, and flexibility. FPAAs have been fabricated using floating-gate transistors as the sole programmable element, and the results of characterization and system-level experiments on the most recent FPAA are shown.",
"title": ""
},
{
"docid": "3dcce7058de4b41ad3614561832448a4",
"text": "Declarative models play an important role in most software design activities, by allowing designs to be constructed that selectively abstract over complex implementation details. In the user interface setting, Model-Based User Interface Development Environments (MB-UIDEs) provide a context within which declarative models can be constructed and related, as part of the interface design process. However, such declarative models are not usually directly executable, and may be difficult to relate to existing software components. It is therefore important that MB-UIDEs both fit in well with existing software architectures and standards, and provide an effective route from declarative interface specification to running user interfaces. This paper describes how user interface software is generated from declarative descriptions in the Teallach MB-UIDE. Distinctive features of Teallach include its open architecture, which connects directly to existing applications and widget sets, and the generation of executable interface applications in Java. This paper focuses on how Java programs, organized using the model-view-controller pattern (MVC), are generated from the task, domain and presentation models of Teallach.",
"title": ""
}
] |
scidocsrr
|
2b22bedc6f58481917af3d5656987d6b
|
Natural Hand Gestures Recognition System for Intelligent HCI: A Survey
|
[
{
"docid": "7cc20934720912ad1c056dc9afd97e18",
"text": "Hidden Markov models (HMM’s) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. We describe two experiments that. demonstrate a real-time HMM-based system for recognizing sentence level American Sign Language (ASL) without explicitly modeling the fingers. The first experiment tracks hands wearing colored gloves and attains a word accuracy of 99%. The second experiment tracks hands without gloves and attains a word accuracy of 92%. Both experiments have a 40 word lexicon.",
"title": ""
},
{
"docid": "fb7f079d104e81db41b01afe67cdf3b0",
"text": "In this paper, we address natural human-robot interaction (HRI) in a smart assisted living (SAIL) system for the elderly and the disabled. Two common HRI problems are studied: hand gesture recognition and daily activity recognition. For hand gesture recognition, we implemented a neural network for gesture spotting and a hierarchical hidden Markov model for context-based recognition. For daily activity recognition, a multisensor fusion scheme is developed to process motion data collected from the foot and the waist of a human subject. Experiments using a prototype wearable sensor system show the effectiveness and accuracy of our algorithms.",
"title": ""
},
{
"docid": "e0919ddaddfbf307f33b7442ee99cbad",
"text": "With the ever-increasing diffusion of computers into the society, it is widely believed that present popular mode of interactions with computers (mouse and keyboard) will become a bottleneck in the effective utilization of information flow between the computers and the human. Vision based Gesture recognition has the potential to be a natural and powerful tool supporting efficient and intuitive interaction between the human and the computer. Visual interpretation of hand gestures can help in achieving the ease and naturalness desired for Human Computer Interaction (HCI). This has motivated many researchers in computer vision-based analysis and interpretation of hand gestures as a very active research area. We surveyed the literature on visual interpretation of hand gestures in the context of its role in HCI and various seminal works of researchers are emphasized. The purpose of this review is to introduce the field of gesture recognition as a mechanism for interaction with computers.",
"title": ""
}
] |
[
{
"docid": "bc6418ef8b51eb409a79838a88bf6ae1",
"text": "The growth of outsourced storage in the form of storage service providers underlines the importance of developing efficient security mechanisms to protect the data stored in a networked storage system. For securing the data stored remotely, we consider an architecture in which clients have access to a small amount of trusted storage, which could either be local to each client or, alternatively, could be provided by a client’s organization through a dedicated server. In this thesis, we propose new approaches for various mechanisms that are currently employed in implementations of secure networked storage systems. In designing the new algorithms for securing storage systems, we set three main goals. First, security should be added by clients transparently for the storage servers so that the storage interface does not change; second, the amount of trusted storage used by clients should be minimized; and, third, the performance overhead of the security algorithms should not be prohibitive. The first contribution of this dissertation is the construction of novel space-efficient integrity algorithms for both block-level storage systems and cryptographic file systems. These constructions are based on the observation that block contents typically written to disks feature low entropy, and as such are efficiently distinguishable from uniformly random blocks. We provide a rigorous analysis of security of the new integrity algorithms and demonstrate that they maintain the same security properties as existing algorithms (e.g., Merkle tree). We implement the new algorithms for integrity checking of files in the EncFS cryptographic file system and measure their performance cost, as well as the amount of storage needed for integrity and the integrity bandwidth (i.e., the amount of information needed to update or check the integrity of a file block) used. We evaluate the block-level integrity algorithms using a disk trace we collected, and the integrity algorithms for file systems using NFS traces collected at Harvard university. We also construct efficient key management schemes for cryptographic file systems in which the re-encryption of a file following a user revocation is delayed until the next write to that file, a model called lazy revocation. The encryption key evolves at each revocation and we devise an efficient algorithm to recover previous encryption keys with only logarithmic cost in the number of revocations supported. The novel key management scheme is based on a binary tree to derive the keys and improves existing techniques by several orders of magnitude, as shown by our experiments. Our final contribution is to analyze theoretically the consistency of encrypted shared file objects used to implement cryptographic file systems. We provide sufficient conditions for the realization of a given level of consistency, when concurrent writes to both the file and encryption key objects are possible. We show that the consistency of both the key",
"title": ""
},
{
"docid": "19b283a1438058088f9f9e337dd5aac7",
"text": "Analysis on Web search query logs has revealed that there is a large portion of entity-bearing queries, reflecting the increasing demand of users on retrieving relevant information about entities such as persons, organizations, products, etc. In the meantime, significant progress has been made in Web-scale information extraction, which enables efficient entity extraction from free text. Since an entity is expected to capture the semantic content of documents and queries more accurately than a term, it would be interesting to study whether leveraging the information about entities can improve the retrieval accuracy for entity-bearing queries. In this paper, we propose a novel retrieval approach, i.e., latent entity space (LES), which models the relevance by leveraging entity profiles to represent semantic content of documents and queries. In the LES, each entity corresponds to one dimension, representing one semantic relevance aspect. We propose a formal probabilistic framework to model the relevance in the high-dimensional entity space. Experimental results over TREC collections show that the proposed LES approach is effective in capturing latent semantic content and can significantly improve the search accuracy of several state-of-the-art retrieval models for entity-bearing queries.",
"title": ""
},
{
"docid": "223252b8bf99671eedd622c99bc99aaf",
"text": "We present a novel dataset for natural language generation (NLG) in spoken dialogue systems which includes preceding context (user utterance) along with each system response to be generated, i.e., each pair of source meaning representation and target natural language paraphrase. We expect this to allow an NLG system to adapt (entrain) to the user’s way of speaking, thus creating more natural and potentially more successful responses. The dataset has been collected using crowdsourcing, with several stages to obtain natural user utterances and corresponding relevant, natural, and contextually bound system responses. The dataset is available for download under the Creative Commons 4.0 BY-SA license.",
"title": ""
},
{
"docid": "1c5e17c7acff27e3b10aecf15c5809e7",
"text": "Recent years witness a growing interest in nonstandard epistemic logics of “knowing whether”, “knowing what”, “knowing how” and so on. These logics are usually not normal, i.e., the standard axioms and reasoning rules for modal logic may be invalid. In this paper, we show that the conditional “knowing value” logic proposed by Wang and Fan [10] can be viewed as a disguised normal modal logic by treating the negation of Kv operator as a special diamond. Under this perspective, it turns out that the original first-order Kripke semantics can be greatly simplified by introducing a ternary relation R i in standard Kripke models which associates one world with two i-accessible worlds that do not agree on the value of constant c. Under intuitive constraints, the modal logic based on such Kripke models is exactly the one studied by Wang and Fan [10,11]. Moreover, there is a very natural binary generalization of the “knowing value” diamond, which, surprisingly, does not increase the expressive power of the logic. The resulting logic with the binary diamond has a transparent normal modal system which sharpens our understanding of the “knowing value” logic and simplifies some previous hard problems.",
"title": ""
},
{
"docid": "f334f49a1e21e3278c25ca0d63b2ef8a",
"text": "We show that if (J,,} is a sequence of uniformly LI-bounded functions on a measure space, and if.f, -fpointwise a.e., then lim,,_(I{lf,, 1 -IIf,, fII) If I,' for all 0 < p < oc. This result is also generalized in Theorem 2 to some functionals other than the L P norm, namely I. /( J,, -(f, f) f ) -1 0 for suitablej: C -C and a suitable sequence (fJ}. A brief discussion is given of the usefulness of this result in variational problems.",
"title": ""
},
{
"docid": "c27fb42cf33399c9c84245eeda72dd46",
"text": "The proliferation of technology has empowered the web applications. At the same time, the presences of Cross-Site Scripting (XSS) vulnerabilities in web applications have become a major concern for all. Despite the many current detection and prevention approaches, attackers are exploiting XSS vulnerabilities continuously and causing significant harm to the web users. In this paper, we formulate the detection of XSS vulnerabilities as a prediction model based classification problem. A novel approach based on text-mining and pattern-matching techniques is proposed to extract a set of features from source code files. The extracted features are used to build prediction models, which can discriminate the vulnerable code files from the benign ones. The efficiency of the developed models is evaluated on a publicly available labeled dataset that contains 9408 PHP labeled (i.e. safe, unsafe) source code files. The experimental results depict the superiority of the proposed approach over existing ones.",
"title": ""
},
{
"docid": "feed386f42b9e4940adb4ce6db0e947b",
"text": "We proposed an algorithm to significantly reduce of the number of neurons in a convolutional neural network by adding sparse constraints during the training step. The forward-backward splitting method is applied to solve the sparse constrained problem. We also analyze the benefits of using rectified linear units as non-linear activation function to remove a larger number of neurons. Experiments using four popular CNNs including AlexNet and VGG-B demonstrate the capacity of the proposed method to reduce the number of neurons, therefore, the number of parameters and memory footprint, with a negligible loss in performance.",
"title": ""
},
{
"docid": "8d0066400985b2577f4fbe8013d5ba1d",
"text": "In recent years, the increasing propagation of hate speech on social media and the urgent need for effective counter-measures have drawn significant investment from governments, companies, and empirical research. Despite a large number of emerging scientific studies to address the problem, a major limitation of existing work is the lack of comparative evaluations, which makes it difficult to assess the contribution of individual works. This paper introduces a new method based on a deep neural network combining convolutional and gated recurrent networks. We conduct an extensive evaluation of the method against several baselines and state of the art on the largest collection of publicly available Twitter datasets to date, and show that compared to previously reported results on these datasets, our proposed method is able to capture both word sequence and order information in short texts, and it sets new benchmark by outperforming on 6 out of 7 datasets by between 1 and 13 percents in F1. We also extend the existing dataset collection on this task by creating a new dataset covering different topics.",
"title": ""
},
{
"docid": "d369d3bd03f54e9cb912f53cdaf51631",
"text": "This paper presents a method to detect table regions in document images by identifying the column and row line-separators and their properties. The method employs a run-length approach to identify the horizontal and vertical lines present in the input image. From each group of intersecting horizontal and vertical lines, a set of 26 low-level features are extracted and an SVM classifier is used to test if it belongs to a table or not. The performance of the method is evaluated on a heterogeneous corpus of French, English and Arabic documents that contain various types of table structures and compared with that of the Tesseract OCR system.",
"title": ""
},
{
"docid": "54ec681832cd276b6641f7e7e08205a7",
"text": "In this paper, we proposed PRPRS (Personalized Research Paper Recommendation System) that designed expansively and implemented a UserProfile-based algorithm for extracting keyword by keyword extraction and keyword inference. If the papers don't have keyword section, we consider the title and text as an argument of keyword and execute the algorithm. Then, we create the possible combination from each word of title. We extract the combinations presented in the main text among the longest word combinations which include the same words. If the number of extracted combinations is more than the standard number, we used that combination as keyword. Otherwise, we refer the main text and extract combination as much as standard in order of high Term-Frequency. Whenever collected research papers by topic are selected, a renewal of UserProfile increases the frequency of each Domain, Topic and keyword. Each ratio of occurrence is recalculated and reflected on UserProfile. PRPRS calculates the similarity between given topic and collected papers by using Cosine Similarity which is used to recommend initial paper for each topic in Information retrieval. We measured satisfaction and accuracy for each system-recommended paper to test and evaluated performances of the suggested system. Finally PRPRS represents high level of satisfaction and accuracy.",
"title": ""
},
{
"docid": "ff4c034ecbd01e0308b68df353ce1751",
"text": "Social media is a rich data source for analyzing the social impact of hazard processes and human behavior in disaster situations; it is used by rescue agencies for coordination and by local governments for the distribution of official information. In this paper, we propose a method for data mining in Twitter to retrieve messages related to an event. We describe an automated process for the collection of hashtags highly related to the event and specific only to it. We compare our method with existing keyword-based methods and prove that hashtags are good markers for the separation of similar, simultaneous incidents; therefore, the retrieved messages have higher relevancy. The method uses disaster databases to find the location of an event and to estimate the impact area. The proposed method can also be adapted to retrieve messages about other types of events with a known location, such as riots, festivals and exhibitions.",
"title": ""
},
{
"docid": "4261306ca632ada117bdb69af81dcb3f",
"text": "Real-world deployments of wireless sensor networks (WSNs) require secure communication. It is important that a receiver is able to verify that sensor data was generated by trusted nodes. In some cases it may also be necessary to encrypt sensor data in transit. Recently, WSNs and traditional IP networks are more tightly integrated using IPv6 and 6LoWPAN. Available IPv6 protocol stacks can use IPsec to secure data exchange. Thus, it is desirable to extend 6LoWPAN such that IPsec communication with IPv6 nodes is possible. It is beneficial to use IPsec because the existing end-points on the Internet do not need to be modified to communicate securely with the WSN. Moreover, using IPsec, true end-to-end security is implemented and the need for a trustworthy gateway is removed. In this paper we provide End-to-End (E2E) secure communication between an IP enabled sensor nodes and a device on traditional Internet. This is the first compressed lightweight design, implementation, and evaluation of 6LoWPAN extension for IPsec on Contiki. Our extension supports both IPsec’s Authentication Header (AH) and Encapsulation Security Payload (ESP). Thus, communication endpoints are able to authenticate, encrypt and check the integrity of messages using standardized and established IPv6 mechanisms.",
"title": ""
},
{
"docid": "561e9f599e5dc470ca6f57faa62ebfce",
"text": "Rapid learning requires flexible representations to quickly adopt to new evidence. We develop a novel class of models called Attentive Recurrent Comparators (ARCs) that form representations of objects by cycling through them and making observations. Using the representations extracted by ARCs, we develop a way of approximating a dynamic representation space and use it for oneshot learning. In the task of one-shot classification on the Omniglot dataset, we achieve the state of the art performance with an error rate of 1.5%. This represents the first super-human result achieved for this task with a generic model that uses only pixel information.",
"title": ""
},
{
"docid": "7831c93b0c09c1690b4a2f1fefa766c4",
"text": "Amazon Web Services offers a broad set of global cloud-based products including compute, storage, databases, analytics, networking, mobile, developer tools, management tools, IoT, security, and enterprise applications: on-demand, available in seconds, with pay-as-you-go pricing. From data warehousing to deployment tools, directories to content delivery, over 140 AWS services are available. New services can be provisioned quickly, without the upfront capital expense. This allows enterprises, start-ups, small and medium-sized businesses, and customers in the public sector to access the building blocks they need to respond quickly to changing business requirements. This whitepaper provides you with an overview of the benefits of the AWS Cloud and introduces you to the services that make up the platform.",
"title": ""
},
{
"docid": "51f5ba274068c0c03e5126bda056ba98",
"text": "Electricity is conceivably the most multipurpose energy carrier in modern global economy, and therefore primarily linked to human and economic development. Energy sector reform is critical to sustainable energy development and includes reviewing and reforming subsidies, establishing credible regulatory frameworks, developing policy environments through regulatory interventions, and creating marketbased approaches. Energy security has recently become an important policy driver and privatization of the electricity sector has secured energy supply and provided cheaper energy services in some countries in the short term, but has led to contrary effects elsewhere due to increasing competition, resulting in deferred investments in plant and infrastructure due to longer-term uncertainties. On the other hand global dependence on fossil fuels has led to the release of over 1100 GtCO2 into the atmosphere since the mid-19th century. Currently, energy-related GHG emissions, mainly from fossil fuel combustion for heat supply, electricity generation and transport, account for around 70% of total emissions including carbon dioxide, methane and some traces of nitrous oxide. This multitude of aspects play a role in societal debate in comparing electricity generating and supply options, such as cost, GHG emissions, radiological and toxicological exposure, occupational health and safety, employment, domestic energy security, and social impressions. Energy systems engineering provides a methodological scientific framework to arrive at realistic integrated solutions to complex energy problems, by adopting a holistic, systems-based approach, especially at decision making and planning stage. Modeling and optimization found widespread applications in the study of physical and chemical systems, production planning and scheduling systems, location and transportation problems, resource allocation in financial systems, and engineering design. This article reviews the literature on power and supply sector developments and analyzes the role of modeling and optimization in this sector as well as the future prospective of optimization modeling as a tool for sustainable energy systems. © 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7a356a485b46c6fc712a0174947e142e",
"text": "A systematic review of the literature related to effective occupational therapy interventions in rehabilitation of individuals with work-related forearm, wrist, and hand injuries and illnesses was conducted as part of the Evidence-Based Literature Review Project of the American Occupational Therapy Association. This review provides a comprehensive overview and analysis of 36 studies that addressed many of the interventions commonly used in hand rehabilitation. Findings reveal that the use of occupation-based activities has reasonable yet limited evidence to support its effectiveness. This review supports the premise that many client factors can be positively affected through the use of several commonly used occupational therapy-related modalities and methods. The implications for occupational therapy practice, research, and education and limitations of reviewed studies are also discussed.",
"title": ""
},
{
"docid": "f5b027fedefe929e9530f038c3fb219a",
"text": "Outfits in online fashion data are composed of items of many different types (e.g . top, bottom, shoes) that share some stylistic relationship with one another. A representation for building outfits requires a method that can learn both notions of similarity (for example, when two tops are interchangeable) and compatibility (items of possibly different type that can go together in an outfit). This paper presents an approach to learning an image embedding that respects item type, and jointly learns notions of item similarity and compatibility in an end-toend model. To evaluate the learned representation, we crawled 68,306 outfits created by users on the Polyvore website. Our approach obtains 3-5% improvement over the state-of-the-art on outfit compatibility prediction and fill-in-the-blank tasks using our dataset, as well as an established smaller dataset, while supporting a variety of useful queries.",
"title": ""
},
{
"docid": "abdd8eb3c08b63762cb0a0dffdbade12",
"text": "Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by describing some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time.",
"title": ""
},
{
"docid": "7359729fe4bb369798c05c8c7c258111",
"text": "By considering various situations of climatologically phenomena affecting local weather conditions in various parts of the world. These weather conditions have a direct effect on crop yield. Various researches have been done exploring the connections between large-scale climatologically phenomena and crop yield. Artificial neural networks have been demonstrated to be powerful tools for modeling and prediction, to increase their effectiveness. Crop prediction methodology is used to predict the suitable crop by sensing various parameter of soil and also parameter related to atmosphere. Parameters like type of soil, PH, nitrogen, phosphate, potassium, organic carbon, calcium, magnesium, sulphur, manganese, copper, iron, depth, temperature, rainfall, humidity. For that purpose we are used artificial neural network (ANN).",
"title": ""
},
{
"docid": "b8bd0e7a31e4ae02f845fa5f57a5297f",
"text": "In this paper, we formalize and model context in terms of a set of concepts grounded in the sensorimotor interactions of a robot. The concepts are modeled as a web using Markov Random Field (MRF), inspired from the concept web hypothesis for representing concepts in humans. On this concept web, we treat context as a latent variable of Latent Dirichlet Allocation (LDA), which is a widely-used method in computational linguistics for modeling topics in texts. We extend the standard LDA method in order to make it incremental so that: 1) it does not relearn everything from scratch given new interactions (i.e., it is online); and 2) it can discover and add a new context into its model when necessary. We demonstrate on the iCub platform that, partly owing to modeling context on top of the concept web, our approach is adaptive, online, and robust: it is adaptive and online since it can learn and discover a new context from new interactions. It is robust since it is not affected by irrelevant stimuli and it can discover contexts after a few interactions only. Moreover, we show how to use the context learned in such a model for two important tasks: object recognition and planning.",
"title": ""
}
] |
scidocsrr
|
d2f41b7b54666c0c6d95140ca3095cc6
|
PALM-COEIN FIGO Classification for diagnosis of Abnormal Uterine Bleeding : Practical Utility of same at Tertiary Care Centre in North India
|
[
{
"docid": "cfa8e5af1a37c96617164ea319dba4a5",
"text": "In 2011, the FIGO classification system (PALM-COEIN) was published to standardize terminology, diagnostic and investigations of causes of abnormal uterine bleeding (AUB). According to FIGO new classification, in the absence of structural etiology, the formerly called \"dysfunctional uterine bleeding\" should be avoided and clinicians should state if AUB are caused by coagulation disorders (AUB-C), ovulation disorder (AUB-O), or endometrial primary dysfunction (AUB-E). Since this publication, some societies have released or revised their guidelines for the diagnosis and the management of the formerly called \"dysfunctional uterine bleeding\" according new FIGO classification. In this review, we summarize the most relevant new guidelines for the diagnosis and the management of AUB-C, AUB-O, and AUB-E.",
"title": ""
}
] |
[
{
"docid": "c5081f86c4a173a40175e65b05d9effb",
"text": "Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.",
"title": ""
},
{
"docid": "c69e249b0061057617eb8c70d26df0b4",
"text": "This paper explores the use of GaN MOSFETs and series-connected inverter segments to realize an IMMD. The proposed IMMD topology reduces the segment voltage and offers an opportunity to utilize wide bandgap 200V GaN MOSFETs. Consequently, a reduction in IMMD size is achieved by eliminating inverter heat sink and optimizing the choice of DC-link capacitors. Gate signals of the IMMD segments are shifted (interleaved) to cancel the capacitor voltage ripple and further reduce the capacitor size. Motor winding configuration and coupling effect are also investigated to match with the IMMD design. An actively controlled balancing resistor is programmed to balance the voltages of series connected IMMD segments. Furthermore, this paper presents simulation results as well as experiment results to validate the proposed design.",
"title": ""
},
{
"docid": "981d140731d8a3cdbaebacc1fd26484a",
"text": "A new wideband bandpass filter (BPF) with composite short- and open-circuited stubs has been proposed in this letter. With the two kinds of stubs, two pairs of transmission zeros (TZs) can be produced on the two sides of the desired passband. The even-/odd-mode analysis method is used to derive the input admittances of its bisection circuits. After the Richard's transformation, these bisection circuits are in the same format of two LC circuits. By combining these two LC circuits, the equivalent circuit of the proposed filter is obtained. Through the analysis of the equivalent circuit, the open-circuited stubs introduce transmission poles in the complex frequencies and one pair of TZs in the real frequencies, and the short-circuited stubs generate one pair of TZs to block the dc component. A wideband BPF is designed and fabricated to verify the proposed design principle.",
"title": ""
},
{
"docid": "b68001bf953e63db5ef12be3b20a90aa",
"text": "Contrast sensitivity (CS) is the ability of the observer to discriminate between adjacent stimuli on the basis of their differences in relative luminosity (contrast) rather than their absolute luminances. In previous studies, using a narrow range of species, birds have been reported to have low contrast detection thresholds relative to mammals and fishes. This was an unexpected finding because birds had been traditionally reported to have excellent visual acuity and color vision. This study reports CS in six species of birds that represent a range of visual adaptations to varying environments. The species studied were American kestrels (Falco sparverius), barn owls (Tyto alba), Japanese quail (Coturnix coturnix japonica), white Carneaux pigeons (Columba livia), starlings (Sturnus vulgaris), and red-bellied woodpeckers (Melanerpes carolinus). Contrast sensitivity functions (CSFs) were obtained from these birds using the pattern electroretinogram and compared with CSFs from the literature when possible. All of these species exhibited low CS relative to humans and most mammals, which suggests that low CS is a general characteristic of birds. Their low maximum CS may represent a trade-off of contrast detection for some other ecologically vital capacity such as UV detection or other aspects of their unique color vision.",
"title": ""
},
{
"docid": "8e7d3462f93178f6c2901a429df22948",
"text": "This article analyzes China's pension arrangement and notes that China has recently established a universal non-contributory pension plan covering urban non-employed workers and all rural residents, combined with the pension plan covering urban employees already in place. Further, in the latest reform, China has discontinued the special pension plan for civil servants and integrated this privileged welfare class into the urban old-age pension insurance program. With these steps, China has achieved a degree of universalism and integration of its pension arrangement unprecedented in the non-Western world. Despite this radical pension transformation strategy, we argue that the current Chinese pension arrangement represents a case of \"incomplete\" universalism. First, its benefit level is low. Moreover, the benefit level varies from region to region. Finally, universalism in rural China has been undermined due to the existence of the \"policy bundle.\" Additionally, we argue that the 2015 pension reform has created a situation in which the stratification of Chinese pension arrangements has been \"flattened,\" even though it remains stratified to some extent.",
"title": ""
},
{
"docid": "d9791131cefcf0aa18befb25c12b65b2",
"text": "Medical record linkage is becoming increasingly important as clinical data is distributed across independent sources. To improve linkage accuracy we studied different name comparison methods that establish agreement or disagreement between corresponding names. In addition to exact raw name matching and exact phonetic name matching, we tested three approximate string comparators. The approximate comparators included the modified Jaro-Winkler method, the longest common substring, and the Levenshtein edit distance. We also calculated the combined root-mean square of all three. We tested each name comparison method using a deterministic record linkage algorithm. Results were consistent across both hospitals. At a threshold comparator score of 0.8, the Jaro-Winkler comparator achieved the highest linkage sensitivities of 97.4% and 97.7%. The combined root-mean square method achieved sensitivities higher than the Levenshtein edit distance or long-est common substring while sustaining high linkage specificity. Approximate string comparators increase deterministic linkage sensitivity by up to 10% compared to exact match comparisons and represent an accurate method of linking to vital statistics data.",
"title": ""
},
{
"docid": "645395d46f653358d942742711d50c0b",
"text": "Feature descriptors play a crucial role in a wide range of geometry analysis and processing applications, including shape correspondence, retrieval, and segmentation. In this paper, we propose ShapeNet, a generalization of the popular convolutional neural networks (CNN) paradigm to non-Euclidean manifolds. Our construction is based on a local geodesic system of polar coordinates to extract “patches”, which are then passed through a cascade of filters and linear and non-linear operators. The coefficients of the filters and linear combination weights are optimization variables that are learned to minimize a task-specific cost function. We use ShapeNet to learn invariant shape feature descriptors that significantly outperform recent state-of-the-art methods, and show that previous approaches such as heat and wave kernel signatures, optimal spectral descriptors, and intrinsic shape contexts can be obtained as particular configurations of ShapeNet. CR Categories: I.2.6 [Artificial Intelligence]: Learning— Connectionism and neural nets",
"title": ""
},
{
"docid": "f05cb5a3aeea8c4151324ad28ad4dc93",
"text": "With the discovery of induced pluripotent stem (iPS) cells, it is now possible to convert differentiated somatic cells into multipotent stem cells that have the capacity to generate all cell types of adult tissues. Thus, there is a wide variety of applications for this technology, including regenerative medicine, in vitro disease modeling, and drug screening/discovery. Although biological and biochemical techniques have been well established for cell reprogramming, bioengineering technologies offer novel tools for the reprogramming, expansion, isolation, and differentiation of iPS cells. In this article, we review these bioengineering approaches for the derivation and manipulation of iPS cells and focus on their relevance to regenerative medicine.",
"title": ""
},
{
"docid": "4345ed089e019402a5a4e30497bccc8a",
"text": "BACKGROUND\nFluridil, a novel topical antiandrogen, suppresses the human androgen receptor. While highly hydrophobic and hydrolytically degradable, it is systemically nonresorbable. In animals, fluridil demonstrated high local and general tolerance.\n\n\nOBJECTIVE\nTo evaluate the safety and efficacy of a topical anti- androgen, fluridil, in male androgenetic alopecia.\n\n\nMETHODS\nIn 20 men, for 21 days, occlusive forearm patches with 2, 4, and 6% fluridil, isopropanol, and/or vaseline were applied. In 43 men with androgenetic alopecia (AGA), Norwood grade II-Va, 2% fluridil was evaluated in a double-blind, placebo-controlled study after 3 months clinically by phototrichograms, hematology, and blood chemistry including analysis for fluridil, and at 9 months by phototrichograms.\n\n\nRESULTS\nNeither fluridil nor isopropanol showed sensitization/irritation potential, unlike vaseline. In all AGA subjects, baseline anagen/telogen counts were equal. After 3 months, the average anagen percentage did not change in placebo subjects, but increased in fluridil subjects from 76% to 85%, and at 9 months to 87%. In former placebo subjects, fluridil increased the anagen percentage after 6 months from 76% to 85%. Sexual functions, libido, hematology, and blood chemistry values were normal throughout, except that at 3 months, in the spring, serum testosterone increased within the normal range equally in placebo and fluridil groups. No fluridil or its decomposition product, BP-34, was detectable in the serum at 0, 3, or 90 days.\n\n\nCONCLUSION\nTopical fluridil is nonirritating, nonsensitizing, nonresorbable, devoid of systemic activity, and anagen promoting after daily use in most AGA males.",
"title": ""
},
{
"docid": "dd211105651b376b40205eb16efe1c25",
"text": "WBAN based medical-health technologies have great potential for continuous monitoring in ambulatory settings, early detection of abnormal conditions, and supervised rehabilitation. They can provide patients with increased confidence and a better quality of life, and promote healthy behavior and health awareness. Continuous monitoring with early detection likely has the potential to provide patients with an increased level of confidence, which in turn may improve quality of life. In addition, ambulatory monitoring will allow patients to engage in normal activities of daily life, rather than staying at home or close to specialized medical services. Last but not least, inclusion of continuous monitoring data into medical databases will allow integrated analysis of all data to optimize individualized care and provide knowledge discovery through integrated data mining. Indeed, with the current technological trend toward integration of processors and wireless interfaces, we will soon have coin-sized intelligent sensors. They will be applied as skin patches, seamlessly integrated into a personal monitoring system, and worn for extended periods of time.",
"title": ""
},
{
"docid": "f5e56872c66a126ada7d54c218c06836",
"text": "INTRODUCTION\nGender dysphoria, a marked incongruence between one's experienced gender and biological sex, is commonly believed to arise from discrepant cerebral and genital sexual differentiation. With the discovery that estrogen receptor β is associated with female-to-male (FtM) but not with male-to-female (MtF) gender dysphoria, and given estrogen receptor α involvement in central nervous system masculinization, it was hypothesized that estrogen receptor α, encoded by the ESR1 gene, also might be implicated.\n\n\nAIM\nTo investigate whether ESR1 polymorphisms (TA)n-rs3138774, PvuII-rs2234693, and XbaI-rs9340799 and their haplotypes are associated with gender dysphoria in adults.\n\n\nMETHODS\nMolecular analysis was performed in peripheral blood samples from 183 FtM subjects, 184 MtF subjects, and 394 sex- and ethnically-matched controls.\n\n\nMAIN OUTCOME MEASURES\nGenotype and haplotype analyses of the (TA)n-rs3138774, PvuII-rs2234693, and XbaI-rs9340799 polymorphisms.\n\n\nRESULTS\nAllele and genotype frequencies for the polymorphism XbaI were statistically significant only in FtM vs control XX subjects (P = .021 and P = .020). In XX individuals, the A/G genotype was associated with a low risk of gender dysphoria (odds ratio [OR] = 0.34; 95% CI = 0.16-0.74; P = .011); in XY individuals, the A/A genotype implied a low risk of gender dysphoria (OR = 0.39; 95% CI = 0.17-0.89; P = .008). Binary logistic regression showed partial effects for all three polymorphisms in FtM but not in MtF subjects. The three polymorphisms were in linkage disequilibrium: a small number of TA repeats was linked to the presence of PvuII and XbaI restriction sites (haplotype S-T-A), and a large number of TA repeats was linked to the absence of these restriction sites (haplotype L-C-G). In XX individuals, the presence of haplotype L-C-G carried a low risk of gender dysphoria (OR = 0.66; 95% CI = 0.44-0.99; P = .046), whereas the presence of haplotype L-C-A carried a high susceptibility to gender dysphoria (OR = 3.96; 95% CI = 1.04-15.02; P = .044). Global haplotype was associated with FtM gender dysphoria (P = .017) but not with MtF gender dysphoria.\n\n\nCONCLUSIONS\nXbaI-rs9340799 is involved in FtM gender dysphoria in adults. Our findings suggest different genetic programs for gender dysphoria in men and women. Cortés-Cortés J, Fernández R, Teijeiro N, et al. Genotypes and Haplotypes of the Estrogen Receptor α Gene (ESR1) Are Associated With Female-to-Male Gender Dysphoria. J Sex Med 2017;14:464-472.",
"title": ""
},
{
"docid": "c4d0a1cd8a835dc343b456430791035b",
"text": "Social networks offer an invaluable amount of data from which useful information can be obtained on the major issues in society, among which crime stands out. Research about information extraction of criminal events in Social Networks has been done primarily in English language, while in Spanish, the problem has not been addressed. This paper propose a system for extracting spatio-temporally tagged tweets about crime events in Spanish language. In order to do so, it uses a thesaurus of criminality terms and a NER (named entity recognition) system to process the tweets and extract the relevant information. The NER system is based on the implementation OSU Twitter NLP Tools, which has been enhanced for Spanish language. Our results indicate an improved performance in relation to the most relevant tools such as Standford NER and OSU Twitter NLP Tools, achieving 80.95% precision, 59.65% recall and 68.69% F-measure. The end result shows the crime information broken down by place, date and crime committed through a webservice.",
"title": ""
},
{
"docid": "489015cc236bd20f9b2b40142e4b5859",
"text": "We present an experimental study which demonstrates that model checking techniques can be effective in finding synchronization errors in safety critical software when they are combined with a design for verification approach. We apply the concurrency controller design pattern to the implementation of the synchronization operations in Java programs. This pattern enables a modular verification strategy by decoupling the behaviors of the concurrency controllers from the behaviors of the threads that use them using interfaces specified as finite state machines. The behavior of a concurrency controller can be verified with respect to arbitrary numbers of threads using infinite state model checking techniques, and the threads which use the controller classes can be checked for interface violations using finite state model checking techniques. We present techniques for thread isolation which enables us to analyze each thread in the program separately during interface verification. We conducted an experimental study investigating the effectiveness of the presented design for verification approach on safety critical air traffic control software. In this study, we first reengineered the Tactical Separation Assisted Flight Environment (TSAFE) software using the concurrency controller design pattern. Then, using fault seeding, we created 40 faulty versions of TSAFE and used both infinite and finite state verification techniques for finding the seeded faults. The experimental study demonstrated the effectiveness of the presented modular verification approach and resulted in a classification of faults that can be found using the presented approach.",
"title": ""
},
{
"docid": "ae8292c58a58928594d5f3730a6feacf",
"text": "Photoplethysmography (PPG) signals, captured using smart phones are generally noisy in nature. Although they have been successfully used to determine heart rate from frequency domain analysis, further indirect markers like blood pressure (BP) require time domain analysis for which the signal needs to be substantially cleaned. In this paper we propose a methodology to clean such noisy PPG signals. Apart from filtering, the proposed approach reduces the baseline drift of PPG signal to near zero. Furthermore it models each cycle of PPG signal as a sum of 2 Gaussian functions which is a novel contribution of the method. We show that, the noise cleaning effect produces better accuracy and consistency in estimating BP, compared to the state of the art method that uses the 2-element Windkessel model on features derived from raw PPG signal, captured from an Android phone.",
"title": ""
},
{
"docid": "fc2f99fff361e68f154d88da0739bac4",
"text": "Mondor's disease is characterized by thrombophlebitis of the superficial veins of the breast and the chest wall. The list of causes is long. Various types of clothing, mainly tight bras and girdles, have been postulated as causes. We report a case of a 34-year-old woman who referred typical symptoms and signs of Mondor's disease, without other possible risk factors, and showed the cutaneous findings of the tight bra. Therefore, after distinguishing benign causes of Mondor's disease from hidden malignant causes, the clinicians should consider this clinical entity.",
"title": ""
},
{
"docid": "269e2f8bca42d5369f9337aea6191795",
"text": "Today, exposure to new and unfamiliar environments is a necessary part of daily life. Effective communication of location-based information through location-based services has become a key concern for cartographers, geographers, human-computer interaction and professional designers alike. Recently, much attention was directed towards Augmented Reality (AR) interfaces. Current research, however, focuses primarily on computer vision and tracking, or investigates the needs of urban residents, already familiar with their environment. Adopting a user-centred design approach, this paper reports findings from an empirical mobile study investigating how tourists acquire knowledge about an unfamiliar urban environment through AR browsers. Qualitative and quantitative data was used in the development of a framework that shifts the perspective towards a more thorough understanding of the overall design space for such interfaces. The authors analysis provides a frame of reference for the design and evaluation of mobile AR interfaces. The authors demonstrate the application of the framework with respect to optimization of current design of AR.",
"title": ""
},
{
"docid": "95fe3badecc7fa92af6b6aa49b6ff3b2",
"text": "As low-resolution position sensors, a high placement accuracy of Hall-effect sensors is hard to achieve. Accordingly, a commutation angle error is generated. The commutation angle error will inevitably increase the loss of the low inductance motor and even cause serious consequence, which is the abnormal conduction of a freewheeling diode in the unexcited phase especially at high speed. In this paper, the influence of the commutation angle error on the power loss for the high-speed brushless dc motor with low inductance and nonideal back electromotive force in a magnetically suspended control moment gyro (MSCMG) is analyzed in detail. In order to achieve low steady-state loss of an MSCMG for space application, a straightforward method of self-compensation of commutation angle based on dc-link current is proposed. Both simulation and experimental results confirm the feasibility and effectiveness of the proposed method.",
"title": ""
},
{
"docid": "0b6ce2e4f3ef7f747f38068adef3da54",
"text": "Network throughput can be increased by allowing multipath, adaptive routing. Adaptive routing allows more freedom in the paths taken by messages, spreading load over physical channels more evenly. The flexibility of adaptive routing introduces new possibilities of deadlock. Previous deadlock avoidance schemes in k-ary n-cubes require an exponential number of virtual channels, independent of network size and dimension. Planar adaptive routing algorithms reduce the complexity of deadlock prevention by reducing the number of choices at each routing step. In the fault-free case, planar-adaptive networks are guaranteed to be deadlock-free. In the presence of network faults, the planar-adaptive router can be extended with misrouting to produce a working network which remains provably deadlock free and is provably livelock free. In addition, planar adaptive networks can simultaneously support both in-order and adaptive, out-of-order packet delivery.\nPlanar-adaptive routing is of practical significance. It provides the simplest known support for deadlock-free adaptive routing in k-ary n-cubes of more than two dimensions (with k > 2). Restricting adaptivity reduces the hardware complexity, improving router speed or allowing additional performance-enhancing network features. The structure of planar-adaptive routers is amenable to efficient implementation.",
"title": ""
},
{
"docid": "488c7437a32daec6fbad12e07bb31f4c",
"text": "Studying characters plays a vital role in computationally representing and interpreting narratives. Unlike previous work, which has focused on inferring character roles, we focus on the problem of modeling their relationships. Rather than assuming a fixed relationship for a character pair, we hypothesize that relationships temporally evolve with the progress of the narrative, and formulate the problem of relationship modeling as a structured prediction problem. We propose a semisupervised framework to learn relationship sequences from fully as well as partially labeled data. We present a Markovian model capable of accumulating historical beliefs about the relationship and status changes. We use a set of rich linguistic and semantically motivated features that incorporate world knowledge to investigate the textual content of narrative. We empirically demonstrate that such a framework outperforms competitive baselines.",
"title": ""
},
{
"docid": "cd3d9bb066729fc7107c0fef89f664fe",
"text": "The extended contact hypothesis proposes that knowledge that an in-group member has a close relationship with an out-group member can lead to more positive intergroup attitudes. Proposed mechanisms are the in-group or out-group member serving as positive exemplars and the inclusion of the out-group member's group membership in the self. In Studies I and 2, respondents knowing an in-group member with an out-group friend had less negative attitudes toward that out-group, even controlling for disposition.il variables and direct out-group friendships. Study 3, with constructed intergroup-conflict situations (on the robbers cave model). found reduced negative out-group attitudes after participants learned of cross-group friendships. Study 4, a minimal group experiment, showed less negative out-group attitudes for participants observing an apparent in-group-out-group friendship.",
"title": ""
}
] |
scidocsrr
|
ea9b364a78fc2387e1dad358f0192471
|
Advances in Clickstream Data Analysis in Marketing
|
[
{
"docid": "6db749b222a44764cf07bde527c230a3",
"text": "There have been many claims that the Internet represents a new “frictionless market.” Our research empirically analyzes the characteristics of the Internet as a channel for two categories of homogeneous products — books and CDs. Using a data set of over 8,500 price observations collected over a period of 15 months, we compare pricing behavior at 41 Internet and conventional retail outlets. We find that prices on the Internet are 9-16% lower than prices in conventional outlets, depending on whether taxes, shipping and shopping costs are included in the price. Additionally, we find that Internet retailers’ price adjustments over time are up to 100 times smaller than conventional retailers’ price adjustments — presumably reflecting lower menu costs in Internet channels. We also find that levels of price dispersion depend importantly on the measures employed. When we simply compare the prices posted by different Internet retailers we find substantial dispersion. Internet retailer prices differ by an average of 33% for books and 25% for CDs. However, when we weight these prices by proxies for market share, we find dispersion is lower in Internet channels than in conventional channels, reflecting the dominance of certain heavily branded retailers. We conclude that while there is lower friction in many dimensions of Internet competition, branding, awareness, and trust remain important sources of heterogeneity among Internet retailers.",
"title": ""
},
{
"docid": "c02d207ed8606165e078de53a03bf608",
"text": "School of Business, University of Maryland (e-mail: mtrusov@rhsmith. umd.edu). Anand V. Bodapati is Associate Professor of Marketing (e-mail: anand.bodapati@anderson.ucla.edu), and Randolph E. Bucklin is Peter W. Mullin Professor (e-mail: rbucklin@anderson.ucla.edu), Anderson School of Management, University of California, Los Angeles. The authors are grateful to Christophe Van den Bulte and Dawn Iacobucci for their insightful and thoughtful comments on this work. John Hauser served as associate editor for this article. MICHAEL TRUSOV, ANAND V. BODAPATI, and RANDOLPH E. BUCKLIN*",
"title": ""
}
] |
[
{
"docid": "87be04b184d27c006bb06dd9906a9422",
"text": "With the significant growth of the markets for consumer electronics and various embedded systems, flash memory is now an economic solution for storage systems design. Because index structures require intensively fine-grained updates/modifications, block-oriented access over flash memory could introduce a significant number of redundant writes. This might not only severely degrade the overall performance, but also damage the reliability of flash memory. In this paper, we propose a very different approach, which can efficiently handle fine-grained updates/modifications caused by B-tree index access over flash memory. The implementation is done directly over the flash translation layer (FTL); hence, no modifications to existing application systems are needed. We demonstrate that when index structures are adopted over flash memory, the proposed methodology can significantly improve the system performance and, at the same time, reduce both the overhead of flash-memory management and the energy dissipation. The average response time of record insertions and deletions was also significantly reduced.",
"title": ""
},
{
"docid": "742dbd75ad995d5c51c4cbce0cc7f8cc",
"text": "Grasping objects under uncertainty remains an open problem in robotics research. This uncertainty is often due to noisy or partial observations of the object pose or shape. To enable a robot to react appropriately to unforeseen effects, it is crucial that it continuously takes sensor feedback into account. While visual feedback is important for inferring a grasp pose and reaching for an object, contact feedback offers valuable information during manipulation and grasp acquisition. In this paper, we use model-free deep reinforcement learning to synthesize control policies that exploit contact sensing to generate robust grasping under uncertainty. We demonstrate our approach on a multi-fingered hand that exhibits more complex finger coordination than the commonly used twofingered grippers. We conduct extensive experiments in order to assess the performance of the learned policies, with and without contact sensing. While it is possible to learn grasping policies without contact sensing, our results suggest that contact feedback allows for a significant improvement of grasping robustness under object pose uncertainty and for objects with a complex shape.",
"title": ""
},
{
"docid": "c02697087e8efd4c1ba9f9a26fa1115b",
"text": "OBJECTIVE\nTo estimate the current prevalence of limb loss in the United States and project the future prevalence to the year 2050.\n\n\nDESIGN\nEstimates were constructed using age-, sex-, and race-specific incidence rates for amputation combined with age-, sex-, and race-specific assumptions about mortality. Incidence rates were derived from the 1988 to 1999 Nationwide Inpatient Sample of the Healthcare Cost and Utilization Project, corrected for the likelihood of reamputation among those undergoing amputation for vascular disease. Incidence rates were assumed to remain constant over time and applied to historic mortality and population data along with the best available estimates of relative risk, future mortality, and future population projections. To investigate the sensitivity of our projections to increasing or decreasing incidence, we developed alternative sets of estimates of limb loss related to dysvascular conditions based on assumptions of a 10% or 25% increase or decrease in incidence of amputations for these conditions.\n\n\nSETTING\nCommunity, nonfederal, short-term hospitals in the United States.\n\n\nPARTICIPANTS\nPersons who were discharged from a hospital with a procedure code for upper-limb or lower-limb amputation or diagnosis code of traumatic amputation.\n\n\nINTERVENTIONS\nNot applicable.\n\n\nMAIN OUTCOME MEASURES\nPrevalence of limb loss by age, sex, race, etiology, and level in 2005 and projections to the year 2050.\n\n\nRESULTS\nIn the year 2005, 1.6 million persons were living with the loss of a limb. Of these subjects, 42% were nonwhite and 38% had an amputation secondary to dysvascular disease with a comorbid diagnosis of diabetes mellitus. It is projected that the number of people living with the loss of a limb will more than double by the year 2050 to 3.6 million. If incidence rates secondary to dysvascular disease can be reduced by 10%, this number would be lowered by 225,000.\n\n\nCONCLUSIONS\nOne in 190 Americans is currently living with the loss of a limb. Unchecked, this number may double by the year 2050.",
"title": ""
},
{
"docid": "74ca823c5dfb41e3566a29549c8137ab",
"text": "\"Experimental realization of quantum algorithm for solving linear systems of equations\" (2014). Many important problems in science and engineering can be reduced to the problem of solving linear equations. The quantum algorithm discovered recently indicates that one can solve an N-dimensional linear equation in O(log N) time, which provides an exponential speedup over the classical counterpart. Here we report an experimental demonstration of the quantum algorithm when the scale of the linear equation is 2 × 2 using a nuclear magnetic resonance quantum information processor. For all sets of experiments, the fidelities of the final four-qubit states are all above 96%. This experiment gives the possibility of solving a series of practical problems related to linear systems of equations and can serve as the basis to realize many potential quantum algorithms.",
"title": ""
},
{
"docid": "c3b07d5c9a88c1f9430615d5e78675b6",
"text": "Two new algorithms and associated neuron-like network architectures are proposed for solving the eigenvalue problem in real-time. The first approach is based on the solution of a set of nonlinear algebraic equations by employing optimization techniques. The second approach employs a multilayer neural network with linear artificial neurons and it exploits the continuous-time error back-propagation learning algorithm. The second approach enables us to find all the eigenvalues and the associated eigenvectors simultaneously by training the network to match some desired patterns, while the first approach is suitable to find during one run only one particular eigenvalue (e.g. an extreme eigenvalue) and the corresponding eigenvector in realtime. In order to find all eigenpairs the optimization process must be repeated in this case many times for different initial conditions. The performance and convergence behaviour of the proposed neural network architectures are investigated by extensive computer simulations.",
"title": ""
},
{
"docid": "2b09ae15fe7756df3da71cfc948e9506",
"text": "Repair of the injured spinal cord by regeneration therapy remains an elusive goal. In contrast, progress in medical care and rehabilitation has resulted in improved health and function of persons with spinal cord injury (SCI). In the absence of a cure, raising the level of achievable function in mobility and self-care will first and foremost depend on creative use of the rapidly advancing technology that has been so widely applied in our society. Building on achievements in microelectronics, microprocessing and neuroscience, rehabilitation medicine scientists have succeeded in developing functional electrical stimulation (FES) systems that enable certain individuals with SCI to use their paralyzed hands, arms, trunk, legs and diaphragm for functional purposes and gain a degree of control over bladder and bowel evacuation. This review presents an overview of the progress made, describes the current challenges and suggests ways to improve further FES systems and make these more widely available.",
"title": ""
},
{
"docid": "79e2e4af34e8a2b89d9439ff83b9fd5a",
"text": "PROBLEM\nThe current nursing workforce is composed of multigenerational staff members creating challenges and at times conflict for managers.\n\n\nMETHODS\nGenerational cohorts are defined and two multigenerational scenarios are presented and discussed using the ACORN imperatives and Hahn's Five Managerial Strategies for effectively managing a multigenerational staff.\n\n\nFINDINGS\nCommunication and respect are the underlying key strategies to understanding and bridging the generational gap in the workplace.\n\n\nCONCLUSION\nEmbracing and respecting generational differences can bring strength and cohesiveness to nursing teams on the managerial or unit level.",
"title": ""
},
{
"docid": "6ad90319d07abce021eda6f3a1d3886e",
"text": "Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple “truncation trick,” allowing fine control over the trade-off between sample fidelity and variety by truncating the latent space. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.3 and Fréchet Inception Distance (FID) of 9.6, improving over the previous best IS of 52.52 and FID of 18.65.",
"title": ""
},
{
"docid": "eba25ae59603328f3ef84c0994d46472",
"text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.",
"title": ""
},
{
"docid": "78744205cf17be3ee5a61d12e6a44180",
"text": "Modeling of photovoltaic (PV) systems is essential for the designers of solar generation plants to do a yield analysis that accurately predicts the expected power output under changing environmental conditions. This paper presents a comparative analysis of PV module modeling methods based on the single-diode model with series and shunt resistances. Parameter estimation techniques within a modeling method are used to estimate the five unknown parameters in the single diode model. Two sets of estimated parameters were used to plot the I-V characteristics of two PV modules, i.e., SQ80 and KC200GT, for the different sets of modeling equations, which are classified into models 1 to 5 in this study. Each model is based on the different combinations of diode saturation current and photogenerated current plotted under varying irradiance and temperature. Modeling was done using MATLAB/Simulink software, and the results from each model were first verified for correctness against the results produced by their respective authors. Then, a comparison was made among the different models (models 1 to 5) with respect to experimentally measured and datasheet I-V curves. The resultant plots were used to draw conclusions on which combination of parameter estimation technique and modeling method best emulates the manufacturer specified characteristics.",
"title": ""
},
{
"docid": "b266069e91c24120b1732c5576087a90",
"text": "Reactions of organic molecules on Montmorillonite c lay mineral have been investigated from various asp ects. These include catalytic reactions for organic synthesis, chemical evolution, the mechanism of humus-formatio n, and environmental problems. Catalysis by clay minerals has attracted much interest recently, and many repo rts including the catalysis by synthetic or modified cl ays have been published. In this review, we will li mit the review to organic reactions using Montmorillonite clay as cat alyst.",
"title": ""
},
{
"docid": "b9652cf6647d9c7c1f91a345021731db",
"text": "Context: The processes of estimating, planning and managing are crucial for software development projects, since the results must be related to several business strategies. The broad expansion of the Internet and the global and interconnected economy make Web development projects be often characterized by expressions like delivering as soon as possible, reducing time to market and adapting to undefined requirements. In this kind of environment, traditional methodologies based on predictive techniques sometimes do not offer very satisfactory results. The rise of Agile methodologies and practices has provided some useful tools that, combined with Web Engineering techniques, can help to establish a framework to estimate, manage and plan Web development projects. Objective: This paper presents a proposal for estimating, planning and managing Web projects, by combining some existing Agile techniques with Web Engineering principles, presenting them as an unified framework which uses the business value to guide the delivery of features. Method: The proposal is analyzed by means of a case study, including a real-life project, in order to obtain relevant conclusions. Results: The results achieved after using the framework in a development project are presented, including interesting results on project planning and estimation, as well as on team productivity throughout the project. Conclusion: It is concluded that the framework can be useful in order to better manage Web-based projects, through a continuous value-based estimation and management process.",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "85719d4bc86c7c8bbe5799a716d6533b",
"text": "We propose Sparse Neural Network architectures that are based on random or structured bipartite graph topologies. Sparse architectures provide compression of the models learned and speed-ups of computations, they can also surpass their unstructured or fully connected counterparts. As we show, even more compact topologies of the so-called SNN (Sparse Neural Network) can be achieved with the use of structured graphs of connections between consecutive layers of neurons. In this paper, we investigate how the accuracy and training speed of the models depend on the topology and sparsity of the neural network. Previous approaches using sparcity are all based on fully connected neural network models and create sparcity during training phase, instead we explicitly define a sparse architectures of connections before the training. Building compact neural network models is coherent with empirical observations showing that there is much redundancy in learned neural network models. We show experimentally that the accuracy of the models learned with neural networks depends on ”expander-like” properties of the underlying topologies such as the spectral gap and algebraic connectivity rather than the density of the graphs of connections. 1 ar X iv :1 70 6. 05 68 3v 1 [ cs .L G ] 1 8 Ju n 20 17",
"title": ""
},
{
"docid": "e5a18d6df921ab96da8e106cdb4eeac7",
"text": "This article extends psychological methods and concepts into a domain that is as profoundly consequential as it is poorly understood: intelligence analysis. We report findings from a geopolitical forecasting tournament that assessed the accuracy of more than 150,000 forecasts of 743 participants on 199 events occurring over 2 years. Participants were above average in intelligence and political knowledge relative to the general population. Individual differences in performance emerged, and forecasting skills were surprisingly consistent over time. Key predictors were (a) dispositional variables of cognitive ability, political knowledge, and open-mindedness; (b) situational variables of training in probabilistic reasoning and participation in collaborative teams that shared information and discussed rationales (Mellers, Ungar, et al., 2014); and (c) behavioral variables of deliberation time and frequency of belief updating. We developed a profile of the best forecasters; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness. They had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments. Last but not least, they viewed forecasting as a skill that required deliberate practice, sustained effort, and constant monitoring of current affairs.",
"title": ""
},
{
"docid": "7e9dbc7f1c3855972dbe014e2223424c",
"text": "Speech disfluencies (filled pauses, repe titions, repairs, and false starts) are pervasive in spontaneous speech. The ab ility to detect and correct disfluencies automatically is important for effective natural language understanding, as well as to improve speech models in general. Previous approaches to disfluency detection have relied heavily on lexical information, which makes them less applicable when word recognition is unreliable. We have developed a disfluency detection method using decision tree classifiers that use only local and automatically extracted prosodic features. Because the model doesn’t rely on lexical information, it is widely applicable even when word recognition is unreliable. The model performed significantly better than chance at detecting four disfluency types. It also outperformed a language model in the detection of false starts, given the correct transcription. Combining the prosody model with a specialized language model improved accuracy over either model alone for the detection of false starts. Results suggest that a prosody-only model can aid the automatic detection of disfluencies in spontaneous speech.",
"title": ""
},
{
"docid": "7340866fa3965558e1571bcc5294b896",
"text": "The human stress response has been characterized, both physiologically and behaviorally, as \"fight-or-flight.\" Although fight-or-flight may characterize the primary physiological responses to stress for both males and females, we propose that, behaviorally, females' responses are more marked by a pattern of \"tend-and-befriend.\" Tending involves nurturant activities designed to protect the self and offspring that promote safety and reduce distress; befriending is the creation and maintenance of social networks that may aid in this process. The biobehavioral mechanism that underlies the tend-and-befriend pattern appears to draw on the attachment-caregiving system, and neuroendocrine evidence from animal and human studies suggests that oxytocin, in conjunction with female reproductive hormones and endogenous opioid peptide mechanisms, may be at its core. This previously unexplored stress regulatory system has manifold implications for the study of stress.",
"title": ""
},
{
"docid": "ad2546a681a3b6bcef689f0bb71636b5",
"text": "Data and computation integrity and security are major concerns for users of cloud computing facilities. Many production-level clouds optimistically assume that all cloud nodes are equally trustworthy when dispatching jobs; jobs are dispatched based on node load, not reputation. This increases their vulnerability to attack, since compromising even one node suffices to corrupt the integrity of many distributed computations. This paper presents and evaluates Hatman: the first full-scale, data-centric, reputation-based trust management system for Hadoop clouds. Hatman dynamically assesses node integrity by comparing job replica outputs for consistency. This yields agreement feedback for a trust manager based on EigenTrust. Low overhead and high scalability is achieved by formulating both consistency-checking and trust management as secure cloud computations; thus, the cloud's distributed computing power is leveraged to strengthen its security. Experiments demonstrate that with feedback from only 100 jobs, Hatman attains over 90% accuracy when 25% of the Hadoop cloud is malicious.",
"title": ""
}
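The Hatman passage above says its trust manager is based on EigenTrust, fed by job-replica agreement feedback. A minimal sketch of that underlying idea, not Hatman itself: pairwise agreement scores are normalised into a column-stochastic matrix and global trust values are obtained by damped power iteration. All variable names, the damping factor, and the toy agreement matrix are illustrative assumptions.

```python
import numpy as np

def eigentrust(local_scores, pre_trusted=None, alpha=0.15, tol=1e-9, max_iter=1000):
    """Global trust values from pairwise agreement scores (EigenTrust-style)."""
    n = local_scores.shape[0]
    s = np.maximum(local_scores, 0.0)
    np.fill_diagonal(s, 0.0)
    col_sums = s.sum(axis=0)
    # Column-normalise; columns with no ratings fall back to the uniform distribution.
    c = np.where(col_sums > 0, s / np.maximum(col_sums, 1e-12), 1.0 / n)
    p = pre_trusted if pre_trusted is not None else np.full(n, 1.0 / n)
    t = p.copy()
    for _ in range(max_iter):
        t_next = (1 - alpha) * c @ t + alpha * p   # damped power iteration
        if np.linalg.norm(t_next - t, 1) < tol:
            return t_next
        t = t_next
    return t

# Toy cloud with 4 nodes; node 3 disagrees with everyone and ends up with low trust.
agreement = np.array([
    [0, 9, 8, 1],
    [9, 0, 7, 1],
    [8, 7, 0, 2],
    [1, 1, 2, 0],
], dtype=float)
print(eigentrust(agreement).round(3))
```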
] |
scidocsrr
|
044c327bc2efa9c369a9376be47b5647
|
Reservoir computing approaches to recurrent neural network training
|
[
{
"docid": "6ea55a91df6f65ff9a52a793d09fadeb",
"text": "Many applications of Reservoir Computing (and other signal processing techniques) have to deal with information processing of signals with multiple time-scales. Classical Reservoir Computing approaches can only cope with multiple frequencies to a limited degree. In this work we investigate reservoirs build of band-pass filter neurons which can be made sensitive to a specified frequency band. We demonstrate that many currently difficult tasks for reservoirs can be handled much better by a band-pass filter reservoir.",
"title": ""
}
] |
[
{
"docid": "20b6d457acf80a2171880ca312def57f",
"text": "Recent evidence points to a possible overlap in the neural systems underlying the distressing experience that accompanies physical pain and social rejection (Eisenberger et al., 2003). The present study tested two hypotheses that stem from this suggested overlap, namely: (1) that baseline sensitivity to physical pain will predict sensitivity to social rejection and (2) that experiences that heighten social distress will heighten sensitivity to physical pain as well. In the current study, participants' baseline cutaneous heat pain unpleasantness thresholds were assessed prior to the completion of a task that manipulated feelings of social distress. During this task, participants played a virtual ball-tossing game, allegedly with two other individuals, in which they were either continuously included (social inclusion condition) or they were left out of the game by either never being included or by being overtly excluded (social rejection conditions). At the end of the game, three pain stimuli were delivered and participants rated the unpleasantness of each. Results indicated that greater baseline sensitivity to pain (lower pain unpleasantness thresholds) was associated with greater self-reported social distress in response to the social rejection conditions. Additionally, for those in the social rejection conditions, greater reports of social distress were associated with greater reports of pain unpleasantness to the thermal stimuli delivered at the end of the game. These results provide additional support for the hypothesis that pain distress and social distress share neurocognitive substrates. Implications for clinical populations are discussed.",
"title": ""
},
{
"docid": "90568129b677670b636b461639e853ee",
"text": "Much of the work conducted on adult stem cells has focused on mesenchymal stem cells (MSCs) found within the bone marrow stroma. Adipose tissue, like bone marrow, is derived from the embryonic mesenchyme and contains a stroma that is easily isolated. Preliminary studies have recently identified a putative stem cell population within the adipose stromal compartment. This cell population, termed processed lipoaspirate (PLA) cells, can be isolated from human lipoaspirates and, like MSCs, differentiate toward the osteogenic, adipogenic, myogenic, and chondrogenic lineages. To confirm whether adipose tissue contains stem cells, the PLA population and multiple clonal isolates were analyzed using several molecular and biochemical approaches. PLA cells expressed multiple CD marker antigens similar to those observed on MSCs. Mesodermal lineage induction of PLA cells and clones resulted in the expression of multiple lineage-specific genes and proteins. Furthermore, biochemical analysis also confirmed lineage-specific activity. In addition to mesodermal capacity, PLA cells and clones differentiated into putative neurogenic cells, exhibiting a neuronal-like morphology and expressing several proteins consistent with the neuronal phenotype. Finally, PLA cells exhibited unique characteristics distinct from those seen in MSCs, including differences in CD marker profile and gene expression.",
"title": ""
},
{
"docid": "f06aaad6da36bfd60c1937c20390f3bb",
"text": "Spinal cord injury (SCI) is a devastating neurological disorder. Autophagy is induced and plays a crucial role in SCI. Ginsenoside Rb1 (Rb1), one of the major active components extracted from Panax Ginseng CA Meyer, has exhibited neuroprotective effects in various neurodegenerative diseases. However, it remains unknown whether autophagy is involved in the neuroprotection of Rb1 on SCI. In this study, we examined the regulation of autophagy following Rb1 treatment and its involvement in the Rb1-induced neuroprotection in SCI and in vitro injury model. Firstly, we found that Rb1 treatment decreased the loss of motor neurons and promoted function recovery in the SCI model. Furthermore, we found that Rb1 treatment inhibited autophagy in neurons, and suppressed neuronal apoptosis and autophagic cell death in the SCI model. Finally, in the in vitro injury model, Rb1 treatment increased the viability of PC12 cells and suppressed apoptosis by inhibiting excessive autophagy, whereas stimulation of autophagy by rapamycin abolished the anti-apoptosis effect of Rb1. Taken together, these findings suggest that the inhibition of autophagy is involved in the neuroprotective effects of Rb1 on SCI.",
"title": ""
},
{
"docid": "6b16bc1aeb9ad7bc25bf2154c534d5dc",
"text": "Neighbor Discovery for IP Version 6 (IPv6) | <draft-ietf-ipngwg-discovery-v2-01.txt> | Status of this Memo This document is an Internet-Draft. Internet-Drafts are working * documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as ''work in progress.'' To learn the current status of any Internet-Draft, please check the ''1id-abstracts.txt'' listing contained in the Internet-Drafts Shadow Directories on ds.internic.net (US East Coast), nic.nordu.net Abstract This document specifies the Neighbor Discovery protocol for IP * Version 6. IPv6 nodes on the same link use Neighbor Discovery to discover each other's presence, to determine each other's link-layer addresses, to find routers and to maintain reachability information about the paths to active neighbors. * draft-ietf-ipngwg-discovery-v2-01.txt [Page 1]",
"title": ""
},
{
"docid": "470810494ae81cc2361380c42116c8d7",
"text": "Sustainability is significantly important for fashion business due to consumers’ increasing awareness of environment. When a fashion company aims to promote sustainability, the main linkage is to develop a sustainable supply chain. This paper contributes to current knowledge of sustainable supply chain in the textile and clothing industry. We first depict the structure of sustainable fashion supply chain including eco-material preparation, sustainable manufacturing, green distribution, green retailing, and ethical consumers based on the extant literature. We study the case of the Swedish fast fashion company, H&M, which has constructed its sustainable supply chain in developing eco-materials, providing safety training, monitoring sustainable manufacturing, reducing carbon emission in distribution, and promoting eco-fashion. Moreover, based on the secondary data and analysis, we learn the lessons of H&M’s sustainable fashion supply chain from the country perspective: (1) the H&M’s sourcing managers may be more likely to select suppliers in the countries with lower degrees of human wellbeing; (2) the H&M’s supply chain manager may set a higher level of inventory in a country with a higher human wellbeing; and (3) the H&M CEO may consider the degrees of human wellbeing and economic wellbeing, instead of environmental wellbeing when launching the online shopping channel in a specific country.",
"title": ""
},
{
"docid": "44c65e6d783e646034b60c99f8958250",
"text": "Extreme learning machine (ELM) randomly generates parameters of hidden nodes and then analytically determines the output weights with fast learning speed. The ill-posed problem of parameter matrix of hidden nodes directly causes unstable performance, and the automatical selection problem of the hidden nodes is critical to holding the high efficiency of ELM. Focusing on the ill-posed problem and the automatical selection problem of the hidden nodes, this paper proposes the variational Bayesian extreme learning machine (VBELM). First, the Bayesian probabilistic model is involved into ELM, where the Bayesian prior distribution can avoid the ill-posed problem of hidden node matrix. Then, the variational approximation inference is employed in the Bayesian model to compute the posterior distribution and the independent variational hyperparameters approximately, which can be used to select the hidden nodes automatically. Theoretical analysis and experimental results elucidate that VBELM has stabler performance with more compact architectures, which presents probabilistic predictions comparison with traditional point predictions, and it also provides the hyperparameter criterion for hidden node selection.",
"title": ""
},
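For context, the VBELM passage above builds on the basic extreme learning machine, in which hidden-layer parameters are drawn at random and only the output weights are solved for analytically. The sketch below shows that plain ELM baseline, not the variational Bayesian extension the abstract proposes; the class name, activation, and the use of a Moore-Penrose pseudoinverse are illustrative assumptions.

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine with random hidden nodes."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)   # random feature map

    def fit(self, X, y):
        n_features = X.shape[1]
        # Hidden-node parameters are random and never trained.
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights are solved analytically via the pseudoinverse.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy regression: y = sin(x) with noise.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
model = ELM(n_hidden=50).fit(X, y)
print("train MSE:", np.mean((model.predict(X) - y) ** 2))
```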
{
"docid": "e519d705cd52b4eb24e4e936b849b3ce",
"text": "Computer manufacturers spend a huge amount of time, resources, and money in designing new systems and newer configurations, and their ability to reduce costs, charge competitive prices and gain market share depends on how good these systems perform. In this work, we develop predictive models for estimating the performance of systems by using performance numbers from only a small fraction of the overall design space. Specifically, we first develop three models, two based on artificial neural networks and another based on linear regression. Using these models, we analyze the published Standard Performance Evaluation Corporation (SPEC) benchmark results and show that by using the performance numbers of only 2% and 5% of the machines in the design space, we can estimate the performance of all the systems within 9.1% and 4.6% on average, respectively. Then, we show that the performance of future systems can be estimated with less than 2.2% error rate on average by using the data of systems from a previous year. We believe that these tools can accelerate the design space exploration significantly and aid in reducing the corresponding research/development cost and time-to-market.",
"title": ""
},
{
"docid": "cba9f80ab39de507e84b68dc598d0bb9",
"text": "In this paper we construct a noncommutative space of “pointed Drinfeld modules” that generalizes to the case of function fields the noncommutative spaces of commensurability classes of Q-lattices. It extends the usual moduli spaces of Drinfeld modules to possibly degenerate level structures. In the second part of the paper we develop some notions of quantum statistical mechanics in positive characteristic and we show that, in the case of Drinfeld modules of rank one, there is a natural time evolution on the associated noncommutative space, which is closely related to the positive characteristic L-functions introduced by Goss. The points of the usual moduli space of Drinfeld modules define KMS functionals for this time evolution. We also show that the scaling action on the dual system is induced by a Frobenius action, up to a Wick rotation to imaginary time. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "cf6eb57b4740d3e14a73fd6197769bf5",
"text": "Microwave Materials such as Rogers RO3003 are subject to process-related fluctuations in terms of the relative permittivity. The behavior of high frequency circuits like patch-antenna arrays and their distribution networks is dependent on the effective wavelength. Therefore, fluctuations of the relative permittivity will influence the resonance frequency and antenna beam direction. This paper presents a grounded coplanar wave-guide based sensor, which can measure the relative permittivity at 77 GHz, as well as at other resonance frequencies, by applying it on top of the manufactured depaneling. In addition, the sensor is robust against floating ground metallizations on inner printed circuit board layers, which are typically distributed over the entire surface below antennas.",
"title": ""
},
{
"docid": "386fa6e79c948925337651c4c04d326b",
"text": "Thyroid associated orbitopathy, also known as Graves' orbitopathy, is typically a self-limiting autoimmune process associated with dysthyroid states. The clinical presentation may vary from very mild disease to severe irreversible sight-threatening complications. Despite ongoing basic science and clinical research, the pathogenesis and highly effective therapeutic strategies remain elusive. The present article reviews the pathophysiology, clinical presentation, and management of this common, yet poorly understood disease, which remains a challenge to the ophthalmologist.",
"title": ""
},
{
"docid": "8b38fd43c9d418b356ef009e9612e564",
"text": "English. This work aims at evaluating and comparing two different frameworks for the unsupervised topic modelling of the CompWHoB Corpus, namely our political-linguistic dataset. The first approach is represented by the application of the latent DirichLet Allocation (henceforth LDA), defining the evaluation of this model as baseline of comparison. The second framework employs Word2Vec technique to learn the word vector representations to be later used to topic-model our data. Compared to the previously defined LDA baseline, results show that the use of Word2Vec word embeddings significantly improves topic modelling performance but only when an accurate and taskoriented linguistic pre-processing step is carried out. Italiano. L’obiettivo di questo contributo è di valutare e confrontare due differenti framework per l’apprendimento automatico del topic sul CompWHoB Corpus, la nostra risorsa testuale. Dopo aver implementato il modello della latent DirichLet Allocation, abbiamo definito come standard di riferimento la valutazione di questo stesso approccio. Come secondo framework, abbiamo utilizzato il modello Word2Vec per apprendere le rappresentazioni vettoriali dei termini successivamente impiegati come input per la fase di apprendimento automatico del topic. I risulati mostrano che utilizzando i ‘word embeddings’ generati da Word2Vec, le prestazioni del modello aumentano significativamente ma solo se supportati da una accurata fase di ‘pre-processing’ linguisti-",
"title": ""
},
{
"docid": "c7188c78b818b9d487b76b9d2c731992",
"text": "Overview 7 Purpose of the study 7 Background to the study 7 The place of CIL in relation to traditional disciplines 10 Research questions, participants, and instruments 12 Computer and information literacy framework 15 Overview 15 Defining computer and information literacy 16 Structure of the computer and information literacy construct 18 Strands and aspects 19 Contextual framework 25 Overview 25 Classification of contextual factors 25 Contextual levels and variables 27 Assessment design 35 The ICILS test design 35 The ICILS test instrument 36 Types of assessment task 36 Mapping test items to the CIL framework 43 The ICILS student questionnaire and context instruments 44 Foreword As an international, nonprofit cooperative of national research institutions and governmental research agencies, the International Association for the Evaluation of Educational Achievement (IEA) has conducted more than 30 large-scale comparative studies in countries around the world. These studies have reported on educational policies, practices, and learning outcomes on a wide range of topics and subject matters. These investigations have proven to be a key resource for monitoring educational quality and progress within individual countries and across a broad international context. The International Computer and Information Literacy Study (ICILS) follows a series of earlier IEA studies that had, as their particular focus, information and communication technologies (ICT) in education. The first of these, the Computers in Education Study (COMPED), was carried out in 1989 and again in 1992 for the purpose of reporting on the educational use of computers in the context of emerging governmental initiatives to implement ICT in schools. The next series of projects in this area was the Second These projects provided an update on the implementation of computer technology resources in schools and their utilization in the teaching process. The continuing rapid development of computer and other information technologies has transformed the environment in which young people access, create, and share information. Many countries, having recognized the imperative of digital technology in all its forms, acknowledge the need to educate their citizens in the use of these technologies so that they and their society can secure the future economic and social benefits of proficiency in the use of digital technologies. Within this context, many questions relating to the efficacy of instructional programs and how instruction is progressed in the area of digital literacy arise. ICILS represents the first international comparative study to investigate how students are developing the set of knowledge, understanding, …",
"title": ""
},
{
"docid": "1e139fa9673f83ac619a5da53391b1ef",
"text": "In this paper we propose a new no-reference (NR) image quality assessment (IQA) metric using the recently revealed free-energy-based brain theory and classical human visual system (HVS)-inspired features. The features used can be divided into three groups. The first involves the features inspired by the free energy principle and the structural degradation model. Furthermore, the free energy theory also reveals that the HVS always tries to infer the meaningful part from the visual stimuli. In terms of this finding, we first predict an image that the HVS perceives from a distorted image based on the free energy theory, then the second group of features is composed of some HVS-inspired features (such as structural information and gradient magnitude) computed using the distorted and predicted images. The third group of features quantifies the possible losses of “naturalness” in the distorted image by fitting the generalized Gaussian distribution to mean subtracted contrast normalized coefficients. After feature extraction, our algorithm utilizes the support vector machine based regression module to derive the overall quality score. Experiments on LIVE, TID2008, CSIQ, IVC, and Toyama databases confirm the effectiveness of our introduced NR IQA metric compared to the state-of-the-art.",
"title": ""
},
{
"docid": "bd13f54cd08fe2626fe8de4edce49197",
"text": "Ease of use and usefulness are believed to be fundamental in determining the acceptance and use of various, corporate ITs. These beliefs, however, may not explain the user's behavior toward newly emerging ITs, such as the World-Wide-Web (WWW). In this study, we introduce playfulness as a new factor that re ̄ects the user's intrinsic belief in WWW acceptance. Using it as an intrinsic motivation factor, we extend and empirically validate the Technology Acceptance Model (TAM) for the WWW context. # 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "36537442a340363be73bbdfb319b91eb",
"text": "Future sensor networks will be composed of a large number of densely deployed sensors/actuators. A key feature of such networks is that their nodes are untethered and unattended. Consequently, energy efficiency is an important design consideration for these networks. Motivated by the fac t that sensor network queries may often be geographical, we design and evaluate an energy efficient routing algorithm that propagates a query to the appropriate geographical region, without flooding. The proposed Geographic and Energy Aware Routing (GEAR) algorithm uses energy aware neighbor selection to route a packet towards the target regi on and Recursive Geographic Forwarding or Restricted Flooding algorithm to disseminate the packet inside the destina-",
"title": ""
},
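The GEAR abstract above combines geographic progress toward the target region with residual energy when choosing the next hop. The Python sketch below shows one plausible form of such a neighbour-selection rule; the cost function, the weighting parameter, and the toy neighbour table are invented for illustration and are not the published algorithm.

```python
import math

def next_hop(current, neighbors, target, alpha=0.5):
    """Pick the neighbour minimising a mix of distance-to-target and energy cost.

    `neighbors` maps node id -> (position, residual_energy in (0, 1]).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best, best_cost = None, float("inf")
    for node_id, (pos, energy) in neighbors.items():
        # Normalised geographic progress plus an energy penalty for drained nodes.
        cost = (alpha * dist(pos, target) / max(dist(current, target), 1e-9)
                + (1 - alpha) * (1.0 - energy))
        if cost < best_cost:
            best, best_cost = node_id, cost
    return best

neighbors = {
    "a": ((2.0, 1.0), 0.9),
    "b": ((3.0, 3.0), 0.2),   # closer to the target but nearly drained
    "c": ((1.0, 2.0), 0.8),
}
print(next_hop(current=(0.0, 0.0), neighbors=neighbors, target=(5.0, 5.0)))
```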
{
"docid": "1a0ed30b64fa7f8d39a12acfcadfd763",
"text": "This letter presents a smart shelf configuration for radio frequency identification (RFID) application. The proposed shelf has an embedded leaking microstrip transmission line with extended ground plane. This structure, when connected to an RFID reader, allows detecting tagged objects in close proximity with proper field confinement to avoid undesired reading of neighboring shelves. The working frequency band covers simultaneously the three world assigned RFID subbands at ultrahigh frequency (UHF). The concept is explored by full-wave simulations and it is validated with thorough experimental tests.",
"title": ""
},
{
"docid": "ba8d73938ea51f1b41add8c572c1667b",
"text": "Traditionally, when storage systems employ erasure codes, they are designed to tolerate the failures of entire disks. However, the most common types of failures are latent sector failures, which only affect individual disk sectors, and block failures which arise through wear on SSD’s. This paper introduces SD codes, which are designed to tolerate combinations of disk and sector failures. As such, they consume far less storage resources than traditional erasure codes. We specify the codes with enough detail for the storage practitioner to employ them, discuss their practical properties, and detail an open-source implementation.",
"title": ""
},
{
"docid": "0ca588e42d16733bc8eef4e7957e01ab",
"text": "Three-dimensional (3D) finite element (FE) models are commonly used to analyze the mechanical behavior of the bone under different conditions (i.e., before and after arthroplasty). They can provide detailed information but they are numerically expensive and this limits their use in cases where large or numerous simulations are required. On the other hand, 2D models show less computational cost, but the precision of results depends on the approach used for the simplification. Two main questions arise: Are the 3D results adequately represented by a 2D section of the model? Which approach should be used to build a 2D model that provides reliable results compared to the 3D model? In this paper, we first evaluate if the stem symmetry plane used for generating the 2D models of bone-implant systems adequately represents the results of the full 3D model for stair climbing activity. Then, we explore three different approaches that have been used in the past for creating 2D models: (1) without side-plate (WOSP), (2) with variable thickness side-plate and constant cortical thickness (SPCT), and (3) with variable thickness side-plate and variable cortical thickness (SPVT). From the different approaches investigated, a 2D model including a side-plate best represents the results obtained with the full 3D model with much less computational cost. The side-plate needs to have variable thickness, while the cortical bone thickness can be kept constant.",
"title": ""
}
] |
scidocsrr
|
98c6cf3806fab0c28b4e273947cd36e8
|
IoT Edge Device Based Key Frame Extraction for Face in Video Recognition
|
[
{
"docid": "b76af76207fa3ef07e8f2fbe6436dca0",
"text": "Face recognition applications for airport security and surveillance can benefit from the collaborative coupling of mobile and cloud computing as they become widely available today. This paper discusses our work with the design and implementation of face recognition applications using our mobile-cloudlet-cloud architecture named MOCHA and its initial performance results. The challenge lies with how to perform task partitioning from mobile devices to cloud and distribute compute load among cloud servers (cloudlet) to minimize the response time given diverse communication latencies and server compute powers. Our preliminary simulation results show that optimal task partitioning algorithms significantly affect response time with heterogeneous latencies and compute powers. Motivated by these results, we design, implement, and validate the basic functionalities of MOCHA as a proof-of-concept, and develop algorithms that minimize the overall response time for face recognition. Our experimental results demonstrate that high-powered cloudlets are technically feasible and indeed help reduce overall processing time when face recognition applications run on mobile devices using the cloud as the backend servers.",
"title": ""
}
] |
[
{
"docid": "49c1754d0d36122538e0a1721d1afce6",
"text": "Definition of GCA (TA) . Is a chronic vasculitis of large and medium vessels. . Leads to granulomatous inflammation histologically. . Predominantly affects the cranial branches of arteries arising from the arch of the aorta. . Incidence is reported as 2.2/10 000 patient-years in the UK [1] and between 7 and 29/100 000 in population age >50 years in Europe. . Incidence rates appear higher in northern climates.",
"title": ""
},
{
"docid": "84ced44b9f9a96714929ad78ed3f8732",
"text": "The CUNY-BLENDER team participated in the following tasks in TAC-KBP2010: Regular Entity Linking, Regular Slot Filling and Surprise Slot Filling task (per:disease slot). In the TAC-KBP program, the entity linking task is considered as independent from or a pre-processing step of the slot filling task. Previous efforts on this task mainly focus on utilizing the entity surface information and the sentence/document-level contextual information of the entity. Very little work has attempted using the slot filling results as feedback features to enhance entity linking. In the KBP2010 evaluation, the CUNY-BLENDER entity linking system explored the slot filling attributes that may potentially help disambiguate entity mentions. Evaluation results show that this feedback approach can achieve 9.1% absolute improvement on micro-average accuracy over the baseline using vector space model. For Regular Slot Filling we describe two bottom-up Information Extraction style pipelines and a top-down Question Answering style pipeline. Experiment results have shown that these pipelines are complementary and can be combined in a statistical re-ranking model. In addition, we present several novel approaches to enhance these pipelines, including query expansion, Markov Logic Networks based cross-slot/cross-system reasoning. Finally, as a diagnostic test, we also measured the impact of using external knowledge base and Wikipedia text mining on Slot Filling.",
"title": ""
},
{
"docid": "393ba48bf72e535bdd8a735583fae5ba",
"text": "The PCR is used widely for the study of rRNA genes amplified from mixed microbial populations. These studies resemble quantitative applications of PCR in that the templates are mixtures of homologs and the relative abundance of amplicons is thought to provide some measure of the gene ratios in the starting mixture. Although such studies have established the presence of novel rRNA genes in many natural ecosystems, inferences about gene abundance have been limited by uncertainties about the relative efficiency of gene amplification in the PCR. To address this question, three rRNA gene standards were prepared by PCR, mixed in known proportions, and amplified a second time by using primer pairs in which one primer was labeled with a fluorescent nucleotide derivative. The PCR products were digested with restriction endonucleases, and the frequencies of genes in the products were determined by electrophoresis on an Applied Biosystems 373A automated DNA sequencer in Genescan mode. Mixtures of two templates amplified with the 519F-1406R primer pair yielded products in the predicted proportions. A second primer pair (27F-338R) resulted in strong bias towards 1:1 mixtures of genes in final products, regardless of the initial proportions of the templates. This bias was strongly dependent on the number of cycles of replication. The results fit a kinetic model in which the reannealing of genes progressively inhibits the formation of template-primer hybrids.",
"title": ""
},
{
"docid": "1a44645ee469e4bbaa978216d01f7e0d",
"text": "The growing popularity of mobile search and the advancement in voice recognition technologies have opened the door for web search users to speak their queries, rather than type them. While this kind of voice search is still in its infancy, it is gradually becoming more widespread. In this paper, we examine the logs of a commercial search engine's mobile interface, and compare the spoken queries to the typed-in queries. We place special emphasis on the semantic and syntactic characteristics of the two types of queries. %Our analysis suggests that voice queries focus more on audio-visual content and question answering, and less on social networking and adult domains. We also conduct an empirical evaluation showing that the language of voice queries is closer to natural language than typed queries. Our analysis reveals further differences between voice and text search, which have implications for the design of future voice-enabled search tools.",
"title": ""
},
{
"docid": "9feeeabb8491a06ae130c99086a9d069",
"text": "Dopamine (DA) is a key transmitter in the basal ganglia, yet DA transmission does not conform to several aspects of the classic synaptic doctrine. Axonal DA release occurs through vesicular exocytosis and is action potential- and Ca²⁺-dependent. However, in addition to axonal release, DA neurons in midbrain exhibit somatodendritic release by an incompletely understood, but apparently exocytotic, mechanism. Even in striatum, axonal release sites are controversial, with evidence for DA varicosities that lack postsynaptic specialization, and largely extrasynaptic DA receptors and transporters. Moreover, DA release is often assumed to reflect a global response to a population of activities in midbrain DA neurons, whether tonic or phasic, with precise timing and specificity of action governed by other basal ganglia circuits. This view has been reinforced by anatomical evidence showing dense axonal DA arbors throughout striatum, and a lattice network formed by DA axons and glutamatergic input from cortex and thalamus. Nonetheless, localized DA transients are seen in vivo using voltammetric methods with high spatial and temporal resolution. Mechanistic studies using similar methods in vitro have revealed local regulation of DA release by other transmitters and modulators, as well as by proteins known to be disrupted in Parkinson's disease and other movement disorders. Notably, the actions of most other striatal transmitters on DA release also do not conform to the synaptic doctrine, with the absence of direct synaptic contacts for glutamate, GABA, and acetylcholine (ACh) on striatal DA axons. Overall, the findings reviewed here indicate that DA signaling in the basal ganglia is sculpted by cooperation between the timing and pattern of DA input and those of local regulatory factors.",
"title": ""
},
{
"docid": "98ecd6eeb4e8764b3ecb0ed03105ef38",
"text": "Autonomous navigation is an important feature that allows a mobile robot to independently move from a point to another without an intervention from a human operator. Autonomous navigation within an unknown area requires the robot to explore, localize and map its surroundings. By solving a maze, the pertaining algorithms and behaviour of the robot can be studied and improved upon. This paper describes an implementation of a maze-solving robot designed to solve a maze with turning indicators. The black turning indicators tell the robot which way to turn at the intersections to reach at the centre of the maze. Detection of intersection line and turning indicators in the maze was done by using LDR sensors. Algorithm for straight-line correction was based on PI(D) controller.",
"title": ""
},
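The maze-robot passage above mentions a PI(D) controller for straight-line correction driven by LDR sensor readings. Below is a generic PI correction loop of the kind such a robot might run; the gains, the sensor error scale, and the motor-command mapping are made-up placeholders, since the paper's actual constants are not given.

```python
class PIController:
    """Proportional-integral correction of the line-position error."""
    def __init__(self, kp=0.8, ki=0.05, dt=0.02):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def motor_commands(base_speed, correction):
    # Positive error (line drifting right) speeds the left wheel and slows the right,
    # steering the robot back toward the line.
    return base_speed + correction, base_speed - correction

pi = PIController()
for error in [0.4, 0.3, 0.1, -0.1, 0.0]:   # simulated line-offset readings
    left, right = motor_commands(base_speed=1.0, correction=pi.update(error))
    print(f"error={error:+.1f}  left={left:.2f}  right={right:.2f}")
```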
{
"docid": "0618e88e1319a66cd7f69db491f78aca",
"text": "The rich dependency structure found in the columns of real-world relational databases can be exploited to great advantage, but can also cause query optimizers---which usually assume that columns are statistically independent---to underestimate the selectivities of conjunctive predicates by orders of magnitude. We introduce CORDS, an efficient and scalable tool for automatic discovery of correlations and soft functional dependencies between columns. CORDS searches for column pairs that might have interesting and useful dependency relations by systematically enumerating candidate pairs and simultaneously pruning unpromising candidates using a flexible set of heuristics. A robust chi-squared analysis is applied to a sample of column values in order to identify correlations, and the number of distinct values in the sampled columns is analyzed to detect soft functional dependencies. CORDS can be used as a data mining tool, producing dependency graphs that are of intrinsic interest. We focus primarily on the use of CORDS in query optimization. Specifically, CORDS recommends groups of columns on which to maintain certain simple joint statistics. These \"column-group\" statistics are then used by the optimizer to avoid naive selectivity estimates based on inappropriate independence assumptions. This approach, because of its simplicity and judicious use of sampling, is relatively easy to implement in existing commercial systems, has very low overhead, and scales well to the large numbers of columns and large table sizes found in real-world databases. Experiments with a prototype implementation show that the use of CORDS in query optimization can speed up query execution times by an order of magnitude. CORDS can be used in tandem with query feedback systems such as the LEO learning optimizer, leveraging the infrastructure of such systems to correct bad selectivity estimates and ameliorating the poor performance of feedback systems during slow learning phases.",
"title": ""
},
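CORDS, as summarised above, samples column values and applies a chi-squared analysis to flag correlated column pairs. The snippet below reproduces that core test on a small sampled table; the sample size, the statistic-to-dof flagging ratio, and the toy data are arbitrary choices, and the real system adds candidate pruning and soft functional-dependency checks not shown here.

```python
from itertools import combinations

import numpy as np
import pandas as pd

def chi_squared(table):
    """Pearson chi-squared statistic and degrees of freedom for a contingency table."""
    observed = table.to_numpy(dtype=float)
    expected = (observed.sum(axis=1, keepdims=True)
                @ observed.sum(axis=0, keepdims=True)) / observed.sum()
    stat = ((observed - expected) ** 2 / expected).sum()
    dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
    return stat, dof

def correlated_pairs(df, sample_size=2000, ratio=10.0, seed=0):
    """Flag column pairs whose sampled chi-squared statistic far exceeds its dof
    (under independence the statistic's expectation equals the dof)."""
    sample = df.sample(min(sample_size, len(df)), random_state=seed)
    flagged = []
    for a, b in combinations(sample.columns, 2):
        stat, dof = chi_squared(pd.crosstab(sample[a], sample[b]))
        if stat > ratio * max(dof, 1):
            flagged.append((a, b, round(stat, 1)))
    return flagged

# Toy table: Make determines Model, while Color is independent of both.
rng = np.random.default_rng(0)
make = rng.choice(["Honda", "Ford", "BMW"], size=5000)
model = np.where(make == "Honda", "Civic", np.where(make == "Ford", "F150", "M3"))
color = rng.choice(["red", "blue", "black"], size=5000)
print(correlated_pairs(pd.DataFrame({"Make": make, "Model": model, "Color": color})))
```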
{
"docid": "79cdb154262b6588abec7c374f6a289f",
"text": "We propose a new family of description logics (DLs), called DL-Lite, specifically tailored to capture basic ontology languages, while keeping low complexity of reasoning. Reasoning here means not only computing subsumption between concepts and checking satisfiability of the whole knowledge base, but also answering complex queries (in particular, unions of conjunctive queries) over the instance level (ABox) of the DL knowledge base. We show that, for the DLs of the DL-Lite family, the usual DL reasoning tasks are polynomial in the size of the TBox, and query answering is LogSpace in the size of the ABox (i.e., in data complexity). To the best of our knowledge, this is the first result of polynomial-time data complexity for query answering over DL knowledge bases. Notably our logics allow for a separation between TBox and ABox reasoning during query evaluation: the part of the process requiring TBox reasoning is independent of the ABox, and the part of the process requiring access to the ABox can be carried out by an SQL engine, thus taking advantage of the query optimization strategies provided by current database management systems. Since even slight extensions to the logics of the DL-Lite family make query answering at least NLogSpace in data complexity, thus ruling out the possibility of using on-the-shelf relational technology for query processing, we can conclude that the logics of the DL-Lite family are the maximal DLs supporting efficient query answering over large amounts of instances.",
"title": ""
},
{
"docid": "704f4681b724a0e4c7c10fd129f3378b",
"text": "We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of xed width and minimum height, a classical NP-hard cutting-stock problem. The algorithm nds a packing of n rectangles whose total height is within a factor of (1 +) of optimal (up to an additive term), and has running time polynomial both in n and in 1==. It is based on a reduction to fractional bin-packing. R esum e Nous pr esentons un sch ema totalement polynomial d'approximation pour la mise en boite de rectangles dans une boite de largeur x ee, avec hauteur mi-nimale, qui est un probleme NP-dur classique, de coupes par guillotine. L'al-gorithme donne un placement des rectangles, dont la hauteur est au plus egale a (1 +) (hauteur optimale) et a un temps d'execution polynomial en n et en 1==. Il utilise une reduction au probleme de la mise en boite fractionaire. Abstract We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of xed width and minimum height, a classical N P-hard cutting-stock problem. The algorithm nds a packing of n rectangles whose total height is within a factor of (1 +) of optimal (up to an additive term), and has running time polynomial both in n and in 1==. It is based on a reduction to fractional bin-packing.",
"title": ""
},
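The strip-packing abstract above describes an asymptotic FPTAS, which is involved to reproduce faithfully. As a concrete point of reference for the problem itself, the sketch below implements the much simpler First-Fit Decreasing Height shelf heuristic (pack rectangles into a strip of fixed width, minimising height); it is a classical baseline, not the paper's algorithm, and the function name and example data are invented.

```python
def ffdh(rectangles, strip_width):
    """First-Fit Decreasing Height: pack (width, height) rectangles onto shelves."""
    shelves = []       # each shelf: [y_offset, shelf_height, used_width]
    placements = []    # (width, height, x, y) for each placed rectangle
    y_top = 0.0
    for w, h in sorted(rectangles, key=lambda r: r[1], reverse=True):
        if w > strip_width:
            raise ValueError("rectangle wider than the strip")
        for shelf in shelves:
            if shelf[2] + w <= strip_width:        # first shelf with enough room
                placements.append((w, h, shelf[2], shelf[0]))
                shelf[2] += w
                break
        else:                                       # no shelf fits: open a new one
            shelves.append([y_top, h, w])
            placements.append((w, h, 0.0, y_top))
            y_top += h
    return y_top, placements

height, layout = ffdh([(2, 3), (4, 1), (3, 2), (1, 4), (2, 2)], strip_width=5)
print("strip height used:", height)
```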
{
"docid": "12cac87e781307224db2c3edf0d217b8",
"text": "Fetal ventriculomegaly (VM) refers to the enlargement of the cerebral ventricles in utero. It is associated with the postnatal diagnosis of hydrocephalus. VM is clinically diagnosed on ultrasound and is defined as an atrial diameter greater than 10 mm. Because of the anatomic detailed seen with advanced imaging, VM is often further characterized by fetal magnetic resonance imaging (MRI). Fetal VM is a heterogeneous condition with various etiologies and a wide range of neurodevelopmental outcomes. These outcomes are heavily dependent on the presence or absence of associated anomalies and the direct cause of the ventriculomegaly rather than on the absolute degree of VM. In this review article, we discuss diagnosis, work-up, counseling, and management strategies as they relate to fetal VM. We then describe imaging-based research efforts aimed at using prenatal data to predict postnatal outcome. Finally, we review the early experience with fetal therapy such as in utero shunting, as well as the advances in prenatal diagnosis and fetal surgery that may begin to address the limitations of previous therapeutic efforts.",
"title": ""
},
{
"docid": "2512c057299a86d3e461a15b67377944",
"text": "Compressive sensing (CS) is an alternative to Shan-non/Nyquist sampling for the acquisition of sparse or compressible signals. Instead of taking periodic samples, CS measures inner products with M random vectors, where M is much smaller than the number of Nyquist-rate samples. The implications of CS are promising for many applications and enable the design of new kinds of analog-to-digital converters, imaging systems, and sensor networks. In this paper, we propose and study a wideband compressive radio receiver (WCRR) architecture that can efficiently acquire and track FM and other narrowband signals that live within a wide frequency bandwidth. The receiver operates below the Nyquist rate and has much lower complexity than either a traditional sampling system or CS recovery system. Our methods differ from most standard approaches to the problem of CS recovery in that we do not assume that the signals of interest are confined to a discrete set of frequencies, and we do not rely on traditional recovery methods such as l1-minimization. Instead, we develop a simple detection system that identifies the support of the narrowband FM signals and then applies compressive filtering techniques based on discrete prolate spheroidal sequences to cancel interference and isolate the signals. Lastly, a compressive phase-locked loop (PLL) directly recovers the FM message signals.",
"title": ""
},
{
"docid": "7d25c646a8ce7aa862fba7088b8ea915",
"text": "Neuro-dynamic programming (NDP for short) is a relatively new class of dynamic programming methods for control and sequential decision making under uncertainty. These methods have the potential of dealing with problems that for a long time were thought to be intractable due to either a large state space or the lack of an accurate model. They combine ideas from the fields of neural networks, artificial intelligence, cognitive science, simulation, and approximation theory. We will delineate the major conceptual issues, survey a number of recent developments, describe some computational experience, and address a number of open questions. We consider systems where decisions are made in stages. The outcome of each decision is not fully predictable but can be anticipated to some extent before the next decision is made. Each decision results in some immediate cost but also affects the context in which future decisions are to be made and therefore affects the cost incurred in future stages. Dynamic programming (DP for short) provides a mathematical formalization of the tradeoff between immediate and future costs. Generally, in DP formulations there is a discrete-time dynamic system whose state evolves according to given transition probabilities that depend on a decision/control u. In particular, if we are in state i and we choose decision u, we move to state j with given probability pij(u). Simultaneously with this transition, we incur a cost g(i, u, j). In comparing, however, the available decisions u, it is not enough to look at the magnitude of the cost g(i, u, j); we must also take into account how desirable the next state j is. We thus need a way to rank or rate states j. This is done by using the optimal cost (over all remaining stages) starting from state j, which is denoted by J∗(j). These costs can be shown to",
"title": ""
},
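The neuro-dynamic programming passage above describes the optimal cost-to-go J*(j) that ranks successor states against the immediate cost g(i, u, j). A tiny value-iteration sketch makes that tradeoff concrete; the three-state example, its transition probabilities, and its costs are invented for illustration, and a discount factor is assumed even though the passage does not fix one.

```python
import numpy as np

def value_iteration(P, g, gamma=0.95, tol=1e-8):
    """Compute J*(i) = min_u sum_j p_ij(u) * (g(i, u, j) + gamma * J*(j)).

    P[u][i][j] : transition probability, g[u][i][j] : stage cost.
    """
    n = P.shape[1]
    J = np.zeros(n)
    while True:
        Q = (P * (g + gamma * J)).sum(axis=2)   # expected cost for each (u, i)
        J_new = Q.min(axis=0)
        if np.max(np.abs(J_new - J)) < tol:
            return J_new, Q.argmin(axis=0)
        J = J_new

# Two controls, three states: control 0 is cheap but drifts toward a costly state.
P = np.array([
    [[0.8, 0.2, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]],
    [[0.9, 0.1, 0.0], [0.5, 0.5, 0.0], [0.5, 0.0, 0.5]],
])
g = np.array([
    [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [5.0, 5.0, 5.0]],
    [[2.0, 2.0, 2.0], [2.0, 2.0, 2.0], [2.0, 2.0, 2.0]],
])
J_star, policy = value_iteration(P, g)
print("J*:", J_star.round(2), "policy:", policy)
```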
{
"docid": "e4236031c7d165a48a37171c47de1c38",
"text": "We present a discrete event simulation model reproducing the adoption of Radio Frequency Identification (RFID) technology for the optimal management of common logistics processes of a Fast Moving Consumer Goods (FMCG) warehouse. In this study, simulation is exploited as a powerful tool to replicate both the reengineered RFID logistics processes and the flows of Electronic Product Code (EPC) data generated by such processes. Moreover, a complex tool has been developed to analyze data resulting from the simulation runs, thus addressing the issue of how the flows of EPC data generated by RFID technology can be exploited to provide value-added information for optimally managing the logistics processes. Specifically, an EPCIS compliant Data Warehouse has been designed to act as EPCIS Repository and store EPC data resulting from simulation. Starting from EPC data, properly designed tools, referred to as Business Intelligence Modules, provide value-added information for processes optimization. Due to the newness of RFID adoption in the logistics context and to the lack of real case examples that can be examined, we believe that both the model and the data management system developed can be very useful to understand the practical implications of the technology and related information flow, as well as to show how to leverage EPC data for process management. Results of the study can provide a proof-of-concept to substantiate the adoption of RFID technology in the FMCG industry.",
"title": ""
},
{
"docid": "343f45efbdbf654c421b99927c076c5d",
"text": "As software engineering educators, it is important for us to realize the increasing domain-specificity of software, and incorporate these changes in our design of teaching material. Bioinformatics software is an example of immensely complex and critical scientific software and this domain provides an excellent illustration of the role of computing in the life sciences. To study bioinformatics from a software engineering standpoint, we conducted an exploratory survey of bioinformatics developers. The survey had a range of questions about people, processes and products. We learned that practices like extreme programming, requirements engineering and documentation. As software engineering educators, we realized that the survey results had important implications for the education of bioinformatics professionals. We also investigated the current status of software engineering education in bioinformatics, by examining the curricula of more than fifty bioinformatics programs and the contents of over fifteen textbooks. We observed that there was no mention of the role and importance of software engineering practices essential for creating dependable software systems. Based on our findings and existing literature we present a set of recommendations for improving software engineering education in bioinformatics.",
"title": ""
},
{
"docid": "6be88914654c736c8e1575aeb37532a3",
"text": "Coding EMRs with diagnosis and procedure codes is an indispensable task for billing, secondary data analyses, and monitoring health trends. Both speed and accuracy of coding are critical. While coding errors could lead to more patient-side financial burden and mis-interpretation of a patient's well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility. In this paper, we present a new neural network architecture that combines ideas from few-shot learning matching networks, multi-label loss functions, and convolutional neural networks for text classification to significantly outperform other state-of-the-art models. Our evaluations are conducted using a well known deidentified EMR dataset (MIMIC) with a variety of multi-label performance measures.",
"title": ""
},
{
"docid": "1dbdd4a6d39fe973b5c6f860ec9873a2",
"text": "Meaningful facial parts can convey key cues for both facial action unit detection and expression prediction. Textured 3D face scan can provide both detailed 3D geometric shape and 2D texture appearance cues of the face which are beneficial for Facial Expression Recognition (FER). However, accurate facial parts extraction as well as their fusion are challenging tasks. In this paper, a novel system for 3D FER is designed based on accurate facial parts extraction and deep feature fusion of facial parts. Experiments are conducted on the BU-3DFE database, demonstrating the effectiveness of combing different facial parts, texture and depth cues and reporting the state-of-the-art results in comparison with all existing methods under the same setting.",
"title": ""
},
{
"docid": "b4f19048d26c0620793da5f5422a865f",
"text": "Interest in supply chain management has steadily increased since the 1980s when firms saw the benefits of collaborative relationships within and beyond their own organization. Firms are finding that they can no longer compete effectively in isolation of their suppliers or other entities in the supply chain. A number of definitions of supply chain management have been proposed in the literature and in practice. This paper defines the concept of supply chain management and discusses its historical evolution. The term does not replace supplier partnerships, nor is it a description of the logistics function. The competitive importance of linking a firm’s supply chain strategy to its overall business strategy and some practical guidelines are offered for successful supply chain management. Introduction to supply chain concepts Firms can no longer effectively compete in isolation of their suppliers and other entities in the supply chain. Interest in the concept of supply chain management has steadily increased since the 1980s when companies saw the benefits of collaborative relationships within and beyond their own organization. A number of definitions have been proposed concerning the concept of “the supply chain” and its management. This paper defines the concept of the supply chain and discusses the evolution of supply chain management. The term does not replace supplier partnerships, nor is it a description of the logistics function. Industry groups are now working together to improve the integrative processes of supply chain management and accelerate the benefits available through successful implementation. The competitive importance of linking a firm’s supply chain strategy to its overall business strategy and some practical guidelines are offered for successful supply chain management. Definition of supply chain Various definitions of a supply chain have been offered in the past several years as the concept has gained popularity. The APICS Dictionary describes the supply chain as: 1 the processes from the initial raw materials to the ultimate consumption of the finished product linking across supplieruser companies; and 2 the functions within and outside a company that enable the value chain to make products and provide services to the customer (Cox et al., 1995). Another source defines supply chain as, the network of entities through which material flows. Those entities may include suppliers, carriers, manufacturing sites, distribution centers, retailers, and customers (Lummus and Alber, 1997). The Supply Chain Council (1997) uses the definition: “The supply chain – a term increasingly used by logistics professionals – encompasses every effort involved in producing and delivering a final product, from the supplier’s supplier to the customer’s customer. Four basic processes – plan, source, make, deliver – broadly define these efforts, which include managing supply and demand, sourcing raw materials and parts, manufacturing and assembly, warehousing and inventory tracking, order entry and order management, distribution across all channels, and delivery to the customer.” Quinn (1997) defines the supply chain as “all of those activities associated with moving goods from the raw-materials stage through to the end user. This includes sourcing and procurement, production scheduling, order processing, inventory management, transportation, warehousing, and customer service. 
Importantly, it also embodies the information systems so necessary to monitor all of those activities.” In addition to defining the supply chain, several authors have further defined the concept of supply chain management. As defined by Ellram and Cooper (1993), supply chain management is “an integrating philosophy to manage the total flow of a distribution channel from supplier to ultimate customer”. Monczka and Morgan (1997) state that “integrated supply chain management is about going from the external customer and then managing all the processes that are needed to provide the customer with value in a horizontal way”. They believe that supply chains, not firms, compete and that those who will be the strongest competitors are those that “can provide management and leadership to the fully integrated supply chain including external customer as well as prime suppliers, their suppliers, and their suppliers’ suppliers”. From these definitions, a summary definition of the supply chain can be stated as: all the activities involved in delivering a product from raw material through to the customer including sourcing raw materials and parts, manufacturing and assembly, warehousing and inventory tracking, order entry and order management, distribution across all channels, delivery to the customer, and the information systems necessary to monitor all of these activities. Supply chain management coordinates and integrates all of these activities into a seamless process. It links all of the partners in the chain including departments",
"title": ""
},
{
"docid": "b8087b15edb4be5771aef83b1b18f723",
"text": "The success of visual telecommunication systems depends on their ability to transmit and display users' natural nonverbal behavior. While video-mediated communication (VMC) is the most widely used form of interpersonal remote interaction, avatar-mediated communication (AMC) in shared virtual environments is increasingly common. This paper presents two experiments investigating eye tracking in AMC. The first experiment compares the degree of social presence experienced in AMC and VMC during truthful and deceptive discourse. Eye tracking data (gaze, blinking, and pupil size) demonstrates that oculesic behavior is similar in both mediation types, and uncovers systematic differences between truth telling and lying. Subjective measures show users' psychological arousal to be greater in VMC than AMC. The second experiment demonstrates that observers of AMC can more accurately detect truth and deception when viewing avatars with added oculesic behavior driven by eye tracking. We discuss implications for the design of future visual telecommunication media interfaces.",
"title": ""
},
{
"docid": "d13e3aa8d5dbb412390354fc2a0d1bda",
"text": "Over the past few years, mobile marketing has generated an increasing interest among academics and practitioners. While numerous studies have provided important insights into the mobile marketing, our understanding of this topic of growing interest and importance remains deficient. Therefore, the objective of this article is to provide a comprehensive framework intended to guide research efforts focusing on mobile media as well as to aid practitioners in their quest to achieve mobile marketing success. The framework builds on the literature from mobile commerce and integrated marketing communications (IMC) and provides a broad delineation as to how mobile marketing should be integrated into the firm’s overall marketing communications strategy. It also outlines the mobile marketing from marketing communications mix (also called promotion mix) perspective and provides a comprehensive overview of divergent mobile marketing activities. The article concludes with a detailed description of mobile marketing campaign planning and implementation.",
"title": ""
}
] |
scidocsrr
|
b969080b20d88fa20e58a41e5e3ebade
|
Immersion of rohu fingerlings in clove oil reduced handling and confinement stress and mortality
|
[
{
"docid": "3291f56f3052fe50a3064ad25f47f08a",
"text": "Tricaine methane-sulfonate (MS-222) application in fish anaesthesia By N. Topic Popovic, I. Strunjak-Perovic, R. Coz-Rakovac, J. Barisic, M. Jadan, A. Persin Berakovic and R. Sauerborn Klobucar Laboratory of Ichthyopathology – Biological Materials, Division for Materials Chemistry, Rudjer Boskovic Institute, Zagreb, Croatia; Department of Anaesthesiology, University Hospital Clinic, Zagreb, Croatia",
"title": ""
}
] |
[
{
"docid": "ea5dfaeaa63f4a0586955a6d60bf7a8a",
"text": "Prior knowledge can be used to improve predictive performance of learning algorithms or reduce the amount of data required for training. The same goal is pursued within the learning using privileged information paradigm which was recently introduced by Vapnik et al. and is aimed at utilizing additional information available only at training time-a framework implemented by SVM+. We relate the privileged information to importance weighting and show that the prior knowledge expressible with privileged features can also be encoded by weights associated with every training example. We show that a weighted SVM can always replicate an SVM+ solution, while the converse is not true and we construct a counterexample highlighting the limitations of SVM+. Finally, we touch on the problem of choosing weights for weighted SVMs when privileged features are not available.",
"title": ""
},
{
"docid": "f8bec2d19a98f9e56c1e46adddef5726",
"text": "The type of rationality we assume in economics--perfect, logical, deductive rationality--is extremely useful in generating solutions to theoretical problems. But it demands much of human behavior--much more in fact than it can usually deliver. If we were to imagine the vast collection of decision problems economic agents might conceivably deal with as a sea or an ocean, with the easier problems on top and more complicated ones at increasing depth, then deductive rationality would describe human behavior accurately only within a few feet of the surface. For example, the game Tic-Tac-Toe is simple, and we can readily find a perfectly rational, minimax solution to it. But we do not find rational \"solutions\" at the depth of Checkers; and certainly not at the still modest depths of Chess and Go.",
"title": ""
},
{
"docid": "3898b7f3d55e96781c4c1dd3d72f1045",
"text": "In addition to trait EI, Cherniss identifies three other EI models whose main limitations must be succinctly mentioned, not least because they provided the impetus for the development of the trait EI model. Bar-On’s (1997) model is predicated on the problematic assumption that emotional intelligence (or ‘‘ability’’ or ‘‘competence’’ or ‘‘skill’’ or ‘‘potential’’—terms that appear to be used interchangeably in his writings) can be validly assessed through self-report questions of the type ‘‘It is easy for me to understand my emotions.’’ Psychometrically, as pointed out in Petrides and Furnham (2001), this is not a viable position because such self-report questions can only be tapping into self-perceptions rather than into abilities or competencies. This poses a fundamental threat to the validity of this model, far more serious than the pervasive faking problem noted by several authors (e.g., Grubb & McDaniel, 2008). Goleman’s (1995) model is difficult to evaluate scientifically because of its reliance on",
"title": ""
},
{
"docid": "2e1a6dfb1208bc09a227c7e16ffc7b4f",
"text": "Cannabis sativa L. (Cannabaceae) is an important medicinal plant well known for its pharmacologic and therapeutic potency. Because of allogamous nature of this species, it is difficult to maintain its potency and efficacy if grown from the seeds. Therefore, chemical profile-based screening, selection of high yielding elite clones and their propagation using biotechnological tools is the most suitable way to maintain their genetic lines. In this regard, we report a simple and efficient method for the in vitro propagation of a screened and selected high yielding drug type variety of Cannabis sativa, MX-1 using synthetic seed technology. Axillary buds of Cannabis sativa isolated from aseptic multiple shoot cultures were successfully encapsulated in calcium alginate beads. The best gel complexation was achieved using 5 % sodium alginate with 50 mM CaCl2.2H2O. Regrowth and conversion after encapsulation was evaluated both under in vitro and in vivo conditions on different planting substrates. The addition of antimicrobial substance — Plant Preservative Mixture (PPM) had a positive effect on overall plantlet development. Encapsulated explants exhibited the best regrowth and conversion frequency on Murashige and Skoog medium supplemented with thidiazuron (TDZ 0.5 μM) and PPM (0.075 %) under in vitro conditions. Under in vivo conditions, 100 % conversion of encapsulated explants was obtained on 1:1 potting mix- fertilome with coco natural growth medium, moistened with full strength MS medium without TDZ, supplemented with 3 % sucrose and 0.5 % PPM. Plantlets regenerated from the encapsulated explants were hardened off and successfully transferred to the soil. These plants are selected to be used in mass cultivation for the production of biomass as a starting material for the isolation of THC as a bulk active pharmaceutical.",
"title": ""
},
{
"docid": "bec41dd9e724598c8ab47fa1840cad61",
"text": "Described here is a case of suicide with the use of a chainsaw. A female suffering from schizophrenia committed suicide by an ingenious use of a chainsaw that resulted in the transection of her cervical spine and spinal cord. The findings of the resulting investigation are described and the mechanism of suicides with the use of a chainsaw is reviewed. A dry bone study was realized to determine the bone sections, the correlation between anatomic lesions and characteristics of chainsaw. The damage of organs and soft tissues is compared according to the kinds of chainsaw used.",
"title": ""
},
{
"docid": "a2b052b1ad2fcebe9ee45a0808101e79",
"text": "Mobile context-aware applications experience a constantly changing environment with increased dynamicity. In order to work efficiently, the location of mobile users needs to be predicted and properly exploited by mobile applications. We propose a spatial context model, which deals with the location prediction of mobile users. Such model is used for the classification of the users' trajectories through Machine Learning (ML) algorithms. Predicting spatial context is treated through supervised learning. We evaluate our model in terms of prediction accuracy w.r.t. specific prediction parameters. The proposed model is also compared with other ML algorithms for location prediction. Our findings are very promising for the efficient operation of mobile context-aware applications.",
"title": ""
},
{
"docid": "ed9b027bafedfa9305d11dca49ecc930",
"text": "This paper announces and discusses the experimental results from the Noisy Iris Challenge Evaluation (NICE), an iris biometric evaluation initiative that received worldwide participation and whose main innovation is the use of heavily degraded data acquired in the visible wavelength and uncontrolled setups, with subjects moving and at widely varying distances. The NICE contest included two separate phases: 1) the NICE.I evaluated iris segmentation and noise detection techniques and 2) the NICE:II evaluated encoding and matching strategies for biometric signatures. Further, we give the performance values observed when fusing recognition methods at the score level, which was observed to outperform any isolated recognition strategy. These results provide an objective estimate of the potential of such recognition systems and should be regarded as reference values for further improvements of this technology, which-if successful-may significantly broaden the applicability of iris biometric systems to domains where the subjects cannot be expected to cooperate.",
"title": ""
},
{
"docid": "dea571dbebe1392fb7e7ae8cbb260a67",
"text": "This paper presents an untyped lambda calculus, extended with object primitives that reflect the capabilities of so-called delegation-based object-oriented languages. A type inference system allows static detection of errors, such as message not understood, while at the same time allowing the type of an inherited method to be specialized to the type of the inheriting object. Type soundness is proved using operational semantics and examples illustrating the expressiveness of the pure calculus are presented. CR Classification: F.3.1, D.3.3, F.4.1",
"title": ""
},
{
"docid": "cc10051c413cfb6f87d0759100bc5182",
"text": "Social Media Hate Speech has continued to grow both locally and globally due to the increase of Online Social Media web forums like Facebook, Twitter and blogging. This has been propelled even further by smartphones and mobile data penetration locally. Global and Local terrorism has posed a vital question for technologists to investigate, prosecute, predict and prevent Social Media Hate Speech. This study provides a social media digital forensics tool through the design, development and implementation of a software application. The study will develop an application using Linux Apache MySQL PHP and Python. The application will use Scrapy Python page ranking algorithm to perform web crawling and the data will be placed in a MySQL database for data mining. The application used Agile Software development methodology with twenty websites being the subject of interest. The websites will be the sample size to demonstrate how the application",
"title": ""
},
{
"docid": "38a4f83778adea564e450146060ef037",
"text": "The last few years have seen a surge in the number of accurate, fast, publicly available dependency parsers. At the same time, the use of dependency parsing in NLP applications has increased. It can be difficult for a non-expert to select a good “off-the-shelf” parser. We present a comparative analysis of ten leading statistical dependency parsers on a multi-genre corpus of English. For our analysis, we developed a new web-based tool that gives a convenient way of comparing dependency parser outputs. Our analysis will help practitioners choose a parser to optimize their desired speed/accuracy tradeoff, and our tool will help practitioners examine and compare parser output.",
"title": ""
},
{
"docid": "5582e24516fb50f616698921714b7600",
"text": "A well-known attack on RSA with low secret-exponent d was given by Wiener in 1990. Wiener showed that using the equation ed − (p − 1)(q − 1)k = 1 and continued fractions, one can efficiently recover the secret-exponent d and factor N = pq from the public key (N, e) as long as d < 1 3 N 1 4 . In this paper, we present a generalization of Wiener’s attack. We show that every public exponent e that satisfies eX − (p− u)(q − v)Y = 1 with 1 ≤ Y < X < 2 1 4 N 1 4 , |u| < N 1 4 , v = [ − qu p− u ] , and all prime factors of p − u or q − v are less than 10 yields the factorization of N = pq. We show that the number of these exponents is at least N 1 2−ε.",
"title": ""
},
{
"docid": "578e069a88a6f885d5b5fcbfb9d1d658",
"text": "While a photograph is a visual artifact, studies reveal that a number of people with visual impairments are also interested in being able to share their memories and experiences with their sighted counterparts in the form of a photograph. We conducted an online survey to better understand the challenges faced by people with visual impairments in sharing and organizing photos, and reviewed existing tools and their limitations. Based on our analysis, we developed an accessible mobile application that enables a visually impaired user to capture photos along with audio recordings for the ambient sound and memo description and to browse through them eyes-free. Five visually impaired participants took part in a study in which they used our app to take photographs in naturalistic settings and to share them later with a sighted viewer. The participants were able to use our app to identify each photograph on their own during the photo sharing session, and reported high satisfaction in having been able to take the initiative during the process.",
"title": ""
},
{
"docid": "0a143c2d4af3cc726964a90927556399",
"text": "Humans prefer to interact with each other using speech. Since this is the most natural mode of communication, the humans also want to interact with machines using speech only. So, automatic speech recognition has gained a lot of popularity. Different approaches for speech recognition exists like Hidden Markov Model (HMM), Dynamic Time Warping (DTW), Vector Quantization (VQ), etc. This paper uses Neural Network (NN) along with Mel Frequency Cepstrum Coefficients (MFCC) for speech recognition. Mel Frequency Cepstrum Coefiicients (MFCC) has been used for the feature extraction of speech. This gives the feature of the waveform. For pattern matching FeedForward Neural Network with Back propagation algorithm has been applied. The paper analyzes the various training algorithms present for training the Neural Network and uses train scg for the experiment. The work has been done on MATLAB and experimental results show that system is able to recognize words at sufficiently high accuracy.",
"title": ""
},
{
"docid": "cb66338562dd06203ad7293403f1147f",
"text": "This paper explores the usage of Facebook and YouTube among Malaysian students and the possibility of internet addiction in order to determine the effect of using social media in their social and academic lives. Data was collected from 667 Facebook users and 1056 YouTube users. Examining Young's [1]Internet addiction scale among the students revealed that 18% of Facebook users and 22% of YouTube users are addicted, and they spend more than two hours on Facebook and YouTube per day. They use Facebook for information, maintain relationships, academic learning, product inquiry, and meeting people, while YouTube is used for entertainment, information, academic learning, and product inquiry. These results create awareness for instructors and academic institution using YouTube videos and Facebook as complementary tools for teaching. They should be aware of the potential for compulsive and addicted users to be distracted from prescribed videos to unrelated materials.",
"title": ""
},
{
"docid": "0ba1f5e5828dfffa5dcb54b5f311453a",
"text": "BACKGROUND\nThe potential benefits of earthworm (Pheretima aspergillum) for healing have received considerable attention recently. Osteoblast and osteoclast activities are very important in bone remodeling, which is crucial to repair bone injuries. This study investigated the effects of earthworm extract on bone cell activities.\n\n\nMETHODS\nOsteoblast-like MG-63 cells and RAW 264.7 macrophage cells were used for identifying the cellular effects of different concentrations of earthworm extract on osteoblasts and osteoclasts, respectively. The optimal concentration of earthworm extract was determined by mitochondrial colorimetric assay, alkaline phosphatase activity, matrix calcium deposition, Western blotting and tartrate-resistant acid phosphatase activity.\n\n\nRESULTS\nEarthworm extract had a dose-dependent effect on bone cell activities. The most effective concentration of earthworm extract was 3 mg/ml, significantly increasing osteoblast proliferation and differentiation, matrix calcium deposition and the expression levels of alkaline phosphatase, osteopontin and osteocalcin. Conversely, 3 mg/ml earthworm extract significantly reduced the tartrate-resistant acid phosphatase activity of osteoclasts without altering cell viability.\n\n\nCONCLUSIONS\nEarthworm extract has beneficial effects on bone cell cultures, indicating that earthworm extract is a potential agent for use in bone regeneration.",
"title": ""
},
{
"docid": "169b3771d8fb2b60b6979260fa9ab8e1",
"text": "In this paper, we propose a novel data-driven schema for largescale heterogeneous knowledge graphs inspired by Formal Concept Analysis (FCA). We first extract the sets of properties associated with individual entities; these property sets (aka. characteristic sets) are annotatedwith cardinalities and used to induce a lattice based on set-containment relations, forming a natural hierarchical structure describing the knowledge graph. We then propose an algebra over such schema lattices, which allows to compute diffs between lattices (for example, to summarise the changes from one version of a knowledge graph to another), to add diffs to lattices (for example, to project future changes), and so forth.While we argue that this lattice structure (and associated algebra) may have various applications, we currently focus on the use-case of modelling and predicting the dynamic behaviour of knowledge graphs. Along those lines, we instantiate and evaluate our methods for analysing how versions of the Wikidata knowledge graph have changed over a period of 11 weeks. We propose algorithms for constructing the lattice-based schema from Wikidata, and evaluate their efficiency and scalability. We then evaluate use of the resulting schema(ta) for predicting how the knowledge graph will evolve in future versions.",
"title": ""
},
{
"docid": "0d603c72cd82beba29097ab9b9097c5f",
"text": "This paper presents a new sketch modeling system that is able to generate complex objects drawn from a unique viewpoint. The user draws the model in an iterative manner, adding simple parts to the existing object until completion. Each part is constructed from two construction lines (lines on the surface of the object that are planar and perpendicular to each other) whose orientation in the 3D space is uniquely determined by the system, and an optional silhouette. The system is then able to produce rough 3D reconstructions of drawings very easily by tracing over a sketch for example. Such models are perfectly suited to investigate their shade or shadow and they can be used as substitutes for more detailed models when the need for quick models is present. The user can also explore shapes directly on the system, refining the shape on the go in a oversketching way. The creation of models is very efficient, as the user models the shapes directly in the correct pose and orientation. Results show that the system is able to create complex objects without ever having to change the viewpoint.",
"title": ""
},
{
"docid": "7e5155e1dc02a2235c986fdcfe59e5fe",
"text": "Connected component labeling is an important but computationally expensive operation required in many fields of research. The goal in the present work is to label connected components on a 2D binary map. Two different iterative algorithms for doing this task are presented. The first algorithm (Row–Col Unify) is based upon the directional propagation labeling, whereas the second algorithm uses the Label Equivalence technique. The Row–Col Unify algorithm uses a local array of references and the reduction technique intrinsically. The usage of shared memory extensively makes the code efficient. The Label Equivalence algorithm is an extended version of the one presented by Hawick et al. (2010) [3]. At the end the comparison depending on the performances of both of the algorithms is presented. © 2010 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "810dd7b98f55ac6ccd4040f1e6c8f10d",
"text": "This report describes simple mechanisms that allow autonomous software agents to en gage in bargaining behaviors in market based environments Groups of agents with such mechanisms could be used in applications including market based control internet com merce and economic modelling After an introductory discussion of the rationale for this work and a brief overview of key concepts from economics work in market based control is reviewed to highlight the need for bargaining agents Following this the early experimental economics work of Smith and the recent results of Gode and Sunder are de scribed Gode and Sunder s work using zero intelligence zi traders that act randomly within a structured market appears to imply that convergence to the theoretical equilib rium price is determined more by market structure than by the intelligence of the traders in that market if this is true developing mechanisms for bargaining agents is of very limited relevance However it is demonstrated here that the average transaction prices of zi traders can vary signi cantly from the theoretical equilibrium level when supply and demand are asymmetric and that the degree of di erence from equilibrium is predictable from a pri ori statistical analysis In this sense it is shown here that Gode and Sunder s results are artefacts of their experimental regime Following this zero intelligence plus zip traders are introduced like zi traders these simple agents make stochastic bids Unlike zi traders they employ an elementary form of machine learning Groups of zip traders interacting in experimental markets similar to those used by Smith and Gode and Sunder are demonstrated and it is shown that the performance of zip traders is signi cantly closer to the human data than is the performance of Gode and Sunder s zi traders This document reports on work done during February to September while the author held a Visiting Academic post at Hewlett Packard Laboratories Bristol Filton Road Bristol BS QZ U K",
"title": ""
},
{
"docid": "6e8466bd7b87c69c451e9312f1f05d15",
"text": "Novel physical phenomena can emerge in low-dimensional nanomaterials. Bulk MoS(2), a prototypical metal dichalcogenide, is an indirect bandgap semiconductor with negligible photoluminescence. When the MoS(2) crystal is thinned to monolayer, however, a strong photoluminescence emerges, indicating an indirect to direct bandgap transition in this d-electron system. This observation shows that quantum confinement in layered d-electron materials like MoS(2) provides new opportunities for engineering the electronic structure of matter at the nanoscale.",
"title": ""
}
] |
scidocsrr
|
0fd83d74ab36ececf73c967044b74754
|
Convolutional Neural Networks for Crop Yield Prediction using Satellite Images
|
[
{
"docid": "082630a33c0cc0de0e60a549fc57d8e8",
"text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.",
"title": ""
}
] |
[
{
"docid": "efb52e33aee3e3cbf33d04cda77f4d7d",
"text": "With the growing amount of information and availability of opinion-rich resources, it is sometimes difficult for a common man to analyse what others think of. To analyse this information and to see what people in general think or feel of a product or a service is the problem of Sentiment Analysis. Sentiment analysis or Sentiment polarity labelling is an emerging field, so this needs to be accurate. In this paper, we explore various Machine Learning techniques for the classification of Telugu sentences into positive or negative polarities.",
"title": ""
},
{
"docid": "8b5f2d45852cf5c8e1edb6146d37abb7",
"text": "Portable, embedded systems place ever-increasing demands on high-performance, low-power microprocessor design. Dynamic voltage and frequency scaling (DVFS) is a well-known technique to reduce energy in digital systems, but the effectiveness of DVFS is hampered by slow voltage transitions that occur on the order of tens of microseconds. In addition, the recent trend towards chip-multiprocessors (CMP) executing multi-threaded workloads with heterogeneous behavior motivates the need for per-core DVFS control mechanisms. Voltage regulators that are integrated onto the same chip as the microprocessor core provide the benefit of both nanosecond-scale voltage switching and per-core voltage control. We show that these characteristics provide significant energy-saving opportunities compared to traditional off-chip regulators. However, the implementation of on-chip regulators presents many challenges including regulator efficiency and output voltage transient characteristics, which are significantly impacted by the system-level application of the regulator. In this paper, we describe and model these costs, and perform a comprehensive analysis of a CMP system with on-chip integrated regulators. We conclude that on-chip regulators can significantly improve DVFS effectiveness and lead to overall system energy savings in a CMP, but architects must carefully account for overheads and costs when designing next-generation DVFS systems and algorithms.",
"title": ""
},
{
"docid": "ccaa01441d7de9009dea10951a3ea2f3",
"text": "for Natural Language A First Course in Computational Semanti s Volume II Working with Dis ourse Representation Stru tures Patri k Bla kburn & Johan Bos September 3, 1999",
"title": ""
},
{
"docid": "4802e7ed9d911ccbe92b55f04998f3f1",
"text": "Sixteen incidents involving dog bites fitting the description \"severe\" were identified among 5,711 dog bite incidents reported to health departments in five South Carolina counties (population 750,912 in 1980) between July 1, 1979, and June 30, 1982. A \"severe\" attack was defined as one in which the dog \"repeatedly bit or vigorously shook its victim, and the victim or the person intervening had extreme difficulty terminating the attack.\" Information from health department records was clarified by interviews with animal control officers, health and police officials, and persons with firsthand knowledge of the events. Investigation disclosed that the dogs involved in the 16 severe attacks were reproductively intact males. The median age of the dogs was 3 years. A majority of the attacks were by American Staffordshire terriers, St. Bernards, and cocker spaniels. Ten of the dogs had been aggressive toward people or other dogs before the incident that was investigated. Ten of the 16 victims of severe attacks were 10 years of age or younger; the median age of all 16 victims was 8 years. Twelve of the victims either were members of the family that owned the attacking dog or had had contact with the dog before the attack. Eleven of the victims were bitten on the head, neck, or shoulders. In 88 percent of the cases, the attacks took place in the owner's yard or home, or in the adjoining yard. In 10 of the 16 incidents, members of the victims' families witnessed the attacks. The characteristics of these attacks, only one of which proved fatal, were similar in many respects to those that have been reported for other dog bite incidents that resulted in fatalities. On the basis of this study, the author estimates that a risk of 2 fatalities per 1,000 reported dog bites may exist nationwide. Suggestions made for the prevention of severe attacks focus on changing the behavior of both potential canine attackers and potential victims.",
"title": ""
},
{
"docid": "d3b6fcc353382c947cfb0b4a73eda0ef",
"text": "Robust object tracking is a challenging task in computer vision. To better solve the partial occlusion issue, part-based methods are widely used in visual object trackers. However, due to the complicated online training and updating process, most of these part-based trackers cannot run in real-time. Correlation filters have been used in tracking tasks recently because of the high efficiency. However, the conventional correlation filter based trackers cannot deal with occlusion. Furthermore, most correlation filter based trackers fix the scale and rotation of the target which makes the trackers unreliable in long-term tracking tasks. In this paper, we propose a novel tracking method which track objects based on parts with multiple correlation filters. Our method can run in real-time. Additionally, the Bayesian inference framework and a structural constraint mask are adopted to enable our tracker to be robust to various appearance changes. Extensive experiments have been done to prove the effectiveness of our method.",
"title": ""
},
{
"docid": "799b39e8c8d8bd86b8eae0d74a8b5ee4",
"text": "The photovoltaic (PV) string under partially shaded conditions exhibits complex output characteristics, i.e., the current–voltage <inline-formula> <tex-math notation=\"LaTeX\">$(I\\mbox{--}V)$</tex-math></inline-formula> curve presents multiple current stairs, whereas the power–voltage <inline-formula> <tex-math notation=\"LaTeX\">$(P\\mbox{--}V)$</tex-math></inline-formula> curve shows multiple power peaks. Thus, the conventional maximum power point tracking (MPPT) method is not acceptable either on tracking accuracy or on tracking speed. In this paper, two global MPPT methods, namely, the search–skip–judge global MPPT (SSJ-GMPPT) and rapid global MPPT (R-GMPPT) methods are proposed in terms of reducing the searching voltage range based on comprehensive study of <inline-formula> <tex-math notation=\"LaTeX\">$I\\mbox{--}V$</tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$P\\mbox{--}V$</tex-math></inline-formula> characteristics of PV string. The SSJ-GMPPT method can track the real maximum power point under any shading conditions and achieve high accuracy and fast tracking speed without additional circuits and sensors. The R-GMPPT method aims to enhance the tracking speed of long string with vast PV modules and reduces more than 90% of the tracking time that is consumed by the conventional global searching method. The improved performance of the two proposed methods has been validated by experimental results on a PV string. The comparison with other methods highlights the two proposed methods more powerful.",
"title": ""
},
{
"docid": "3bc48489d80e824efb7e3512eafc6f30",
"text": "GPS-equipped taxis can be regarded as mobile sensors probing traffic flows on road surfaces, and taxi drivers are usually experienced in finding the fastest (quickest) route to a destination based on their knowledge. In this paper, we mine smart driving directions from the historical GPS trajectories of a large number of taxis, and provide a user with the practically fastest route to a given destination at a given departure time. In our approach, we propose a time-dependent landmark graph, where a node (landmark) is a road segment frequently traversed by taxis, to model the intelligence of taxi drivers and the properties of dynamic road networks. Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. Based on this graph, we design a two-stage routing algorithm to compute the practically fastest route. We build our system based on a real-world trajectory dataset generated by over 33,000 taxis in a period of 3 months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70% of the routes suggested by our method are faster than the competing methods, and 20% of the routes share the same results. On average, 50% of our routes are at least 20% faster than the competing approaches.",
"title": ""
},
{
"docid": "7699f4fa25a47fca0de320b8bbe6ff00",
"text": "Homeland Security (HS) is a growing field of study in the U.S. today, generally covering risk management, terrorism studies, policy development, and other topics related to the broad field. Information security threats to both the public and private sectors are growing in intensity, frequency, and severity, and are a very real threat to the security of the nation. While there are many models for information security education at all levels of higher education, these programs are invariably offered as a technical course of study, these curricula are generally not well suited to HS students. As a result, information systems and cyber security principles are under represented in the typical HS program. The authors propose a course of study in cyber security designed to capitalize on the intellectual strengths of students in this discipline and that are consistent with the broad suite of professional needs in this discipline.",
"title": ""
},
{
"docid": "7143493c6a2abe3da9eb4c98da31c620",
"text": "We study probability measures induced by set functions with constraints. Such measures arise in a variety of real-world settings, where prior knowledge, resource limitations, or other pragmatic considerations impose constraints. We consider the task of rapidly sampling from such constrained measures, and develop fast Markov chain samplers for them. Our first main result is for MCMC sampling from Strongly Rayleigh (SR) measures, for which we present sharp polynomial bounds on the mixing time. As a corollary, this result yields a fast mixing sampler for Determinantal Point Processes (DPPs), yielding (to our knowledge) the first provably fast MCMC sampler for DPPs since their inception over four decades ago. Beyond SR measures, we develop MCMC samplers for probabilistic models with hard constraints and identify sufficient conditions under which their chains mix rapidly. We illustrate our claims by empirically verifying the dependence of mixing times on the key factors governing our theoretical bounds.",
"title": ""
},
{
"docid": "676f5528ea9fdc0337dcdac3a6a56383",
"text": "Online Social Networks (OSNs) are becoming a popular method of meeting people and keeping in touch with friends. OSNs resort to trust evaluation models and algorithms to improve service quality and enhance user experiences. Much research has been done to evaluate trust and predict the trustworthiness of a target, usually from the view of a source. Graph-based approaches make up a major portion of the existing works, in which the trust value is calculated through a trusted graph (or trusted network, web of trust, or multiple trust chains). In this article, we focus on graph-based trust evaluation models in OSNs, particularly in the computer science literature. We first summarize the features of OSNs and the properties of trust. Then we comparatively review two categories of graph-simplification-based and graph-analogy-based approaches and discuss their individual problems and challenges. We also analyze the common challenges of all graph-based models. To provide an integrated view of trust evaluation, we conduct a brief review of its pre- and postprocesses (i.e., the preparation and validation of trust models, including information collection, performance evaluation, and related applications). Finally, we identify some open challenges that all trust models are facing.",
"title": ""
},
{
"docid": "c18037d7efce8348f0f06e3f3f83e187",
"text": "Ovotesticular disorder of sex development (OTDSD) is a rare condition and defined as the presence of ovarian and testicular tissue in the same individual. Most of patients with OTDSD have female internal genital organs. In this report, we present a case in which, we demonstrated prostate tissue using endoscopic and radiologic methods in a 46-XX, sex determining region of the Y chromosome negative male phenotypic patient, with no female internal genitalia. Existence of prostate in an XX male without SRY is rarely seen and reveals a complete male phenotype. This finding is critical to figure out what happens in embryonal period.",
"title": ""
},
{
"docid": "a7bc0af9b764021d1f325b1edfbfd700",
"text": "BACKGROUND\nIn the treatment of schizophrenia, changing antipsychotics is common when one treatment is suboptimally effective, but the relative effectiveness of drugs used in this strategy is unknown. This randomized, double-blind study compared olanzapine, quetiapine, risperidone, and ziprasidone in patients who had just discontinued a different atypical antipsychotic.\n\n\nMETHOD\nSubjects with schizophrenia (N=444) who had discontinued the atypical antipsychotic randomly assigned during phase 1 of the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) investigation were randomly reassigned to double-blind treatment with a different antipsychotic (olanzapine, 7.5-30 mg/day [N=66]; quetiapine, 200-800 mg/day [N=63]; risperidone, 1.5-6.0 mg/day [N=69]; or ziprasidone, 40-160 mg/day [N=135]). The primary aim was to determine if there were differences between these four treatments in effectiveness measured by time until discontinuation for any reason.\n\n\nRESULTS\nThe time to treatment discontinuation was longer for patients treated with risperidone (median: 7.0 months) and olanzapine (6.3 months) than with quetiapine (4.0 months) and ziprasidone (2.8 months). Among patients who discontinued their previous antipsychotic because of inefficacy (N=184), olanzapine was more effective than quetiapine and ziprasidone, and risperidone was more effective than quetiapine. There were no significant differences between antipsychotics among those who discontinued their previous treatment because of intolerability (N=168).\n\n\nCONCLUSIONS\nAmong this group of patients with chronic schizophrenia who had just discontinued treatment with an atypical antipsychotic, risperidone and olanzapine were more effective than quetiapine and ziprasidone as reflected by longer time until discontinuation for any reason.",
"title": ""
},
{
"docid": "4921d1967a5d05f72a53e5628cac1a8e",
"text": "This paper describes an architecture for controlling non-player characters (NPC) in the First Person Shooter (FPS) game Unreal Tournament 2004. Specifically, the DRE-Bot architecture is made up of three reinforcement learners, Danger, Replenish and Explore, which use the tabular Sarsa(λ) algorithm. This algorithm enables the NPC to learn through trial and error building up experience over time in an approach inspired by human learning. Experimentation is carried to measure the performance of DRE-Bot when competing against fixed strategy bots that ship with the game. The discount parameter, γ, and the trace parameter, λ, are also varied to see if their values have an effect on the performance.",
"title": ""
},
{
"docid": "d98f60a2a0453954543da840076e388a",
"text": "The back-propagation algorithm is the cornerstone of deep learning. Despite its importance, few variations of the algorithm have been attempted. This work presents an approach to discover new variations of the back-propagation equation. We use a domain specific language to describe update equations as a list of primitive functions. An evolution-based method is used to discover new propagation rules that maximize the generalization performance after a few epochs of training. We find several update equations that can train faster with short training times than standard back-propagation, and perform similar as standard back-propagation at convergence.",
"title": ""
},
{
"docid": "339f7a0031680a2d930f143700d66d5e",
"text": "We propose an approach to generate natural language questions from knowledge graphs such as DBpedia and YAGO. We stage this in the setting of a quiz game. Our approach, though, is general enough to be applicable in other settings. Given a topic of interest (e.g., Soccer) and a difficulty (e.g., hard), our approach selects a query answer, generates a SPARQL query having the answer as its sole result, before verbalizing the question.",
"title": ""
},
{
"docid": "1c4165c47ae9870e31a7106f1b82e94d",
"text": "INTRODUCTION\nPrevious studies found that aircraft maintenance workers may be exposed to organophosphates in hydraulic fluid and engine oil. Studies have also illustrated a link between long-term low-level organophosphate pesticide exposure and depression.\n\n\nMETHODS\nA questionnaire containing the Patient Health Questionnaire 8 depression screener was e-mailed to 52,080 aircraft maintenance workers (with N = 4801 complete responses) in a cross-sectional study to determine prevalence and severity of depression and descriptions of their occupational exposures.\n\n\nRESULTS\nThere was no significant difference between reported depression prevalence and severity in similar exposure groups in which aircraft maintenance workers were exposed or may have been exposed to organophosphate esters compared to similar exposure groups in which they were not exposed. However, a dichotomous measure of the prevalence of depression was significantly associated with self-reported exposure levels from low (OR: 1.21) to moderate (OR: 1.68) to high exposure (OR: 2.70) and with each exposure route including contact (OR: 1.68), inhalation (OR: 2.52), and ingestion (OR: 2.55). A self-reported four-level measure of depression severity was also associated with a self-reported four-level measure of exposure.\n\n\nDISCUSSION\nBased on self-reported exposures and outcomes, an association is observed between organophosphate exposure and depression; however, we cannot assume that the associations we observed are causal because some workers may have been more likely to report exposure to organophosphate esters and also more likely to report depression. Future studies should consider using a larger sample size, better methods for characterizing crew chief exposures, and bioassays to measure dose rather than exposure. Hardos JE, Whitehead LW, Han I, Ott DK, Waller DK. Depression prevalence and exposure to organophosphate esters in aircraft maintenance workers. Aerosp Med Hum Perform. 2016; 87(8):712-717.",
"title": ""
},
{
"docid": "db483f6aab0361ce5a3ad1a89508541b",
"text": "In this paper, we describe Swoop, a hypermedia inspired Ontology Browser and Editor based on OWL, the recently standardized Web-oriented ontology language. After discussing the design rationale and architecture of Swoop, we focus mainly on its features, using illustrative examples to highlight its use. We demonstrate that with its web-metaphor, adherence to OWL recommendations and key unique features such as Collaborative Annotation using Annotea, Swoop acts as a useful and efficient web ontology development tool. We conclude with a list of future plans for Swoop, that should further increase its overall appeal and accessibility.",
"title": ""
},
{
"docid": "b418470025d74d745e75225861a1ed7e",
"text": "The brain which is composed of more than 100 billion nerve cells is a sophisticated biochemical factory. For many years, neurologists, psychotherapists, researchers, and other health care professionals have studied the human brain. With the development of computer and information technology, it makes brain complex spectrum analysis to be possible and opens a highlight field for the study of brain science. In the present work, observation and exploring study of the activities of brain under brainwave music stimulus are systemically made by experimental and spectrum analysis technology. From our results, the power of the 10.5Hz brainwave appears in the experimental figures, it was proved that upper alpha band is entrained under the special brainwave music. According to the Mozart effect and the analysis of improving memory performance, the results confirm that upper alpha band is indeed related to the improvement of learning efficiency.",
"title": ""
},
{
"docid": "a52b452f1fb7e1b48a1f3f50ea8a95a7",
"text": "Domain Adaptation (DA) techniques aim at enabling machine learning methods learn effective classifiers for a “target” domain when the only available training data belongs to a different “source” domain. In this extended abstract we briefly describe a new DA method called Distributional Correspondence Indexing (DCI) for sentiment classification. DCI derives term representations in a vector space common to both domains where each dimension reflects its distributional correspondence to a pivot, i.e., to a highly predictive term that behaves similarly across domains. The experiments we have conducted show that DCI obtains better performance than current state-of-theart techniques for cross-lingual and cross-domain sentiment classification.",
"title": ""
},
{
"docid": "af1cc16cae083e8b07e53dc82d5ca68f",
"text": "People often share emotions with others in order to manage their emotional experiences. We investigate how social media properties such as visibility and directedness affect how people share emotions in Facebook, and their satisfaction after doing so. 141 participants rated 1,628 of their own recent status updates, posts they made on others' timelines, and private messages they sent for intensity, valence, personal relevance, and overall satisfaction felt after sharing each message. For network-visible channels-status updates and posts on others' timelines-they also rated their satisfaction with replies they received. People shared differently between channels, with more intense and negative emotions in private messages. People felt more satisfied after sharing more positive emotions in all channels and after sharing more personally relevant emotions in network-visible channels. Finally, people's overall satisfaction after sharing emotions in network-visible channels is strongly tied to their reply satisfaction. Quality of replies, not just quantity, matters, suggesting the need for designs that help people receive valuable responses to their shared emotions.",
"title": ""
}
] |
scidocsrr
|
5e9904141f3aec6cc3ab1047abe1a708
|
Learning to Identify Metaphors from a Corpus of Proverbs
|
[
{
"docid": "3028de6940fb7a5af5320c506946edfc",
"text": "Metaphor is ubiquitous in text, even in highly technical text. Correct inference about textual entailment requires computers to distinguish the literal and metaphorical senses of a word. Past work has treated this problem as a classical word sense disambiguation task. In this paper, we take a new approach, based on research in cognitive linguistics that views metaphor as a method for transferring knowledge from a familiar, well-understood, or concrete domain to an unfamiliar, less understood, or more abstract domain. This view leads to the hypothesis that metaphorical word usage is correlated with the degree of abstractness of the word’s context. We introduce an algorithm that uses this hypothesis to classify a word sense in a given context as either literal (denotative) or metaphorical (connotative). We evaluate this algorithm with a set of adjectivenoun phrases (e.g., in dark comedy , the adjective dark is used metaphorically; in dark hair, it is used literally) and with the TroFi (Trope Finder) Example Base of literal and nonliteral usage for fifty verbs. We achieve state-of-theart performance on both datasets.",
"title": ""
},
{
"docid": "0b587770a13ba76572a1e51df52d95a3",
"text": "Current approaches to supervised learning of metaphor tend to use sophisticated features and restrict their attention to constructions and contexts where these features apply. In this paper, we describe the development of a supervised learning system to classify all content words in a running text as either being used metaphorically or not. We start by examining the performance of a simple unigram baseline that achieves surprisingly good results for some of the datasets. We then show how the recall of the system can be improved over this strong baseline.",
"title": ""
},
{
"docid": "515fac2b02637ddee5e69a8a22d0e309",
"text": "The continuous expansion of the multilingual information society has led in recent years to a pressing demand for multilingual linguistic resources suitable to be used for different applications. In this paper we present the WordNet Domains Hierarchy (WDH), a language-independent resource composed of 164, hierarchically organized, domain labels (e.g. Architecture, Sport, Medicine). Although WDH has been successfully applied to various Natural Language Processing tasks, the first available version presented some problems, mostly related to the lack of a clear semantics of the domain labels. Other correlated issues were the coverage and the balancing of the domains. We illustrate a new version of WDH addressing these problems by an explicit and systematic reference to the Dewey Decimal Classification. The new version of WDH has a better defined semantics and is applicable to a wider range of tasks.",
"title": ""
}
] |
[
{
"docid": "00963af83a3c605adf7701c5d03952ef",
"text": "Convolutional neural networks (CNNs) have shown remarkable results over the last several years for a wide range of computer vision tasks. A new architecture recently introduced by Sabour et al. [2017], referred to as a capsule networks with dynamic routing, has shown great initial results for digit recognition and small image classification. The success of capsule networks lies in their ability to preserve more information about the input by replacing max-pooling layers with convolutional strides and dynamic routing, allowing for preservation of part-whole relationships in the data. This preservation of the input is demonstrated by reconstructing the input from the output capsule vectors. Our work expands the use of capsule networks to the task of object segmentation for the first time in the literature. We extend the idea of convolutional capsules with locally-connected routing and propose the concept of deconvolutional capsules. Further, we extend the masked reconstruction to reconstruct the positive input class. The proposed convolutionaldeconvolutional capsule network, called SegCaps, shows strong results for the task of object segmentation with substantial decrease in parameter space. As an example application, we applied the proposed SegCaps to segment pathological lungs from low dose CT scans and compared its accuracy and efficiency with other U-Net-based architectures. SegCaps is able to handle large image sizes (512 × 512) as opposed to baseline capsules (typically less than 32 × 32). The proposed SegCaps reduced the number of parameters of U-Net architecture by 95.4% while still providing a better segmentation accuracy.",
"title": ""
},
{
"docid": "85f67ab0e1adad72bbe6417d67fd4c81",
"text": "Data warehouses are used to store large amounts of data. This data is often used for On-Line Analytical Processing (OLAP). Short response times are essential for on-line decision support. Common approaches to reach this goal in read-mostly environments are the precomputation of materialized views and the use of index structures. In this paper, a framework is presented to evaluate different index structures analytically depending on nine parameters for the use in a data warehouse environment. The framework is applied to four different index structures to evaluate which structure works best for range queries. We show that all parameters influence the performance. Additionally, we show why bitmap index structures use modern disks better than traditional tree structures and why bitmaps will supplant the tree based index structures in the future.",
"title": ""
},
{
"docid": "7d603d154025f7160c0711bba92e1049",
"text": "Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences.",
"title": ""
},
{
"docid": "a51a3e1ae86e4d178efd610d15415feb",
"text": "The availability of semantically annotated image and video assets constitutes a critical prerequisite for the realisation of intelligent knowledge management services pertaining to realistic user needs. Given the extend of the challenges involved in the automatic extraction of such descriptions, manually created metadata play a significant role, further strengthened by their deployment in training and evaluation tasks related to the automatic extraction of content descriptions. The different views taken by the two main approaches towards semantic content description, namely the Semantic Web and MPEG-7, as well as the traits particular to multimedia content due to the multiplicity of information levels involved, have resulted in a variety of image and video annotation tools, adopting varying description aspects. Aiming to provide a common framework of reference and furthermore to highlight open issues, especially with respect to the coverage and the interoperability of the produced metadata, in this chapter we present an overview of the state of the art in image and video annotation tools.",
"title": ""
},
{
"docid": "e8e9061164a297ea03ea857b30491657",
"text": "Music genre classification can be of great utility to musical database management. Most current classification methods are supervised and tend to be based on contrived taxonomies. However, due to the ambiguities and inconsistencies in the chosen taxonomies, these methods are not applicable for a much larger database. We proposed an unsupervised clustering method, based on a given measure of similarity which can be provided by hidden Markov models. In addition, in order to better characterize music content, a novel segmentation scheme is proposed, based on music intrinsic rhythmic structure analysis and features are extracted based on these segments. The performance of this feature segmentation scheme performs better than the traditional fixed-length method, according to experimental results. Our preliminary results also suggest that the proposed method is comparable to the supervised classification method.",
"title": ""
},
{
"docid": "b27dc4a19b44bf2fd13f299de8c33108",
"text": "A large proportion of the world’s population lives in remote rural areas that are geographically isolated and sparsely populated. This paper proposed a hybrid power generation system suitable for remote area application. The concept of hybridizing renewable energy sources is that the base load is to be covered by largest and firmly available renewable source(s) and other intermittent source(s) should augment the base load to cover the peak load of an isolated mini electric grid system. The study is based on modeling, simulation and optimization of renewable energy system in rural area in Sundargarh district of Orissa state, India. The model has designed to provide an optimal system conFigureuration based on hour-by-hour data for energy availability and demands. Various renewable/alternative energy sources, energy storage and their applicability in terms of cost and performance are discussed. The homer software is used to study and design the proposed hybrid alternative energy power system model. The Sensitivity analysis was carried out using Homer program. Based on simulation results, it has been found that renewable/alternative energy sources will replace the conventional energy sources and would be a feasible solution for distribution of electric power for stand alone applications at remote and distant locations.",
"title": ""
},
{
"docid": "b6d3b53a58d05da12a209d36b07a39b7",
"text": "This paper proposes a novel framework for detecting redundancy in supervised sentence categorisation. Unlike traditional singleton neural network, our model incorporates character- aware convolutional neural network (Char-CNN) with character-aware recurrent neural network (Char-RNN) to form a convolutional recurrent neural network (CRNN). Our model benefits from Char-CNN in that only salient features are selected and fed into the integrated Char-RNN. Char-RNN effectively learns long sequence semantics via sophisticated update mechanism. We compare our framework against the state-of-the- art text classification algorithms on four popular benchmarking corpus. For instance, our model achieves competing precision rate, recall ratio, and F1 score on the Google-news data-set. For twenty- news-groups data stream, our algorithm obtains the optimum on precision rate, recall ratio, and F1 score. For Brown Corpus, our framework obtains the best F1 score and almost equivalent precision rate and recall ratio over the top competitor. For the question classification collection, CRNN produces the optimal recall rate and F1 score and comparable precision rate. We also analyse three different RNN hidden recurrent cells' impact on performance and their runtime efficiency. We observe that MGU achieves the optimal runtime and comparable performance against GRU and LSTM. For TFIDF based algorithms, we experiment with word2vec, GloVe, and sent2vec embeddings and report their performance differences.",
"title": ""
},
{
"docid": "327bdee6cd94def49456bdd50a207836",
"text": "A new model for perceptual evaluation of speech quality (PESQ) was recently standardised by the ITU-T as recommendation P.862. Unlike previous codec assessment models, such as PSQM and MNB (ITU-T P.861), PESQ is able to predict subjective quality with good correlation in a very wide range of conditions, that may include coding distortions, errors, noise, filtering, delay and variable delay. This paper introduces time delay identification techniques, and outlines some causes of variable delay, before describing the processes that are integrated into PESQ and specified in P.862. More information on the structure of PESQ, and performance results, can be found in the accompanying paper on the PESQ psychoacoustic model.",
"title": ""
},
{
"docid": "cfaeeb000232ade838ad751b7b404a66",
"text": "Meyer has recently introduced an image decomposition model to split an image into two components: a geometrical component and a texture (oscillatory) component. Inspired by his work, numerical models have been developed to carry out the decomposition of gray scale images. In this paper, we propose a decomposition algorithm for color images. We introduce a generalization of Meyer s G norm to RGB vectorial color images, and use Chromaticity and Brightness color model with total variation minimization. We illustrate our approach with numerical examples. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "2c69eb4be7bc2bed32cfbbbe3bc41a5d",
"text": "The Sapienza University Networking framework for underwater Simulation Emulation and real-life Testing (SUNSET) is a toolkit for the implementation and testing of protocols for underwater sensor networks. SUNSET enables a radical new way of performing experimental research on underwater communications. It allows protocol designers and implementors to easily realize their solutions and to evaluate their performance through simulation, in-lab emulation and trials at sea in a direct and transparent way, and independently of specific underwater hardware platforms. SUNSET provides a complete toolchain of predeployment and deployment time tools able to identify risks, malfunctioning and under-performing solutions before incurring the expense of going to sea. Novel underwater systems can therefore be rapidly and easily investigated. Heterogeneous underwater communication technologies from different vendors can be used, allowing the evaluation of the impact of different combinations of hardware and software on the overall system performance. Using SUNSET, underwater devices can be reconfigured and controlled remotely in real time, using acoustic links. This allows the performance investigation of underwater systems under different settings and configurations and significantly reduces the cost and complexity of at-sea trials. This paper describes the architectural concept of SUNSET and presents some exemplary results of its use in the field. The SUNSET framework has been extensively validated during more than fifteen at-sea experimental campaigns in the past four years. Several of these have been conducted jointly with the NATO STO Centre for Maritime Research and Experimentation (CMRE) under a collaboration between the University of Rome and CMRE.",
"title": ""
},
{
"docid": "55285f99e1783bcba47ab41e56171026",
"text": "Two different formal definitions of gray-scale reconstruction are presented. The use of gray-scale reconstruction in various image processing applications discussed to illustrate the usefulness of this transformation for image filtering and segmentation tasks. The standard parallel and sequential approaches to reconstruction are reviewed. It is shown that their common drawback is their inefficiency on conventional computers. To improve this situation, an algorithm that is based on the notion of regional maxima and makes use of breadth-first image scannings implemented using a queue of pixels is introduced. Its combination with the sequential technique results in a hybrid gray-scale reconstruction algorithm which is an order of magnitude faster than any previously known algorithm.",
"title": ""
},
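Gray-scale reconstruction as summarized above can be defined as iterated geodesic dilation of a marker image under a mask image until stability. The sketch below implements that basic iterative definition for illustration; it is not the faster hybrid queue-based algorithm the abstract refers to (libraries such as scikit-image ship an efficient `reconstruction` routine).

```python
# Illustrative sketch of gray-scale reconstruction by dilation: repeatedly dilate the
# marker and clip it under the mask until nothing changes.
import numpy as np
from scipy import ndimage

def reconstruct_by_dilation(marker, mask, footprint=np.ones((3, 3))):
    """Iterate geodesic dilation of `marker` under `mask` until stability."""
    assert np.all(marker <= mask), "marker must lie below the mask"
    current = marker.astype(float).copy()
    while True:
        dilated = ndimage.grey_dilation(current, footprint=footprint)
        nxt = np.minimum(dilated, mask)        # keep the result under the mask
        if np.array_equal(nxt, current):       # stability reached
            return nxt
        current = nxt

# Example use: reconstruct the image from a marker lowered by 0.1 (h-maxima style).
image = np.random.rand(64, 64)
recon = reconstruct_by_dilation(image - 0.1, image)
```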
{
"docid": "13d1b0637c12d617702b4f80fd7874ef",
"text": "Linear-time algorithms for testing the planarity of a graph are well known for over 35 years. However, these algorithms are quite involved and recent publications still try to give simpler linear-time tests. We give a conceptually simple reduction from planarity testing to the problem of computing a certain construction of a 3-connected graph. This implies a linear-time planarity test. Our approach is radically different from all previous linear-time planarity tests; as key concept, we maintain a planar embedding that is 3-connected at each point in time. The algorithm computes a planar embedding if the input graph is planar and a Kuratowski-subdivision otherwise.",
"title": ""
},
{
"docid": "b059f8393adbf1b12a809b7041f90bce",
"text": "As robots are gradually leaving highly structured factory environments and moving into human populated environments, they need to possess more complex cognitive abilities. They do not only have to operate efficiently and safely in natural, populated environments, but also be able to achieve higher levels of cooperation and communication with humans. Human-robot collaboration (HRC) is a research field with a wide range of applications, future scenarios, and potentially a high economic impact. HRC is an interdisciplinary research area comprising classical robotics, cognitive sciences, and psychology. This article gives a survey of the state of the art of human-robot collaboration. Established methods for intention estimation, action planning, joint action, and machine learning are presented together with existing guidelines to hardware design. This article is meant to provide the reader with a good overview of technologies and methods for HRC.",
"title": ""
},
{
"docid": "a86bc0970dba249e1e53f9edbad3de43",
"text": "Periodic inspection of a hanger rope is needed for the effective maintenance of suspension bridge. However, it is dangerous for human workers to access the hanger rope and not easy to check the exact state of the hanger rope. In this work we have developed a wheel-based robot that can approach the hanger rope instead of the human worker and carry the inspection device which is able to examine the inside status of the hanger rope. Meanwhile, a wheel-based cable climbing robot may be badly affected by the vibration that is generated while the robot moves on the bumpy surface of the hanger rope. The caterpillar is able to safely drive with the wide contact face on the rough terrain. Accordingly, we developed the caterpillar that can be combined with the developed cable climbing robot. In this paper, the caterpillar is introduced and its performance is compared with the wheel-based cable climbing robot.",
"title": ""
},
{
"docid": "1a44e040bbb5c81a53a1255fc7f5d4d7",
"text": "Information technology and the Internet have had a dramatic effect on business operations. Companies are making large investments in e-commerce applications but are hard pressed to evaluate the success of their e-commerce systems. The DeLone & McLean Information Systems Success Model can be adapted to the measurement challenges of the new e-commerce world. The six dimensions of the updated model are a parsimonious framework for organizing the e-commerce success metrics identified in the literature. Two case examples demonstrate how the model can be used to guide the identification and specification of e-commerce success metrics.",
"title": ""
},
{
"docid": "6cb58777b5161c387d3e985fa0fb6c7c",
"text": "Recent advances in artificial intelligence (AI) and machine learning have created a general perception that AI could be used to solve complex problems, and in some situations over-hyped as a tool that can be so easily used. Unfortunately, the barrier to realization of mass adoption of AI on various business domains is too high because most domain experts have no background in AI. Developing AI applications involves multiple phases, namely data preparation, application modeling, and product deployment. The effort of AI research has been spent mostly on new AI models (in the model training stage) to improve the performance of benchmark tasks such as image recognition. Many other factors such as usability, efficiency and security of AI have not been well addressed, and therefore form a barrier to democratizing AI. Further, for many real world applications such as healthcare and autonomous driving, learning via huge amounts of possibility exploration is not feasible since humans are involved. In many complex applications such as healthcare, subject matter experts (e.g. Clinicians) are the ones who appreciate the importance of features that affect health, and their knowledge together with existing knowledge bases are critical to the end results. In this paper, we take a new perspective on developing AI solutions, and present a solution for making AI usable. We hope that this resolution will enable all subject matter experts (eg. Clinicians) to exploit AI like data scientists.",
"title": ""
},
{
"docid": "50c493ce0ac1f60889fb2a4b490fc939",
"text": "Future cellular networks will be of high capacity and heterogeneity. The structure and architecture will require high efficiency and scalability in network operation and management. In this paper, we address main requirements and challenges of future cellular networks and introduce network function virtualisation (NFV) with software defined networking (SDN) to realize the self-organizing (SO) scheme. NFV integrates the hardware appliances together in industry standard servers. And SDN performs as core controller of the network. The proposed SO scheme is based on soft fractional frequency reuse (SFFR) framework. The scheme takes different traffic demands into consideration and allocates the power adaptively. Finally the system is proved to be more scalable, energy-saving, and intelligent.",
"title": ""
},
{
"docid": "29ce7251e5237b0666cef2aee7167126",
"text": "Chinese characters have a huge set of character categories, more than 20, 000 and the number is still increasing as more and more novel characters continue being created. However, the enormous characters can be decomposed into a compact set of about 500 fundamental and structural radicals. This paper introduces a novel radical analysis network (RAN) to recognize printed Chinese characters by identifying radicals and analyzing two-dimensional spatial structures among them. The proposed RAN first extracts visual features from input by employing convolutional neural networks as an encoder. Then a decoder based on recurrent neural networks is employed, aiming at generating captions of Chinese characters by detecting radicals and two-dimensional structures through a spatial attention mechanism. The manner of treating a Chinese character as a composition of radicals rather than a single character class largely reduces the size of vocabulary and enables RAN to possess the ability of recognizing unseen Chinese character classes, namely zero-shot learning.",
"title": ""
},
{
"docid": "22bf1c80bb833a7cdf6dd70936b40cb7",
"text": "Text messaging has become a popular form of communication with mobile phones worldwide. We present findings from a large scale text messaging study of 70 university students in the United States. We collected almost 60, 000 text messages over a period of 4 months using a custom logging tool on our participants' phones. Our re- sults suggest that students communicate with a large number of contacts for extended periods of time, engage in simultaneous conversations with as many as 9 contacts, and often use text messaging as a method to switch between a variety of communication mediums. We also explore the content of text messages, and ways text message habits have changed over the last decade as it has become more popular. Finally, we offer design suggestions for future mobile communication tools.",
"title": ""
},
{
"docid": "0774820345f37dd1ae474fc4da1a3a86",
"text": "Several diseases and disorders are treatable with therapeutic proteins, but some of these products may induce an immune response, especially when administered as multiple doses over prolonged periods. Antibodies are created by classical immune reactions or by the breakdown of immune tolerance; the latter is characteristic of human homologue products. Many factors influence the immunogenicity of proteins, including structural features (sequence variation and glycosylation), storage conditions (denaturation, or aggregation caused by oxidation), contaminants or impurities in the preparation, dose and length of treatment, as well as the route of administration, appropriate formulation and the genetic characteristics of patients. The clinical manifestations of antibodies directed against a given protein may include loss of efficacy, neutralization of the natural counterpart and general immune system effects (including allergy, anaphylaxis or serum sickness). An upsurge in the incidence of antibody-mediated pure red cell aplasia (PRCA) among patients taking one particular formulation of recombinant human erythropoietin (epoetin-alpha, marketed as Eprex(R)/Erypo(R); Johnson & Johnson) in Europe caused widespread concern. The PRCA upsurge coincided with removal of human serum albumin from epoetin-alpha in 1998 and its replacement with glycine and polysorbate 80. Although the immunogenic potential of this particular product may have been enhanced by the way the product was stored, handled and administered, it should be noted that the subcutaneous route of administration does not confer immunogenicity per se. The possible role of micelle (polysorbate 80 plus epoetin-alpha) formation in the PRCA upsurge with Eprex is currently being investigated.",
"title": ""
}
] |
scidocsrr
|
03ace445db37807e2c9f592683978456
|
Filicide-suicide: common factors in parents who kill their children and themselves.
|
[
{
"docid": "5636a228fea893cd48cebe15f72c0bb0",
"text": "A familicide is a multiple-victim homicide incident in which the killer’s spouse and one or more children are slain. National archives of Canadian and British homicides, containing 109 familicide incidents, permit some elucidation of the characteristic and epidemiology of this crime. Familicides were almost exclusively perpetrated by men, unlike other spouse-killings and other filicides. Half the familicidal men killed themselves as well, a much higher rate of suicide than among other uxoricidal or filicidal men. De facto unions were overrepresented, compared to their prevalence in the populations-atlarge, but to a much lesser extent in familicides than in other uxoricides. Stepchildren were overrepresented as familicide victims, compared to their numbers in the populations-at-large, but to a much lesser extent than in other filicides; unlike killers of their genetic offspring, men who killed their stepchildren were rarely suicidal. An initial binary categorization of familicides as accusatory versus despondent is tentatively proposed. @ 19% wiley-Liss, Inc.",
"title": ""
}
] |
[
{
"docid": "773bd34632ce1afe27f994edf906fea3",
"text": "Crossed-guide X-band waveguide couplers with bandwidths of up to 40% and coupling factors of better than 5 dB are presented. The tight coupling and wide bandwidth are achieved by using reduced height waveguide. Design graphs and measured data are presented.",
"title": ""
},
{
"docid": "bc03f442a0785b4179f6eefb2c5d0a35",
"text": "Internet of Things (IoT)-generated data are characterized by its continuous generation, large amount, and unstructured format. The existing relational database technologies are inadequate to handle such IoT-generated data due to the limited processing speed and the significant storage-expansion cost. Thus, big data processing technologies, which are normally based on distributed file systems, distributed database management, and parallel processing technologies, have arisen as a core technology to implement IoT-generated data repositories. In this paper, we propose a sensor-integrated radio frequency identification (RFID) data repository-implementation model using MongoDB, the most popular big data-savvy document-oriented database system now. First, we devise a data repository schema that can effectively integrate and store the heterogeneous IoT data sources, such as RFID, sensor, and GPS, by extending the event data types in electronic product code information services standard, a de facto standard for the information exchange services for RFID-based traceability. Second, we propose an effective shard key to maximize query speed and uniform data distribution over data servers. Last, through a series of experiments measuring query speed and the level of data distribution, we show that the proposed design strategy, which is based on horizontal data partitioning and a compound shard key, is effective and efficient for the IoT-generated RFID/sensor big data.",
"title": ""
},
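The abstract above argues for a carefully chosen (compound or hashed) shard key to spread IoT event writes evenly across MongoDB shards. The pymongo sketch below shows how such a key could be declared; the database, collection and field names, and the choice between a hashed key and a compound range key, are assumptions for illustration rather than the paper's exact schema.

```python
# Hedged sketch: declaring a shard key for an RFID/sensor event collection with pymongo.
# Requires a connection to a mongos router in a sharded cluster.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos-router:27017")     # hypothetical router address
client.admin.command("enableSharding", "iot_repository")  # allow sharding on the database
client.admin.command(
    "shardCollection",
    "iot_repository.rfid_events",
    key={"epc": "hashed"},          # hashed key spreads monotonically tagged writes evenly
)

# A compound range-key alternative (e.g. tag id plus event time) would look like:
# client.admin.command("shardCollection", "iot_repository.rfid_events",
#                      key={"epc": 1, "eventTime": 1})
```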
{
"docid": "6eb4eb9b80b73bdcd039dfc8e07c3f5a",
"text": "Code duplication or copying a code fragment and then reuse by pasting with or without any modifications is a well known code smell in software maintenance. Several studies show that about 5% to 20% of a software systems can contain duplicated code, which is basically the results of copying existing code fragments and using then by pasting with or without minor modifications. One of the major shortcomings of such duplicated fragments is that if a bug is detected in a code fragment, all the other fragments similar to it should be investigated to check the possible existence of the same bug in the similar fragments. Refactoring of the duplicated code is another prime issue in software maintenance although several studies claim that refactoring of certain clones are not desirable and there is a risk of removing them. However, it is also widely agreed that clones should at least be detected. In this paper, we survey the state of the art in clone detection research. First, we describe the clone terms commonly used in the literature along with their corresponding mappings to the commonly used clone types. Second, we provide a review of the existing clone taxonomies, detection approaches and experimental evaluations of clone detection tools. Applications of clone detection research to other domains of software engineering and in the same time how other domain can assist clone detection research have also been pointed out. Finally, this paper concludes by pointing out several open problems related to clone detection research. ∗This document represents our initial findings and a further study is being carried on. Reader’s feedback is welcome at croy@cs.queensu.ca.",
"title": ""
},
{
"docid": "858f15a9fc0e014dd9ffa953ac0e70f7",
"text": "Canny (IEEE Trans. Pattern Anal. Image Proc. 8(6):679-698, 1986) suggested that an optimal edge detector should maximize both signal-to-noise ratio and localization, and he derived mathematical expressions for these criteria. Based on these criteria, he claimed that the optimal step edge detector was similar to a derivative of a gaussian. However, Canny’s work suffers from two problems. First, his derivation of localization criterion is incorrect. Here we provide a more accurate localization criterion and derive the optimal detector from it. Second, and more seriously, the Canny criteria yield an infinitely wide optimal edge detector. The width of the optimal detector can however be limited by considering the effect of the neighbouring edges in the image. If we do so, we find that the optimal step edge detector, according to the Canny criteria, is the derivative of an ISEF filter, proposed by Shen and Castan (Graph. Models Image Proc. 54:112–133, 1992). In addition, if we also consider detecting blurred (or non-sharp) gaussian edges of different widths, we find that the optimal blurred-edge detector is the above optimal step edge detector convolved with a gaussian. This implies that edge detection must be performed at multiple scales to cover all the blur widths in the image. We derive a simple scale selection procedure for edge detection, and demonstrate it in one and two dimensions.",
"title": ""
},
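The result summarized above is that, under the corrected Canny criteria, the optimal step-edge detector is the derivative of an ISEF (symmetric exponential) filter in the sense of Shen and Castan. The 1-D sketch below illustrates the flavor of such a detector: exponential smoothing built from causal and anticausal first-order passes, followed by a derivative and a threshold. The combination rule and the parameter values are illustrative choices, not the paper's derivation.

```python
# Hedged sketch of a derivative-of-ISEF-style 1-D edge detector.
import numpy as np

def isef_smooth(signal, a=0.3):
    """Symmetric exponential smoothing via causal + anticausal first-order passes."""
    x = np.asarray(signal, dtype=float)
    causal = np.zeros_like(x)
    anticausal = np.zeros_like(x)
    acc = 0.0
    for i in range(len(x)):                      # forward (causal) pass
        acc = a * x[i] + (1.0 - a) * acc
        causal[i] = acc
    acc = 0.0
    for i in range(len(x) - 1, -1, -1):          # backward (anticausal) pass
        acc = a * x[i] + (1.0 - a) * acc
        anticausal[i] = acc
    # Combine the two passes (the center sample is counted twice) and normalize DC gain to 1.
    return (causal + anticausal - a * x) / (2.0 - a)

def edges(signal, a=0.3, thresh=0.1):
    gradient = np.gradient(isef_smooth(signal, a))   # derivative of the smoothed signal
    return np.where(np.abs(gradient) > thresh)[0]    # indices of candidate edge points

step = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * np.random.randn(100)
print(edges(step))   # expected: indices clustered around the step at position 50
```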
{
"docid": "767179a47047435dd2d49db15598c2ef",
"text": "We determine when a join/outerjoin query can be expressed unambiguously as a query graph, without an explicit specification of the order of evaluation. To do so, we first characterize the set of expression trees that implement a given join/outerjoin query graph, and investigate the existence of transformations among the various trees. Our main theorem is that a join/outerjoin query is freely reorderable if the query graph derived from it falls within a particular class, every tree that “implements” such a graph evaluates to the same result.\nThe result has applications to language design and query optimization. Languages that generate queries within such a class do not require the user to indicate priority among join operations, and hence may present a simplified syntax. And it is unnecessary to add extensive analyses to a conventional query optimizer in order to generate legal reorderings for a freely-reorderable language.",
"title": ""
},
{
"docid": "79fdfee8b42fe72a64df76e64e9358bc",
"text": "An algorithm is described to solve multiple-phase optimal control problems using a recently developed numerical method called the Gauss pseudospectral method. The algorithm is well suited for use in modern vectorized programming languages such as FORTRAN 95 and MATLAB. The algorithm discretizes the cost functional and the differential-algebraic equations in each phase of the optimal control problem. The phases are then connected using linkage conditions on the state and time. A large-scale nonlinear programming problem (NLP) arises from the discretization and the significant features of the NLP are described in detail. A particular reusable MATLAB implementation of the algorithm, called GPOPS, is applied to three classical optimal control problems to demonstrate its utility. The algorithm described in this article will provide researchers and engineers a useful software tool and a reference when it is desired to implement the Gauss pseudospectral method in other programming languages.",
"title": ""
},
{
"docid": "b499ded5996db169e65282dd8b65f289",
"text": "For complex tasks, such as manipulation and robot navigation, reinforcement learning (RL) is well-known to be difficult due to the curse of dimensionality. To overcome this complexity and making RL feasible, hierarchical RL (HRL) has been suggested. The basic idea of HRL is to divide the original task into elementary subtasks, which can be learned using RL. In this paper, we propose a HRL architecture for learning robot’s movements, e.g. robot navigation. The proposed HRL consists of two layers: (i) movement planning and (ii) movement execution. In the planning layer, e.g. generating navigation trajectories, discrete RL is employed while using movement primitives. Given the movement planning and corresponding primitives, the policy for the movement execution can be learned in the second layer using continuous RL. The proposed approach is implemented and evaluated on a mobile robot platform for a",
"title": ""
},
{
"docid": "8a325971d268cafc25845654c8a520cf",
"text": "Lokale onkologische Tumorkontrolle bei malignen Knochentumoren. Erhalt der Arm- und Handfunktion ab Ellenbogen mit der Möglichkeit, die Hand zum Mund zu führen. Vermeiden der Amputation. Stabile Aufhängung des Arms im Schulter-/Neogelenk. Primäre Knochensarkome des proximalen Humerus oder der Skapula mit Gelenkbeteiligung ohne Infiltration der Gefäßnervenstraße bei Primärmanifestation. Knochenmetastasen solider Tumoren mit großen Knochendefekten bei Primärmanifestation in palliativer/kurativer Intention oder im Revisions-/Rezidivfall nach Versagen vorhergehender Versorgungen. Tumorinfiltration der Gefäßnervenstraße. Fehlende Möglichkeit der muskulären Prothesendeckung durch ausgeprägte Tumorinfiltration der Oberarmweichteile. Transdeltoidaler Zugang unter Splitt der Deltamuskulatur. Präparation des tumortragenden Humerus unter langstreckiger Freilegung des Gefäßnervenbündels. Belassen eines onkologisch ausreichenden allseitigen Sicherheitsabstands auf dem Resektat sowohl seitens der Weichteile als auch des knöchernen Absetzungsrands. Zementierte oder zementfreie Implantation der Tumorprothese. Rekonstruktion des Gelenks und Fixation des Arms unter Verwendung eines Anbindungsschlauchs. Ggf. Bildung eines artifiziellen Gelenks bei extraartikulärer Resektion. Möglichst anatomische Refixation der initial abgesetzten Muskulatur auf dem Implantat zur Wiederherstellung der Funktion. Lagerung des Arms im z. B. Gilchrist-Verband für 4–6 Wochen postoperativ. Passive Beübung im Ellenbogengelenk nach 3–4 Wochen. Aktive Beübung der Schulter und des Ellenbogengelenks frühestens nach 4–6 Wochen. Lymphdrainage und Venenpumpe ab dem 1.–2. postoperativen Tag. The aim of the operation is local tumor control in malignant primary and secondary bone tumors of the proximal humerus. Limb salvage and preservation of function with the ability to lift the hand to the mouth. Stable suspension of the arm in the shoulder joint or the artificial joint. Primary malignant bone tumors of the proximal humerus or the scapula with joint infiltration but without involvement of the vessel/nerve bundle. Metastases of solid tumors with osteolytic defects in palliative or curative intention or after failure of primary osteosynthesis. Tumor infiltration of the vessel/nerve bundle. Massive tumor infiltration of the soft tissues without the possibility of sufficient soft tissue coverage of the implant. Transdeltoid approach with splitting of the deltoid muscle. Preparation and removal of the tumor-bearing humerus with exposure of the vessel/nerve bundle. Ensure an oncologically sufficient soft tissue and bone margin in all directions of the resection. Cementless or cemented stem implantation. Reconstruction of the joint capsule and fixation of the prosthesis using a synthetic tube. Soft tissue coverage of the prosthesis with anatomical positioning of the muscle to regain function. Immobilization of the arm/shoulder joint for 4–6 weeks in a Gilchrist bandage. Passive mobilization of the elbow joint after 3–4 weeks. Active mobilization of the shoulder and elbow joint at the earliest after 4–6 weeks.",
"title": ""
},
{
"docid": "befc74d8dc478a67c009894c3ef963d3",
"text": "In this paper, we demonstrate that the essentials of image classification and retrieval are the same, since both tasks could be tackled by measuring the similarity between images. To this end, we propose ONE (Online Nearest-neighbor Estimation), a unified algorithm for both image classification and retrieval. ONE is surprisingly simple, which only involves manual object definition, regional description and nearest-neighbor search. We take advantage of PCA and PQ approximation and GPU parallelization to scale our algorithm up to large-scale image search. Experimental results verify that ONE achieves state-of-the-art accuracy in a wide range of image classification and retrieval benchmarks.",
"title": ""
},
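ONE, as described above, treats classification and retrieval as the same nearest-neighbor problem over compressed image descriptors. The sketch below mimics that pipeline with scikit-learn on random stand-in features: PCA compression, a neighbor index, a ranked retrieval list, and a neighbor vote for classification. Product quantization and GPU-parallel search, which the paper uses for scale, are omitted here.

```python
# Illustrative sketch (not the authors' exact pipeline) of nearest-neighbor based
# image classification and retrieval over PCA-compressed descriptors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train_X = rng.normal(size=(1000, 512))     # stand-in for pooled regional image descriptors
train_y = rng.integers(0, 10, size=1000)   # stand-in class labels

pca = PCA(n_components=64).fit(train_X)                    # compress the descriptors
index = NearestNeighbors(n_neighbors=5).fit(pca.transform(train_X))

query = pca.transform(rng.normal(size=(1, 512)))           # descriptor of a query image
_, idx = index.kneighbors(query)
retrieved = idx[0]                                         # retrieval: ranked neighbor list
predicted = np.bincount(train_y[retrieved]).argmax()       # classification: neighbor vote
print(retrieved, predicted)
```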
{
"docid": "dc3495ec93462e68f606246205a8416d",
"text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.",
"title": ""
},
{
"docid": "86497dcdfd05162804091a3368176ad5",
"text": "This paper reviews the current status and implementation of battery chargers, charging power levels and infrastructure for plug-in electric vehicles and hybrids. Battery performance depends both on types and design of the batteries, and on charger characteristics and charging infrastructure. Charger systems are categorized into off-board and on-board types with unidirectional or bidirectional power flow. Unidirectional charging limits hardware requirements and simplifies interconnection issues. Bidirectional charging supports battery energy injection back to the grid. Typical onboard chargers restrict the power because of weight, space and cost constraints. They can be integrated with the electric drive for avoiding these problems. The availability of a charging infrastructure reduces on-board energy storage requirements and costs. On-board charger systems can be conductive or inductive. While conductive chargers use direct contact, inductive chargers transfer power magnetically. An off-board charger can be designed for high charging rates and is less constrained by size and weight. Level 1 (convenience), Level 2 (primary), and Level 3 (fast) power levels are discussed. These system configurations vary from country to country depending on the source and plug capacity standards. Various power level chargers and infrastructure configurations are presented, compared, and evaluated based on amount of power, charging time and location, cost, equipment, effect on the grid, and other factors.",
"title": ""
},
{
"docid": "19937d689287ba81d2d01efd9ce8f2e4",
"text": "We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better than more shallow ones. Learning is surprisingly rapid. NORB is completely trained within five epochs. Test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.",
"title": ""
},
{
"docid": "79833f074b2e06d5c56898ca3f008c00",
"text": "Regular expressions have served as the dominant workhorse of practical information extraction for several years. However, there has been little work on reducing the manual effort involved in building high-quality, complex regular expressions for information extraction tasks. In this paper, we propose ReLIE, a novel transformation-based algorithm for learning such complex regular expressions. We evaluate the performance of our algorithm on multiple datasets and compare it against the CRF algorithm. We show that ReLIE, in addition to being an order of magnitude faster, outperforms CRF under conditions of limited training data and cross-domain data. Finally, we show how the accuracy of CRF can be improved by using features extracted by ReLIE.",
"title": ""
},
{
"docid": "14cc42c141a420cb354473a38e755091",
"text": "During software evolution, information about changes between different versions of a program is useful for a number of software engineering tasks. For example, configuration-management systems can use change information to assess possible conflicts among updates from different users. For another example, in regression testing, knowledge about which parts of a program are unchanged can help in identifying test cases that need not be rerun. For many of these tasks, a purely syntactic differencing may not provide enough information for the task to be performed effectively. This problem is especially relevant in the case of object-oriented software, for which a syntactic change can have subtle and unforeseen effects. In this paper, we present a technique for comparing object-oriented programs that identifies both differences and correspondences between two versions of a program. The technique is based on a representation that handles object-oriented features and, thus, can capture the behavior of object-oriented programs. We also present JDiff, a tool that implements the technique for Java programs. Finally, we present the results of four empirical studies, performed on many versions of two medium-sized subjects, that show the efficiency and effectiveness of the technique when used on real programs.",
"title": ""
},
{
"docid": "053b069a59b938c183c19e2938f89e66",
"text": "This paper examines the role and value of information security awareness efforts in defending against social engineering attacks. It categories the different social engineering threats and tactics used in targeting employees and the approaches to defend against such attacks. While we review these techniques, we attempt to develop a thorough understanding of human security threats, with a suitable balance between structured improvements to defend human weaknesses, and efficiently focused security training and awareness building. Finally, the paper shows that a multi-layered shield can mitigate various security risks and minimize the damage to systems and data.",
"title": ""
},
{
"docid": "da476e5448fa34e9f6fd7034dfa53576",
"text": "In this paper we propose a multi-agent approach for traffic-light control. According to this approach, our system consists of agents and their world. In this context, the world consists of cars, road networks, traffic lights, etc. Each of these agents controls all traffic lights at one road junction by an observe-think-act cycle. That is, each agent repeatedly observes the current traffic condition surrounding its junction, and then uses this information to reason with condition-action rules to determine in what traffic condition how the agent can efficiently control the traffic flows at its junction, or collaborate with neighboring agents so that they can efficiently control the traffic flows, at their junctions, in such a way that would affect the traffic flows at its junction. This research demonstrates that a rather complicated problem of traffic-light control on a large road network can be solved elegantly by our rule-based multi-agent approach.",
"title": ""
},
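Each junction agent in the abstract above runs an observe-think-act cycle driven by condition-action rules. The sketch below shows a minimal version of that loop for a single junction; the observations, queue thresholds and action names are made up for illustration.

```python
# Minimal sketch of an observe-think-act cycle for one rule-based junction agent.
import random

RULES = [
    # (condition over the observation, action to take)
    (lambda obs: obs["ns_queue"] - obs["ew_queue"] > 5, "extend_north_south_green"),
    (lambda obs: obs["ew_queue"] - obs["ns_queue"] > 5, "extend_east_west_green"),
    (lambda obs: True,                                  "keep_default_cycle"),
]

def observe():
    # In a real deployment these counts would come from sensors at the junction.
    return {"ns_queue": random.randint(0, 15), "ew_queue": random.randint(0, 15)}

def think(obs):
    for condition, action in RULES:   # first matching condition-action rule wins
        if condition(obs):
            return action

def act(action):
    print("junction agent applies:", action)

for _ in range(3):                    # repeat the observe-think-act cycle
    act(think(observe()))
```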
{
"docid": "51505087f5ae1a9f57fe04f5e9ad241e",
"text": "Microblogs have recently received widespread interest from NLP researchers. However, current tools for Japanese word segmentation and POS tagging still perform poorly on microblog texts. We developed an annotated corpus and proposed a joint model for overcoming this situation. Our annotated corpus of microblog texts enables not only training of accurate statistical models but also quantitative evaluation of their performance. Our joint model with lexical normalization handles the orthographic diversity of microblog texts. We conducted an experiment to demonstrate that the corpus and model substantially contribute to boosting accuracy.",
"title": ""
},
{
"docid": "ed3ed757804a423eef8b7394b64a971a",
"text": "This work is part of an eort aimed at developing computer-based systems for language instruction; we address the task of grading the pronunciation quality of the speech of a student of a foreign language. The automatic grading system uses SRI's Decipher continuous speech recognition system to generate phonetic segmentations. Based on these segmentations and probabilistic models we produce dierent pronunciation scores for individual or groups of sentences that can be used as predictors of the pronunciation quality. Dierent types of these machine scores can be combined to obtain a better prediction of the overall pronunciation quality. In this paper we review some of the bestperforming machine scores and discuss the application of several methods based on linear and nonlinear mapping and combination of individual machine scores to predict the pronunciation quality grade that a human expert would have given. We evaluate these methods in a database that consists of pronunciation-quality-graded speech from American students speaking French. With predictors based on spectral match and on durational characteristics, we ®nd that the combination of scores improved the prediction of the human grades and that nonlinear mapping and combination methods performed better than linear ones. Characteristics of the dierent nonlinear methods studied are discussed. Ó 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
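The paper summarized above maps combinations of machine pronunciation scores to human grades using linear and nonlinear methods. As a hedged illustration of the nonlinear-combination idea only, the sketch below fits a small neural regressor to synthetic machine scores; the features and data are placeholders, not the paper's actual spectral-match or duration predictors.

```python
# Sketch: mapping several machine pronunciation scores to a human grade with a small
# nonlinear regressor. The synthetic data stands in for real graded speech scores.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
machine_scores = rng.normal(size=(500, 3))          # e.g. likelihood, duration, rate scores
human_grades = (machine_scores @ np.array([0.6, 0.3, 0.1])
                + 0.1 * np.tanh(machine_scores[:, 0])          # mild nonlinearity
                + 0.05 * rng.normal(size=500))                 # rater noise

X_tr, X_te, y_tr, y_te = train_test_split(machine_scores, human_grades, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```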
{
"docid": "e48f1b661691f941ea9c648c2c597b84",
"text": "Cloud Gaming is a new kind of service, which combines the successful concepts of Cloud Computing and Online Gaming. It provides the entire game experience to the users remotely from a data center. The player is no longer dependent on a specific type or quality of gaming hardware, but is able to use common devices. The end device only needs a broadband internet connection and the ability to display High Definition (HD) video. While this may reduce hardware costs for users and increase the revenue for developers by leaving out the retail chain, it also raises new challenges for service quality in terms of bandwidth and latency for the underlying network. In this paper we present the results of a subjective user study we conducted into the user-perceived quality of experience (QoE) in Cloud Gaming. We design a measurement environment, that emulates this new type of service, define tests for users to assess the QoE, derive Key Influence Factors (KIF) and influences of content and perception from our results. © 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "959b487a51ae87b2d993e6f0f6201513",
"text": "The two-wheel differential drive mobile robots, are one of the simplest and most used structures in mobile robotics applications, it consists of a chassis with two fixed and in-line with each other electric motors. This paper presents new models for differential drive mobile robots and some considerations regarding design, modeling and control solutions. The presented models are to be used to help in facing the two top challenges in developing mechatronic mobile robots system; early identifying system level problems and ensuring that all design requirements are met, as well as, to simplify and accelerate Mechatronics mobile robots design process, including proper selection, analysis, integration and verification of the overall system and sub-systems performance throughout the development process.",
"title": ""
}
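The differential-drive models discussed above ultimately rest on the standard kinematics that map the two wheel speeds to the forward and angular velocity of the chassis. The sketch below integrates that unicycle-style model with simple Euler steps; the wheel radius, track width and time step are illustrative values, not parameters from the paper.

```python
# Sketch of the standard differential-drive (unicycle) kinematic model.
import numpy as np

def step(pose, w_left, w_right, r=0.05, L=0.30, dt=0.01):
    """pose = (x, y, theta); r = wheel radius [m]; L = wheel separation [m]."""
    x, y, theta = pose
    v = r * (w_right + w_left) / 2.0        # forward speed of the chassis
    omega = r * (w_right - w_left) / L      # yaw rate of the chassis
    return (x + v * np.cos(theta) * dt,
            y + v * np.sin(theta) * dt,
            theta + omega * dt)

pose = (0.0, 0.0, 0.0)
for _ in range(1000):                        # 10 s of motion; faster right wheel curves left
    pose = step(pose, w_left=8.0, w_right=10.0)
print(pose)
```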
] |
scidocsrr
|
a67a6049fe809bf7f232ba7aed418aa2
|
Use of SIMD Vector Operations to Accelerate Application Code Performance on Low-Powered ARM and Intel Platforms
|
[
{
"docid": "9200498e7ef691b83bf804d4c5581ba2",
"text": "Mobile computer-vision technology will soon become as ubiquitous as touch interfaces.",
"title": ""
}
] |
[
{
"docid": "39070a1f503e60b8709050fc2a250378",
"text": "Plants in their natural habitats adapt to drought stress in the environment through a variety of mechanisms, ranging from transient responses to low soil moisture to major survival mechanisms of escape by early flowering in absence of seasonal rainfall. However, crop plants selected by humans to yield products such as grain, vegetable, or fruit in favorable environments with high inputs of water and fertilizer are expected to yield an economic product in response to inputs. Crop plants selected for their economic yield need to survive drought stress through mechanisms that maintain crop yield. Studies on model plants for their survival under stress do not, therefore, always translate to yield of crop plants under stress, and different aspects of drought stress response need to be emphasized. The crop plant model rice ( Oryza sativa) is used here as an example to highlight mechanisms and genes for adaptation of crop plants to drought stress.",
"title": ""
},
{
"docid": "d7a143bdb62e4aaeaf18b0aabe35588e",
"text": "BACKGROUND\nShort-acting insulin analogue use for people with diabetes is still controversial, as reflected in many scientific debates.\n\n\nOBJECTIVES\nTo assess the effects of short-acting insulin analogues versus regular human insulin in adults with type 1 diabetes.\n\n\nSEARCH METHODS\nWe carried out the electronic searches through Ovid simultaneously searching the following databases: Ovid MEDLINE(R), Ovid MEDLINE(R) In-Process & Other Non-Indexed Citations, Ovid MEDLINE(R) Daily and Ovid OLDMEDLINE(R) (1946 to 14 April 2015), EMBASE (1988 to 2015, week 15), the Cochrane Central Register of Controlled Trials (CENTRAL; March 2015), ClinicalTrials.gov and the European (EU) Clinical Trials register (both March 2015).\n\n\nSELECTION CRITERIA\nWe included all randomised controlled trials with an intervention duration of at least 24 weeks that compared short-acting insulin analogues with regular human insulins in the treatment of adults with type 1 diabetes who were not pregnant.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently extracted data and assessed trials for risk of bias, and resolved differences by consensus. We graded overall study quality using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) instrument. We used random-effects models for the main analyses and presented the results as odds ratios (OR) with 95% confidence intervals (CI) for dichotomous outcomes.\n\n\nMAIN RESULTS\nWe identified nine trials that fulfilled the inclusion criteria including 2693 participants. The duration of interventions ranged from 24 to 52 weeks with a mean of about 37 weeks. The participants showed some diversity, mainly with regard to diabetes duration and inclusion/exclusion criteria. The majority of the trials were carried out in the 1990s and participants were recruited from Europe, North America, Africa and Asia. None of the trials was carried out in a blinded manner so that the risk of performance bias, especially for subjective outcomes such as hypoglycaemia, was present in all of the trials. Furthermore, several trials showed inconsistencies in the reporting of methods and results.The mean difference (MD) in glycosylated haemoglobin A1c (HbA1c) was -0.15% (95% CI -0.2% to -0.1%; P value < 0.00001; 2608 participants; 9 trials; low quality evidence) in favour of insulin analogues. The comparison of the risk of severe hypoglycaemia between the two treatment groups showed an OR of 0.89 (95% CI 0.71 to 1.12; P value = 0.31; 2459 participants; 7 trials; very low quality evidence). For overall hypoglycaemia, also taking into account mild forms of hypoglycaemia, the data were generally of low quality, but also did not indicate substantial group differences. Regarding nocturnal severe hypoglycaemic episodes, two trials reported statistically significant effects in favour of the insulin analogue, insulin aspart. However, due to inconsistent reporting in publications and trial reports, the validity of the result remains questionable.We also found no clear evidence for a substantial effect of insulin analogues on health-related quality of life. However, there were few results only based on subgroups of the trial populations. None of the trials reported substantial effects regarding weight gain or any other adverse events. 
No trial was designed to investigate possible long-term effects (such as all-cause mortality, diabetic complications), in particular in people with diabetes related complications.\n\n\nAUTHORS' CONCLUSIONS\nOur analysis suggests only a minor benefit of short-acting insulin analogues on blood glucose control in people with type 1 diabetes. To make conclusions about the effect of short acting insulin analogues on long-term patient-relevant outcomes, long-term efficacy and safety data are needed.",
"title": ""
},
{
"docid": "91123d18f56d5aef473394e871c099ec",
"text": "Image-to-Image translation was proposed as a general form of many image learning problems. While generative adversarial networks were successfully applied on many image-to-image translations, many models were limited to specific translation tasks and were difficult to satisfy practical needs. In this work, we introduce a One-to-Many conditional generative adversarial network, which could learn from heterogeneous sources of images. This is achieved by training multiple generators against a discriminator in synthesized learning way. This framework supports generative models to generate images in each source, so output images follow corresponding target patterns. Two implementations, hybrid fake and cascading learning, of the synthesized adversarial training scheme are also proposed, and experimented on two benchmark datasets, UTZap50K and MVOD5K, as well as a new high-quality dataset BehTex7K. We consider five challenging image-to-image translation tasks: edges-to-photo, edges-to-similar-photo translation on UTZap50K, cross-view translation on MVOD5K, and grey-to-color, grey-to-Oil-Paint on BehTex7K. We show that both implementations are able to faithfully translate from an image to another image in edges-to-photo, edges-to-similar-photo, grey-to-color, and grey-to-Oil-Paint translation tasks. The quality of output images in cross-view translation need to be further boosted.",
"title": ""
},
{
"docid": "e0fb10bf5f0206c8cf3f97f5daa33fc0",
"text": "Existing techniques on adversarial malware generation employ feature mutations based on feature vectors extracted from malware. However, most (if not all) of these techniques suffer from a common limitation: feasibility of these attacks is unknown. The synthesized mutations may break the inherent constraints posed by code structures of the malware, causing either crashes or malfunctioning of malicious payloads. To address the limitation, we present Malware Recomposition Variation (MRV), an approach that conducts semantic analysis of existing malware to systematically construct new malware variants for malware detectors to test and strengthen their detection signatures/models. In particular, we use two variation strategies (i.e., malware evolution attack and malware confusion attack) following structures of existing malware to enhance feasibility of the attacks. Upon the given malware, we conduct semantic-feature mutation analysis and phylogenetic analysis to synthesize mutation strategies. Based on these strategies, we perform program transplantation to automatically mutate malware bytecode to generate new malware variants. We evaluate our MRV approach on actual malware variants, and our empirical evaluation on 1,935 Android benign apps and 1,917 malware shows that MRV produces malware variants that can have high likelihood to evade detection while still retaining their malicious behaviors. We also propose and evaluate three defense mechanisms to counter MRV.",
"title": ""
},
{
"docid": "20f05b48fa88283d649a3bcadf2ed818",
"text": "A great variety of native and introduced plant species were used as foods, medicines and raw materials by the Rumsen and Mutsun Costanoan peoples of central California. The information presented here has been abstracted from original unpublished field notes recorded during the 1920s and 1930s by John Peabody Harrington, who also directed the collection of some 500 plant specimens. The nature of Harrington’s data and their significance for California ethnobotany are described, followed by a summary of information on the ethnographic uses of each plant.",
"title": ""
},
{
"docid": "6b8be9199593200a58b4d265687fb1ae",
"text": "China is a large agricultural country with the largest population in the world. This creates a high demand for food, which is prompting the study of high quality and high-yielding crops. China's current agricultural production is sufficient to feed the nation; however, compared with developed countries agricultural farming is still lagging behind, mainly due to the fact that the system of growing agricultural crops is not based on maximizing output, the latter would include scientific sowing, irrigation and fertilization. In the past few years many seasonal fruits have been offered for sale in markets, but these crops are grown in traditional backward agricultural greenhouses and large scale changes are needed to modernize production. The reform of small-scale greenhouse agricultural production is relatively easy and could be implemented. The concept of the Agricultural Internet of Things utilizes networking technology in agricultural production, the hardware part of this agricultural IoT include temperature, humidity and light sensors and processors with a large data processing capability; these hardware devices are connected by short-distance wireless communication technology, such as Bluetooth, WIFI or Zigbee. In fact, Zigbee technology, because of its convenient networking and low power consumption, is widely used in the agricultural internet. The sensor network is combined with well-established web technology, in the form of a wireless sensor network, to remotely control and monitor data from the sensors.In this paper a smart system of greenhouse management based on the Internet of Things is proposed using sensor networks and web-based technologies. The system consists of sensor networks and asoftware control system. The sensor network consists of the master control center and various sensors using Zigbee protocols. The hardware control center communicates with a middleware system via serial network interface converters. The middleware communicates with a hardware network using an underlying interface and it also communicates with a web system using an upper interface. The top web system provides users with an interface to view and manage the hardware facilities ; administrators can thus view the status of agricultural greenhouses and issue commands to the sensors through this system in order to remotely manage the temperature, humidity and irrigation in the greenhouses. The main topics covered in this paper are:1. To research the current development of new technologies applicable to agriculture and summarizes the strong points concerning the application of the Agricultural Internet of Things both at home and abroad. Also proposed are some new methods of agricultural greenhouse management.2. An analysis of system requirements, the users’ expectations of the system and the response to needs analysis, and the overall design of the system to determine it’s architecture.3. Using software engineering to ensure that functional modules of the system, as far as possible, meet the requirements of high cohesion and low coupling between modules, also detailed design and implementation of each module is considered.",
"title": ""
},
{
"docid": "0366ab38a45f45a8655f4beb6d11d358",
"text": "BACKGROUND\nDeep learning methods for radiomics/computer-aided diagnosis (CADx) are often prohibited by small datasets, long computation time, and the need for extensive image preprocessing.\n\n\nAIMS\nWe aim to develop a breast CADx methodology that addresses the aforementioned issues by exploiting the efficiency of pre-trained convolutional neural networks (CNNs) and using pre-existing handcrafted CADx features.\n\n\nMATERIALS & METHODS\nWe present a methodology that extracts and pools low- to mid-level features using a pretrained CNN and fuses them with handcrafted radiomic features computed using conventional CADx methods. Our methodology is tested on three different clinical imaging modalities (dynamic contrast enhanced-MRI [690 cases], full-field digital mammography [245 cases], and ultrasound [1125 cases]).\n\n\nRESULTS\nFrom ROC analysis, our fusion-based method demonstrates, on all three imaging modalities, statistically significant improvements in terms of AUC as compared to previous breast cancer CADx methods in the task of distinguishing between malignant and benign lesions. (DCE-MRI [AUC = 0.89 (se = 0.01)], FFDM [AUC = 0.86 (se = 0.01)], and ultrasound [AUC = 0.90 (se = 0.01)]).\n\n\nDISCUSSION/CONCLUSION\nWe proposed a novel breast CADx methodology that can be used to more effectively characterize breast lesions in comparison to existing methods. Furthermore, our proposed methodology is computationally efficient and circumvents the need for image preprocessing.",
"title": ""
},
{
"docid": "ca75798a9090810682f99400f6a8ff4e",
"text": "We present the first empirical analysis of Bitcoin-based scams: operations established with fraudulent intent. By amalgamating reports gathered by voluntary vigilantes and tracked in online forums, we identify 192 scams and categorize them into four groups: Ponzi schemes, mining scams, scam wallets and fraudulent exchanges. In 21% of the cases, we also found the associated Bitcoin addresses, which enables us to track payments into and out of the scams. We find that at least $11 million has been contributed to the scams from 13 000 distinct victims. Furthermore, we present evidence that the most successful scams depend on large contributions from a very small number of victims. Finally, we discuss ways in which the scams could be countered.",
"title": ""
},
{
"docid": "a129f0b1c95e17d7e6a587121b267fa9",
"text": "Gait analysis using wearable sensors is an inexpensive, convenient, and efficient manner of providing useful information for multiple health-related applications. As a clinical tool applied in the rehabilitation and diagnosis of medical conditions and sport activities, gait analysis using wearable sensors shows great prospects. The current paper reviews available wearable sensors and ambulatory gait analysis methods based on the various wearable sensors. After an introduction of the gait phases, the principles and features of wearable sensors used in gait analysis are provided. The gait analysis methods based on wearable sensors is divided into gait kinematics, gait kinetics, and electromyography. Studies on the current methods are reviewed, and applications in sports, rehabilitation, and clinical diagnosis are summarized separately. With the development of sensor technology and the analysis method, gait analysis using wearable sensors is expected to play an increasingly important role in clinical applications.",
"title": ""
},
{
"docid": "f6feb6789c0c9d2d5c354e73d2aaf9ad",
"text": "In this paper we present SimpleElastix, an extension of SimpleITK designed to bring the Elastix medical image registration library to a wider audience. Elastix is a modular collection of robust C++ image registration algorithms that is widely used in the literature. However, its command-line interface introduces overhead during prototyping, experimental setup, and tuning of registration algorithms. By integrating Elastix with SimpleITK, Elastix can be used as a native library in Python, Java, R, Octave, Ruby, Lua, Tcl and C# on Linux, Mac and Windows. This allows Elastix to intregrate naturally with many development environments so the user can focus more on the registration problem and less on the underlying C++ implementation. As means of demonstration, we show how to register MR images of brains and natural pictures of faces using minimal amount of code. SimpleElastix is open source, licensed under the permissive Apache License Version 2.0 and available at https://github.com/kaspermarstal/SimpleElastix.",
"title": ""
},
{
"docid": "a0f24500f3729b0a2b6e562114eb2a45",
"text": "In this work, the smallest reported inkjet-printed UWB antenna is proposed that utilizes a fractal matching network to increase the performance of a UWB microstrip monopole. The antenna is inkjet-printed on a paper substrate to demonstrate the ability to produce small and low-cost UWB antennas with inkjet-printing technology which can enable compact, low-cost, and environmentally friendly wireless sensor network.",
"title": ""
},
{
"docid": "35b286999957396e1f5cab6e2370ed88",
"text": "Text summarization condenses a text to a shorter version while retaining the important informations. Abstractive summarization is a recent development that generates new phrases, rather than simply copying or rephrasing sentences within the original text. Recently neural sequence-to-sequence models have achieved good results in the field of abstractive summarization, which opens new possibilities and applications for industrial purposes. However, most practitioners observe that these models still use large parts of the original text in the output summaries, making them often similar to extractive frameworks. To address this drawback, we first introduce a new metric to measure how much of a summary is extracted from the input text. Secondly, we present a novel method, that relies on a diversity factor in computing the neural network loss, to improve the diversity of the summaries generated by any neural abstractive model implementing beam search. Finally, we show that this method not only makes the system less extractive, but also improves the overall rouge score of state-of-the-art methods by at least 2 points.",
"title": ""
},
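The abstract above introduces a metric for how much of a generated summary is copied from the source. A simple stand-in for such a measure is the fraction of summary n-grams that also occur in the source, sketched below; this is an illustrative definition in the spirit of the paper, not necessarily the authors' exact metric.

```python
# Sketch of a simple "extractiveness" measure: share of summary n-grams found in the source.
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def extractive_fraction(source, summary, n=3):
    src, summ = source.lower().split(), summary.lower().split()
    summary_ngrams = ngrams(summ, n)
    if not summary_ngrams:
        return 0.0                                  # empty or too-short summary
    overlap = summary_ngrams & ngrams(src, n)       # n-grams copied verbatim from the source
    return len(overlap) / len(summary_ngrams)

src = "the quick brown fox jumps over the lazy dog near the river bank"
print(extractive_fraction(src, "the quick brown fox rests near the river bank"))
```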
{
"docid": "013b0ae55c64f322d61e1bf7e8d4c55a",
"text": "Binary neural networks for object recognition are desirable especially for small and embedded systems because of their arithmetic and memory efficiency coming from the restriction of the bit-depth of network weights and activations. Neural networks in general have a tradeoff between the accuracy and efficiency in choosing a model architecture, and this tradeoff matters more for binary networks because of the limited bit-depth. This paper then examines the performance of binary networks by modifying architecture parameters (depth and width parameters) and reports the best-performing settings for specific datasets. These findings will be useful for designing binary networks for practical uses.",
"title": ""
},
{
"docid": "64bcd606e039f731aec7cc4722a4d3cb",
"text": "Current neural network-based classifiers are susceptible to adversarial examples even in the black-box setting, where the attacker only has query access to the model. In practice, the threat model for real-world systems is often more restrictive than the typical black-box model where the adversary can observe the full output of the network on arbitrarily many chosen inputs. We define three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partialinformation setting, and the label-only setting. We develop new attacks that fool classifiers under these more restrictive threat models, where previous methods would be impractical or ineffective. We demonstrate that our methods are effective against an ImageNet classifier under our proposed threat models. We also demonstrate a targeted black-box attack against a commercial classifier, overcoming the challenges of limited query access, partial information, and other practical issues to break the Google Cloud Vision API.",
"title": ""
},
{
"docid": "78e21364224b9aa95f86ac31e38916ef",
"text": "Gamification is the use of game design elements and game mechanics in non-game contexts. This idea has been used successfully in many web based businesses to increase user engagement. Some researchers suggest that it could also be used in web based education as a tool to increase student motivation and engagement. In an attempt to verify those theories, we have designed and built a gamification plugin for a well-known e-learning platform. We have made an experiment using this plugin in a university course, collecting quantitative and qualitative data in the process. Our findings suggest that some common beliefs about the benefits obtained when using games in education can be challenged. Students who completed the gamified experience got better scores in practical assignments and in overall score, but our findings also suggest that these students performed poorly on written assignments and participated less on class activities, although their initial motivation was higher. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "568bc5272373a4e3fd38304f2c381e0f",
"text": "With the growing complexity of web applications, identifying web interfaces that can be used for testing such applications has become increasingly challenging. Many techniques that work effectively when applied to simple web applications are insufficient when used on modern, dynamic web applications, and may ultimately result in inadequate testing of the applications' functionality. To address this issue, we present a technique for automatically discovering web application interfaces based on a novel static analysis algorithm. We also report the results of an empirical evaluation in which we compare our technique against a traditional approach. The results of the comparison show that our technique can (1) discover a higher number of interfaces and (2) help generate test inputs that achieve higher coverage.",
"title": ""
},
{
"docid": "8335faee33da234e733d8f6c95332ec3",
"text": "Myanmar script uses no space between words and syllable segmentation represents a significant process in many NLP tasks such as word segmentation, sorting, line breaking and so on. In this study, a rulebased approach of syllable segmentation algorithm for Myanmar text is proposed. Segmentation rules were created based on the syllable structure of Myanmar script and a syllable segmentation algorithm was designed based on the created rules. A segmentation program was developed to evaluate the algorithm. A training corpus containing 32,283 Myanmar syllables was tested in the program and the experimental results show an accuracy rate of 99.96% for segmentation.",
"title": ""
},
{
"docid": "0a967b130a6c4dbc93d6b135eeb3c0db",
"text": "This paper presents a universal ontology for smart environments aiming to overcome the limitations of the existing ontologies. We enrich our ontology by adding new environmental aspects such as the referentiality and environmental change, that can be used to describe domains as well as applications. We show through a case study how our ontology is used and integrated in a self-organising middleware for smart environments.",
"title": ""
},
{
"docid": "999c0785975052bda742f0620e95fe84",
"text": "List-based implementations of sets are a fundamental building block of many concurrent algorithms. A skiplist based on the lock-free list-based set algorithm of Michael will be included in the Java Concurrency Package of JDK 1.6.0. However, Michael’s lock-free algorithm has several drawbacks, most notably that it requires all list traversal operations, including membership tests, to perform cleanup operations of logically removed nodes, and that it uses the equivalent of an atomically markable reference, a pointer that can be atomically “marked,” which is expensive in some languages and unavailable in others. We present a novel “lazy” list-based implementation of a concurrent set object. It is based on an optimistic locking scheme for inserts and removes, eliminating the need to use the equivalent of an atomically markable reference. It also has a novel wait-free membership test operation (as opposed to Michael’s lock-free one) that does not need to perform cleanup operations and is more efficient than that of all previous algorithms. Empirical testing shows that the new lazy-list algorithm consistently outperforms all known algorithms, including Michael’s lock-free algorithm, throughout the concurrency range. At high load, with 90% membership tests, the lazy algorithm is more than twice as fast as Michael’s. This is encouraging given that typical search structure usage patterns include around 90% membership tests. By replacing the lock-free membership test of Michael’s algorithm with our new wait-free one, we achieve an algorithm that slightly outperforms our new lazy-list (though it may not be as efficient in other contexts as it uses Java’s RTTI mechanism to create pointers that can be atomically marked).",
"title": ""
},
{
"docid": "7d7db3f70ba6bcb5f9bf615bd8110eba",
"text": "Freshwater and energy are essential commodities for well being of mankind. Due to increasing population growth on the one hand, and rapid industrialization on the other, today’s world is facing unprecedented challenge of meeting the current needs for these two commodities as well as ensuring the needs of future generations. One approach to this global crisis of water and energy supply is to utilize renewable energy sources to produce freshwater from impaired water sources by desalination. Sustainable practices and innovative desalination technologies for water reuse and energy recovery (staging, waste heat utilization, hybridization) have the potential to reduce the stress on the existing water and energy sources with a minimal impact to the environment. This paper discusses existing and emerging desalination technologies and possible combinations of renewable energy sources to drive them and associated desalination costs. It is suggested that a holistic approach of coupling renewable energy sources with technologies for recovery, reuse, and recycle of both energy and water can be a sustainable and environment friendly approach to meet the world’s energy and water needs. High capital costs for renewable energy sources for small-scale applications suggest that a hybrid energy source comprising both grid-powered energy and renewable energy will reduce the desalination costs considering present economics of energy. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
38148009e005b5936464f4a362758271
|
Passwords and Perceptions
|
[
{
"docid": "8715a3b9ac7487adbb6d58e8a45ceef6",
"text": "Before the computer age, authenticating a user was a relatively simple process. One person could authenticate another by visual recognition, interpersonal communication, or, more formally, mutually agreed upon authentication methods. With the onset of the computer age, authentication has become more complicated. Face-to-face visual authentication has largely dissipated, with computers and networks intervening. Sensitive information is exchanged daily between humans and computers, and from computer to computer. This complexity demands more formal protection methods; in short, authentication processes to manage our routine interactions with such machines and networks. Authentication is the process of positively verifying identity, be it that of a user, device, or entity in a computer system. Often authentication is the prerequisite to accessing system resources. Positive verification is accomplished by means of matching some indicator of identity, such as a shared secret prearranged at the time a person was authorized to use the system. The most familiar user authenticator in use today is the password. The secure sockets layer (SSL) is an example of machine to machine authentication. Human–machine authentication is known as user authentication and it consists of verifying the identity of a user: is this person really who she claims to be? User authentication is much less secure than machine authentication and is known as the Achilles’ heel of secure systems. This paper introduces various human authenticators and compares them based on security, convenience, and cost. The discussion is set in the context of a larger analysis of security issues, namely, measuring a system’s vulnerability to attack. The focus is kept on remote computer authentication. Authenticators can be categorized into three main types: secrets (what you know), tokens (what you have), and IDs (who you are). A password is a secret word, phrase, or personal identification number. Although passwords are ubiquitously used, they pose vulnerabilities, the biggest being that a short mnemonic password can be guessed or searched by an ambitious attacker, while a longer, random password is difficult for a person to remember. A token is a physical device used to aid authentication. Examples include bank cards and smart cards. A token can be an active device that yields one-time passcodes (time-synchronous or",
"title": ""
}
] |
[
{
"docid": "2fd7cc65c34551c90a72fc3cb4665336",
"text": "Generating natural language requires conveying content in an appropriate style. We explore two related tasks on generating text of varying formality: monolingual formality transfer and formality-sensitive machine translation. We propose to solve these tasks jointly using multi-task learning, and show that our models achieve state-of-the-art performance for formality transfer and are able to perform formality-sensitive translation without being explicitly trained on styleannotated translation examples.",
"title": ""
},
{
"docid": "3d45b63a4643c34c56633afd7e270922",
"text": "In this paper we perform a comparative analysis of three models for feature representation of text documents in the context of document classification. In particular, we consider the most often used family of models bag-of-words, recently proposed continuous space models word2vec and doc2vec, and the model based on the representation of text documents as language networks. While the bag-of-word models have been extensively used for the document classification task, the performance of the other two models for the same task have not been well understood. This is especially true for the network-based model that have been rarely considered for representation of text documents for classification. In this study, we measure the performance of the document classifiers trained using the method of random forests for features generated the three models and their variants. The results of the empirical comparison show that the commonly used bag-of-words model has performance comparable to the one obtained by the emerging continuous-space model of doc2vec. In particular, the low-dimensional variants of doc2vec generating up to 75 features are among the top-performing document representation models. The results finally point out that doc2vec shows a superior performance in the tasks of classifying large Corresponding Author: Department of Informatics, University of Rijeka, Radmile Matejčić 2, 51000 Rijeka, Croatia, +385 51 584 714 Email addresses: smarti@uniri.hr (Sanda Martinčić-Ipšić), tanja.milicic@student.uniri.hr (Tanja Miličić), Todorovski@fu.uni-lj.si (Ljupčo Todorovski) Preprint submitted to ?? July 6, 2017 documents.",
"title": ""
},
{
"docid": "f4cb0eb6d39c57779cf9aa7b13abef14",
"text": "Algorithms that learn to generate data whose distributions match that of the training data, such as generative adversarial networks (GANs), have been a focus of much recent work in deep unsupervised learning. Unfortunately, GAN models have drawbacks, such as instable training due to the minmax optimization formulation and the issue of zero gradients. To address these problems, we explore and develop a new family of nonparametric objective functions and corresponding training algorithms to train a DNN generator that learn the probability distribution of the training data. Preliminary results presented in the paper demonstrate that the proposed approach converges faster and the trained models provide very good quality results even with a small number of iterations. Special cases of our formulation yield new algorithms for the Wasserstein and the MMD metrics. We also develop a new algorithm based on the Prokhorov metric between distributions, which we believe can provide promising results on certain kinds of data. We conjecture that the nonparametric approach for training DNNs can provide a viable alternative to the popular GAN formulations.",
"title": ""
},
{
"docid": "a57dc1e93116aa99ce00c671208bbd9f",
"text": "According to the IEEE 802.11aj (45 GHz) standard, a millimeter-wave planar substrate-integrated endfire antenna with wide beamwidths in both <italic>E</italic>- and <italic>H</italic>-planes, and good impedance matching over 42.3–48.4 GHz is proposed for the Q-band wireless local area network (WLAN) system. The proposed antenna comprises a printed angled dipole with bilateral symmetrical directors for generating wide-angle radiation, and the beamwidth in both <italic>E</italic>- and <italic>H</italic>-planes can be easily adjusted. The antenna is prototyped using the conventional printed circuit board process with a size of 6 × 26 × 0.508 mm<sup>3</sup>, and achieves beamwidths greater than 120° in two main planes, <inline-formula><tex-math notation=\"LaTeX\">${S}$ </tex-math></inline-formula><sub>11</sub> of less than –12.5 dB, and peak gain of 3.67–5.2 dBi over 42.3–48.4 GHz. The measurements are in good agreement with simulations, which shows that the proposed antenna is very promising for Q-band millimeter-wave WLAN system access-point applications.",
"title": ""
},
{
"docid": "c117da74c302d9e108970854d79e54fd",
"text": "Entailment recognition is a primary generic task in natural language inference, whose focus is to detect whether the meaning of one expression can be inferred from the meaning of the other. Accordingly, many NLP applications would benefit from high coverage knowledgebases of paraphrases and entailment rules. To this end, learning such knowledgebases from the Web is especially appealing due to its huge size as well as its highly heterogeneous content, allowing for a more scalable rule extraction of various domains. However, the scalability of state-of-the-art entailment rule acquisition approaches from the Web is still limited. We present a fully unsupervised learning algorithm for Webbased extraction of entailment relations. We focus on increased scalability and generality with respect to prior work, with the potential of a large-scale Web-based knowledgebase. Our algorithm takes as its input a lexical–syntactic template and searches the Web for syntactic templates that participate in an entailment relation with the input template. Experiments show promising results, achieving performance similar to a state-of-the-art unsupervised algorithm, operating over an offline corpus, but with the benefit of learning rules for different domains with no additional effort.",
"title": ""
},
{
"docid": "7490d342ffb59bd396421e198b243775",
"text": "Antioxidant activities of defatted sesame meal extract increased as the roasting temperature of sesame seed increased, but the maximum antioxidant activity was achieved when the seeds were roasted at 200 °C for 60 min. Roasting sesame seeds at 200 °C for 60 min significantly increased the total phenolic content, radical scavenging activity (RSA), reducing powers, and antioxidant activity of sesame meal extract; and several low-molecularweight phenolic compounds such as 2-methoxyphenol, 4-methoxy-3-methylthio-phenol, 5-amino-3-oxo-4hexenoic acid, 3,4-methylenedioxyphenol (sesamol), 3-hydroxy benzoic acid, 4-hydroxy benzoic acid, vanillic acid, filicinic acid, and 3,4-dimethoxy phenol were newly formed in the sesame meal after roasting sesame seeds at 200 °C for 60 min. These results indicate that antioxidant activity of defatted sesame meal extracts was significantly affected by roasting temperature and time of sesame seeds.",
"title": ""
},
{
"docid": "b32e0f8195780d15a61c9c3cc0213864",
"text": "With access to large datasets, deep neural networks (DNN) have achieved humanlevel accuracy in image and speech recognition tasks. However, in chemistry, availability of large standardized and labelled datasets is scarce, and many chemical properties of research interest, chemical data is inherently small and fragmented. In this work, we explore transfer learning techniques in conjunction with the existing Chemception CNN model, to create a transferable and generalizable deep neural network for small-molecule property prediction. Our latest model, ChemNet learns in a semi-supervised manner from inexpensive labels computed from the ChEMBL database. When fine-tuned to the Tox21, HIV and FreeSolv dataset, which are 3 separate chemical properties that ChemNet was not originally trained on, we demonstrate that ChemNet exceeds the performance of existing Chemception models and other contemporary DNN models. Furthermore, as ChemNet has been pre-trained on a large diverse chemical database, it can be used as a general-purpose plug-and-play deep neural network for the prediction of novel small-molecule chemical properties.",
"title": ""
},
{
"docid": "be101b30dd67232c1973b4c4a78c7f98",
"text": "Recently, many colleges and universities have made significant investments in upgraded classrooms and learning centers, incorporating such factors as tiered seating, customized lighting packages, upgraded desk and seat quality, and individual computers. To date, few studies have examined the impact of classroom environment at post-secondary institutions. The purpose of this study is to analyze the impact of classroom environment factors on individual student satisfaction measures and on student evaluation of teaching in the university environment. Two-hundred thirty-seven undergraduate business students were surveyed regarding their perceptions of classroom environment factors and their satisfaction with their classroom, instructor, and course. The results of the study indicate that students do perceive significant differences between standard and upgraded classrooms. Additionally, students express a preference for several aspects of upgraded classrooms, including tiered seating, lighting, and classroom noise control. Finally, students rate course enjoyment, classroom learning, and instructor organization higher in upgraded classrooms than in standard classrooms. The results of this study should benefit administrators who make capital and infrastructure decisions regarding college and university classroom improvements, faculty members who develop and rely upon student evaluations of teaching, and researchers who examine the factors impacting student satisfaction and learning.",
"title": ""
},
{
"docid": "f35b8aec287285d18df656881642eb66",
"text": "We consider the problem of training generative models with deep neural networks as generators, i.e. to map latent codes to data points. Whereas the dominant paradigm combines simple priors over codes with complex deterministic models, we propose instead to use more flexible code distributions. These distributions are estimated non-parametrically by reversing the generator map during training. The benefits include: more powerful generative models, better modeling of latent structure and explicit control of the degree of generalization.",
"title": ""
},
{
"docid": "18c90883c96b85dc8b3ef6e1b84c3494",
"text": "Data Selection is a popular step in Machine Translation pipelines. Feature Decay Algorithms (FDA) is a technique for data selection that has shown a good performance in several tasks. FDA aims to maximize the coverage of n-grams in the test set. However, intuitively, more ambiguous n-grams require more training examples in order to adequately estimate their translation probabilities. This ambiguity can be measured by alignment entropy. In this paper we propose two methods for calculating the alignment entropies for n-grams of any size, which can be used for improving the performance of FDA. We evaluate the substitution of the n-gramspecific entropy values computed by these methods to the parameters of both the exponential and linear decay factor of FDA. The experiments conducted on German-to-English and Czechto-English translation demonstrate that the use of alignment entropies can lead to an increase in the quality of the results of FDA.",
"title": ""
},
{
"docid": "6f942f8ead4684f4943d1c82ea140b9a",
"text": "This paper considers the problem of approximate nearest neighbor search in the compressed domain. We introduce polysemous codes, which offer both the distance estimation quality of product quantization and the efficient comparison of binary codes with Hamming distance. Their design is inspired by algorithms introduced in the 90’s to construct channel-optimized vector quantizers. At search time, this dual interpretation accelerates the search. Most of the indexed vectors are filtered out with Hamming distance, letting only a fraction of the vectors to be ranked with an asymmetric distance estimator. The method is complementary with a coarse partitioning of the feature space such as the inverted multi-index. This is shown by our experiments performed on several public benchmarks such as the BIGANN dataset comprising one billion vectors, for which we report state-of-the-art results for query times below 0.3 millisecond per core. Last but not least, our approach allows the approximate computation of the k-NN graph associated with the Yahoo Flickr Creative Commons 100M, described by CNN image descriptors, in less than 8 hours on a single machine.",
"title": ""
},
{
"docid": "3cd7c3b3676626440ddd27de43fa5e1f",
"text": "A survey of the use of belief functions to quantify the beliefs held by an agent, and in particular of their interpretation in the transferable belief model.",
"title": ""
},
{
"docid": "545a7a98c79d14ba83766aa26cff0291",
"text": "Existing extreme learning algorithm have not taken into account four issues: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM) called evolving type-2 ELM (eT2ELM) is proposed to cope with the four issues in this paper. The eT2ELM presents three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of the online certainty-based active learning method, which renders eT2ELM as a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby the hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as a cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated in 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, where the eT2ELM demonstrates the most encouraging numerical results in delivering reliable prediction, while sustaining low complexity.",
"title": ""
},
{
"docid": "e0f7c82754694084c6d05a2d37be3048",
"text": "Introducing variability while maintaining coherence is a core task in learning to generate utterances in conversation. Standard neural encoder-decoder models and their extensions using conditional variational autoencoder often result in either trivial or digressive responses. To overcome this, we explore a novel approach that injects variability into neural encoder-decoder via the use of external memory as a mixture model, namely Variational Memory Encoder-Decoder (VMED). By associating each memory read with a mode in the latent mixture distribution at each timestep, our model can capture the variability observed in sequential data such as natural conversations. We empirically compare the proposed model against other recent approaches on various conversational datasets. The results show that VMED consistently achieves significant improvement over others in both metricbased and qualitative evaluations.",
"title": ""
},
{
"docid": "d164ead192d1ba25472935f517608faa",
"text": "Real-world machine learning applications may require functions to be fast-to-evaluate and interpretable, in particular, guaranteed monotonicity of the learned function can be critical to user trust. We propose meeting these goals for low-dimensional machine learning problems by learning flexible, monotonic functions using calibrated interpolated look-up tables. We extend the structural risk minimization framework of lattice regression to train monotonic functions by solving a convex problem with appropriate linear inequality constraints. In addition, we propose jointly learning interpretable calibrations of each feature to normalize continuous features and handle categorical or missing data, at the cost of making the objective non-convex. We address large-scale learning through parallelization, mini-batching, and propose random sampling of additive regularizer terms. Case studies for six real-world problems with five to sixteen features and thousands to millions of training samples demonstrate the proposed monotonic functions can achieve state-of-the-art accuracy on practical problems while providing greater transparency to users.",
"title": ""
},
{
"docid": "0be92a74f0ff384c66ef88dd323b3092",
"text": "When facing uncertainty, adaptive behavioral strategies demand that the brain performs probabilistic computations. In this probabilistic framework, the notion of certainty and confidence would appear to be closely related, so much so that it is tempting to conclude that these two concepts are one and the same. We argue that there are computational reasons to distinguish between these two concepts. Specifically, we propose that confidence should be defined as the probability that a decision or a proposition, overt or covert, is correct given the evidence, a critical quantity in complex sequential decisions. We suggest that the term certainty should be reserved to refer to the encoding of all other probability distributions over sensory and cognitive variables. We also discuss strategies for studying the neural codes for confidence and certainty and argue that clear definitions of neural codes are essential to understanding the relative contributions of various cortical areas to decision making.",
"title": ""
},
{
"docid": "bf338661988fd28c9bafe7ea1ca59f34",
"text": "We propose a system for landing unmanned aerial vehicles (UAV), specifically an autonomous rotorcraft, in uncontrolled, arbitrary, terrains. We present plans for and progress on a vision-based system for the recovery of the geometry and material properties of local terrain from a mounted stereo rig for the purposes of finding an optimal landing site. A system is developed which integrates motion estimation from tracked features, and an algorithm for approximate estimation of a dense elevation map in a world coordinate system.",
"title": ""
},
{
"docid": "28c0ce094c4117157a27f272dbb94b91",
"text": "This paper reports the design of a color dynamic and active-pixel vision sensor (C-DAVIS) for robotic vision applications. The C-DAVIS combines monochrome eventgenerating dynamic vision sensor pixels and 5-transistor active pixels sensor (APS) pixels patterned with an RGBW color filter array. The C-DAVIS concurrently outputs rolling or global shutter RGBW coded VGA resolution frames and asynchronous monochrome QVGA resolution temporal contrast events. Hence the C-DAVIS is able to capture spatial details with color and track movements with high temporal resolution while keeping the data output sparse and fast. The C-DAVIS chip is fabricated in TowerJazz 0.18um CMOS image sensor technology. An RGBW 2×2-pixel unit measures 20um × 20um. The chip die measures 8mm × 6.2mm.",
"title": ""
},
{
"docid": "74d4f8c69938eeae611696727286a1a7",
"text": "AES-GCM(Advanced Encryption Standard with Galois Counter Mode) is an encryption authentication algorithm, which includes two main components: an AES engine and Ghash module. Because of the computation feedback in Ghash operation, the Ghash module limits the performance of the whole AES-GCM system. In this study, an efficient architecture of Ghash is presented. The architecture uses an optimized bit-parallel multiplier. In addition, based on this multiplier, pipelined method is adopted to achieve higher clock rate and throughput. We also introduce a redundant register method, which is never mentioned before, for solving the big fan- out problem derived from the bit-parallel multiplier. In the end, the performance of proposed design is evaluated on Xilinx virtex4 FPGA platform. The experimental results show that our Ghash core has less clock delay and can easily achieve higher throughput, which is up to 40Gbps.",
"title": ""
},
{
"docid": "a442a5fd2ec466cac18f4c148661dd96",
"text": "BACKGROUND\nLong waiting times for registration to see a doctor is problematic in China, especially in tertiary hospitals. To address this issue, a web-based appointment system was developed for the Xijing hospital. The aim of this study was to investigate the efficacy of the web-based appointment system in the registration service for outpatients.\n\n\nMETHODS\nData from the web-based appointment system in Xijing hospital from January to December 2010 were collected using a stratified random sampling method, from which participants were randomly selected for a telephone interview asking for detailed information on using the system. Patients who registered through registration windows were randomly selected as a comparison group, and completed a questionnaire on-site.\n\n\nRESULTS\nA total of 5641 patients using the online booking service were available for data analysis. Of them, 500 were randomly selected, and 369 (73.8%) completed a telephone interview. Of the 500 patients using the usual queuing method who were randomly selected for inclusion in the study, responses were obtained from 463, a response rate of 92.6%. Between the two registration methods, there were significant differences in age, degree of satisfaction, and total waiting time (P<0.001). However, gender, urban residence, and valid waiting time showed no significant differences (P>0.05). Being ignorant of online registration, not trusting the internet, and a lack of ability to use a computer were three main reasons given for not using the web-based appointment system. The overall proportion of non-attendance was 14.4% for those using the web-based appointment system, and the non-attendance rate was significantly different among different hospital departments, day of the week, and time of the day (P<0.001).\n\n\nCONCLUSION\nCompared to the usual queuing method, the web-based appointment system could significantly increase patient's satisfaction with registration and reduce total waiting time effectively. However, further improvements are needed for broad use of the system.",
"title": ""
}
] |
scidocsrr
|
f59249b365825b8e136839c33b704c8f
|
Comparing Static and Dynamic Code Scheduling for Multiple-Instruction-Issue Processors
|
[
{
"docid": "515a1d01abc880c1b6f560ce5a10207d",
"text": "We report on a compiler for Warp, a high-performance systolic array developed at Carnegie Mellon. This compiler enhances the usefulness of Warp significantly and allows application programmers to code substantial algorithms.\nThe compiler combines a novel programming model, which is based on a model of skewed computation for the array, with powerful optimization techniques. Programming in W2 (the language accepted by the compiler) is orders of magnitude easier than coding in microcode, the only alternative available previously.",
"title": ""
}
] |
[
{
"docid": "c0549844f4e8813bd7b839a95c94a13d",
"text": "In this paper, we present a novel method to fuse observations from an inertial measurement unit (IMU) and visual sensors, such that initial conditions of the inertial integration, including gravity estimation, can be recovered quickly and in a linear manner, thus removing any need for special initialization procedures. The algorithm is implemented using a graphical simultaneous localization and mapping like approach that guarantees constant time output. This paper discusses the technical aspects of the work, including observability and the ability for the system to estimate scale in real time. Results are presented of the system, estimating the platforms position, velocity, and attitude, as well as gravity vector and sensor alignment and calibration on-line in a built environment. This paper discusses the system setup, describing the real-time integration of the IMU data with either stereo or monocular vision data. We focus on human motion for the purposes of emulating high-dynamic motion, as well as to provide a localization system for future human-robot interaction.",
"title": ""
},
{
"docid": "dd11a04de8288feba2b339cca80de41c",
"text": "A methodology for the automatic design optimization of analog circuits is presented. A non-fixed topology approach is followed. A symbolic simulator, called ISAAC, generates an analytic AC model for any analog circuit, time-continuous or time-discrete, CMOS or bipolar. ISAAC's expressions can be fully symbolic or mixed numeric-symbolic, exact or simplified. The model is passed to the design optimization program OPTIMAN. For a user selected circuit topology, the independent design variables are automatically extracted and OPTIMAN sizes all elements to satisfy the performance constraints, thereby optimizing a user defined design objective. The optimization algorithm is simulated annealing. Practical examples show that OPTIMAN quickly designs analog circuits, closely meeting the specifications, and that it is a flexible and reliable design and exploration tool.",
"title": ""
},
{
"docid": "d22d81ea0623a57d12314f58e4e5d9c6",
"text": "We study a well known noisy model of the graph isomorphism problem. In this model, the goal is to perfectly recover the vertex correspondence between two edge-correlated graphs, with an initial seed set of correctly matched vertex pairs revealed as side information. Specifically, the model first generates a parent graph G0 from Erdős-Rényi random graph G(n, p) and then obtains two children graphs G1 and G2 by subsampling the edge set of G0 twice independently with probability s = Θ(1). The vertex correspondence between G1 and G2 is obscured by randomly permuting the vertex labels of G1 according to a latent permutation π . Finally, for each i, π(i) is revealed independently with probability α as seeds. In the sparse graph regime where np ≤ n for any ǫ < 1/6, we give a polynomial-time algorithm which perfectly recovers π, provided that nps2− logn → +∞ and α ≥ n. This further leads to a sub-exponential-time, exp ( n ) , matching algorithm even without seeds. On the contrary, if nps2−logn = O(1), then perfect recovery is information-theoretically impossible as long as α is bounded away from 1. In the dense graph regime, where np = bn, for fixed constants a, b ∈ (0, 1], we give a polynomial-time algorithm which succeeds when b = O(s) and α = Ω ( (np) logn ) . In particular, when a = 1/k for an integer k ≥ 1, α = Ω(log n/n) suffices, yielding a quasipolynomial-time n algorithm matching the best known algorithm by Barak et al. for the problem of graph matching without seeds when k ≥ 153 and extending their result to new values of p for k = 2, . . . , 152. Unlike previous work on graph matching, which used small neighborhoods or small subgraphs with a logarithmic number of vertices in order to match vertices, our algorithms match vertices if their large neighborhoods have a significant overlap in the number of seeds.",
"title": ""
},
{
"docid": "4d3aea1bd30234f58013a1136d1f834b",
"text": "Predicting user response is one of the core machine learning tasks in computational advertising. Field-aware Factorization Machines (FFM) have recently been established as a state-of-the-art method for that problem and in particular won two Kaggle challenges. This paper presents some results from implementing this method in a production system that predicts click-through and conversion rates for display advertising and shows that this method it is not only effective to win challenges but is also valuable in a real-world prediction system. We also discuss some specific challenges and solutions to reduce the training time, namely the use of an innovative seeding algorithm and a distributed learning mechanism.",
"title": ""
},
{
"docid": "318d8e87d286b6417291942541061b9b",
"text": "There are numerous applications of unmanned aerial vehicles (UAVs) in the management of civil infrastructure assets. A few examples include routine bridge inspections, disaster management, power line surveillance and traffic surveying. As UAV applications become widespread, increased levels of autonomy and independent decision-making are necessary to improve the safety, efficiency, and accuracy of the devices. This paper details the procedure and parameters used for the training of convolutional neural networks (CNNs) on a set of aerial images for efficient and automated object recognition. Potential application areas in the transportation field are also highlighted. The accuracy and reliability of CNNs depend on the network’s training and the selection of operational parameters. This paper details the CNN training procedure and parameter selection. The object recognition results show that by selecting a proper set of parameters, a CNN can detect and classify objects with a high level of accuracy (97.5%) and computational efficiency. Furthermore, using a convolutional neural network implemented in the “YOLO” (“You Only Look Once”) platform, objects can be tracked, detected (“seen”), and classified (“comprehended”) from video feeds supplied by UAVs in real-time.",
"title": ""
},
{
"docid": "b2f66e8508978c392045b5f9e99362a1",
"text": "In this paper we have proposed a linguistically informed recursive neural network architecture for automatic extraction of cause-effect relations from text. These relations can be expressed in arbitrarily complex ways. The architecture uses word level embeddings and other linguistic features to detect causal events and their effects mentioned within a sentence. The extracted events and their relations are used to build a causal-graph after clustering and appropriate generalization, which is then used for predictive purposes. We have evaluated the performance of the proposed extraction model with respect to two baseline systems,one a rule-based classifier, and the other a conditional random field (CRF) based supervised model. We have also compared our results with related work reported in the past by other authors on SEMEVAL data set, and found that the proposed bidirectional LSTM model enhanced with an additional linguistic layer performs better. We have also worked extensively on creating new annotated datasets from publicly available data, which we are willing to share with the community.",
"title": ""
},
{
"docid": "25c14589a19c2d1dea78f222d4a328ab",
"text": "BACKGROUND\nParkinson's disease (PD) is the most prevalent movement disorder of the central nervous system, and affects more than 6.3 million people in the world. The characteristic motor features include tremor, bradykinesia, rigidity, and impaired postural stability. Current therapy based on augmentation or replacement of dopamine is designed to improve patients' motor performance but often leads to levodopa-induced adverse effects, such as dyskinesia and motor fluctuation. Clinicians must regularly monitor patients in order to identify these effects and other declines in motor function as soon as possible. Current clinical assessment for Parkinson's is subjective and mostly conducted by brief observations made during patient visits. Changes in patients' motor function between visits are hard to track and clinicians are not able to make the most informed decisions about the course of therapy without frequent visits. Frequent clinic visits increase the physical and economic burden on patients and their families.\n\n\nOBJECTIVE\nIn this project, we sought to design, develop, and evaluate a prototype mobile cloud-based mHealth app, \"PD Dr\", which collects quantitative and objective information about PD and would enable home-based assessment and monitoring of major PD symptoms.\n\n\nMETHODS\nWe designed and developed a mobile app on the Android platform to collect PD-related motion data using the smartphone 3D accelerometer and to send the data to a cloud service for storage, data processing, and PD symptoms severity estimation. To evaluate this system, data from the system were collected from 40 patients with PD and compared with experts' rating on standardized rating scales.\n\n\nRESULTS\nThe evaluation showed that PD Dr could effectively capture important motion features that differentiate PD severity and identify critical symptoms. For hand resting tremor detection, the sensitivity was .77 and accuracy was .82. For gait difficulty detection, the sensitivity was .89 and accuracy was .81. In PD severity estimation, the captured motion features also demonstrated strong correlation with PD severity stage, hand resting tremor severity, and gait difficulty. The system is simple to use, user friendly, and economically affordable.\n\n\nCONCLUSIONS\nThe key contribution of this study was building a mobile PD assessment and monitoring system to extend current PD assessment based in the clinic setting to the home-based environment. The results of this study proved feasibility and a promising future for utilizing mobile technology in PD management.",
"title": ""
},
{
"docid": "8528524a102c8fb6f29a4e3f6378ad76",
"text": "Matrix multiplication is a fundamental kernel of many high performance and scientific computing applications. Most parallel implementations use classical O(n3) matrix multiplication, even though there exist algorithms with lower arithmetic complexity. We recently presented a new Communication-Avoiding Parallel Strassen algorithm (CAPS), based on Strassen's fast matrix multiplication, that minimizes communication (SPAA '12). It communicates asymptotically less than all classical and all previous Strassen-based algorithms, and it attains theoretical lower bounds.\n In this paper we show that CAPS is also faster in practice. We benchmark and compare its performance to previous algorithms on Hopper (Cray XE6), Intrepid (IBM BG/P), and Franklin (Cray XT4). We demonstrate significant speedups over previous algorithms both for large matrices and for small matrices on large numbers of processors. We model and analyze the performance of CAPS and predict its performance on future exascale platforms.",
"title": ""
},
{
"docid": "9464f2e308b5c8ab1f2fac1c008042c0",
"text": "Data governance has become a significant approach that drives decision making in public organisations. Thus, the loss of data governance is a concern to decision makers, acting as a barrier to achieving their business plans in many countries and also influencing both operational and strategic decisions. The adoption of cloud computing is a recent trend in public sector organisations, that are looking to move their data into the cloud environment. The literature shows that data governance is one of the main concerns of decision makers who are considering adopting cloud computing; it also shows that data governance in general and for cloud computing in particular is still being researched and requires more attention from researchers. However, in the absence of a cloud data governance framework, this paper seeks to develop a conceptual framework for cloud data governance-driven decision making in the public sector.",
"title": ""
},
{
"docid": "4532f89fecf2f43425dee5841da36ae6",
"text": "In this paper, different double ridged horn antenna (DRHA) designs are investigated for wideband applications. A classic design of 1–18 GHz DRHA with exponential ridges is modelled and the antenna pattern deficiencies are detected at frequencies above 12 GHz. The antenna pattern is optimized by modification of the antenna structure. However, the impedance matching is affected and the VSWR is increased at the frequencies below 2 GHz. The matching problem can be resolved by adding lossy materials to the back cavity of antenna. We have shown reduction of the antenna efficiency by 15% over the whole frequency range, except at the lower frequencies.",
"title": ""
},
{
"docid": "72a51dfdcdf5ff70c94922a048f218d1",
"text": "We have synthesized thermodynamically metastable Ca2IrO4 thin-films on YAlO3 (110) substrates by pulsed laser deposition. The epitaxial Ca2IrO4 thin-films are of K2NiF4-type tetragonal structure. Transport and optical spectroscopy measurements indicate that the electronic structure of the Ca2IrO4 thin-films is similar to that of Jeff = 1/2 spin-orbit-coupled Mott insulator Sr2IrO4 and Ba2IrO4, with the exception of an increased gap energy. The gap increase is to be expected in Ca2IrO4 due to its increased octahedral rotation and tilting, which results in enhanced electron-correlation, U/W. Our results suggest that the epitaxial stabilization growth of metastable-phase thin-films can be used effectively for investigating layered iridates and various complex-oxide systems.",
"title": ""
},
{
"docid": "88644bb236b0112bf4825a5020d67629",
"text": "A Graphical User Interface (GUI) is the most widely used method whereby information systems interact with users. According to ACM Computing Surveys, on average, more than 45% of software code in a software application is dedicated to the GUI. However, GUI testing is extremely expensive. In unit testing, 10,000 cases can often be automatically tested within a minute whereas, in GUI testing, 10,000 simple GUI test cases need more than 10 hours to complete. To facilitate GUI testing automation, the knowledge model representing the interaction between a user and a computer system is the core. The most advanced GUI testing model to date is the Event Flow Graph (EFG) model proposed by the team of Professor Atif M. Memon at the University of Maryland. The EFG model successfully enabled GUI testing automation for a range of applications. However, it has a number of flaws which prevent it from providing effective GUI testing. Firstly, the EFG model can only model knowledge for basic GUI test automation. Secondly, EFGs are not able to model events with variable follow-up event sets. Thirdly, test cases generation still involves tremendous manual work. This thesis effectively addresses the challenges of existing GUI testing methods and provides a unified solution to GUI testing automation. The three main contributions of this thesis are the proposal of the Graphic User Interface Testing Automation Model",
"title": ""
},
{
"docid": "c8722cd243c552811c767fc160020b75",
"text": "Touché proposes a novel Swept Frequency Capacitive Sensing technique that can not only detect a touch event, but also recognize complex configurations of the human hands and body. Such contextual information significantly enhances touch interaction in a broad range of applications, from conventional touchscreens to unique contexts and materials. For example, in our explorations we add touch and gesture sensitivity to the human body and liquids. We demonstrate the rich capabilities of Touché with five example setups from different application domains and conduct experimental studies that show gesture classification accuracies of 99% are achievable with our technology.",
"title": ""
},
{
"docid": "24b70a56261bfd93d2a05f7d12453dfb",
"text": "The spinal cord is frequently affected by atrophy and/or lesions in multiple sclerosis (MS) patients. Segmentation of the spinal cord and lesions from MRI data provides measures of damage, which are key criteria for the diagnosis, prognosis, and longitudinal monitoring in MS. Automating this operation eliminates inter-rater variability and increases the efficiency of large-throughput analysis pipelines. Robust and reliable segmentation across multi-site spinal cord data is challenging because of the large variability related to acquisition parameters and image artifacts. In particular, a precise delineation of lesions is hindered by a broad heterogeneity of lesion contrast, size, location, and shape. The goal of this study was to develop a fully-automatic framework - robust to variability in both image parameters and clinical condition - for segmentation of the spinal cord and intramedullary MS lesions from conventional MRI data of MS and non-MS cases. Scans of 1042 subjects (459 healthy controls, 471 MS patients, and 112 with other spinal pathologies) were included in this multi-site study (n = 30). Data spanned three contrasts (T1-, T2-, and T2∗-weighted) for a total of 1943 vol and featured large heterogeneity in terms of resolution, orientation, coverage, and clinical conditions. The proposed cord and lesion automatic segmentation approach is based on a sequence of two Convolutional Neural Networks (CNNs). To deal with the very small proportion of spinal cord and/or lesion voxels compared to the rest of the volume, a first CNN with 2D dilated convolutions detects the spinal cord centerline, followed by a second CNN with 3D convolutions that segments the spinal cord and/or lesions. CNNs were trained independently with the Dice loss. When compared against manual segmentation, our CNN-based approach showed a median Dice of 95% vs. 88% for PropSeg (p ≤ 0.05), a state-of-the-art spinal cord segmentation method. Regarding lesion segmentation on MS data, our framework provided a Dice of 60%, a relative volume difference of -15%, and a lesion-wise detection sensitivity and precision of 83% and 77%, respectively. In this study, we introduce a robust method to segment the spinal cord and intramedullary MS lesions on a variety of MRI contrasts. The proposed framework is open-source and readily available in the Spinal Cord Toolbox.",
"title": ""
},
{
"docid": "7752661edead3eb69375c9a17be2c52d",
"text": "This article explores the rich heritage of the boundary element method (BEM) by examining its mathematical foundation from the potential theory, boundary value problems, Green’s functions, Green’s identities, to Fredholm integral equations. The 18th to 20th century mathematicians, whose contributions were key to the theoretical development, are honored with short biographies. The origin of the numerical implementation of boundary integral equations can be traced to the 1960s, when the electronic computers had become available. The full emergence of the numerical technique known as the boundary element method occurred in the late 1970s. This article reviews the early history of the boundary element method up to the late 1970s. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0dbbe7d78944fc439b379eb28660feea",
"text": "The latency between machines on the Internet can dramatically affect users' experience for many distributed applications. Particularly, in multiplayer online games, players seek to cluster themselves so that those in the same session have low latency to each other. A system that predicts latencies between machine pairs allows such matchmaking to consider many more machine pairs than can be probed in a scalable fashion while users are waiting. Using a far-reaching trace of latencies between players on over 3.5 million game consoles, we designed Htrae, a latency prediction system for game matchmaking scenarios. One novel feature of Htrae is its synthesis of geolocation with a network coordinate system. It uses geolocation to select reasonable initial network coordinates for new machines joining the system, allowing it to converge more quickly than standard network coordinate systems and produce substantially lower prediction error than state-of-the-art latency prediction systems. For instance, it produces 90th percentile errors less than half those of iPlane and Pyxida. Our design is general enough to make it a good fit for other latency-sensitive peer-to-peer applications besides game matchmaking.",
"title": ""
},
{
"docid": "7c3c6aaa493d4a5d62e096db5bf1ea6d",
"text": "In this paper, a real-time robust voice activity detector (VAD) is proposed. The proposed VAD adopts the gammatone filter and modify the existing long-term signal variability (LTSV) measure, i.e. known as GMLTSV in short. The proposed VAD is an improved version of existing VAD which used gammatone filter and entropy. The LTSV measure is modified to adapt to the amplitude envelopes extracted using gammatone filter by swapping entropy and variance used in LTSV measure to reduce noise effect in the extracted temporal envelopes and improve discriminative power of the extracted feature. The proposed algorithm also implements an adaptive threshold that is computed using a nonlinear filter to track short-term trend of the extracted feature in real-time. The proposed VAD using GMLTSV feature is tested against clean speech signals from TIMIT test corpus which are degraded at SNR ranged from -10dB to 20dB by non-stationary noise, eg. airport noise, babble noise, exhibition noise from Aurora-2 database, and stationary noise, eg. additive white Gaussian noise. Based on the evaluation, it is proven that the proposed GMLTSV-based VAD is robust in speech and non-speech detection even at low signal-to-noise ratio (SNR) and outperformed other existing voice activity detectors which are compared in the evaluation. The proposed VAD achieved satisfactory accuracy when compared to the impractical single frequency filtering based VAD while implementing real-time scheme for practical application.",
"title": ""
},
{
"docid": "c9ad1daa4ee0d900c1a2aa9838eb9918",
"text": "A central question in human development is how young children gain knowledge so fast. We propose that analogical generalization drives much of this early learning and allows children to generate new abstractions from experience. In this paper, we review evidence for analogical generalization in both children and adults. We discuss how analogical processes interact with the child's changing knowledge base to predict the course of learning, from conservative to domain-general understanding. This line of research leads to challenges to existing assumptions about learning. It shows that (a) it is not enough to consider the distribution of examples given to learners; one must consider the processes learners are applying; (b) contrary to the general assumption, maximizing variability is not always the best route for maximizing generalization and transfer.",
"title": ""
},
{
"docid": "881615ecd53c20a93c96defee048f0e1",
"text": "Several research groups have previously constructed short forms of the MacArthur-Bates Communicative Development Inventories (CDI) for different languages. We consider the specific aim of constructing such a short form to be used for language screening in a specific age group. We present a novel strategy for the construction, which is applicable if results from a population-based study using the CDI long form are available for this age group. The basic approach is to select items in a manner implying a left-skewed distribution of the summary score and hence a reliable discrimination among children in the lower end of the distribution despite the measurement error of the instrument. We report on the application of the strategy in constructing a Danish CDI short form and present some results illustrating the validity of the short form. Finally we discuss the choice of the most appropriate age for language screening based on a vocabulary score.",
"title": ""
},
{
"docid": "8074ecf8bd73c4add9e01f0b84ed6e70",
"text": "This paper provides a survey on implementing wireless sensor network (WSN) technology on industrial process monitoring and control. First, the existing industrial applications are explored, following with a review of the advantages of adopting WSN technology for industrial control. Then, challenging factors influencing the design and acceptance of WSNs in the process control world are outlined, and the state-of-the-art research efforts and industrial solutions are provided corresponding to each factor. Further research issues for the realization and improvement of wireless sensor network technology on process industry are also mentioned.",
"title": ""
}
] |
scidocsrr
|
0519aa1993e289d59e4c9fa9eef00d99
|
Propp's Morphology of the Folk Tale as a Grammar for Generation
|
[
{
"docid": "c5f6a559d8361ad509ec10bbb6c3cc9b",
"text": "In this paper we present a system for automatic story generation that reuses existing stories to produce a new story that matches a given user query. The plot structure is obtained by a case-based reasoning (CBR) process over a case base of tales and an ontology of explicitly declared relevant knowledge. The resulting story is generated as a sketch of a plot described in natural language by means of natural language generation (NLG) techniques.",
"title": ""
},
{
"docid": "683bad69cfb2c8980020dd1f8bd8cea4",
"text": "BRUTUS is a program that tells stories. The stories are intriguing, they hold a hint of mystery, and—not least impressive—they are written in correct English prose. An example (p. 124) is shown in Figure 1. This remarkable feat is grounded in a complex architecture making use of a number of levels, each of which is parameterized so as to become a locus of possible variation. The specific BRUTUS1 implementation that illustrates the program’s prowess exploits the theme of betrayal, which receives an elaborate analysis, culminating in a set",
"title": ""
}
] |
[
{
"docid": "bc890d9ecf02a89f5979053444daebdf",
"text": "The continued growth of mobile and interactive computing requires devices manufactured with low-cost processes, compatible with large-area and flexible form factors, and with additional functionality. We review recent advances in the design of electronic and optoelectronic devices that use colloidal semiconductor quantum dots (QDs). The properties of materials assembled of QDs may be tailored not only by the atomic composition but also by the size, shape, and surface functionalization of the individual QDs and by the communication among these QDs. The chemical and physical properties of QD surfaces and the interfaces in QD devices are of particular importance, and these enable the solution-based fabrication of low-cost, large-area, flexible, and functional devices. We discuss challenges that must be addressed in the move to solution-processed functional optoelectronic nanomaterials.",
"title": ""
},
{
"docid": "88aed0f7fe9022cfc2e2b95a1ed6d2fb",
"text": "Since the terrorist attacks of September 11, 2001, and the subsequent establishment of the U.S. Department of Homeland Security (DHS), considerable efforts have been made to estimate the risks of terrorism and the cost effectiveness of security policies to reduce these risks. DHS, industry, and the academic risk analysis communities have all invested heavily in the development of tools and approaches that can assist decisionmakers in effectively allocating limited resources across the vast array of potential investments that could mitigate risks from terrorism and other threats to the homeland. Decisionmakers demand models, analyses, and decision support that are useful for this task and based on the state of the art. Since terrorism risk analysis is new, no single method is likely to meet this challenge. In this article we explore a number of existing and potential approaches for terrorism risk analysis, focusing particularly on recent discussions regarding the applicability of probabilistic and decision analytic approaches to bioterrorism risks and the Bioterrorism Risk Assessment methodology used by the DHS and criticized by the National Academies and others.",
"title": ""
},
{
"docid": "96055f0e41d62dc0ef318772fa6d6d9f",
"text": "Building Information Modeling (BIM) has rapidly grown from merely being a three-dimensional (3D) model of a facility to serving as “a shared knowledge resource for information about a facility, forming a reliable basis for decisions during its life cycle from inception onward” [1]. BIM with three primary spatial dimensions (width, height, and depth) becomes 4D BIM when time (construction scheduling information) is added, and 5D BIM when cost information is added to it. Although the sixth dimension of the 6D BIM is often attributed to asset information useful for Facility Management (FM) processes, there is no agreement in the research literature on what each dimension represents beyond the fifth dimension [2]. BIM ultimately seeks to digitize the different stages of a building lifecycle such as planning, design, construction, and operation such that consistent digital information of a building project can be used by stakeholders throughout the building life-cycle [3]. The United States National Building Information Model Standard (NBIMS) initially characterized BIMs as digital representations of physical and functional aspects of a facility. But, in the most recent version released in July 2015, the NBIMS’ definition of BIM includes three separate but linked functions, namely business process, digital representation, and organization and control [4]. A number of national-level initiatives are underway in various countries to formally encourage the adoption of BIM technologies in the Architecture, Engineering, and Construction (AEC) and FM industries. Building SMART, with 18 chapters across the globe, including USA, UK, Australasia, etc., was established in 1995 with the aim of developing and driving the active use of open internationally-recognized standards to support the wider adoption of BIM across the building and infrastructure sectors [5]. The UK BIM Task Group, with experts from industry, government, public sector, institutes, and academia, is committed to facilitate the implementation of ‘collaborative 3D BIM’, a UK Government Construction Strategy initiative [6]. Similarly, the EUBIM Task Group was started with a vision to foster the common use of BIM in public works and produce a handbook containing the common BIM principles, guidance and practices for public contracting entities and policy makers [7].",
"title": ""
},
{
"docid": "580e0cc120ea9fd7aa9bb0a8e2a73cb3",
"text": "In the emerging field of micro-blogging and social communication services, users post millions of short messages every day. Keeping track of all the messages posted by your friends and the conversation as a whole can become tedious or even impossible. In this paper, we presented a study on automatically clustering and classifying Twitter messages, also known as “tweets”, into different categories, inspired by the approaches taken by news aggregating services like Google News. Our results suggest that the clusters produced by traditional unsupervised methods can often be incoherent from a topical perspective, but utilizing a supervised methodology that utilize the hash-tags as indicators of topics produce surprisingly good results. We also offer a discussion on temporal effects of our methodology and training set size considerations. Lastly, we describe a simple method of finding the most representative tweet in a cluster, and provide an analysis of the results.",
"title": ""
},
{
"docid": "0946b5cb25e69f86b074ba6d736cd50f",
"text": "Increase of malware and advanced cyber-attacks are now becoming a serious problem. Unknown malware which has not determined by security vendors is often used in these attacks, and it is becoming difficult to protect terminals from their infection. Therefore, a countermeasure for after infection is required. There are some malware infection detection methods which focus on the traffic data comes from malware. However, it is difficult to perfectly detect infection only using traffic data because it imitates benign traffic. In this paper, we propose malware process detection method based on process behavior in possible infected terminals. In proposal, we investigated stepwise application of Deep Neural Networks to classify malware process. First, we train the Recurrent Neural Network (RNN) to extract features of process behavior. Second, we train the Convolutional Neural Network (CNN) to classify feature images which are generated by the extracted features from the trained RNN. The evaluation result in several image size by comparing the AUC of obtained ROC curves and we obtained AUC= 0:96 in best case.",
"title": ""
},
{
"docid": "22beed9d31913f09e81063dbcb751c42",
"text": "In this paper an approach for 360 degree multi sensor fusion for static and dynamic obstacles is presented. The perception of static and dynamic obstacles is achieved by combining the advantages of model based object tracking and an occupancy map. For the model based object tracking a novel multi reference point tracking system, called best knowledge model, is introduced. The best knowledge model allows to track and describe objects with respect to a best suitable reference point. It is explained how the object tracking and the occupancy map closely interact and benefit from each other. Experimental results of the 360 degree multi sensor fusion system from an automotive test vehicle are shown.",
"title": ""
},
{
"docid": "c4b6df3abf37409d6a6a19646334bffb",
"text": "Classification in imbalanced domains is a recent challenge in data mining. We refer to imbalanced classification when data presents many examples from one class and few from the other class, and the less representative class is the one which has more interest from the point of view of the learning task. One of the most used techniques to tackle this problem consists in preprocessing the data previously to the learning process. This preprocessing could be done through under-sampling; removing examples, mainly belonging to the majority class; and over-sampling, by means of replicating or generating new minority examples. In this paper, we propose an under-sampling procedure guided by evolutionary algorithms to perform a training set selection for enhancing the decision trees obtained by the C4.5 algorithm and the rule sets obtained by PART rule induction algorithm. The proposal has been compared with other under-sampling and over-sampling techniques and the results indicate that the new approach is very competitive in terms of accuracy when comparing with over-sampling and it outperforms standard under-sampling. Moreover, the obtained models are smaller in terms of number of leaves or rules generated and they can considered more interpretable. The results have been contrasted through non-parametric statistical tests over multiple data sets. Crown Copyright 2009 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "89dcd15d3f7e2f538af4a2654f144dfb",
"text": "E-waste comprises discarded electronic appliances, of which computers and mobile telephones are disproportionately abundant because of their short lifespan. The current global production of E-waste is estimated to be 20-25 million tonnes per year, with most E-waste being produced in Europe, the United States and Australasia. China, Eastern Europe and Latin America will become major E-waste producers in the next ten years. Miniaturisation and the development of more efficient cloud computing networks, where computing services are delivered over the internet from remote locations, may offset the increase in E-waste production from global economic growth and the development of pervasive new technologies. E-waste contains valuable metals (Cu, platinum group) as well as potential environmental contaminants, especially Pb, Sb, Hg, Cd, Ni, polybrominated diphenyl ethers (PBDEs), and polychlorinated biphenyls (PCBs). Burning E-waste may generate dioxins, furans, polycyclic aromatic hydrocarbons (PAHs), polyhalogenated aromatic hydrocarbons (PHAHs), and hydrogen chloride. The chemical composition of E-waste changes with the development of new technologies and pressure from environmental organisations on electronics companies to find alternatives to environmentally damaging materials. Most E-waste is disposed in landfills. Effective reprocessing technology, which recovers the valuable materials with minimal environmental impact, is expensive. Consequently, although illegal under the Basel Convention, rich countries export an unknown quantity of E-waste to poor countries, where recycling techniques include burning and dissolution in strong acids with few measures to protect human health and the environment. Such reprocessing initially results in extreme localised contamination followed by migration of the contaminants into receiving waters and food chains. E-waste workers suffer negative health effects through skin contact and inhalation, while the wider community are exposed to the contaminants through smoke, dust, drinking water and food. There is evidence that E-waste associated contaminants may be present in some agricultural or manufactured products for export.",
"title": ""
},
{
"docid": "e79646606570464bccd27c3316a1f086",
"text": "BACKGROUND\nLower lid blepharoplasty has potential for significant long-lasting complications and marginal aesthetic outcomes if not performed correctly, or if one disregards the anatomical aspects of the orbicularis oculi muscle. This has detracted surgeons from performing the technical maneuvers necessary for optimal periorbital rejuvenation. A simplified, \"five-step\" clinical approach based on sound anatomical principles is presented.\n\n\nMETHODS\nA review of 50 lower lid blepharoplasty patients (each bilateral) using the five-step technique was conducted to delineate the efficacy in improving lower eyelid aesthetics. Digital images from 50 consecutive primary lower blepharoplasty patients (100 lower lids: 37 women and 13 men) were measured using a computer program with standardized data points that were later converted to ratios.\n\n\nRESULTS\nOf the 100 lower eyelid five-step blepharoplasties analyzed, complication rates were low and data points measured demonstrated improvements in all aesthetic parameters. The width and position of the tear trough, position of the lower lid relative to the pupil, and the intercanthal angle were all improved. There were no cases of lower lid malposition.\n\n\nCONCLUSIONS\nAesthetic outcomes in lower lid blepharoplasty can be improved using a five-step technical sequence that addresses all of the anatomical findings. Lower lid blepharoplasty results are improved when (1) the supportive deep malar fat compartment is augmented; (2) lower lid orbicularis oculi muscle is preserved with minimal fat removal (if at all); (3) the main retaining structure (orbicularis retaining ligament) is selectively released; (4) lateral canthal support is established or strengthened (lateral retinacular suspension); and (5) minimal skin is removed.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, IV.",
"title": ""
},
{
"docid": "3c83c75715f1c69d6b2add2560994146",
"text": "Interacting Storytelling systems integrate AI techniques such as planning with narrative representations to generate stories. In this paper, we discuss the use of planning formalisms in Interactive Storytelling from the perspective of story generation and authoring. We compare two different planning formalisms, Hierarchical Task Network (HTN) planning and Heuristic Search Planning (HSP). While HTN provide a strong basis for narrative coherence in the context of interactivity, HSP offer additional flexibility and the generation of stories and the mechanisms for generating comic situations.",
"title": ""
},
{
"docid": "463eb90754d21c43ee61e7e18256c66b",
"text": "A low-profile metamaterial loaded antenna array with anti-interference and polarization reconfigurable features is proposed for base-station communication. Owing to the dual notches etched on the radiating electric dipoles, an impedance bandwidth of 75.6% ranging from 1.68 to 3.72 GHz with a notch band from 2.38 to 2.55 GHz can be achieved. By employing the metamaterial loadings that are arranged in the center of the magnetic dipole, the thickness of the proposed antenna can be decreased from 28 to 20 mm. Furthermore, a serial feeding network that consists of several Wilkinson power dividers and phase shifters is introduced to attain the conversion between dual-linear polarization and triple-circular polarization. Hence, the antenna could meet the demand of the future 5G intelligent application.",
"title": ""
},
{
"docid": "c5e2930d5a0f80a8d4a59a70db64cd68",
"text": "Gamification has become a trend over the last years, especially in non-game environments such as business systems. With the aim to increase the users' engagement and motivation, existing or new information systems are enriched with game design elements. Before the technical implementation, the gamification concept is created. However, creation of such concepts is an informal and error-prone process, i.e., the definition and exchange of game mechanics is done in natural language or using spreadsheets. This becomes especially relevant, if the gamification concept is handed over to the implementation phase in which IT-experts have to manually translate informal to formal concepts without having gamification expertise. In this paper, we describe a novel, declarative, and formal domain-specific language to define gamification concepts. Besides that the language is designed to be readable and partially write able by gamification experts, the language is automatically compilable into gamification platforms without involving IT-experts.",
"title": ""
},
{
"docid": "400be1fdbd0f1aebfb0da220fd62e522",
"text": "Understanding users' interactions with highly subjective content---like artistic images---is challenging due to the complex semantics that guide our preferences. On the one hand one has to overcome `standard' recommender systems challenges, such as dealing with large, sparse, and long-tailed datasets. On the other, several new challenges present themselves, such as the need to model content in terms of its visual appearance, or even social dynamics, such as a preference toward a particular artist that is independent of the art they create. In this paper we build large-scale recommender systems to model the dynamics of a vibrant digital art community, Behance, consisting of tens of millions of interactions (clicks and 'appreciates') of users toward digital art. Methodologically, our main contributions are to model (a) rich content, especially in terms of its visual appearance; (b) temporal dynamics, in terms of how users prefer 'visually consistent' content within and across sessions; and (c) social dynamics, in terms of how users exhibit preferences both towards certain art styles, as well as the artists themselves.",
"title": ""
},
{
"docid": "f9d8954e2061b5466e655552a5e13a24",
"text": "Sports tracking applications are increasingly available on the market, and research has recently picked up this topic. Tracking a user's running track and providing feedback on the performance are among the key features of such applications. However, little attention has been paid to the accuracy of the applications' localization measurements. In evaluating the nine currently most popular running applications, we found tremendous differences in the GPS measurements. Besides this finding, our study contributes to the scientific knowledge base by qualifying the findings of previous studies concerning accuracy with smartphones' GPS components.",
"title": ""
},
{
"docid": "0b407f1f4d771a34e6d0bc59bf2ef4c4",
"text": "Social advertisement is one of the fastest growing sectors in the digital advertisement landscape: ads in the form of promoted posts are shown in the feed of users of a social networking platform, along with normal social posts; if a user clicks on a promoted post, the host (social network owner) is paid a fixed amount from the advertiser. In this context, allocating ads to users is typically performed by maximizing click-through-rate, i.e., the likelihood that the user will click on the ad. However, this simple strategy fails to leverage the fact the ads can propagate virally through the network, from endorsing users to their followers. In this paper, we study the problem of allocating ads to users through the viral-marketing lens. Advertisers approach the host with a budget in return for the marketing campaign service provided by the host. We show that allocation that takes into account the propensity of ads for viral propagation can achieve significantly better performance. However, uncontrolled virality could be undesirable for the host as it creates room for exploitation by the advertisers: hoping to tap uncontrolled virality, an advertiser might declare a lower budget for its marketing campaign, aiming at the same large outcome with a smaller cost. This creates a challenging trade-off: on the one hand, the host aims at leveraging virality and the network effect to improve advertising efficacy, while on the other hand the host wants to avoid giving away free service due to uncontrolled virality. We formalize this as the problem of ad allocation with minimum regret, which we show is NP-hard and inapproximable w.r.t. any factor. However, we devise an algorithm that provides approximation guarantees w.r.t. the total budget of all advertisers. We develop a scalable version of our approximation algorithm, which we extensively test on four real-world data sets, confirming that our algorithm delivers high quality solutions, is scalable, and significantly outperforms several natural baselines.",
"title": ""
},
{
"docid": "f11dbf9c32b126de695801957171465c",
"text": "Continuum robots, which are composed of multiple concentric, precurved elastic tubes, can provide dexterity at diameters equivalent to standard surgical needles. Recent mechanics-based models of these “active cannulas” are able to accurately describe the curve of the robot in free space, given the preformed tube curves and the linear and angular positions of the tube bases. However, in practical applications, where the active cannula must interact with its environment or apply controlled forces, a model that accounts for deformation under external loading is required. In this paper, we apply geometrically exact rod theory to produce a forward kinematic model that accurately describes large deflections due to a general collection of externally applied point and/or distributed wrench loads. This model accommodates arbitrarily many tubes, with each having a general preshaped curve. It also describes the independent torsional deformation of the individual tubes. Experimental results are provided for both point and distributed loads. Average tip error under load was 2.91 mm (1.5% - 3% of total robot length), which is similar to the accuracy of existing free-space models.",
"title": ""
},
{
"docid": "1720517b913ce3974ab92239ff8a177e",
"text": "Honeypot is a closely monitored computer resource that emulates behaviors of production host within a network in order to lure and attract the attackers. The workability and effectiveness of a deployed honeypot depends on its technical configuration. Since honeypot is a resource that is intentionally made attractive to the attackers, it is crucial to make it intelligent and self-manageable. This research reviews at artificial intelligence techniques such as expert system and case-based reasoning, in order to build an intelligent honeypot.",
"title": ""
},
{
"docid": "a30de4a213fe05c606fb16d204b9b170",
"text": "– The recent work on cross-country regressions can be compared to looking at “a black cat in a dark room”. Whether or not all this work has accomplished anything on the substantive economic issues is a moot question. But the search for “a black cat ” has led to some progress on the econometric front. The purpose of this paper is to comment on this progress. We discuss the problems with the use of cross-country panel data in the context of two problems: The analysis of economic growth and that of the purchasing power parity (PPP) theory. A propos de l’emploi des méthodes de panel sur des données inter-pays RÉSUMÉ. – Les travaux récents utilisant des régressions inter-pays peuvent être comparés à la recherche d'« un chat noir dans une pièce sans lumière ». La question de savoir si ces travaux ont apporté quelque chose de significatif à la connaissance économique est assez controversée. Mais la recherche du « chat noir » a conduit à quelques progrès en économétrie. L'objet de cet article est de discuter de ces progrès. Les problèmes posés par l'utilisation de panels de pays sont discutés dans deux contextes : celui de la croissance économique et de la convergence d'une part ; celui de la théorie de la parité des pouvoirs d'achat d'autre part. * G.S. MADDALA: Department of Economics, The Ohio State University. I would like to thank M. NERLOVE, P. SEVESTRE and an anonymous referee for helpful comments. Responsability for the omissions and any errors is my own. ANNALES D’ÉCONOMIE ET DE STATISTIQUE. – N° 55-56 – 1999 « The Gods love the obscure and hate the obvious » BRIHADARANYAKA UPANISHAD",
"title": ""
},
{
"docid": "ecd144226fdb065c2325a0d3131fd802",
"text": "The unknown and the invisible exploit the unwary and the uninformed for illicit financial gain and reputation damage.",
"title": ""
},
{
"docid": "0cb0c5f181ef357cd81d4a290d2cbc14",
"text": "With 3D sensing becoming cheaper, environment-aware and visually-guided robot arms capable of safely working in collaboration with humans will become common. However, a reliable calibration is needed, both for camera internal calibration, as well as Eye-to-Hand calibration, to make sure the whole system functions correctly. We present a framework, using a novel combination of well proven methods, allowing a quick automatic calibration for the integration of systems consisting of the robot and a varying number of 3D cameras by using a standard checkerboard calibration grid. Our approach allows a quick camera-to-robot recalibration after any changes to the setup, for example when cameras or robot have been repositioned. Modular design of the system ensures flexibility regarding a number of sensors used as well as different hardware choices. The framework has been proven to work by practical experiments to analyze the quality of the calibration versus the number of positions of the checkerboard used for each of the calibration procedures.",
"title": ""
}
] |
scidocsrr
|
d575098c34de48087416d6963bbc4207
|
Malleability of the blockchain’s entropy
|
[
{
"docid": "886c284d72a01db9bc4eb9467e14bbbb",
"text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.",
"title": ""
},
{
"docid": "ca8c40d523e0c64f139ae2a3221e8ea4",
"text": "We propose Mixcoin, a protocol to facilitate anonymous payments in Bitcoin and similar cryptocurrencies. We build on the emergent phenomenon of currency mixes, adding an accountability mechanism to expose theft. We demonstrate that incentives of mixes and clients can be aligned to ensure that rational mixes will not steal. Our scheme is efficient and fully compatible with Bitcoin. Against a passive attacker, our scheme provides an anonymity set of all other users mixing coins contemporaneously. This is an interesting new property with no clear analog in better-studied communication mixes. Against active attackers our scheme offers similar anonymity to traditional communication mixes.",
"title": ""
}
] |
[
{
"docid": "d69977627ad191c9c726c0ec7fe73c59",
"text": "Despite the progress since the first attempts of mankind to explore space, it appears that sending man in space remains challenging. While robotic systems are not yet ready to replace human presence, they provide an excellent support for astronauts during maintenance and hazardous tasks. This paper presents the development of a space qualified multi-fingered robotic hand and highlights the most interesting challenges. The design concept, the mechanical structure, the electronics architecture and the control system are presented throughout this overview paper.",
"title": ""
},
{
"docid": "0e380010be90bf3dabbc39b82da6192c",
"text": "We use both reinforcement learning and deep learning to simultaneously extract entities and relations from unstructured texts. For reinforcement learning, we model the task as a two-step decision process. Deep learning is used to automatically capture the most important information from unstructured texts, which represent the state in the decision process. By designing the reward function per step, our proposed method can pass the information of entity extraction to relation extraction and obtain feedback in order to extract entities and relations simultaneously. Firstly, we use bidirectional LSTM to model the context information, which realizes preliminary entity extraction. On the basis of the extraction results, attention based method can represent the sentences that include target entity pair to generate the initial state in the decision process. Then we use Tree-LSTM to represent relation mentions to generate the transition state in the decision process. Finally, we employ Q-Learning algorithm to get control policy π in the two-step decision process. Experiments on ACE2005 demonstrate that our method attains better performance than the state-of-the-art method and gets a 2.4% increase in recall-score.",
"title": ""
},
{
"docid": "8df0689ffe5c730f7a6ef6da65bec57e",
"text": "Image-based reconstruction of 3D shapes is inherently biased under the occurrence of interreflections, since the observed intensity at surface concavities consists of direct and global illumination components. This issue is commonly not considered in a Photometric Stereo (PS) framework. Under the usual assumption of only direct reflections, this corrupts the normal estimation process in concave regions and thus leads to inaccurate results. For this reason, global illumination effects need to be considered for the correct reconstruction of surfaces affected by interreflections. While there is ongoing research in the field of inverse lighting (i.e. separation of global and direct illumination components), the interreflection aspect remains oftentimes neglected in the field of 3D shape reconstruction. In this study, we present a computationally driven approach for iteratively solving that problem. Initially, we introduce a photometric stereo approach that roughly reconstructs a surface with at first unknown reflectance properties. Then, we show that the initial surface reconstruction result can be refined iteratively regarding non-distant light sources and, especially, interreflections. The benefit for the reconstruction accuracy is evaluated on real Lambertian surfaces using laser range scanner data as ground truth.",
"title": ""
},
{
"docid": "3fc3ea7bb6c5342bcbc9d046b0a2537f",
"text": "We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.",
"title": ""
},
{
"docid": "95045efce8527a68485915d8f9e2c6cf",
"text": "OBJECTIVES\nTo update the normal stretched penile length values for children younger than 5 years of age. We also evaluated the association between penile length and anthropometric measures such as body weight, height, and body mass index.\n\n\nMETHODS\nThe study was performed as a cross-section study. The stretched penile lengths of 1040 white uncircumcised male infants and children 0 to 5 years of age were measured, and the mean length for each age group and the rate of increase in penile length were calculated. The correlation between penile length and weight, height, and body mass index of the children was determined by Pearson analysis.\n\n\nRESULTS\nThe stretched penile length was 3.65 +/- 0.27 cm in full-term newborns (n = 165) and 3.95 +/- 0.35 cm in children 1 to 3 months old (n = 112), 4.26 +/- 0.40 cm in those 3.1 to 6 months old (n = 130), 4.65 +/- 0.47 cm in those 6.1 to 12 months old (n = 148), 4.82 +/- 0.44 cm in those 12.1 to 24 months old (n = 135), 5.15 +/- 0.46 cm in those 24.1 to 36 months old (n = 120), 5.58 +/- 0.47 cm in those 36.1 to 48 months old (n = 117), and 6.02 +/- 0.50 cm in those 48.1 to 60 months old (n = 113). The fastest rate of increase in penile length was seen in the first 6 months of age, with a value of 1 mm/mo. A significant correlation was found between penile length and the weight, height, and body mass index of the boys (r = 0.881, r = 0.864, and r = 0.173, respectively; P = 0.001).\n\n\nCONCLUSIONS\nThe age-related values of penile length must be known to be able to determine abnormal penile sizes and to monitor treatment of underlying diseases. Our study has provided updated reference values for penile lengths for Turkish and other white boys aged 0 to 5 years.",
"title": ""
},
{
"docid": "6cca31cabf78c56b06be08cef464d666",
"text": "Sparsity-based subspace clustering algorithms have attracted significant attention thanks to their excellent performance in practical applications. A prominent example is the sparse subspace clustering (SSC) algorithm by Elhamifar and Vidal, which performs spectral clustering based on an adjacency matrix obtained by sparsely representing each data point in terms of all the other data points via the Lasso. When the number of data points is large or the dimension of the ambient space is high, the computational complexity of SSC quickly becomes prohibitive. Dyer et al. observed that SSC-orthogonal matching pursuit (OMP) obtained by replacing the Lasso by the greedy OMP algorithm results in significantly lower computational complexity, while often yielding comparable performance. The central goal of this paper is an analytical performance characterization of SSC-OMP for noisy data. Moreover, we introduce and analyze the SSC-matching pursuit (MP) algorithm, which employs MP in lieu of OMP. Both SSC-OMP and SSC-MP are proven to succeed even when the subspaces intersect and when the data points are contaminated by severe noise. The clustering conditions we obtain for SSC-OMP and SSC-MP are similar to those for SSC and for the thresholding-based subspace clustering (TSC) algorithm due to Heckel and Bölcskei. Analytical results in combination with numerical results indicate that both SSC-OMP and SSC-MP with a data-dependent stopping criterion automatically detect the dimensions of the subspaces underlying the data. Experiments on synthetic and on real data show that SSC-MP often matches or exceeds the performance of the computationally more expensive SSC-OMP algorithm. Moreover, SSC-MP compares very favorably to SSC, TSC, and the nearest subspace neighbor algorithm, both in terms of clustering performance and running time. In addition, we find that, in contrast to SSC-OMP, the performance of SSC-MP is very robust with respect to the choice of parameters in the stopping criteria.",
"title": ""
},
{
"docid": "1d8f7705ba0dd969ed6de9e7e6a9a419",
"text": "A Mecanum-wheeled robot benefits from great omni-direction maneuverability. However it suffers from random slippage and high-speed vibration, which creates electric power safety, uncertain position errors and energy waste problems for heavy-duty tasks. A lack of Mecanum research on heavy-duty autonomous navigation demands a robot platform to conduct experiments in the future. This paper introduces AuckBot, a heavy-duty omni-directional Mecanum robot platform developed at the University of Auckland, including its hardware overview, the control system architecture and the simulation design. In particular the control system, synergistically combining the Beckhoff system as the Controller-PC to serve low-level motion execution and ROS as the Navigation-PC to accomplish highlevel intelligent navigation tasks, is developed. In addition, a computer virtual simulation based on ISG-virtuos for virtual AuckBot has been validated. The present status and future work of AuckBot are described at the end.",
"title": ""
},
{
"docid": "5eac11ef2f695f78604df1e0fa683d45",
"text": "Home automation is an integral part of modern lives that help to monitor and control the home electrical devices as well as other aspects of the digital home that is expected to be the standard for the future home. Home appliance control system enables house owner to control devices Lighting, Heating and ventilation, water pumping, gardening system remotely or from any centralized location. Automatic systems are being preferred over manual system. This paper aims at automizing any home appliances. The appliances are to be controlled automatically by the programmable Logic Controller (PLC) DELTA Electronics DVP SX10. As the functioning of the Appliances is integrated with the working of PLC, the project proves to be accurate, reliable and more efficient than the existing controllers. It is a combination of electrical, electronic and mechanical section where the software used is Ladder Logic language programming. The visualization of the current status of the home appliances is made possible with the use of SCADA screen which is interfaced to the PLC through various communication protocols. Winlog visualization software is a powerful SCADA/HMI for industrial automation, process control and supervisory monitoring. This WINLOG SCADA software has the ability to Remote application deployment and change management. Also it has Modbus and OPC Connectivity and it is equipped with 3D GUI.",
"title": ""
},
{
"docid": "348e68c9175313c6079915a8b81ceecf",
"text": "There are many advantages in using UAVs for search and rescue operations. However, detecting people from a UAV remains a challenge: the embedded detector has to be fast enough and viewpoint robust to detect people in a flexible manner from aerial views. In this paper we propose a processing pipeline to 1) reduce the search space using infrared images and to 2) detect people whatever the roll and pitch angles of the UAV's acquisition system. We tested our approach on a multimodal aerial view dataset and showed that it outperforms the Integral Channel Features (ICF) detector in this context. Moreover, this approach allows real-time compatible detection.",
"title": ""
},
{
"docid": "51c42a305039d65dc442910c8078a9aa",
"text": "Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to mathematically formalize these abilities using a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which an agent can move and interact with objects it sees, we propose a “world-model” network that learns to predict the dynamic consequences of the agent’s actions. Simultaneously, we train a separate explicit “self-model” that allows the agent to track the error map of its worldmodel. It then uses the self-model to adversarially challenge the developing world-model. We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering. Moreover, the world-model that the agent learns supports improved performance on object dynamics prediction, detection, localization and recognition tasks. Taken together, our results are initial steps toward creating flexible autonomous agents that self-supervise in realistic physical environments.",
"title": ""
},
{
"docid": "18851774e598f4cb66dbc770abe4a83f",
"text": "In this paper, we propose a new approach for domain generalization by exploiting the low-rank structure from multiple latent source domains. Motivated by the recent work on exemplar-SVMs, we aim to train a set of exemplar classifiers with each classifier learnt by using only one positive training sample and all negative training samples. While positive samples may come from multiple latent domains, for the positive samples within the same latent domain, their likelihoods from each exemplar classifier are expected to be similar to each other. Based on this assumption, we formulate a new optimization problem by introducing the nuclear-norm based regularizer on the likelihood matrix to the objective function of exemplar-SVMs. We further extend Domain Adaptation Machine (DAM) to learn an optimal target classifier for domain adaptation. The comprehensive experiments for object recognition and action recognition demonstrate the effectiveness of our approach for domain generalization and domain adaptation.",
"title": ""
},
{
"docid": "232b960cc16aa558538858aefd0a7651",
"text": "This paper presents a video-based solution for real time vehicle detection and counting system, using a surveillance camera mounted on a relatively high place to acquire the traffic video stream.The two main methods applied in this system are: the adaptive background estimation and the Gaussian shadow elimination. The former allows a robust moving detection especially in complex scenes. The latter is based on color space HSV, which is able to deal with different size and intensity shadows. After these two operations, it obtains an image with moving vehicle extracted, and then operation counting is effected by a method called virtual detector.",
"title": ""
},
{
"docid": "499a37563d171054ad0b0d6b8f7007bf",
"text": "For cold-start recommendation, it is important to rapidly profile new users and generate a good initial set of recommendations through an interview process --- users should be queried adaptively in a sequential fashion, and multiple items should be offered for opinion solicitation at each trial. In this work, we propose a novel algorithm that learns to conduct the interview process guided by a decision tree with multiple questions at each split. The splits, represented as sparse weight vectors, are learned through an L_1-constrained optimization framework. The users are directed to child nodes according to the inner product of their responses and the corresponding weight vector. More importantly, to account for the variety of responses coming to a node, a linear regressor is learned within each node using all the previously obtained answers as input to predict item ratings. A user study, preliminary but first in its kind in cold-start recommendation, is conducted to explore the efficient number and format of questions being asked in a recommendation survey to minimize user cognitive efforts. Quantitative experimental validations also show that the proposed algorithm outperforms state-of-the-art approaches in terms of both the prediction accuracy and user cognitive efforts.",
"title": ""
},
{
"docid": "94eff60d3783010c0c4b4e045d18a020",
"text": "Preface 1 Preliminaries: Galois theory, algebraic number theory 2 Lecture 1. CFT of Q: classical (Mo. 19/7/10, 9:40–10:40) 4 Lecture 2. CFT of Q: via adeles (Mo. 19/7/10, 11:00–12:00) 6 Lecture 3. Local CFT, local-global compatibility (Tu. 20/7/10, 9:40–10:40) 8 Lecture 4. Global CFT, l-adic characters (Tu. 20/7/10, 11:00–12:00) 10 Appendix A. More on GLC for GL1: algebraic Hecke characters 12 Appendix B. More on GLC for GL1: algebraic Galois characters 14 Exercises 15 References 16 Index 17",
"title": ""
},
{
"docid": "ce0a855890322a98dffbb6f1a3af1c07",
"text": "Gender reassignment (which includes psychotherapy, hormonal therapy and surgery) has been demonstrated as the most effective treatment for patients affected by gender dysphoria (or gender identity disorder), in which patients do not recognize their gender (sexual identity) as matching their genetic and sexual characteristics. Gender reassignment surgery is a series of complex surgical procedures (genital and nongenital) performed for the treatment of gender dysphoria. Genital procedures performed for gender dysphoria, such as vaginoplasty, clitorolabioplasty, penectomy and orchidectomy in male-to-female transsexuals, and penile and scrotal reconstruction in female-to-male transsexuals, are the core procedures in gender reassignment surgery. Nongenital procedures, such as breast enlargement, mastectomy, facial feminization surgery, voice surgery, and other masculinization and feminization procedures complete the surgical treatment available. The World Professional Association for Transgender Health currently publishes and reviews guidelines and standards of care for patients affected by gender dysphoria, such as eligibility criteria for surgery. This article presents an overview of the genital and nongenital procedures available for both male-to-female and female-to-male gender reassignment.",
"title": ""
},
{
"docid": "247534c6b5416e4330a84e10daf2bc0c",
"text": "The aim of the present study was to determine metabolic responses, movement patterns and distance covered at running speeds corresponding to fixed blood lactate concentrations (FBLs) in young soccer players during a match play. A further aim of the study was to evaluate the relationships between FBLs, maximal oxygen consumption (VO2max) and distance covered during a game. A multistage field test was administered to 32 players to determine FBLs and VO2max. Blood lactate (LA), heart rate (HR) and rate of perceived exertion (RPE) responses were obtained from 36 players during tournament matches filmed using six fixed cameras. Images were transferred to a computer, for calibration and synchronization. In all players, values for LA and HR were higher and RPE lower during the 1(st) half compared to the 2(nd) half of the matches (p < 0.01). Players in forward positions had higher LA levels than defenders, but HR and RPE values were similar between playing positions. Total distance and distance covered in jogging, low-moderate-high intensity running and low intensity sprint were higher during the 1(st) half (p < 0.01). In the 1(st) half, players also ran longer distances at FBLs [p<0.01; average running speed at 2mmol·L(-1) (FBL2): 3.32 ± 0.31m·s(-1) and average running speed at 4mmol·L(-1) (FBL4): 3.91 ± 0.25m·s(-1)]. There was a significant difference between playing positions in distance covered at different running speeds (p < 0.05). However, when distance covered was expressed as FBLs, the players ran similar distances. In addition, relationships between FBLs and total distance covered were significant (r = 0.482 to 0.570; p < 0.01). In conclusion, these findings demonstrated that young soccer players experienced higher internal load during the 1(st) half of a game compared to the 2(nd) half. Furthermore, although movement patterns of players differed between playing positions, all players experienced a similar physiological stress throughout the game. Finally, total distance covered was associated to fixed blood lactate concentrations during play. Key pointsBased on LA, HR and RPE responses, young top soccer players experienced a higher physiological stress during the 1(st) half of the matches compared to the 2(nd) half.Movement patterns differed in accordance with the players' positions but that all players experienced a similar physiological stress during match play.Approximately one quarter of total distance was covered at speeds that exceeded the 4 mmol·L(-1) fixed LA threshold.Total distance covered was influenced by running speeds at fixed lactate concentrations in young soccer players during match play.",
"title": ""
},
{
"docid": "d7e7cdc9ac55d5af199395becfe02d73",
"text": "Text recognition in images is a research area which attempts to develop a computer system with the ability to automatically read the text from images. These days there is a huge demand in storing the information available in paper documents format in to a computer storage disk and then later reusing this information by searching process. One simple way to store information from these paper documents in to computer system is to first scan the documents and then store them as images. But to reuse this information it is very difficult to read the individual contents and searching the contents form these documents line-by-line and word-by-word. The challenges involved in this the font characteristics of the characters in paper documents and quality of images. Due to these challenges, computer is unable to recognize the characters while reading them. Thus there is a need of character recognition mechanisms to perform Document Image Analysis (DIA) which transforms documents in paper format to electronic format. In this paper we have discuss method for text recognition from images. The objective of this paper is to recognition of text from image for better understanding of the reader by using particular sequence of different processing module.",
"title": ""
},
{
"docid": "f4415b932387c748a30c6a8f86e0c1ea",
"text": "The broaden-and-build theory describes the form and function of a subset of positive emotions, including joy, interest, contentment and love. A key proposition is that these positive emotions broaden an individual's momentary thought-action repertoire: joy sparks the urge to play, interest sparks the urge to explore, contentment sparks the urge to savour and integrate, and love sparks a recurring cycle of each of these urges within safe, close relationships. The broadened mindsets arising from these positive emotions are contrasted to the narrowed mindsets sparked by many negative emotions (i.e. specific action tendencies, such as attack or flee). A second key proposition concerns the consequences of these broadened mindsets: by broadening an individual's momentary thought-action repertoire--whether through play, exploration or similar activities--positive emotions promote discovery of novel and creative actions, ideas and social bonds, which in turn build that individual's personal resources; ranging from physical and intellectual resources, to social and psychological resources. Importantly, these resources function as reserves that can be drawn on later to improve the odds of successful coping and survival. This chapter reviews the latest empirical evidence supporting the broaden-and-build theory and draws out implications the theory holds for optimizing health and well-being.",
"title": ""
},
{
"docid": "1dccd5745d29310e2ca1b9f302efd0bb",
"text": "Graph structure which is often used to model the relationship between the data items has drawn more and more attention. The graph datasets from many important domains have the property called scale-free. In the scale-free graphs, there exist the hubs, which have much larger degree than the average value. The hubs may cause the problems of load imbalance, poor scalability and high communication overhead when the graphs are processed in the distributed memory systems. In this paper, we design an asynchronous graph processing framework targeted for distributed memory by considering the hubs as a separate part of the vertexes, which we call it the hub-centric idea. Specifically speaking, a hub-duplicate graph partitioning method is proposed to balance the workload and reduce the communication overhead. At the same time, an efficient asynchronous state synchronization method for the duplicates is also proposed. In addition, a priority scheduling strategy is applied to further reduce the communication overhead.",
"title": ""
},
{
"docid": "9327ab4f9eba9a32211ddb39463271b1",
"text": "We investigate techniques for visualizing time series data and evaluate their effect in value comparison tasks. We compare line charts with horizon graphs - a space-efficient time series visualization technique - across a range of chart sizes, measuring the speed and accuracy of subjects' estimates of value differences between charts. We identify transition points at which reducing the chart height results in significantly differing drops in estimation accuracy across the compared chart types, and we find optimal positions in the speed-accuracy tradeoff curve at which viewers performed quickly without attendant drops in accuracy. Based on these results, we propose approaches for increasing data density that optimize graphical perception.",
"title": ""
}
] |
scidocsrr
|
f3121bdb4df94e739fe737c8c4b771f8
|
Potentials of Gamification in Learning Management Systems: A Qualitative Evaluation
|
[
{
"docid": "bbb6b192974542b165d3f7a0d139a8e1",
"text": "While gamification is gaining ground in business, marketing, corporate management, and wellness initiatives, its application in education is still an emerging trend. This article presents a study of the published empirical research on the application of gamification to education. The study is limited to papers that discuss explicitly the effects of using game elements in specific educational contexts. It employs a systematic mapping design. Accordingly, a categorical structure for classifying the research results is proposed based on the extracted topics discussed in the reviewed papers. The categories include gamification design principles, game mechanics, context of applying gamification (type of application, educational level, and academic subject), implementation, and evaluation. By mapping the published works to the classification criteria and analyzing them, the study highlights the directions of the currently conducted empirical research on applying gamification to education. It also indicates some major obstacles and needs, such as the need for proper technological support, for controlled studies demonstrating reliable positive or negative results of using specific game elements in particular educational contexts, etc. Although most of the reviewed papers report promising results, more substantial empirical research is needed to determine whether both extrinsic and intrinsic motivation of the learners can be influenced by gamification.",
"title": ""
},
{
"docid": "372ab07026a861acd50e7dd7c605881d",
"text": "This paper reviews peer-reviewed empirical studies on gamification. We create a framework for examining the effects of gamification by drawing from the definitions of gamification and the discussion on motivational affordances. The literature review covers results, independent variables (examined motivational affordances), dependent variables (examined psychological/behavioral outcomes from gamification), the contexts of gamification, and types of studies performed on the gamified systems. The paper examines the state of current research on the topic and points out gaps in existing literature. The review indicates that gamification provides positive effects, however, the effects are greatly dependent on the context in which the gamification is being implemented, as well as on the users using it. The findings of the review provide insight for further studies as well as for the design of gamified systems.",
"title": ""
}
] |
[
{
"docid": "45a24b15455b98277e0ee49b31b234d0",
"text": "Breakthroughs in genetics and molecular biology in the 1970s and 1980s were heralded as a major technological revolution in medicine that would yield a wave of new drug discoveries. However, some forty years later the expected benefits have not materialized. I question the narrative of biotechnology as a Schumpeterian revolution by comparing it to the academic research paradigm that preceded it, clinical research in hospitals. I analyze these as distinct research paradigms that involve different epistemologies, practices, and institutional loci. I develop the claim that the complexity of biological systems means that clinical research was well adapted to medical innovation, and that the genetics/molecular biology paradigm imposed a predictive logic to search that was less effective at finding new drugs. The paper describes how drug discovery unfolds in each paradigm: in clinical research, discovery originates with observations of human subjects and proceeds through feedback-based learning, whereas in the genetics model, discovery originates with a precisely-defined molecular target; feedback from patients enters late in the process. The paper reviews the post-War institutional history that witnessed the relative decline of clinical research and the rise of genetics and molecular science in the United States bio-medical research landscape. The history provides a contextual narrative to illustrate that, in contrast to the framing of biotechnology as a Schumpeterian revolution, the adoption of biotechnology as a core drug discovery platform was propelled by institutional changes that were largely disconnected from processes of scientific or technological selection. Implications for current medical policy initiatives and translational science are discussed. © 2016 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "fdbfc5bf8af1478e919153fb6cde64f3",
"text": "Software development is conducted in increasingly dynamic business environments. Organizations need the capability to develop, release and learn from software in rapid parallel cycles. The abilities to continuously deliver software, to involve users, and to collect and prioritize their feedback are necessary for software evolution. In 2014, we introduced Rugby, an agile process model with workflows for continuous delivery and feedback management, and evaluated it in university projects together with industrial clients.\n Based on Rugby's release management workflow we identified the specific needs for project-based organizations developing mobile applications. Varying characteristics and restrictions in projects teams in corporate environments impact both process and infrastructure. We found that applicability and acceptance of continuous delivery in industry depend on its adaptability. To address issues in industrial projects with respect to delivery process, infrastructure, neglected testing and continuity, we extended Rugby's workflow and made it tailorable.\n Eight projects at Capgemini, a global provider of consulting, technology and outsourcing services, applied a tailored version of the workflow. The evaluation of these projects shows anecdotal evidence that the application of the workflow significantly reduces the time required to build and deliver mobile applications in industrial projects, while at the same time increasing the number of builds and internal deliveries for feedback.",
"title": ""
},
{
"docid": "6ad7d97140d7a5d6b72039b4bb9c3be5",
"text": "This study evaluated the criterion-related validity of the Electronic Head Posture Instrument (EHPI) in measuring the craniovertebral (CV) angle by correlating the measurements of CV angle with anterior head translation (AHT) in lateral cervical radiographs. It also investigated the correlation of AHT and CV angle with the Chinese version of the Northwick Park Questionnaire (NPQ) and Numeric Pain Rating Scale (NPRS). Thirty patients with diagnosis of mechanical neck pain for at least 3 months without referred symptoms were recruited in an outpatient physiotherapy clinic. The results showed that AHT measured with X-ray correlated negatively with CV angle measured with EHPI (r = -0.71, p < 0.001). CV angle also correlated negatively with NPQ (r = -0.67, p < 0.001) and NPRS (r = -0.70, p < 0.001), while AHT positively correlated with NPQ (r = 0.390, p = 0.033) and NPRS (r = 0.49, p = 0.006). We found a negative correlation between CV angle measured with the EHPI and AHT measured with the X-ray lateral film as well as with NPQ and NPRS in patients with chronic mechanical neck pain. EHPI is a valid tool in clinically assessing and evaluating cervical posture of patients with chronic mechanical neck pain.",
"title": ""
},
{
"docid": "b8909da12187cc3c1b9bc428371fc795",
"text": "Among various perceptual-motor tests, only visuomotor integration was significant in predicting accuracy of handwriting performance for the total sample of 59 children consisting of 19 clumsy children, 22 nonclumsy dysgraphic children, and 18 'normal' children. They were selected from a sample of 360 fourth-graders (10-yr.-olds). For groups of clumsy and 'normal' children, the prediction of handwriting performance is difficult. However, correlations among scores on 6 measures showed that handwriting was significantly related to visuomotor integration, visual form perception, and tracing in the total group and to visuomotor integration and visual form perception in the clumsy group. The weakest correlations occurred between tests measuring simple psychomotor functions and handwriting. Moreover, clumsy children were expected to do poorly on tests measuring aiming, tracing, and visuomotor integration, but not on tests measuring visual form perception and finger tapping. Dysgraphic children were expected to do poorly on visuomotor integration only.",
"title": ""
},
{
"docid": "808a6c959eb79deb6ac5278805f5b855",
"text": "Recently there has been a lot of work on pruning filters from deep convolutional neural networks (CNNs) with the intention of reducing computations. The key idea is to rank the filters based on a certain criterion (say, l1-norm, average percentage of zeros, etc) and retain only the top ranked filters. Once the low scoring filters are pruned away the remainder of the network is fine tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned. Specifically, we show counter-intuitive results wherein by randomly pruning 25-50% filters from deep CNNs we are able to obtain the same performance as obtained by using state of the art pruning methods. We empirically validate our claims by doing an exhaustive evaluation with VGG-16 and ResNet-50. Further, we also evaluate a real world scenario where a CNN trained on all 1000 ImageNet classes needs to be tested on only a small set of classes at test time (say, only animals). We create a new benchmark dataset from ImageNet to evaluate such class specific pruning and show that even here a random pruning strategy gives close to state of the art performance. Lastly, unlike existing approaches which mainly focus on the task of image classification, in this work we also report results on object detection. We show that using a simple random pruning strategy we can achieve significant speed up in object detection (74% improvement in fps) while retaining the same accuracy as that of the original Faster RCNN model.",
"title": ""
},
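The comparison at the heart of the passage above (criterion-based versus random filter pruning at the same pruning ratio) can be sketched in a few lines of NumPy. The tensor shape, the l1-norm criterion as the representative ranking, and the omitted fine-tuning step are all assumptions of this illustration, not the paper's experimental setup:

```python
import numpy as np

def prune_filters(conv_weights, fraction, strategy="l1", rng=None):
    """Return the sorted indices of filters to KEEP in one conv layer.

    conv_weights: array of shape (num_filters, in_channels, k, k)
    fraction:     fraction of filters to prune away (e.g. 0.25 or 0.5)
    strategy:     "l1" keeps the filters with the largest l1-norm,
                  "random" keeps a uniformly random subset of the same size.
    """
    num_filters = conv_weights.shape[0]
    num_keep = num_filters - int(round(fraction * num_filters))
    if strategy == "l1":
        scores = np.abs(conv_weights).reshape(num_filters, -1).sum(axis=1)
        keep = np.argsort(scores)[-num_keep:]          # highest-scoring filters survive
    elif strategy == "random":
        rng = rng or np.random.default_rng(0)
        keep = rng.choice(num_filters, size=num_keep, replace=False)
    else:
        raise ValueError(strategy)
    return np.sort(keep)

# Both strategies remove the same number of filters; the paper's claim is that,
# after fine-tuning the remaining network, the resulting accuracies are comparable.
w = np.random.randn(64, 3, 3, 3)
keep_l1 = prune_filters(w, fraction=0.5, strategy="l1")
keep_rand = prune_filters(w, fraction=0.5, strategy="random")
pruned_l1, pruned_rand = w[keep_l1], w[keep_rand]
```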
{
"docid": "a53225746b2b6dba6078a998031c2af6",
"text": "Decision Tree induction is commonly used classification algorithm. One of the important problems is how to use records with unknown values from training as well as testing data. Many approaches have been proposed to address the impact of unknown values at training on accuracy of prediction. However, very few techniques are there to address the problem in testing data. In our earlier work, we discussed and summarized these strategies in details. In Lazy Decision Tree, the problem of unknown attribute values in test instance is completely eliminated by delaying the construction of tree till the classification time and using only known attributes for classification. In this paper we present novel algorithm ‘Eager Decision Tree’ which constructs a single prediction model at the time of training which considers all possibilities of unknown attribute values from testing data. It naturally removes the problem of handing unknown values in testing data in Decision Tree induction like Lazy Decision Tree.",
"title": ""
},
{
"docid": "76114f9af6f210af09e81a03251960a0",
"text": "A low insertion-loss single-pole double-throw switch in a standard 0.18-/spl mu/m complementary metal-oxide semiconductor (CMOS) process was developed for 2.4- and 5.8-GHz wireless local area network applications. In order to increase the P/sub 1dB/, the body-floating circuit topology is implemented. A nonlinear CMOS model to predict the switch power performance is also developed. The series-shunt switch achieves a measured P/sub 1dB/ of 21.3 dBm, an insertion loss of 0.7 dB, and an isolation of 35 dB at 2.4 GHz, while at 5.8 GHz, the switch attains a measured P/sub 1dB/ of 20 dBm, an insertion loss of 1.1 dB, and an isolation of 27 dB. The effective chip size is only 0.03 mm/sup 2/. The measured data agree with the simulation results well, including the power-handling capability. To our knowledge, this study presents low insertion loss, high isolation, and good power performance with the smallest chip size among the previously reported 2.4- and 5.8-GHz CMOS switches.",
"title": ""
},
{
"docid": "602afe27e9999f1bd3daefd0b0b93453",
"text": "The principle of Network Functions Virtualization (NFV) aims to transform network architectures by implementing Network Functions (NFs) in software that can run on commodity hardware. There are several challenges inherent to NFV, among which is the need for an orchestration and management framework. This paper presents the Cloud4NFV platform, which follows the major NFV standard guidelines. The platform is presented in detail and special attention is given to data modelling aspects. Further, insights on the current implementation of the platform are given, showing that part of its foundations lay on cloud infrastructure management and Software Defined Networking (SDN) platforms. Finally, it is presented a proof-of-concept (PoC) that illustrates how the platform can be used to deliver a novel service to end customers, focusing on Customer Premises Equipment (CPE) related functions.",
"title": ""
},
{
"docid": "2f4b732a72141cffa059c52eb9f31608",
"text": "Appropriate fetal growth relies upon adequate placental nutrient transfer. Birthweight:placental weight ratio (BW:PW ratio) is often used as a proxy for placental efficiency, defined as the grams of fetus produced per gram placenta. An elevated BW:PW ratio in an appropriately grown fetus (small placenta) is assumed to be due to up-regulated placental nutrient transfer capacity i.e., a higher nutrient net flux per gram placenta. In fetal growth restriction (FGR), where a fetus fails to achieve its genetically pre-determined growth potential, placental weight and BW:PW ratio are often reduced which may indicate a placenta that fails to adapt its nutrient transfer capacity to compensate for its small size. This review considers the literature on BW:PW ratio in both large cohort studies of normal pregnancies and those studies offering insight into the relationship between BW:PW ratio and outcome measures including stillbirth, FGR, and subsequent postnatal consequences. The core of this review is the question of whether BW:PW ratio is truly indicative of altered placental efficiency, and whether changes in BW:PW ratio reflect those placentas which adapt their nutrient transfer according to their size. We consider this question using data from mice and humans, focusing upon studies that have measured the activity of the well characterized placental system A amino acid transporter, both in uncomplicated pregnancies and in FGR. Evidence suggests that BW:PW ratio is reduced both in FGR and in pregnancies resulting in a small for gestational age (SGA, birthweight < 10th centile) infant but this effect is more pronounced earlier in gestation (<28 weeks). In mice, there is a clear association between increased BW:PW ratio and increased placental system A activity. Additionally, there is good evidence in wild-type mice that small placentas upregulate placental nutrient transfer to prevent fetal undergrowth. In humans, this association between BW:PW ratio and placental system A activity is less clear and is worthy of further consideration, both in terms of system A and other placental nutrient transfer processes. This knowledge would help decide the value of measuring BW:PW ratio in terms of determining the risk of poor health outcomes, both in the neonatal period and long term.",
"title": ""
},
{
"docid": "2afe561c1b6a123936f215ffa22432f2",
"text": "Compressed sensing (CS) is a new technique for simultaneous data sampling and compression. In this paper, we propose and study block compressed sensing for natural images, where image acquisition is conducted in a block-by-block manner through the same operator. While simpler and more efficient than other CS techniques, the proposed scheme can sufficiently capture the complicated geometric structures of natural images. Our image reconstruction algorithm involves both linear and nonlinear operations such as Wiener filtering, projection onto the convex set and hard thresholding in the transform domain. Several numerical experiments demonstrate that the proposed block CS compares favorably with existing schemes at a much lower implementation cost.",
"title": ""
},
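Block-by-block acquisition with a single shared sensing operator, as described in the passage above, is easy to sketch in NumPy; the reconstruction loop (projection, Wiener filtering, transform-domain hard thresholding) is only indicated in comments, since its details depend on choices not specified here. Block size, subrate and the Gaussian operator are illustrative assumptions:

```python
import numpy as np

def block_measure(image, phi, block=32):
    """Acquire a grayscale image block-by-block with one shared sensing matrix.

    image: 2-D array whose sides are multiples of `block`
    phi:   (m, block*block) sensing matrix with m << block*block
    Returns a list of ((row, col), measurement_vector) pairs.
    """
    h, w = image.shape
    measurements = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            x = image[r:r + block, c:c + block].reshape(-1)   # vectorize the block
            measurements.append(((r, c), phi @ x))             # same operator for every block
    return measurements

rng = np.random.default_rng(0)
block, subrate = 32, 0.25
m = int(subrate * block * block)
phi = rng.standard_normal((m, block * block)) / np.sqrt(m)     # shared Gaussian operator
image = rng.random((128, 128))
y = block_measure(image, phi, block)
# A reconstruction loop would then alternate, for each block b, a projection step
#   x_b <- x_b + phi.T @ (y_b - phi @ x_b)
# with image-level smoothing (e.g. Wiener filtering) and hard thresholding of the
# assembled image in a transform domain, iterating until convergence.
```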
{
"docid": "9f68df51d0d47b539a6c42207536d012",
"text": "Schizophrenia-spectrum risk alleles may persist in the population, despite their reproductive costs in individuals with schizophrenia, through the possible creativity benefits of mild schizotypy in non-psychotic relatives. To assess this creativity-benefit model, we measured creativity (using 6 verbal and 8 drawing tasks), schizotypy, Big Five personality traits, and general intelligence in 225 University of New Mexico students. Multiple regression analyses showed that openness and intelligence, but not schizotypy, predicted reliable observer ratings of verbal and drawing creativity. Thus, the 'madness-creativity' link seems mediated by the personality trait of openness, and standard creativity-benefit models seem unlikely to explain schizophrenia's evolutionary persistence.",
"title": ""
},
{
"docid": "04af83df04a019b8364319966e7292eb",
"text": "The Semantic Web is an effort to establish standards and mechanisms that will allow computers to reason more easily about the semantics of the Web resources (documents, data etc.). Ontologies play a central role in this endeavor. An ontology provides a conceptualization of a knowledge domain (e.g., consumer electronics) by defining the classes and subclasses of the domain entities, the types of possible relations between them etc. The current standard to specify Semantic Web ontologies is OWL, a formal language based on description logics and RDF, with OWL 2 being the latest OWL standard. Given an OWL ontology for a knowledge domain, one can publish on the Web machine-readable data pertaining to that domain (e.g., catalogues of products, their features etc.), with the data having formally defined semantics based on the conceptualization of the ontology. Several OWL syntaxes have been developed, but people unfamiliar with formal knowledge representation often have difficulties understanding them. This thesis considered methods that allow end-users to view ontology-based knowledge representations of the Semantic Web in the form of automatically generated texts in multiple natural languages. The first part of the thesis improved NaturalOWL, a Natural Language Generation system for OWL ontologies previously developed at AUEB. The system was modified to support OWL 2 and to be able to produce higher quality texts. Experiments showed that the texts generated by the new version of NaturalOWL are indeed of high quality and significantly better than texts generated by simpler systems, often called ontology",
"title": ""
},
{
"docid": "cb408e52b5e96669e08f70888b11b3e3",
"text": "Centrality is one of the most studied concepts in social network analysis. There is a huge literature regarding centrality measures, as ways to identify the most relevant users in a social network. The challenge is to find measures that can be computed efficiently, and that can be able to classify the users according to relevance criteria as close as possible to reality. We address this problem in the context of the Twitter network, an online social networking service with millions of users and an impressive flow of messages that are published and spread daily by interactions between users. Twitter has different types of users, but the greatest utility lies in finding the most influential ones. The purpose of this article is to collect and classify the different Twitter influence measures that exist so far in literature. These measures are very diverse. Some are based on simple metrics provided by the Twitter API, while others are based on complex mathematical models. Several measures are based on the PageRank algorithm, traditionally used to rank the websites on the Internet. Some others consider the timeline of publication, others the content of the messages, some are focused on specific topics, and others try to make predictions. We consider all these aspects, and some additional ones. Furthermore, we include measures of activity and popularity, the traditional mechanisms to correlate measures, and some important aspects of computational complexity for this particular context.",
"title": ""
},
{
"docid": "4309fd090591a107bce978d61aff6a34",
"text": "Regular exercise training is recognized as a powerful tool to improve work capacity, endothelial function and the cardiovascular risk profile in obesity, but it is unknown which of high-intensity aerobic exercise, moderate-intensity aerobic exercise or strength training is the optimal mode of exercise. In the present study, a total of 40 subjects were randomized to high-intensity interval aerobic training, continuous moderate-intensity aerobic training or maximal strength training programmes for 12 weeks, three times/week. The high-intensity group performed aerobic interval walking/running at 85-95% of maximal heart rate, whereas the moderate-intensity group exercised continuously at 60-70% of maximal heart rate; protocols were isocaloric. The strength training group performed 'high-intensity' leg press, abdominal and back strength training. Maximal oxygen uptake and endothelial function improved in all groups; the greatest improvement was observed after high-intensity training, and an equal improvement was observed after moderate-intensity aerobic training and strength training. High-intensity aerobic training and strength training were associated with increased PGC-1alpha (peroxisome-proliferator-activated receptor gamma co-activator 1alpha) levels and improved Ca(2+) transport in the skeletal muscle, whereas only strength training improved antioxidant status. Both strength training and moderate-intensity aerobic training decreased oxidized LDL (low-density lipoprotein) levels. Only aerobic training decreased body weight and diastolic blood pressure. In conclusion, high-intensity aerobic interval training was better than moderate-intensity aerobic training in improving aerobic work capacity and endothelial function. An important contribution towards improved aerobic work capacity, endothelial function and cardiovascular health originates from strength training, which may serve as a substitute when whole-body aerobic exercise is contra-indicated or difficult to perform.",
"title": ""
},
{
"docid": "866b81f6d74164b9ef625a529b20a7b3",
"text": "16 IEEE Spectrum | February 2006 | NA www.spectrum.ieee.org Millions of people around the world are tackling one of the hardest problems in computer science—without even knowing it. The logic game Sudoku is a miniature version of a longstanding mathematical challenge, and it entices both puzzlers, who see it as an enjoyable plaything, and researchers, who see it as a laboratory for algorithm design. Sudoku has become a worldwide puzzle craze within the past year. Previously known primarily in Japan, it now graces newspapers, Web sites, and best-selling books in dozens of countries [see photo, “Number Fad”]. A puzzle consists of a 9-by-9 grid made up of nine 3-by-3 subgrids. Digits appear in some squares, and based on these starting clues, a player completes the grid so that each row, column, and subgrid contains the digits 1 through 9 exactly once. An easy puzzle requires only simple logical techniques—if a subgrid needs an 8, say, and two of the columns running through it already hold an 8, then the subgrid’s 8 must go in the remaining column. A hard puzzle requires more complex pattern recognition skills; for instance, if a player computes all possible digits for each cell in a subgrid and notices that two cells have exactly the same two choices, those two digits can be eliminated from all other cells in the subgrid. No matter the difficulty level, however, a dedicated puzzler can eventually crack a 9-by-9 Sudoku game. A computer solves a 9-by-9 Sudoku within a second by using logical tricks that are similar to the ones humans use, but finishes much faster [see puzzle, “Challenge”]. On a large scale, however, such shortcuts are not powerful enough, and checking the explosive number of combinations becomes impossible, even for the world’s fastest computers. And no one knows of an algorithm that’s guaranteed to find a solution without trying out a huge number of combinations. This places Sudoku in an infamously difficult class, called NP-complete, that includes problems of great practical importance, such as scheduling, network routing, and gene sequencing. “The question of whether there exists an efficient algorithm for solving these problems is now on just about anyone’s list of the Top 10 unsolved problems in science and mathematics in the world,” says Richard Korf, a computer scientist at the University of California at Los Angeles. The challenge is known as P = NP, where, roughly speaking, P stands for tasks that can be solved efficiently, and NP stands for tasks whose solution can be verified efficiently. (For example, it is easy to verify whether a complete Sudoku is correctly filled in, even though the puzzle may take quite a lot of time to solve.) As a member of the NP-complete subset, NUMBER FAD: A reader examines a Sudoko puzzle in The Independent, London, last May.",
"title": ""
},
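A minimal backtracking solver makes the row/column/subgrid constraint from the passage above concrete. This is plain brute-force search with constraint checking, not the human-style deduction rules or the optimized solvers the article alludes to; the function names are illustrative:

```python
def allowed(grid, r, c, d):
    """The rule the article describes: digit d may go in cell (r, c) only if it
    does not already appear in that row, column or 3x3 subgrid."""
    if any(grid[r][j] == d for j in range(9)):
        return False
    if any(grid[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

def solve(grid):
    """Backtracking Sudoku solver; grid is a 9x9 list of lists with 0 for blanks.
    Fills grid in place and returns True when a solution is found."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if allowed(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0      # undo and try the next digit
                return False                 # no digit fits: backtrack
    return True                              # no blanks left: solved
```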
{
"docid": "a4f0d57719e43e03eab308e8be23633a",
"text": "Body weight variations are an integral part of a person's aging process. However, the lack of association between the age and the weight of an individual makes it challenging to model these variations for automatic face recognition. In this paper, we propose a regularizer-based approach to learn weight invariant facial representations using two different deep learning architectures, namely, sparse-stacked denoising autoencoders and deep Boltzmann machines. We incorporate a body-weight aware regularization parameter in the loss function of these architectures to help learn weight-aware features. The experiments performed on the extended WIT database show that the introduction of weight aware regularization improves the identification accuracy of the architectures both with and without dropout.",
"title": ""
},
{
"docid": "7569c7f3983c608151fb5bbb093b3293",
"text": "A unilateral probe-fed rectangular dielectric resonator antenna (DRA) with a very small ground plane is investigated. The small ground plane simultaneously works as an excitation patch that excites the fundamental TE111 mode of the DRA, which is an equivalent magnetic dipole. By combining this equivalent magnetic dipole and the electric dipole of the probe, a lateral radiation pattern can be obtained. This complementary antenna has the same E- and H-Planes patterns with low back radiation. Moreover, the cardioid-shaped pattern can be easily steered in the horizontal plane by changing the angular position of the patch (ground). To verify the idea, a prototype operating in 3.5-GHz long term evolution band (3.4–3.6 GHz) was fabricated and measured, with reasonable agreement between the measured and simulated results obtained. It is found that the measured 15-dB front-to-back-ratio bandwidth is 10.9%.",
"title": ""
},
{
"docid": "e1fb80117a0925954b444360e227d680",
"text": "Maize is one of the most important food and feed crops in Asia, and is a source of income for several million farmers. Despite impressive progress made in the last few decades through conventional breeding in the “Asia-7” (China, India, Indonesia, Nepal, Philippines, Thailand, and Vietnam), average maize yields remain low and the demand is expected to increasingly exceed the production in the coming years. Molecular marker-assisted breeding is accelerating yield gains in USA and elsewhere, and offers tremendous potential for enhancing the productivity and value of Asian maize germplasm. We discuss the importance of such efforts in meeting the growing demand for maize in Asia, and provide examples of the recent use of molecular markers with respect to (i) DNA fingerprinting and genetic diversity analysis of maize germplasm (inbreds and landraces/OPVs), (ii) QTL analysis of important biotic and abiotic stresses, and (iii) marker-assisted selection (MAS) for maize improvement. We also highlight the constraints faced by research institutions wishing to adopt the available and emerging molecular technologies, and conclude that innovative models for resource-pooling and intellectual-property-respecting partnerships will be required for enhancing the level and scope of molecular marker-assisted breeding for maize improvement in Asia. Scientists must ensure that the tools of molecular marker-assisted breeding are focused on developing commercially viable cultivars, improved to ameliorate the most important constraints to maize production in Asia.",
"title": ""
},
{
"docid": "c95da5ee6fde5cf23b551375ff01e709",
"text": "The 3GPP has introduced the LTE-M and NB-IoT User Equipment categories and made amendments to LTE release 13 to support the cellular Internet of Things. The contribution of this paper is to analyze the coverage probability, the number of supported devices, and the device battery life in networks equipped with either of the newly standardized technologies. The study is made for a site specific network deployment of a Danish operator, and the simulation is calibrated using drive test measurements. The results show that LTE-M can provide coverage for 99.9 % of outdoor and indoor devices, if the latter is experiencing 10 dB additional loss. However, for deep indoor users NB-IoT is required and provides coverage for about 95 % of the users. The cost is support for more than 10 times fewer devices and a 2-6 times higher device power consumption. Thus both LTE-M and NB- IoT provide extended support for the cellular Internet of Things, but with different trade- offs.",
"title": ""
}
] |
scidocsrr
|
07161ea961b8797ca489003c71afb95a
|
Learning of structured graph dictionaries
|
[
{
"docid": "c1f6052ecf802f1b4b2e9fd515d7ea15",
"text": "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.",
"title": ""
}
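The alternation described in the abstract above (sparse coding of all examples against the current dictionary, followed by an SVD-based refit of each atom together with the coefficients that use it) can be sketched compactly in NumPy. This is a simplified illustration: the sizes, the choice of OMP for the pursuit stage, and the fixed iteration count are assumptions rather than the paper's exact algorithm:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse code of y over dictionary D
    (columns of D are assumed l2-normalized)."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

def ksvd(Y, num_atoms, sparsity, iters=10, seed=0):
    """Minimal K-SVD: alternate OMP sparse coding with a rank-1 SVD update of
    each atom and of the coefficients of the signals that use it."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], num_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        X = np.column_stack([omp(D, y, sparsity) for y in Y.T])   # sparse coding stage
        for j in range(num_atoms):                                # dictionary update stage
            users = np.nonzero(X[j])[0]
            if users.size == 0:
                continue
            # residual of the signals using atom j, with atom j's contribution removed
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]                                     # refit atom j
            X[j, users] = s[0] * Vt[0]                            # ...and its coefficients
    return D, X
```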
] |
[
{
"docid": "c1563013f73d59407e2e95bb0bc04874",
"text": "Linear complexity can be used to detect predictable nonrandom sequences, and hence it is included in the NIST randomness test suite. But, as shown in this paper, the NIST test suite cannot detect nonrandom sequences that are generated, for instance, by concatenating two different M-sequences with low linear complexity. This defect comes from the fact that the NIST linear complexity test uses deviation from the ideal value only in the last part of the whole linear complexity profile. In this paper, a new faithful linear complexity test is proposed, which uses deviations in all parts of the linear complexity profile and hence can detect even the above nonrandom sequences. An efficient formula is derived to compute the exact area distribution needed for the proposed test. Furthermore, a simple procedure is given to compute the proposed test statistic from linear complexity profile, which requires only O(M) time complexity for a sequence of length M. key words: randomness test, linear complexity profile, NIST SP800-22",
"title": ""
},
{
"docid": "c12bf8114484f7934f84495fb56c852c",
"text": "In recent years, call detail records (CDRs) have been widely used in human mobility research. Although CDRs are originally collected for billing purposes, the vast amount of digital footprints generated by calling and texting activities provide useful insights into population movement. However, can we fully trust CDRs given the uneven distribution of people’s phone communication activities in space and time? In this article, we investigate this issue using a mobile phone location dataset collected from over one million subscribers in Shanghai, China. It includes CDRs (~27%) plus other cellphone-related logs (e.g., tower pings, cellular handovers) generated in a workday. We extract all CDRs into a separate dataset in order to compare human mobility patterns derived from CDRs vs. from the complete dataset. From an individual perspective, the effectiveness of CDRs in estimating three frequently used mobility indicators is evaluated. We find that CDRs tend to underestimate the total travel distance and the movement entropy, while they can provide a good estimate to the radius of gyration. In addition, we observe that the level of deviation is related to the ratio of CDRs in an individual’s trajectory. From a collective perspective, we compare the outcomes of these two datasets in terms of the distance decay effect and urban community detection. The major differences are closely related to the habit of mobile phone usage in space and time. We believe that the event-triggered nature of CDRs does introduce a certain degree of bias in human mobility research and we suggest that researchers use caution to interpret results derived from CDR data. ARTICLE HISTORY Received 21 July 2015 Accepted 21 December 2015",
"title": ""
},
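The three mobility indicators compared in the passage above have simple definitions that a short sketch can make explicit. The version below assumes locations are already projected to planar coordinates (no haversine distance) and that visits are given as cell/tower identifiers for the entropy; these simplifications, and the function names, are assumptions of this illustration:

```python
import numpy as np
from collections import Counter

def radius_of_gyration(points):
    """Root-mean-square distance of a user's observed locations from their centroid."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    return float(np.sqrt(((pts - center) ** 2).sum(axis=1).mean()))

def total_travel_distance(points):
    """Sum of straight-line hops between consecutive observed locations."""
    pts = np.asarray(points, dtype=float)
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

def movement_entropy(cell_ids):
    """Shannon entropy of the visiting-frequency distribution over cells/towers."""
    counts = np.array(list(Counter(cell_ids).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Dropping records (as a CDR-only trace effectively does) shortens the observed
# path and flattens the visit distribution, which is consistent with distance and
# entropy being underestimated while the centroid-based radius of gyration is
# comparatively robust.
```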
{
"docid": "7ea9a21bdbbda91c4cfa3e75e4fbed6f",
"text": "We present algorithms for fast quantile and frequency estimation in large data streams using graphics processors (GPUs). We exploit the high computation power and memory bandwidth of graphics processors and present a new sorting algorithm that performs rasterization operations on the GPUs. We use sorting as the main computational component for histogram approximation and construction of ε-approximate quantile and frequency summaries. Our algorithms for numerical statistics computation on data streams are deterministic, applicable to fixed or variable-sized sliding windows and use a limited memory footprint. We use GPU as a co-processor and minimize the data transmission between the CPU and GPU by taking into account the low bus bandwidth. We implemented our algorithms on a PC with a NVIDIA GeForce FX 6800 Ultra GPU and a 3.4 GHz Pentium IV CPU and applied them to large data streams consisting of more than 100 million values. We also compared the performance of our GPU-based algorithms with optimized implementations of prior CPU-based algorithms. Overall, our results demonstrate that the graphics processors available on a commodity computer system are efficient stream-processor and useful co-processors for mining data streams.",
"title": ""
},
{
"docid": "e4a3a52e297d268288aba404f0d24544",
"text": "The world is facing several challenges that must be dealt within the coming years such as efficient energy management, need for economic growth, security and quality of life of its habitants. The increasing concentration of the world population into urban areas puts the cities in the center of the preoccupations and makes them important actors for the world's sustainable development strategy. ICT has a substantial potential to help cities to respond to the growing demands of more efficient, sustainable, and increased quality of life in the cities, thus to make them \"smarter\". Smartness is directly proportional with the \"awareness\". Cyber-physical systems can extract the awareness information from the physical world and process this information in the cyber-world. Thus, a holistic integrated approach, from the physical to the cyber-world is necessary for a successful and sustainable smart city outcome. This paper introduces important research challenges that we believe will be important in the coming years and provides guidelines and recommendations to achieve self-aware smart city objectives.",
"title": ""
},
{
"docid": "16db60e96604f65f8b6f4f70e79b8ae5",
"text": "Yahoo! Answers is currently one of the most popular question answering systems. We claim however that its user experience could be significantly improved if it could route the \"right question\" to the \"right user.\" Indeed, while some users would rush answering a question such as \"what should I wear at the prom?,\" others would be upset simply being exposed to it. We argue here that Community Question Answering sites in general and Yahoo! Answers in particular, need a mechanism that would expose users to questions they can relate to and possibly answer.\n We propose here to address this need via a multi-channel recommender system technology for associating questions with potential answerers on Yahoo! Answers. One novel aspect of our approach is exploiting a wide variety of content and social signals users regularly provide to the system and organizing them into channels. Content signals relate mostly to the text and categories of questions and associated answers, while social signals capture the various user interactions with questions, such as asking, answering, voting, etc. We fuse and generalize known recommendation approaches within a single symmetric framework, which incorporates and properly balances multiple types of signals according to channels. Tested on a large scale dataset, our model exhibits good performance, clearly outperforming standard baselines.",
"title": ""
},
{
"docid": "4467f4fc7e9f1199ca6b57f7818ca42c",
"text": "Banking in several developing countries has transcended from a traditional brick-and mortar model of customers queuing for services in the banks to modern day banking where banks can be reached at any point for their services. This can be attributed to the tremendous growth in mobile penetration in many countries across the globe including Jordan. The current exploratory study is an attempt to identify the underlying factors that affects mobile banking adoption in Jordan. Data for this study have been collected using a questionnaire containing 22 questions. Out of 450 questionnaires that have been distributed, 301 are returned (66.0%). In the survey, factors that may affect Jordanian mobile phone users' to adopt mobile banking services were examined. The research findings suggested that all the six factors; self efficacy, trailability, compatibility, complexity, risk and relative advantage were statistically significant in influencing mobile banking adoption.",
"title": ""
},
{
"docid": "3231eedb6c06d3ce428f3c20dac5c37d",
"text": "In this study, differential evolution algorithm (DE) is proposed to train a wavelet neural network (WNN). The resulting network is named as differential evolution trained wavelet neural network (DEWNN). The efficacy of DEWNN is tested on bankruptcy prediction datasets viz. US banks, Turkish banks and Spanish banks. Further, its efficacy is also tested on benchmark datasets such as Iris, Wine and Wisconsin Breast Cancer. Moreover, Garson’s algorithm for feature selection in multi layer perceptron is adapted in the case of DEWNN. The performance of DEWNN is compared with that of threshold accepting trained wavelet neural network (TAWNN) [Vinay Kumar, K., Ravi, V., Mahil Carr, & Raj Kiran, N. (2008). Software cost estimation using wavelet neural networks. Journal of Systems and Software] and the original wavelet neural network (WNN) in the case of all data sets without feature selection and also in the case of four data sets where feature selection was performed. The whole experimentation is conducted using 10-fold cross validation method. Results show that soft computing hybrids viz., DEWNN and TAWNN outperformed the original WNN in terms of accuracy and sensitivity across all problems. Furthermore, DEWNN outscored TAWNN in terms of accuracy and sensitivity across all problems except Turkish banks dataset. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
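As a rough illustration of how a differential evolution loop could drive the training of a network whose parameters are flattened into a single vector, a standard DE/rand/1/bin sketch is shown below. The population size, F, CR and the toy objective are assumptions; the actual DEWNN encoding of wavelet-network weights, translations and dilations is not reproduced here:

```python
import numpy as np

def differential_evolution(loss, dim, pop_size=30, F=0.8, CR=0.9, generations=200, seed=0):
    """DE/rand/1/bin: evolve a population of flat parameter vectors against `loss`."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    fitness = np.array([loss(ind) for ind in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = a + F * (b - c)                       # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                # at least one gene from the mutant
            trial = np.where(cross, mutant, pop[i])        # binomial crossover
            f_trial = loss(trial)
            if f_trial <= fitness[i]:                      # greedy selection
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

# In a DEWNN-style setup, loss(theta) would decode theta into the wavelet network's
# parameters and return its training error; here a toy quadratic stands in for that:
theta, err = differential_evolution(lambda t: float(np.sum((t - 0.5) ** 2)), dim=20)
```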
{
"docid": "95d1a35068e7de3293f8029e8b8694f9",
"text": "Botnet is one of the major threats on the Internet for committing cybercrimes, such as DDoS attacks, stealing sensitive information, spreading spams, etc. It is a challenging issue to detect modern botnets that are continuously improving for evading detection. In this paper, we propose a machine learning based botnet detection system that is shown to be effective in identifying P2P botnets. Our approach extracts convolutional version of effective flow-based features, and trains a classification model by using a feed-forward artificial neural network. The experimental results show that the accuracy of detection using the convolutional features is better than the ones using the traditional features. It can achieve 94.7% of detection accuracy and 2.2% of false positive rate on the known P2P botnet datasets. Furthermore, our system provides an additional confidence testing for enhancing performance of botnet detection. It further classifies the network traffic of insufficient confidence in the neural network. The experiment shows that this stage can increase the detection accuracy up to 98.6% and decrease the false positive rate up to 0.5%.",
"title": ""
},
{
"docid": "e227e21d9b0523fdff82ca898fea0403",
"text": "As computer games become more complex and consumers demand more sophisticated computer controlled agents, developers are required to place a greater emphasis on the artificial intelligence aspects of their games. One source of sophisticated AI techniques is the artificial intelligence research community. This paper discusses recent efforts by our group at the University of Michigan Artificial Intelligence Lab to apply state of the art artificial intelligence techniques to computer games. Our experience developing intelligent air combat agents for DARPA training exercises, described in John Laird's lecture at the 1998 Computer Game Developer's Conference, suggested that many principles and techniques from the research community are applicable to games. A more recent project, called the Soar/Games project, has followed up on this by developing agents for computer games, including Quake II and Descent 3. The result of these two research efforts is a partially implemented design of an artificial intelligence engine for games based on well established AI systems and techniques.",
"title": ""
},
{
"docid": "c063474634eb427cf0215b4500182f8c",
"text": "Factorization Machines offer good performance and useful embeddings of data. However, they are costly to scale to large amounts of data and large numbers of features. In this paper we describe DiFacto, which uses a refined Factorization Machine model with sparse memory adaptive constraints and frequency adaptive regularization. We show how to distribute DiFacto over multiple machines using the Parameter Server framework by computing distributed subgradients on minibatches asynchronously. We analyze its convergence and demonstrate its efficiency in computational advertising datasets with billions examples and features.",
"title": ""
},
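For context, the model family that DiFacto refines is the second-order factorization machine; a minimal sketch of its prediction rule, using the usual O(d·k) reformulation of the pairwise term, is given below. The distributed subgradient and parameter-server machinery from the paper is not shown, and all sizes are illustrative:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine score for one dense feature vector x.

    w0: bias, w: (d,) linear weights, V: (d, k) factor matrix. The pairwise term
    uses the identity  sum_{i<j} <v_i, v_j> x_i x_j
      = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ].
    """
    linear = w0 + w @ x
    s = V.T @ x                       # (k,) per-factor sums
    s_sq = (V ** 2).T @ (x ** 2)      # (k,) per-factor sums of squares
    return float(linear + 0.5 * np.sum(s ** 2 - s_sq))

rng = np.random.default_rng(0)
d, k = 8, 4
x = rng.random(d)
w0, w, V = 0.1, rng.standard_normal(d), 0.05 * rng.standard_normal((d, k))
print(fm_predict(x, w0, w, V))
```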
{
"docid": "d5e2d1f3662d66f6d4cfc1c98e4de610",
"text": "Compressed sensing (CS) enables significant reduction of MR acquisition time with performance guarantee. However, computational complexity of CS is usually expensive. To address this, here we propose a novel deep residual learning algorithm to reconstruct MR images from sparsely sampled k-space data. In particular, based on the observation that coherent aliasing artifacts from downsampled data has topologically simpler structure than the original image data, we formulate a CS problem as a residual regression problem and propose a deep convolutional neural network (CNN) to learn the aliasing artifacts. Experimental results using single channel and multi channel MR data demonstrate that the proposed deep residual learning outperforms the existing CS and parallel imaging algorithms. Moreover, the computational time is faster in several orders of magnitude.",
"title": ""
},
{
"docid": "dc817bc11276d76f8d97f67e4b1b2155",
"text": "Abstract A Security Operation Center (SOC) is made up of five distinct modules: event generators, event collectors, message database, analysis engines and reaction management software. The main problem encountered when building a SOC is the integration of all these modules, usually built as autonomous parts, while matching availability, integrity and security of data and their transmission channels. In this paper we will discuss the functional architecture needed to integrate those modules. Chapter one will introduce the concepts behind each module and briefly describe common problems encountered with each of them. In chapter two we will design the global architecture of the SOC. We will then focus on collection & analysis of data generated by sensors in chapters three and four. A short conclusion will describe further research & analysis to be performed in the field of SOC design.",
"title": ""
},
{
"docid": "c2c85e02b2eb3c73ece4e43aae42ff28",
"text": "The security of many computer systems hinges on the secrecy of a single word – if an adversary obtains knowledge of a password, they will gain access to the resources controlled by this password. Human users are the ‘weakest link’ in password control, due to our propensity to reuse passwords and to create weak ones. Policies which forbid such unsafe password practices are often violated, even if these policies are well-advertised. We have studied how users perceive their accounts and their passwords. Our participants mentally classified their accounts and passwords into a few groups, based on a small number of perceived similarities. Our participants used stronger passwords, and reused passwords less, in account groups which they considered more important. Our participants thus demonstrated awareness of the basic tenets of password safety, but they did not behave safely in all respects. Almost half of our participants reused at least one of the passwords in their high-importance accounts. Our findings add to the body of evidence that a typical computer user suffers from ‘password overload’. Our concepts of password and account grouping point the way toward more intuitive user interfaces for passwordand account-management systems. .",
"title": ""
},
{
"docid": "a87b48ee446cbda34e8d878cffbd19bb",
"text": "Introduction. In spite of significant changes in the management policies of intersexuality, clinical evidence show that not all pubertal or adult individuals live according to the assigned sex during infancy. Aim. The purpose of this study was to analyze the clinical management of an individual diagnosed as a female pseudohermaphrodite with congenital adrenal hyperplasia (CAH) simple virilizing form four decades ago but who currently lives as a monogamous heterosexual male. Methods. We studied the clinical files spanning from 1965 to 1991 of an intersex individual. In addition, we conducted a magnetic resonance imaging (MRI) study of the abdominoplevic cavity and a series of interviews using the oral history method. Main Outcome Measures. Our analysis is based on the clinical evidence that led to the CAH diagnosis in the 1960s in light of recent clinical testing to confirm such diagnosis. Results. Analysis of reported values for 17-ketosteroids, 17-hydroxycorticosteroids, from 24-hour urine samples during an 8-year period showed poor adrenal suppression in spite of adherence to treatment. A recent MRI study confirmed the presence of hyperplastic adrenal glands as well as the presence of a prepubertal uterus. Semistructured interviews with the individual confirmed a life history consistent with a male gender identity. Conclusions. Although the American Academy of Pediatrics recommends that XX intersex individuals with CAH should be assigned to the female sex, this practice harms some individuals as they may self-identify as males. In the absence of comorbid psychiatric factors, the discrepancy between infant sex assignment and gender identity later in life underlines the need for a reexamination of current standards of care for individuals diagnosed with CAH. Jorge JC, Echeverri C, Medina Y, and Acevedo P. Male gender identity in an xx individual with congenital adrenal hyperplasia. J Sex Med 2008;5:122–131.",
"title": ""
},
{
"docid": "bbd85124fd2e40d887ebd792e275edaf",
"text": "IoT (Internet of Things) based smart devices such as sensors have been actively used in edge clouds i.e., ‘fogs’ along with public clouds. They provide critical data during scenarios ranging from e.g., disaster response to in-home healthcare. However, for these devices to work effectively, end-to-end security schemes for the device communication protocols have to be flexible and should depend upon the application requirements as well as the resource constraints at the network-edge. In this paper, we present the design and implementation of a flexible IoT security middleware for end-to-end cloud-fog communications involving smart devices and cloud-hosted applications. The novel features of our middleware are in its ability to cope with intermittent network connectivity as well as device constraints in terms of computational power, memory, energy, and network bandwidth. To provide security during intermittent network conditions, we use a ‘Session Resumption’ algorithm in order for our middleware to reuse encrypted sessions from the recent past, if a recently disconnected device wants to resume a prior connection that was interrupted. In addition, we describe an ‘Optimal Scheme Decider’ algorithm that enables our middleware to select the best possible end-to-end security scheme option that matches with a given set of device constraints. Experiment results show how our middleware implementation also provides fast and resource-aware security by leveraging static properties i.e., static pre-shared keys (PSKs) for a variety of IoT-based application requirements that have trade-offs in higher security or faster data transfer rates.",
"title": ""
},
{
"docid": "6381c10a963b709c4af88047f38cc08c",
"text": "A great deal of research has been focused on solving the job-shop problem (ΠJ), over the last forty years, resulting in a wide variety of approaches. Recently, much effort has been concentrated on hybrid methods to solve ΠJ as a single technique cannot solve this stubborn problem. As a result much effort has recently been concentrated on techniques that combine myopic problem specific methods and a meta-strategy which guides the search out of local optima. These approaches currently provide the best results. Such hybrid techniques are known as iterated local search algorithms or meta-heuristics. In this paper we seek to assess the work done in the job-shop domain by providing a review of many of the techniques used. The impact of the major contributions is indicated by applying these techniques to a set of standard benchmark problems. It is established that methods such as Tabu Search, Genetic Algorithms, Simulated Annealing should be considered complementary rather than competitive. In addition this work suggests guide-lines on features that should be incorporated to create a good ΠJ system. Finally the possible direction for future work is highlighted so that current barriers within ΠJ maybe surmounted as we approach the 21st Century.",
"title": ""
},
{
"docid": "689c1a1104dcc27c03830b48543fa3df",
"text": "In modern Web applications, the process of user-profiling provides a way to capture user-specific information, which then serves as a source for designing personalized user experiences. Currently, such information about a particular user is available from multiple online sources/services, like social media applications, professional/social networking sites, location based service providers or even from simple Web-pages. The nature of this data being truly heterogeneous, high in volume and also highly dynamic over time, the problem of collecting these data artifacts from disparate sources, to enable complete user-profiling can be challenging. In this paper, we present an approach to dynamically build a structured user profile, that emphasizes the temporal nature to capture dynamic user behavior. The user profile is compiled from multiple, heterogeneous data sources which capture dynamic user actions over time, to capture changing preferences accurately. Natural language processing techniques, machine learning and concepts of the semantic Web were used for capturing relevant user data and implement the proposed “3D User Profile”. Our technique also supports the representation of the generated user profiles as structured data so that other personalized recommendation systems and Semantic Web/Linked Open Data applications can consume them for providing intelligent, personalized services.",
"title": ""
},
{
"docid": "ea0c9e70789c43e2c14c0b35d8f45dc2",
"text": "Harlequin ichthyosis (HI) is a rare and severe form of congenital ichthyosis. Linked to deletion and truncation mutations of a keratinocyte lipid transporter, HI is characterized by diffuse epidermal hyperkeratinization and defective desquamation. At birth, the HI phenotype is striking with thick hyperkeratotic plate-like scales with deep dermal fissures, severe ectropion and eclabium, among other findings. Over the first months of life, the hyperkeratotic covering is shed, revealing a diffusely erythematous, scaly epidermis, which persists for the remainder of the patient's life. Although HI infants have historically succumbed in the perinatal period related to their profound epidermal compromise, the prognosis of HI infants has vastly improved over the past 20 years. Here, we report a case of HI treated with acitretin, focusing on the multi-faceted management of the disease in the inpatient setting. A review of the literature of the management of HI during the perinatal period is also presented.",
"title": ""
},
{
"docid": "c44c90f8b43450e938473e6917a3ff8c",
"text": "The allocation and recognition of cotton leaf diseases are of the major importance as they have a cogent and momentous impact on quality and production of cotton. This paper presents a modus operandi for automatic classification of cotton leaf diseases through feature extraction of leaf symptoms from digital images. Otsu’s segmentation method is used for extracting color and shape features. Support vector machines (SVM) had been used to do classification on the extracted features. Three diseases have been diagnosed, namely Bacterial Blight, Myrothecium and Alternaria. The testing samples of the images are gathered from CICR Nagpur, cotton fields in Buldhana & Wardha district.",
"title": ""
}
] |
scidocsrr
|
aa2154918a45ccf740d744604925ba81
|
Modelling Compression with Discourse Constraints
|
[
{
"docid": "f48ce749a592d83a8fd60485b6b87ea6",
"text": "We present a system for the semantic role labeling task. The system combines a machine learning technique with an inference procedure based on integer linear programming that supports the incorporation of linguistic and structural constraints into the decision process. The system is tested on the data provided in CoNLL2004 shared task on semantic role labeling and achieves very competitive results.",
"title": ""
}
] |
[
{
"docid": "a8b99c09d71135f96a21600527dd58fa",
"text": "When a program is modified during software evolution, developers typically run the new version of the program against its existing test suite to validate that the changes made on the program did not introduce unintended side effects (i.e., regression faults). This kind of regression testing can be effective in identifying some regression faults, but it is limited by the quality of the existing test suite. Due to the cost of testing, developers build test suites by finding acceptable tradeoffs between cost and thoroughness of the tests. As a result, these test suites tend to exercise only a small subset of the program's functionality and may be inadequate for testing the changes in a program. To address this issue, we propose a novel approach called Behavioral Regression Testing (BERT). Given two versions of a program, BERT identifies behavioral differences between the two versions through dynamical analysis, in three steps. First, it generates a large number of test inputs that focus on the changed parts of the code. Second, it runs the generated test inputs on the old and new versions of the code and identifies differences in the tests' behavior. Third, it analyzes the identified differences and presents them to the developers. By focusing on a subset of the code and leveraging differential behavior, BERT can provide developers with more (and more detailed) information than traditional regression testing techniques. To evaluate BERT, we implemented it as a plug-in for Eclipse, a popular Integrated Development Environment, and used the plug-in to perform a preliminary study on two programs. The results of our study are promising, in that BERT was able to identify true regression faults in the programs.",
"title": ""
},
{
"docid": "7b4400c6ef5801e60a6f821810538381",
"text": "A CMOS self-biased fully differential amplifier is presented. Due to the self-biasing structure of the amplifier and its associated negative feedback, the amplifier is compensated to achieve low sensitivity to process, supply voltage and temperature (PVT) variations. The output common-mode voltage of the amplifier is adjusted through the same biasing voltages provided by the common-mode feedback (CMFB) circuit. The amplifier core is based on a simple structure that uses two CMOS inverters to amplify the input differential signal. Despite its simple structure, the proposed amplifier is attractive to a wide range of applications, specially those requiring low power and small silicon area. As two examples, a sample-and-hold circuit and a second order multi-bit sigma-delta modulator either employing the proposed amplifier are presented. Besides these application examples, a set of amplifier performance parameters is given.",
"title": ""
},
{
"docid": "3edab364abeabc97b55e8d711217b734",
"text": "To facilitate collaboration over sensitive data, we present DataSynthesizer, a tool that takes a sensitive dataset as input and generates a structurally and statistically similar synthetic dataset with strong privacy guarantees. The data owners need not release their data, while potential collaborators can begin developing models and methods with some confidence that their results will work similarly on the real dataset. The distinguishing feature of DataSynthesizer is its usability --- the data owner does not have to specify any parameters to start generating and sharing data safely and effectively.\n DataSynthesizer consists of three high-level modules --- DataDescriber, DataGenerator and ModelInspector. The first, DataDescriber, investigates the data types, correlations and distributions of the attributes in the private dataset, and produces a data summary, adding noise to the distributions to preserve privacy. DataGenerator samples from the summary computed by DataDescriber and outputs synthetic data. ModelInspector shows an intuitive description of the data summary that was computed by DataDescriber, allowing the data owner to evaluate the accuracy of the summarization process and adjust any parameters, if desired.\n We describe DataSynthesizer and illustrate its use in an urban science context, where sharing sensitive, legally encumbered data between agencies and with outside collaborators is reported as the primary obstacle to data-driven governance.\n The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/DataSynthesizer.",
"title": ""
},
{
"docid": "34208fafbb3009a1bb463e3d8d983e61",
"text": "A large and growing number of web pages display contextual advertising based on keywords automatically extracted from the text of the page, and this is a substantial source of revenue supporting the web today. Despite the importance of this area, little formal, published research exists. We describe a system that learns how to extract keywords from web pages for advertisement targeting. The system uses a number of features, such as term frequency of each potential keyword, inverse document frequency, presence in meta-data, and how often the term occurs in search query logs. The system is trained with a set of example pages that have been hand-labeled with \"relevant\" keywords. Based on this training, it can then extract new keywords from previously unseen pages. Accuracy is substantially better than several baseline systems.",
"title": ""
},
{
"docid": "1a66727305984ae359648e4bd3e75ba2",
"text": "Self-organizing models constitute valuable tools for data visualization, clustering, and data mining. Here, we focus on extensions of basic vector-based models by recursive computation in such a way that sequential and tree-structured data can be processed directly. The aim of this article is to give a unified review of important models recently proposed in literature, to investigate fundamental mathematical properties of these models, and to compare the approaches by experiments. We first review several models proposed in literature from a unifying perspective, thereby making use of an underlying general framework which also includes supervised recurrent and recursive models as special cases. We shortly discuss how the models can be related to different neuron lattices. Then, we investigate theoretical properties of the models in detail: we explicitly formalize how structures are internally stored in different context models and which similarity measures are induced by the recursive mapping onto the structures. We assess the representational capabilities of the models, and we shortly discuss the issues of topology preservation and noise tolerance. The models are compared in an experiment with time series data. Finally, we add an experiment for one context model for tree-structured data to demonstrate the capability to process complex structures.",
"title": ""
},
{
"docid": "ca4696183f72882d2f69cc17ab761ef3",
"text": "Entropy, as it relates to dynamical systems, is the rate of information production. Methods for estimation of the entropy of a system represented by a time series are not, however, well suited to analysis of the short and noisy data sets encountered in cardiovascular and other biological studies. Pincus introduced approximate entropy (ApEn), a set of measures of system complexity closely related to entropy, which is easily applied to clinical cardiovascular and other time series. ApEn statistics, however, lead to inconsistent results. We have developed a new and related complexity measure, sample entropy (SampEn), and have compared ApEn and SampEn by using them to analyze sets of random numbers with known probabilistic character. We have also evaluated cross-ApEn and cross-SampEn, which use cardiovascular data sets to measure the similarity of two distinct time series. SampEn agreed with theory much more closely than ApEn over a broad range of conditions. The improved accuracy of SampEn statistics should make them useful in the study of experimental clinical cardiovascular and other biological time series.",
"title": ""
},
{
"docid": "71cc535dcae1b50f9fe3314f4140d916",
"text": "Information and communications technology has fostered the rise of the sharing economy, enabling individuals to share excess capacity. In this paper, we focus on Airbnb.com, which is among the most prominent examples of the sharing economy. We take the perspective of an accommodation provider and investigate the concept of trust, which facilitates complete strangers to form temporal C2C relationships on Airbnb.com. In fact, the implications of trust in the sharing economy fundamentally differ to related online industries. In our research model, we investigate the formation of trust by incorporating two antecedents – ‘Disposition to trust’ and ‘Familiarity with Airbnb.com’. Furthermore, we differentiate between ‘Trust in Airbnb.com’ and ‘Trust in renters’ and examine their implications on two provider intentions. To seek support for our research model, we conducted a survey with 189 participants. The results show that both trust constructs are decisive to successfully initiate a sharing deal between two parties.",
"title": ""
},
{
"docid": "3335a737dbd959b6ea69b240a053f1e9",
"text": "The amount of effort needed to maintain a software system is related to the technical quality of the source code of that system. The ISO 9126 model for software product quality recognizes maintainability as one of the 6 main characteristics of software product quality, with adaptability, changeability, stability, and testability as subcharacteristics of maintainability. Remarkably, ISO 9126 does not provide a consensual set of measures for estimating maintainability on the basis of a system's source code. On the other hand, the maintainability index has been proposed to calculate a single number that expresses the maintainability of a system. In this paper, we discuss several problems with the MI, and we identify a number of requirements to be fulfilled by a maintainability model to be usable in practice. We sketch a new maintainability model that alleviates most of these problems, and we discuss our experiences with using such as system for IT management consultancy activities.",
"title": ""
},
{
"docid": "82d7a2b6045e90731d510ce7cce1a93c",
"text": "INTRODUCTION\nExtracellular vesicles (EVs) are critical mediators of intercellular communication, capable of regulating the transcriptional landscape of target cells through horizontal transmission of biological information, such as proteins, lipids, and RNA species. This capability highlights their potential as novel targets for disease intervention. Areas covered: This review focuses on the emerging importance of discovery proteomics (high-throughput, unbiased quantitative protein identification) and targeted proteomics (hypothesis-driven quantitative protein subset analysis) mass spectrometry (MS)-based strategies in EV biology, especially exosomes and shed microvesicles. Expert commentary: Recent advances in MS hardware, workflows, and informatics provide comprehensive, quantitative protein profiling of EVs and EV-treated target cells. This information is seminal to understanding the role of EV subtypes in cellular crosstalk, especially when integrated with other 'omics disciplines, such as RNA analysis (e.g., mRNA, ncRNA). Moreover, high-throughput MS-based proteomics promises to provide new avenues in identifying novel markers for detection, monitoring, and therapeutic intervention of disease.",
"title": ""
},
{
"docid": "a63cc19137ead27acf5530c0bdb924f5",
"text": "We in this paper solve the problem of high-quality automatic real-time background cut for 720p portrait videos. We first handle the background ambiguity issue in semantic segmentation by proposing a global background attenuation model. A spatial-temporal refinement network is developed to further refine the segmentation errors in each frame and ensure temporal coherence in the segmentation map. We form an end-to-end network for training and testing. Each module is designed considering efficiency and accuracy. We build a portrait dataset, which includes 8,000 images with high-quality labeled map for training and testing. To further improve the performance, we build a portrait video dataset with 50 sequences to fine-tune video segmentation. Our framework benefits many video processing applications.",
"title": ""
},
{
"docid": "f3083088c9096bb1932b139098cbd181",
"text": "OBJECTIVE\nMaiming and death due to dog bites are uncommon but preventable tragedies. We postulated that patients admitted to a level I trauma center with dog bites would have severe injuries and that the gravest injuries would be those caused by pit bulls.\n\n\nDESIGN\nWe reviewed the medical records of patients admitted to our level I trauma center with dog bites during a 15-year period. We determined the demographic characteristics of the patients, their outcomes, and the breed and characteristics of the dogs that caused the injuries.\n\n\nRESULTS\nOur Trauma and Emergency Surgery Services treated 228 patients with dog bite injuries; for 82 of those patients, the breed of dog involved was recorded (29 were injured by pit bulls). Compared with attacks by other breeds of dogs, attacks by pit bulls were associated with a higher median Injury Severity Scale score (4 vs. 1; P = 0.002), a higher risk of an admission Glasgow Coma Scale score of 8 or lower (17.2% vs. 0%; P = 0.006), higher median hospital charges ($10,500 vs. $7200; P = 0.003), and a higher risk of death (10.3% vs. 0%; P = 0.041).\n\n\nCONCLUSIONS\nAttacks by pit bulls are associated with higher morbidity rates, higher hospital charges, and a higher risk of death than are attacks by other breeds of dogs. Strict regulation of pit bulls may substantially reduce the US mortality rates related to dog bites.",
"title": ""
},
{
"docid": "7f69fbcda9d6ee11d5cc1591a88b6403",
"text": "Voice conversion is defined as modifying the speech signal of one speaker (source speaker) so that it sounds as if it had been pronounced by a different speaker (target speaker). This paper describes a system for efficient voice conversion. A novel mapping function is presented which associates the acoustic space of the source speaker with the acoustic space of the target speaker. The proposed system is based on the use of a Gaussian Mixture Model, GMM, to model the acoustic space of a speaker and a pitch synchronous harmonic plus noise representation of the speech signal for prosodic modifications. The mapping function is a continuous parametric function which takes into account the probab ilistic classification provided by the mixture model (GMM). Evaluation by objective tests showed that the proposed system was able to reduce the perceptual distance between the source and target speaker by 70%. Formal listening tests also showed that 97% of the converted speech was judged to be spoken from the target speaker while maintaining high speech qua lity.",
"title": ""
},
{
"docid": "1fdecf272795a163d32838022247568e",
"text": "This paper presents an anisotropy-based position estimation approach taking advantage of saturation effects in permanent magnet synchronous machines (PMSM). Due to magnetic anisotropies of the electrical machine, current responses to high-frequency voltage excitations contain rotor position information. Therefore, the rotor position can be estimated by means of these current responses. The relation between the high-frequency current changes, the applied phase voltages and the rotor position is given by the inverse inductance matrix of the machine. In this paper, an analytical model of the inverse inductance matrix considering secondary anisotropies and saturation effects is proposed. It is shown that the amount of rotor position information contained in these current changes depends on the direction of the voltage excitation and the operating point. By means of this knowledge, a position estimation approach for slowly-sampled control systems is developed. Experimental results show the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "c16a6e967bec774cdefacc110753743e",
"text": "In this letter, a top-gated field-effect device (FED) manufactured from monolayer graphene is investigated. Except for graphene deposition, a conventional top-down CMOS-compatible process flow is applied. Carrier mobilities in graphene pseudo-MOS structures are compared to those obtained from the top-gated Graphene-FEDs. The extracted values exceed the universal mobility of silicon and silicon-on-insulator MOSFETs",
"title": ""
},
{
"docid": "a31d88d98a3a335979a271c9bc57b86f",
"text": "Sympathetic nervous system (SNS) activity plays a significant role in cardiovascular control. Preejection period (PEP) is a noninvasive biomarker that reflects SNS activity. In this paper, unobtrusive estimation of PEP of the heart using ballistocardiogram (BCG) and electrocardiogram (ECG) signals is investigated. Although previous work has shown that the time intervals from ECG R-peak to BCG I and J peaks are correlated with PEP, relying on a single BCG beat can be prone to errors. An approach is proposed based on multiple regression and use of initial training data sets with a reference standard, impedance cardiography (ICG). For evaluation, healthy subjects were asked to stand on a force plate to record BCG and ECG signals. Regression coefficients were obtained using leave-one-out cross-validation and the true PEP values were obtained using ECG and ICG. Regression coefficients were averaged over two different recordings from the same subjects. The estimation performance was evaluated based on the data, via leave-one-out cross-validation. Multiple regression is shown to reduce the mean absolute error and the root mean square error, and has a reduced confidence interval compared with the models based on only a single feature. This paper shows that the fusion of multiple timing intervals can be useful for improved PEP estimation.",
"title": ""
},
{
"docid": "d74486ee2c479d6f644630e38f90f386",
"text": "ion? Does it supplant the real or is there, in it, reality itself? Like so many true things, this one doesn't resolve itself to a black or a white. Nor is it gray. It is, along with the rest of life, black/white. Both/neither.\" {John Perry Barlow 1995, p. 56) 1. What Is Infrastructure? People who study how technology affects organizational transformation increasingly recognize its dual, paradoxical nature. It is both engine and barrier for change; both customizable and rigid; both inside and outside organizational practices. It is product and process. Some authors have analyzed this seeming paradox as structuration: (after Giddens)—technological rigidities give rise to adaptations which in turn require calibration and standardization. Over time, structureagency relations re-form dialectically (Orlikowski 1991, Davies and Mitchell 1994, Korpela 1994). This paradox is integral to large scale, dispersed technologies (Brown 1047-7047/96/0701/0111$01.25 Copyright © 1996, Institute for Operations Research and the Management Sciences INFORMATION SYSTEMS RESEARCH Vol. 7, No. 1, March 1996 111",
"title": ""
},
{
"docid": "2e6b034cbb73d91b70e3574a06140621",
"text": "ETHNOPHARMACOLOGICAL RELEVANCE\nBitter melon (Momordica charantia L.) has been widely used as an traditional medicine treatment for diabetic patients in Asia. In vitro and animal studies suggested its hypoglycemic activity, but limited human studies are available to support its use.\n\n\nAIM OF STUDY\nThis study was conducted to assess the efficacy and safety of three doses of bitter melon compared with metformin.\n\n\nMATERIALS AND METHODS\nThis is a 4-week, multicenter, randomized, double-blind, active-control trial. Patients were randomized into 4 groups to receive bitter melon 500 mg/day, 1,000 mg/day, and 2,000 mg/day or metformin 1,000 mg/day. All patients were followed for 4 weeks.\n\n\nRESULTS\nThere was a significant decline in fructosamine at week 4 of the metformin group (-16.8; 95% CI, -31.2, -2.4 μmol/L) and the bitter melon 2,000 mg/day group (-10.2; 95% CI, -19.1, -1.3 μmol/L). Bitter melon 500 and 1,000 mg/day did not significantly decrease fructosamine levels (-3.5; 95% CI -11.7, 4.6 and -10.3; 95% CI -22.7, 2.2 μmol/L, respectively).\n\n\nCONCLUSIONS\nBitter melon had a modest hypoglycemic effect and significantly reduced fructosamine levels from baseline among patients with type 2 diabetes who received 2,000 mg/day. However, the hypoglycemic effect of bitter melon was less than metformin 1,000 mg/day.",
"title": ""
},
{
"docid": "dbf5fd755e91c4a67446dcce2d8759ba",
"text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org. .",
"title": ""
},
{
"docid": "938aecbc66963114bf8753d94f7f58ed",
"text": "OBJECTIVE\nTo observe the clinical effect of bee-sting (venom) therapy in the treatment of rheumatoid arthritis (RA).\n\n\nMETHODS\nOne hundred RA patients were randomly divided into medication (control) group and bee-venom group, with 50 cases in each. Patients of control group were treated with oral administration of Methotrexate (MTX, 7.5 mg/w), Sulfasalazine (0.5 g,t. i.d.), Meloxicam (Mobic,7. 5 mg, b. i. d.); and those of bee-venom group treated with Bee-sting of Ashi-points and the above-mentioned Western medicines. Ashi-points were selected according to the position of RA and used as the main acupoints, supplemented with other acupoints according to syndrome differentiation. The treatment was given once every other day and all the treatments lasted for 3 months.\n\n\nRESULTS\nCompared with pre-treatment, scores of joint swelling degree, joint activity, pain, and pressing pain, joint-swelling number, grasp force, 15 m-walking duration, morning stiff duration in bee-venom group and medication group were improved significantly (P<0.05, 0.01). Comparison between two groups showed that after the therapy, scores of joint swelling, pain and pressing pain, joint-swelling number and morning stiff duration, and the doses of the administered MTX and Mobic in bee-venom group were all significantly lower than those in medication group (P<0.05, 0.01); whereas the grasp force in been-venom group was markedly higher than that in medication group (P<0.05). In addition, the relapse rate of bee-venom group was obviously lower than that of medication group (P<0.05; 12% vs 32%).\n\n\nCONCLUSION\nCombined application of bee-venom therapy and medication is superior to simple use of medication in relieving RA, and when bee-sting therapy used, the commonly-taken doses of western medicines may be reduced, and the relapse rate gets lower.",
"title": ""
}
] |
scidocsrr
|
68a78d56c63b1ba917d18b94fa7cee6c
|
A novel wavelet-SVM short-time passenger flow prediction in Beijing subway system
|
[
{
"docid": "4e29bdddbdeb5382347a3915dc7048de",
"text": "Accuracy and robustness with respect to missing or corrupt input data are two key characteristics for any travel time prediction model that is to be applied in a real-time environment (e.g. for display on variable message signs on freeways). This article proposes a freeway travel time prediction framework that exhibits both qualities. The framework exploits a recurrent neural network topology, the so-called statespace neural network (SSNN), with preprocessing strategies based on imputation. Although the SSNN model is a neural network, its design (in terms of inputand model selection) is not ‘‘black box’’ nor location-specific. Instead, it is based on the lay-out of the freeway stretch of interest. In this sense, the SSNN model combines the generality of neural network approaches, with traffic related (‘‘white-box’’) design. Robustness to missing data is tackled by means of simple imputation (data replacement) schemes, such as exponential forecasts and spatial interpolation. Although there are clear theoretical shortcomings to ‘‘simple’’ imputation schemes to remedy input failure, our results indicate that their use is justified in this particular application. The SSNN model appears to be robust to the ‘‘damage’’ done by these imputation schemes. This is true for both incidental (random) and structural input failure. We demonstrate that the SSNN travel time prediction framework yields good accurate and robust travel time predictions on both synthetic and real data. 2005 Elsevier Ltd. All rights reserved. 0968-090X/$ see front matter 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.trc.2005.03.001 * Corresponding author. E-mail address: h.vanlint@citg.tudelft.nl (J.W.C. van Lint). 348 J.W.C. van Lint et al. / Transportation Research Part C 13 (2005) 347–369",
"title": ""
}
] |
[
{
"docid": "8cd99d9b59e6f1b631767b57fb506619",
"text": "We describe origami programming methodology based on constraint functional logic programming. The basic operations of origami are reduced to solving systems of equations which describe the geometric properties of paper folds. We developed two software components: one that provides primitives to construct, manipulate and visualize paper folds and the other that solves the systems of equations. Using these components, we illustrate computer-supported origami construction and show the significance of the constraint functional logic programming paradigm in the program development.",
"title": ""
},
{
"docid": "2f7a0ab1c7a3ae17ef27d2aa639c39b4",
"text": "Evolutionary algorithms are commonly used to create high-performing strategies or agents for computer games. In this paper, we instead choose to evolve the racing tracks in a car racing game. An evolvable track representation is devised, and a multiobjective evolutionary algorithm maximises the entertainment value of the track relative to a particular human player. This requires a way to create accurate models of players' driving styles, as well as a tentative definition of when a racing track is fun, both of which are provided. We believe this approach opens up interesting new research questions and is potentially applicable to commercial racing games.",
"title": ""
},
{
"docid": "01f423d3fae351fa6c39821d0ec895e6",
"text": "Skeptics believe the Web is too unstructured for Web mining to succeed. Indeed, data mining has been applied traditionally to databases, yet much of the information on the Web lies buried in documents designed for human consumption such as home pages or product catalogs. Furthermore, much of the information on the Web is presented in natural-language text with no machine-readable semantics; HTML annotations structure the display of Web pages, but provide little insight into their content. Some have advocated transforming the Web into a massive layered database to facilitate data mining [12], but the Web is too dynamic and chaotic to be tamed in this manner. Others have attempted to hand code site-specific “wrappers” that facilitate the extraction of information from individual Web resources (e.g., [8]). Hand coding is convenient but cannot keep up with the explosive growth of the Web. As an alternative, this article argues for the structured Web hypothesis: Information on the Web is sufficiently structured to facilitate effective Web mining. Examples of Web structure include linguistic and typographic conventions, HTML annotations (e.g., <title>), classes of semi-structured documents (e.g., product catalogs), Web indices and directories, and much more. To support the structured Web hypothesis, this article will survey preliminary Web mining successes and suggest directions for future work. Web mining may be organized into the following subtasks:",
"title": ""
},
{
"docid": "cb1c65cb1e7959e52f3091da6103ff3a",
"text": "The Internet of Things paradigm originates from the proliferation of intelligent devices that can sense, compute and communicate data streams in a ubiquitous information and communication network. The great amounts of data coming from these devices introduce some challenges related to the storage and processing capabilities of the information. This strengthens the novel paradigm known as Big Data. In such a complex scenario, the Cloud computing is an efficient solution for the managing of sensor data. This paper presents Polluino, a system for monitoring the air pollution via Arduino. Moreover, a Cloud-based platform that manages data coming from air quality sensors is developed.",
"title": ""
},
{
"docid": "e244cedaac9812461142859fc87f3e52",
"text": "Krill herd (KH) has been proven to be an efficient algorithm for function optimization. For some complex functions, this algorithmmay have problems with convergence or being trapped in local minima. To cope with these issues, this paper presents an improved KH-based algorithm, called Opposition Krill Herd (OKH). The proposed approach utilizes opposition-based learning (OBL), position clamping (PC) and method while both PC and heavy-tailed CM help KH escape from local optima. Simulations are implemented on an array of benchmark functions and two engineering optimization problems. The results show that OKH has a good performance on majority of the considered functions and two engineering cases. The influence of each individual strategy (OBL, CM and PC) on KH is verified through 25 benchmarks. The results show that the KH with OBL, CM and PC operators, has the best performance among different variants of OKH. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6655b03c0fcc83a71a3119d7e526eedc",
"text": "Dynamic magnetic resonance imaging (MRI) scans can be accelerated by utilizing compressed sensing (CS) reconstruction methods that allow for diagnostic quality images to be generated from undersampled data. Unfortunately, CS reconstruction is time-consuming, requiring hours between a dynamic MRI scan and image availability for diagnosis. In this work, we train a convolutional neural network (CNN) to perform fast reconstruction of severely undersampled dynamic cardiac MRI data, and we explore the utility of CNNs for further accelerating dynamic MRI scan times. Compared to state-of-the-art CS reconstruction techniques, our CNN achieves reconstruction speeds that are 150x faster without significant loss of image quality. Additionally, preliminary results suggest that CNNs may allow scan times that are 2x faster than those allowed by CS.",
"title": ""
},
{
"docid": "8b675cc47b825268837a7a2b5a298dc9",
"text": "Artificial Intelligence chatbot is a technology that makes interaction between man and machine possible by using natural language. In this paper, we proposed an architectural design of a chatbot that will function as virtual diabetes physician/doctor. This chatbot will allow diabetic patients to have a diabetes control/management advice without the need to go to the hospital. A general history of a chatbot, a brief description of each chatbots is discussed. We proposed the design of a new technique that will be implemented in this chatbot as the key component to function as diabetes physician. Using this design, chatbot will remember the conversation path through parameter called Vpath. Vpath will allow chatbot to gives a response that is mostly suitable for the whole conversation as it specifically designed to be a virtual diabetes physician.",
"title": ""
},
{
"docid": "3c30209d29779153b4cb33d13d101cf8",
"text": "Acceptance-based interventions such as mindfulness-based stress reduction program and acceptance and commitment therapy are alternative therapies for cognitive behavioral therapy for treating chronic pain patients. To assess the effects of acceptance-based interventions on patients with chronic pain, we conducted a systematic review and meta-analysis of controlled and noncontrolled studies reporting effects on mental and physical health of pain patients. All studies were rated for quality. Primary outcome measures were pain intensity and depression. Secondary outcomes were anxiety, physical wellbeing, and quality of life. Twenty-two studies (9 randomized controlled studies, 5 clinical controlled studies [without randomization] and 8 noncontrolled studies) were included, totaling 1235 patients with chronic pain. An effect size on pain of 0.37 was found for the controlled studies. The effect on depression was 0.32. The quality of the studies was not found to moderate the effects of acceptance-based interventions. The results suggest that at present mindfulness-based stress reduction program and acceptance and commitment therapy are not superior to cognitive behavioral therapy but can be good alternatives. More high-quality studies are needed. It is recommended to focus on therapies that integrate mindfulness and behavioral therapy. Acceptance-based therapies have small to medium effects on physical and mental health in chronic pain patients. These effects are comparable to those of cognitive behavioral therapy.",
"title": ""
},
{
"docid": "7cef2fac422d9fc3c3ffbc130831b522",
"text": "Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations , \" (Received 00 Month 200x; In final form 00 Month 200x) This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VEHIL) simulations the development process, and more specifically the validation phase, of intelligent vehicles is carried out safer, cheaper, and more manageable. In the VEHIL laboratory a full-scale ADAS-equipped vehicle is set up in a hardware-in-the-loop simulation environment, where a chassis dynamometer is used to emulate the road interaction and robot vehicles to represent other traffic. In this controlled environment the performance and dependability of an ADAS is tested to great accuracy and reliability. The working principle and the added value of VEHIL are demonstrated with test results of an adaptive cruise control and a forward collision warning system. Based on the 'V' diagram, the position of VEHIL in the development process of ADASs is illustrated.",
"title": ""
},
{
"docid": "9a1505d126d1120ffa8d9670c71cb076",
"text": "A relevant knowledge [24] (and consequently research area) is the study of software lifecycle process models (PM-SDLCs). Such process models have been defined in three abstraction levels: (i) full organizational software lifecycles process models (e.g. ISO 12207, ISO 15504, CMMI/SW); (ii) lifecycles frameworks models (e.g. waterfall, spiral, RAD, and others) and (iii) detailed software development life cycles process (e.g. unified process, TSP, MBASE, and others). This paper focuses on (ii) and (iii) levels and reports the results of a descriptive/comparative study of 13 PM-SDLCs that permits a plausible explanation of their evolution in terms of common, distinctive, and unique elements as well as of the specification rigor and agility attributes. For it, a conceptual research approach and a software process lifecycle meta-model are used. Findings from the conceptual analysis are reported. Paper ends with the description of research limitations and recommendations for further research.",
"title": ""
},
{
"docid": "bc6a6cf11881326360387cbed997dcf1",
"text": "The explanation of heterogeneous multivariate time series data is a central problem in many applications. The problem requires two major data mining challenges to be addressed simultaneously: Learning models that are humaninterpretable and mining of heterogeneous multivariate time series data. The intersection of these two areas is not adequately explored in the existing literature. To address this gap, we propose grammar-based decision trees and an algorithm for learning them. Grammar-based decision tree extends decision trees with a grammar framework. Logical expressions, derived from context-free grammar, are used for branching in place of simple thresholds on attributes. The added expressivity enables support for a wide range of data types while retaining the interpretability of decision trees. By choosing a grammar based on temporal logic, we show that grammar-based decision trees can be used for the interpretable classification of high-dimensional and heterogeneous time series data. In addition to classification, we show how grammar-based decision trees can also be used for categorization, which is a combination of clustering and generating interpretable explanations for each cluster. We apply grammar-based decision trees to analyze the classic Australian Sign Language dataset as well as categorize and explain near midair collisions to support the development of a prototype aircraft collision avoidance system.",
"title": ""
},
{
"docid": "88130a65e625f85e527d63a0d2a446d4",
"text": "Test-Driven Development (TDD) is an agile practice that is widely accepted and advocated by most agile methods and methodologists. In this paper, we report on a longitudinal case study of an IBM team who has sustained use of TDD for five years and over ten releases of a Java-implemented product. The team worked from a design and wrote tests incrementally before or while they wrote code and, in the process, developed a significant asset of automated tests. The IBM team realized sustained quality improvement relative to a pre-TDD project and consistently had defect density below industry standards. As a result, our data indicate that the TDD practice can aid in the production of high quality products. This quality improvement would compensate for the moderate perceived productivity losses. Additionally, the use of TDD may decrease the degree to which code complexity increases as software ages.",
"title": ""
},
{
"docid": "b5b8ae3b7b307810e1fe39630bc96937",
"text": "Up to this point in the text we have considered the use of the logistic regression model in settings where we observe a single dichotomous response for a sample of statistically independent subjects. However, there are settings where the assumption of independence of responses may not hold for a variety of reasons. For example, consider a study of asthma in children in which subjects are interviewed bi-monthly for 1 year. At each interview the date is recorded and the mother is asked whether, during the previous 2 months, her child had an asthma attack severe enough to require medical attention, whether the child had a chest cold, and how many smokers lived in the household. The child’s age and race are recorded at the first interview. The primary outcome is the occurrence of an asthma attack. What differs here is the lack of independence in the observations due to the fact that we have six measurements on each child. In this example, each child represents a cluster of correlated observations of the outcome. The measurements of the presence or absence of a chest cold and the number of smokers residing in the household can change from observation to observation and thus are called clusterspecific or time-varying covariates. The date changes in a systematic way and is recorded to model possible seasonal effects. The child’s age and race are constant for the duration of the study and are referred to as cluster-level or time-invariant covariates. The terms clusters, subjects, cluster-specific and cluster-level covariates are general enough to describe multiple measurements on a single subject or single measurements on different but related subjects. An example of the latter setting would be a study of all children in a household. Repeated measurements on the same subject or a subject clustered in some sort of unit (household, hospital, or physician) are the two most likely scenarios leading to correlated data.",
"title": ""
},
{
"docid": "4a8e78ff046070b14a53f6cd0737dd32",
"text": "This study aims to gain insights into emerging research fields in the area of marketing and tourism. It provides support for the use of quantitative techniques to facilitate content analysis. The authors present a longitudinal latent semantic analysis of keywords. The proposed method is illustrated by two different examples: a scholarly journal (International Marketing Review) and conference proceedings (ENTER eTourism Conference). The methodology reveals an understanding of the current state of the art of marketing research and e-tourism by identifying neglected, popular or upcoming thematic research foci. The outcomes are compared with former results generated by traditional content analysis techniques. Findings confirm that the proposed methodology has the potential to complement qualitative content analysis, as the semantic analysis produces similar outcomes to qualitative content analysis to some extent. This paper reviews a journal’s content over a period of nearly three decades. The authors argue that the suggested methodology facilitates the analysis dramatically and can thus be simply applied on a regular basis in order to monitor topic development within a specific research domain.",
"title": ""
},
{
"docid": "204ecea0d8b6c572cd1a5d20b5e267a9",
"text": "Nowadays it is very common for people to write online reviews of products they have purchased. These reviews are a very important source of information for the potential customers before deciding to purchase a product. Consequently, websites containing customer reviews are becoming targets of opinion spam. -- undeserving positive or negative reviews; reviews that reviewers never use the product, but is written with an agenda in mind. This paper aims to detect spam reviews by users. Characteristics of the review will be identified based on previous research, plus a new feature -- rating consistency check. The goal is to devise a tool to evaluate the product reviews and detect product review spams. The approach is based on multiple criteria: checking unusual review vs. rating patterns, links or advertisements, detecting questions and comparative reviews. We tested our system on a couple of sets of data and find that we are able to detect these factors effectively.",
"title": ""
},
{
"docid": "b1167c4321d3235974bc6171d6c062bb",
"text": "Thousands of malicious applications targeting mobile devices, including the popular Android platform, are created every day. A large number of those applications are created by a small number of professional underground actors, however previous studies overlooked such information as a feature in detecting and classifying malware, and in attributing malware to creators. Guided by this insight, we propose a method to improve on the performance of Android malware detection by incorporating the creator’s information as a feature and classify malicious applications into similar groups. We developed a system called AndroTracker that implements this method in practice. AndroTracker enables fast detection of malware by using creator information such as serial number of certificate. Additionally, it analyzes malicious behaviors and permissions to increase detection accuracy. AndroTracker also can classify malware based on similarity scoring. Finally, AndroTracker shows detection and classification performance with 99% and 90% accuracy respectively.",
"title": ""
},
{
"docid": "dd9d776dbc470945154d460921005204",
"text": "The Ant Colony System (ACS) is, next to Ant Colony Optimization (ACO) and the MAX-MIN Ant System (MMAS), one of the most efficient metaheuristic algorithms inspired by the behavior of ants. In this article we present three novel parallel versions of the ACS for the graphics processing units (GPUs). To the best of our knowledge, this is the first such work on the ACS which shares many key elements of the ACO and the MMAS, but differences in the process of building solutions and updating the pheromone trails make obtaining an efficient parallel version for the GPUs a difficult task. The proposed parallel versions of the ACS differ mainly in their implementations of the pheromone memory. The first two use the standard pheromone matrix, and the third uses a novel selective pheromone memory. Computational experiments conducted on several Travelling Salesman Problem (TSP) instances of sizes ranging from 198 to 2392 cities showed that the parallel ACS on Nvidia Kepler GK104 GPU (1536 CUDA cores) is able to obtain a speedup up to 24.29x vs the sequential ACS running on a single core of Intel Xeon E5-2670 CPU. The parallel ACS with the selective pheromone memory achieved speedups up to 16.85x, but in most cases the obtained solutions were of significantly better quality than for the sequential ACS.",
"title": ""
},
{
"docid": "493eb0d5e4f9db288de9abd7ab172a2d",
"text": "To reveal and leverage the correlated and complemental information between different views, a great amount of multi-view learning algorithms have been proposed in recent years. However, unsupervised feature selection in multiview learning is still a challenge due to lack of data labels that could be utilized to select the discriminative features. Moreover, most of the traditional feature selection methods are developed for the single-view data, and are not directly applicable to the multi-view data. Therefore, we propose an unsupervised learning method called Adaptive Unsupervised Multi-view Feature Selection (AUMFS) in this paper. AUMFS attempts to jointly utilize three kinds of vital information, i.e., data cluster structure, data similarity and the correlations between different views, contained in the original data together for feature selection. To achieve this goal, a robust sparse regression model with the l2,1-norm penalty is introduced to predict data cluster labels, and at the same time, multiple view-dependent visual similar graphs are constructed to flexibly model the visual similarity in each view. Then, AUMFS integrates data cluster labels prediction and adaptive multi-view visual similar graph learning into a unified framework. To solve the objective function of AUMFS, a simple yet efficient iterative method is proposed. We apply AUMFS to three visual concept recognition applications (i.e., social image concept recognition, object recognition and video-based human action recognition) on four benchmark datasets. Experimental results show the proposed method significantly outperforms several state-of-the-art feature selection methods. More importantly, our method is not very sensitive to the parameters and the optimization method converges very fast.",
"title": ""
},
{
"docid": "fcf894fdaec96bd826ec3c5eb31be707",
"text": "In future defence scenarios directed energy weapons are of increasing interest. Therefore national and international R&D programs are increasing their activities on laser and high power microwave technologies in the defence and anti terror areas. The paper gives an overview of the German R&D programmes on directed energy weapons. A solid state medium energy weapon laser (MEL) is investigated at Rheinmetall for i.e. anti air defence applications up to distances of about 7 km. Due to the small volume these Lasers can be integrated as a secondary weapon system into mobile platforms such as AECVs. The beam power of a MEL is between 1 kW and 100 kW. The electric energy per pulse is in the kJ range. A burst of only a few pulses is needed to destroy optronics of targets in a distance up to 7 km. The electric energy requirements of a MEL system are low. High energy density pulsed power technologies are already available for the integration into a medium sized vehicle. The paper gives an overview on the MEL technologies which are under investigation in order to introduce a technology demonstrator at the end of 2005. The electric requirements at the interface to the power bus of a vehicle are presented. Finally an integration concept as a secondary weapon in a medium sized vehicle is given and discussed. In close cooperation with Diehl Munitionssysteme high power microwave technologies are investigated. Different kinds of HPM Sources are under development for defence and anti terror applications. It is the goal to introduce first prototype systems within a short time frame. The paper gives a brief overview on the different source technologies currently under investigation. The joint program concentrates on ultra wide band and damped sinus HPM waveforms in single shot and repetitive operation. Radiation powers up to the Gigawatt range are realized up to now. By presenting some characteristic scenarios for those HPM systems the wide range of applications is proven in the paper.",
"title": ""
},
{
"docid": "92b20ec581fc5609da2908f9f0f74a33",
"text": "We address the problem of using external rotation information with uncalibrated video sequences. The main problem addressed is, what is the benefit of the orientation information for camera calibration? It is shown that in case of a rotating camera the camera calibration problem is linear even in the case that all intrinsic parameters vary. For arbitrarily moving cameras the calibration problem is also linear but underdetermined for the general case of varying all intrinsic parameters. However, if certain constraints are applied to the intrinsic parameters the camera calibration can be computed linearily. It is analyzed which constraints are needed for camera calibration of freely moving cameras. Furthermore we address the problem of aligning the camera data with the rotation sensor data in time. We give an approach to align these data in case of a rotating camera.",
"title": ""
}
] |
scidocsrr
|
f0f07e5aec207f7edfc75e2136b028a7
|
The role of RFID in agriculture: Applications, limitations and challenges
|
[
{
"docid": "dc67945b32b2810a474acded3c144f68",
"text": "This paper presents an overview of the eld of Intelligent Products. As Intelligent Products have many facets, this paper is mainly focused on the concept behind Intelligent Products, the technical foundations, and the achievable practical goals of Intelligent Products. A novel classi cation of Intelligent Products is introduced, which distinguishes between three orthogonal dimensions. Furthermore, the technical foundations in the areas of automatic identi cation and embedded processing, distributed information storage and processing, and agent-based systems are discussed, as well as the achievable practical goals in the contexts of manufacturing, supply chains, asset management, and product life cycle management.",
"title": ""
}
] |
[
{
"docid": "48168ed93d710d3b85b7015f2c238094",
"text": "ion and hierarchical information processing are hallmarks of human and animal intelligence underlying the unrivaled flexibility of behavior in biological systems. Achieving such flexibility in artificial systems is challenging, even with more and more computational power. Here, we investigate the hypothesis that abstraction and hierarchical information processing might in fact be the consequence of limitations in information-processing power. In particular, we study an information-theoretic framework of bounded rational decision-making that trades off utility maximization against information-processing costs. We apply the basic principle of this framework to perception-action systems with multiple information-processing nodes and derive bounded-optimal solutions. We show how the formation of abstractions and decision-making hierarchies depends on information-processing costs. We illustrate the theoretical ideas with example simulations and conclude by formalizing a mathematically unifying optimization principle that could potentially be extended to more complex systems.",
"title": ""
},
{
"docid": "0685c33de763bdedf2a1271198569965",
"text": "The use of virtual-reality technology in the areas of rehabilitation and therapy continues to grow, with encouraging results being reported for applications that address human physical, cognitive, and psychological functioning. This article presents a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis for the field of VR rehabilitation and therapy. The SWOT analysis is a commonly employed framework in the business world for analyzing the factors that influence a company's competitive position in the marketplace with an eye to the future. However, the SWOT framework can also be usefully applied outside of the pure business domain. A quick check on the Internet will turn up SWOT analyses for urban-renewal projects, career planning, website design, youth sports programs, and evaluation of academic research centers, and it becomes obvious that it can be usefully applied to assess and guide any organized human endeavor designed to accomplish a mission. It is hoped that this structured examination of the factors relevant to the current and future status of VR rehabilitation will provide a good overview of the key issues and concerns that are relevant for understanding and advancing this vital application area.",
"title": ""
},
{
"docid": "10d8bbea398444a3fb6e09c4def01172",
"text": "INTRODUCTION\nRecent years have witnessed a growing interest in improving bus safety operations worldwide. While in the United States buses are considered relatively safe, the number of bus accidents is far from being negligible, triggering the introduction of the Motor-coach Enhanced Safety Act of 2011.\n\n\nMETHOD\nThe current study investigates the underlying risk factors of bus accident severity in the United States by estimating a generalized ordered logit model. Data for the analysis are retrieved from the General Estimates System (GES) database for the years 2005-2009.\n\n\nRESULTS\nResults show that accident severity increases: (i) for young bus drivers under the age of 25; (ii) for drivers beyond the age of 55, and most prominently for drivers over 65 years old; (iii) for female drivers; (iv) for very high (over 65 mph) and very low (under 20 mph) speed limits; (v) at intersections; (vi) because of inattentive and risky driving.",
"title": ""
},
{
"docid": "f47019a78ee833dcb8c5d15a4762ccf9",
"text": "It has recently been shown that Bondi-van der Burg-Metzner-Sachs supertranslation symmetries imply an infinite number of conservation laws for all gravitational theories in asymptotically Minkowskian spacetimes. These laws require black holes to carry a large amount of soft (i.e., zero-energy) supertranslation hair. The presence of a Maxwell field similarly implies soft electric hair. This Letter gives an explicit description of soft hair in terms of soft gravitons or photons on the black hole horizon, and shows that complete information about their quantum state is stored on a holographic plate at the future boundary of the horizon. Charge conservation is used to give an infinite number of exact relations between the evaporation products of black holes which have different soft hair but are otherwise identical. It is further argued that soft hair which is spatially localized to much less than a Planck length cannot be excited in a physically realizable process, giving an effective number of soft degrees of freedom proportional to the horizon area in Planck units.",
"title": ""
},
{
"docid": "2f1ba4ba5cff9a6e614aa1a781bf1b13",
"text": "Face information processing relies on the quality of data resource. From the data modality point of view, a face database can be 2D or 3D, and static or dynamic. From the task point of view, the data can be used for research of computer based automatic face recognition, face expression recognition, face detection, or cognitive and psychological investigation. With the advancement of 3D imaging technologies, 3D dynamic facial sequences (called 4D data) have been used for face information analysis. In this paper, we focus on the modality of 3D dynamic data for the task of facial expression recognition. We present a newly created high-resolution 3D dynamic facial expression database, which is made available to the scientific research community. The database contains 606 3D facial expression sequences captured from 101 subjects of various ethnic backgrounds. The database has been validated through our facial expression recognition experiment using an HMM based 3D spatio-temporal facial descriptor. It is expected that such a database shall be used to facilitate the facial expression analysis from a static 3D space to a dynamic 3D space, with a goal of scrutinizing facial behavior at a higher level of detail in a real 3D spatio-temporal domain.",
"title": ""
},
{
"docid": "70c6aaf0b0fc328c677d7cb2249b68bf",
"text": "In this paper, we discuss and review how combined multiview imagery from satellite to street level can benefit scene analysis. Numerous works exist that merge information from remote sensing and images acquired from the ground for tasks such as object detection, robots guidance, or scene understanding. What makes the combination of overhead and street-level images challenging are the strongly varying viewpoints, the different scales of the images, their illuminations and sensor modality, and time of acquisition. Direct (dense) matching of images on a per-pixel basis is thus often impossible, and one has to resort to alternative strategies that will be discussed in this paper. For such purpose, we review recent works that attempt to combine images taken from the ground and overhead views for purposes like scene registration, reconstruction, or classification. After the theoretical review, we present three recent methods to showcase the interest and potential impact of such fusion on real applications (change detection, image orientation, and tree cataloging), whose logic can then be reused to extend the use of ground-based images in remote sensing and vice versa. Through this review, we advocate that cross fertilization between remote sensing, computer vision, and machine learning is very valuable to make the best of geographic data available from Earth observation sensors and ground imagery. Despite its challenges, we believe that integrating these complementary data sources will lead to major breakthroughs in Big GeoData. It will open new perspectives for this exciting and emerging field.",
"title": ""
},
{
"docid": "b51fcfa32dbcdcbcc49f1635b44601ed",
"text": "An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic is a direct statistical analogue of the popular \"funnel-graph.\" The number of component studies in the meta-analysis, the nature of the selection mechanism, the range of variances of the effect size estimates, and the true underlying effect size are all observed to be influential in determining the power of the test. The test is fairly powerful for large meta-analyses with 75 component studies, but has only moderate power for meta-analyses with 25 component studies. However, in many of the configurations in which there is low power, there is also relatively little bias in the summary effect size estimate. Nonetheless, the test must be interpreted with caution in small meta-analyses. In particular, bias cannot be ruled out if the test is not significant. The proposed technique has potential utility as an exploratory tool for meta-analysts, as a formal procedure to complement the funnel-graph.",
"title": ""
},
{
"docid": "2956f80e896a660dbd268f9212e6d00f",
"text": "Writing as a productive skill in EFL classes is outstandingly significant. In writing classes there needs to be an efficient relationship between the teacher and students. The teacher as the only audience in many writing classes responds to students’ writing. In the early part of the 21 century the range of technologies available for use in classes has become very diverse and the ways they are being used in classrooms all over the world might affect the outcome we expect from our classes. As the present generations of students are using new technologies, the application of these recent technologies in classes might be useful. Using technology in writing classes provides opportunities for students to hand their written work to the teacher without the need for any face-to-face interaction. This present study investigates the effect of Edmodo on EFL learners’ writing performance. A quasi-experimental design was used in this study. The participants were 40 female advanced-level students attending advanced writing classes at Irana English Institute, Razan Hamedan. The focus was on the composition writing ability. The students were randomly assigned to two groups, experimental and control. Edmodo was used in the experimental group. Mann-Whitney U test was used for data analysis; the results indicated that the use of Edmodo in writing was more effective on EFL learners’ writing performance participating in this study.",
"title": ""
},
{
"docid": "1d8cd32e2a2748b9abd53cf32169d798",
"text": "Optimizing the weights of Artificial Neural Networks (ANNs) is a great important of a complex task in the research of machine learning due to dependence of its performance to the success of learning process and the training method. This paper reviews the implementation of meta-heuristic algorithms in ANNs’ weight optimization by studying their advantages and disadvantages giving consideration to some meta-heuristic members such as Genetic algorithim, Particle Swarm Optimization and recently introduced meta-heuristic algorithm called Harmony Search Algorithm (HSA). Also, the application of local search based algorithms to optimize the ANNs weights and their benefits as well as their limitations are briefly elaborated. Finally, a comparison between local search methods and global optimization methods is carried out to speculate the trends in the progresses of ANNs’ weight optimization in the current resrearch.",
"title": ""
},
{
"docid": "3ece1c9f619899d5bab03c24fd3cd34a",
"text": "A new technique for obtaining high performance, low power, radio direction finding (RDF) using a single receiver is presented. For man-portable applications, multichannel systems consume too much power, are too expensive, and are too heavy to easily be carried by a single individual. Most single channel systems are not accurate enough or do not provide the capability to listen while direction finding (DF) is being performed. By employing feedback in a pseudo-Doppler system via a vector modulator in the IF of a single receiver and an adaptive algorithm to control it, the accuracy of a pseudoDoppler system can be enhanced to the accuracy of an interferometer based system without the expense of a multichannel receiver. And, it will maintain audio listenthrough while direction finding is being performed all with a single inexpensive low power receiver. The use of these techniques provides performance not attainable by other single channel methods.",
"title": ""
},
{
"docid": "6ac3d776d686f873ab931071c75aeed2",
"text": "GridRPC, which is an RPC mechanism tailored for the Grid, is an attractive programming model for Grid computing. This paper reports on the design and implementation of a GridRPC programming system called Ninf-G. Ninf-G is a reference implementation of the GridRPC API which has been proposed for standardization at the Global Grid Forum. In this paper, we describe the design, implementations and typical usage of Ninf-G. A preliminary performance evaluation in both WAN and LAN environments is also reported. Implemented on top of the Globus Toolkit, Ninf-G provides a simple and easy programming interface based on standard Grid protocols and the API for Grid Computing. The overhead of remote procedure calls in Ninf-G is acceptable in both WAN and LAN environments.",
"title": ""
},
{
"docid": "f152838edb23a40e895dea2e1ee709d1",
"text": "We present two uncommon cases of adolescent girls with hair-thread strangulation of the labia minora. The first 14-year-old girl presented with a painful pedunculated labial lump (Fig. 1). The lesion was covered with exudate. She was examined under sedation and found a coil of long hair forming a tourniquet around a labial segment. Thread removal resulted to immediate relief from pain, and gradual return to normal appearance. Another 10-year-old girl presented with a similar labial swelling. The recent experience of the first case led us straight to the problem. A long hair-thread was found at the neck of the lesion. Hair removal resulted in settling of the pain. The labial swelling subsided in few days.",
"title": ""
},
{
"docid": "ef09bc08cc8e94275e652e818a0af97f",
"text": "The biosynthetic pathway of L-tartaric acid, the form most commonly encountered in nature, and its catabolic ties to vitamin C, remain a challenge to plant scientists. Vitamin C and L-tartaric acid are plant-derived metabolites with intrinsic human value. In contrast to most fruits during development, grapes accumulate L-tartaric acid, which remains within the berry throughout ripening. Berry taste and the organoleptic properties and aging potential of wines are intimately linked to levels of L-tartaric acid present in the fruit, and those added during vinification. Elucidation of the reactions relating L-tartaric acid to vitamin C catabolism in the Vitaceae showed that they proceed via the oxidation of L-idonic acid, the proposed rate-limiting step in the pathway. Here we report the use of transcript and metabolite profiling to identify candidate cDNAs from genes expressed at developmental times and in tissues appropriate for L-tartaric acid biosynthesis in grape berries. Enzymological analyses of one candidate confirmed its activity in the proposed rate-limiting step of the direct pathway from vitamin C to tartaric acid in higher plants. Surveying organic acid content in Vitis and related genera, we have identified a non-tartrate-forming species in which this gene is deleted. This species accumulates in excess of three times the levels of vitamin C than comparably ripe berries of tartrate-accumulating species, suggesting that modulation of tartaric acid biosynthesis may provide a rational basis for the production of grapes rich in vitamin C.",
"title": ""
},
{
"docid": "a1f05b8954434a782f9be3d9cd10bb8b",
"text": "Because of their avid use of new media and their increased spending power, children and teens have become primary targets of a new \"media and marketing ecosystem.\" The digital marketplace is undergoing rapid innovation as new technologies and software applications continue to reshape the media landscape and user behaviors. The advertising industry, in many instances led by food and beverage marketers, is purposefully exploiting the special relationship that youth have with new media, as online marketing campaigns create unprecedented intimacies between adolescents and the brands and products that now literally surround them.",
"title": ""
},
{
"docid": "38301e7db178d7072baf0226a1747c03",
"text": "We present an algorithm for ray tracing displacement maps that requires no additional storage over the base model. Displacement maps are rarely used in ray tracing due to the cost associated with storing and intersecting the displaced geometry. This is unfortunate because displacement maps allow the addition of large amounts of geometric complexity into models. Our method works for models composed of triangles with normals at the vertices. In addition, we discuss a special purpose displacement that creates a smooth surface that interpolates the triangle vertices and normals of a mesh. The combination allows relatively coarse models to be displacement mapped and ray traced effectively.",
"title": ""
},
{
"docid": "3e8535bc48ce88ba6103a68dd3ad1d5d",
"text": "This letter reports the concept and design of the active-braid, a novel bioinspired continuum manipulator with the ability to contract, extend, and bend in three-dimensional space with varying stiffness. The manipulator utilizes a flexible crossed-link helical array structure as its main supporting body, which is deformed by using two radial actuators and a total of six longitudinal tendons, analogously to the three major types of muscle layers found in muscular hydrostats. The helical array structure ensures that the manipulator behaves similarly to a constant volume structure (expanding while shortening and contracting while elongating). Numerical simulations and experimental prototypes are used in order to evaluate the feasibility of the concept.",
"title": ""
},
{
"docid": "e0f84798289c06abcacd14df1df4a018",
"text": "PARP inhibitors (PARPi), a cancer therapy targeting poly(ADP-ribose) polymerase, are the first clinically approved drugs designed to exploit synthetic lethality, a genetic concept proposed nearly a century ago. Tumors arising in patients who carry germline mutations in either BRCA1 or BRCA2 are sensitive to PARPi because they have a specific type of DNA repair defect. PARPi also show promising activity in more common cancers that share this repair defect. However, as with other targeted therapies, resistance to PARPi arises in advanced disease. In addition, determining the optimal use of PARPi within drug combination approaches has been challenging. Nevertheless, the preclinical discovery of PARPi synthetic lethality and the route to clinical approval provide interesting lessons for the development of other therapies. Here, we discuss current knowledge of PARP inhibitors and potential ways to maximize their clinical effectiveness.",
"title": ""
},
{
"docid": "62d63357923c5a7b1ea21b8448e3cba3",
"text": "This paper presents a monocular and purely vision based pedestrian trajectory tracking and prediction framework with integrated map-based hazard inference. In Advanced Driver Assistance systems research, a lot of effort has been put into pedestrian detection over the last decade, and several pedestrian detection systems are indeed showing impressive results. Considerably less effort has been put into processing the detections further. We present a tracking system for pedestrians, which based on detection bounding boxes tracks pedestrians and is able to predict their positions in the near future. The tracking system is combined with a module which, based on the car's GPS position acquires a map and uses the road information in the map to know where the car can drive. Then the system warns the driver about pedestrians at risk, by combining the information about hazardous areas for pedestrians with a probabilistic position prediction for all observed pedestrians.",
"title": ""
},
{
"docid": "931f8ada4fdf90466b0b9ff591fb67d1",
"text": "Cognition results from interactions among functionally specialized but widely distributed brain regions; however, neuroscience has so far largely focused on characterizing the function of individual brain regions and neurons therein. Here we discuss recent studies that have instead investigated the interactions between brain regions during cognitive processes by assessing correlations between neuronal oscillations in different regions of the primate cerebral cortex. These studies have opened a new window onto the large-scale circuit mechanisms underlying sensorimotor decision-making and top-down attention. We propose that frequency-specific neuronal correlations in large-scale cortical networks may be 'fingerprints' of canonical neuronal computations underlying cognitive processes.",
"title": ""
}
] |
scidocsrr
|
3cf3840371b5e9515a49b1c4f17bd44e
|
ICT Governance: A Reference Framework
|
[
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
[
{
"docid": "33a9c1b32f211ea13a70b1ce577b71dc",
"text": "In this work, we propose a face recognition library, with the objective of lowering the implementation complexity of face recognition features on applications in general. The library is based on Convolutional Neural Networks; a special kind of Neural Network specialized for image data. We present the main motivations for the use of face recognition, as well as the main interface for using the library features. We describe the overall architecture structure of the library and evaluated it on a large scale scenario. The proposed library achieved an accuracy of 98.14% when using a required confidence of 90%, and an accuracy of 99.86% otherwise. Keywords—Artificial Intelligence, CNNs, Face Recognition, Image Recognition, Machine Learning, Neural Networks.",
"title": ""
},
{
"docid": "1876319faa49a402ded2af46a9fcd966",
"text": "One, and two, and three police persons spring out of the shadows Down the corner comes one more And we scream into that city night: \" three plus one makes four! \" Well, they seem to think we're disturbing the peace But we won't let them make us sad 'Cause kids like you and me baby, we were born to add Born To Add, Sesame Street (sung to the tune of Bruce Springsteen's Born to Run) to Ursula Preface In October 1996, I got a position as a research assistant working on the Twenty-One project. The project aimed at providing a software architecture that supports a multilingual community of people working on local Agenda 21 initiatives in exchanging ideas and publishing their work. Local Agenda 21 initiatives are projects of local governments, aiming at sustainable processes in environmental , human, and economic terms. The projects cover themes like combating poverty, protecting the atmosphere, human health, freshwater resources, waste management, education, etc. Documentation on local Agenda 21 initiatives are usually written in the language of the local government, very much unlike documentation on research in e.g. information retrieval for which English is the language of international communication. Automatic cross-language retrieval systems are therefore a helpful tool in the international cooperation between local governments. Looking back, I regret not being more involved in the non-technical aspects of the Twenty-One project. To make up for this loss, many of the examples in this thesis are taken from the project's domain. Working on the Twenty-One project convinced me that solutions to cross-language information retrieval should explicitly combine translation models and retrieval models into one unifying framework. Working in a language technology group, the use of language models seemed a natural choice. A choice that simplifies things considerably for that matter. The use of language models for information retrieval practically reduces ranking to simply adding the occurrences of terms: complex weighting algorithms are no longer needed. \" Born to add \" is therefore the motto of this thesis. By adding out loud, it hopefully annoys-no offence, and with all due respect-some of the well-established information retrieval approaches, like Bruce Stringbean and The Sesame Street Band annoys the Sesame Street police. Acknowledgements The research presented in this thesis is funded in part by the European Union projects Twenty-One, Pop-Eye and Olive, and the Telematics Institute project Druid. I am most grateful to Wessel Kraaij of TNO-TPD …",
"title": ""
},
{
"docid": "8e6efa696b960cf08cf1616efc123cbd",
"text": "SLAM (Simultaneous Localization and Mapping) for underwater vehicles is a challenging research topic due to the limitations of underwater localization sensors and error accumulation over long-term operations. Furthermore, acoustic sensors for mapping often provide noisy and distorted images or low-resolution ranging, while video images provide highly detailed images but are often limited due to turbidity and lighting. This paper presents a review of the approaches used in state-of-the-art SLAM techniques: Extended Kalman Filter SLAM (EKF-SLAM), FastSLAM, GraphSLAM and its application in underwater environments.",
"title": ""
},
{
"docid": "e6d4d23df1e6d21bd988ca462526fe15",
"text": "Reinforcement learning, driven by reward, addresses tasks by optimizing policies for expected return. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, so we argue that reward alone is a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. While current results show that learning from reward alone is feasible, pure reinforcement learning methods are constrained by computational and data efficiency issues that can be remedied by auxiliary losses. Self-supervised pre-training improves the data efficiency and policy returns of end-to-end reinforcement learning.",
"title": ""
},
{
"docid": "d58425a613f9daea2677d37d007f640e",
"text": "Recently the improved bag of features (BoF) model with locality-constrained linear coding (LLC) and spatial pyramid matching (SPM) achieved state-of-the-art performance in image classification. However, only adopting SPM to exploit spatial information is not enough for satisfactory performance. In this paper, we use hierarchical temporal memory (HTM) cortical learning algorithms to extend this LLC & SPM based model. HTM regions consist of HTM cells are constructed to spatial pool the LLC codes. Each cell receives a subset of LLC codes, and adjacent subsets are overlapped so that more spatial information can be captured. Additionally, HTM cortical learning algorithms have two processes: learning phase which make the HTM cell only receive most frequent LLC codes, and inhibition phase which ensure that the output of HTM regions is sparse. The experimental results on Caltech 101 and UIUC-Sport dataset show the improvement on the original LLC & SPM based model.",
"title": ""
},
{
"docid": "ab2c0a23ed71295ee4aa51baf9209639",
"text": "An expert system to diagnose the main childhood diseases among the tweens is proposed. The diagnosis is made taking into account the symptoms that can be seen or felt. The childhood diseases have many common symptoms and some of them are very much alike. This creates many difficulties for the doctor to reach at a right decision or diagnosis. The proposed system can remove these difficulties and it is having knowledge of many childhood diseases. The proposed expert system is implemented using SWI-Prolog.",
"title": ""
},
{
"docid": "263ac34590609435b2a104a385f296ca",
"text": "Efficient computation of curvature-based energies is important for practical implementations of geometric modeling and physical simulation applications. Building on a simple geometric observation, we provide a version of a curvature-based energy expressed in terms of the Laplace operator acting on the embedding of the surface. The corresponding energy--being quadratic in positions--gives rise to a constant Hessian in the context of isometric deformations. The resulting isometric bending model is shown to significantly speed up common cloth solvers, and when applied to geometric modeling situations built onWillmore flow to provide runtimes which are close to interactive rates.",
"title": ""
},
{
"docid": "d82c11c5a6981f1d3496e0838519704d",
"text": "This paper presents a detailed study of the nonuniform bipolar conduction phenomenon under electrostatic discharge (ESD) events in single-finger NMOS transistors and analyzes its implications for the design of ESD protection for deep-submicron CMOS technologies. It is shown that the uniformity of the bipolar current distribution under ESD conditions is severely degraded depending on device finger width ( ) and significantly influenced by the substrate and gate-bias conditions as well. This nonuniform current distribution is identified as a root cause of the severe reduction in ESD failure threshold current for the devices with advanced silicided processes. Additionally, the concept of an intrinsic second breakdown triggering current ( 2 ) is introduced, which is substrate-bias independent and represents the maximum achievable ESD failure strength for a given technology. With this improved understanding of ESD behavior involved in advanced devices, an efficient design window can be constructed for robust deep submicron ESD protection.",
"title": ""
},
{
"docid": "89513d2cf137e60bf7f341362de2ba84",
"text": "In this paper, we present a visual analytics approach that provides decision makers with a proactive and predictive environment in order to assist them in making effective resource allocation and deployment decisions. The challenges involved with such predictive analytics processes include end-users' understanding, and the application of the underlying statistical algorithms at the right spatiotemporal granularity levels so that good prediction estimates can be established. In our approach, we provide analysts with a suite of natural scale templates and methods that enable them to focus and drill down to appropriate geospatial and temporal resolution levels. Our forecasting technique is based on the Seasonal Trend decomposition based on Loess (STL) method, which we apply in a spatiotemporal visual analytics context to provide analysts with predicted levels of future activity. We also present a novel kernel density estimation technique we have developed, in which the prediction process is influenced by the spatial correlation of recent incidents at nearby locations. We demonstrate our techniques by applying our methodology to Criminal, Traffic and Civil (CTC) incident datasets.",
"title": ""
},
{
"docid": "26abfdd9af796a2903b0f7cef235b3b4",
"text": "Argumentation mining is an advanced form of human language understanding by the machine. This is a challenging task for a machine. When sufficient explicit discourse markers are present in the language utterances, the argumentation can be interpreted by the machine with an acceptable degree of accuracy. However, in many real settings, the mining task is difficult due to the lack or ambiguity of the discourse markers, and the fact that a substantial amount of knowledge needed for the correct recognition of the argumentation, its composing elements and their relationships is not explicitly present in the text, but makes up the background knowledge that humans possess when interpreting language. In this article1 we focus on how the machine can automatically acquire the needed common sense and world knowledge. As very few research has been done in this respect, many of the ideas proposed in this article are tentative, but start being researched. We give an overview of the latest methods for human language understanding that map language to a formal knowledge representation that facilitates other tasks (for instance, a representation that is used to visualize the argumentation or that is easily shared in a decision or argumentation support system). Most current systems are trained on texts that are manually annotated. Then we go deeper into the new field of representation learning that nowadays is very much studied in computational linguistics. This field investigates methods for representing language as statistical concepts or as vectors, allowing straightforward methods of compositionality. The methods often use deep learning and its underlying neural network technologies to learn concepts from large text collections in an unsupervised way (i.e., without the need for manual annotations). We show how these methods can help the argumentation mining process, but also demonstrate that these methods need further research to automatically acquire the necessary background knowledge and more specifically common sense and world knowledge. We propose a number of ways to improve the learning of common sense and world knowledge by exploiting textual and visual data, and touch upon how we can integrate the learned knowledge in the argumentation mining process.",
"title": ""
},
{
"docid": "c049f188b31bbc482e16d22a8061abfa",
"text": "SDN deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful deployments. In this paper we measure, report, and explain the performance characteristics of flow table updates in three hardware OpenFlow switches. Our results can help controller developers to make their programs efficient. Further, we also highlight differences between the OpenFlow specification and its implementations, that if ignored, pose a serious threat to network security and correctness.",
"title": ""
},
{
"docid": "2bf2e36bbbbdd9e091395636fcc2a729",
"text": "An open-source framework for real-time structured light is presented. It is called “SLStudio”, and enables real-time capture of metric depth images. The framework is modular, and extensible to support new algorithms for scene encoding/decoding, triangulation, and aquisition hardware. It is the aim that this software makes real-time 3D scene capture more widely accessible and serves as a foundation for new structured light scanners operating in real-time, e.g. 20 depth images per second and more. The use cases for such scanners are plentyfull, however due to the computational constraints, all public implementations so far are limited to offline processing. With “SLStudio”, we are making a platform available which enables researchers from many different fields to build application specific real time 3D scanners. The software is hosted at http://compute.dtu.dk/~jakw/slstudio.",
"title": ""
},
{
"docid": "6830ca98632f86ef2a0cb4c19183d9b4",
"text": "In success or failure of any firm/industry or organization employees plays the most vital and important role. Airline industry is one of service industry the job of which is to sell seats to their travelers/costumers and passengers; hence employees inspiration towards their work plays a vital part in serving client’s requirements. This research focused on the influence of employee’s enthusiasm and its apparatuses e.g. pay and benefits, working atmosphere, vision of organization towards customer satisfaction and management systems in Pakistani airline industry. For analysis correlation and regression methods were used. Results of the research highlighted that workers motivation and its four major components e.g. pay and benefits, working atmosphere, vision of organization and management systems have a significant positive impact on customer’s gratification. Those employees of the industry who directly interact with client highly impact the client satisfaction level. It is obvious from results of this research that pay and benefits performs a key role in employee’s motivation towards achieving their organizational objectives of greater customer satisfaction.",
"title": ""
},
{
"docid": "b27038accdabab12d8e0869aba20a083",
"text": "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.",
"title": ""
},
{
"docid": "6daa1bc00a4701a2782c1d5f82c518e2",
"text": "An 8-year-old Caucasian girl was referred with perineal bleeding of sudden onset during micturition. There was no history of trauma, fever or dysuria, but she had a history of constipation. Family history was unremarkable. Physical examination showed a prepubertal girl with a red ‘doughnut’-shaped lesion surrounding the urethral meatus (figure 1). Laboratory findings, including platelet count and coagulation, were normal. A vaginoscopy, performed using sedation, was negative. Swabs tested negative for sexually transmitted pathogens. A diagnosis of urethral prolapse (UP) was made on clinical appearance. Treatment with topical oestrogen cream was started and constipation treated with oral polyethylene glycol. On day 10, the bleeding stopped, and at week 5 there was a moderate regression of the UP. However, occasional mild bleeding persisted at 10 months, so she was referred to a urologist (figure 2). UP is an eversion of the distal urethral mucosa through the external meatus. It is most commonly seen in postmenopausal women and is uncommon in prepubertal girls. UP is rare in Caucasian children and more common in patients of African descent. 2 It may be asymptomatic or present with bleeding, spotting or urinary symptoms. The exact pathophysiological process of UP is unknown. Increased intra-abdominal pressure with straining, inadequate periurethral supporting tissue, neuromuscular dysfunction and a relative oestrogen deficiency are possible predisposing factors. Differential diagnoses include ureterocele, polyps, tumours and non-accidental injury. 3 Management options include conservative treatments such as tepid water baths and topical oestrogens. Surgery is indicated if bleeding, dysuria or pain persist. 5 Vaginoscopy in this case was possibly unnecessary, as there were no signs of trauma to the perineal area or other concerning signs or history of abuse. In the presence of typical UP, invasive diagnostic procedures should not be considered as first-line investigations and they should be reserved for cases of diagnostic uncertainty.",
"title": ""
},
{
"docid": "5deae44a9c14600b1a2460836ed9572d",
"text": "Grasping an object in a cluttered, unorganized environment is challenging because of unavoidable contacts and interactions between the robot and multiple immovable (static) and movable (dynamic) obstacles in the environment. Planning an approach trajectory for grasping in such situations can benefit from physics-based simulations that describe the dynamics of the interaction between the robot manipulator and the environment. In this work, we present a physics-based trajectory optimization approach for planning grasp approach trajectories. We present novel cost objectives and identify failure modes relevant to grasping in cluttered environments. Our approach uses rollouts of physics-based simulations to compute the gradient of the objective and of the dynamics. Our approach naturally generates behaviors such as choosing to push objects that are less likely to topple over, recognizing and avoiding situations which might cause a cascade of objects to fall over, and adjusting the manipulator trajectory to push objects aside in a direction orthogonal to the grasping direction. We present results in simulation for grasping in a variety of cluttered environments with varying levels of density of obstacles in the environment. Our experiments in simulation indicate that our approach outperforms a baseline approach that considers multiple straight-line trajectories modified to account for static obstacles by an aggregate success rate of 14% with varying degrees of object clutter.",
"title": ""
},
{
"docid": "68a5192778ae203ea1e31ba4e29b4330",
"text": "Mobile crowdsensing is becoming a vital technique for environment monitoring, infrastructure management, and social computing. However, deploying mobile crowdsensing applications in large-scale environments is not a trivial task. It creates a tremendous burden on application developers as well as mobile users. In this paper we try to reveal the barriers hampering the scale-up of mobile crowdsensing applications, and to offer our initial thoughts on the potential solutions to lowering the barriers.",
"title": ""
},
{
"docid": "14a90781132fa3932d41b21b382ba362",
"text": "In this paper, a prevalent type of zero-voltage- transition bidirectional converters is analyzed with the inclusion of the reverse recovery effect of the diodes. The main drawback of this type is missing the soft-switching condition of the main switches at operating duty cycles smaller than 0.5. As a result, soft-switching condition would be lost in one of the bidirectional converter operating modes (forward or reverse modes) since the duty cycles of the forward and reverse modes are complement of each other. Analysis shows that the rectifying diode reverse recovery would assist in providing the soft-switching condition for the duty cycles below 0.5, which is done by a proper design of the snubber capacitor and with no limitation on the rectifying diode current rate at turn-off. Hence, the problems associated with the soft-switching range and the reverse recovery of the rectifying diode are solved simultaneously, and soft-switching condition for both operating modes of the bidirectional converter is achieved with no extra auxiliary components and no complex control. The theoretical analysis for a bidirectional buck and boost converter is presented in detail, and the validity of the theoretical analysis is justified using the experimental results of a 250-W 135- to 200-V prototype converter.",
"title": ""
},
{
"docid": "67fb91119ba2464e883616ffd324f864",
"text": "Significant improvements in automobile suspension performance are achieved by active systems. However, current active suspension systems are too expensive and complex. Developments occurring in power electronics, permanent magnet materials, and microelectronic systems justifies analysis of the possibility of implementing electromagnetic actuators in order to improve the performance of automobile suspension systems without excessively increasing complexity and cost. In this paper, the layouts of hydraulic and electromagnetic active suspensions are compared. The actuator requirements are calculated, and some experimental results proving that electromagnetic suspension could become a reality in the future are shown.",
"title": ""
},
{
"docid": "e5c625ceaf78c66c2bfb9562970c09ec",
"text": "A continuing question in neural net research is the size of network needed to solve a particular problem. If training is started with too small a network for the problem no learning can occur. The researcher must then go through a slow process of deciding that no learning is taking place, increasing the size of the network and training again. If a network that is larger than required is used, then processing is slowed, particularly on a conventional von Neumann computer. An approach to this problem is discussed that is based on learning with a net which is larger than the minimum size network required to solve the problem and then pruning the solution network. The result is a small, efficient network that performs as well or better than the original which does not give a complete answer to the question, since the size of the initial network is still largely based on guesswork but it gives a very useful partial answer and sheds some light on the workings of a neural network in the process.<<ETX>>",
"title": ""
}
] |
scidocsrr
|
b6e7bc1157925518397ec2288723b332
|
Classifying Emotions in Customer Support Dialogues in Social Media
|
[
{
"docid": "9b0905c443e4a9b6bef8e5defcae940c",
"text": "Are word-level affect lexicons useful in detecting emotions at sentence level? Some prior research finds no gain over and above what is obtained with ngram features—arguably the most widely used features in text classification. Here, we experiment with two very different emotion lexicons and show that even in supervised settings, an affect lexicon can provide significant gains. We further show that while ngram features tend to be accurate, they are often unsuitable for use in new domains. On the other hand, affect lexicon features tend to generalize and produce better results than ngrams when applied to a new domain.",
"title": ""
}
] |
[
{
"docid": "27bd0bccf28931032558596dd4d8c2d3",
"text": "We address the problem of classification in partially labeled networks (a.k.a. within-network classification) where observed class labels are sparse. Techniques for statistical relational learning have been shown to perform well on network classification tasks by exploiting dependencies between class labels of neighboring nodes. However, relational classifiers can fail when unlabeled nodes have too few labeled neighbors to support learning (during training phase) and/or inference (during testing phase). This situation arises in real-world problems when observed labels are sparse.\n In this paper, we propose a novel approach to within-network classification that combines aspects of statistical relational learning and semi-supervised learning to improve classification performance in sparse networks. Our approach works by adding \"ghost edges\" to a network, which enable the flow of information from labeled to unlabeled nodes. Through experiments on real-world data sets, we demonstrate that our approach performs well across a range of conditions where existing approaches, such as collective classification and semi-supervised learning, fail. On all tasks, our approach improves area under the ROC curve (AUC) by up to 15 points over existing approaches. Furthermore, we demonstrate that our approach runs in time proportional to L • E, where L is the number of labeled nodes and E is the number of edges.",
"title": ""
},
{
"docid": "f9d44eac4e07ed72e59d1aa194105615",
"text": "Each human intestine harbours not only hundreds of trillions of bacteria but also bacteriophage particles, viruses, fungi and archaea, which constitute a complex and dynamic ecosystem referred to as the gut microbiota. An increasing number of data obtained during the last 10 years have indicated changes in gut bacterial composition or function in type 2 diabetic patients. Analysis of this ‘dysbiosis’ enables the detection of alterations in specific bacteria, clusters of bacteria or bacterial functions associated with the occurrence or evolution of type 2 diabetes; these bacteria are predominantly involved in the control of inflammation and energy homeostasis. Our review focuses on two key questions: does gut dysbiosis truly play a role in the occurrence of type 2 diabetes, and will recent discoveries linking the gut microbiota to host health be helpful for the development of novel therapeutic approaches for type 2 diabetes? Here we review how pharmacological, surgical and nutritional interventions for type 2 diabetic patients may impact the gut microbiota. Experimental studies in animals are identifying which bacterial metabolites and components act on host immune homeostasis and glucose metabolism, primarily by targeting intestinal cells involved in endocrine and gut barrier functions. We discuss novel approaches (e.g. probiotics, prebiotics and faecal transfer) and the need for research and adequate intervention studies to evaluate the feasibility and relevance of these new therapies for the management of type 2 diabetes.",
"title": ""
},
{
"docid": "75d5fa282c31e2955b3089d75c0dff4f",
"text": "Over the last two decades, we have seen remarkable progress in computer vision with demonstration of capabilities such as face detection, handwritten digit recognition, reconstructing three-dimensional models of cities, automated monitoring of activities, segmenting out organs or tissues in biological images, and sensing for control of robots and cars. Yet there are many problems where computers still perform significantly below human perception. For example, in the recent PAS. CAL benchmark challenge on visual object detection, the average precision for most 3D object categories was under 50%.",
"title": ""
},
{
"docid": "a9a9e3a2707d677c256695e71b42d086",
"text": "Image warping is a transformation which maps all positions in one image plane to positions in a second plane. It arises in many image analysis problems, whether in order to remove optical distortions introduced by a camera or a particular viewing perspective, to register an image with a map or template, or to align two or more images. The choice of warp is a compromise between a smooth distortion and one which achieves a good match. Smoothness can be ensured by assuming a parametric form for the warp or by constraining it using di erential equations. Matching can be speci ed by points to be brought into alignment, by local measures of correlation between images, or by the coincidence of edges. Parametric and nonparametric approaches to warping, and matching criteria, are reviewed.",
"title": ""
},
{
"docid": "e82918cb388666499767bbd4d59daf84",
"text": "The space around us is represented not once but many times in parietal cortex. These multiple representations encode locations and objects of interest in several egocentric reference frames. Stimulus representations are transformed from the coordinates of receptor surfaces, such as the retina or the cochlea, into the coordinates of effectors, such as the eye, head, or hand. The transformation is accomplished by dynamic updating of spatial representations in conjunction with voluntary movements. This direct sensory-to-motor coordinate transformation obviates the need for a single representation of space in environmental coordinates. In addition to representing object locations in motoric coordinates, parietal neurons exhibit strong modulation by attention. Both top-down and bottom-up mechanisms of attention contribute to the enhancement of visual responses. The saliance of a stimulus is the primary factor in determining the neural response to it. Although parietal neurons represent objects in motor coordinates, visual responses are independent of the intention to perform specific motor acts.",
"title": ""
},
{
"docid": "46884062bbf3153edec5d4943433c216",
"text": "We address the key question of how object part representations can be found from the internal states of CNNs that are trained for high-level tasks, such as object classification. This work provides a new unsupervised method to learn semantic parts and gives new understanding of the internal representations of CNNs. Our technique is based on the hypothesis that semantic parts are represented by populations of neurons rather than by single filters. We propose a clustering technique to extract part representations, which we call Visual Concepts. We show that visual concepts are semantically coherent in that they represent semantic parts, and visually coherent in that corresponding image patches appear very similar. Also, visual concepts provide full spatial coverage of the parts of an object, rather than a few sparse parts as is typically found in keypoint annotations. Furthermore, We treat each visual concept as part detector and evaluate it for keypoint detection using the PASCAL3D+ dataset, and for part detection using our newly annotated ImageNetPart dataset. The experiments demonstrate that visual concepts can be used to detect parts. We also show that some visual concepts respond to several semantic parts, provided these parts are visually similar. Note that our ImageNetPart dataset gives rich part annotations which cover the whole object, making it useful for other part-related applications.",
"title": ""
},
{
"docid": "488b9a67352399733c610ab994c63fb6",
"text": "The occurrence of anti-patterns in software complicate development process and reduce the software quality. The contribution proposes selected methods as an OCL Query, extension to Similarity Scoring Algorithm, Bit-vector Algorithm and rule based approach originally used for design patterns detection. This paper summarizes approaches, important differences between design patterns and anti-patterns structures, modifications and extensions of algorithms and their application to detect selected anti-pattterns.",
"title": ""
},
{
"docid": "b5b4e637065ba7c0c18a821bef375aea",
"text": "The new era of mobile health ushered in by the wide adoption of ubiquitous computing and mobile communications has brought opportunities for governments and companies to rethink their concept of healthcare. Simultaneously, the worldwide urbanization process represents a formidable challenge and attracts attention toward cities that are expected to gather higher populations and provide citizens with services in an efficient and human manner. These two trends have led to the appearance of mobile health and smart cities. In this article we introduce the new concept of smart health, which is the context-aware complement of mobile health within smart cities. We provide an overview of the main fields of knowledge that are involved in the process of building this new concept. Additionally, we discuss the main challenges and opportunities that s-Health would imply and provide a common ground for further research.",
"title": ""
},
{
"docid": "1f677c07ba42617ac590e6e0a5cdfeab",
"text": "Network Functions Virtualization (NFV) is an emerging initiative to overcome increasing operational and capital costs faced by network operators due to the need to physically locate network functions in specific hardware appliances. In NFV, standard IT virtualization evolves to consolidate network functions onto high volume servers, switches and storage that can be located anywhere in the network. Services are built by chaining a set of Virtual Network Functions (VNFs) deployed on commodity hardware. The implementation of NFV leads to the challenge: How several network services (VNF chains) are optimally orchestrated and allocated on the substrate network infrastructure? In this paper, we address this problem and propose CoordVNF, a heuristic method to coordinate the composition of VNF chains and their embedding into the substrate network. CoordVNF aims to minimize bandwidth utilization while computing results within reasonable runtime.",
"title": ""
},
{
"docid": "c7a9efee2b447cbadc149717ad7032ee",
"text": "We introduce a novel method to learn a policy from unsupervised demonstrations of a process. Given a model of the system and a set of sequences of outputs, we find a policy that has a comparable performance to the original policy, without requiring access to the inputs of these demonstrations. We do so by first estimating the inputs of the system from observed unsupervised demonstrations. Then, we learn a policy by applying vanilla supervised learning algorithms to the (estimated)input-output pairs. For the input estimation, we present a new adaptive linear estimator (AdaL-IE) that explicitly trades-off variance and bias in the estimation. As we show empirically, AdaL-IE produces estimates with lower error compared to the state-of-the-art input estimation method, (UMV-IE) [Gillijns and De Moor, 2007]. Using AdaL-IE in conjunction with imitation learning enables us to successfully learn control policies that consistently outperform those using UMV-IE.",
"title": ""
},
{
"docid": "b31235bf87cc8ebd243fd8c52c63f8d4",
"text": "The dual-polarized corporate-feed waveguide slot array antenna is designed for the 60 GHz band. Using the multi-layer structure, we have realized dual-polarization operation. Even though the gain is approximately 1 dB lower than the antenna for the single polarization due to the -15dB cross-polarization level in 8=58°, this antenna still shows very high gain over 32 dBi over the broad bandwidth. This antenna will be fabricated and measured in future.",
"title": ""
},
{
"docid": "0759d6bd8c46a5ea5ce16c3675e07784",
"text": "Because context has a robust influence on the processing of subsequent words, the idea that readers and listeners predict upcoming words has attracted research attention, but prediction has fallen in and out of favor as a likely factor in normal comprehension. We note that the common sense of this word includes both benefits for confirmed predictions and costs for disconfirmed predictions. The N400 component of the event-related potential (ERP) reliably indexes the benefits of semantic context. Evidence that the N400 is sensitive to the other half of prediction--a cost for failure--is largely absent from the literature. This raises the possibility that \"prediction\" is not a good description of what comprehenders do. However, it need not be the case that the benefits and costs of prediction are evident in a single ERP component. Research outside of language processing indicates that late positive components of the ERP are very sensitive to disconfirmed predictions. We review late positive components elicited by words that are potentially more or less predictable from preceding sentence context. This survey suggests that late positive responses to unexpected words are fairly common, but that these consist of two distinct components with different scalp topographies, one associated with semantically incongruent words and one associated with congruent words. We conclude with a discussion of the possible cognitive correlates of these distinct late positivities and their relationships with more thoroughly characterized ERP components, namely the P300, P600 response to syntactic errors, and the \"old/new effect\" in studies of recognition memory.",
"title": ""
},
{
"docid": "fe012505cc7a2ea36de01fc92924a01a",
"text": "The wide usage of Machine Learning (ML) has lead to research on the attack vectors and vulnerability of these systems. The defenses in this area are however still an open problem, and often lead to an arms race. We define a naive, secure classifier at test time and show that a Gaussian Process (GP) is an instance of this classifier given two assumptions: one concerns the distances in the training data, the other rejection at test time. Using these assumptions, we are able to show that a classifier is either secure, or generalizes and thus learns. Our analysis also points towards another factor influencing robustness, the curvature of the classifier. This connection is not unknown for linear models, but GP offer an ideal framework to study this relationship for nonlinear classifiers. We evaluate on five security and two computer vision datasets applying test and training time attacks and membership inference. We show that we only change which attacks are needed to succeed, instead of alleviating the threat. Only for membership inference, there is a setting in which attacks are unsuccessful (< 10% increase in accuracy over random guess). Given these results, we define a classification scheme based on voting, ParGP. This allows us to decide how many points vote and how large the agreement on a class has to be. This ensures a classification output only in cases when there is evidence for a decision, where evidence is parametrized. We evaluate this scheme and obtain promising results.",
"title": ""
},
{
"docid": "5dbf9e66bd0febfb10d234382cec5c46",
"text": "Clustering is a challenging task in data mining technique. The aim of clustering is to group the similar data into number of clusters. Various clustering algorithms have been developed to group data into clusters. However, these clustering algorithms work effectively either on pure numeric data or on pure categorical data, most of them perform poorly on mixed categorical and numerical data types in previous k-means algorithm was used but it is not accurate for large datasets. In this paper we cluster the mixed numeric and categorical data set in efficient manner. In this paper we present a clustering algorithm based on similarity weight and filter method paradigm that works well for data with mixed numeric and categorical features. We propose a modified description of cluster center to overcome the numeric data only limitation and provide a better characterization of clusters. The performance of this algorithm has been studied on benchmark data sets.",
"title": ""
},
{
"docid": "269daf010c813533064fc924ecc34b9e",
"text": "A novel simple and effective autonomous current-sharing controller for parallel three-phase inverters is proposed in this paper. The proposed controller provides faster response and better accuracy in contrast to the conventional droop control, since this novel approach does not require any active or reactive power calculations. Instead, a synchronous-reference-frame (SRF) virtual impedance loop and an SRF-based phase-locked loop are used. Stationary analysis is provided in order to identify the inherent mechanism of the direct and quadrature output currents in relation to the voltage amplitude and frequency with different line impedances by means of the system transfer functions. Comparison experiments from two parallel inverters are presented to compare the control performance of the conventional droop control and the proposed control with different line impedances. In addition, experimental results from a setup with three parallel 2.2-kW inverters verify the effectiveness of the proposed control strategy in different scenarios.",
"title": ""
},
{
"docid": "ed8ee467e7f40d6ba35cc6f8329ca681",
"text": "This paper proposes an architecture for Software Defined Optical Transport Networks. The SDN Controller includes a network abstraction layer allowing the implementation of cognitive controls and policies for autonomic operation, based on global network view. Additionally, the controller implements a virtualized GMPLS control plane, offloading and simplifying the network elements, while unlocking the implementation of new services such as optical VPNs, optical network slicing, and keeping standard OIF interfaces, such as UNI and NNI. The concepts have been implemented and validated in a real testbed network formed by five DWDM nodes equipped with flexgrid WSS ROADMs.",
"title": ""
},
{
"docid": "c2daec5b85a4e8eea614d855c6549ef0",
"text": "An audio-visual corpus has been collected to support the use of common material in speech perception and automatic speech recognition studies. The corpus consists of high-quality audio and video recordings of 1000 sentences spoken by each of 34 talkers. Sentences are simple, syntactically identical phrases such as \"place green at B 4 now\". Intelligibility tests using the audio signals suggest that the material is easily identifiable in quiet and low levels of stationary noise. The annotated corpus is available on the web for research use.",
"title": ""
},
{
"docid": "69a01ea46134301abebd6159942c0b52",
"text": "This paper proposes a crowd counting method. Crowd counting is difficult because of large appearance changes of a target which caused by density and scale changes. Conventional crowd counting methods generally utilize one predictor (e.g. regression and multi-class classifier). However, such only one predictor can not count targets with large appearance changes well. In this paper, we propose to predict the number of targets using multiple CNNs specialized to a specific appearance, and those CNNs are adaptively selected according to the appearance of a test image. By integrating the selected CNNs, the proposed method has the robustness to large appearance changes. In experiments, we confirm that the proposed method can count crowd with lower counting error than a CNN and integration of CNNs with fixed weights. Moreover, we confirm that each predictor automatically specialized to a specific appearance.",
"title": ""
},
{
"docid": "168e62944fb1558c9b6cb8801434ef9d",
"text": "Parkinson’s disease is a neurodegenerative disorder accompanied by depletion of dopamine and loss of dopaminergic neurons in the brain that is believed to be responsible for the motor and non-motor symptoms in this disease. The main drug prescribed for Parkinsonian patients is l-dopa, which can be converted to dopamine by passing through the blood-brain barrier. Although l-dopa is able to improve motor function and improve the quality of life in the patients, there is inter-individual variability and some patients do not achieve the therapeutic effect. Variations in treatment response and side effects of current drugs have convinced scientists to think of treating Parkinson’s disease at the cellular and molecular level. Molecular and cellular therapy for Parkinson’s disease include (i) cell transplantation therapy with human embryonic stem (ES) cells, human induced pluripotent stem (iPS) cells and human fetal mesencephalic tissue, (ii) immunological and inflammatory therapy which is done using antibodies, and (iii) gene therapy with AADC-TH-GCH gene therapy, viral vector-mediated gene delivery, RNA interference-based therapy, CRISPR-Cas9 gene editing system, and alternative methods such as optogenetics and chemogenetics. Although these methods currently have a series of challenges, they seem to be promising techniques for Parkinson’s treatment in future. In this study, these prospective therapeutic approaches are reviewed.",
"title": ""
},
{
"docid": "d3765112295d9a4591b438130df59a25",
"text": "This paper presents the design and mathematical model of a lower extremity exoskeleton device used to make paralyzed people walk again. The design takes into account the anatomy of standard human leg with a total of 11 Degrees of freedom (DoF). A CAD model in SolidWorks is presented along with its fabrication and a mathematical model in MATLAB.",
"title": ""
}
] |
scidocsrr
|
77b6dc62c3125918b32ffe854e2b210b
|
Feature Engineering for Text Classification
|
[
{
"docid": "1ec9b98f0f7509088e7af987af2f51a2",
"text": "In this paper, we describe an automated learning approach to text categorization based on perception learning and a new feature selection metric, called correlation coefficient. Our approach has been teated on the standard Reuters text categorization collection. Empirical results indicate that our approach outperforms the best published results on this % uters collection. In particular, our new feature selection method yields comiderable improvement. We also investigate the usability of our automated hxu-n~ approach by actually developing a system that categorizes texts into a treeof categories. We compare tbe accuracy of our learning approach to a rrddmsed, expert system ap preach that uses a text categorization shell built by Cams gie Group. Although our automated learning approach still gives a lower accuracy, by appropriately inmrporating a set of manually chosen worda to use as f~ures, the combined, semi-automated approach yields accuracy close to the * baaed approach.",
"title": ""
}
] |
[
{
"docid": "e9dc264c49d49267e48a28072acb76c5",
"text": "We present Falcon, an interactive, deterministic, and declarative data cleaning system, which uses SQL update queries as the language to repair data. Falcon does not rely on the existence of a set of pre-defined data quality rules. On the contrary, it encourages users to explore the data, identify possible problems, and make updates to fix them. Bootstrapped by one user update, Falcon guesses a set of possible sql update queries that can be used to repair the data. The main technical challenge addressed in this paper consists in finding a set of sql update queries that is minimal in size and at the same time fixes the largest number of errors in the data. We formalize this problem as a search in a lattice-shaped space. To guarantee that the chosen updates are semantically correct, Falcon navigates the lattice by interacting with users to gradually validate the set of sql update queries. Besides using traditional one-hop based traverse algorithms (e.g., BFS or DFS), we describe novel multi-hop search algorithms such that Falcon can dive over the lattice and conduct the search efficiently. Our novel search strategy is coupled with a number of optimization techniques to further prune the search space and efficiently maintain the lattice. We have conducted extensive experiments using both real-world and synthetic datasets to show that Falcon can effectively communicate with users in data repairing.",
"title": ""
},
{
"docid": "4709a4e1165abb5d0018b74495218fc7",
"text": "Network monitoring guides network operators in understanding the current behavior of a network. Therefore, accurate and efficient monitoring is vital to ensure that the network operates according to the intended behavior and then to troubleshoot any deviations. However, the current practice of network-monitoring largely depends on manual operations, and thus enterprises spend a significant portion of their budgets on the workforce that monitor their networks. We analyze present network-monitoring technologies, identify open problems, and suggest future directions. In particular, our findings are based on two different analyses. The first analysis assesses how well present technologies integrate with the entire cycle of network-management operations: design, deployment, and monitoring. Network operators first design network configurations, given a set of requirements, then they deploy the new design, and finally they verify it by continuously monitoring the network’s behavior. One of our observations is that the efficiency of this cycle can be greatly improved by automated deployment of pre-designed configurations, in response to changes in monitored network behavior. Our second analysis focuses on network-monitoring technologies and group issues in these technologies into five categories. Such grouping leads to the identification of major problem groups in network monitoring, e.g., efficient management of increasing amounts of measurements for storage, analysis, and presentation. We argue that continuous effort is needed in improving network-monitoring since the presented problems will become even more serious in the future, as networks grow in size and carry more data. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "72be67603d8548e9c161312b5d60c889",
"text": "RUNX1 is a member of the core-binding factor family of transcription factors and is indispensable for the establishment of definitive hematopoiesis in vertebrates. RUNX1 is one of the most frequently mutated genes in a variety of hematological malignancies. Germ line mutations in RUNX1 cause familial platelet disorder with associated myeloid malignancies. Somatic mutations and chromosomal rearrangements involving RUNX1 are frequently observed in myelodysplastic syndrome and leukemias of myeloid and lymphoid lineages, that is, acute myeloid leukemia, acute lymphoblastic leukemia, and chronic myelomonocytic leukemia. More recent studies suggest that the wild-type RUNX1 is required for growth and survival of certain types of leukemia cells. The purpose of this review is to discuss the current status of our understanding about the role of RUNX1 in hematological malignancies.",
"title": ""
},
{
"docid": "dd144f12a70a37160007f2b7f04b4d77",
"text": "This research examines the role of trait empathy in emotional contagion through non-social targets-art objects. Studies 1a and 1b showed that high- (compared to low-) empathy individuals are more likely to infer an artist's emotions based on the emotional valence of the artwork and, as a result, are more likely to experience the respective emotions themselves. Studies 2a and 2b experimentally manipulated artists' emotions via revealing details about their personal life. Study 3 experimentally induced positive vs. negative emotions in individuals who then wrote literary texts. These texts were shown to another sample of participants. High- (compared to low-) empathy participants were more like to accurately identify and take on the emotions ostensibly (Studies 2a and 2b) or actually (Study 3) experienced by the \"artists\". High-empathy individuals' enhanced sensitivity to others' emotions is not restricted to social targets, such as faces, but extends to products of the human mind, such as objects of art.",
"title": ""
},
{
"docid": "b9167711b44b1b2f8da10cb9bd135f85",
"text": "Cloud computing enables an entire ecosystem of developing, composing, and providing IT services. An emerging class of cloud-based software architectures, serverless, focuses on providing software architects the ability to execute arbitrary functions with small overhead in server management, as Function-as-a-service (FaaS). However useful, serverless and FaaS suffer from a community problem that faces every emerging technology, which has indeed also hampered cloud computing a decade ago: lack of clear terminology, and scattered vision about the field. In this work, we address this community problem. We clarify the term serverless, by reducing it to cloud functions as programming units, and a model of executing simple and complex (e.g., workflows of) functions with operations managed primarily by the cloud provider. We propose a research vision, where 4 key directions (perspectives) present 17 technical opportunities and challenges.",
"title": ""
},
{
"docid": "98ce0c1bc955b7aa64e1820b56a1be6c",
"text": "Lipid nanoparticles (LNPs) have attracted special interest during last few decades. Solid lipid nanoparticles (SLNs) and nanostructured lipid carriers (NLCs) are two major types of Lipid-based nanoparticles. SLNs were developed to overcome the limitations of other colloidal carriers, such as emulsions, liposomes and polymeric nanoparticles because they have advantages like good release profile and targeted drug delivery with excellent physical stability. In the next generation of the lipid nanoparticle, NLCs are modified SLNs which improve the stability and capacity loading. Three structural models of NLCs have been proposed. These LNPs have potential applications in drug delivery field, research, cosmetics, clinical medicine, etc. This article focuses on features, structure and innovation of LNPs and presents a wide discussion about preparation methods, advantages, disadvantages and applications of LNPs by focusing on SLNs and NLCs.",
"title": ""
},
{
"docid": "22070b95e5eeebf17bc7019aabc5f5b0",
"text": "s of the 2nd Cancer Cachexia Conference, Montreal, Canada, 26-28 September 2014 Published online 31March 2015 inWiley Online Library (wileyonlinelibrary.com) © 2015 John Wiley & Sons Ltd 1-01 Body composition and prognostication in cancer Vickie Baracos Department of Oncology, University of Alberta, Edmonton Alberta, Canada Cancer cachexia contributes to poor prognosis through progressive depletion of the body’s energy and protein reserves; research is revealing the impact of the quantitly of these reserves on survival. Our group has exploitated computed tomography (CT) images to study body composition in cancer patients. We argue that CT taken for the purposes of diagnosis and routine follow-up can be used to derive clinically useful information on skeletal muscle and fat amount and distribution. Population-based data sets have been analyzed, revealing wide variation in individual proportions of fat and muscle (Prado et al. Lancet Oncology 2008;9:629–35; Martin et al. J. Clin Oncol. 2013: 31:1539–47). Muscle loss during aging is well known and is prognostic of frailty, falls, fractures, loss of independence, increased length of hospital stay, infectious complications in hospital and mortality. Muscle depletion is not limited to people who appear underweight and it may be a hidden condition in normal weight, overweight or obese people (i.e. sarcopenic obesity). Disparate behaviour of skeletal muscle and fat was acknowledged by an international consensus of experts on cancer cachexia, defined as being characterized by loss of skeletal muscle with or without loss of fat mass. Within the large interindividual variation of body composition in cancer patients, several consistent themes are emerging. Skeletal muscle depletion is a powerful predictor of cancer related mortality as well as of severe toxicity during systemic chemotherapy. Distinct from skeletal muscle, the fat mass is an important reserve of energy. High fat mass (i.e. obesity) appears to confer a survival advantage in patients with diseases associated with wasting, including cancer, rather than a disadvantage as understood from studies of all-cause mortality. The larger energy reserve of obese persons is thought to confer this advantage. Obesity predicted higher survival especially strongly when sarcopenia is absent. To specifically understand the relationships between body composition and cancer utcomes, we have reviewed several thousand clinical CT images. We used statistical methods (i.e. optimal stratification) to define muscle mass cutpoints that relate significantly to increased mortality and evaluated them in survival models alongside conventional covariates including cancer site, stage and performance status. Muscle depletion is associated with mortality in diverse tumor groups including patients with cancers of the pancreas, lung, breast and gastrointestinal tract, liver, bladder and kidney. Cancer patients who are cachexic by conventional criteria (involuntary weight loss) and by the additional criterion of severe muscle depletion share a very poor prognosis, regardless of overall body weight. Severe muscle depletion was identified in patients with cancers of the breast, colon, lung, kidney, liver, head & neck and lymphoma and these consistently had worse toxicity resulting in dose reductions or definitive termination of therapy when treated with 5-FU, capecitabine, sorafenib, sunitinib, carboplatin, cisplatin or a regimen (5FU with epirubicin & cyclophosphamide; 5FUwith oxaliplatin or CPT 11). 
Reduced treatment may explain excess early mortality in patients affected by severe muscle depletion. Survival models including cachexia and body weight/composition characteristics showed excellent fit (i.e. concordance statistics >0.9) and outperformed prediction models using only conventional cancer related covariates (C-statistics 0.75-0.8). In renal cell carcinoma muscle depletion was independent of the frequently used Memorial Sloan Kettering Cancer Center prognostic score and similar results were seen for muscle depletion in lymphoma independent of the FLIPI prognostic score. 1–02 Myostatin as a marker of cachexia in gastric cancer Maurizio Muscaritoli, Zaira Aversa and Filippo Rossi Fanelli Department of Clinical Medicine Sapienza, University of Rome, Rome, Italy Myostatin, also known as growth and differentiation factor-8 (GDF-8), is a negative regulator of muscle mass, belonging to the TGF-β superfamily. Myostatin is secreted as an inactive propeptide that is cleaved to generate a mature ligand, whose activity may be regulated in vivo by association and dissociation with binding proteins, including propeptide itself as well as follistatin or related molecules. Active myostatin binds the activin type II B receptor (ActRIIB) and, to a lesser extent, the related ActRIIA, resulting in the phosphorylation and consequent recruitment of the low-affinity type I receptor ALK (activin receptor like-kinase)-4 or ALK-5. This binding induces phosphorylation and activation of the transcription factors SMAD2 and 3 [mammalian homologue of Drosophila MAD (MothersAgainst-Decapentaplegic gene)], which translocate into the nucleus and together with SMAD 4 regulate the expression of target genes. In addition, myostatin has been suggested to exert its action through different pathways, such as the extracellular signal-regulated kinase (ERK)/ mitogen activated protein kinase (MAPK) cascade. Moreover, cross-talking between myostatin pathway and the IGF-1 axis has been postulated. Inactivating mutations of myostatin gene have been found in the “double-muscled cattle phenotype” as well as in humans. Myostatin null mice are characterized by marked muscle enlargement (~100 to 200% more than controls), exhibiting both fiber hypertrophy and hyperplasia, whereas systemic administration of myostatin in adult mice induces profound muscle and fat loss. Moreover, high myostatin protein levels have been reported in conditions associated with muscle depletion, such as aging, denervation atrophy, or mechanical unloading. Results from our laboratory have shown that myostatin signaling is enhanced in skeletal muscle of tumor-bearing rats and mice. Similarly, others have shown that myostatin inhibition, either by antisense oligonucleotides or by © 2015 John Wiley & Sons Ltd Journal of Cachexia, Sarcopenia and Muscle 2015; 6: 2–31 DOI: 10.1002/jcsm.12004 administration of an Actvin Receptor II B/Fragment-crystallizable (ActRIIB/ Fc) fusion protein or ActRIIB-soluble form, preventmuscle wasting in tumorbearing mice. When myostatin signaling was studied in muscle biopsies obtained during surgical procedure from non-weight losing gastric cancer patients, we found that protein expression of bothmyostatin and phosphorylated GSK-3β were significantly increased, while phosphorylated-SMAD 2/3did not significantly changewith respect to controls. 
Although the reason of this result is not known at present, a possible explanation could be that myostatin increase is paralleled by a concomitant rise in the expression of follistatin, a physiological inhibitor of myostatin. This would result in a myostatin/follistatin ratio similar to controls, thereby maintaining the myostatin signaling in basal conditions. In addition, unchanged levels of pSmad 2/3, despite increased myostatin protein expression, also may reflect a modulation of other molecules acting through the activin receptor type IIB, such as activin A. Interestingly enough, we found that the expression levels of muscle myostatin mRNA are significantly reduced in gastric cancer patients. Although the reason for these apparently contradictory results is not known at present, it is conceivable that the differences may at least in part be due to posttranscriptional mechanisms, such as increased myostatin synthesis secondary to increased translational efficiency or reduced degradation of myostatin. Based on the available data, it may be concluded that myostatin signaling is perturbed in the skeletal muscle of patients with gastric cancer. Changes occur even in early disease stage and in the absence of significant weight loss, supporting the view that the molecular changes contributing to muscle wasting and cancer cachexia are operating since the early phases of cancer. Myostatin signaling is complex and may be affected by the interplay of inhibitors such as follistatin and/or other members of the TGFβ superfamily. Myostatin may represent a suitable target for future pharmacological interventions aimed at the prevention and treatment of cancer-related muscle loss. 1-03 Role of Activin A in human cancer cachexia (ACTICA study) A Loumaye, M de Barsy, M Nachit, L Frateur, P Lause, A Van Maanen, JP Thissen Cancer Center of the Cliniques Universitaires St-Luc; Radiology, Cliniques Universitaires St-Luc; Endocrinology, Diabetology and Nutrition Dept, IREC, Université Catholique de Louvain and Cliniques Universitaires St-Luc, Brussels, Belgium Cachexia is a complex metabolic syndrome associated with underlying illness, characterized by loss of skeletal muscle and not reversible by nutritional support. Recent animal observations suggest that the production of Activin A (ActA), a member of the TGFß superfamily, by some tumors might contribute to cancer cachexia. This hypothesis seems attractive since inhibitors of ActA have been developed. Nevertheless, the role of ActA in the development of cancer cachexia has never been investigated in humans. Our goal was to demonstrate the role of ActA as a mediator of the human cancer cachexia and to assess its potential use as a biomarker of cachexia. Patients with colorectal or lung cancer were prospectively evaluated. All patients had clinical, nutritional and functional assessment. The skeletal muscle mass was measured by bioimpedance (BIA) and abdomen CT-scan (CT). Blood samples were collected in standardized conditions to measure circulating levels of ActA. One-hundred fifty-two patients were recruited (59 lung",
"title": ""
},
{
"docid": "b6983a5ccdac40607949e2bfe2beace2",
"text": "A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as \"p-hacking,\" occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.",
"title": ""
},
{
"docid": "1dfbe95e53aeae347c2b42ef297a859f",
"text": "With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important. Question answering over knowledge base (KB-QA) is one of the promising approaches to access the substantial knowledge. Meanwhile, as the neural networkbased (NN-based) methods develop, NNbased KB-QA has already achieved impressive results. However, previous work did not put more emphasis on question representation, and the question is converted into a fixed vector regardless of its candidate answers. This simple representation strategy is not easy to express the proper information in the question. Hence, we present an end-to-end neural network model to represent the questions and their corresponding scores dynamically according to the various candidate answer aspects via cross-attention mechanism. In addition, we leverage the global knowledge inside the underlying KB, aiming at integrating the rich KB information into the representation of the answers. As a result, it could alleviates the out-of-vocabulary (OOV) problem, which helps the crossattention model to represent the question more precisely. The experimental results on WebQuestions demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "0d90fdb9568ca23c608bdfdae03d26c9",
"text": "Thank you for reading pattern recognition statistical structural and neural approaches. Maybe you have knowledge that, people have look numerous times for their chosen readings like this pattern recognition statistical structural and neural approaches, but end up in malicious downloads. Rather than reading a good book with a cup of coffee in the afternoon, instead they juggled with some harmful bugs inside their computer.",
"title": ""
},
{
"docid": "610f445a9cb22c5f68aae2767acc71eb",
"text": "Distributed data management systems often operate on \"elastic'' clusters that can scale up or down on demand. These systems face numerous challenges, including data fragmentation, replication, and cluster sizing. Unfortunately, these challenges have traditionally been treated independently, leaving administrators with little insight on how the interplay of these decisions affects query performance. This paper introduces NashDB, an adaptive data distribution framework that relies on an economic model to automatically balance the supply and demand of data fragments, replicas, and cluster nodes. NashDB adapts its decisions to query priorities and shifting workloads, while avoiding underutilized cluster nodes and redundant replicas. This paper introduces and evaluates NashDB's model, as well as a suite of optimization techniques designed to efficiently identify data distribution schemes that match workload demands and transition the system to this new scheme with minimum data transfer overhead. Experimentally, we show that NashDB is often Pareto dominant compared to other solutions.",
"title": ""
},
{
"docid": "7cebca85b555c6312f14cfa90fb1b50b",
"text": "This paper describes a new evolutionary algorithm that is especially well suited to AI-Assisted Game Design. The approach adopted in this paper is to use observations of AI agents playing the game to estimate the game's quality. Some of best agents for this purpose are General Video Game AI agents, since they can be deployed directly on a new game without game-specific tuning; these agents tend to be based on stochastic algorithms which give robust but noisy results and tend to be expensive to run. This motivates the main contribution of the paper: the development of the novel N-Tuple Bandit Evolutionary Algorithm, where a model is used to estimate the fitness of unsampled points and a bandit approach is used to balance exploration and exploitation of the search space. Initial results on optimising a Space Battle game variant suggest that the algorithm offers far more robust results than the Random Mutation Hill Climber and a Biased Mutation variant, which are themselves known to offer competitive performance across a range of problems. Subjective observations are also given by human players on the nature of the evolved games, which indicate a preference towards games generated by the N-Tuple algorithm.",
"title": ""
},
{
"docid": "5228454ef59c012b079885b2cce0c012",
"text": "As a contribution to the HICSS 50 Anniversary Conference, we proposed a new mini-track on Text Mining in Big Data Analytics. This mini-track builds on the successful HICSS Workshop on Text Mining and recognizes the growing importance of unstructured text as a data source for descriptive and predictive analytics in research on collaboration systems and technologies. In this initial iteration of the mini-track, we have accepted three papers that cover conceptual issues, methodological approaches to social media, and the development of categorization models and dictionaries useful in a corporate context. The minitrack highlights the potential of an interdisciplinary research community within the HICSS collaboration systems and technologies track.",
"title": ""
},
{
"docid": "98719df1a7f6b748ab867f1ae4d2ece5",
"text": "Atherosclerosis is a chronic disease of the arterial wall, and a leading cause of death and loss of productive life years worldwide. Research into the disease has led to many compelling hypotheses about the pathophysiology of atherosclerotic lesion formation and of complications such as myocardial infarction and stroke. Yet, despite these advances, we still lack definitive evidence to show that processes such as lipoprotein oxidation, inflammation and immunity have a crucial involvement in human atherosclerosis. Experimental atherosclerosis in animals furnishes an important research tool, but extrapolation to humans requires care. Understanding how to combine experimental and clinical science will provide further insight into atherosclerosis and could lead to new clinical applications.",
"title": ""
},
{
"docid": "c5628c76f448fb71165069aefc75a2c4",
"text": "This research work aims to design and develop a wireless food ordering system in the restaurant. The project presents in-depth on the technical operation of the Wireless Ordering System (WOS) including systems architecture, function, limitations and recommendations. It is believed that with the increasing use of handheld device e.g PDAs in restaurants, pervasive application will become an important tool for restaurants to improve the management aspect by utilizing PDAs to coordinate food ordering could increase efficiency for restaurants and caterers by saving time, reducing human errors and by providing higher quality customer service. With the combination of simple design and readily available emerging communications technologies, it can be concluded that this system is an attractive solution for the hospitality industry.",
"title": ""
},
{
"docid": "63a16361103abc8b2cc149f44f79ae62",
"text": "Maturity models are a well-known instrument to support the improvement of functional domains in IS, like software development or testing. In this paper we present a generic method for developing focus area maturity models based on both extensive industrial experience and scientific investigation. Focus area maturity models are distinguished from fixed-level maturity models, like CMM, in that they are especially suited to the incremental improvement of functional domains.",
"title": ""
},
{
"docid": "0da299fb53db5980a10e0ae8699d2209",
"text": "Modern heuristics or metaheuristics are optimization algorithms that have been increasingly used during the last decades to support complex decision-making in a number of fields, such as logistics and transportation, telecommunication networks, bioinformatics, finance, and the like. The continuous increase in computing power, together with advancements in metaheuristics frameworks and parallelization strategies, are empowering these types of algorithms as one of the best alternatives to solve rich and real-life combinatorial optimization problems that arise in a number of financial and banking activities. This article reviews some of the works related to the use of metaheuristics in solving both classical and emergent problems in the finance arena. A non-exhaustive list of examples includes rich portfolio optimization, index tracking, enhanced indexation, credit risk, stock investments, financial project scheduling, option pricing, feature selection, bankruptcy and financial distress prediction, and credit risk assessment. This article also discusses some open opportunities for researchers in the field, and forecast the evolution of metaheuristics to include real-life uncertainty conditions into the optimization problems being considered.",
"title": ""
},
{
"docid": "b6535f4d06f3a143a1ab2ba6635dd0cb",
"text": "Health information privacy concerns (HIPC) are commonly cited as primary barrier to the ongoing growth of health wearables (HW) for private users. However, little is known about the driving factors of HIPC and the nature of users’ privacy perception. Seven semi-structured focus groups with current users of HWs were conducted to empirically explore factors driving users’ HIPC. Based on an iterative thematic analysis approach, where the interview codes were systematically matched with literature, I develop a thematic map that visualizes the privacy perception of HW users. In particular this map uncovers three central factors (Dilemma of Forced Acceptance, State-Trait Data Sensitivity and Transparency) on HIPC, which HW users have to deal with.",
"title": ""
},
{
"docid": "677f2e8be01f1e2becda8efc720db85b",
"text": "A snake is an energy-minimizing spline guided by external constraint forces and influenced by image forces that pull it toward features such as lines and edges. Snakes are active contour models: they lock onto nearby edges, localizing them accurately. Scale-space continuation can be used to enlarge the capture region surrounding a feature. Snakes provide a unified account of a number of visual problems, including detection of edges, lines, and subjective contours; motion tracking; and stereo matching. We have used snakes successfully for interactive interpretation, in which user-imposed constraint forces guide the snake near features of interest.",
"title": ""
},
{
"docid": "a059b3ef66c54ecbe43aa0e8d35b9da8",
"text": "Completion of lagging strand DNA synthesis requires processing of up to 50 million Okazaki fragments per cell cycle in mammalian cells. Even in yeast, the Okazaki fragment maturation happens approximately a million times during a single round of DNA replication. Therefore, efficient processing of Okazaki fragments is vital for DNA replication and cell proliferation. During this process, primase-synthesized RNA/DNA primers are removed, and Okazaki fragments are joined into an intact lagging strand DNA. The processing of RNA/DNA primers requires a group of structure-specific nucleases typified by flap endonuclease 1 (FEN1). Here, we summarize the distinct roles of these nucleases in different pathways for removal of RNA/DNA primers. Recent findings reveal that Okazaki fragment maturation is highly coordinated. The dynamic interactions of polymerase δ, FEN1 and DNA ligase I with proliferating cell nuclear antigen allow these enzymes to act sequentially during Okazaki fragment maturation. Such protein-protein interactions may be regulated by post-translational modifications. We also discuss studies using mutant mouse models that suggest two distinct cancer etiological mechanisms arising from defects in different steps of Okazaki fragment maturation. Mutations that affect the efficiency of RNA primer removal may result in accumulation of unligated nicks and DNA double-strand breaks. These DNA strand breaks can cause varying forms of chromosome aberrations, contributing to development of cancer that associates with aneuploidy and gross chromosomal rearrangement. On the other hand, mutations that impair editing out of polymerase α incorporation errors result in cancer displaying a strong mutator phenotype.",
"title": ""
}
] |
scidocsrr
|
becbceca094c91340955e53721ce3f2e
|
Business-to-business interactions: issues and enabling technologies
|
[
{
"docid": "34a5d59c8b72690c7d776871447af6d0",
"text": "E lectronic commerce lets people purchase goods and exchange information on business transactions online. The most popular e-commerce channel is the Internet. Although the Internet's role as a business channel is a fairly recent phenomenon, its impact, financial and otherwise, has been substantially greater than that of other business channels in existence for several decades. E-commerce gives companies improved efficiency and reliability of business processes through transaction automation. There are two major types of e-commerce: business to consumer (B2C), in which consumers purchase products and services from businesses , and business to business (B2B), in which businesses buy and sell among themselves. A typical business depends on other businesses for several of the direct and indirect inputs to its end products. For example, Dell Computer depends on one company for microprocessor chips and another for hard drives. B2B e-commerce automates and streamlines the process of buying and selling these intermediate products. It provides more reliable updating of business data. For procurement transactions, buyers and sellers can meet in an electronic marketplace and exchange information. In addition, B2B makes product information available globally and updates it in real time. Hence, procuring organizations can take advantage of vast amounts of product information. B2C e-commerce is now sufficiently stable. Judging from its success, we can expect B2B to similarly improve business processes for a better return on investment. Market researchers predict that B2B transactions will amount to a few trillion dollars in the next few years, as compared to about 100 billion dollars' worth of B2C transactions. B2C was easier to achieve, given the relative simplicity of reaching its target: the individual consumer. That's not the case with B2B, which involves engineering the interactions of diverse, complex enterprises. Interoperability is therefore a key issue in B2B. To achieve interoperability, many companies have formed consortia to develop B2B frameworks—generic templates that provide functions enabling businesses to communicate efficiently over the Internet. The consor-tia aim to provide an industrywide standard that companies can easily adopt. Their work has resulted in several technical standards. Among the most popular are Open Buying on the Internet (OBI), eCo, RosettaNet, commerce XML (cXML), and BizTalk. The problem with these standards, and many others, is that they are incompatible. Businesses trying to implement a B2B framework are bewildered by a variety of standards that point in different directions. Each standard has its merits and demerits. To aid decision-makers in choosing …",
"title": ""
}
] |
[
{
"docid": "f6deeee48e0c8f1ed1d922093080d702",
"text": "Foreword: The ACM SIGCHI (Association for Computing Machinery Special Interest Group in Computer Human Interaction) community conducted a deliberative process involving a high-visibility committee, a day-long workshop at CHI99 (Pittsburgh, PA, May 15, 1999) and a collaborative authoring process. This interim report is offered to produce further discussion and input leading to endorsement by the SIGCHI Executive Committee and then other professional societies. The scope of this research agenda included advanced information and communications technology research that could yield benefits over the next two to five years.",
"title": ""
},
{
"docid": "015326feea60387bc2a8cdc9ea6a7f81",
"text": "Phosphorylation of the transcription factor CREB is thought to be important in processes underlying long-term memory. It is unclear whether CREB phosphorylation can carry information about the sign of changes in synaptic strength, whether CREB pathways are equally activated in neurons receiving or providing synaptic input, or how synapse-to-nucleus communication is mediated. We found that Ca(2+)-dependent nuclear CREB phosphorylation was rapidly evoked by synaptic stimuli including, but not limited to, those that induced potentiation and depression of synaptic strength. In striking contrast, high frequency action potential firing alone failed to trigger CREB phosphorylation. Activation of a submembranous Ca2+ sensor, just beneath sites of Ca2+ entry, appears critical for triggering nuclear CREB phosphorylation via calmodulin and a Ca2+/calmodulin-dependent protein kinase.",
"title": ""
},
{
"docid": "e7473169711de31dc063ace07ec799f9",
"text": "Two major tasks in spoken language understanding (SLU) are intent determination (ID) and slot filling (SF). Recurrent neural networks (RNNs) have been proved effective in SF, while there is no prior work using RNNs in ID. Based on the idea that the intent and semantic slots of a sentence are correlative, we propose a joint model for both tasks. Gated recurrent unit (GRU) is used to learn the representation of each time step, by which the label of each slot is predicted. Meanwhile, a max-pooling layer is employed to capture global features of a sentence for intent classification. The representations are shared by two tasks and the model is trained by a united loss function. We conduct experiments on two datasets, and the experimental results demonstrate that our model outperforms the state-of-theart approaches on both tasks.",
"title": ""
},
{
"docid": "e5f30c0d2c25b6b90c136d1c84ba8a75",
"text": "Modern systems for real-time hand tracking rely on a combination of discriminative and generative approaches to robustly recover hand poses. Generative approaches require the specification of a geometric model. In this paper, we propose a the use of sphere-meshes as a novel geometric representation for real-time generative hand tracking. How tightly this model fits a specific user heavily affects tracking precision. We derive an optimization to non-rigidly deform a template model to fit the user data in a number of poses. This optimization jointly captures the user's static and dynamic hand geometry, thus facilitating high-precision registration. At the same time, the limited number of primitives in the tracking template allows us to retain excellent computational performance. We confirm this by embedding our models in an open source real-time registration algorithm to obtain a tracker steadily running at 60Hz. We demonstrate the effectiveness of our solution by qualitatively and quantitatively evaluating tracking precision on a variety of complex motions. We show that the improved tracking accuracy at high frame-rate enables stable tracking of extended and complex motion sequences without the need for per-frame re-initialization. To enable further research in the area of high-precision hand tracking, we publicly release source code and evaluation datasets.",
"title": ""
},
{
"docid": "0488511dc0641993572945e98a561cc7",
"text": "Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neuron network through a set of training data. We have seen wide adoption of DL in many safety-critical scenarios. However, a plethora of studies have shown that the state-of-the-art DL systems suffer from various vulnerabilities which can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by the accuracy of test data. Considering the limitation of accessible high quality test data, good accuracy performance on test data can hardly provide confidence to the testing adequacy and generality of DL systems. Unlike traditional software systems that have clear and controllable logic and functionality, the lack of interpretability in a DL system makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed. The in-depth evaluation of our proposed testing criteria is demonstrated on two well-known datasets, five DL systems, and with four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.",
"title": ""
},
{
"docid": "a9052b10f9750d58eb33b9e5d564ee6e",
"text": "Cyber Physical Systems (CPS) play significant role in shaping smart manufacturing systems. CPS integrate computation with physical processes where behaviors are represented in both cyber and physical parts of the system. In order to understand CPS in the context of smart manufacturing, an overview of CPS technologies, components, and relevant standards is presented. A detailed technical review of the existing engineering tools and practices from major control vendors has been conducted. Furthermore, potential research areas have been identified in order to enhance the tools functionalities and capabilities in supporting CPS development process.",
"title": ""
},
{
"docid": "902aab15808014d55a9620bcc48621f5",
"text": "Software developers are always looking for ways to boost their effectiveness and productivity and perform complex jobs more quickly and easily, particularly as projects have become increasingly large and complex. Programmers want to shed unneeded complexity and outdated methodologies and move to approaches that focus on making programming simpler and faster. With this in mind, many developers are increasingly using dynamic languages such as JavaScript, Perl, Python, and Ruby. Although software experts disagree on the exact definition, a dynamic language basically enables programs that can change their code and logical structures at runtime, adding variable types, module names, classes, and functions as they are running. These languages frequently are interpreted and generally check typing at runtime",
"title": ""
},
{
"docid": "a8da8a2d902c38c6656ea5db841a4eb1",
"text": "The uses of the World Wide Web on the Internet for commerce and information access continue to expand. The e-commerce business has proven to be a promising channel of choice for consumers as it is gradually transforming into a mainstream business activity. However, lack of trust has been identified as a major obstacle to the adoption of online shopping. Empirical study of online trust is constrained by the shortage of high-quality measures of general trust in the e-commence contexts. Based on theoretical or empirical studies in the literature of marketing or information system, nine factors have sound theoretical sense and support from the literature. A survey method was used for data collection in this study. A total of 172 usable questionnaires were collected from respondents. This study presents a new set of instruments for use in studying online trust of an individual. The items in the instrument were analyzed using a factors analysis. The results demonstrated reliable reliability and validity in the instrument.This study identified seven factors has a significant impact on online trust. The seven dominant factors are reputation, third-party assurance, customer service, propensity to trust, website quality, system assurance and brand. As consumers consider that doing business with online vendors involves risk and uncertainty, online business organizations need to overcome these barriers. Further, implication of the finding also provides e-commerce practitioners with guideline for effectively engender online customer trust.",
"title": ""
},
{
"docid": "1f1a6df3b85a35af375a47a93584f498",
"text": "Natural language generation (NLG) is an important component of question answering(QA) systems which has a significant impact on system quality. Most tranditional QA systems based on templates or rules tend to generate rigid and stylised responses without the natural variation of human language. Furthermore, such methods need an amount of work to generate the templates or rules. To address this problem, we propose a Context-Aware LSTM model for NLG. The model is completely driven by data without manual designed templates or rules. In addition, the context information, including the question to be answered, semantic values to be addressed in the response, and the dialogue act type during interaction, are well approached in the neural network model, which enables the model to produce variant and informative responses. The quantitative evaluation and human evaluation show that CA-LSTM obtains state-of-the-art performance.",
"title": ""
},
{
"docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21",
"text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.",
"title": ""
},
{
"docid": "056a1d216afd6ea3841b9d4f49c896b6",
"text": "The first car was invented in 1870 by Siegfried Marcus (Guarnieri, 2011). Actually it was just a wagon with an engine but without a steering wheel and without brakes. Instead, it was controlled by the legs of the driver. Converting traditional vehicles into autonomous vehicles was not just one step. The first step was just 28 years after the invention of cars that is to say 1898. This step's concept was moving a vehicle by a remote controller (Nikola, 1898). Since this first step and as computers have been becoming advanced and sophisticated, many functions of modern vehicles have been converted to be entirely automatic with no need of even remote controlling. Changing gears was one of the first actions that could be done automatically without an involvement of the driver (Anthony, 1908), so such cars got the title of \"automatic cars\"; however, nowadays there are vehicles that can completely travel by themselves although they are not yet allowed to travel on public roads in most of the world. Such vehicles are called \"autonomous vehicles\" or \"driverless cars\".",
"title": ""
},
{
"docid": "627aee14031293785224efdb7bac69f0",
"text": "Data on characteristics of metal-oxide surge arresters indicates that for fast front surges, those with rise times less than 8μs, the peak of the voltage wave occurs before the peak of the current wave and the residual voltage across the arrester increases as the time to crest of the arrester discharge current decreases. Several models have been proposed to simulate this frequency-dependent characteristic. These models differ in the calculation and adjustment of their parameters. In the present paper, a simulation of metal oxide surge arrester (MOSA) dynamic behavior during fast electromagnetic transients on power systems is done. Some models proposed in the literature are used. The simulations are performed with the Alternative Transients Program (ATP) version of Electromagnetic Transient Program (EMTP) to evaluate some metal oxide surge arrester models and verify their accuracy.",
"title": ""
},
{
"docid": "c83ec9a4ec6f58ea2fe57bf2e4fa0c37",
"text": "Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, letting alone the unsupervised retrieval task. We propose the selective convolutional descriptor aggregation (SCDA) method. The SCDA first localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and the dimensionality is reduced into a short feature vector using the best practices we found. The SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained data sets confirm the effectiveness of the SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA’s high-mean average precision in fine-grained retrieval. Moreover, on general image retrieval data sets, the SCDA achieves comparable retrieval results with the state-of-the-art general image retrieval approaches.",
"title": ""
},
{
"docid": "541eb97c2b008fefa6b50a5b372b2f31",
"text": "Due to advancements in the mobile technology and the presence of strong mobile platforms, it is now possible to use the revolutionising augmented reality technology in mobiles. This research work is based on the understanding of different types of learning theories, concept of mobile learning and mobile augmented reality and discusses how applications using these advanced technologies can shape today's education systems.",
"title": ""
},
{
"docid": "9db9902c0e9d5fc24714554625a04c7a",
"text": "Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these “Sybil attacks” is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.",
"title": ""
},
{
"docid": "2f8f1f2db01eeb9a47591e77bb1c835a",
"text": "We present an input method which enables complex hands-free interaction through 3d handwriting recognition. Users can write text in the air as if they were using an imaginary blackboard. Motion sensing is done wirelessly by accelerometers and gyroscopes which are attached to the back of the hand. We propose a two-stage approach for spotting and recognition of handwriting gestures. The spotting stage uses a Support Vector Machine to identify data segments which contain handwriting. The recognition stage uses Hidden Markov Models (HMM) to generate the text representation from the motion sensor data. Individual characters are modeled by HMMs and concatenated to word models. Our system can continuously recognize arbitrary sentences, based on a freely definable vocabulary with over 8000 words. A statistical language model is used to enhance recognition performance and restrict the search space. We report the results from a nine-user experiment on sentence recognition for person dependent and person independent setups on 3d-space handwriting data. For the person independent setup, a word error rate of 11% is achieved, for the person dependent setup 3% are achieved. We evaluate the spotting algorithm in a second experiment on a realistic dataset including everyday activities and achieve a sample based recall of 99% and a precision of 25%. We show that additional filtering in the recognition stage can detect up to 99% of the false positive segments.",
"title": ""
},
{
"docid": "a200c0d2d6a437eb3f9a019e4ed530eb",
"text": "With the rising of online social networks, influence has been a complex and subtle force to govern users’ behaviors and relationship formation. Therefore, how to precisely identify and measure influence has been a hot research direction. Differentiating from existing researches, we are devoted to combining the status of users in the network and the contents generated from these users to synthetically measure the influence diffusion. In this paper, we firstly proposed a directed user-content bipartite graph model. Next, an iterative algorithm is designed to compute two scores: the users’ Influence and boards’ Reach. Finally, we conduct extensive experiments on the dataset extracted from the online community Pinterest. The experimental results verify our proposed model can discover most influential users and popular broads effectively and can also be expected to benefit various applications, e.g., viral marketing, personal recommendation, information retrieval, etc. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c2dfa94555085b6ca3b752d719688613",
"text": "In this paper, we propose RNN-Capsule, a capsule model based on Recurrent Neural Network (RNN) for sentiment analysis. For a given problem, one capsule is built for each sentiment category e.g., ‘positive’ and ‘negative’. Each capsule has an attribute, a state, and three modules: representation module, probability module, and reconstruction module. The attribute of a capsule is the assigned sentiment category. Given an instance encoded in hidden vectors by a typical RNN, the representation module builds capsule representation by the attention mechanism. Based on capsule representation, the probability module computes the capsule’s state probability. A capsule’s state is active if its state probability is the largest among all capsules for the given instance, and inactive otherwise. On two benchmark datasets (i.e., Movie Review and Stanford Sentiment Treebank) and one proprietary dataset (i.e., Hospital Feedback), we show that RNN-Capsule achieves state-of-the-art performance on sentiment classification. More importantly, without using any linguistic knowledge, RNN-Capsule is capable of outputting words with sentiment tendencies reflecting capsules’ attributes. The words well reflect the domain specificity of the dataset. ACM Reference Format: Yequan Wang1 Aixin Sun2 Jialong Han3 Ying Liu4 Xiaoyan Zhu1. 2018. Sentiment Analysis by Capsules. InWWW 2018: The 2018 Web Conference, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3178876.3186015",
"title": ""
},
{
"docid": "753d840a62fc4f4b57f447afae07ba84",
"text": "Feature selection has been proven to be effective and efficient in preparing high-dimensional data for data mining and machine learning problems. Since real-world data is usually unlabeled, unsupervised feature selection has received increasing attention in recent years. Without label information, unsupervised feature selection needs alternative criteria to define feature relevance. Recently, data reconstruction error emerged as a new criterion for unsupervised feature selection, which defines feature relevance as the capability of features to approximate original data via a reconstruction function. Most existing algorithms in this family assume predefined, linear reconstruction functions. However, the reconstruction function should be data dependent and may not always be linear especially when the original data is high-dimensional. In this paper, we investigate how to learn the reconstruction function from the data automatically for unsupervised feature selection, and propose a novel reconstruction-based unsupervised feature selection framework REFS, which embeds the reconstruction function learning process into feature selection. Experiments on various types of realworld datasets demonstrate the effectiveness of the proposed framework REFS.",
"title": ""
},
{
"docid": "f582f73b7a7a252d6c17766a9c5f8dee",
"text": "The modern image search system requires semantic understanding of image, and a key yet under-addressed problem is to learn a good metric for measuring the similarity between images. While deep metric learning has yielded impressive performance gains by extracting high level abstractions from image data, a proper objective loss function becomes the central issue to boost the performance. In this paper, we propose a novel angular loss, which takes angle relationship into account, for learning better similarity metric. Whereas previous metric learning methods focus on optimizing the similarity (contrastive loss) or relative similarity (triplet loss) of image pairs, our proposed method aims at constraining the angle at the negative point of triplet triangles. Several favorable properties are observed when compared with conventional methods. First, scale invariance is introduced, improving the robustness of objective against feature variance. Second, a third-order geometric constraint is inherently imposed, capturing additional local structure of triplet triangles than contrastive loss or triplet loss. Third, better convergence has been demonstrated by experiments on three publicly available datasets.",
"title": ""
}
] |
scidocsrr
|
c93a476afc35a3cc919bc06906b0d5cc
|
Semantic Complex Event Processing for Social Media Monitoring-A Survey
|
[
{
"docid": "2c2be931e456761824920fcc9e4666ec",
"text": "The resource description framework (RDF) is a metadata model and language recommended by the W3C. This paper presents a framework to incorporate temporal reasoning into RDF, yielding temporal RDF graphs. We present a semantics for these kinds of graphs which includes the notion of temporal entailment and a syntax to incorporate this framework into standard RDF graphs, using the RDF vocabulary plus temporal labels. We give a characterization of temporal entailment in terms of RDF entailment and show that the former does not yield extra asymptotic complexity with respect to nontemporal RDF graphs. We also discuss temporal RDF graphs with anonymous timestamps, providing a theoretical framework for the study of temporal anonymity. Finally, we sketch a temporal query language for RDF, along with complexity results for query evaluation that show that the time dimension preserves the tractability of answers",
"title": ""
}
] |
[
{
"docid": "3b442860310e3617184f9ccc89e5cddc",
"text": "A pneumatic muscle (PM) system was studied to determine whether a three-element model could describe its dynamics. As far as the authors are aware, this model has not been used to describe the dynamics of PM. A new phenomenological model consists of a contractile (force-generating) element, spring element, and damping element in parallel. The PM system was investigated using an apparatus that allowed precise and accurate actuation pressure (P) control by a linear servovalve. Length change of the PM was measured by a linear potentiometer. Spring and damping element functions of P were determined by a static perturbation method at several constant P values. These results indicate that at constant P, PM behaves as a spring and damper in parallel. The contractile element function of P was determined by the response to a step input in P, using values of spring and damping elements from the perturbation study. The study showed that the resulting coefficient functions of the three-element model describe the dynamic response to the step input of P accurately, indicating that the static perturbation results can be applied to the dynamic case. This model is further validated by accurately predicting the contraction response to a triangular P waveform. All three elements have pressure-dependent coefficients for pressure P in the range 207 ⩽ P⩽ 621 kPa (30⩽ P⩽ 90 psi). Studies with a step decrease in P (relaxation of the PM) indicate that the damping element coefficient is smaller during relaxation than contraction.© 2003 Biomedical Engineering Society. PAC2003: 8719Rr, 8719Ff, 8710+e, 8768+z",
"title": ""
},
{
"docid": "06605d7a6538346f3bb0771fd3c92c12",
"text": "Measurements show that the IGBT is able to clamp the collector-emitter voltage to a certain value at short-circuit turn-off despite a very low gate turn-off resistor in combination with a high parasitic inductance is applied. The IGBT itself reduces the turn-off diC/dt by avalanche injection. However, device destructions during fast turn-off were observed which cannot be linked with an overvoltage failure mode. Measurements and semiconductor simulations of high-voltage IGBTs explain the self-clamping mechanism in detail. Possible failures which can be connected with filamentation processes are described. Options for improving the IGBT robustness during short-circuit turn-off are discussed.",
"title": ""
},
{
"docid": "2caf31154811099e68644c3e3e7e1792",
"text": "In this paper, we study the effective semi-supervised hashing method under the framework of regularized learning-based hashing. A nonlinear hash function is introduced to capture the underlying relationship among data points. Thus, the dimensionality of the matrix for computation is not only independent from the dimensionality of the original data space but also much smaller than the one using linear hash function. To effectively deal with the error accumulated during converting the real-value embeddings into the binary code after relaxation, we propose a semi-supervised nonlinear hashing algorithm using bootstrap sequential projection learning which effectively corrects the errors by taking into account of all the previous learned bits holistically without incurring the extra computational overhead. Experimental results on the six benchmark data sets demonstrate that the presented method outperforms the state-of-the-art hashing algorithms at a large margin.",
"title": ""
},
{
"docid": "f5be73d82f441b5f0d6011bbbec8b759",
"text": "Abnormal crowd behavior detection is an important research issue in computer vision. The traditional methods first extract the local spatio-temporal cuboid from video. Then the cuboid is described by optical flow or gradient features, etc. Unfortunately, because of the complex environmental conditions, such as severe occlusion, over-crowding, etc., the existing algorithms cannot be efficiently applied. In this paper, we derive the high-frequency and spatio-temporal (HFST) features to detect the abnormal crowd behaviors in videos. They are obtained by applying the wavelet transform to the plane in the cuboid which is parallel to the time direction. The high-frequency information characterize the dynamic properties of the cuboid. The HFST features are applied to the both global and local abnormal crowd behavior detection. For the global abnormal crowd behavior detection, Latent Dirichlet allocation is used to model the normal scenes. For the local abnormal crowd behavior detection, Multiple Hidden Markov Models, with an competitive mechanism, is employed to model the normal scenes. The comprehensive experiment results show that the speed of detection has been greatly improved using our approach. Moreover, a good accuracy has been achieved considering the false positive and false negative detection rates.",
"title": ""
},
{
"docid": "596ef2efc6d35ba2d507d630945ed3d1",
"text": "The paper presents a high performance system for stepper motor control in a microstepping mode, which was designed and performed with a L292 specialized integrated circuits, made by SGS-THOMSON, Microelectronics Company. The microstepping control system improves the positioning accuracy and eliminates low speed ripple and resonance effects in a stepper motor electrical drive.",
"title": ""
},
{
"docid": "0d00fb427296aff5aa31c88852635ee5",
"text": "OBJECTIVE\nTo examine the relation between milk and calcium intake in midlife and the risk of Parkinson disease (PD).\n\n\nMETHODS\nFindings are based on dietary intake observed from 1965 to 1968 in 7,504 men ages 45 to 68 in the Honolulu Heart Program. Men were followed for 30 years for incident PD.\n\n\nRESULTS\nIn the course of follow-up, 128 developed PD (7.1/10,000 person-years). Age-adjusted incidence of PD increased with milk intake from 6.9/10,000 person-years in men who consumed no milk to 14.9/10,000 person-years in men who consumed >16 oz/day (p = 0.017). After further adjustment for dietary and other factors, there was a 2.3-fold excess of PD (95% CI 1.3 to 4.1) in the highest intake group (>16 oz/day) vs those who consumed no milk. The effect of milk consumption on PD was also independent of the intake of calcium. Calcium from dairy and nondairy sources had no apparent relation with the risk of PD.\n\n\nCONCLUSIONS\nFindings suggest that milk intake is associated with an increased risk of Parkinson disease. Whether observed effects are mediated through nutrients other than calcium or through neurotoxic contaminants warrants further study.",
"title": ""
},
{
"docid": "fceb43462f77cf858ef9747c1c5f0728",
"text": "MapReduce has become a dominant parallel computing paradigm for big data, i.e., colossal datasets at the scale of tera-bytes or higher. Ideally, a MapReduce system should achieve a high degree of load balancing among the participating machines, and minimize the space usage, CPU and I/O time, and network transfer at each machine. Although these principles have guided the development of MapReduce algorithms, limited emphasis has been placed on enforcing serious constraints on the aforementioned metrics simultaneously. This paper presents the notion of minimal algorithm, that is, an algorithm that guarantees the best parallelization in multiple aspects at the same time, up to a small constant factor. We show the existence of elegant minimal algorithms for a set of fundamental database problems, and demonstrate their excellent performance with extensive experiments.",
"title": ""
},
{
"docid": "a25adeae7e1cdc9260c7d059f9fa5f82",
"text": "This work presents a generic computer vision system designed for exploiting trained deep Convolutional Neural Networks (CNN) as a generic feature extractor and mixing these features with more traditional hand-crafted features. Such a system is a single structure that can be used for synthesizing a large number of different image classification tasks. Three substructures are proposed for creating the generic computer vision system starting from handcrafted and non-handcrafter features: i) one that remaps the output layer of a trained CNN to classify a different problem using an SVM; ii) a second for exploiting the output of the penultimate layer of a trained CNN as a feature vector to feed an SVM; and iii) a third for merging the output of some deep layers, applying a dimensionality reduction method, and using these features as the input to an SVM. The application of feature transform techniques to reduce the dimensionality of feature sets coming from the deep layers represents one of the main contributions of this paper. Three approaches are used for the non-handcrafted features: deep",
"title": ""
},
{
"docid": "397d6f645f5607140cf7d16597b8ec83",
"text": "OBJECTIVES\nTo determine if differences between dyslexic and typical readers in their reading scores and verbal IQ are evident as early as first grade and whether the trajectory of these differences increases or decreases from childhood to adolescence.\n\n\nSTUDY DESIGN\nThe subjects were the 414 participants comprising the Connecticut Longitudinal Study, a sample survey cohort, assessed yearly from 1st to 12th grade on measures of reading and IQ. Statistical analysis employed longitudinal models based on growth curves and multiple groups.\n\n\nRESULTS\nAs early as first grade, compared with typical readers, dyslexic readers had lower reading scores and verbal IQ, and their trajectories over time never converge with those of typical readers. These data demonstrate that such differences are not so much a function of increasing disparities over time but instead because of differences already present in first grade between typical and dyslexic readers.\n\n\nCONCLUSIONS\nThe achievement gap between typical and dyslexic readers is evident as early as first grade, and this gap persists into adolescence. These findings provide strong evidence and impetus for early identification of and intervention for young children at risk for dyslexia. Implementing effective reading programs as early as kindergarten or even preschool offers the potential to close the achievement gap.",
"title": ""
},
{
"docid": "560cadfecdf5207851d333b4a122a06d",
"text": "Over the past years, state-of-the-art information extraction (IE) systems such as NELL [5] and ReVerb [9] have achieved impressive results by producing very large knowledge resources at web scale with minimal supervision. However, these resources lack the schema information, exhibit a high degree of ambiguity, and are difficult even for humans to interpret. Working with such resources becomes easier if there is a structured information base to which the resources can be linked. In this paper, we introduce the integration of open information extraction projects with Wikipedia-based IE projects that maintain a logical schema, as an important challenge for the NLP, semantic web, and machine learning communities. We describe the problem, present a gold-standard benchmark, and take the first steps towards a data-driven solution to the problem. This is especially promising, since NELL and ReVerb typically achieve a very large coverage, but still still lack a fullfledged clean ontological structure which, on the other hand, could be provided by large-scale ontologies like DBpedia [2] or YAGO [13].",
"title": ""
},
{
"docid": "df808fcf51612bf81e8fd328d298291d",
"text": "Chemomechanical preparation of the root canal includes both mechanical instrumentation and antibacterial irrigation, and is principally directed toward the elimination of micro-organisms from the root canal system. A variety of instruments and techniques have been developed and described for this critical stage of root canal treatment. Since their introduction in 1988, nickel-titanium (NiTi) rotary instruments have become a mainstay in clinical endodontics because of their exceptional ability to shape root canals with potentially fewer procedural complications. Safe clinical usage of NiTi instruments requires an understanding of basic metallurgy of the alloy including fracture mechanisms and their correlation to canal anatomy. This paper reviews the biologic principles of preparing root canals with an emphasis on correct use of current rotary NiTi instrumentation techniques and systems. The role and properties of contemporary root canal irrigants is also discussed.",
"title": ""
},
{
"docid": "1e4cb8960a99ad69e54e8c44fb21e855",
"text": "Over the last decade, the endocannabinoid system has emerged as a pivotal mediator of acute and chronic liver injury, with the description of the role of CB1 and CB2 receptors and their endogenous lipidic ligands in various aspects of liver pathophysiology. A large number of studies have demonstrated that CB1 receptor antagonists represent an important therapeutic target, owing to beneficial effects on lipid metabolism and in light of its antifibrogenic properties. Unfortunately, the brain-penetrant CB1 antagonist rimonabant, initially approved for the management of overweight and related cardiometabolic risks, was withdrawn because of an alarming rate of mood adverse effects. However, the efficacy of peripherally-restricted CB1 antagonists with limited brain penetrance has now been validated in preclinical models of NAFLD, and beneficial effects on fibrosis and its complications are anticipated. CB2 receptor is currently considered as a promising anti-inflammatory and antifibrogenic target, although clinical development of CB2 agonists is still awaited. In this review, we highlight the latest advances on the impact of the endocannabinoid system on the key steps of chronic liver disease progression and discuss the therapeutic potential of molecules targeting cannabinoid receptors.",
"title": ""
},
{
"docid": "0a20a3c9e4da2b87a6fdc4e4a66fee2d",
"text": "In this paper, we propose a probabilistic survival model derived from the survival analysis theory for measuring aspect novelty. The retrieved documents' query-relevance and novelty are combined at the aspect level for re-ranking. Experiments conducted on the TREC 2006 and 2007 Genomics collections demonstrate the effectiveness of the proposed approach in promoting ranking diversity for biomedical information retrieval.",
"title": ""
},
{
"docid": "107bb53e3ceda3ee29fc348febe87f11",
"text": "The objective here is to develop a flat surface area measuring system which is used to calculate the surface area of any irregular sheet. The irregular leather sheet is used in this work. The system is self protected by user name and password set through software for security purpose. Only authorize user can enter into the system by entering the valid pin code. After entering into the system, the user can measure the area of any irregular sheet, monitor and control the system. The heart of the system is Programmable Logic Controller (Master K80S) which controls the complete working of the system. The controlling instructions for the system are given through the designed Human to Machine Interface (HMI). For communication purpose the GSM modem is also interfaced with the Programmable Logic Controller (PLC). The remote user can also monitor the current status of the devices by sending SMS message to the GSM modem.",
"title": ""
},
{
"docid": "ac5ba63b30562827a27607fd2b91f5d3",
"text": "Understanding unstructured texts is an essential skill for human beings as it enables knowledge acquisition. Although understanding unstructured texts is easy for we human beings with good education, it is a great challenge for machines. Recently, with the rapid development of artificial intelligence techniques, researchers put efforts to teach machines to understand texts and justify the educated machines by letting them solve the questions upon the given unstructured texts, inspired by the reading comprehension test as we humans do. However, feature effectiveness with respect to different questions significantly hinders the performance of answer selection, because different questions may focus on various aspects of the given text and answer candidates. To solve this problem, we propose a question-oriented feature attention (QFA) mechanism, which learns to weight different engineering features according to the given question, so that important features with respect to the specific question is emphasized accordingly. Experiments on MCTest dataset have well-validated the effectiveness of the proposed method. Additionally, the proposed QFA is applicable to various IR tasks, such as question answering and answer selection. We have verified the applicability on a crawled community-based question-answering dataset.",
"title": ""
},
{
"docid": "25e04f534f2a1d0d3d7e20c3c17ef387",
"text": "Recent techniques enable folding planer sheets to create complex 3D shapes, however, even a small 3D shape can have large 2D unfoldings. The huge dimension of the flattened structure makes fabrication difficult. In this paper, we propose a novel approach for folding a single thick strip into two target shapes: folded 3D shape and stacked shape. The folded shape is an approximation of a complex 3D shape provided by the user. The provided 3D shape may be too large to be fabricated (e.g. 3D-printed) due to limited workspace. Meanwhile, the stacked shape could be the compactest form of the 3D shape which makes its fabrication possible. The compactness of the stacked state also makes packing and transportation easier. The key technical contribution of this work is an efficient method for finding strips for quadrilateral meshes without refinement. We demonstrate our results using both simulation and fabricated models.",
"title": ""
},
{
"docid": "8c086dec1e59a2f0b81d6ce74e92eae7",
"text": "A necessary attribute of a mobile robot planning algorithm is the ability to accurately predict the consequences of robot actions to make informed decisions about where and how to drive. It is also important that such methods are efficient, as onboard computational resources are typically limited and fast planning rates are often required. In this article, we present several practical mobile robot motion planning algorithms for local and global search, developed with a common underlying trajectory generation framework for use in model-predictive control. These techniques all center on the idea of generating informed, feasible graphs at scales and resolutions that respect computational and temporal constraints of the application. Connectivity in these graphs is provided by a trajectory generator that searches in a parameterized space of robot inputs subject to an arbitrary predictive motion model. Local search graphs connect the currently observed state-to-states at or near the planning or perception horizon. Global search graphs repeatedly expand a precomputed trajectory library in a uniformly distributed state lattice to form a recombinant search space that respects differential constraints. In this article, we discuss the trajectory generation algorithm, methods for online or offline calibration of predictive motion models, sampling strategies for local search graphs that exploit global guidance and environmental information for real-time obstacle avoidance and navigation, and methods for efficient design of global search graphs with attention to optimality, feasibility, and computational complexity of heuristic search. The model-invariant nature of our approach to local and global motions planning has enabled a rapid and successful application of these techniques to a variety of platforms. Throughout the article, we also review experiments performed on planetary rovers, field robots, mobile manipulators, and autonomous automobiles and discuss future directions of the article.",
"title": ""
},
{
"docid": "9accdf3edad1e9714282e58758d3c382",
"text": "We present initial results from and quantitative analysis of two leading open source hypervisors, Xen and KVM. This study focuses on the overall performance, performance isolation, and scalability of virtual machines running on these hypervisors. Our comparison was carried out using a benchmark suite that we developed to make the results easily repeatable. Our goals are to understand how the different architectural decisions taken by different hypervisor developers affect the resulting hypervisors, to help hypervisor developers realize areas of improvement for their hypervisors, and to help users make informed decisions about their choice of hypervisor.",
"title": ""
},
{
"docid": "310b8159894bc88b74a907c924277de6",
"text": "We present a set of clustering algorithms that identify cluster boundaries by searching for a hyperplanar gap in unlabeled data sets. It turns out that the Normalized Cuts algorithm of Shi and Malik [1], originally presented as a graph-theoretic algorithm, can be interpreted as such an algorithm. Viewing Normalized Cuts under this light reveals that it pays more attention to points away from the center of the data set than those near the center of the data set. As a result, it can sometimes split long clusters and display sensitivity to outliers. We derive a variant of Normalized Cuts that assigns uniform weight to all points, eliminating the sensitivity to outliers.",
"title": ""
},
{
"docid": "acbb1a68d9e0e1768fff8acc8ae42b32",
"text": "The rapid increase in the number of Android malware poses great challenges to anti-malware systems, because the sheer number of malware samples overwhelms malware analysis systems. The classification of malware samples into families, such that the common features shared by malware samples in the same family can be exploited in malware detection and inspection, is a promising approach for accelerating malware analysis. Furthermore, the selection of representative malware samples in each family can drastically decrease the number of malware to be analyzed. However, the existing classification solutions are limited because of the following reasons. First, the legitimate part of the malware may misguide the classification algorithms because the majority of Android malware are constructed by inserting malicious components into popular apps. Second, the polymorphic variants of Android malware can evade detection by employing transformation attacks. In this paper, we propose a novel approach that constructs frequent subgraphs (fregraphs) to represent the common behaviors of malware samples that belong to the same family. Moreover, we propose and develop FalDroid, a novel system that automatically classifies Android malware and selects representative malware samples in accordance with fregraphs. We apply it to 8407 malware samples from 36 families. Experimental results show that FalDroid can correctly classify 94.2% of malware samples into their families using approximately 4.6 sec per app. FalDroid can also dramatically reduce the cost of malware investigation by selecting only 8.5% to 22% representative samples that exhibit the most common malicious behavior among all samples.",
"title": ""
}
] |
scidocsrr
|
ef672d1005138956e24b42c5fa2c62fe
|
A Survey on Internet of Things: Security and Privacy Issues
|
[
{
"docid": "3c778c71f621b2c887dc81e7a919058e",
"text": "We have witnessed the Fixed Internet emerging with virtually every computer being connected today; we are currently witnessing the emergence of the Mobile Internet with the exponential explosion of smart phones, tablets and net-books. However, both will be dwarfed by the anticipated emergence of the Internet of Things (IoT), in which everyday objects are able to connect to the Internet, tweet or be queried. Whilst the impact onto economies and societies around the world is undisputed, the technologies facilitating such a ubiquitous connectivity have struggled so far and only recently commenced to take shape. To this end, this paper introduces in a timely manner and for the first time the wireless communications stack the industry believes to meet the important criteria of power-efficiency, reliability and Internet connectivity. Industrial applications have been the early adopters of this stack, which has become the de-facto standard, thereby bootstrapping early IoT developments with already thousands of wireless nodes deployed. Corroborated throughout this paper and by emerging industry alliances, we believe that a standardized approach, using latest developments in the IEEE 802.15.4 and IETF working groups, is the only way forward. We introduce and relate key embodiments of the power-efficient IEEE 802.15.4-2006 PHY layer, the power-saving and reliable IEEE 802.15.4e MAC layer, the IETF 6LoWPAN adaptation layer enabling universal Internet connectivity, the IETF ROLL routing protocol enabling availability, and finally the IETF CoAP enabling seamless transport and support of Internet applications. The protocol stack proposed in the present work converges towards the standardized notations of the ISO/OSI and TCP/IP stacks. What thus seemed impossible some years back, i.e., building a clearly defined, standards-compliant and Internet-compliant stack given the extreme restrictions of IoT networks, is commencing to become reality.",
"title": ""
}
] |
[
{
"docid": "795a4d9f2dc10563dfee28c3b3cd0f08",
"text": "A wide-band probe fed patch antenna with low cross polarization and symmetrical broadside radiation pattern is proposed and studied. By employing a novel meandering probe feed and locating a patch about 0.1/spl lambda//sub 0/ above a ground plane, a patch antenna with 30% impedance bandwidth (SWR<2) and 9 dBi gain is designed. The far field radiation pattern of the antenna is stable across the operating bandwidth. Parametric studies and design guidelines of the proposed feeding structure are provided.",
"title": ""
},
{
"docid": "ef9b5b0fbfd71c8d939bfe947c60292d",
"text": "OBJECTIVE\nSome prolonged and turbulent grief reactions include symptoms that differ from the DSM-IV criteria for major depressive disorder. The authors investigated a new diagnosis that would include these symptoms.\n\n\nMETHOD\nThey developed observer-based definitions of 30 symptoms noted clinically in previous longitudinal interviews of bereaved persons and then designed a plan to investigate whether any combination of these would serve as criteria for a possible new diagnosis of complicated grief disorder. Using a structured diagnostic interview, they assessed 70 subjects whose spouses had died. Latent class model analyses and signal detection procedures were used to calibrate the data against global clinical ratings and self-report measures of grief-specific distress.\n\n\nRESULTS\nComplicated grief disorder was found to be characterized by a smaller set of the assessed symptoms. Subjects elected by an algorithm for these symptoms patterns did not significantly overlap with subjects who received a diagnosis of major depressive disorder.\n\n\nCONCLUSIONS\nA new diagnosis of complicated grief disorder may be indicated. Its criteria would include the current experience (more than a year after a loss) of intense intrusive thoughts, pangs of severe emotion, distressing yearnings, feeling excessively alone and empty, excessively avoiding tasks reminiscent of the deceased, unusual sleep disturbances, and maladaptive levels of loss of interest in personal activities.",
"title": ""
},
{
"docid": "67825e84cb2e636deead618a0868fa4a",
"text": "Image compression is used specially for the compression of images where tolerable degradation is required. With the wide use of computers and consequently need for large scale storage and transmission of data, efficient ways of storing of data have become necessary. With the growth of technology and entrance into the Digital Age, the world has found itself amid a vast amount of information. Dealing with such enormous information can often present difficulties. Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time required for images to be sent over the Internet or downloaded from Web pages.JPEG and JPEG 2000 are two important techniques used for image compression. In this paper, we discuss about lossy image compression techniques and reviews of different basic lossy image compression methods are considered. The methods such as JPEG and JPEG2000 are considered. A conclusion is derived on the basis of these methods Keywords— Data compression, Lossy image compression, JPEG, JPEG2000, DCT, DWT",
"title": ""
},
{
"docid": "aeba4012971d339a9a953a7b86f57eb8",
"text": "Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.",
"title": ""
},
{
"docid": "6111427d19826acdd38c80cb7f421405",
"text": "We introduce a novel method for representation learning that uses an artificial supervision signal based on counting visual primitives. This supervision signal is obtained from an equivariance relation, which does not require any manual annotation. We relate transformations of images to transformations of the representations. More specifically, we look for the representation that satisfies such relation rather than the transformations that match a given representation. In this paper, we use two image transformations in the context of counting: scaling and tiling. The first transformation exploits the fact that the number of visual primitives should be invariant to scale. The second transformation allows us to equate the total number of visual primitives in each tile to that in the whole image. These two transformations are combined in one constraint and used to train a neural network with a contrastive loss. The proposed task produces representations that perform on par or exceed the state of the art in transfer learning benchmarks.",
"title": ""
},
{
"docid": "db3bb02dde6c818b173cf12c9c7440b7",
"text": "PURPOSE\nThe authors conducted a systematic review of the published literature on social media use in medical education to answer two questions: (1) How have interventions using social media tools affected outcomes of satisfaction, knowledge, attitudes, and skills for physicians and physicians-in-training? and (2) What challenges and opportunities specific to social media have educators encountered in implementing these interventions?\n\n\nMETHOD\nThe authors searched the MEDLINE, CINAHL, ERIC, Embase, PsycINFO, ProQuest, Cochrane Library, Web of Science, and Scopus databases (from the start of each through September 12, 2011) using keywords related to social media and medical education. Two authors independently reviewed the search results to select peer-reviewed, English-language articles discussing social media use in educational interventions at any level of physician training. They assessed study quality using the Medical Education Research Study Quality Instrument.\n\n\nRESULTS\nFourteen studies met inclusion criteria. Interventions using social media tools were associated with improved knowledge (e.g., exam scores), attitudes (e.g., empathy), and skills (e.g., reflective writing). The most commonly reported opportunities related to incorporating social media tools were promoting learner engagement (71% of studies), feedback (57%), and collaboration and professional development (both 36%). The most commonly cited challenges were technical issues (43%), variable learner participation (43%), and privacy/security concerns (29%). Studies were generally of low to moderate quality; there was only one randomized controlled trial.\n\n\nCONCLUSIONS\nSocial media use in medical education is an emerging field of scholarship that merits further investigation. Educators face challenges in adapting new technologies, but they also have opportunities for innovation.",
"title": ""
},
{
"docid": "fc522482dbbcdeaa06e3af9a2f82b377",
"text": "Background/Objectives:As rates of obesity have increased throughout much of the world, so too have bias and prejudice toward people with higher body weight (that is, weight bias). Despite considerable evidence of weight bias in the United States, little work has examined its extent and antecedents across different nations. The present study conducted a multinational examination of weight bias in four Western countries with comparable prevalence rates of adult overweight and obesity.Methods:Using comprehensive self-report measures with 2866 individuals in Canada, the United States, Iceland and Australia, the authors assessed (1) levels of explicit weight bias (using the Fat Phobia Scale and the Universal Measure of Bias) and multiple sociodemographic predictors (for example, sex, age, race/ethnicity and educational attainment) of weight-biased attitudes and (2) the extent to which weight-related variables, including participants’ own body weight, personal experiences with weight bias and causal attributions of obesity, play a role in expressions of weight bias in different countries.Results:The extent of weight bias was consistent across countries, and in each nation attributions of behavioral causes of obesity predicted stronger weight bias, as did beliefs that obesity is attributable to lack of willpower and personal responsibility. In addition, across all countries the magnitude of weight bias was stronger among men and among individuals without family or friends who had experienced this form of bias.Conclusions:These findings offer new insights and important implications regarding sociocultural factors that may fuel weight bias across different cultural contexts, and for targets of stigma-reduction efforts in different countries.",
"title": ""
},
{
"docid": "e473e6b4c5d825582f3a5afe00a005de",
"text": "This paper explores and quantifies garbage collection behavior for three whole heap collectors and generational counterparts: copying semi-space, mark-sweep, and reference counting, the canonical algorithms from which essentially all other collection algorithms are derived. Efficient implementations in MMTk, a Java memory management toolkit, in IBM's Jikes RVM share all common mechanisms to provide a clean experimental platform. Instrumentation separates collector and program behavior, and performance counters measure timing and memory behavior on three architectures.Our experimental design reveals key algorithmic features and how they match program characteristics to explain the direct and indirect costs of garbage collection as a function of heap size on the SPEC JVM benchmarks. For example, we find that the contiguous allocation of copying collectors attains significant locality benefits over free-list allocators. The reduced collection costs of the generational algorithms together with the locality benefit of contiguous allocation motivates a copying nursery for newly allocated objects. These benefits dominate the overheads of generational collectors compared with non-generational and no collection, disputing the myth that \"no garbage collection is good garbage collection.\" Performance is less sensitive to the mature space collection algorithm in our benchmarks. However the locality and pointer mutation characteristics for a given program occasionally prefer copying or mark-sweep. This study is unique in its breadth of garbage collection algorithms and its depth of analysis.",
"title": ""
},
{
"docid": "ac0119255806976213d61029247b14f1",
"text": "Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. We conducted a controlled experiment to test the effects of display and scenario properties on training effectiveness for a visual scanning task in a simulated urban environment. The experiment varied the levels of field of view and visual complexity during a training phase and then evaluated scanning performance with the simulator's highest levels of fidelity and scene complexity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual complexity significantly affected target detection during training; higher field of view led to better performance and higher visual complexity worsened performance. Additionally, adherence to the prescribed visual scanning strategy during assessment was best when the level of visual complexity during training matched that of the assessment conditions, providing evidence that similar visual complexity was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training-evaluation in a more realistic setting may be necessary.",
"title": ""
},
{
"docid": "f2640838cfc3938d1a717229e77b3afc",
"text": "Defenders of enterprise networks have a critical need to quickly identify the root causes of malware and data leakage. Increasingly, USB storage devices are the media of choice for data exfiltration, malware propagation, and even cyber-warfare. We observe that a critical aspect of explaining and preventing such attacks is understanding the provenance of data (i.e., the lineage of data from its creation to current state) on USB devices as a means of ensuring their safe usage. Unfortunately, provenance tracking is not offered by even sophisticated modern devices. This work presents ProvUSB, an architecture for fine-grained provenance collection and tracking on smart USB devices. ProvUSB maintains data provenance by recording reads and writes at the block layer and reliably identifying hosts editing those blocks through attestation over the USB channel. Our evaluation finds that ProvUSB imposes a one-time 850 ms overhead during USB enumeration, but approaches nearly-bare-metal runtime performance (90% of throughput) on larger files during normal execution, and less than 0.1% storage overhead for provenance in real-world workloads. ProvUSB thus provides essential new techniques in the defense of computer systems and USB storage devices.",
"title": ""
},
{
"docid": "edcdae3f9da761cedd52273ccd850520",
"text": "Extracting information from Web pages requires the ability to work at Web scale in terms of the number of documents, the number of domains and domain complexity. Recent approaches have used existing knowledge bases to learn to extract information with promising results. In this paper we propose the use of distant supervision for relation extraction from the Web. Distant supervision is a method which uses background information from the Linking Open Data cloud to automatically label sentences with relations to create training data for relation classifiers. Although the method is promising, existing approaches are still not suitable for Web extraction as they suffer from three main issues: data sparsity, noise and lexical ambiguity. Our approach reduces the impact of data sparsity by making entity recognition tools more robust across domains, as well as extracting relations across sentence boundaries. We reduce the noise caused by lexical ambiguity by employing statistical methods to strategically select training data. Our experiments show that using a more robust entity recognition approach and expanding the scope of relation extraction results in about 8 times the number of extractions, and that strategically selecting training data can result in an error reduction of about 30%.",
"title": ""
},
{
"docid": "e6300989e5925d38d09446b3e43092e5",
"text": "Cloud computing provides resources as services in pay-as-you-go mode to customers by using virtualization technology. As virtual machine (VM) is hosted on physical server, great energy is consumed by maintaining the servers in data center. More physical servers means more energy consumption and more money cost. Therefore, the VM placement (VMP) problem is significant in cloud computing. This paper proposes an approach based on ant colony optimization (ACO) to solve the VMP problem, named as ACO-VMP, so as to effectively use the physical resources and to reduce the number of running physical servers. The number of physical servers is the same as the number of the VMs at the beginning. Then the ACO approach tries to reduce the physical server one by one. We evaluate the performance of the proposed ACO-VMP approach in solving VMP with the number of VMs being up to 600. Experimental results compared with the ones obtained by the first-fit decreasing (FFD) algorithm show that ACO-VMP can solve VMP more efficiently to reduce the number of physical servers significantly, especially when the number of VMs is large.",
"title": ""
},
{
"docid": "a58cbbff744568ae7abd2873d04d48e9",
"text": "Training real-world Deep Neural Networks (DNNs) can take an eon (i.e., weeks or months) without leveraging distributed systems. Even distributed training takes inordinate time, of which a large fraction is spent in communicating weights and gradients over the network. State-of-the-art distributed training algorithms use a hierarchy of worker-aggregator nodes. The aggregators repeatedly receive gradient updates from their allocated group of the workers, and send back the updated weights. This paper sets out to reduce this significant communication cost by embedding data compression accelerators in the Network Interface Cards (NICs). To maximize the benefits of in-network acceleration, the proposed solution, named INCEPTIONN (In-Network Computing to Exchange and Process Training Information Of Neural Networks), uniquely combines hardware and algorithmic innovations by exploiting the following three observations. (1) Gradients are significantly more tolerant to precision loss than weights and as such lend themselves better to aggressive compression without the need for the complex mechanisms to avert any loss. (2) The existing training algorithms only communicate gradients in one leg of the communication, which reduces the opportunities for in-network acceleration of compression. (3) The aggregators can become a bottleneck with compression as they need to compress/decompress multiple streams from their allocated worker group. To this end, we first propose a lightweight and hardware-friendly lossy-compression algorithm for floating-point gradients, which exploits their unique value characteristics. This compression not only enables significantly reducing the gradient communication with practically no loss of accuracy, but also comes with low complexity for direct implementation as a hardware block in the NIC. To maximize the opportunities for compression and avoid the bottleneck at aggregators, we also propose an aggregator-free training algorithm that exchanges gradients in both legs of communication in the group, while the workers collectively perform the aggregation in a distributed manner. Without changing the mathematics of training, this algorithm leverages the associative property of the aggregation operator and enables our in-network accelerators to (1) apply compression for all communications, and (2) prevent the aggregator nodes from becoming bottlenecks. Our experiments demonstrate that INCEPTIONN reduces the communication time by 70.9~80.7% and offers 2.2~3.1x speedup over the conventional training system, while achieving the same level of accuracy.",
"title": ""
},
{
"docid": "df808fcf51612bf81e8fd328d298291d",
"text": "Chemomechanical preparation of the root canal includes both mechanical instrumentation and antibacterial irrigation, and is principally directed toward the elimination of micro-organisms from the root canal system. A variety of instruments and techniques have been developed and described for this critical stage of root canal treatment. Since their introduction in 1988, nickel-titanium (NiTi) rotary instruments have become a mainstay in clinical endodontics because of their exceptional ability to shape root canals with potentially fewer procedural complications. Safe clinical usage of NiTi instruments requires an understanding of basic metallurgy of the alloy including fracture mechanisms and their correlation to canal anatomy. This paper reviews the biologic principles of preparing root canals with an emphasis on correct use of current rotary NiTi instrumentation techniques and systems. The role and properties of contemporary root canal irrigants is also discussed.",
"title": ""
},
{
"docid": "f48f55963cf3beb43170df96a463feba",
"text": "This article proposes and implements a class of chaotic motors for electric compaction. The key is to develop a design approach for the permanent magnets PMs of doubly salient PM DSPM motors in such a way that chaotic motion can be naturally produced. The bifurcation diagram is employed to derive the threshold of chaoization in terms of PM flux, while the corresponding phase-plane trajectories are used to characterize the chaotic motion. A practical three-phase 12/8-pole DSPM motor is used for exemplification. The proposed chaotic motor is critically assessed for application to a vibratory soil compactor, which is proven to offer better compaction performance than its counterparts. Both computer simulation and experimental results are given to illustrate the proposed chaotic motor. © 2006 American Institute of Physics. DOI: 10.1063/1.2165783",
"title": ""
},
{
"docid": "b82b46fc0d886e3e87b757a6ca14d4bb",
"text": "Objective: To study the efficacy and safety of an indigenously designed low cost nasal bubble continuous positive airway pressure (NB-CPAP) in neonates admitted with respiratory distress. Study Design: A descriptive study. Place and Duration of Study: Combined Military Hospital (CMH), Peshawar from Jan 2014 to May 2014. Material and Methods: Fifty neonates who developed respiratory distress within 6 hours of life were placed on an indigenous NB-CPAP device (costing 220 PKR) and evaluated for gestational age, weight, indications, duration on NB-CPAP, pre-defined outcomes and complications. Results: A total of 50 consecutive patients with respiratory distress were placed on NB-CPAP. Male to Female ratio was 2.3:1. Mean weight was 2365.85 ± 704 grams and mean gestational age was 35.41 ± 2.9 weeks. Indications for applying NB-CPAP were transient tachypnea of the newborn (TTN, 52%) and respiratory distress syndrome (RDS, 44%). Most common complications were abdominal distension (15.6%) and pulmonary hemorrhage (6%). Out of 50 infants placed on NB-CPAP, 35 (70%) were managed on NB-CPAP alone while 15 (30%) needed mechanical ventilation following a trial of NB-CPAP. Conclusion: In 70% of babies invasive mechanical ventilation was avoided using NB-CPAP.",
"title": ""
},
{
"docid": "e9b8787e5bb1f099e914db890e04dc23",
"text": "This paper presents the design of a compact UHF-RFID tag antenna with several miniaturization techniques including meandering technique and capacitive tip-loading structure. Additionally, T-matching technique is also utilized in the antenna design for impedance matching. This antenna was designed on Rogers 5880 printed circuit board (PCB) with the dimension of 43 × 26 × 0.787 mm3 and relative permittivity, □r of 2.2. The performance of the proposed antenna was analyzed in terms of matched impedance, antenna gain, return loss and tag reading range through the simulation in CST Microwave Studio software. As a result, the proposed antenna obtained a gain of 0.97dB and a maximum reading range of 5.15 m at 921 MHz.",
"title": ""
},
{
"docid": "d8b3eb944d373741747eb840a18a490b",
"text": "Natural scenes contain large amounts of geometry, such as hundreds of thousands or even millions of tree leaves and grass blades. Subtle lighting effects present in such environments usually include a significant amount of occlusion effects and lighting variation. These effects are important for realistic renderings of such natural environments; however, plausible lighting and full global illumination computation come at prohibitive costs especially for interactive viewing. As a solution to this problem, we present a simple approximation to integrated visibility over a hemisphere (ambient occlusion) that allows interactive rendering of complex and dynamic scenes. Based on a set of simple assumptions, we show that our method allows the rendering of plausible variation in lighting at modest additional computation and little or no precomputation, for complex and dynamic scenes.",
"title": ""
},
{
"docid": "96f42b3a653964cffa15d9b3bebf0086",
"text": "The brain processes information through many layers of neurons. This deep architecture is representationally powerful1,2,3,4, but it complicates learning by making it hard to identify the responsible neurons when a mistake is made1,5. In machine learning, the backpropagation algorithm1 assigns blame to a neuron by computing exactly how it contributed to an error. To do this, it multiplies error signals by matrices consisting of all the synaptic weights on the neuron’s axon and farther downstream. This operation requires a precisely choreographed transport of synaptic weight information, which is thought to be impossible in the brain1,6,7,8,9,10,11,12,13,14. Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by random synaptic weights. We show that a network can learn to extract useful information from signals sent through these random feedback connections. In essence, the network learns to learn. We demonstrate that this new mechanism performs as quickly and accurately as backpropagation on a variety of problems and describe the principles which underlie its function. Our demonstration provides a plausible basis for how a neuron can be adapted using error signals generated at distal locations in the brain, and thus dispels long-held assumptions about the algorithmic constraints on learning in neural circuits. 1 ar X iv :1 41 1. 02 47 v1 [ qbi o. N C ] 2 N ov 2 01 4 Networks in the brain compute via many layers of interconnected neurons15,16. To work properly neurons must adjust their synapses so that the network’s outputs are appropriate for its tasks. A longstanding mystery is how upstream synapses (e.g. the synapse between α and β in Fig. 1a) are adjusted on the basis of downstream errors (e.g. e in Fig. 1a). In artificial intelligence this problem is solved by an algorithm called backpropagation of error1. Backprop works well in real-world applications17,18,19, and networks trained with it can account for cell response properties in some areas of cortex20,21. But it is biologically implausible because it requires that neurons send each other precise information about large numbers of synaptic weights — i.e. it needs weight transport1,6,7,8,12,14,22 (Fig. 1a, b). Specifically, backprop multiplies error signals e by the matrix W T , the transpose of the forward synaptic connections, W (Fig. 1b). This implies that feedback is computed using knowledge of all the synaptic weights W in the forward path. For this reason, current theories of biological learning have turned to simpler schemes such as reinforcement learning23, and “shallow” mechanisms which use errors to adjust only the final layer of a network4,11. But reinforcement learning, which delivers the same reward signal to each neuron, is slow and scales poorly with network size5,13,24. And shallow mechanisms waste the representational power of deep networks3,4,25. Here we describe a new deep-learning algorithm that is as fast and accurate as backprop, but much simpler, avoiding all transport of synaptic weight information. This makes it a mechanism the brain could easily exploit. It is based on three insights: (i) The feedback weights need not be exactly W T . In fact, any matrix B will suffice, so long as on average,",
"title": ""
},
{
"docid": "aa0d6d4fb36c2a1d18dac0930e89179e",
"text": "The interest in biomass is increasing in the light of the growing concern about global warming and the resulting climate change. The emission of the greenhouse gas CO2 can be reduced when 'green' biomass-derived transportation fuels are used. One of the most promising routes to produce green fuels is the combination of biomass gasification (BG) and Fischer-Tropsch (FT) synthesis, wherein biomass is gasified and after cleaning the biosyngas is used for FT synthesis to produce long-chain hydrocarbons that are converted into ‘green diesel’. To demonstrate this route, a small FT unit based on Shell technology was operated for in total 650 hours on biosyngas produced by gasification of willow. In the investigated system, tars were removed in a high-temperature tar cracker and other impurities, like NH3 and H2S were removed via wet scrubbing followed by active-carbon and ZnO filters. The experimental work and the supporting system analysis afforded important new insights on the desired gas cleaning and the optimal line-up for biomass gasification processes with a maximised conversion to FT liquids. Two approaches were considered: a front-end approach with reference to the (small) scale of existing CFB gasifiers (1-100 M Wth) and a back-end approach with reference to the desired (large) scale for FT synthesis (500-1000 MWth). In general, the sum of H2 and CO in the raw biosyngas is an important parameter, whereas the H2/CO ratio is less relevant. BTX (i.e . benzene, toluene, and xylenes) are the design guideline for the gas cleaning and with this the tar issue is de-facto solved (as tars are easier to remove than BTX). To achieve high yields of FT products the presence of a tar cracker in the system is required. Oxygen gasification allows a further increase in yield of FT products as a N2-free gas is required for off-gas recycling. The scale of the BG-FT installation determines the line-up of the gas cleaning and the integrated process. It is expected that the future of BG-FT systems will be large plants with pressurised oxygen blown gasifiers and maximised Fischer-Tropsch synthesis.",
"title": ""
}
] |
scidocsrr
|