query_id (string, length 32) | query (string, length 6-5.38k) | positive_passages (list, length 1-17) | negative_passages (list, length 9-100) | subset (string, 7 classes) |
---|---|---|---|---|
e069511af849975d57bb7a32fefdd9c1
|
A new interpretation of nonlinear energy operator and its efficacy in spike detection
|
[
{
"docid": "94c7fde13a5792a89b7575ac41827f1c",
"text": "The noise sensitivities of nine different QRS detection algorithms were measured for a normal, single-channel, lead-II, synthesized ECG corrupted with five different types of synthesized noise: electromyographic interference, 60-Hz power line interference, baseline drift due to respiration, abrupt baseline shift, and a composite noise constructed from all of the other noise types. The percentage of QRS complexes detected, the number of false positives, and the detection delay were measured. None of the algorithms were able to detect all QRS complexes without any false positives for all of the noise types at the highest noise level. Algorithms based on amplitude and slope had the highest performance for EMG-corrupted ECG. An algorithm using a digital filter had the best performance for the composite-noise-corrupted data.<<ETX>>",
"title": ""
},
{
"docid": "fd18cb0cc94b336ff32b29e0f27363dc",
"text": "We have developed a real-time algorithm for detection of the QRS complexes of ECG signals. It reliably recognizes QRS complexes based upon digital analyses of slope, amplitude, and width. A special digital bandpass filter reduces false detections caused by the various types of interference present in ECG signals. This filtering permits use of low thresholds, thereby increasing detection sensitivity. The algorithm automatically adjusts thresholds and parameters periodically to adapt to such ECG changes as QRS morphology and heart rate. For the standard 24 h MIT/BIH arrhythmia database, this algorithm correctly detects 99.3 percent of the QRS complexes.",
"title": ""
}
] |
[
{
"docid": "5441d081eabb4ad3d96775183e603b65",
"text": "We give an introduction to computation and logic tailored for algebraists, and use this as a springboard to discuss geometric models of computation and the role of cut-elimination in these models, following Girard's geometry of interaction program. We discuss how to represent programs in the λ-calculus and proofs in linear logic as linear maps between infinite-dimensional vector spaces. The interesting part of this vector space semantics is based on the cofree cocommutative coalgebra of Sweedler [71] and the recent explicit computations of liftings in [62].",
"title": ""
},
{
"docid": "d2abcdcdb6650c30838507ec1521b263",
"text": "Deep neural networks (DNNs) have achieved great success in solving a variety of machine learning (ML) problems, especially in the domain of image recognition. However, recent research showed that DNNs can be highly vulnerable to adversarially generated instances, which look seemingly normal to human observers, but completely confuse DNNs. These adversarial samples are crafted by adding small perturbations to normal, benign images. Such perturbations, while imperceptible to the human eye, are picked up by DNNs and cause them to misclassify the manipulated instances with high confidence. In this work, we explore and demonstrate how systematic JPEG compression can work as an effective pre-processing step in the classification pipeline to counter adversarial attacks and dramatically reduce their effects (e.g., Fast Gradient Sign Method, DeepFool). An important component of JPEG compression is its ability to remove high frequency signal components, inside square blocks of an image. Such an operation is equivalent to selective blurring of the image, helping remove additive perturbations. Further, we propose an ensemble-based technique that can be constructed quickly from a given well-performing DNN, and empirically show how such an ensemble that leverages JPEG compression can protect a model from multiple types of adversarial attacks, without requiring knowledge about the model.",
"title": ""
},
{
"docid": "62a611b7f5a5d3bb99659c4ee9e5e4a3",
"text": "Transmissible spongioform enchephalopathies (TSE's), include bovine spongiform encephalopathy (also called BSE or \"mad cow disease\"), Creutzfeldt-Jakob disease (CJD) in humans, and scrapie in sheep. They remain a mystery, their cause hotly debated. But between 1994 and 1996, 12 people in England came down with CJD, the human form of mad cow, and all had eaten beef from suspect cows. Current mad cow diagnosis lies solely in the detection of late appearing \"prions\", an acronym for hypothesized, gene-less, misfolded proteins, somehow claimed to cause the disease. Yet laboratory preparations of prions contain other things, which could include unidentified bacteria or viruses. Furthermore, the rigors of prion purification alone, might, in and of themselves, have killed the causative virus or bacteria. Therefore, even if samples appear to infect animals, it is impossible to prove that prions are causative. Manuelidis found viral-like particles, which even when separated from prions, were responsible for spongiform STE's. Subsequently, Lasmezas's study showed that 55% of mice injected with cattle BSE, and who came down with disease, had no detectable prions. Still, incredibly, prions, are held as existing TSE dogma and Heino Dringer, who did pioneer work on their nature, candidly predicts \"it will turn out that the prion concept is wrong.\" Many animals that die of spongiform TSE's never show evidence of misfolded proteins, and Dr. Frank Bastian, of Tulane, an authority, thinks the disorder is caused by the bacterial DNA he found in this group of diseases. Recently, Roels and Walravens isolated Mycobacterium bovis it from the brain of a cow with the clinical and histopathological signs of mad cow. Moreover, epidemiologic maps of the origins and peak incidence of BSE in the UK, suggestively match those of England's areas of highest bovine tuberculosis, the Southwest, where Britain's mad cow epidemic began. The neurotoxic potential for cow tuberculosis was shown in pre-1960 England, where one quarter of all tuberculous meningitis victims suffered from Mycobacterium bovis infection. And Harley's study showed pathology identical to \"mad cow\" from systemic M. bovis in cattle, causing a tuberculous spongiform encephalitis. In addition to M. bovis, Mycobacterium avium subspecies paratuberculosis (fowl tuberculosis) causes Johne's disease, a problem known and neglected in cattle and sheep for almost a century, and rapidly emerging as the disease of the new millennium. Not only has M. paratuberculosis been found in human Crohn's disease, but both Crohn's and Johne's both cross-react with the antigens of cattle paratuberculosis. Furthermore, central neurologic manifestations of Crohn's disease are not unknown. There is no known disease which better fits into what is occurring in Mad Cow and the spongiform enchephalopathies than bovine tuberculosis and its blood-brain barrier penetrating, virus-like, cell-wall-deficient forms. It is for these reasons that future research needs to be aimed in this direction.",
"title": ""
},
{
"docid": "967ebcd284a6a4dc58adf11eec0b10f0",
"text": "An innovative LDS 4G antenna solution operating in the 698-960 MHz band is presented. It is composed of two radiating elements recombined in a broadband single feed antenna system using a multiband matching circuit design. Matching interfaces are synthesized thanks to lumped components placed on the FR4 PCB supporting the LDS antenna. Measurement shows a reflection coefficient better than -6 dB over the 698-960 MHz band, with a 30% peak total efficiency. Measurement using a realistic phone casing showed the same performances. The proposed approach can be extended to additional bands, offering an innovative antenna solution able to address the multi band challenge related to 4G applications.",
"title": ""
},
{
"docid": "ace8bb06235e9ab70a85246a3fbee710",
"text": "How can humans acquire relational representations that enable analogical inference and other forms of high-level reasoning? Using comparative relations as a model domain, we explore the possibility that bottom-up learning mechanisms applied to objects coded as feature vectors can yield representations of relations sufficient to solve analogy problems. We introduce Bayesian analogy with relational transformations (BART) and apply the model to the task of learning first-order comparative relations (e.g., larger, smaller, fiercer, meeker) from a set of animal pairs. Inputs are coded by vectors of continuous-valued features, based either on human magnitude ratings, normed feature ratings (De Deyne et al., 2008), or outputs of the topics model (Griffiths, Steyvers, & Tenenbaum, 2007). Bootstrapping from empirical priors, the model is able to induce first-order relations represented as probabilistic weight distributions, even when given positive examples only. These learned representations allow classification of novel instantiations of the relations and yield a symbolic distance effect of the sort obtained with both humans and other primates. BART then transforms its learned weight distributions by importance-guided mapping, thereby placing distinct dimensions into correspondence. These transformed representations allow BART to reliably solve 4-term analogies (e.g., larger:smaller::fiercer:meeker), a type of reasoning that is arguably specific to humans. Our results provide a proof-of-concept that structured analogies can be solved with representations induced from unstructured feature vectors by mechanisms that operate in a largely bottom-up fashion. We discuss potential implications for algorithmic and neural models of relational thinking, as well as for the evolution of abstract thought.",
"title": ""
},
{
"docid": "dcab641c170b60e9b834b27e2048a457",
"text": "Over the past few years, we have been developing techniques for high-speed 3D shape measurement using digital fringe projection and phase-shifting techniques: various algorithms have been developed to improve the phase computation speed, parallel programming has been employed to further increase the processing speed, and advanced hardware technologies have been adopted to boost the speed of coordinate calculations and 3D geometry rendering. We have successfully achieved simultaneous 3D absolute shape acquisition, reconstruction, and display at a speed of 30 frames/s with 300 K points per frame. This paper presents the principles of the real-time 3D shape measurement techniques that we developed, summarizes the most recent progresses that have been made in this field, and discusses the challenges for advancing this technology further. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b0a1a782ce2cbf5f152a52537a1db63d",
"text": "In piezoelectric energy harvesting (PEH), with the use of the nonlinear technique named synchronized switching harvesting on inductor (SSHI), the harvesting efficiency can be greatly enhanced. Furthermore, the introduction of its self-powered feature makes this technique more applicable for standalone systems. In this article, a modified circuitry and an improved analysis for self-powered SSHI are proposed. With the modified circuitry, direct peak detection and better isolation among different units within the circuit can be achieved, both of which result in further removal on dissipative components. In the improved analysis, details in open circuit voltage, switching phase lag, and voltage inversion factor are discussed, all of which lead to a better understanding to the working principle of the self-powered SSHI. Both analyses and experiments show that, in terms of harvesting power, the higher the excitation level, the closer between self-powered and ideal SSHI; at the same time, the more beneficial the adoption of self-powered SSHI treatment in piezoelectric energy harvesting, compared to the standard energy harvesting (SEH) technique.",
"title": ""
},
{
"docid": "b5acaea3bf5c5a4ee5bda266bfe083ca",
"text": "The Internet provides the opportunity for investors to post online opinions that they share with fellow investors. Sentiment analysis of online opinion posts can facilitate both investors' investment decision making and stock companies' risk perception. This paper develops a novel sentiment ontology to conduct context-sensitive sentiment analysis of online opinion posts in stock markets. The methodology integrates popular sentiment analysis into machine learning approaches based on support vector machine and generalized autoregressive conditional heteroskedasticity modeling. A typical financial website called Sina Finance has been selected as an experimental platform where a corpus of financial review data was collected. Empirical results suggest solid correlations between stock price volatility trends and stock forum sentiment. Computational results show that the statistical machine learning approach has a higher classification accuracy than that of the semantic approach. Results also imply that investor sentiment has a particularly strong effect for value stocks relative to growth stocks.",
"title": ""
},
{
"docid": "3f807cb7e753ebd70558a0ce74b416b7",
"text": "In this paper, we study the problem of recovering a tensor with missing data. We propose a new model combining the total variation regularization and low-rank matrix factorization. A block coordinate decent (BCD) algorithm is developed to efficiently solve the proposed optimization model. We theoretically show that under some mild conditions, the algorithm converges to the coordinatewise minimizers. Experimental results are reported to demonstrate the effectiveness of the proposed model and the efficiency of the numerical scheme. © 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "d4c2f6e28704017fa32075582d87875f",
"text": "All over the world we have been assisting to a significant increase of the telecommunication systems usage. People are faced day after day with strong marketing campaigns seeking their attention to new telecommunication products and services. Telecommunication companies struggle in a high competitive business arena. It seems that their efforts were well done, because customers are strongly adopting the new trends and use (and abuse) systematically communication services in their quotidian. Although fraud situations are rare, they are increasing and they correspond to a large amount of money that telecommunication companies lose every year. In this work, we studied the problem of fraud detection in telecommunication systems, especially the cases of superimposed fraud, providing an anomaly detection technique, supported by a signature schema. Our main goal is to detect deviate behaviors in useful time, giving better basis to fraud analysts to be more accurate in their decisions in the establishment of potential fraud situations.",
"title": ""
},
{
"docid": "8533b47323e9de6fb24e88a49c3e52fa",
"text": "An ontology is a set of deenitions of content-speciic knowledge representation prim-itives: classes, relations, functions, and object constants. Ontolingua is mechanism for writing ontologies in a canonical format, such that they can be easily translated into a variety of representation and reasoning systems. This allows one to maintain the ontol-ogy in a single, machine-readable form while using it in systems with diierent syntax and reasoning capabilities. The syntax and semantics are based on the KIF knowledge interchange format 11]. Ontolingua extends KIF with standard primitives for deening classes and relations, and organizing knowledge in object-centered hierarchies with inheritance. The Ontolingua software provides an architecture for translating from KIF-level sentences into forms that can be eeciently stored and reasoned about by target representation systems. Currently, there are translators into LOOM, Epikit, and Algernon, as well as a canonical form of KIF. This paper describes the basic approach of Ontolingua to the ontology sharing problem, introduces the syntax, and describes the semantics of a few ontological commitments made in the software. Those commitments, which are reeected in the on-tolingua syntax and the primitive vocabulary of the frame ontology, include: a distinction between deenitional and nondeenitional assertions; the organization of knowledge with classes, instances, sets, and second-order relations; and assertions whose meaning depends on the contents of the knowledge base. Limitations of Ontolingua's \\conser-vative\" approach to sharing ontologies and alternative approaches to the problem are discussed.",
"title": ""
},
{
"docid": "7e03d09882c7c8fcab5df7a6bd12764f",
"text": "This paper describes a background digital calibration technique based on bitwise correlation (BWC) to correct the capacitive digital-to-analog converter (DAC) mismatch error in successive-approximation-register (SAR) analog-to-digital converters (ADC's). Aided by a single-bit pseudorandom noise (PN) injected to the ADC input, the calibration engine extracts all bit weights simultaneously to facilitate a digital-domain correction. The analog overhead associated with this technique is negligible and the conversion speed is fully retained (in contrast to [1] in which the ADC throughput is halved). A prototype 12bit 50-MS/s SAR ADC fabricated in 90-nm CMOS measured a 66.5-dB peak SNDR and an 86.0-dB peak SFDR with calibration, while occupying 0.046 mm2 and dissipating 3.3 mW from a 1.2-V supply. The calibration logic is estimated to occupy 0.072 mm2 with a power consumption of 1.4 mW in the same process.",
"title": ""
},
{
"docid": "bc8c9bb602f0d0b368f11491c7897675",
"text": "Autonomous robot An autonomous robot is a robot that can perform tasks in unstructured environments with minimal human guidance. Planned path A planned path is a pre-determined, obstacle-free, trajectory that a robot can follow to reach its goal position from its starting position. Complete path planner A complete path planner is an algorithm that is guaranteed to find a path, if one exists. Deadlocked path planning A deadlock is a situation in path planning in which a solution cannot be found, even though one exists. Typically, this is caused by robots blocking each other’s paths, and the planner being unable to find a solution in which robots move out of each other’s way.",
"title": ""
},
{
"docid": "5cf71fc03658cd7210ac2a764f1425d7",
"text": "Most existing pose robust methods are too computational complex to meet practical applications and their performance under unconstrained environments are rarely evaluated. In this paper, we propose a novel method for pose robust face recognition towards practical applications, which is fast, pose robust and can work well under unconstrained environments. Firstly, a 3D deformable model is built and a fast 3D model fitting algorithm is proposed to estimate the pose of face image. Secondly, a group of Gabor filters are transformed according to the pose and shape of face image for feature extraction. Finally, PCA is applied on the pose adaptive Gabor features to remove the redundances and Cosine metric is used to evaluate the similarity. The proposed method has three advantages: (1) The pose correction is applied in the filter space rather than image space, which makes our method less affected by the precision of the 3D model, (2) By combining the holistic pose transformation and local Gabor filtering, the final feature is robust to pose and other negative factors in face recognition, (3) The 3D structure and facial symmetry are successfully used to deal with self-occlusion. Extensive experiments on FERET and PIE show the proposed method outperforms state-of-the-art methods significantly, meanwhile, the method works well on LFW.",
"title": ""
},
{
"docid": "457a662fd9928cdb1353ce460cb63422",
"text": "Learning and generating Chinese poems is a charming yet challenging task. Traditional approaches involve various language modeling and machine translation techniques, however, they perform not as well when generating poems with complex pattern constraints, for example Song iambics, a famous type of poems that involve variable-length sentences and strict rhythmic patterns. This paper applies the attention-based sequence-tosequence model to generate Chinese Song iambics. Specifically, we encode the cue sentences by a bi-directional Long-Short Term Memory (LSTM) model and then predict the entire iambic with the information provided by the encoder, in the form of an attention-based LSTM that can regularize the generation process by the fine structure of the input cues. Several techniques are investigated to improve the model, including global context integration, hybrid style training, character vector initialization and adaptation. Both the automatic and subjective evaluation results show that our model indeed can learn the complex structural and rhythmic patterns of Song iambics, and the generation is rather successful.",
"title": ""
},
{
"docid": "7c3f14bbbb3cf2bbe7c9caaf42361445",
"text": "In this paper, we present a method for generating fast conceptual urban design prototypes. We synthesize spatial configurations for street networks, parcels and building volumes. Therefore, we address the problem of implementing custom data structures for these configurations and how the generation process can be controlled and parameterized. We exemplify our method by the development of new components for Grasshopper/Rhino3D and their application in the scope of selected case studies. By means of these components, we show use case applications of the synthesis algorithms. In the conclusion, we reflect on the advantages of being able to generate fast urban design prototypes, but we also discuss the disadvantages of the concept and the usage of Grasshopper as a user interface.",
"title": ""
},
{
"docid": "2ec14d4544d1fcc6591b6f31140af204",
"text": "To better understand the molecular and cellular differences in brain organization between human and nonhuman primates, we performed transcriptome sequencing of 16 regions of adult human, chimpanzee, and macaque brains. Integration with human single-cell transcriptomic data revealed global, regional, and cell-type–specific species expression differences in genes representing distinct functional categories. We validated and further characterized the human specificity of genes enriched in distinct cell types through histological and functional analyses, including rare subpallial-derived interneurons expressing dopamine biosynthesis genes enriched in the human striatum and absent in the nonhuman African ape neocortex. Our integrated analysis of the generated data revealed diverse molecular and cellular features of the phylogenetic reorganization of the human brain across multiple levels, with relevance for brain function and disease.",
"title": ""
},
{
"docid": "b68336c869207720d6ab1880744b70be",
"text": "Particle Swarm Optimization (PSO) algorithms represent a new approach for optimization. In this paper image enhancement is considered as an optimization problem and PSO is used to solve it. Image enhancement is mainly done by maximizing the information content of the enhanced image with intensity transformation function. In the present work a parameterized transformation function is used, which uses local and global information of the image. Here an objective criterion for measuring image enhancement is used which considers entropy and edge information of the image. We tried to achieve the best enhanced image according to the objective criterion by optimizing the parameters used in the transformation function with the help of PSO. Results are compared with other enhancement techniques, viz. histogram equalization, contrast stretching and genetic algorithm based image enhancement.",
"title": ""
},
{
"docid": "39c1be028688904914fb8d7be729a272",
"text": "Projections of computer technology forecast processors with peak performance of 1,000 MIPS in the relatively near future. These processors could easily lose half or more of their performance in the memory hierarchy if the hierarchy design is based on conventional caching techniques. This paper presents hardware techniques to improve the performance of caches. Miss caching places a small fully-associative cache between a cache and its refill path. Misses in the cache that hit in the miss cache have only a one cycle miss penalty, as opposed to a many cycle miss penalty without the miss cache. Small miss caches of 2 to 5 entries are shown to be very effective in removing mapping conflict misses in first-level direct-mapped caches. Victim caching is an improvement to miss caching that loads the small fully-associative cache with the victim of a miss and not the requested line. Small victim caches of 1 to 5 entries are even more effective at removing conflict misses than miss caching. Stream buffers prefetch cache lines starting at a cache miss address. The prefetched data is placed in the buffer and not in the cache. Stream buffers are useful in removing capacity and compulsory cache misses, as well as some instruction cache conflict misses. Stream buffers are more effective than previously investigated prefetch techniques at using the next slower level in the memory hierarchy when it is pipelined. An extension to the basic stream buffer, called multi-way stream buffers, is introduced. Multi-way stream buffers are useful for prefetching along multiple intertwined data reference streams. Together, victim caches and stream buffers reduce the miss rate of the first level in the cache hierarchy by a factor of two to three on a set of six large benchmarks. Copyright 1990 Digital Equipment Corporation",
"title": ""
},
{
"docid": "0bd34312fe7fd932cca206a791c085ec",
"text": "In this paper, an accurate implementation of American Sign Language Translator is presented. It is a portable electronic hand glove to be used by any deaf/mute person to communicate effectively with the othesr who don't understand sign language. It provides the visual and audible output on an LCD and through a speaker respectively. This glove consists of five flex sensors that senses the variation in different signs, an accelerometer to distinguish between the static and dynamic signs, a contact sensor, Arduino Mega 2560 for processing of the data, VoiceBox shield, LCD and Speaker for the outputs. There exists a communication gap between the normal and the disabled people. A simpler, easier, useful and efficient solution to fill this void is presented in this paper.",
"title": ""
}
] |
scidocsrr
|
63c2dbc5381e39f00aadb281e35833b0
|
Be Selfish and Avoid Dilemmas: Fork After Withholding (FAW) Attacks on Bitcoin
|
[
{
"docid": "937d93600ad3d19afda31ada11ea1460",
"text": "Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools -- the game is called a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. As a case study, we analyze the Bitcoin crypto currency which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the \"block withholding attack\". This attack is a topic of debate, initially thought to be ill-incentivized in today's pool protocols: i.e., causing a net loss to the attacker, and later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long-run, but may not be so for a short duration. This implies that existing pool protocols are insecure, and if the attack is conducted systematically, Bitcoin pools could lose millions of dollars worth in months. The equilibrium state is a mixed strategy -- that is -- in equilibrium all clients are incentivized to probabilistically attack to maximize their payoffs rather than participate honestly. As a result, the Bitcoin network is incentivized to waste a part of its resources simply to compete.",
"title": ""
},
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
},
{
"docid": "45d3e3e34b3a6217c59e5196d09774ef",
"text": "While showing great promise, Bitcoin requires users to wait tens of minutes for transactions to commit, and even then, offering only probabilistic guarantees. This paper introduces ByzCoin, a novel Byzantine consensus protocol that leverages scalable collective signing to commit Bitcoin transactions irreversibly within seconds. ByzCoin achieves Byzantine consensus while preserving Bitcoin’s open membership by dynamically forming hash power-proportionate consensus groups that represent recently-successful block miners. ByzCoin employs communication trees to optimize transaction commitment and verification under normal operation while guaranteeing safety and liveness under Byzantine faults, up to a near-optimal tolerance of f faulty group members among 3 f + 2 total. ByzCoin mitigates double spending and selfish mining attacks by producing collectively signed transaction blocks within one minute of transaction submission. Tree-structured communication further reduces this latency to less than 30 seconds. Due to these optimizations, ByzCoin achieves a throughput higher than Paypal currently handles, with a confirmation latency of 15-20 seconds.",
"title": ""
}
] |
[
{
"docid": "948ac7d5527cfcb978087f1465a918e6",
"text": "We investigate automatic analysis of teachers' instructional strategies from audio recordings collected in live classrooms. We collected a data set of teacher audio and human-coded instructional activities (e.g., lecture, question and answer, group work) in 76 middle school literature, language arts, and civics classes from eleven teachers across six schools. We automatically segment teacher audio to analyze speech vs. rest patterns, generate automatic transcripts of the teachers' speech to extract natural language features, and compute low-level acoustic features. We train supervised machine learning models to identify occurrences of five key instructional segments (Question & Answer, Procedures and Directions, Supervised Seatwork, Small Group Work, and Lecture) that collectively comprise 76% of the data. Models are validated independently of teacher in order to increase generalizability to new teachers from the same sample. We were able to identify the five instructional segments above chance levels with F1 scores ranging from 0.64 to 0.78. We discuss key findings in the context of teacher modeling for formative assessment and professional development.",
"title": ""
},
{
"docid": "fcf84abf8b829c33a5da1716e390971d",
"text": "The value of a visualization evolved in a digital humanities project is per se not evenly high for both involved research fields. When an approach is too complex – which counts as a strong argument for a publication in a visualization realm – it might get invaluable for humanities scholars due to problems of comprehension. On the other hand, if a clean, easily comprehensible visualization is valuable for a humanities scholar, the missing novelty most likely impedes a computer science publication. My own digital humanities background has shown that it is indeed a balancing act to generate beneficial research results for both the visualization and the digital humanities fields. To find out how visualizations are used as means to communicate humanities matters and to assess the impact of the visualization community to the digital humanities field, I surveyed the long papers of the last four annual digital humanities conferences, discovering that visualization scholars are rarely involved in collaborations that produce valuable digital humanities results, in other words, it seems hard to walk the tightrope of generating valuable research for both fields. Derived from my own digital humanities experiences, I suggest a methodology how to design a digital humanities project to overcome this issue.",
"title": ""
},
{
"docid": "c4256017c214eabda8e5b47c604e0e49",
"text": "In this paper, a multi-band antenna for 4G wireless systems is proposed. The proposed antenna consists of a modified planar inverted-F antenna with additional branch line for wide bandwidth and a folded monopole antenna. The antenna provides wide bandwidth for covering the hepta-band LTE/GSM/UMTS operation. The measured 6-dB return loss bandwidth was 169 MHz (793 MHz-962 MHz) at the low frequency band and 1030 MHz (1700 MHz-2730 MHz) at the high frequency band. The overall dimension of the proposed antenna is 55 mm × 110 mm × 5 mm.",
"title": ""
},
{
"docid": "15b38be44110ded3407b152af2f65457",
"text": "What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. Indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. However, unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue.",
"title": ""
},
{
"docid": "e9b8d419cf5863fcb417025d3081453f",
"text": "Recent trends in targeted cyber-attacks has increased the interest of research in the field of cyber securit y. Such attacks have massive disruptive effects on organizati ons, enterprises and governments. Cyber kill chain is a model to describe cyber-attacks so as to develop incident response a nd analysis capabilities. Cyber kill chain in simple terms is an attack chain, the path that an intruder takes to penetrate information systems over time to execute an attack on the target. This paper broadly categories the methodologies, techniques an d tools involved in cyber-attacks. This paper intends to help a cybe r security researcher to realize the options available to an a ttacker at every stage of a cyber-attack. Keywords—Reconnaissance, RAT, Exploit, Cyber Attack, Persistence, Command & Control",
"title": ""
},
{
"docid": "d9d140c5fac606a62568313c948077ab",
"text": "We introduce global regularities in the 2.5D building modeling problem, to reflect the orientation and placement similarities between planar elements in building structures. Given a 2.5D point cloud scan, we present an automatic approach that simultaneously detects locally fitted plane primitives and global regularities. While global regularities are extracted by analyzing the plane primitives, they adjust the planes in return and effectively correct local fitting errors. We explore a broad variety of global regularities between 2.5D planar elements including both planer roof patches and planar facade patches. By aligning planar elements to global regularities, our method significantly improves the model quality in terms of both geometry and human judgement.",
"title": ""
},
{
"docid": "2a30aa44df358be7bb27afd0014a07ff",
"text": "The adoption of Smart Grid devices throughout utility networks will effect tremendous change in grid operations and usage of electricity over the next two decades. The changes in ways to control loads, coupled with increased penetration of renewable energy sources, offer a new set of challenges in balancing consumption and generation. Increased deployment of energy storage devices in the distribution grid will help make this process happen more effectively and improve system performance. This paper addresses the new types of storage being utilized for grid support and the ways they are integrated into the grid.",
"title": ""
},
{
"docid": "134297d45c943f0751f002fa5c456940",
"text": "Widespread application of real-time, Nonlinear Model Predictive Control (NMPC) algorithms to systems of large scale or with fast dynamics is challenged by the high associated computational cost, in particular in presence of long prediction horizons. In this paper, a fast NMPC strategy to reduce the on-line computational cost is proposed. A Curvature-based Measure of Nonlinearity (CMoN) of the system is exploited to reduce the required number of sensitivity computations, which largely contribute to the overall computational cost. The proposed scheme is validated by a simulation study on the chain of masses motion control problem, a toy example that can be easily extended to an arbitrary dimension. Simulations have been run with long prediction horizons and large state dimensions. Results show that sensitivity computations are significantly reduced with respect to other sensitivity updating schemes, while preserving control performance.",
"title": ""
},
{
"docid": "782e5dad69e951d854e10a1922b1b270",
"text": "Many experimental studies indicate that people are motivated by reciprocity. Rabin [Amer. Rev. 83 (1993) 1281] develops techniques for incorporating such concerns into game theo economics. His theory is developed for normal form games, and he abstracts from information the sequential structure of a strategic situation. We develop a theory of reciprocity for ext games in which the sequential structure of a strategic situation is made explicit, and propose solution concept—sequential reciprocity equilibrium—for which we prove an equilibrium exis result. The model is applied in several examples, and it is shown that it captures very well the in meaning of reciprocity as well as certain qualitative features of experimental evidence. 2003 Elsevier Inc. All rights reserved. JEL classification: A13; C70; D63",
"title": ""
},
{
"docid": "325796828b9d25d50eb69f62d9eabdbb",
"text": "We present a new algorithm to reduce the space complexity of heuristic search. It is most effective for problem spaces that grow polynomially wi th problem size, but contain large numbers of short cycles. For example, the problem of finding a lowest-cost corner-to-corner path in a d-dimensional grid has application to gene sequence alignment in computational biology. The main idea is to perform a bidirectional search, but saving only the Open lists and not the Closed lists. Once the search completes, we have one node on an optimal path, but don't have the solution path itself. The path is then reconstructed by recursively applying the same algorithm between the in i t ia l node and the in termediate node, and also between the intermediate node and the goal node. If n is the length of the grid in each dimension, and d is the number of dimensions, this algorithm reduces the memory requirement from to The time complexity only increases by a constant factor of in two dimensions, and 1.8 in three dimensions.",
"title": ""
},
{
"docid": "1c0be734eaff2b337edfd9af75a711fa",
"text": "This article is a fully referenced research review to overview progress in unraveling the details of the evolutionary Tree of Life, from life's first occurrence in the hypothetical RNA-era, to humanity's own emergence and diversification, through migration and intermarriage, using research diagrams and brief discussion of the current state of the art. The Tree of Life, in biological terms, has come to be identified with the evolutionary tree of biological diversity. It is this tree which represents the climax fruitfulness of the biosphere and the genetic foundation of our existence, embracing not just higher Eucaryotes, plants, animals and fungi, but Protista, Eubacteria and Archaea, the realm, including the extreme heat and salt-loving organisms, which appears to lie almost at the root of life itself. To a certain extent the notion of a tree based on generational evolution has become complicated by a variety of compounding factors. Gene transfer is not just vertical carried down the generations. There is also evidence for promiscuous incidences of horizontal gene transfer, genetic symbiosis, hybridization and even the formation of chimeras. This review will cover all these aspects, from the first life on Earth to Homo sapiens.",
"title": ""
},
{
"docid": "6c893b6c72f932978a996b6d6283bc02",
"text": "Deep metric learning aims to learn an embedding function, modeled as deep neural network. This embedding function usually puts semantically similar images close while dissimilar images far from each other in the learned embedding space. Recently, ensemble has been applied to deep metric learning to yield state-of-the-art results. As one important aspect of ensemble, the learners should be diverse in their feature embeddings. To this end, we propose an attention-based ensemble, which uses multiple attention masks, so that each learner can attend to different parts of the object. We also propose a divergence loss, which encourages diversity among the learners. The proposed method is applied to the standard benchmarks of deep metric learning and experimental results show that it outperforms the state-of-the-art methods by a significant margin on image retrieval tasks.",
"title": ""
},
{
"docid": "ed0d2151f5f20a233ed8f1051bc2b56c",
"text": "This paper discloses development and evaluation of die attach material using base metals (Cu and Sn) by three different type of composite. Mixing them into paste or sheet shape for die attach, we have confirmed that one of Sn-Cu components having IMC network near its surface has major role to provide robust interconnect especially for high temperature applications beyond 200°C after sintering.",
"title": ""
},
{
"docid": "bd02a00a6021edfa60edf8f5616ff5df",
"text": "The transition from product-centric to service-centric business models presents a major challenge to industrial automation and manufacturing systems. This transition increases Machine-to-Machine connectivity among industrial devices, industrial controls systems, and factory floor devices. While initiatives like Industry 4.0 or the Industrial Internet Consortium motivate this transition, the emergence of the Internet of Things and Cyber Physical Systems are key enablers. However, automated and autonomous processes require trust in the communication entities and transferred data. Therefore, we study how to secure a smart service use case for industrial maintenance scenarios. In this use case, equipment needs to securely transmit its status information to local and remote recipients. We investigate and compare two security technologies that provide isolation and a secured execution environment: ARM TrustZone and a Security Controller. To compare these technologies we design and implement a device snapshot authentication system. Our results indicate that the TrustZone based approach promises greater flexibility and performance, but only the Security Controller strongly protects against physical attacks. We argue that the best technology actually depends on the use case and propose a hybrid approach that maximizes security for high-security industrial applications. We believe that the insights we gained will help introducing advanced security mechanisms into the future Industrial Internet of Things.",
"title": ""
},
{
"docid": "fd111c4f99c0fe9d8731385f6c7eb04f",
"text": "We introduce a greedy transition-based parser that learns to represent parser states using recurrent neural networks. Our primary innovation that enables us to do this efficiently is a new control structure for sequential neural networks—the stack long short-term memory unit (LSTM). Like the conventional stack data structures used in transition-based parsers, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. Our model captures three facets of the parser's state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of transition actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. In addition, we compare two different word representations: (i) standard word vectors based on look-up tables and (ii) character-based models of words. Although standard word embedding models work well in all languages, the character-based models improve the handling of out-of-vocabulary words, particularly in morphologically rich languages. Finally, we discuss the use of dynamic oracles in training the parser. During training, dynamic oracles alternate between sampling parser states from the training data and from the model as it is being learned, making the model more robust to the kinds of errors that will be made at test time. Training our model with dynamic oracles yields a linear-time greedy parser with very competitive performance.",
"title": ""
},
{
"docid": "48a8709ba0f40d6b174a1fdfc0663865",
"text": "The resistive switching memory (RRAM) offers fast switching, low-voltage operation, and scalable device area. However, reliability and variability issues must be understood, particularly in the low-current operation regime. This letter addresses set-state variability and presents a new set failure phenomenon in RRAM, leading to a high-resistance tail in the set-state distribution. The set failure is due to complementary switching of the RRAM, causing an increase of resistance soon after the set transition. The dependence of set failure on the programing current is explained by the increasing voltage stress across the RRAM device causing filament disconnection.",
"title": ""
},
{
"docid": "5b73fd2439e02906349f3afe2c2e331c",
"text": "This paper presents a varactor-based power divider with reconfigurable power-dividing ratio and reconfigurable in-phase or out-of-phase phase relation between outputs. By properly controlling the tuning varactors, the power divider can be either in phase or out of phase and each with a wide range of tunable power-dividing ratio. The proposed microstrip power divider was prototyped and experimentally characterized. Measured and simulated results are in good agreement.",
"title": ""
},
{
"docid": "d8a13de3c5ca958b0afac1629930d6e7",
"text": "As the number and the diversity of news outlets on the Web grows, so does the opportunity for \"alternative\" sources of information to emerge. Using large social networks like Twitter and Facebook, misleading, false, or agenda-driven information can quickly and seamlessly spread online, deceiving people or influencing their opinions. Also, the increased engagement of tightly knit communities, such as Reddit and 4chan, further compounds the problem, as their users initiate and propagate alternative information, not only within their own communities, but also to different ones as well as various social media. In fact, these platforms have become an important piece of the modern information ecosystem, which, thus far, has not been studied as a whole.\n In this paper, we begin to fill this gap by studying mainstream and alternative news shared on Twitter, Reddit, and 4chan. By analyzing millions of posts around several axes, we measure how mainstream and alternative news flows between these platforms. Our results indicate that alt-right communities within 4chan and Reddit can have a surprising level of influence on Twitter, providing evidence that \"fringe\" communities often succeed in spreading alternative news to mainstream social networks and the greater Web.",
"title": ""
},
{
"docid": "6847a2aa2eaa6dd421743dead32e0d23",
"text": "BACKGROUND/AIM\nConcussion is a common injury in sport. Most individuals recover in 7-10 days but some have persistent symptoms. The objective of this study was to determine if a combination of vestibular rehabilitation and cervical spine physiotherapy decreased the time until medical clearance in individuals with prolonged postconcussion symptoms.\n\n\nMETHODS\nThis study was a randomised controlled trial. Consecutive patients with persistent symptoms of dizziness, neck pain and/or headaches following a sport-related concussion (12-30 years, 18 male and 13 female) were randomised to the control or intervention group. Both groups received weekly sessions with a physiotherapist for 8 weeks or until the time of medical clearance. Both groups received postural education, range of motion exercises and cognitive and physical rest until asymptomatic followed by a protocol of graded exertion. The intervention group also received cervical spine and vestibular rehabilitation. The primary outcome of interest was medical clearance to return to sport, which was evaluated by a study sport medicine physician who was blinded to the treatment group.\n\n\nRESULTS\nIn the treatment group, 73% (11/15) of the participants were medically cleared within 8 weeks of initiation of treatment, compared with 7% (1/14) in the control group. Using an intention to treat analysis, individuals in the treatment group were 3.91 (95% CI 1.34 to 11.34) times more likely to be medically cleared by 8 weeks.\n\n\nCONCLUSIONS\nA combination of cervical and vestibular physiotherapy decreased time to medical clearance to return to sport in youth and young adults with persistent symptoms of dizziness, neck pain and/or headaches following a sport-related concussion.\n\n\nTRIAL REGISTRATION NUMBER\nNCT01860755.",
"title": ""
},
{
"docid": "12344e450dbfba01476353e38f83358f",
"text": "This paper explores four issues that have emerged from the research on social, cognitive and teaching presence in an online community of inquiry. The early research in the area of online communities of inquiry has raised several issues with regard to the creation and maintenance of social, cognitive and teaching presence that require further research and analysis. The other overarching issue is the methodological validity associated with the community of inquiry framework. The first issue is about shifting social presence from socio-emotional support to a focus on group cohesion (from personal to purposeful relationships). The second issue concerns the progressive development of cognitive presence (inquiry) from exploration to resolution. That is, moving discussion beyond the exploration phase. The third issue has to do with how we conceive of teaching presence (design, facilitation, direct instruction). More specifically, is there an important distinction between facilitation and direct instruction? Finally, the methodological issue concerns qualitative transcript analysis and the validity of the coding protocol.",
"title": ""
}
] |
scidocsrr
|
2103f167a2bb6b6912aa9bbcfdefb781
|
Video Segmentation with Background Motion Models
|
[
{
"docid": "35cbd0c888d230c4778d3bb14ab796e1",
"text": "Occlusion relations inform the partition of the image domain into “objects” but are difficult to determine from a single image or short-baseline video. We show how long-term occlusion relations can be robustly inferred from video, and used within a convex optimization framework to segment the image domain into regions. We highlight the challenges in determining these occluder/occluded relations and ensuring regions remain temporally consistent, propose strategies to overcome them, and introduce an efficient numerical scheme to perform the partition directly on the pixel grid, without the need for superpixelization or other preprocessing steps.",
"title": ""
},
{
"docid": "522345eb9b2e53f05bb9d961c85fea23",
"text": "In this work, we propose a novel approach to video segmentation that operates in bilateral space. We design a new energy on the vertices of a regularly sampled spatiotemporal bilateral grid, which can be solved efficiently using a standard graph cut label assignment. Using a bilateral formulation, the energy that we minimize implicitly approximates long-range, spatio-temporal connections between pixels while still containing only a small number of variables and only local graph edges. We compare to a number of recent methods, and show that our approach achieves state-of-the-art results on multiple benchmarks in a fraction of the runtime. Furthermore, our method scales linearly with image size, allowing for interactive feedback on real-world high resolution video.",
"title": ""
}
] |
[
{
"docid": "a00ac4cefbb432ffcc6535dd8fd56880",
"text": "Mobile activity recognition focuses on inferring current user activities by leveraging sensory data available on today's sensor rich mobile phones. Supervised learning with static models has been applied pervasively for mobile activity recognition. In this paper, we propose a novel phone-based dynamic recognition framework with evolving data streams for activity recognition. The novel framework incorporates incremental and active learning for real-time recognition and adaptation in streaming settings. While stream evolves, we refine, enhance and personalise the learning model in order to accommodate the natural drift in a given data stream. Extensive experimental results using real activity recognition data have evidenced that the novel dynamic approach shows improved performance of recognising activities especially across different users. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a144509c91a0cc8f50f0bb7e3d8dbdd6",
"text": "The prefrontal cortex is necessary for directing thought and planning action. Working memory, the active, transient maintenance of information in mind for subsequent monitoring and manipulation, lies at the core of many simple, as well as high-level, cognitive functions. Working memory has been shown to be compromised in a number of neurological and psychiatric conditions and may contribute to the behavioral and cognitive deficits associated with these disorders. It has been theorized that working memory depends upon reverberating circuits within the prefrontal cortex and other cortical areas. However, recent work indicates that intracellular signals and protein dephosphorylation are critical for working memory. The present article will review recent research into the involvement of the modulatory neurotransmitters and their receptors in working memory. The intracellular signaling pathways activated by these receptors and evidence that indicates a role for G(q)-initiated PI-PLC and calcium-dependent protein phosphatase calcineurin activity in working memory will be discussed. Additionally, the negative influence of calcium- and cAMP-dependent protein kinase (i.e., calcium/calmodulin-dependent protein kinase II (CaMKII), calcium/diacylglycerol-activated protein kinase C (PKC), and cAMP-dependent protein kinase A (PKA)) activities on working memory will be reviewed. The implications of these experimental findings on the observed inverted-U relationship between D(1) receptor stimulation and working memory, as well as age-associated working memory dysfunction, will be presented. Finally, we will discuss considerations for the development of clinical treatments for working memory disorders.",
"title": ""
},
{
"docid": "9497731525a996844714d5bdbca6ae03",
"text": "Recently, machine learning is widely used in applications and cloud services. And as the emerging field of machine learning, deep learning shows excellent ability in solving complex learning problems. To give users better experience, high performance implementations of deep learning applications seem very important. As a common means to accelerate algorithms, FPGA has high performance, low power consumption, small size and other characteristics. So we use FPGA to design a deep learning accelerator, the accelerator focuses on the implementation of the prediction process, data access optimization and pipeline structure. Compared with Core 2 CPU 2.3GHz, our accelerator can achieve promising result.",
"title": ""
},
{
"docid": "e121891a063a2a05a83c369a54b0ecea",
"text": "The number of vulnerabilities in open source libraries is increasing rapidly. However, the majority of them do not go through public disclosure. These unidentified vulnerabilities put developers' products at risk of being hacked since they are increasingly relying on open source libraries to assemble and build software quickly. To find unidentified vulnerabilities in open source libraries and secure modern software development, we describe an efficient automatic vulnerability identification system geared towards tracking large-scale projects in real time using natural language processing and machine learning techniques. Built upon the latent information underlying commit messages and bug reports in open source projects using GitHub, JIRA, and Bugzilla, our K-fold stacking classifier achieves promising results on vulnerability identification. Compared to the state of the art SVM-based classifier in prior work on vulnerability identification in commit messages, we improve precision by 54.55% while maintaining the same recall rate. For bug reports, we achieve a much higher precision of 0.70 and recall rate of 0.71 compared to existing work. Moreover, observations from running the trained model at SourceClear in production for over 3 months has shown 0.83 precision, 0.74 recall rate, and detected 349 hidden vulnerabilities, proving the effectiveness and generality of the proposed approach.",
"title": ""
},
{
"docid": "6544c01bbd76427c9078d7a2a7dad8d5",
"text": "Music is capable of inducing emotional arousal. While previous studies used brief musical excerpts to induce one specific emotion, the current study aimed to identify the physiological correlates of continuous changes in subjective emotional states while listening to a complete music piece. A total of 19 participants listened to the first movement of Ludwig van Beethoven’s 5th symphony (duration: ~7.4 min), during which a continuous 76-channel EEG was recorded. In a second session, the subjects evaluated their emotional arousal during the listening. A fast fourier transform was performed and covariance maps of spectral power were computed in association with the subjective arousal ratings. Subjective arousal ratings had good inter-individual correlations. Covariance maps showed a right-frontal suppression of lower alpha-band activity during high arousal. The results indicate that music is a powerful arousal-modulating stimulus. The temporal dynamics of the piece are well suited for sequential analysis, and could be necessary in helping unfold the full emotional power of music.",
"title": ""
},
{
"docid": "523677ed6d482ab6551f6d87b8ad761e",
"text": "To enable information integration, schema matching is a critical step for discovering semantic correspondences of attributes across heterogeneous sources. While complex matchings are common, because of their far more complex search space, most existing techniques focus on simple 1:1 matchings. To tackle this challenge, this article takes a conceptually novel approach by viewing schema matching as correlation mining, for our task of matching Web query interfaces to integrate the myriad databases on the Internet. On this “deep Web ” query interfaces generally form complex matchings between attribute groups (e.g., {author} corresponds to {first name, last name} in the Books domain). We observe that the co-occurrences patterns across query interfaces often reveal such complex semantic relationships: grouping attributes (e.g., {first name, last name}) tend to be co-present in query interfaces and thus positively correlated. In contrast, synonym attributes are negatively correlated because they rarely co-occur. This insight enables us to discover complex matchings by a correlation mining approach. In particular, we develop the DCM framework, which consists of data preprocessing, dual mining of positive and negative correlations, and finally matching construction. We evaluate the DCM framework on manually extracted interfaces and the results show good accuracy for discovering complex matchings. Further, to automate the entire matching process, we incorporate automatic techniques for interface extraction. Executing the DCM framework on automatically extracted interfaces, we find that the inevitable errors in automatic interface extraction may significantly affect the matching result. To make the DCM framework robust against such “noisy” schemas, we integrate it with a novel “ensemble” approach, which creates an ensemble of DCM matchers, by randomizing the schema data into many trials and aggregating their ranked results by taking majority voting. As a principled basis, we provide analytic justification of the robustness of the ensemble approach. Empirically, our experiments show that the “ensemblization” indeed significantly boosts the matching accuracy, over automatically extracted and thus noisy schema data. By employing the DCM framework with the ensemble approach, we thus complete an automatic process of matchings Web query interfaces.",
"title": ""
},
{
"docid": "f64e6f77891168c980e48ced53022184",
"text": "Text classification methods for tasks like factoid question answering typically use manually defined string matching rules or bag of words representations. These methods are ineffective when question text contains very few individual words (e.g., named entities) that are indicative of the answer. We introduce a recursive neural network (rnn) model that can reason over such input by modeling textual compositionality. We apply our model, qanta, to a dataset of questions from a trivia competition called quiz bowl. Unlike previous rnn models, qanta learns word and phrase-level representations that combine across sentences to reason about entities. The model outperforms multiple baselines and, when combined with information retrieval methods, rivals the best human players.",
"title": ""
},
{
"docid": "722b2d50bf854e002a0311f7511e433c",
"text": "The bat algorithm (BA) is a nature-inspired algorithm, which has recently been applied in many applications. BA can deal with both continuous optimization and discrete optimization problems. The literature has expanded significantly in the past few years, this paper provides a timely review of the latest developments. We also highlight some topics for further research.",
"title": ""
},
{
"docid": "cefcf529227d2d29780b09bb87b2c66c",
"text": "This paper presents a simple method o f trajectory generation of robot manipulators based on an optimal control problem formulation. It was found recently that the jerk, the third derivative of position, of the desired trajectory, adversely affects the efficiency of the control algorithms and therefore should be minimized. Assuming joint position, velocity and acceleration t o be constrained a cost criterion containing jerk is considered. Initially. the simple environment without obstacles and constrained by the physical l imitat ions o f the jo in t angles only i s examined. For practical reasons, the free execution t ime has been used t o handle the velocity and acceleration constraints instead of the complete bounded state variable formulation. The problem o f minimizing the jerk along an arbitrary Cartesian trajectory i s formulated and given analytical solution, making this method useful for real world environments containing obstacles.",
"title": ""
},
{
"docid": "de4ee63cd9bf19dff2c63e7bece833e1",
"text": "Big Data contains massive information, which are generating from heterogeneous, autonomous sources with distributed and anonymous platforms. Since, it raises extreme challenge to organizations to store and process these data. Conventional pathway of store and process is happening as collection of manual steps and it is consuming various resources. An automated real-time and online analytical process is the most cognitive solution. Therefore it needs state of the art approach to overcome barriers and concerns currently facing by the Big Data industry. In this paper we proposed a novel architecture to automate data analytics process using Nested Automatic Service Composition (NASC) and CRoss Industry Standard Platform for Data Mining (CRISPDM) as main based technologies of the solution. NASC is well defined scalable technology to automate multidisciplined problems domains. Since CRISP-DM also a well-known data science process which can be used as innovative accumulator of multi-dimensional data sets. CRISP-DM will be mapped with Big Data analytical process and NASC will automate the CRISP-DM process in an intelligent and innovative way.",
"title": ""
},
{
"docid": "24ade252fcc6bd5404484cb9ad5987a3",
"text": "The cornerstone of the IBM System/360 philosophy is that the architecture of a computer is basically independent of its physical implementation. Therefore, in System/360, different physical implementations have been made of the single architectural definition which is illustrated in Figure 1.",
"title": ""
},
{
"docid": "3a92798e81a03e5ef7fb18110e5da043",
"text": "BACKGROUND\nRespiratory failure is a serious complication that can adversely affect the hospital course and survival of multiply injured patients. Some studies have suggested that delayed surgical stabilization of spine fractures may increase the incidence of respiratory complications. However, the authors of these studies analyzed small sets of patients and did not assess the independent effects of multiple risk factors.\n\n\nMETHODS\nA retrospective cohort study was conducted at a regional level-I trauma center to identify risk factors for respiratory failure in patients with surgically treated thoracic and lumbar spine fractures. Demographic, diagnostic, and procedural variables were identified. The incidence of respiratory failure was determined in an adult respiratory distress syndrome registry maintained concurrently at the same institution. Univariate and multivariate analyses were used to determine independent risk factors for respiratory failure. An algorithm was formulated to predict respiratory failure.\n\n\nRESULTS\nRespiratory failure developed in 140 of the 1032 patients in the study cohort. Patients with respiratory failure were older; had a higher mean Injury Severity Score (ISS) and Charlson Comorbidity Index Score; had greater incidences of pneumothorax, pulmonary contusion, and thoracic level injury; had a lower mean Glasgow Coma Score (GCS); were more likely to have had a posterior surgical approach; and had a longer mean time from admission to surgical stabilization than the patients without respiratory failure (p < 0.05). Multivariate analysis identified five independent risk factors for respiratory failure: an age of more than thirty-five years, an ISS of > 25 points, a GCS of < or = 12 points, blunt chest injury, and surgical stabilization performed more than two days after admission. An algorithm was created to determine, on the basis of the number of preoperative predictors present, the relative risk of respiratory failure when surgery was delayed for more than two days.\n\n\nCONCLUSIONS\nIndependent risk factors for respiratory failure were identified in an analysis of a large cohort of patients who had undergone operative stabilization of thoracic and lumbar spine fractures. Early operative stabilization of these fractures, the only risk factor that can be controlled by the physician, may decrease the risk of respiratory failure in multiply injured patients.",
"title": ""
},
{
"docid": "c6005a99e6a60a4ee5f958521dcad4d3",
"text": "We document initial experiments with Canid, a freestanding, power-autonomous quadrupedal robot equipped with a parallel actuated elastic spine. Research into robotic bounding and galloping platforms holds scientific and engineering interest because it can both probe biological hypotheses regarding bounding and galloping mammals and also provide the engineering community with a new class of agile, efficient and rapidly-locomoting legged robots. We detail the design features of Canid that promote our goals of agile operation in a relatively cheap, conventionally prototyped, commercial off-the-shelf actuated platform. We introduce new measurement methodology aimed at capturing our robot’s “body energy” during real time operation as a means of quantifying its potential for agile behavior. Finally, we present joint motor, inertial and motion capture data taken from Canid’s initial leaps into highly energetic regimes exhibiting large accelerations that illustrate the use of this measure and suggest its future potential as a platform for developing efficient, stable, hence useful bounding gaits. For more information: Kod*Lab Disciplines Electrical and Computer Engineering | Engineering | Systems Engineering Comments BibTeX entry @article{canid_spie_2013, author = {Pusey, Jason L. and Duperret, Jeffrey M. and Haynes, G. Clark and Knopf, Ryan and Koditschek , Daniel E.}, title = {Free-Standing Leaping Experiments with a PowerAutonomous, Elastic-Spined Quadruped}, pages = {87410W-87410W-15}, year = {2013}, doi = {10.1117/ 12.2016073} } This work is supported by the National Science Foundation Graduate Research Fellowship under Grant Number DGE-0822, and by the Army Research Laboratory under Cooperative Agreement Number W911NF-10–2−0016. Copyright 2013 Society of Photo-Optical Instrumentation Engineers. Postprint version. This paper was (will be) published in Proceedings of the SPIE Defense, Security, and Sensing Conference, Unmanned Systems Technology XV (8741), and is made available as an electronic reprint with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/ese_papers/655 Free-Standing Leaping Experiments with a Power-Autonomous, Elastic-Spined Quadruped Jason L. Pusey a , Jeffrey M. Duperret b , G. Clark Haynes c , Ryan Knopf b , and Daniel E. Koditschek b a U.S. Army Research Laboratory, Aberdeen Proving Ground, MD, b University of Pennsylvania, Philadelphia, PA, c National Robotics Engineering Center, Carnegie Mellon University, Pittsburgh, PA",
"title": ""
},
{
"docid": "cbc2a96515b9f3917e40515a3829ee8d",
"text": "We present the accurate modeling and analysis, followed by experimental validation, of a 1024-element (64-by-16) antenna array. This fixed-beam array radiates linear polarization in Ka-band (19.7–20.2 GHz). It acts as a first step in the design and modeling of future antenna arrays for satcom-on-the-move applications. Accurate prediction of the behavior of such a large array is a challenging task since full-wave simulation of the entire structure cannot be considered. By taking advantage of existing formalisms on periodic arrays and by using appropriate methods to efficiently exploit such formulations, it is possible to accurately define the performances of all building blocks, from the feeding circuits to the radiating elements, over a frequency range. Such a detailed design also allows an accurate physical analysis. It has been successfully used to guarantee the measured performances. This paper is intended to detail different steps to antenna designers.",
"title": ""
},
{
"docid": "27bc95568467efccb3e6cc185e905e42",
"text": "Major studios and independent production firms (Indies) often have to select or “greenlight” a portfolio of scripts to turn into movies. Despite the huge financial risk at stake, there is currently no risk management tool they can use to aid their decisions, even though such a tool is sorely needed. In this paper, we developed a forecasting and risk management tool, based on movies scripts, to aid movie studios and production firms in their green-lighting decisions. The methodology developed can also assist outside investors if they have access to the scripts. Building upon and extending the previous literature, we extracted three levels of textual information (genre/content, bag-of-words, and semantics) from movie scripts. We then incorporate these textual variables as predictors, together with the contemplated production budget, into a BART-QL (Bayesian Additive Regression Tree for Quasi-Linear) model to obtain the posterior predictive distributions, rather than point forecasts, of the box office revenues for the corresponding movies. We demonstrate how the predictive distributions of box office revenues can potentially be used to help movie producers intelligently select their movie production portfolios based on their risk preferences, and we describe an illustrative analysis performed for an independent production firm.",
"title": ""
},
{
"docid": "fcd9a80d35a24c7222392c11d3376c72",
"text": "A dual-band coplanar waveguide (CPW)-fed hybrid antenna consisting of a 5.4 GHz high-band CPW-fed inductive slot antenna and a 2.4 GHz low-band bifurcated F-shaped monopole antenna is proposed and investigated experimentally. This antenna possesses an appealing characteristic that the CPW-fed inductive slot antenna reinforces and thus improves the radiation efficiency of the bifurcated monopole antenna. Moreover, due to field orthogonality, one band resonant frequency and return loss bandwidth of the proposed hybrid antenna allows almost independent optimization without noticeably affecting those of the other band.",
"title": ""
},
{
"docid": "5fcda05ef200cd326ecb9c2412cf50b3",
"text": "OBJECTIVE\nPalpable lymph nodes are common due to the reactive hyperplasia of lymphatic tissue mainly connected with local inflammatory process. Differential diagnosis of persistent nodular change on the neck is different in children, due to higher incidence of congenital abnormalities and infectious diseases and relative rarity of malignancies in that age group. The aim of our study was to analyse the most common causes of childhood cervical lymphadenopathy and determine of management guidelines on the basis of clinical examination and ultrasonographic evaluation.\n\n\nMATERIAL AND METHODS\nThe research covered 87 children with cervical lymphadenopathy. Age, gender and accompanying diseases of the patients were assessed. All the patients were diagnosed radiologically on the basis of ultrasonographic evaluation.\n\n\nRESULTS\nReactive inflammatory changes of bacterial origin were observed in 50 children (57.5%). Fever was the most common general symptom accompanying lymphadenopathy and was observed in 21 cases (24.1%). The ultrasonographic evaluation revealed oval-shaped lymph nodes with the domination of long axis in 78 patients (89.66%). The proper width of hilus and their proper vascularization were observed in 75 children (86.2%). Some additional clinical and laboratory tests were needed in the patients with abnormal sonographic image.\n\n\nCONCLUSIONS\nUltrasonographic imaging is extremely helpful in diagnostics, differentiation and following the treatment of childhood lymphadenopathy. Failure of regression after 4-6 weeks might be an indication for a diagnostic biopsy.",
"title": ""
},
{
"docid": "da7beedfca8e099bb560120fc5047399",
"text": "OBJECTIVE\nThis study aims to assess the relationship of late-night cell phone use with sleep duration and quality in a sample of Iranian adolescents.\n\n\nMETHODS\nThe study population consisted of 2400 adolescents, aged 12-18 years, living in Isfahan, Iran. Age, body mass index, sleep duration, cell phone use after 9p.m., and physical activity were documented. For sleep assessment, the Pittsburgh Sleep Quality Index questionnaire was used.\n\n\nRESULTS\nThe participation rate was 90.4% (n=2257 adolescents). The mean (SD) age of participants was 15.44 (1.55) years; 1270 participants reported to use cell phone after 9p.m. Overall, 56.1% of girls and 38.9% of boys reported poor quality sleep, respectively. Wake-up time was 8:17 a.m. (2.33), among late-night cell phone users and 8:03a.m. (2.11) among non-users. Most (52%) late-night cell phone users had poor sleep quality. Sedentary participants had higher sleep latency than their peers. Adjusted binary and multinomial logistic regression models showed that late-night cell users were 1.39 times more likely to have a poor sleep quality than non-users (p-value<0.001).\n\n\nCONCLUSION\nLate-night cell phone use by adolescents was associated with poorer sleep quality. Participants who were physically active had better sleep quality and quantity. As part of healthy lifestyle recommendations, avoidance of late-night cell phone use should be encouraged in adolescents.",
"title": ""
},
{
"docid": "809aed520d0023535fec644e81ddbb53",
"text": "This paper presents an efficient image denoising scheme by using principal component analysis (PCA) with local pixel grouping (LPG). For a better preservation of image local structures, a pixel and its nearest neighbors are modeled as a vector variable, whose training samples are selected from the local window by using block matching based LPG. Such an LPG procedure guarantees that only the sample blocks with similar contents are used in the local statistics calculation for PCA transform estimation, so that the image local features can be well preserved after coefficient shrinkage in the PCA domain to remove the noise. The LPG-PCA denoising procedure is iterated one more time to further improve the denoising performance, and the noise level is adaptively adjusted in the second stage. Experimental results on benchmark test images demonstrate that the LPG-PCA method achieves very competitive denoising performance, especially in image fine structure preservation, compared with state-of-the-art denoising algorithms. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "406fbdfff4f7abb505c0e238e08decca",
"text": "A computationally efficient method for detecting a chorus section in popular and rock music is presented. The method utilizes a distance matrix representation that is obtained by summing two separate distance matrices calculated using the mel-frequency cepstral coefficient and pitch chroma features. The benefit of computing two separate distance matrices is that different enhancement operations can be applied on each. An enhancement operation is found beneficial only for the chroma distance matrix. This is followed by detection of the off-diagonal segments of small distance from the distance matrix. From the detected segments, an initial chorus section is selected using a scoring mechanism utilizing several heuristics, and subjected to further processing. This further processing involves using image processing filters in a neighborhood of the distance matrix surrounding the initial chorus section. The final position and length of the chorus is selected based on the filtering results. On a database of 206 popular & rock music pieces an average F-measure of 86% is obtained. It takes about ten seconds to process a song with an average duration of three to four minutes on a Windows XP computer with a 2.8 GHz Intel Xeon processor.",
"title": ""
}
] |
scidocsrr
|
e03ae5068634e48ab64b2eb9316d2cc0
|
A Review on Human-Centered IoT-Connected Smart Labels for the Industry 4.0
|
[
{
"docid": "bdbd3d65c79e4f22d2e85ac4137ee67a",
"text": "With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.",
"title": ""
}
] |
[
{
"docid": "496e0a7bfd230f00bafefd6c1c8f29da",
"text": "Modern society depends on information technology in nearly every facet of human activity including, finance, transportation, education, government, and defense. Organizations are exposed to various and increasing kinds of risks, including information technology risks. Several standards, best practices, and frameworks have been created to help organizations manage these risks. The purpose of this research work is to highlight the challenges facing enterprises in their efforts to properly manage information security risks when adopting international standards and frameworks. To assist in selecting the best framework to use in risk management, the article presents an overview of the most popular and widely used standards and identifies selection criteria. It suggests an approach to proper implementation as well. A set of recommendations is put forward with further research opportunities on the subject. KeywordsInformation security; risk management; security frameworks; security standards; security management.",
"title": ""
},
{
"docid": "204d65de995e63bf92a91977c77646c3",
"text": "Magnetic Resonance Imaging (MRI) is widely used in routine clinical diagnosis and treatment. However, variations in MRI acquisition protocols result in different appearances of normal and diseased tissue in the images. Convolutional neural networks (CNNs), which have shown to be successful in many medical image analysis tasks, are typically sensitive to the variations in imaging protocols. Therefore, in many cases, networks trained on data acquired with one MRI protocol, do not perform satisfactorily on data acquired with different protocols. This limits the use of models trained with large annotated legacy datasets on a new dataset with a different domain which is often a recurring situation in clinical settings. In this study, we aim to answer the following central questions regarding domain adaptation in medical image analysis: Given a fitted legacy model, 1) How much data from the new domain is required for a decent adaptation of the original network?; and, 2) What portion of the pre-trained model parameters should be retrained given a certain number of the new domain training samples? To address these questions, we conducted extensive experiments in white matter hyperintensity segmentation task. We trained a CNN on legacy MR images of brain and evaluated the performance of the domain-adapted network on the same task with images from a different domain. We then compared the performance of the model to the surrogate scenarios where either the same trained network is used or a new network is trained from scratch on the new dataset. The domain-adapted network tuned only by two training examples achieved a Dice score of 0.63 substantially outperforming a similar network trained on the same set of examples from scratch. ⋆ Mohsen Ghafoorian and Alireza Mehrtash contributed equally to this work.",
"title": ""
},
{
"docid": "e96791f42b6c78e29a9e19610ff6baba",
"text": "Although the fourth industrial revolution is already in pro-gress and advances have been made in automating factories, completely automated facilities are still far in the future. Human work is still an important factor in many factories and warehouses, especially in the field of logistics. Manual processes are, therefore, often subject to optimization efforts. In order to aid these optimization efforts, methods like human activity recognition (HAR) became of increasing interest in industrial settings. In this work a novel deep neural network architecture for HAR is introduced. A convolutional neural network (CNN), which employs temporal convolutions, is applied to the sequential data of multiple intertial measurement units (IMUs). The network is designed to separately handle different sensor values and IMUs, joining the information step-by-step within the architecture. An evaluation is performed using data from the order picking process recorded in two different warehouses. The influence of different design choices in the network architecture, as well as pre- and post-processing, will be evaluated. Crucial steps for learning a good classification network for the task of HAR in a complex industrial setting will be shown. Ultimately, it can be shown that traditional approaches based on statistical features as well as recent CNN architectures are outperformed.",
"title": ""
},
{
"docid": "d848a684aeddd5447f17282fdd2efaf0",
"text": "..........................................................................................................iii ACKNOWLEDGMENTS.........................................................................................iv TABLE OF CONTENTS .........................................................................................vi LIST OF TABLES................................................................................................viii LIST OF FIGURES ................................................................................................ix",
"title": ""
},
{
"docid": "9872b7f43bae38c45bbb7b47ea06e132",
"text": "OBJECT\nGlomus jugulare tumors are rare tumors that commonly involve the middle ear, temporal bone, and lower cranial nerves. Resection, embolization, and radiation therapy have been the mainstays of treatment. Despite these therapies, tumor control can be difficult to achieve particularly without undo risk of patient morbidity or mortality. The authors examine the safety and efficacy of gamma knife surgery (GKS) for glomus jugulare tumors.\n\n\nMETHODS\nA retrospective review was undertaken of the results obtained in eight patients who underwent GKS for recurrent, residual, or unresectable glomus jugulare tumors. The median radiosurgical dose to the tumor margin was 15 Gy (range 12-18 Gy). The median clinical follow-up period was 28 months, and the median period for radiological follow up was 32 months. All eight patients demonstrated neurological stability or improvement. No cranial nerve palsies arose or deteriorated after GKS. In the seven patients in whom radiographic follow up was obtained, the tumor size decreased in four and remained stable in three.\n\n\nCONCLUSIONS\nGamma knife surgery would seem to afford effective local tumor control and preserves neurological function in patients with glomus jugulare tumors. If long-term results with GKS are equally efficacious, the role of stereotactic radiosurgery will expand.",
"title": ""
},
{
"docid": "de04d3598687b34b877d744956ca4bcd",
"text": "We investigate the reputational impact of financial fraud for outside directors based on a sample of firms facing shareholder class action lawsuits. Following a financial fraud lawsuit, outside directors do not face abnormal turnover on the board of the sued firm but experience a significant decline in other board seats held. The decline in other directorships is greater for more severe cases of fraud and when the outside director bears greater responsibility for monitoring fraud. Interlocked firms that share directors with the sued firm exhibit valuation declines at the lawsuit filing. When fraud-affiliated directors depart from boards of interlocked firms, these firms experience a significant increase in valuation.",
"title": ""
},
{
"docid": "8ca3e6b48e021b4fc7938af4ad5fd796",
"text": "The most common methods used in cyber attack detection are signature scan and anomaly detection. In the case of applying these approaches, a countermeasure against an upcoming cyber attack is made only if a signature of cyber attack or an anomaly is detected. That means cyber defense systems encounter cyber attacks with no preparation, and our study focuses on this problem. This time, we attempt to discover the useful social data for the prediction of cyber attack motivation and opportunity. For the prediction of cyber attack motivation, the news articles were used as the dataset. As a result, using Artificial Neural Networks and the core keywords extracted from the news articles directly correlated to a cyber attack or the news articles not correlated to cyber attack brought better precision/recall. For the prediction of cyber attack opportunity, the security vulnerability feeds were used as the dataset. The precision/recall of the prediction result was better when using the core keywords as the feature and Artificial Neural Networks as the prediction algorithm.",
"title": ""
},
{
"docid": "19a73e2e729fa115a89c64058eafc9ca",
"text": "This paper aims to present a framework for describing Customer Knowledge Management in online purchase process using two models from literature including consumer online purchase process and ECKM. Since CKM is a recent concept and little empirical research is available, we will first present the theories from which CKM derives. In the first stage we discuss about e-commerce trend and increasing importance of customer loyalty in today’s business environment. Then some related concepts about Knowledge Management, Customer Relationship Management and CKM are presented, in order to provide the reader with a better understanding and clear picture regarding CKM. Finally, providing models representing e-CKM and online purchasing process, we propose a comprehensive procedure to manage customer data and knowledge in e-commerce.",
"title": ""
},
{
"docid": "945c5c7cd9eb2046c1b164e64318e52f",
"text": "This thesis explores the design and application of artificial immune systems (AISs), problem-solving systems inspired by the human and other immune systems. AISs to date have largely been modelled on the biological adaptive immune system and have taken little inspiration from the innate immune system. The first part of this thesis examines the biological innate immune system, which controls the adaptive immune system. The importance of the innate immune system suggests that AISs should also incorporate models of the innate immune system as well as the adaptive immune system. This thesis presents and discusses a number of design principles for AISs which are modelled on both innate and adaptive immunity. These novel design principles provided a structured framework for developing AISs which incorporate innate and adaptive immune systems in general. These design principles are used to build a software system which allows such AISs to be implemented and explored. AISs, as well as being inspired by the biological immune system, are also built to solve problems. In this thesis, using the software system and design principles we have developed, we implement several novel AISs and apply them to the problem of detecting attacks on computer systems. These AISs monitor programs running on a computer and detect whether the program is behaving abnormally or being attacked. The development of these AISs shows in more detail how AISs built on the design principles can be instantiated. In particular, we show how the use of AISs which incorporate both innate and adaptive immune system mechanisms can be used to reduce the number of false alerts and improve the performance of current approaches.",
"title": ""
},
{
"docid": "7d507a0b754a8029d28216e795cb7286",
"text": "a Lake Michigan Field Station/Great Lakes Environmental Research Laboratory/NOAA, 1431 Beach St, Muskegon, MI 49441, USA b Great Lakes Environmental Research Laboratory/NOAA, 4840 S. State Rd., Ann Arbor, MI 48108, USA c School Forest Resources, Pennsylvania State University, 434 Forest Resources Building, University Park, PA 16802, USA d School of Natural Resources and Environment, University of Michigan, 440 Church St., Ann Arbor, MI 48109, USA",
"title": ""
},
{
"docid": "67411bb40671a8c1dafe328c379b0cd4",
"text": "Continuous EEG Monitoring is becoming a commonly used tool in assessing brain function in critically ill patients. However, there is no uniformly accepted nomenclature for EEG patterns frequently encountered in these patients such as periodic discharges, fluctuating rhythmic patterns, and combinations thereof. Similarly, there is no consensus on which patterns are associated with ongoing neuronal injury, which patterns need to be treated, or how aggressively to treat them. The first step in addressing these issues is to standardize terminology to allow multicenter research projects and to facilitate communication. To this end, we gathered a group of electroencephalographers with particular expertise or interest in this area in order to develop standardized terminology to be used primarily in the research setting. One of the main goals was to eliminate terms with clinical connotations, intended or not, such as “triphasic waves,” a term that implies a metabolic encephalopathy with no relationship to seizures for many clinicians. We also avoid the use of “ictal,” “interictal” and “epileptiform” for the equivocal patterns that are the primary focus of this report. A standardized method of quantifying interictal discharges is also included for the same reasons, with no attempt to alter the existing definition of epileptiform discharges (sharp waves and spikes [Noachtar et al 1999]). Finally, we suggest here a scheme for categorizing background EEG activity. The revisions proposed here were based on solicited feedback on the initial version of the Report [Hirsch LJ et al 2005], from within and outside this committee and society, including public presentations and discussion at many venues. Interand intraobserver agreement between expert EEG readers using the initial version of the terminology was found to be moderate for major terms but only slight to fair for modifiers. [Gerber PA et al 2008] A second assessment was performed on an interim version after extensive changes were introduced. This assessment showed significant improvement with an inter-rater agreement almost perfect for main terms (k = 0.87, 0.92) and substantial agreement for the modifiers of amplitude (93%) and frequency (80%) (Mani R, et al, 2012). Last, after official posting on the ACNS Website and solicitation of comment from ACNS members and others, additional minor additions and revisions were enacted. To standardize terminology of periodic and rhythmic EEG patterns in the critically ill in order to aid communication and future research involving such patterns. Our goal is to avoid terms with clinical connotations and to define terms thoroughly enough to maximize inter-rater reliability. Not included in this nomenclature: Unequivocal electrographic seizures including the following: Generalized spike-wave discharges at 3/s or faster; and clearly evolving discharges of any type that reach a frequency .4/s, whether focal or generalized. These would still be referred to as electrographic seizures. However, their prevalence, duration, frequency and relation to stimulation should be stated as described below when being used for research purposes. Corollary: The following patterns are included in this nomenclature and would not be termed electrographic seizures for research purposes (whether or not these patterns are determined to represent seizures clinically in a given patient): Generalized spike and wave patterns slower than 3/s; and evolving discharges that remain slower than or equal to 4/s. 
This does not imply that these patterns are not ictal, but simply that they may or may not be. Clinical correlation, including response to treatment, may be necessary to make this determination. N.B.: This terminology can be applied to all ages, but is not intended for use in neonates.",
"title": ""
},
{
"docid": "42e2aec24a5ab097b5fff3ec2fe0385d",
"text": "Online freelancing marketplaces have grown quickly in recent years. In theory, these sites offer workers the ability to earn money without the obligations and potential social biases associated with traditional employment frameworks. In this paper, we study whether two prominent online freelance marketplaces - TaskRabbit and Fiverr - are impacted by racial and gender bias. From these two platforms, we collect 13,500 worker profiles and gather information about workers' gender, race, customer reviews, ratings, and positions in search rankings. In both marketplaces, we find evidence of bias: we find that gender and race are significantly correlated with worker evaluations, which could harm the employment opportunities afforded to the workers. We hope that our study fuels more research on the presence and implications of discrimination in online environments.",
"title": ""
},
{
"docid": "ccc4994ba255084af5456925ba6c164e",
"text": "This letter proposes a novel, small, printed monopole antenna for ultrawideband (UWB) applications with dual band-notch function. By cutting an inverted fork-shaped slit in the ground plane, additional resonance is excited, and hence much wider impedance bandwidth can be produced. To generate dual band-notch characteristics, we use a coupled inverted U-ring strip in the radiating patch. The measured results reveal that the presented dual band-notch monopole antenna offers a wide bandwidth with two notched bands, covering all the 5.2/5.8-GHz WLAN, 3.5/5.5-GHz WiMAX, and 4-GHz C-bands. The proposed antenna has a small size of 12<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times\\,$</tex> </formula>18 mm<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{2}$</tex> </formula> or about <formula formulatype=\"inline\"><tex Notation=\"TeX\">$0.15 \\lambda \\times 0.25 \\lambda$</tex></formula> at 4.2 GHz (first resonance frequency), which has a size reduction of 28% with respect to the previous similar antenna. Simulated and measured results are presented to validate the usefulness of the proposed antenna structure UWB applications.",
"title": ""
},
{
"docid": "f8a8ac0f26667a1744f724a5df213734",
"text": "An important design decision in the construction of a simulator is how to enable users to access the data generated in each run of a simulation experiment. As the simulator executes, the samples of performance metrics that are generated beg to be exposed either in their raw state or after having undergone mathematical processing. Also of concern is the particular format this data assumes when externalized to mass storage, since it determines the ease of processing by other applications or interpretation by the user. In this paper, we present a framework for the ns-3 network simulator for capturing data from inside an experiment, subjecting it to mathematical transformations, and ultimately marshaling it into various output formats. The application of this functionality is illustrated and analyzed via a study of common use cases. Although the implementation of our approach is specific to ns-3, this design presents lessons transferrable to other platforms.",
"title": ""
},
{
"docid": "7b99361ec595958457819fd2c4c67473",
"text": "At present, touchscreens can differentiate multiple points of contact, but not who is touching the device. In this work, we consider how the electrical properties of humans and their attire can be used to support user differentiation on touchscreens. We propose a novel sensing approach based on Swept Frequency Capacitive Sensing, which measures the impedance of a user to the environment (i.e., ground) across a range of AC frequencies. Different people have different bone densities and muscle mass, wear different footwear, and so on. This, in turn, yields different impedance profiles, which allows for touch events, including multitouch gestures, to be attributed to a particular user. This has many interesting implications for interactive design. We describe and evaluate our sensing approach, demonstrating that the technique has considerable promise. We also discuss limitations, how these might be overcome, and next steps.",
"title": ""
},
{
"docid": "7c4c33097c12f55a08f8a7cc3634c5cb",
"text": "Pattern queries are widely used in complex event processing (CEP) systems. Existing pattern matching techniques, however, can provide only limited performance for expensive queries in real-world applications, which may involve Kleene closure patterns, flexible event selection strategies, and events with imprecise timestamps. To support these expensive queries with high performance, we begin our study by analyzing the complexity of pattern queries, with a focus on the fundamental understanding of which features make pattern queries more expressive and at the same time more computationally expensive. This analysis allows us to identify performance bottlenecks in processing those expensive queries, and provides key insights for us to develop a series of optimizations to mitigate those bottlenecks. Microbenchmark results show superior performance of our system for expensive pattern queries while most state-of-the-art systems suffer from poor performance. A thorough case study on Hadoop cluster monitoring further demonstrates the efficiency and effectiveness of our proposed techniques.",
"title": ""
},
{
"docid": "de0d7e467d3fca65ebe997df34504188",
"text": "An improved architecture for efficiently computing the sum of absolute differences (SAD) on FPGAs is proposed in this work. It is based on a configurable adder/subtractor implementation in which each adder input can be negated at runtime. The negation of both inputs at the same time is explicitly allowed and used to compute the sum of absolute values in a single adder stage. The architecture can be mapped to modern FPGAs from Xilinx and Altera. An analytic complexity model as well as synthesis experiments yield an average look-up table (LUT) reduction of 17.4% for an input word size of 8 bit compared to state-of-the-art. As the SAD computation is a resource demanding part in image processing applications, the proposed circuit can be used to replace the SAD core of many applications to enhance their efficiency.",
"title": ""
},
{
"docid": "e084557ddfafe910cfce5f823cb446ee",
"text": "Avoiding kernel vulnerabilities is critical to achieving security of many systems, because the kernel is often part of the trusted computing base. This paper evaluates the current state-of-the-art with respect to kernel protection techniques, by presenting two case studies of Linux kernel vulnerabilities. First, this paper presents data on 141 Linux kernel vulnerabilities discovered from January 2010 to March 2011, and second, this paper examines how well state-of-the-art techniques address these vulnerabilities. The main findings are that techniques often protect against certain exploits of a vulnerability but leave other exploits of the same vulnerability open, and that no effective techniques exist to handle semantic vulnerabilities---violations of high-level security invariants.",
"title": ""
},
{
"docid": "da088acea8b1d2dc68b238e671649f4f",
"text": "Water is a naturally circulating resource that is constantly recharged. Therefore, even though the stocks of water in natural and artificial reservoirs are helpful to increase the available water resources for human society, the flow of water should be the main focus in water resources assessments. The climate system puts an upper limit on the circulation rate of available renewable freshwater resources (RFWR). Although current global withdrawals are well below the upper limit, more than two billion people live in highly water-stressed areas because of the uneven distribution of RFWR in time and space. Climate change is expected to accelerate water cycles and thereby increase the available RFWR. This would slow down the increase of people living under water stress; however, changes in seasonal patterns and increasing probability of extreme events may offset this effect. Reducing current vulnerability will be the first step to prepare for such anticipated changes.",
"title": ""
}
] |
scidocsrr
|
f4a693734a9e3266f25a5e5a9b742941
|
A Fingerprint Orientation Model Based on 2D Fourier Expansion (FOMFE) and Its Application to Singular-Point Detection and Fingerprint Indexing
|
[
{
"docid": "7b09cc252d9c401280b2599e34429e95",
"text": "Fingerprint orientation is crucial for automatic fingerprint identification. However, recovery of orientation is still difficult especially in noisy region. A way to aid recovery of the orientation is to provide an orientation model. In this paper, an orientation model for the entire fingerprint orientation using high order phase portrait is suggested. Proper analysis of the orientation pattern at the singular point regions is provided. Then a low-order phase portrait near each of the singular point is added as constraint to the high-order phase portrait to provide accurate orientation modeling for the entire fingerprint image. The main advantage of the proposed approach is that the nonlinear model itself is able to model all type of fingerprint orientations completely. Experiments and visual analysis show the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "07179c396c820f72ba53dcbff1c5e25b",
"text": "In a fingerprint identification system, a person is identified only by his fingerprint. To accomplish this, a database is searched by matching all entries to the given fingerprint. However, the maximum size of the database is limited, since each match takes some amount of time and has a small probability of error. A solution to this problem is to reduce the number of fingerprints that have to be matched. This is achieved by extracting features from the fingerprints and first matching the fingerprints that have the smallest feature distance to the query fingerprint. Using this indexing method, modern systems are able to search databases up to a few hundred fingerprints. In this paper, three possible fingerprint indexing features are discussed: the registered directional field estimate, FingerCode and minutiae triplets. It is shown that indexing schemes that are based on these features, are able to search a database more effectively than a simple linear scan. Next, a new indexing scheme is constructed that is based on advanced methods of combining these features. It is shown that this scheme results in a considerably better performance than the schemes that are based on the individual features or on more naive methods of combining the features, thus allowing much larger fingerprint databases to be searched. Keywords—image processing, fingerprint image classification, database search, multiple classifiers.",
"title": ""
}
] |
[
{
"docid": "b101bbf15197b3506e9344a31fc68593",
"text": "Health—exploring complexity: an interdisciplinary systems approach HEC2016 28 August–2 September 2016, Munich, Germany Eva Grill • Martin Müller • Ulrich Mansmann Springer Science+Business Media Dordrecht 2016 Health is a complex process with potentially interacting components from the molecular to the societal and environmental level. Adequate research designs and data analysis methods are needed to improve our understanding of this complexity, to ultimately derive high quality evidence to inform patients, health professionals, and health policy decision makers. Also, effective patient-centred health care has to address the complexity of health, functioning, and disability, not only by implementing interventions, but also by using information technology that represents the complexity of health care to inform all actors. Given this background, we developed the concept of our conference HEC2016 as an interdisciplinary European event in beautiful Bavaria, in the city of München. Quite ironically this is the place, where William of Ockham, whose ideas of parsimony are the very opposite of complexity, accused of heresy, spent 17 years under the protection of the Bavarian King Ludwig IV. Furthermore, our local public health hero Max von Pettenkofer (1818–1901) contributed a lot to the basic systemic understanding of health, especially the relevance of a healthy environment. Under the joint theme of health as a complex system we joined the activities of five scientific disciplines: Medical Informatics, Medical Biometry, Bioinformatics, Epidemiology and Health Data Management. The mission behind this interdisciplinary effort was to serve as an important scientific forum for the exchange of new ideas and applications to strengthen health sciences on a national and international level. The analysis of health as a complex system opens needed perspectives on a challenging reality: filtering current hypotheses, resolving controversies, and tailoring interventions to the need of the individual within a health system environment. The conference encouraged the dialogue of the disciplines in order to advance our understanding of health and to decrease burden of disease. HEC2016 brought together the annual conferences of the German Association for Medical Informatics, Biometry and Epidemiology (GMDS), the German Society for Epidemiology (DGEpi), the International Epidemiological AssociationEuropean Region (IEA-EEF) and the European Federation for Medical Informatics Association (EFMI, MIE 2016). HEC2016 took place in München, Germany, in the main building of the Ludwig-Maximilians-Universität (LMU) from 28 August to 2 September 2016 under the auspices of the Institute for Medical Information Processing, Biometry and Epidemiology of LMU. The conference received 832 contributions for oral and poster presentation (Table 1). Fourteen percent of them were from outside Europe with the largest group of 10 % from Asia (Table 2). Scientific program committees and reviewers selected 408 submissions as oral contributions and 303 for poster presentations. The program was surrounded by twelve tutorials held by international renowned scientists and covered a broad spectrum from innovative biostatistical and epidemiological methods to tutorials in application of innovative software, scientific writing and data protection issues. Over 50 panel discussions and workshops allowed in-depth exchange of ideas on specific topics and underscored the interactive nature of HEC2016. 
A special focus of HEC2016 was on the promotion of young scientists from all disciplines whose participation was supported by numerous travel grants. We would like to express our deepest gratitude to all the colleagues who supported us as speakers, committee members and reviewers, lent us a hand before, during and after the conference, gave critical but friendly comments at all stages of the preparations, supported us by providing coffee, audience or Butterbrezen, and specifically to those who submitted contributions to the conference and attended the conference and its many tutorials, lectures and sessions. We extend our gratitude to the Deutsche Forschungsgemeinschaft for generous financial support (grant no. GR 3608/4-1). Last not least we would like to thank our families who allowed us to spend most of our weekends with organizing this conference, to William of Ockham for lending us his razor (from time to time) and to Max von Pettenkofer for guidance. Eva Grill, Martin Müller, Ulrich Mansmann, for the local organizing committee (Tables 1, 2)",
"title": ""
},
{
"docid": "7e02da9e8587435716db99396c0fbbc7",
"text": "To examine thrombus formation in a living mouse, new technologies involving intravital videomicroscopy have been applied to the analysis of vascular windows to directly visualize arterioles and venules. After vessel wall injury in the microcirculation, thrombus development can be imaged in real time. These systems have been used to explore the role of platelets, blood coagulation proteins, endothelium, and the vessel wall during thrombus formation. The study of biochemistry and cell biology in a living animal offers new understanding of physiology and pathology in complex biologic systems.",
"title": ""
},
{
"docid": "e0d5bc46d57e6e0be0eef59a34391553",
"text": "In this paper, the novel methods and problems of Concave Hull are given through the scattered points set. The — Concave Hull algorithms, which simply connected region, are proposed based on Graham's scanning of Convex Hull. By the — Hull definition, the Convex Hull is the case when. The — Concave Hull algorithms and Graham's scanning are equivalent when. The experiment of the fault plane extraction of 3D seismic showed that the algorithm is an effective method for the extracting the fault plane problems under the appropriate conditions.",
"title": ""
},
{
"docid": "126e75d1873d094db5a67d6de425425a",
"text": "Exosomes are small extracellular vesicles that are thought to participate in intercellular communication. Recent work from our laboratory suggests that, in normal and cystic liver, exosome-like vesicles accumulate in the lumen of intrahepatic bile ducts, presumably interacting with cholangiocyte cilia. However, direct evidence for exosome-ciliary interaction is limited and the physiological relevance of such interaction remains unknown. Thus, in this study, we tested the hypothesis that biliary exosomes are involved in intercellular communication by interacting with cholangiocyte cilia and inducing intracellular signaling and functional responses. Exosomes were isolated from rat bile by differential ultracentrifugation and characterized by scanning, transmission, and immunoelectron microscopy. The exosome-ciliary interaction and its effects on ERK1/2 signaling, expression of the microRNA, miR-15A, and cholangiocyte proliferation were studied on ciliated and deciliated cultured normal rat cholangiocytes. Our results show that bile contains vesicles identified as exosomes by their size, characteristic \"saucer-shaped\" morphology, and specific markers, CD63 and Tsg101. When NRCs were exposed to isolated biliary exosomes, the exosomes attached to cilia, inducing a decrease of the phosphorylated-to-total ERK1/2 ratio, an increase of miR-15A expression, and a decrease of cholangiocyte proliferation. All these effects of biliary exosomes were abolished by the pharmacological removal of cholangiocyte cilia. Our findings suggest that bile contains exosomes functioning as signaling nanovesicles and influencing intracellular regulatory mechanisms and cholangiocyte proliferation through interaction with primary cilia.",
"title": ""
},
{
"docid": "c0b96de9ee7ab0295d2162338ff4c80f",
"text": "PURPOSE\nTo uncover the genetic events leading to transformation of pediatric low-grade glioma (PLGG) to secondary high-grade glioma (sHGG).\n\n\nPATIENTS AND METHODS\nWe retrospectively identified patients with sHGG from a population-based cohort of 886 patients with PLGG with long clinical follow-up. Exome sequencing and array CGH were performed on available samples followed by detailed genetic analysis of the entire sHGG cohort. Clinical and outcome data of genetically distinct subgroups were obtained.\n\n\nRESULTS\nsHGG was observed in 2.9% of PLGGs (26 of 886 patients). Patients with sHGG had a high frequency of nonsilent somatic mutations compared with patients with primary pediatric high-grade glioma (HGG; median, 25 mutations per exome; P = .0042). Alterations in chromatin-modifying genes and telomere-maintenance pathways were commonly observed, whereas no sHGG harbored the BRAF-KIAA1549 fusion. The most recurrent alterations were BRAF V600E and CDKN2A deletion in 39% and 57% of sHGGs, respectively. Importantly, all BRAF V600E and 80% of CDKN2A alterations could be traced back to their PLGG counterparts. BRAF V600E distinguished sHGG from primary HGG (P = .0023), whereas BRAF and CDKN2A alterations were less commonly observed in PLGG that did not transform (P < .001 and P < .001 respectively). PLGGs with BRAF mutations had longer latency to transformation than wild-type PLGG (median, 6.65 years [range, 3.5 to 20.3 years] v 1.59 years [range, 0.32 to 15.9 years], respectively; P = .0389). Furthermore, 5-year overall survival was 75% ± 15% and 29% ± 12% for children with BRAF mutant and wild-type tumors, respectively (P = .024).\n\n\nCONCLUSION\nBRAF V600E mutations and CDKN2A deletions constitute a clinically distinct subtype of sHGG. The prolonged course to transformation for BRAF V600E PLGGs provides an opportunity for surgical interventions, surveillance, and targeted therapies to mitigate the outcome of sHGG.",
"title": ""
},
{
"docid": "0efecea75d3821a5710f3de91986f119",
"text": "Atherosclerosis is a chronic inflammatory disease, and is the primary cause of heart disease and stroke in Western countries. Derivatives of cannabinoids such as delta-9-tetrahydrocannabinol (THC) modulate immune functions and therefore have potential for the treatment of inflammatory diseases. We investigated the effects of THC in a murine model of established atherosclerosis. Oral administration of THC (1 mg kg-1 per day) resulted in significant inhibition of disease progression. This effective dose is lower than the dose usually associated with psychotropic effects of THC. Furthermore, we detected the CB2 receptor (the main cannabinoid receptor expressed on immune cells) in both human and mouse atherosclerotic plaques. Lymphoid cells isolated from THC-treated mice showed diminished proliferation capacity and decreased interferon-γ secretion. Macrophage chemotaxis, which is a crucial step for the development of atherosclerosis, was also inhibited in vitro by THC. All these effects were completely blocked by a specific CB2 receptor antagonist. Our data demonstrate that oral treatment with a low dose of THC inhibits atherosclerosis progression in the apolipoprotein E knockout mouse model, through pleiotropic immunomodulatory effects on lymphoid and myeloid cells. Thus, THC or cannabinoids with activity at the CB2 receptor may be valuable targets for treating atherosclerosis.",
"title": ""
},
{
"docid": "b092297ca953a4c8080e500f0dff8653",
"text": "[1] High-pressure metamorphic rocks provide evidence that in subduction zones material can return from depths of more than 100 km to the surface. The pressure-temperature paths recorded by these rocks are variable, mostly revealing cooling during decompression, while the time constraints are generally narrow and indicate that the exhumation rates can be on the order of plate velocities. As such, subduction cannot be considered as a single pass process; instead, return flow of a considerable portion of crustal and upper mantle material must be accounted for. Our numerical simulations provide insight into the self-organizing large-scale flow patterns and temperature field of subduction zones, primarily controlled by rheology, phase transformations, fluid budget, and heat transfer, which are all interrelated. They show the development of a subduction channel with forced return flow of low-viscosity material and progressive widening by hydration of the mantle wedge. The large-scale structures and the array of pressure-temperature paths obtained by these simulations favorably compare to the record of natural rocks and the structure of high-pressure metamorphic areas.",
"title": ""
},
{
"docid": "a8a51268e3e4dc3b8dd5102dafcb8f36",
"text": "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node’s local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.",
"title": ""
},
{
"docid": "ca6d23374e0caa125a91618164284b9a",
"text": "We propose a spectral clustering algorithm for the multi-view setting where we have access to multiple views of the data, each of which can be independently used for clustering. Our spectral clustering algorithm has a flavor of co-training, which is already a widely used idea in semi-supervised learning. We work on the assumption that the true underlying clustering would assign a point to the same cluster irrespective of the view. Hence, we constrain our approach to only search for the clusterings that agree across the views. Our algorithm does not have any hyperparameters to set, which is a major advantage in unsupervised learning. We empirically compare with a number of baseline methods on synthetic and real-world datasets to show the efficacy of the proposed algorithm.",
"title": ""
},
{
"docid": "5a3f65509a2acd678563cd495fe287de",
"text": "Auditory menus have the potential to make devices that use visual menus accessible to a wide range of users. Visually impaired users could especially benefit from the auditory feedback received during menu navigation. However, auditory menus are a relatively new concept, and there are very few guidelines that describe how to design them. This paper details how visual menu concepts may be applied to auditory menus in order to help develop design guidelines. Specifically, this set of studies examined possible ways of designing an auditory scrollbar for an auditory menu. The following different auditory scrollbar designs were evaluated: single-tone, double-tone, alphabetical grouping, and proportional grouping. Three different evaluations were conducted to determine the best design. The first two evaluations were conducted with sighted users, and the last evaluation was conducted with visually impaired users. The results suggest that pitch polarity does not matter, and proportional grouping is the best of the auditory scrollbar designs evaluated here.",
"title": ""
},
{
"docid": "531993c9b38ebf64a720864a0f8da807",
"text": "The advancement of wireless networks and mobile computing necessitates more advanced applications and services to be built with context-awareness enabled and adaptability to their changing contexts. Today, building context-aware services is a complex task due to the lack of an adequate infrastructure support in pervasive computing environments. In this article, we propose a ServiceOriented Context-Aware Middleware (SOCAM) architecture for the building and rapid prototyping of context-aware services. It provides efficient support for acquiring, discovering, interpreting and accessing various contexts to build context-aware services. We also propose a formal context model based on ontology using Web Ontology Language to address issues including semantic representation, context reasoning, context classification and dependency. We describe our context model and the middleware architecture, and present a performance study for our prototype in a smart home environment. q 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "22258617dbed9cf68951ea5529ee11e7",
"text": "Word embeddings are now a standard technique for inducing meaning representations for words. For getting good representations, it is important to take into account different senses of a word. In this paper, we propose a mixture model for learning multi-sense word embeddings. Our model generalizes the previous works in that it allows to induce different weights of different senses of a word. The experimental results show that our model outperforms previous models on standard evaluation tasks.",
"title": ""
},
{
"docid": "2e7d42b44affb9fa1c12833ea8b00a96",
"text": "The objective of this work is human pose estimation in videos, where multiple frames are available. We investigate a ConvNet architecture that is able to benefit from temporal context by combining information across the multiple frames using optical flow. To this end we propose a network architecture with the following novelties: (i) a deeper network than previously investigated for regressing heatmaps, (ii) spatial fusion layers that learn an implicit spatial model, (iii) optical flow is used to align heatmap predictions from neighbouring frames, and (iv) a final parametric pooling layer which learns to combine the aligned heatmaps into a pooled confidence map. We show that this architecture outperforms a number of others, including one that uses optical flow solely at the input layers, one that regresses joint coordinates directly, and one that predicts heatmaps without spatial fusion. The new architecture outperforms the state of the art by a large margin on three video pose estimation datasets, including the very challenging Poses in the Wild dataset, and outperforms other deep methods that don't use a graphical model on the single-image FLIC benchmark (and also [5, 35] in the high precision region).",
"title": ""
},
{
"docid": "d32fdc6d5dd535079b93b2695ca917d5",
"text": "We present a discrete spectral framework for the sparse or cardinality-constrained solution of a generalized Rayleigh quotient. This NP-hard combinatorial optimization problem is central to supervised learning tasks such as sparse LDA, feature selection and relevance ranking for classification. We derive a new generalized form of the Inclusion Principle for variational eigenvalue bounds, leading to exact and optimal sparse linear discriminants using branch-and-bound search. An efficient greedy (approximate) technique is also presented. The generalization performance of our sparse LDA algorithms is demonstrated with real-world UCI ML benchmarks and compared to a leading SVM-based gene selection algorithm for cancer classification.",
"title": ""
},
{
"docid": "ccd7e49646f1ef1d31f033f84c63c6e6",
"text": "Language modeling is a prototypical unsupervised task of natural language processing (NLP). It has triggered the developments of essential bricks of models used in speech recognition, translation or summarization. More recently, language modeling has been shown to give a sensible loss function for learning high-quality unsupervised representations in tasks like text classification (Howard & Ruder, 2018), sentiment detection (Radford et al., 2017) or word vector learning (Peters et al., 2018) and there is thus a revived interest in developing better language models. More generally, improvement in sequential prediction models are believed to be beneficial for a wide range of applications like model-based planning or reinforcement learning whose models have to encode some form of memory.",
"title": ""
},
{
"docid": "394854761e27aa7baa6fa2eea60f347d",
"text": "Our goal is to complement an entity ranking with human-readable explanations of how those retrieved entities are connected to the information need. Relation extraction technology should aid in finding such support passages, especially in combination with entities and query terms. This work explores how the current state of the art in unsupervised relation extraction (OpenIE) contributes to a solution for the task, assessing potential, limitations, and avenues for further investigation.",
"title": ""
},
{
"docid": "9aad59aeeb07a390062314fbb1c33d73",
"text": "An 8b 1.2 GS/s single-channel Successive Approximation Register (SAR) ADC is implemented in 32 nm CMOS, achieving 39.3 dB SNDR and a Figure-of-Merit (FoM) of 34 fJ per conversion step. High-speed operation is achieved by converting each sample with two alternate comparators clocked asynchronously and a redundant capacitive DAC with constant common mode to improve the accuracy of the comparator. A low-power, clocked capacitive reference buffer is used, and fractional reference voltages are provided to reduce the number of unit capacitors in the capacitive DAC (CDAC). The ADC stacks the CDAC with the reference capacitor to reduce the area and enhance the settling speed. Background calibration of comparator offset is implemented. The ADC consumes 3.1 mW from a 1 V supply and occupies 0.0015 mm2.",
"title": ""
},
{
"docid": "e8eab2f5481f10201bc82b7a606c1540",
"text": "This survey covers the historical development and current state of the art in image understanding for iris biometrics. Most research publications can be categorized as making their primary contribution to one of the four major modules in iris biometrics: image acquisition, iris segmentation, texture analysis and matching of texture representations. Other important research includes experimental evaluations, image databases, applications and systems, and medical conditions that may affect the iris. We also suggest a short list of recommended readings for someone new to the field to quickly grasp the big picture of iris biometrics.",
"title": ""
},
{
"docid": "b6637d1367e47a550f5a5b29b7224be9",
"text": "Ovarian cancer is the most lethal gynecologic malignancy among women worldwide and is presumed to result from the presence of ovarian cancer stem cells. To overcome the limitation of current anticancer agents, another anticancer strategy is necessary to effectively target cancer stem cells in ovarian cancer. In many types of malignancies, including ovarian cancer, metformin, one of the most popular antidiabetic drugs, has been demonstrated to exhibit chemopreventive and anticancer efficacy with respect to incidence and overall survival rates. Thus, the metabolic reprogramming of cancer and cancer stem cells driven by genetic alterations during carcinogenesis and cancer progression could be therapeutically targeted. In this review, the potential efficacy and anticancer mechanisms of metformin against ovarian cancer stem cells will be discussed.",
"title": ""
},
{
"docid": "deb11d49329ec8b29ab86be090e8aded",
"text": "The relationships among five aspects of academic procrastination--behavioral delay, personal upset about the delay, task aversiveness, task capability, and the desire to reduce behavioral delay--were investigated in 10th-grade Israeli students (N = 195). Upset about delay was weakly related to delay itself, and--unlike delay--was strongly related to perceived capability to perform academic tasks and to the desire to change delaying behavior. Students delayed more on academic tasks labeled unpleasant than pleasant, were neutral in between, and were correspondingly more upset about the former than the latter. They more frequently acknowledged reasons for academic procrastination that were less threatening to their self-image (e.g., problems in time management) than reasons that were more threatening (e.g., lack of ability). Interest in reducing delay is related more to self-perceived ability to handle tasks than to time spent procrastinating or reasons given for procrastinating.",
"title": ""
}
] |
scidocsrr
|
eb8e5e7d48d36403be603fdc2d5c0cbd
|
Beliefs about emotional residue: the idea that emotions leave a trace in the physical environment.
|
[
{
"docid": "83fa1f0948dd4beb95ad4b09f2dc7259",
"text": "It is to the highest degree probable that the subject['s] ... general attitude of mind is that of ready complacency and cheerful willingness to assist the investigator in every possible way by reporting to him those very things which he is most eager to find, and that the very questions of the experimenter ... suggest the shade of reply expected . . . . Indeed . it seems too often as if the subject were now regarded as a stupid automaton .... A. H. PIERCE, 1908 3",
"title": ""
}
] |
[
{
"docid": "cc086da5b3eb84e5294a14b09cdfae63",
"text": "In high-performance microprocessor cores, the on-die supply voltage seen by the transistors is non-ideal and exhibits significant fluctuations. These supply fluctuations are caused by sudden changes in the current consumed by the microprocessor in response to variations in workloads. This non-ideal supply can cause performance degradation or functional failures. Therefore, a significant amount of margin (10-15%) needs to be added to the ideal voltage (if there were no AC voltage variations) to ensure that the processor always executes correctly at the committed voltage-frequency points. This excess voltage wastes power proportional to the square of the voltage increase.",
"title": ""
},
{
"docid": "2377b7926cebeee93a92eb03e71e77d2",
"text": "Electronic commerce has enabled a number of online pay-for-answer services. However, despite commercial interest, we still lack a comprehensive understanding of how financial incentives support question asking and answering. Using 800 questions randomly selected from a pay-for-answer site, along with site usage statistics, we examined what factors impact askers' decisions to pay. We also explored how financial rewards affect answers, and if question pricing can help organize Q&A exchanges for archival purposes. We found that askers' decisions are two-part--whether or not to pay and how much to pay. Askers are more likely to pay when requesting facts and will pay more when questions are more difficult. On the answer side, our results support prior findings that paying more may elicit a higher number of answers and answers that are longer, but may not elicit higher quality answers (as rated by the askers). Finally, we present evidence that questions with higher rewards have higher archival value, which suggests that pricing can be used to support archival use.",
"title": ""
},
{
"docid": "3906637b2c1df46a4eaa8b3e762a2c68",
"text": "In this paper, we investigate factors and issues related to human locomotion behavior and proxemics in the presence of a real or virtual human in augmented reality (AR). First, we discuss a unique issue with current-state optical see-through head-mounted displays, namely the mismatch between a small augmented visual field and a large unaugmented periphery, and its potential impact on locomotion behavior in close proximity of virtual content. We discuss a potential simple solution based on restricting the field of view to the central region, and we present the results of a controlled human-subject study. The study results show objective benefits for this approach in producing behaviors that more closely match those that occur when seeing a real human, but also some drawbacks in overall acceptance of the restricted field of view. Second, we discuss the limited multimodal feedback provided by virtual humans in AR, present a potential improvement based on vibrotactile feedback induced via the floor to compensate for the limited augmented visual field, and report results showing that benefits of such vibrations are less visible in objective locomotion behavior than in subjective estimates of co-presence. Third, we investigate and document significant differences in the effects that real and virtual humans have on locomotion behavior in AR with respect to clearance distances, walking speed, and head motions. We discuss potential explanations for these effects related to social expectations, and analyze effects of different types of behaviors including idle standing, jumping, and walking that such real or virtual humans may exhibit in the presence of an observer.",
"title": ""
},
{
"docid": "e146526fbd2561d1dac33ab82470efae",
"text": "Using daily returns of the S&P500 stocks from 2001 to 2011, we perform a backtesting study of the portfolio optimization strategy based on the extreme risk index (ERI). This method uses multivariate extreme value theory to minimize the probability of large portfolio losses. With more than 400 stocks to choose from, our study applies extreme value techniques in portfolio management on a large scale. We compare the performance of this strategy with the Markowitz approach and investigate how the ERI method can be applied most effectively. Our results show that the annualized return of the ERI strategy is particularly high for assets with heavy tails. The comparison also includes maximal drawdown, transaction costs, portfolio concentration, and asset diversity in the portfolio. In addition to that we study the impact of an alternative tail index estimator.",
"title": ""
},
{
"docid": "06ca9b3cdeeae59e67d25235ee410f73",
"text": "Since many years ago, the scientific community is concerned about how to increase the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data that is being generated every day by remote sensors raises more challenges to be overcome. In this work, a tool within the scope of InterIMAGE Cloud Platform (ICP), which is an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA’s machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using a SVM classifier on data sets of different sizes for different cluster configurations demonstrates the potential of the tool, as well as aspects that affect its performance. * Corresponding author",
"title": ""
},
{
"docid": "8383cd262477e2b80c57742229c9dd64",
"text": "Pie charts and their variants are prevalent in business settings and many other uses, even if they are not popular with the academic community. In a recent study, we found that contrary to general belief, there is no clear evidence that these charts are read based on the central angle. Instead, area and arc length appear to be at least equally important. In this paper, we build on that study to test several pie chart variations that are popular in information graphics: exploded pie chart, pie with larger slice, elliptical pie, and square pie (in addition to a regular pie chart used as the baseline). We find that even variants that do not distort central angle cause greater error than regular pie charts. Charts that distort the shape show the highest error. Many of our predictions based on the previous study’s results are borne out by this study’s findings.",
"title": ""
},
{
"docid": "6906f5983de48395b043b947b0574d8e",
"text": "As information technology and the popularity of Internet technology and in-depth applications, e-commerce is at unprecedented pace. People become more and more the focus of attention. At present, relatively fast development of e-commerce activities are online sales, online promotions, and online services. Globalization of electronic commerce as the development of enterprises provided many opportunities, but in developing country’s electricity business is still in the initial stage of development, how to improve e-commerce environment of consumer satisfaction, consumer loyalty and thus, related to the electron the performance of business enterprises. Therefore, to the upsurge in e-commerce for more benefits, for many enterprises, there is still need for careful analysis of the business environment electricity consumer behavior, understanding the factors that affect their consumption and thus the basis of network marketing the characteristics of the network setup customer satisfaction evaluation index system, then the theory based on customer satisfaction, on this basis to take corresponding countermeasures, the development of effective and reasonable marketing strategy, E-commerce can improve business performance, and promote the sound development of their self. At present, domestic and international network of scholars on consumer psychology, motivation and behavior have more exposition, however, how in ecommerce environment impact factors of customer satisfaction and how to improve ecommerce customer satisfaction studies are not many see. This article is from the analysis the impact of e-commerce network environment factors in consumer satisfaction.",
"title": ""
},
{
"docid": "02c00d998952d935ee694922953c78d1",
"text": "OBJECTIVE\nEffect of peppermint on exercise performance was previously investigated but equivocal findings exist. This study aimed to investigate the effects of peppermint ingestion on the physiological parameters and exercise performance after 5 min and 1 h.\n\n\nMATERIALS AND METHODS\nThirty healthy male university students were randomly divided into experimental (n=15) and control (n=15) groups. Maximum isometric grip force, vertical and long jumps, spirometric parameters, visual and audio reaction times, blood pressure, heart rate, and breath rate were recorded three times: before, five minutes, and one hour after single dose oral administration of peppermint essential oil (50 µl). Data were analyzed using repeated measures ANOVA.\n\n\nRESULTS\nOur results revealed significant improvement in all of the variables after oral administration of peppermint essential oil. Experimental group compared with control group showed an incremental and a significant increase in the grip force (36.1%), standing vertical jump (7.0%), and standing long jump (6.4%). Data obtained from the experimental group after five minutes exhibited a significant increase in the forced vital capacity in first second (FVC1)(35.1%), peak inspiratory flow rate (PIF) (66.4%), and peak expiratory flow rate (PEF) (65.1%), whereas after one hour, only PIF shown a significant increase as compare with the baseline and control group. At both times, visual and audio reaction times were significantly decreased. Physiological parameters were also significantly improved after five minutes. A considerable enhancement in the grip force, spiromery, and other parameters were the important findings of this study. Conclusion : An improvement in the spirometric measurements (FVC1, PEF, and PIF) might be due to the peppermint effects on the bronchial smooth muscle tonicity with or without affecting the lung surfactant. Yet, no scientific evidence exists regarding isometric force enhancement in this novel study.",
"title": ""
},
{
"docid": "a288a610a6cd4ff32b3fff4e2124aee0",
"text": "According to the survey done by IBM business consulting services in 2006, global CEOs stated that business model innovation will have a greater impact on operating margin growth, than product or service innovation. We also noticed that some enterprises in China's real estate industry have improved their business models for sustainable competitive advantage and surplus profit in recently years. Based on the case studies of Shenzhen Vanke, as well as literature review, a framework for business model innovation has been developed. The framework provides an integrated means of making sense of new business model. These include critical dimensions of new customer value propositions, technological innovation, collaboration of the business infrastructure and the economic feasibility of a new business model.",
"title": ""
},
{
"docid": "224bacc72ba9785d158f506eea68e4c9",
"text": "A model of commumcations protocols based on finite-state machines is investigated. The problem addressed is how to ensure certain generally desirable properties, which make protocols \"wellformed,\" that is, specify a response to those and only those events that can actually occur. It is determined to what extent the problem is solvable, and one approach to solving it ts described. Categories and SubJect Descriptors' C 2 2 [Computer-Conununication Networks]: Network Protocols-protocol verification; F 1 1 [Computation by Abstract Devices] Models of Computation--automata; G.2.2 [Discrete Mathematics] Graph Theory--graph algoruhms; trees General Terms: Reliability, Verification Additional",
"title": ""
},
{
"docid": "e062d88651a8bdc637ecf57b4cbb1b2b",
"text": "Wireless Underground Sensor Networks (WUSNs) consist of wirelessly connected underground sensor nodes that communicate untethered through soil. WUSNs have the potential to impact a wide variety of novel applications including intelligent irrigation, environment monitoring, border patrol, and assisted navigation. Although its deployment is mainly based on underground sensor nodes, a WUSN still requires aboveground devices for data retrieval, management, and relay functionalities. Therefore, the characterization of the bi-directional communication between a buried node and an aboveground device is essential for the realization of WUSNs. In this work, empirical evaluations of underground-to- aboveground (UG2AG) and aboveground-to-underground (AG2UG) communication are presented. More specifically, testbed experiments have been conducted with commodity sensor motes in a real-life agricultural field. The results highlight the asymmetry between UG2AG and AG2UG communication with distinct behaviors for different burial depths. To combat the adverse effects of the change in wavelength in soil, an ultra wideband antenna scheme is deployed, which increases the communication range by more than 350% compared to the original antennas. The results also reveal that a 21% increase in the soil moisture decreases the communication range by more than 70%. To the best of our knowledge, this is the first empirical study that highlights the effects of the antenna design, burial depth, and soil moisture on both UG2AG and AG2UG communication performance. These results have a significant impact on the development of multi-hop networking protocols for WUSNs.",
"title": ""
},
{
"docid": "c8e446ab0dbdaf910b5fb98f672a35dc",
"text": "MinHash and SimHash are the two widely adopted Locality Sensitive Hashing (LSH) algorithms for large-scale data processing applications. Deciding which LSH to use for a particular problem at hand is an important question, which has no clear answer in the existing literature. In this study, we provide a theoretical answer (validated by experiments) that MinHash virtually always outperforms SimHash when the data are binary, as common in practice such as search. The collision probability of MinHash is a function of resemblance similarity (R), while the collision probability of SimHash is a function of cosine similarity (S). To provide a common basis for comparison, we evaluate retrieval results in terms of S for both MinHash and SimHash. This evaluation is valid as we can prove that MinHash is a valid LSH with respect to S, by using a general inequality S ≤ R ≤ S 2−S . Our worst case analysis can show that MinHash significantly outperforms SimHash in high similarity region. Interestingly, our intensive experiments reveal that MinHash is also substantially better than SimHash even in datasets where most of the data points are not too similar to each other. This is partly because, in practical data, often R ≥ S z−S holds where z is only slightly larger than 2 (e.g., z ≤ 2.1). Our restricted worst case analysis by assuming S z−S ≤ R ≤ S 2−S shows that MinHash indeed significantly outperforms SimHash even in low similarity region. We believe the results in this paper will provide valuable guidelines for search in practice, especially when the data are sparse. Appearing in Proceedings of the 17 International Conference on Artificial Intelligence and Statistics (AISTATS) 2014, Reykjavik, Iceland. JMLR: W&CP volume 33. Copyright 2014 by the authors.",
"title": ""
},
{
"docid": "0ff702ed9fed0393e16e120f8a704530",
"text": "Location estimation is significant in mobile and ubiquitous computing systems. The complexity and smaller scale of the indoor environment impose a great impact on location estimation. The key of location estimation lies in the representation and fusion of uncertain information from multiple sources. The improvement of location estimation is a complicated and comprehensive issue. A lot of research has been done to address this issue. However, existing research typically focuses on certain aspects of the problem and specific methods. This paper reviews mainstream schemes on improving indoor location estimation from multiple levels and perspectives by combining existing works and our own working experiences. Initially, we analyze the error sources of common indoor localization techniques and provide a multilayered conceptual framework of improvement schemes for location estimation. This is followed by a discussion of probabilistic methods for location estimation, including Bayes filters, Kalman filters, extended Kalman filters, sigma-point Kalman filters, particle filters, and hiddenMarkov models.Then, we investigate the hybrid localization methods, including multimodal fingerprinting, triangulation fusing multiple measurements, combination of wireless positioning with pedestrian dead reckoning (PDR), and cooperative localization. Next, we focus on the location determination approaches that fuse spatial contexts, namely, map matching, landmark fusion, and spatial model-aided methods. Finally, we present the directions for future research.",
"title": ""
},
{
"docid": "a5cd7d46dc74d15344e2f3e9b79388a3",
"text": "A number of differences have emerged between modern and classic approaches to constituency parsing in recent years, with structural components like grammars and featurerich lexicons becoming less central while recurrent neural network representations rise in popularity. The goal of this work is to analyze the extent to which information provided directly by the model structure in classical systems is still being captured by neural methods. To this end, we propose a high-performance neural model (92.08 F1 on PTB) that is representative of recent work and perform a series of investigative experiments. We find that our model implicitly learns to encode much of the same information that was explicitly provided by grammars and lexicons in the past, indicating that this scaffolding can largely be subsumed by powerful general-purpose neural machinery.",
"title": ""
},
{
"docid": "f27547cfee95505fe8a2f44f845ddaed",
"text": "High-performance, two-dimensional arrays of parallel-addressed InGaN blue micro-light-emitting diodes (LEDs) with individual element diameters of 8, 12, and 20 /spl mu/m, respectively, and overall dimensions 490 /spl times/490 /spl mu/m, have been fabricated. In order to overcome the difficulty of interconnecting multiple device elements with sufficient step-height coverage for contact metallization, a novel scheme involving the etching of sloped-sidewalls has been developed. The devices have current-voltage (I-V) characteristics approaching those of broad-area reference LEDs fabricated from the same wafer, and give comparable (3-mW) light output in the forward direction to the reference LEDs, despite much lower active area. The external efficiencies of the micro-LED arrays improve as the dimensions of the individual elements are scaled down. This is attributed to scattering at the etched sidewalls of in-plane propagating photons into the forward direction.",
"title": ""
},
{
"docid": "83f5af68f54f9db0608d8173432188f9",
"text": "JaTeCS is an open source Java library that supports research on automatic text categorization and other related problems, such as ordinal regression and quantification, which are of special interest in opinion mining applications. It covers all the steps of an experimental activity, from reading the corpus to the evaluation of the experimental results. As JaTeCS is focused on text as the main input data, it provides the user with many text-dedicated tools, e.g.: data readers for many formats, including the most commonly used text corpora and lexical resources, natural language processing tools, multi-language support, methods for feature selection and weighting, the implementation of many machine learning algorithms as well as wrappers for well-known external software (e.g., SVMlight) which enable their full control from code. JaTeCS support its expansion by abstracting through interfaces many of the typical tools and procedures used in text processing tasks. The library also provides a number of “template” implementations of typical experimental setups (e.g., train-test, k-fold validation, grid-search optimization, randomized runs) which enable fast realization of experiments just by connecting the templates with data readers, learning algorithms and evaluation measures.",
"title": ""
},
{
"docid": "85b9f94cfd96dd6189832199320b1d06",
"text": "We propose TrajGraph, a new visual analytics method, for studying urban mobility patterns by integrating graph modeling and visual analysis with taxi trajectory data. A special graph is created to store and manifest real traffic information recorded by taxi trajectories over city streets. It conveys urban transportation dynamics which can be discovered by applying graph analysis algorithms. To support interactive, multiscale visual analytics, a graph partitioning algorithm is applied to create region-level graphs which have smaller size than the original street-level graph. Graph centralities, including Pagerank and betweenness, are computed to characterize the time-varying importance of different urban regions. The centralities are visualized by three coordinated views including a node-link graph view, a map view and a temporal information view. Users can interactively examine the importance of streets to discover and assess city traffic patterns. We have implemented a fully working prototype of this approach and evaluated it using massive taxi trajectories of Shenzhen, China. TrajGraph's capability in revealing the importance of city streets was evaluated by comparing the calculated centralities with the subjective evaluations from a group of drivers in Shenzhen. Feedback from a domain expert was collected. The effectiveness of the visual interface was evaluated through a formal user study. We also present several examples and a case study to demonstrate the usefulness of TrajGraph in urban transportation analysis.",
"title": ""
},
{
"docid": "dbe0b895c78dd90c69cc1a1f8289aadf",
"text": "This paper presents the design procedure of monolithic microwave integrated circuit (MMIC) high-power amplifiers (HPAs) as well as implementation of high-efficiency and compact-size HPAs in a 0.25- μm AlGaAs-InGaAs pHEMT technology. Presented design techniques used to extend bandwidth, improve efficiency, and reduce chip area of the HPAs are described in detail. The first HPA delivers 5 W of output power with 40% power-added efficiency (PAE) in the frequency band of 8.5-12.5 GHz, while providing 20 dB of small-signal gain. The second HPA delivers 8 W of output power with 35% PAE in the frequency band of 7.5-12 GHz, while maintaining a small-signal gain of 17.5 dB. The 8-W HPA chip area is 8.8 mm2, which leads to the maximum power/area ratio of 1.14 W/mm2. These are the lowest area and highest power/area ratio reported in GaAs HPAs operating within the same frequency band.",
"title": ""
},
{
"docid": "d263d778738494e26e160d1c46874fff",
"text": "We introduce new online models for two important aspectsof modern financial markets: Volume Weighted Average Pricetrading and limit order books. We provide an extensivestudy of competitive algorithms in these models and relatethem to earlier online algorithms for stock trading.",
"title": ""
}
] |
scidocsrr
|
9b9b06c4997d8235ed5cd021cab66161
|
A Novel Video Dataset for Change Detection Benchmarking
|
[
{
"docid": "a5999023893d996f0485abcf991ffbe1",
"text": "In this paper, we address the issue of recovering and segmenting the apparent velocity field in sequences of images. As for motion estimation, we minimize an objective function involving two robust terms. The first one cautiously captures the optical flow constraint, while the second (a priori) term incorporates a discontinuity-preserving smoothness constraint. To cope with the nonconvex minimization problem thus defined, we design an efficient deterministic multigrid procedure. It converges fast toward estimates of good quality, while revealing the large discontinuity structures of flow fields. We then propose an extension of the model by attaching to it a flexible object-based segmentation device based on deformable closed curves (different families of curve equipped with different kinds of prior can be easily supported). Experimental results on synthetic and natural sequences are presented, including an analysis of sensitivity to parameter tuning.",
"title": ""
}
] |
[
{
"docid": "84f2bdd2977885acdb2f92e6c34cc705",
"text": "Each forensic case is characterized by its own uniqueness. Deficient forensic cases require additional sources of human identifiers to assure the identity. We report on two different cases illustrating the role of teeth in answering challenging forensic questions. The first case involves identification of an adipocere male found in a car submersed in water for approximately 2 years. The second scenario, which involves paternity DNA testing of an exhumed body, was performed approximately 2.8 years post-mortem. The difficulty in anticipating the degradation of the DNA is one of the main obstacles. DNA profiling of dental tissues, DNA quantification by using real-time PCR (PowerQuant™ System/Promega) and a histological dental examination have been performed to address the encountered impediments of adverse post-mortem changes. Our results demonstrate that despite the adverse environmental conditions, a successful STR profile of DNA isolated from the root of teeth can be generated with respect to tooth type and apportion. We conclude that cementocytes are a fruitful source of DNA. Cementum resists DNA degradation in comparison to other tissues with respect to the intra- and inter-individual variation of histological and anatomical structures.",
"title": ""
},
{
"docid": "2bb194184bea4b606ec41eb9eee0bfaa",
"text": "Our lives are heavily influenced by persuasive communication, and it is essential in almost any types of social interactions from business negotiation to conversation with our friends and family. With the rapid growth of social multimedia websites, it is becoming ever more important and useful to understand persuasiveness in the context of social multimedia content online. In this paper, we introduce our newly created multimedia corpus of 1,000 movie review videos obtained from a social multimedia website called ExpoTV.com, which will be made freely available to the research community. Our research results presented here revolve around the following 3 main research hypotheses. Firstly, we show that computational descriptors derived from verbal and nonverbal behavior can be predictive of persuasiveness. We further show that combining descriptors from multiple communication modalities (audio, text and visual) improve the prediction performance compared to using those from single modality alone. Secondly, we investigate if having prior knowledge of a speaker expressing a positive or negative opinion helps better predict the speaker's persuasiveness. Lastly, we show that it is possible to make comparable prediction of persuasiveness by only looking at thin slices (shorter time windows) of a speaker's behavior.",
"title": ""
},
{
"docid": "12219c8b01a7a816b2acc55b7f836360",
"text": "The hospital readmission rate of patients within 30 days after discharge is broadly accepted as a healthcare quality measure and cost driver in the United States. The ability to estimate hospitalization costs alongside 30 day risk-stratification for such readmissions provides additional benefit for accountable care, now a global issue and foundation for the U.S. government mandate under the Affordable Care Act. Recent data mining efforts either predict healthcare costs or risk of hospital readmission, but not both. In this paper we present a dual predictive modeling effort that utilizes healthcare data to predict the risk and cost of any hospital readmission (“all-cause”). For this purpose, we explore machine learning algorithms to do accurate predictions of healthcare costs and risk of 30-day readmission. Results on risk prediction for “all-cause” readmission compared to the standardized readmission tool (LACE) are promising, and the proposed techniques for cost prediction consistently outperform baseline models and demonstrate substantially lower mean absolute error (MAE).",
"title": ""
},
{
"docid": "6dcb97c680ca3b1c350b0b9c059bd14f",
"text": "Resource allocation and scheduling on clouds are required to harness the power of the underlying resource pool such that the service provider can meet the quality of service requirements of users, which are often captured in service level agreements (SLAs). This paper focuses on resource allocation and scheduling on clouds and clusters that process MapReduce jobs with SLAs. The resource allocation and scheduling problem is modelled as an optimization problem using constraint programming, and a novel MapReduce Constraint Programming based Resource Management algorithm (MRCP-RM) is devised that can effectively process an open stream of MapReduce jobs where each job is characterized by an SLA comprising an earliest start time, a required execution time, and an end-to-end deadline. A detailed performance evaluation of MRCP-RM is conducted for an open system subjected to a stream of job arrivals using both simulation and experimentation on a real system. The experiments on a real system are performed on a Hadoop cluster (deployed on Amazon EC2) that runs our new Hadoop Constraint Programming based Resource Management algorithm (HCP-RM) that incorporates a technique for handling data locality. The results of the performance evaluation demonstrate the effectiveness of MRCP-RM/HCP-RM in generating a schedule that leads to a low proportion of jobs missing their deadlines (P) and also provide insights into system behaviour and performance. In the simulation experiments, it is observed that MRCP-RM achieves on average an 82 percent lower P compared to a technique from the existing literature when processing a synthetic workload from Facebook. Furthermore, in the experiments performed on a Hadoop cluster deployed on Amazon EC2, it is observed that HCP-RM achieved on average a 63 percent lower P compared to an EDF-Scheduler for a wide variety of workload and system parameters experimented with.",
"title": ""
},
{
"docid": "636076c522ea4ac91afbdc93d58fa287",
"text": "Aspect-based opinion mining has attracted lots of attention today. In this thesis, we address the problem of product aspect rating prediction, where we would like to extract the product aspects, and predict aspect ratings simultaneously. Topic models have been widely adapted to jointly model aspects and sentiments, but existing models may not do the prediction task well due to their weakness in sentiment extraction. The sentiment topics usually do not have clear correspondence to commonly used ratings, and the model may fail to extract certain kinds of sentiments due to skewed data. To tackle this problem, we propose a sentiment-aligned topic model(SATM), where we incorporate two types of external knowledge: product-level overall rating distribution and word-level sentiment lexicon. Experiments on real dataset demonstrate that SATM is effective on product aspect rating prediction, and it achieves better performance compared to the existing approaches.",
"title": ""
},
{
"docid": "c51462988ce97a93da02e00af075127b",
"text": "By using mirror reflections of a scene, stereo images can be captured with a single camera (catadioptric stereo). In addition to simplifying data acquisition single camera stereo provides both geometric and radiometric advantages over traditional two camera stereo. In this paper, we discuss the geometry and calibration of catadioptric stereo with two planar mirrors. In particular, we will show that the relative orientation of a catadioptric stereo rig is restricted to the class of planar motions thus reducing the number of external calibration parameters from 6 to 5. Next we derive the epipolar geometry for catadioptric stereo and show that it has 6 degrees of freedom rather than 7 for traditional stereo. Furthermore, we show how focal length can be recovered from a single catadioptric image solely from a set of stereo correspondences. To test the accuracy of the calibration we present a comparison to Tsai camera calibration and we measure the quality of Euclidean reconstruction. In addition, we will describe a real-time system which demonstrates the viability of stereo with mirrors as an alternative to traditional two camera stereo.",
"title": ""
},
{
"docid": "77f40fa3df43c8dbf6e483f106ee1d8d",
"text": "We performed a prospective study to document, by intra-operative manipulation under anaesthesia (MUA) of the pelvic ring, the stability of lateral compression type 1 injuries that were managed in a Level-I Trauma Centre. The documentation of the short-term outcome of the management of these injuries was our secondary aim. A total of 63 patients were included in the study. Thirty-five patients (group A) were treated surgically whereas 28 (group B) were managed nonoperatively. Intraoperative rotational instability, evident by more than two centimetres of translation during the manipulation manoeuvre, was combined with a complete sacral fracture in all cases. A statistically significant difference was present between the length of hospital stay, the time to independent pain-free mobilisation, post-manipulation pain levels and opioid requirements between the two groups, with group A demonstrating significantly decreased values in all these four variables (p < 0.05). There was also a significant difference between the pre- and 72-hour post-manipulation visual analogue scale and analgesic requirements of the group A patients, whereas the patients in group B did not demonstrate such a difference. LC-1 injuries with a complete posterior sacral injury are inheritably rotationally unstable and patients presenting with these fracture patterns definitely gain benefit from surgical stabilisation.",
"title": ""
},
{
"docid": "38a4b3c515ee4285aa88418b30937c62",
"text": "Docker containers have recently become a popular approach to provision multiple applications over shared physical hosts in a more lightweight fashion than traditional virtual machines. This popularity has led to the creation of the Docker Hub registry, which distributes a large number of official and community images. In this paper, we study the state of security vulnerabilities in Docker Hub images. We create a scalable Docker image vulnerability analysis (DIVA) framework that automatically discovers, downloads, and analyzes both official and community images on Docker Hub. Using our framework, we have studied 356,218 images and made the following findings: (1) both official and community images contain more than 180 vulnerabilities on average when considering all versions; (2) many images have not been updated for hundreds of days; and (3) vulnerabilities commonly propagate from parent images to child images. These findings demonstrate a strong need for more automated and systematic methods of applying security updates to Docker images and our current Docker image analysis framework provides a good foundation for such automatic security update.",
"title": ""
},
{
"docid": "ee397703a8d5a751c7fd7c76f92ebd73",
"text": "Autografting of dopamine-producing adrenal medullary tissue to the striatal region of the brain is now being attempted in patients with Parkinson's disease. Since the success of this neurosurgical approach to dopamine-replacement therapy may depend on the selection of the most appropriate subregion of the striatum for implantation, we examined the pattern and degree of dopamine loss in striatum obtained at autopsy from eight patients with idiopathic Parkinson's disease. We found that in the putamen there was a nearly complete depletion of dopamine in all subdivisions, with the greatest reduction in the caudal portions (less than 1 percent of the dopamine remaining). In the caudate nucleus, the only subdivision with severe dopamine reduction was the most dorsal rostral part (4 percent of the dopamine remaining); the other subdivisions still had substantial levels of dopamine (up to approximately 40 percent of control levels). We propose that the motor deficits that are a constant and characteristic feature of idiopathic Parkinson's disease are for the most part a consequence of dopamine loss in the putamen, and that the dopamine-related caudate deficits (in \"higher\" cognitive functions) are, if present, less marked or restricted to discrete functions only. We conclude that the putamen--particularly its caudal portions--may be the most appropriate site for intrastriatal application of dopamine-producing autografts in patients with idiopathic Parkinson's disease.",
"title": ""
},
{
"docid": "c846b57f9147324af96420be66bb07f4",
"text": "An important component of current research in big data is graph analytics on very large graphs. Of the many problems of interest in this domain, graph pattern matching is both challenging and practically important. The problem is, given a relatively small query graph, finding matching patterns in a large data graph. Algorithms to address this problem are used in large social networks and graph databases. Though fast querying is highly desirable, the scalability of pattern matching algorithms is hindered by the NP-completeness of the subgraph isomorphism problem. This paper presents a conceptually simple, memory-efficient, pruning-based algorithm for the subgraph isomorphism problem that outperforms commonly used algorithms on large graphs. The high performance is due in large part to the effectiveness of the pruning algorithm, which in many cases removes a large percentage of the vertices not found in isomorphic matches. In this paper, the runtime of the algorithm is tested alongside other algorithms on graphs of up to 10 million vertices and 250 million edges.",
"title": ""
},
{
"docid": "92fb73e03b487d5fbda44e54cf59640d",
"text": "The eyes and periocular area are the central aesthetic unit of the face. Facial aging is a dynamic process that involves skin, subcutaneous soft tissues, and bony structures. An understanding of what is perceived as youthful and beautiful is critical for success. Knowledge of the functional aspects of the eyelid and periocular area can identify pre-preoperative red flags.",
"title": ""
},
{
"docid": "b279c6d3e71e544d4e99152d968d0a83",
"text": "Registration of 3D point clouds is a problem that arises in a variety of research areas such as computer vision, computer graphics and computational geometry. This situation causes most papers in the area to focus on solving practical problems by using data structures often developed in theoretical contexts. Consequently, discrepancies arise between asymptotic cost and experimental performance. The point cloud registration or matching problem encompasses many different steps. Among them, the computation of the distance between two point sets (often refereed to as residue computation) is crucial and can be seen as an aggregate of range searching or nearest neighbor searching. In this paper, we aim at providing theoretical analysis and experimental performance of range searching and nearest neighbor data structures applied to 3D point cloud registration. Performance of widely used data structures such as compressed octrees, KDtrees, BDtrees and regular grids is reported. Additionally, we present a new hybrid data structure named GridDS, which combines a regular grid with some preexisting “inner” data structure in order to meet the best asymptotic bounds while also obtaining the best performance. The experimental evaluation in both synthetic and real data demonstrates that the hybrid data structures built using GridDS improve the running times of the single data structures. Thus, as we have studied the performances of the state-of-the-art techniques managing to improve their respective running times thanks to GridDS, this paper presents the best running time for point cloud residue computation up to date.",
"title": ""
},
{
"docid": "3561b00601c3ba1cadf1103591ee3d24",
"text": "Strategies to prevent or reduce the risk of allergic diseases are needed. The time of exclusive breastfeeding and introduction of solid foods is a key factor that may influence the development of allergy. For this reason, the aim of this review was to examine the association between exposure to solid foods in the infant's diet and the development of allergic diseases in children. Classical prophylactic feeding guidelines recommended a delayed introduction of solids for the prevention of atopic diseases. Is it really true that a delayed introduction of solids (after the 4th or 6th month) is protective against the development of eczema, asthma, allergic rhinitis and food or inhalant sensitisation? In recent years, many authors have found that there is no statistically significant association between delayed introduction of solids and protection for the development of allergic diseases. Furthermore, late introduction of solid foods could be associated with increased risk of allergic sensitisation to foods, inhalant allergens and celiac disease in children. Tolerance may be driven by the contact of the mucosal immune system with the allergen at the right time of life; the protective effects seem to be enhanced by the practice of the breastfeeding at the same time when weaning is started. Therefore, recent guidelines propose a \"window\" approach for weaning practice starting at the 17th week and introducing almost all foods within the 27th week of life to reduce the risk of chronic diseases such as allergic ones and the celiac disease. Guidelines emphasize the role of breastfeeding during the weaning practice.",
"title": ""
},
{
"docid": "2088c56bb59068a33de09edc6831e74b",
"text": "We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional treestructured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the stateof-the-art feature-based model on end-toend relation extraction, achieving 3.5% and 4.8% relative error reductions in F1score on ACE2004 and ACE2005, respectively. We also show a 2.5% relative error reduction in F1-score over the state-ofthe-art convolutional neural network based model on nominal relation classification (SemEval-2010 Task 8).",
"title": ""
},
{
"docid": "bf7e9cb3e7eb376582ae6b279ab27a7b",
"text": "Although control mechanisms have been widely studied in IS research within and between organizations, there is still a lack of research on control mechanisms applied in software-based platforms, on which the complex interactions between a platform owner and a myriad of third-party developers have to be coordinated. Drawing on IS control literature and self-determination theory, our study presents the findings of a laboratory experiment with 138 participants in which we examined how formal (i.e., output and process) and informal (i.e., self) control mechanisms affect third-party developers’ intentions to stick with a mobile app development platform. We demonstrate that selfcontrol plays a significantly more important role than formal control modes in increasing platform stickiness. We also find that the relationship between control modes and platform stickiness is fully mediated by developers’ perceived autonomy. Taken together, our study highlights the theoretically important finding that self-determination and self-regulation among third-party developers are stronger driving forces for staying with a platform than typical hierarchical control mechanisms. Implications for research and practice are discussed.",
"title": ""
},
{
"docid": "729b63fe33d2cc7048a887e3fdb41662",
"text": "Integrating biomechanics, behavior and ecology requires a mechanistic understanding of the processes producing the movement of animals. This calls for contemporaneous biomechanical, behavioral and environmental data along movement pathways. A recently formulated unifying movement ecology paradigm facilitates the integration of existing biomechanics, optimality, cognitive and random paradigms for studying movement. We focus on the use of tri-axial acceleration (ACC) data to identify behavioral modes of GPS-tracked free-ranging wild animals and demonstrate its application to study the movements of griffon vultures (Gyps fulvus, Hablizl 1783). In particular, we explore a selection of nonlinear and decision tree methods that include support vector machines, classification and regression trees, random forest methods and artificial neural networks and compare them with linear discriminant analysis (LDA) as a baseline for classifying behavioral modes. Using a dataset of 1035 ground-truthed ACC segments, we found that all methods can accurately classify behavior (80-90%) and, as expected, all nonlinear methods outperformed LDA. We also illustrate how ACC-identified behavioral modes provide the means to examine how vulture flight is affected by environmental factors, hence facilitating the integration of behavioral, biomechanical and ecological data. Our analysis of just over three-quarters of a million GPS and ACC measurements obtained from 43 free-ranging vultures across 9783 vulture-days suggests that their annual breeding schedule might be selected primarily in response to seasonal conditions favoring rising-air columns (thermals) and that rare long-range forays of up to 1750 km from the home range are performed despite potentially heavy energetic costs and a low rate of food intake, presumably to explore new breeding, social and long-term resource location opportunities.",
"title": ""
},
{
"docid": "062a575f7b519aa8a6aee4ec5e67955b",
"text": "This document provides a survey of the mathematical methods currently used for position estimation in indoor local positioning systems (LPS), particularly those based on radiofrequency signals. The techniques are grouped into four categories: geometry-based methods, minimization of the cost function, fingerprinting, and Bayesian techniques. Comments on the applicability, requirements, and immunity to nonline-of-sight (NLOS) propagation of the signals of each method are provided.",
"title": ""
},
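For the geometry-based category surveyed above, the core run-time computation is typically a least-squares trilateration from range measurements to known anchors. The sketch below is only an illustrative assumption of how that step might look; the anchor coordinates, noise level, and function names are invented toy values and are not taken from the survey.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2D position from range measurements to known anchors.

    Linearizes the circle equations by subtracting the first anchor's
    equation from the others, then solves the system in a least-squares sense.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0, r0 = anchors[0, 0], anchors[0, 1], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Toy example: three anchors and slightly noisy ranges to the true point (2.0, 3.0).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([2.0, 3.0])
ranges = [np.linalg.norm(true_pos - np.array(a)) + 0.05 for a in anchors]
print(trilaterate(anchors, ranges))
```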
{
"docid": "bf81781cebd7ec0b92132b87b36dafcb",
"text": "3D fingerprint recognition is an emerging technology in biometrics. However, current 3D fingerprint acquisition systems are usually with complex structure and high-cost and that has become the main obstacle for its popularization. In this work, we present a novel photometric method and an experimental setup for real-time 3D fingerprint reconstruction. The proposed system consists of seven LED lights that mounted around one camera. In the surface reflectance modeling of finger surface, a simplified Hanrahan-Krueger model is introduced. And a neural network approach is used to solve the model for accurate estimation of surface normals. A calibration method is also proposed to determine the lighting directions as well as the correction of the lighting fields. Moreover, to stand out the fingerprint ridge features and get better visual effects, a linear transformation is applied to the recovered normals field. Experiments on live fingerprint and the comparison with traditional photometric stereo algorithm are used to demonstrate its high performance.",
"title": ""
},
{
"docid": "41ebdf724580830ce2c106ec0415912f",
"text": "Standard Multi-Armed Bandit (MAB) problems assume that the arms are independent. However, in many application scenarios, the information obtained by playing an arm provides information about the remainder of the arms. Hence, in such applications, this informativeness can and should be exploited to enable faster convergence to the optimal solution. In this paper, formalize a new class of multi-armed bandit methods, Global Multi-armed Bandit (GMAB), in which arms are globally informative through a global parameter, i.e., choosing an arm reveals information about all the arms. We propose a greedy policy for the GMAB which always selects the arm with the highest estimated expected reward, and prove that it achieves bounded parameter-dependent regret. Hence, this policy selects suboptimal arms only finitely many times, and after a finite number of initial time steps, the optimal arm is selected in all of the remaining time steps with probability one. In addition, we also study how the informativeness of the arms about each other’s rewards affects the speed of learning. Specifically, we prove that the parameter-free (worst-case) regret is sublinear in time, and decreases with the informativeness of the arms. We also prove a sublinear in time Bayesian risk bound for the GMAB which reduces to the well-known Bayesian risk bound for linearly parameterized bandits when the arms are fully informative. GMABs have applications ranging from drug dosage control to dynamic pricing. Appearing in Proceedings of the 18 International Conference on Artificial Intelligence and Statistics (AISTATS) 2015, San Diego, CA, USA. JMLR: W&CP volume 38. Copyright 2015 by the authors.",
"title": ""
},
{
"docid": "ec93b4c61694916dd494e9376102726b",
"text": "In 1969 Barlow introduced the phrase economy of impulses to express the tendency for successive neural systems to use lower and lower levels of cell firings to produce equivalent encodings. From this viewpoint, the ultimate economy of impulses is a neural code of minimal redundancy. The hypothesis motivating our research is that energy expenditures, e.g., the metabolic cost of recovering from an action potential relative to the cost of inactivity, should also be factored into the economy of impulses. In fact, coding schemes with the largest representational capacity are not, in general, optimal when energy expenditures are taken into account. We show that for both binary and analog neurons, increased energy expenditure per neuron implies a decrease in average firing rate if energy efficient information transmission is to be maintained.",
"title": ""
}
] |
scidocsrr
|
be1abccabaddd58a9a0cb6c5e5a2e60c
|
Shirtless and Dangerous: Quantifying Linguistic Signals of Gender Bias in an Online Fiction Writing Community
|
[
{
"docid": "ccf40417ca3858d69c4cd3fd031ea7c1",
"text": "Online social networks (OSNs) have become popular platforms for people to connect and interact with each other. Among those networks, Pinterest has recently become noteworthy for its growth and promotion of visual over textual content. The purpose of this study is to analyze this imagebased network in a gender-sensitive fashion, in order to understand (i) user motivation and usage pattern in the network, (ii) how communications and social interactions happen and (iii) how users describe themselves to others. This work is based on more than 220 million items generated by 683,273 users. We were able to find significant differences w.r.t. all mentioned aspects. We observed that, although the network does not encourage direct social communication, females make more use of lightweight interactions than males. Moreover, females invest more effort in reciprocating social links, are more active and generalist in content generation, and describe themselves using words of affection and positive emotions. Males, on the other hand, are more likely to be specialists and tend to describe themselves in an assertive way. We also observed that each gender has different interests in the network, females tend to make more use of the network’s commercial capabilities, while males are more prone to the role of curators of items that reflect their personal taste. It is important to understand gender differences in online social networks, so one can design services and applications that leverage human social interactions and provide more targeted and relevant user experiences.",
"title": ""
},
{
"docid": "c76fc0f9ce4422bee1d2cf3964f1024c",
"text": "The subjective nature of gender inequality motivates the analysis and comparison of data from real and fictional human interaction. We present a computational extension of the Bechdel test: A popular tool to assess if a movie contains a male gender bias, by looking for two female characters who discuss about something besides a man. We provide the tools to quantify Bechdel scores for both genders, and we measure them in movie scripts and large datasets of dialogues between users of MySpace and Twitter. Comparing movies and users of social media, we find that movies and Twitter conversations have a consistent male bias, which does not appear when analyzing MySpace. Furthermore, the narrative of Twitter is closer to the movies that do not pass the Bechdel test than to",
"title": ""
}
] |
[
{
"docid": "501307936e9288564f58b537678a7e52",
"text": "In today’s world automobile industries are coming out with so many new features in cars. Road infrastructures are also getting much better in all over the world. Due to this, the number of road accidents has increased and more number of people are dying in it. Researchers have tried to solve this problem by using virtual driver feature in the car. A computer vision based road detection and navigation system is put in the car that can help the driver about upcoming accident and rough driving. But the major problem in development of this type of system is identification of road using computer. And also detection of the unstructured roads or structured roads without remarkable boundaries and marking is a very difficult task for computer. For a given image of any road, that may difficult to identify clear edges or texture orientation or priori known color, is it possible for computer to find road? This paper gives answer to that question. First step is to find vanishing point, which uses Gabor filters to find texture orientation at each pixel and voting scheme is used to find vanishing point. Second step is to identify boundaries from this estimated vanishing point. This paper describes the considerable work that has been done towards vanishing point identification methods so that it can be used in real time for further applications.",
"title": ""
},
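A rough sketch of the two-step idea described above: per-pixel texture orientation from a Gabor filter bank, followed by voting for a vanishing point. The filter parameters, grid step, and input file name are assumptions made for illustration and do not reproduce the paper's exact method.

```python
import cv2
import numpy as np

def dominant_orientations(gray, n_orient=8, ksize=15):
    """Return, per pixel, the angle of the Gabor filter with the strongest response."""
    angles = np.arange(n_orient) * np.pi / n_orient
    responses = []
    for theta in angles:
        kern = cv2.getGaborKernel((ksize, ksize), 3.0, theta, 8.0, 0.5)
        responses.append(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern))
    responses = np.stack(responses)                       # (n_orient, H, W)
    return angles[np.argmax(np.abs(responses), axis=0)]

def vote_vanishing_point(orient, step=8):
    """Accumulate votes along each sampled pixel's texture direction; return the peak."""
    h, w = orient.shape
    acc = np.zeros((h, w), dtype=np.float32)
    ys, xs = np.mgrid[0:h:step, 0:w:step]
    for y, x in zip(ys.ravel(), xs.ravel()):
        dy, dx = -np.sin(orient[y, x]), np.cos(orient[y, x])
        for t in range(1, max(h, w)):                     # walk a ray upward in the image
            yy, xx = int(y + t * dy), int(x + t * dx)
            if not (0 <= yy < h and 0 <= xx < w):
                break
            acc[yy, xx] += 1.0
    vy, vx = np.unravel_index(np.argmax(acc), acc.shape)
    return vx, vy

gray = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)      # hypothetical input image
if gray is not None:
    print(vote_vanishing_point(dominant_orientations(gray)))
```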
{
"docid": "7e8feb5f8d816a0c0626f6fdc4db7c04",
"text": "In this paper, we analyze if cascade usage of the context encoder with increasing input can improve the results of the inpainting. For this purpose, we train context encoder for 64x64 pixels images in a standard way and use its resized output to fill in the missing input region of the 128x128 context encoder, both in training and evaluation phase. As the result, the inpainting is visibly more plausible. In order to thoroughly verify the results, we introduce normalized squared-distortion, a measure for quantitative inpainting evaluation, and we provide its mathematical explanation. This is the first attempt to formalize the inpainting measure, which is based on the properties of latent feature representation, instead of L2 reconstruction loss.",
"title": ""
},
{
"docid": "8bb30efa3f14fa0860d1e5bc1265c988",
"text": "The introduction of microgrids in distribution networks based on power electronics facilitates the use of renewable energy resources, distributed generation (DG) and storage systems while improving the quality of electric power and reducing losses thus increasing the performance and reliability of the electrical system, opens new horizons for microgrid applications integrated into electrical power systems. The hierarchical control structure consists of primary, secondary, and tertiary levels for microgrids that mimic the behavior of the mains grid is reviewed. The main objective of this paper is to give a description of state of the art for the distributed power generation systems (DPGS) based on renewable energy and explores the power converter connected in parallel to the grid which are distinguished by their contribution to the formation of the grid voltage and frequency and are accordingly classified in three classes. This analysis is extended focusing mainly on the three classes of configurations grid-forming, grid-feeding, and gridsupporting. The paper ends up with an overview and a discussion of the control structures and strategies to control distribution power generation system (DPGS) units connected to the network. Keywords— Distributed power generation system (DPGS); hierarchical control; grid-forming; grid-feeding; grid-supporting. Nomenclature Symbols id − iq Vd − Vq P Q ω E f U",
"title": ""
},
{
"docid": "3e59a228e89127a2a53e977d667cbfff",
"text": "▶ Introduces a flexible decision forest model capable of addressing a large and diverse set of image and video analysis tasks, covering both theoretical foundations and practical implementation ▶ Includes exercises and experiments throughout the text, with solutions, slides, demo videos and other supplementary material provided at an associated website ▶ Provides a free, user-friendly software library, enabling the reader to experiment with forests in a hands-on manner",
"title": ""
},
{
"docid": "abec336a59db9dd1fdea447c3c0ff3d3",
"text": "Neural network training relies on our ability to find “good” minimizers of highly non-convex loss functions. It is well-known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and wellchosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effect on the underlying loss landscape, is not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple “filter normalization” method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.",
"title": ""
},
{
"docid": "5cd70dede0014f4a58c0dc8460ba8513",
"text": "In this paper the Model Predictive Control (MPC) strategy is used to solve the mobile robot trajectory tracking problem, where controller must ensure that robot follows pre-calculated trajectory. The so-called explicit optimal controller design and implementation are described. The MPC solution is calculated off-line and expressed as a piecewise affine function of the current state of a mobile robot. A linearized kinematic model of a differential drive mobile robot is used for the controller design purpose. The optimal controller, which has a form of a look-up table, is tested in simulation and experimentally.",
"title": ""
},
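At run time, an explicit MPC controller of the kind described above reduces to evaluating a piecewise affine law stored as a look-up table of polyhedral regions. The snippet below sketches only that evaluation step; the regions, gains, and state values are invented toy numbers, not an actual offline mp-QP solution.

```python
import numpy as np

# Each entry: (A_i, b_i, F_i, g_i), meaning: if A_i @ x <= b_i then u = F_i @ x + g_i.
# These regions and gains are toy values standing in for a precomputed partition.
regions = [
    (np.array([[1.0, 0.0], [-1.0, 0.0]]), np.array([0.5, 0.5]),
     np.array([[-2.0, -1.0]]), np.array([0.0])),
    (np.array([[-1.0, 0.0]]), np.array([-0.5]),
     np.array([[-1.0, -0.5]]), np.array([-0.3])),
    (np.array([[1.0, 0.0]]), np.array([-0.5]),
     np.array([[-1.0, -0.5]]), np.array([0.3])),
]

def explicit_mpc(x):
    """Evaluate the piecewise affine control law by scanning the stored regions."""
    for A, b, F, g in regions:
        if np.all(A @ x <= b + 1e-9):
            return F @ x + g
    raise ValueError("state outside the stored partition")

x = np.array([0.2, -0.1])      # current tracking-error state (toy value)
print(explicit_mpc(x))         # control input associated with the matching region
```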
{
"docid": "1982db485fbef226a5a1b839fa9bf12e",
"text": "The photopigment in the human eye that transduces light for circadian and neuroendocrine regulation, is unknown. The aim of this study was to establish an action spectrum for light-induced melatonin suppression that could help elucidate the ocular photoreceptor system for regulating the human pineal gland. Subjects (37 females, 35 males, mean age of 24.5 +/- 0.3 years) were healthy and had normal color vision. Full-field, monochromatic light exposures took place between 2:00 and 3:30 A.M. while subjects' pupils were dilated. Blood samples collected before and after light exposures were quantified for melatonin. Each subject was tested with at least seven different irradiances of one wavelength with a minimum of 1 week between each nighttime exposure. Nighttime melatonin suppression tests (n = 627) were completed with wavelengths from 420 to 600 nm. The data were fit to eight univariant, sigmoidal fluence-response curves (R(2) = 0.81-0.95). The action spectrum constructed from these data fit an opsin template (R(2) = 0.91), which identifies 446-477 nm as the most potent wavelength region providing circadian input for regulating melatonin secretion. The results suggest that, in humans, a single photopigment may be primarily responsible for melatonin suppression, and its peak absorbance appears to be distinct from that of rod and cone cell photopigments for vision. The data also suggest that this new photopigment is retinaldehyde based. These findings suggest that there is a novel opsin photopigment in the human eye that mediates circadian photoreception.",
"title": ""
},
{
"docid": "10858cdad9f821a88c3e2e56642b239c",
"text": "The clustering algorithm DBSCAN relies on a density-based notion of clusters and is designed to discover clusters of arbitrary shape as well as to distinguish noise. In this paper, we generalize this algorithm in two important directions. The generalized algorithm—called GDBSCAN—can cluster point objects as well as spatially extended objects according to both, their spatial and their nonspatial attributes. In addition, four applications using 2D points (astronomy), 3D points (biology), 5D points (earth science) and 2D polygons (geography) are presented, demonstrating the applicability of GDBSCAN to real-world problems.",
"title": ""
},
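As a reminder of the density-based notion that GDBSCAN generalizes, here is a compact sketch of plain DBSCAN with a Euclidean neighborhood predicate; eps, min_pts, and the sample points are toy values, and this is an illustration of the base algorithm rather than the generalized one.

```python
import numpy as np

def dbscan(points, eps=0.5, min_pts=3):
    """Label each point with a cluster id, or -1 for noise."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0

    def neighbors(i):
        return np.where(np.linalg.norm(points - points[i], axis=1) <= eps)[0]

    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = list(neighbors(i))
        if len(seeds) < min_pts:
            continue                      # noise for now; may later become a border point
        labels[i] = cluster
        while seeds:
            j = seeds.pop()
            if not visited[j]:
                visited[j] = True
                nbrs = neighbors(j)
                if len(nbrs) >= min_pts:  # j is a core point: keep expanding the cluster
                    seeds.extend(nbrs)
            if labels[j] == -1:
                labels[j] = cluster
        cluster += 1
    return labels

pts = np.concatenate([np.random.randn(20, 2) * 0.1,
                      np.random.randn(20, 2) * 0.1 + 3.0])
print(dbscan(pts, eps=0.5, min_pts=3))
```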
{
"docid": "db65e9771d00293e21fe96c99a4896c5",
"text": "Synthesizing SQL queries from natural language is a long-standing open problem and has been attracting considerable interest recently. Toward solving the problem, the de facto approach is to employ a sequence-to-sequence-style model. Such an approach will necessarily require the SQL queries to be serialized. Since the same SQL query may have multiple equivalent serializations, training a sequenceto-sequence-style model is sensitive to the choice from one of them. This phenomenon is documented as the “order-matters” problem. Existing state-of-the-art approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations. However, we observe that the improvement from reinforcement learning is limited. In this paper, we propose a novel approach, i.e., SQLNet, to fundamentally solve this problem by avoiding the sequence-to-sequence structure when the order does not matter. In particular, we employ a sketch-based approach where the sketch contains a dependency graph so that one prediction can be done by taking into consideration only the previous predictions that it depends on. In addition, we propose a sequence-to-set model as well as the column attention mechanism to synthesize the query based on the sketch. By combining all these novel techniques, we show that SQLNet can outperform the prior art by 9% to 13% on the WikiSQL task.",
"title": ""
},
{
"docid": "d233e7031b84316f66a4f4568c907545",
"text": "The specific biomechanical alterations related to vitality loss or endodontic procedures are confusing issues for the practitioner and have been controversially approached from a clinical standpoint. The aim of part 1 of this literature review is to present an overview of the current knowledge about composition changes, structural alterations, and status following endodontic therapy and restorative procedures. The basic search process included a systematic review of the PubMed/Medline database between 1990 and 2005, using single or combined key words to obtain the most comprehensive list of references; a perusal of the references of the relevant sources completed the review. Only negligible alterations in tissue moisture and composition attributable to vitality loss or endodontic therapy were reported. Loss of vitality followed by proper endodontic therapy proved to affect tooth biomechanical behavior only to a limited extent. Conversely, tooth strength is reduced in proportion to coronal tissue loss, due to either caries lesion or restorative procedures. Therefore, the best current approach for restoring endodontically treated teeth seems to (1) minimize tissue sacrifice, especially in the cervical area so that a ferrule effect can be created, (2) use adhesive procedures at both radicular and coronal levels to strengthen remaining tooth structure and optimize restoration stability and retention, and (3) use post and core materials with physical properties close to those of natural dentin, because of the limitations of current adhesive procedures.",
"title": ""
},
{
"docid": "583b8cda1ef421011f7801bc35b82b8b",
"text": "This paper presents a natural language processing based automated system for NL text to OO modeling the user requirements and generating code in multi-languages. A new rule-based model is presented for analyzing the natural languages (NL) and extracting the relative and required information from the given software requirement notes by the user. User writes the requirements in simple English in a few paragraphs and the designed system incorporates NLP methods to analyze the given script. First the NL text is semantically analyzed to extract classes, objects and their respective, attributes, methods and associations. Then UML diagrams are generated on the bases of previously extracted information. The designed system also provides with the respective code automatically of the already generated diagrams. The designed system provides a quick and reliable way to generate UML diagrams to save the time and budget of both the user and system analyst.",
"title": ""
},
{
"docid": "5efa00e0b5973515dff10d8267ac025f",
"text": "Porokeratosis, a disorder of keratinisation, is clinically characterized by the presence of annular plaques with a surrounding keratotic ridge. Clinical variants include linear, disseminated superficial actinic, verrucous/hypertrophic, disseminated eruptive, palmoplantar and porokeratosis of Mibelli (one or two typical plaques with atrophic centre and guttered keratotic rim). All of these subtypes share the histological feature of a cornoid lamella, characterized by a column of 'stacked' parakeratosis with focal absence of the granular layer, and dysmaturation (prematurely keratinised cells in the upper spinous layer). In recent years, a proposed new subtype, follicular porokeratosis (FP_, has been described, in which the cornoid lamella are exclusively located in the follicular ostia. We present four new cases that showed typical histological features of FP.",
"title": ""
},
{
"docid": "5b4412d7deacc826e68d9db0c47687a7",
"text": "In this paper, we describe a novel method for calibrating display-camera setups from reflections in a user’s eyes. Combining both devices creates a capable controlled illumination system that enables a range of interesting vision applications in non-professional environments, including object/face reconstruction and human–computer interaction. One major issue barring such systems from average homes is the geometric calibration to obtain the pose of the display which requires special hardware and tedious user interaction. Our proposed approach eliminates this requirement by introducing the novel idea of analyzing screen reflections in the cornea of the human eye, a mirroring device that is always available. We employ a simple shape model to recover pose and reflection characteristics of the eye. Thorough experimental evaluation shows that the basic strategy results in a large error and discusses possible reasons. Based on the findings, a non-linear optimization strategy is developed that exploits geometry constraints within the system to considerably improve the initial estimate. It further allows to automatically resolve an inherent ambiguity that arises in image-based eye pose estimation. The strategy may also be integrated to improve spherical mirror calibration. We describe several comprehensive experimental studies which show that the proposed method performs stably with respect to varying subjects, display poses, eye positions, and gaze directions. The results are feasible and should be sufficient for many applications. In addition, the findings provide general insight on the application of eye reflections for geometric reconstruction. 2011 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a6f2cee851d2c22d471f473caf1710a1",
"text": "One of the main reasons why Byzantine fault-tolerant (BFT) systems are currently not widely used lies in their high resource consumption: <inline-formula><tex-math notation=\"LaTeX\">$3f+1$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq1-2495213.gif\"/></alternatives></inline-formula> replicas are required to tolerate only <inline-formula><tex-math notation=\"LaTeX\">$f$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq2-2495213.gif\"/></alternatives></inline-formula> faults. Recent works have been able to reduce the minimum number of replicas to <inline-formula><tex-math notation=\"LaTeX\">$2f+1$</tex-math> <alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq3-2495213.gif\"/></alternatives></inline-formula> by relying on trusted subsystems that prevent a faulty replica from making conflicting statements to other replicas without being detected. Nevertheless, having been designed with the focus on fault handling, during normal-case operation these systems still use more resources than actually necessary to make progress in the absence of faults. This paper presents <italic>Resource-efficient Byzantine Fault Tolerance</italic> (<sc>ReBFT</sc>), an approach that minimizes the resource usage of a BFT system during normal-case operation by keeping <inline-formula> <tex-math notation=\"LaTeX\">$f$</tex-math><alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq4-2495213.gif\"/> </alternatives></inline-formula> replicas in a passive mode. In contrast to active replicas, passive replicas neither participate in the agreement protocol nor execute client requests; instead, they are brought up to speed by verified state updates provided by active replicas. In case of suspected or detected faults, passive replicas are activated in a consistent manner. To underline the flexibility of our approach, we apply <sc>ReBFT</sc> to two existing BFT systems: PBFT and MinBFT.",
"title": ""
},
{
"docid": "1c5591bec1b8bfab63309aa2eb488e83",
"text": "When performing visualization and classification, people often confront the problem of dimensionality reduction. Isomap is one of the most promising nonlinear dimensionality reduction techniques. However, when Isomap is applied to real-world data, it shows some limitations, such as being sensitive to noise. In this paper, an improved version of Isomap, namely S-Isomap, is proposed. S-Isomap utilizes class information to guide the procedure of nonlinear dimensionality reduction. Such a kind of procedure is called supervised nonlinear dimensionality reduction. In S-Isomap, the neighborhood graph of the input data is constructed according to a certain kind of dissimilarity between data points, which is specially designed to integrate the class information. The dissimilarity has several good properties which help to discover the true neighborhood of the data and, thus, makes S-Isomap a robust technique for both visualization and classification, especially for real-world problems. In the visualization experiments, S-Isomap is compared with Isomap, LLE, and WeightedIso. The results show that S-Isomap performs the best. In the classification experiments, S-Isomap is used as a preprocess of classification and compared with Isomap, WeightedIso, as well as some other well-established classification methods, including the K-nearest neighbor classifier, BP neural network, J4.8 decision tree, and SVM. The results reveal that S-Isomap excels compared to Isomap and WeightedIso in classification, and it is highly competitive with those well-known classification methods.",
"title": ""
},
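The class-guided step in the supervised variant described above amounts to building the neighborhood graph from a dissimilarity that shrinks within-class distances and inflates between-class distances. The functional form and constants below are a plausible illustration of that idea only and are not claimed to match the paper's exact definition.

```python
import numpy as np

def supervised_dissimilarity(X, y, beta=1.0, alpha=0.1):
    """Class-aware dissimilarity: small for same-class pairs, large for different-class pairs."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    same = (y[:, None] == y[None, :])
    shrink = np.sqrt(1.0 - np.exp(-d2 / beta))                   # bounded growth within a class
    inflate = np.sqrt(np.exp(d2 / beta)) - alpha                 # fast growth across classes
    return np.where(same, shrink, inflate)

X = np.random.randn(30, 5)
y = np.repeat([0, 1, 2], 10)
D = supervised_dissimilarity(X, y)
print(D.shape)   # (30, 30) matrix that would feed the Isomap-style neighborhood graph
```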
{
"docid": "8b52917e8774f9812ebea197f4cbcc7f",
"text": "This paper explores the relevance of recent feminist reconstructions of objectivity for the development of alternative visions of technology production and use. I take as my starting place the working relations that make up the design and use of technical systems. Working relations are understood as networks or webs of connections that sustain the visible and invisible work required to construct coherent technologies and put them into use. I outline the boundaries that characterize current relations of development and use, and the boundary crossings required to transform them. Three contrasting premises for design-the view from nowhere, detached engagement, and located accountability — are taken to represent incommensurate alternatives for a politics of professional design. From the position of located accountability, I close by sketching aspects of what a feminist politics and associated practices of system development could be.",
"title": ""
},
{
"docid": "abca2da2772fa97aee12110b4cb7ff18",
"text": "The key challenge of intelligent fault diagnosis is to develop features that can distinguish different categories. Because of the unique properties of mechanical data, predetermined features based on prior knowledge are usually used as inputs for fault classification. However, proper selection of features often requires expertise knowledge and becomes more difficult and time consuming when volume of data increases. In this paper, a novel deep learning network (LiftingNet) is proposed to learn features adaptively from raw mechanical data without prior knowledge. Inspired by convolutional neural network and second generation wavelet transform, the LiftingNet is constructed to classify mechanical data even though inputs contain considerable noise and randomness. The LiftingNet consists of split layer, predict layer, update layer, pooling layer, and full-connection layer. Different kernel sizes are allowed in convolutional layers to improve learning ability. As a multilayer neural network, deep features are learned from shallow ones to represent complex structures in raw data. Feasibility and effectiveness of the LiftingNet is validated by two motor bearing datasets. Results show that the proposed method could achieve layerwise feature learning and successfully classify mechanical data even with different rotating speed and under the influence of random noise.",
"title": ""
},
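One way to picture the split/predict/update structure described above is as a learnable lifting step over a 1-D signal. The PyTorch sketch below is schematic: the layer sizes, activations, and tensor shapes are assumptions made for illustration and do not reproduce the published architecture.

```python
import torch
import torch.nn as nn

class LiftingStep(nn.Module):
    """Split a 1-D signal into even/odd samples, then learn predict and update maps."""

    def __init__(self, channels=8, kernel=3):
        super().__init__()
        pad = kernel // 2
        self.predict = nn.Conv1d(channels, channels, kernel, padding=pad)
        self.update = nn.Conv1d(channels, channels, kernel, padding=pad)

    def forward(self, x):                                  # x: (batch, channels, length)
        even, odd = x[..., ::2], x[..., 1::2]
        detail = odd - torch.tanh(self.predict(even))      # predict odd samples from even ones
        approx = even + torch.tanh(self.update(detail))    # update even samples with the detail
        return approx, detail

signal = torch.randn(4, 8, 1024)        # e.g. segments of vibration data (toy shape)
approx, detail = LiftingStep()(signal)
print(approx.shape, detail.shape)       # both halved in length relative to the input
```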
{
"docid": "3d8be6d4478154bc711d9cf241e7edb5",
"text": "The use of multimedia technology to teach language in its authentic cultural context represents a double challenge for language learners and teachers. On the one hand, the computer gives learners access to authentic video footage and other cultural materials that can help them get a sense of the sociocultural context in which the language is used. On the other hand, CD-ROM multimedia textualizes this context in ways that need to be \"read\" and interpreted. Learners are thus faced with the double task of (a) observing and choosing culturally relevant features of the context and (b) putting linguistic features in relation to other features to arrive at some understanding of language in use. This paper analyzes the interaction of text and context in a multimedia Quechua language program, and makes suggestions for teaching foreign languages through multimedia technology.",
"title": ""
},
{
"docid": "c5c4f4cab75bc6f997803212ee8d30a2",
"text": "The privacy and integrity of tenant's data highly rely on the infrastructure of multi-tenant cloud being secure. However, with both hardware and software being controlled by potentially curious or even malicious cloud operators, it is no surprise to see frequent reports of data leakages or abuses in cloud. Unfortunately, most prior solutions require intrusive changes to the cloud platform and none can protect a VM against adversaries controlling the physical machine. This paper analyzes the challenges of transparent VM protection against sophisticated adversaries controlling the whole software and hardware stack. Based on the analysis, this paper proposes HyperCoffer, a hardware-software framework that guards the privacy and integrity of tenant's VMs. HyperCoffer only trusts the processor chip and makes no security assumption on external memory and devices. Hyper-Coffer extends existing processor virtualization with memory encryption and integrity checking to secure data communication with off-chip memory. Unlike prior hardware-based approaches, HyperCoffer retains transparency with existing virtual machines (i.e., operating systems) and requires very few changes to the (untrusted) hypervisor. HyperCoffer introduces a mechanism called VM-Shim that runs in-between a guest VM and the hypervisor. Each VM-Shim instance for a VM runs in a separate protected context and only declassifies necessary information designated by the VM to the hypervisor and external environments (e.g., through NICs). We have implemented a prototype of HyperCoffer in a QEMU-based full-system emulator and the VM-Shim mechanism in a real machine. Performance measurement using trace-based simulation and on a real hardware platform shows that the performance overhead is small (ranging from 0.6% to 13.9% on simulated platform and 0.3% to 6.8% on real hardware for the VM-Shim mechanism).",
"title": ""
}
] |
scidocsrr
|
07da656685c52631db40d003dc646001
|
Evidence of mirror neurons in human inferior frontal gyrus.
|
[
{
"docid": "9f01b1e2bbc2d2b940c04f07b05bf5bb",
"text": "Inferior parietal lobule (IPL) neurons were studied when monkeys performed motor acts embedded in different actions and when they observed similar acts done by an experimenter. Most motor IPL neurons coding a specific act (e.g., grasping) showed markedly different activations when this act was part of different actions (e.g., for eating or for placing). Many motor IPL neurons also discharged during the observation of acts done by others. Most responded differentially when the same observed act was embedded in a specific action. These neurons fired during the observation of an act, before the beginning of the subsequent acts specifying the action. Thus, these neurons not only code the observed motor act but also allow the observer to understand the agent's intentions.",
"title": ""
},
{
"docid": "9e8210e2030b78ea40f211f05359e5be",
"text": "Understanding the goals or intentions of other people requires a broad range of evaluative processes including the decoding of biological motion, knowing about object properties, and abilities for recognizing task space requirements and social contexts. It is becoming increasingly evident that some of this decoding is based in part on the simulation of other people's behavior within our own nervous system. This review focuses on aspects of action understanding that rely on embodied cognition, that is, the knowledge of the body and how it interacts with the world. This form of cognition provides an essential knowledge base from which action simulation can be used to decode at least some actions performed by others. Recent functional imaging studies or action understanding are interpreted with a goal of defining conditions when simulation operations occur and how this relates with other constructs, including top-down versus bottom-up processing and the functional distinctions between action observation and social networks. From this it is argued that action understanding emerges from the engagement of highly flexible computational hierarchies driven by simulation, object properties, social context, and kinematic constraints and where the hierarchy is driven by task structure rather than functional or strict anatomic rules.",
"title": ""
}
] |
[
{
"docid": "14bb62c02192f837303dcc2e327475a6",
"text": "In this paper, we have proposed three kinds of network security situation awareness (NSSA) models. In the era of big data, the traditional NSSA methods cannot analyze the problem effectively. Therefore, the three models are designed for big data. The structure of these models are very large, and they are integrated into the distributed platform. Each model includes three modules: network security situation detection (NSSD), network security situation understanding (NSSU), and network security situation projection (NSSP). Each module comprises different machine learning algorithms to realize different functions. We conducted a comprehensive study of the safety of these models. Three models compared with each other. The experimental results show that these models can improve the efficiency and accuracy of data processing when dealing with different problems. Each model has its own advantages and disadvantages.",
"title": ""
},
{
"docid": "7480b463612c919f7b0de5d9a12089f1",
"text": "Importance\nAmid the current opioid epidemic in the United States, the enhanced recovery after surgery pathway (ERAS) has emerged as one of the best strategies to improve the value and quality of surgical care and has been increasingly adopted for a broad range of complex surgical procedures. The goal of this article was to outline important components of opioid-sparing analgesic regimens.\n\n\nObservations\nRegional analgesia, acetaminophen, nonsteroidal anti-inflammatory agents, gabapentinoids, tramadol, lidocaine, and/or the N-methyl-d-aspartate class of glutamate receptor antagonists have been shown to be effective adjuncts to narcotic analgesia. Nonsteroidal anti-inflammatory agents are not associated with an increase in postoperative bleeding. A meta-analysis of 27 randomized clinical trials found no difference in postoperative bleeding between the groups taking ketorolac tromethamine (33 of 1304 patients [2.5%]) and the control groups (21 of 1010 [2.1%]) (odds ratio [OR], 1.1; 95% CI, 0.61-2.06; P = .72). After adoption of the multimodal analgesia approach for a colorectal ERAS pathway, most patients used less opioids while in the hospital and many did not need opioids after hospital discharge, although approximately 50% of patients received some opioid during their stay.\n\n\nConclusions and Relevance\nMultimodal analgesia is readily available and the evidence is strong to support its efficacy. Surgeons should use this effective approach for patients both using and not using the ERAS pathway to reduce opioid consumption.",
"title": ""
},
{
"docid": "41c718697d19ee3ca0914255426a38ab",
"text": "Migraine is a debilitating neurological disorder that affects about 12% of the population. In the past decade, the role of the neuropeptide calcitonin gene-related peptide (CGRP) in migraine has been firmly established by clinical studies. CGRP administration can trigger migraines, and CGRP receptor antagonists ameliorate migraine. In this review, we will describe multifunctional activities of CGRP that could potentially contribute to migraine. These include roles in light aversion, neurogenic inflammation, peripheral and central sensitization of nociceptive pathways, cortical spreading depression, and regulation of nitric oxide production. Yet clearly there will be many other contributing genes that could act in concert with CGRP. One candidate is pituitary adenylate cyclase-activating peptide (PACAP), which shares some of the same actions as CGRP, including the ability to induce migraine in migraineurs and light aversive behavior in rodents. Interestingly, both CGRP and PACAP act on receptors that share an accessory subunit called receptor activity modifying protein-1 (RAMP1). Thus, comparisons between the actions of these two migraine-inducing neuropeptides, CGRP and PACAP, may provide new insights into migraine pathophysiology.",
"title": ""
},
{
"docid": "9dd83eb5760e8dbf6f3bd918eb73c79f",
"text": "Pontine tegmental cap dysplasia (PTCD) is a recently described hindbrain malformation characterized by pontine hypoplasia and ectopic dorsal transverse pontine fibers (1). To date, a total of 19 cases of PTCD have been published, all patients had sensorineural hearing loss (SNHL). We contribute 1 additional case of PTCD with SNHL with and VIIIth cranial nerve and temporal bone abnormalities using dedicated magnetic resonance (MR) and high-resolution temporal bone computed tomographic (CT) images.",
"title": ""
},
{
"docid": "5f393e79895bf234c0b96b7ece0d1cae",
"text": "Energy consumption of routers in commonly used mesh-based on-chip networks for chip multiprocessors is an increasingly important concern: these routers consist of a crossbar and complex control logic and can require significant buffers, hence high energy and area consumption. In contrast, an alternative design uses ring-based networks to connect network nodes with small and simple routers. Rings have been used in recent commercial designs, and are well-suited to smaller core counts. However, rings do not scale as efficiently as meshes. In this paper, we propose an energy-efficient yet high performance alternative to traditional mesh-based and ringbased on-chip networks. We aim to attain the scalability of meshes with the router simplicity and efficiency of rings. Our design is a hierarchical ring topology which consists of small local rings connected via one or more global ring. Routing between rings is accomplished using bridge routers that have minimal buffering, and use deflection in place of buffered flow control for simplicity. We comprehensively explore new issues in the design of such a topology, including the design of the routers, livelock freedom, energy, performance and scalability. We propose new router microarchitectures and show that these routers are significantly simpler and more area and energy efficient than both buffered and bufferless mesh based routers. We develop new mechanisms to preserve livelock-free routing in our topology and router design. Our evaluations compare our proposal to a traditional ring network and conventional buffered and bufferless mesh based networks, showing that our proposal reduces average network power by 52.4% (30.4%) and router area footprint by 70.5% from a buffered mesh in 16-node (64-node) configurations, while also improving system performance by 0.6% (5.0%).",
"title": ""
},
{
"docid": "dfb30df3153a0bdc9dc6e9b464a06269",
"text": "The scientific interest in meditation and mindfulness practice has recently seen an unprecedented surge. After an initial phase of presenting beneficial effects of mindfulness practice in various domains, research is now seeking to unravel the underlying psychological and neurophysiological mechanisms. Advances in understanding these processes are required for improving and fine-tuning mindfulness-based interventions that target specific conditions such as eating disorders or attention deficit hyperactivity disorders. This review presents a theoretical framework that emphasizes the central role of attentional control mechanisms in the development of mindfulness skills. It discusses the phenomenological level of experience during meditation, the different attentional functions that are involved, and relates these to the brain networks that subserve these functions. On the basis of currently available empirical evidence specific processes as to how attention exerts its positive influence are considered and it is concluded that meditation practice appears to positively impact attentional functions by improving resource allocation processes. As a result, attentional resources are allocated more fully during early processing phases which subsequently enhance further processing. Neural changes resulting from a pure form of mindfulness practice that is central to most mindfulness programs are considered from the perspective that they constitute a useful reference point for future research. Furthermore, possible interrelations between the improvement of attentional control and emotion regulation skills are discussed.",
"title": ""
},
{
"docid": "41c165eec3e201156217ee7bf91867b2",
"text": "This position paper advocates a communicationsinspired approach to the design of machine learning systems on energy-constrained embedded ‘always-on’ platforms. The communicationsinspired approach has two versions 1) a deterministic version where existing low-power communication IC design methods are repurposed, and 2) a stochastic version referred to as Shannon-inspired statistical information processing employing information-based metrics, statistical error compensation (SEC), and retraining-based methods to implement ML systems on stochastic circuit/device fabrics operating at the limits of energy-efficiency. The communications-inspired approach has the potential to fully leverage the opportunities afforded by ML algorithms and applications in order to address the challenges inherent in their deployment on energy-constrained platforms.",
"title": ""
},
{
"docid": "12519f0131b8d451654ea790c977acd0",
"text": "In the early 1980s, Scandinavian software designers who sought to make systems design more participatory and democratic turned to prototyping. The \"Scandinavian challenge\" of making computers more democratic inspired others who became interested in user-centered design; information designers on both sides of the Atlantic began to employ prototyping as a way to encourage user participation and feedback in various design approaches. But, as European and North American researchers have pointed out, prototyping is seen as meeting very different needs in Scandinavia and in the US. Thus design approaches that originate on either side of the Atlantic have implemented prototyping quite differently, have deployed it to meet quite different goals, and have tended to understand prototyping results in different ways.These differences are typically glossed over in technical communication research. Technical communicators have lately become quite excited about prototyping's potential to help design documentation, but the technical communication literature shows little critical awareness of the methodological differences between Scandinavian and US prototyping. In this presentation, I map out some of these differences by comparing prototyping in a variety of design approaches originating in Scandinavia and the US, such as mock-ups, cooperative prototyping, CARD, PICTIVE, and contextual design. Finally, I discuss implications for future technical communication research involving prototyping.",
"title": ""
},
{
"docid": "d9e39f2513e74917023d508596cff6c7",
"text": "In recent years the data mining is data analyzing techniques that used to analyze crime data previously stored from various sources to find patterns and trends in crimes. In additional, it can be applied to increase efficiency in solving the crimes faster and also can be applied to automatically notify the crimes. However, there are many data mining techniques. In order to increase efficiency of crime detection, it is necessary to select the data mining techniques suitably. This paper reviews the literatures on various data mining applications, especially applications that applied to solve the crimes. Survey also throws light on research gaps and challenges of crime data mining. In additional to that, this paper provides insight about the data mining for finding the patterns and trends in crime to be used appropriately and to be a help for beginners in the research of crime data mining.",
"title": ""
},
{
"docid": "a026cb81bddfa946159d02b5bb2e341d",
"text": "In this paper we are concerned with the practical issues of working with data sets common to finance, statistics, and other related fields. pandas is a new library which aims to facilitate working with these data sets and to provide a set of fundamental building blocks for implementing statistical models. We will discuss specific design issues encountered in the course of developing pandas with relevant examples and some comparisons with the R language. We conclude by discussing possible future directions for statistical computing and data analysis using Python.",
"title": ""
},
{
"docid": "e9768df1b2a679e7d9e81588d4c2af02",
"text": "Over the last few decades, the electric utilities have seen a very significant increase in the application of metal oxide surge arresters on transmission lines in an effort to reduce lightning initiated flashovers, maintain high power quality and to avoid damages and disturbances especially in areas with high soil resistivity and lightning ground flash density. For economical insulation coordination in transmission and substation equipment, it is necessary to predict accurately the lightning surge overvoltages that occur on an electric power system.",
"title": ""
},
{
"docid": "59021dcb134a2b25122b3be73243bea6",
"text": "The path taken by a packet traveling across the Internet depends on a large number of factors, including routing protocols and per-network routing policies. The impact of these factors on the end-to-end performance experienced by users is poorly understood. In this paper, we conduct a measurement-based study comparing the performance seen using the \"default\" path taken in the Internet with the potential performance available using some alternate path. Our study uses five distinct datasets containing measurements of \"path quality\", such as round-trip time, loss rate, and bandwidth, taken between pairs of geographically diverse Internet hosts. We construct the set of potential alternate paths by composing these measurements to form new synthetic paths. We find that in 30-80% of the cases, there is an alternate path with significantly superior quality. We argue that the overall result is robust and we explore two hypotheses for explaining it.",
"title": ""
},
{
"docid": "252d6a298208337488960568c3d36ec7",
"text": "The rapid development of remote sensing technology allows us to get images with high and very high resolution (VHR). VHR imagery scene classification has become an important and challenging problem. In this paper, we introduce a framework for VHR scene understanding. First, the pretrained visual geometry group network (VGG-Net) model is proposed as deep feature extractors to extract informative features from the original VHR images. Second, we select the fully connected layers constructed by VGG-Net in which each layer is regarded as separated feature descriptors. And then we combine between them to construct final representation of the VHR image scenes. Third, discriminant correlation analysis (DCA) is adopted as feature fusion strategy to further refine the original features extracting from VGG-Net, which allows a more efficient fusion approach with small cost than the traditional feature fusion strategies. We apply our approach to three challenging data sets: 1) UC MERCED data set that contains 21 different areal scene categories with submeter resolution; 2) WHU-RS data set that contains 19 challenging scene categories with various resolutions; and 3) the Aerial Image data set that has a number of 10 000 images within 30 challenging scene categories with various resolutions. The experimental results demonstrate that our proposed method outperforms the state-of-the-art approaches. Using feature fusion technique achieves a higher accuracy than solely using the raw deep features. Moreover, the proposed method based on DCA fusion produces good informative features to describe the images scene with much lower dimension.",
"title": ""
},
{
"docid": "296b4d3341395dafc10130081c294a6f",
"text": "Partially observable Markov decision processes (POMDPs) provide a principled framework for sequential planning in uncertain single agent settings. An extension of POMDPs to multiagent settings, called interactive POMDPs (I-POMDPs), replaces POMDP belief spaces with interactive hierarchical belief systems which represent an agent’s belief about the physical world, about beliefs of other agents, and about their beliefs about others’ beliefs. This modification makes the difficulties of obtaining solutions due to complexity of the belief and policy spaces even more acute. We describe a general method for obtaining approximate solutions of I-POMDPs based on particle filtering (PF). We introduce the interactive PF, which descends the levels of the interactive belief hierarchies and samples and propagates beliefs at each level. The interactive PF is able to mitigate the belief space complexity, but it does not address the policy space complexity. To mitigate the policy space complexity – sometimes also called the curse of history – we utilize a complementary method based on sampling likely observations while building the look ahead reachability tree. While this approach does not completely address the curse of history, it beats back the curse’s impact substantially. We provide experimental results and chart future work.",
"title": ""
},
{
"docid": "e01d5be587c73aaa133acb3d8aaed996",
"text": "This paper presents a new optimization-based method to control three micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. Our control strategy arises from physics that apply force in the negative direction of states errors. The objective is to regulate the inter-agent spacing, heading and position of the set of agents, for motion in two dimensions, while the system is inherently underactuated. Simulation results on three agents and a proof-of-concept experiment on two agents show the feasibility of the idea to shed light on future micro/nanoscale multi-agent explorations. Average tracking error of less than 50 micrometers and 1.85 degrees is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical spherical-shape agents with nominal radius less than of 250 micrometers operating within several body-lengths of each other.",
"title": ""
},
{
"docid": "6906b1b670dacadfe6262b5ca80099fd",
"text": "A difficult problem in networking courses is to find hands-on projects that have the right balance between the level of realism and complexity. This is especially true for projects that focus on the internal functionality of routers and other network devices. We developed a capstone course called \"Network Design and Evaluation\" that uses a network processor-based platform for networking projects. This platform is more realistic than traditional approaches based on software emulation environments or PC-based routers running Unix, but it is significantly less complex to work with than real commercial routers or even PC-based routers. We are currently teaching this course for the third year, and our experience has been extremely positive. Students enjoy the realism of the platform and not only learn a lot about the internal operation of the network, but also about network configuration and management.",
"title": ""
},
{
"docid": "1c7251c55cf0daea9891c8a522bbd3ec",
"text": "The role of computers in the modern office has divided ouractivities between virtual interactions in the realm of thecomputer and physical interactions with real objects within thetraditional office infrastructure. This paper extends previous workthat has attempted to bridge this gap, to connect physical objectswith virtual representations or computational functionality, viavarious types of tags. We discuss a variety of scenarios we haveimplemented using a novel combination of inexpensive, unobtrusiveand easy to use RFID tags, tag readers, portable computers andwireless networking. This novel combination demonstrates theutility of invisibly, seamlessly and portably linking physicalobjects to networked electronic services and actions that arenaturally associated with their form.",
"title": ""
},
{
"docid": "1bf8cc02cf21015385cd1fd20ffb2f4e",
"text": "© 2018 Macmillan Publishers Limited, part of Springer Nature. All rights reserved. © 2018 Macmillan Publishers Limited, part of Springer Nature. All rights reserved. 1Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA. 2Berkeley Sensor and Actuator Center, University of California, Berkeley, CA, USA. 3Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA. *e-mail: ajavey@eecs.berkeley.edu Healthcare systems today are mostly reactive. Patients contact doctors after they have developed ailments with noticeable symptoms, and are thereafter passive recipients of care and monitoring by specialists. This approach largely fails in preventing the onset of health conditions, prioritizing diagnostics and treatment over proactive healthcare. It further occludes individuals from being active agents in monitoring their own health. The growing field of wearable sensors (or wearables) aims to tackle the limitations of centralized, reactive healthcare by giving individuals insight into the dynamics of their own physiology. The long-term vision is to develop sensors that can be integrated into wearable formats like clothing, wristbands, patches, or tattoos to continuously probe a range of body indicators. By relaying physiological information as the body evolves over healthy and sick states, these sensors will enable individuals to monitor themselves without expensive equipment or trained professionals (Fig. 1). Various physical and chemical sensors will need to be integrated to obtain a complete picture of dynamic health. These sensors will generate vast time series of data that will need to be parsed with big-data techniques to generate personalized baselines indicative of the user’s health1–4. Sensor readings that cohere with the established baseline can then indicate that the body is in a healthy, equilibrium state, while deviations from the baseline can provide early warnings about developing health conditions. Eventually, deviations caused by different pathologies can be ‘fingerprinted’ to make diagnosis more immediate and autonomous. Together, the integration of wearables with big-data analytics can enable individualized fitness monitoring, early detection of developing health conditions, and better management of chronic diseases. This envisioned medical landscape built around continuous, point-of-care sensing spearheads personalized, predictive, and ultimately preventive healthcare.",
"title": ""
},
{
"docid": "9ddfab2e6fa9f4009767364335bfc09e",
"text": "This paper aims at presenting a configuration for a multi-functional spherical aerial vehicle designed for search and rescue purposes which has the advantages of hover, level flight and amazing mobility. This is an advanced version of a Single-Copter and is called the Thrust Vectoring Aerial Craft (TVAC). The Spherical frame allows the vehicle to encounter an object without the risk of damaging the propeller and on board components and can land virtually anywhere without the need for critical pilot training The swash plate enabled thrust vectoring provides extra maneuverability and responsiveness to the craft in extreme environments. This first prototype consists of a single propeller, a 2-axis swash-plate and four control surfaces enclosed in a light-weight spherical frame. The craft is stabilized using a KK multicopter gyro based controller with four servo outputs and one motor output. A successful attitude control and stability is demonstrated with the swash-plate mechanism providing greater efficiency than its Single-Copter counterpart. Various experimental results after flight tests regarding the center of gravity and appropriate projected areas of various control vanes are also emphasized.",
"title": ""
},
{
"docid": "795bede0ff85ce04e956cdc23f8ecb0a",
"text": "Neuromorphic computing using post-CMOS technologies is gaining immense popularity due to its promising abilities to address the memory and power bottlenecks in von-Neumann computing systems. In this paper, we propose RESPARC - a reconfigurable and energy efficient architecture built-on Memristive Crossbar Arrays (MCA) for deep Spiking Neural Networks (SNNs). Prior works were primarily focused on device and circuit implementations of SNNs on crossbars. RESPARC advances this by proposing a complete system for SNN acceleration and its subsequent analysis. RESPARC utilizes the energy-efficiency of MCAs for inner-product computation and realizes a hierarchical reconfigurable design to incorporate the data-flow patterns in an SNN in a scalable fashion. We evaluate the proposed architecture on different SNNs ranging in complexity from 2k-230k neurons and 1.2M-5.5M synapses. Simulation results on these networks show that compared to the baseline digital CMOS architecture, RESPARC achieves 500x (15x) efficiency in energy benefits at 300x (60x) higher throughput for multi-layer perceptrons (deep convolutional networks). Furthermore, RESPARC is a technology-aware architecture that maps a given SNN topology to the most optimized MCA size for the given crossbar technology.",
"title": ""
}
] |
scidocsrr
|
ffaec47ec1492f67ccaff7ff405b8ea3
|
ELASTIC: A Client-Side Controller for Dynamic Adaptive Streaming over HTTP (DASH)
|
[
{
"docid": "3ab6fbc7957d429f07e2c09112304daf",
"text": "Improving users' quality of experience (QoE) is crucial for sustaining the advertisement and subscription based revenue models that enable the growth of Internet video. Despite the rich literature on video and QoE measurement, our understanding of Internet video QoE is limited because of the shift from traditional methods of measuring video quality (e.g., Peak Signal-to-Noise Ratio) and user experience (e.g., opinion scores). These have been replaced by new quality metrics (e.g., rate of buffering, bitrate) and new engagement centric measures of user experience (e.g., viewing time and number of visits). The goal of this paper is to develop a predictive model of Internet video QoE. To this end, we identify two key requirements for the QoE model: (1) it has to be tied in to observable user engagement and (2) it should be actionable to guide practical system design decisions. Achieving this goal is challenging because the quality metrics are interdependent, they have complex and counter-intuitive relationships to engagement measures, and there are many external factors that confound the relationship between quality and engagement (e.g., type of video, user connectivity). To address these challenges, we present a data-driven approach to model the metric interdependencies and their complex relationships to engagement, and propose a systematic framework to identify and account for the confounding factors. We show that a delivery infrastructure that uses our proposed model to choose CDN and bitrates can achieve more than 20\\% improvement in overall user engagement compared to strawman approaches.",
"title": ""
}
] |
[
{
"docid": "3ec1da9b86b3338b1ad4890add51a20b",
"text": "In this paper, we present the dynamic modeling and controller design of a tendon-driven system that is antagonistically driven by elastic tendons. In the dynamic modeling, the tendons are approximated as linear axial springs, neglecting their masses. An overall equation for motion is established by following the Euler–Lagrange formalism of dynamics, combined with rigid-body rotation and vibration. The controller is designed using the singular perturbation approach, which leads to a composite controller (i.e., consisting of a fast sub-controller and a slow sub-controller). An appropriate internal force is superposed to the control action to ensure the tendons to be in tension for all configurations. Experimental results are provided to demonstrate the validity and effectiveness of the proposed controller for the antagonistic tendon-driven system.",
"title": ""
},
{
"docid": "5229fb13c66ca8a2b079f8fe46bb9848",
"text": "We put forth a lookup-table-based modular reduction method which partitions the binary string of an integer to be reduced into blocks according to its runs. Its complexity depends on the amount of runs in the binary string. We show that the new reduction is almost twice as fast as the popular Barrett’s reduction, and provide a thorough complexity analysis of the method.",
"title": ""
},
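The record above describes a lookup-table modular reduction whose cost depends on the runs in the operand's binary string. The run-based partitioning of the paper is not reproduced here; the sketch below only illustrates the simpler, generic baseline such methods build on, folding fixed-width blocks through incrementally computed powers of two modulo m. The function name and the default block width are my own choices.

```python
def reduce_blocks(x, m, w=8):
    """Reduce x modulo m by folding w-bit blocks, least-significant block first.

    Relies on x = sum_j block_j * 2**(j*w), so x mod m = sum_j block_j * (2**(j*w) mod m) mod m.
    """
    acc = 0
    radix_pow = 1               # holds 2**(j*w) mod m for the current block j
    step = pow(2, w, m)
    mask = (1 << w) - 1
    while x:
        acc = (acc + (x & mask) * radix_pow) % m
        radix_pow = (radix_pow * step) % m
        x >>= w
    return acc

# Example: 1000 mod 7 == 6
assert reduce_blocks(1000, 7, w=4) == 1000 % 7
```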
{
"docid": "f452a9b9711bdb43da1467fe8f9d72c9",
"text": "In this paper we present a novel genetic algorithm (GA) solution to a simple yet challenging commercial puzzle game known as Zen Puzzle Garden (ZPG). We describe the game in detail, before presenting a suitable encoding scheme and fitness function for candidate solutions. We then compare the performance of the genetic algorithm with that of the A* algorithm. Our results show that the GA is competitive with informed search in terms of solution quality, and significantly out-performs it in terms of computational resource requirements. We conclude with a brief discussion of the implications of our findings for game solving and other “real world” problems.",
"title": ""
},
{
"docid": "7db2f661465cb18abf68e9148f50ce66",
"text": "When training the parameters for a natural language system, one would prefer to minimize 1-best loss (error) on an evaluation set. Since the error surface for many natural language problems is piecewise constant and riddled with local minima, many systems instead optimize log-likelihood, which is conveniently differentiable and convex. We propose training instead to minimize the expected loss, or risk. We define this expectation using a probability distribution over hypotheses that we gradually sharpen (anneal) to focus on the 1-best hypothesis. Besides the linear loss functions used in previous work, we also describe techniques for optimizing nonlinear functions such as precision or the BLEU metric. We present experiments training log-linear combinations of models for dependency parsing and for machine translation. In machine translation, annealed minimum risk training achieves significant improvements in BLEU over standard minimum error training. We also show improvements in labeled dependency parsing. 1 Direct Minimization of Error Researchers in empirical natural language processing have expended substantial ink and effort in developing metrics to evaluate systems automatically against gold-standard corpora. The ongoing evaluation literature is perhaps most obvious in the machine translation community’s efforts to better BLEU (Papineni et al., 2002). Despite this research, parsing or machine translation systems are often trained using the much simpler and harsher metric of maximum likelihood. One reason is that in supervised training, the log-likelihood objective function is generally convex, meaning that it has a single global maximum that can be easily found (indeed, for supervised generative models, the parameters at this maximum may even have a closed-form solution). In contrast to the likelihood surface, the error surface for discrete structured prediction is not only riddled with local minima, but piecewise constant ∗This work was supported by an NSF graduate research fellowship for the first author and by NSF ITR grant IIS0313193 and ONR grant N00014-01-1-0685. The views expressed are not necessarily endorsed by the sponsors. We thank Sanjeev Khudanpur, Noah Smith, Markus Dreyer, and the reviewers for helpful discussions and comments. and not everywhere differentiable with respect to the model parameters (Figure 1). Despite these difficulties, some work has shown it worthwhile to minimize error directly (Och, 2003; Bahl et al., 1988). We show improvements over previous work on error minimization by minimizing the risk or expected error—a continuous function that can be derived by combining the likelihood with any evaluation metric (§2). Seeking to avoid local minima, deterministic annealing (Rose, 1998) gradually changes the objective function from a convex entropy surface to the more complex risk surface (§3). We also discuss regularizing the objective function to prevent overfitting (§4). We explain how to compute expected loss under some evaluation metrics common in natural language tasks (§5). We then apply this machinery to training log-linear combinations of models for dependency parsing and for machine translation (§6). Finally, we note the connections of minimum risk training to max-margin training and minimum Bayes risk decoding (§7), and recapitulate our results (§8). 2 Training Log-Linear Models In this work, we focus on rescoring with loglinear models. 
In particular, our experiments consider log-linear combinations of a relatively small number of features over entire complex structures, such as trees or translations, known in some previous work as products of experts (Hinton, 1999) or logarithmic opinion pools (Smith et al., 2005). A feature in the combined model might thus be a log probability from an entire submodel. Giving this feature a small or negative weight can discount a submodel that is foolishly structured, badly trained, or redundant with the other features. [Figure 1 caption: The loss surface for a machine translation system; while other parameters are held constant, the weights on the distortion and word penalty features are varied. Note the piecewise constant regions with several local maxima.] For each sentence x_i in our training corpus S, we are given K_i possible analyses y_{i,1}, ..., y_{i,K_i}. (These may be all of the possible translations or parse trees; or only the K_i most probable under some other model; or only a random sample of size K_i.) Each analysis has a vector of real-valued features (i.e., factors, or experts) denoted f_{i,k}. The score of the analysis y_{i,k} is θ · f_{i,k}, the dot product of its features with a parameter vector θ. For each sentence, we obtain a normalized probability distribution over the K_i analyses as p_θ(y_{i,k} | x_i) = exp(θ · f_{i,k}) / Σ_{k'=1..K_i} exp(θ · f_{i,k'}) (1). We wish to adjust this model's parameters θ to minimize the severity of the errors we make when using it to choose among analyses. A loss function L_{y*}(y) assesses a penalty for choosing y when y* is correct. We will usually write this simply as L(y) since y* is fixed and clear from context. For clearer exposition, we assume below that the total loss over some test corpus is the sum of the losses on individual sentences, although we will revisit that assumption in §5. 2.1 Minimizing Loss or Expected Loss One training criterion directly mimics test conditions. It looks at the loss incurred if we choose the best analysis of each x_i according to the model:",
"title": ""
},
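As a small numeric companion to the record above, the sketch below computes the two quantities it defines: the log-linear distribution over a sentence's K candidate analyses and the expected loss (risk) that training minimizes. The gamma parameter stands in for the annealing schedule the abstract mentions; the function and variable names are my own, not the authors' code.

```python
import numpy as np

def risk(theta, F, losses, gamma=1.0):
    """Expected loss of a log-linear rescoring model on one sentence.

    theta:  (d,)   parameter vector
    F:      (K, d) feature vectors of the K candidate analyses
    losses: (K,)   loss L(y_k) of each candidate against the reference
    gamma:  annealing exponent; larger values sharpen p toward the 1-best candidate
    """
    scores = gamma * (F @ theta)
    scores -= scores.max()                      # numerical stability
    p = np.exp(scores) / np.exp(scores).sum()   # p_theta(y_k | x), eq. (1) with annealing
    return float(p @ losses)
```

Minimizing this quantity over theta, while gradually increasing gamma, is the annealed minimum-risk idea described in the passage.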
{
"docid": "bbe564c84a58db6c4342b566e8fbfc0d",
"text": "We present a detail-driven deep neural network for point set upsampling. A high-resolution point set is essential for point-based rendering and surface reconstruction. Inspired by the recent success of neural image super-resolution techniques, we progressively train a cascade of patch-based upsampling networks on different levels of detail end-to-end. We propose a series of architectural design contributions that lead to a substantial performance boost. The effect of each technical contribution is demonstrated in an ablation study. Qualitative and quantitative experiments show that our method significantly outperforms the state-of-theart learning-based [58, 59], and optimazation-based [23] approaches, both in terms of handling low-resolution inputs and revealing high-fidelity details. The data and code are at https://github.com/yifita/3pu.",
"title": ""
},
{
"docid": "cea7debee0413a79a9c7c5e54d82e337",
"text": "Viral Marketing, the idea of exploiting social interactions of users to propagate awareness for products, has gained considerable focus in recent years. One of the key issues in this area is to select the best seeds that maximize the influence propagated in the social network. In this paper, we define the seed selection problem (called t-Influence Maximization, or t-IM) for multiple products. Specifically, given the social network and t products along with their seed requirements, we want to select seeds for each product that maximize the overall influence. As the seeds are typically sent promotional messages, to avoid spamming users, we put a hard constraint on the number of products for which any single user can be selected as a seed. In this paper, we design two efficient techniques for the t-IM problem, called Greedy and FairGreedy. The Greedy algorithm uses simple greedy hill climbing, but still results in a 1/3-approximation to the optimum. Our second technique, FairGreedy, allocates seeds with not only high overall influence (close to Greedy in practice), but also ensures fairness across the influence of different products. We also design efficient heuristics for estimating the influence of the selected seeds, that are crucial for running the seed selection on large social network graphs. Finally, using extensive simulations on real-life social graphs, we show the effectiveness and scalability of our techniques compared to existing and naive strategies.",
"title": ""
},
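The t-IM record above describes greedy seed selection for multiple products under a per-user seeding cap. Below is a generic hill-climbing sketch of that idea, not the paper's exact Greedy or FairGreedy algorithms; the `influence` callable is assumed to be an external estimator (in practice a Monte Carlo simulation over the social graph), and all names are my own.

```python
def greedy_seed_selection(users, products, budgets, influence, cap=1):
    """Greedy hill-climbing seed selection for multiple products.

    budgets:   dict product -> number of seeds it requires
    influence: callable(seeds_by_product) -> estimated overall influence
    cap:       max number of products any single user may seed (anti-spam constraint)
    """
    seeds = {p: set() for p in products}
    load = {u: 0 for u in users}
    while any(len(seeds[p]) < budgets[p] for p in products):
        best = None
        for p in products:
            if len(seeds[p]) >= budgets[p]:
                continue
            for u in users:
                if load[u] >= cap or u in seeds[p]:
                    continue
                seeds[p].add(u)               # evaluate the marginal gain of (p, u)
                gain = influence(seeds)
                seeds[p].remove(u)
                if best is None or gain > best[0]:
                    best = (gain, p, u)
        if best is None:                      # no feasible candidate left
            break
        _, p, u = best
        seeds[p].add(u)
        load[u] += 1
    return seeds
```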
{
"docid": "b2bcf059713aaa9802f9d8e7793106dd",
"text": "A framework is presented for analyzing most of the experimental work performed in software engineering over the past several years. The framework of experimentation consists of four categories corresponding to phases of the experimentation process: definition, planning, operation, and interpretation. A variety of experiments are described within the framework and their contribution to the software engineering discipline is discussed. Some recommendations for the application of the experimental process in software engineering are included.",
"title": ""
},
{
"docid": "80d4f6a622edea6530ffc7e29590af74",
"text": "Data protection is the process of backing up data in case of a data loss event. It is one of the most critical routine activities for every organization. Detecting abnormal backup jobs is important to prevent data protection failures and ensure the service quality. Given the large scale backup endpoints and the variety of backup jobs, from a backup-as-a-service provider viewpoint, we need a scalable and flexible outlier detection method that can model a huge number of objects and well capture their diverse patterns. In this paper, we introduce H2O, a novel hybrid and hierarchical method to detect outliers from millions of backup jobs for large scale data protection. Our method automatically selects an ensemble of outlier detection models for each multivariate time series composed by the backup metrics collected for each backup endpoint by learning their exhibited characteristics. Interactions among multiple variables are considered to better detect true outliers and reduce false positives. In particular, a new seasonal-trend decomposition based outlier detection method is developed, considering the interactions among variables in the form of common trends, which is robust to the presence of outliers in the training data. The model selection process is hierarchical, following a global to local fashion. The final outlier is determined through an ensemble learning by multiple models. Built on top of Apache Spark, H2O has been deployed to detect outliers in a large and complex data protection environment with more than 600,000 backup endpoints and 3 million daily backup jobs. To the best of our knowledge, this is the first work that selects and constructs large scale outlier detection models for multivariate time series on big data platforms.",
"title": ""
},
{
"docid": "d3482af2e92510c03cf4762bf7815811",
"text": "Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and topdown attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, improving the best published result in terms of CIDEr score from 114.7 to 117.9 and BLEU-4 from 35.2 to 36.9. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain a new state-of-the-art on the VQA v2.0 dataset with 70.2% overall accuracy.",
"title": ""
},
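The captioning/VQA record above combines bottom-up region proposals with top-down feature weighting. The sketch below reduces that to its simplest form: scoring region features against a task query and taking a softmax-weighted sum. The single projection matrix W is a simplification I introduce for illustration; the paper's scoring network is more elaborate, and the region features would come from a detector rather than random arrays.

```python
import numpy as np

def top_down_attention(region_feats, query, W):
    """Weight bottom-up region features with a top-down query.

    region_feats: (k, d) feature vectors for k detected image regions
    query:        (q,)   task context, e.g. a partial-caption or question encoding
    W:            (q, d) projection mapping the query into the region feature space
    Returns the attended feature vector (d,) and the attention weights (k,).
    """
    scores = region_feats @ (W.T @ query)   # one unnormalized score per region
    scores -= scores.max()
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ region_feats, alpha
```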
{
"docid": "0ea239ac71e65397d0713fe8c340f67c",
"text": "Mutations in leucine-rich repeat kinase 2 (LRRK2) are a common cause of familial and sporadic Parkinson's disease (PD). Elevated LRRK2 kinase activity and neurodegeneration are linked, but the phosphosubstrate that connects LRRK2 kinase activity to neurodegeneration is not known. Here, we show that ribosomal protein s15 is a key pathogenic LRRK2 substrate in Drosophila and human neuron PD models. Phosphodeficient s15 carrying a threonine 136 to alanine substitution rescues dopamine neuron degeneration and age-related locomotor deficits in G2019S LRRK2 transgenic Drosophila and substantially reduces G2019S LRRK2-mediated neurite loss and cell death in human dopamine and cortical neurons. Remarkably, pathogenic LRRK2 stimulates both cap-dependent and cap-independent mRNA translation and induces a bulk increase in protein synthesis in Drosophila, which can be prevented by phosphodeficient T136A s15. These results reveal a novel mechanism of PD pathogenesis linked to elevated LRRK2 kinase activity and aberrant protein synthesis in vivo.",
"title": ""
},
{
"docid": "b776b58f6f78e77c81605133c6e4edce",
"text": "The phase response of noisy speech has largely been ignored, but recent research shows the importance of phase for perceptual speech quality. A few phase enhancement approaches have been developed. These systems, however, require a separate algorithm for enhancing the magnitude response. In this paper, we present a novel framework for performing monaural speech separation in the complex domain. We show that much structure is exhibited in the real and imaginary components of the short-time Fourier transform, making the complex domain appropriate for supervised estimation. Consequently, we define the complex ideal ratio mask (cIRM) that jointly enhances the magnitude and phase of noisy speech. We then employ a single deep neural network to estimate both the real and imaginary components of the cIRM. The evaluation results show that complex ratio masking yields high quality speech enhancement, and outperforms related methods that operate in the magnitude domain or separately enhance magnitude and phase.",
"title": ""
},
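The speech-separation record above defines the complex ideal ratio mask. A minimal sketch of that definition follows: the mask M satisfies S ≈ M · Y elementwise in the complex STFT domain, so its real and imaginary parts become the DNN training targets. The paper additionally compresses the mask before training, which is omitted here, and the epsilon guard is my own numerical convenience.

```python
import numpy as np

def complex_ideal_ratio_mask(S, Y, eps=1e-8):
    """cIRM components such that S is approximately M * Y elementwise.

    S, Y: complex STFTs of clean and noisy speech, same shape.
    Returns the real and imaginary mask components.
    """
    M = S / (Y + eps)           # elementwise complex division
    return M.real, M.imag

def apply_mask(M_real, M_imag, Y):
    # Enhance a noisy STFT with an (estimated) complex mask.
    return (M_real + 1j * M_imag) * Y
```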
{
"docid": "4820b3bfcf8c75011f5f5e1345be39c6",
"text": "In this paper we introduce visual phrases, complex visual composites like “a person riding a horse”. Visual phrases often display significantly reduced visual complexity compared to their component objects, because the appearance of those objects can change profoundly when they participate in relations. We introduce a dataset suitable for phrasal recognition that uses familiar PASCAL object categories, and demonstrate significant experimental gains resulting from exploiting visual phrases. We show that a visual phrase detector significantly outperforms a baseline which detects component objects and reasons about relations, even though visual phrase training sets tend to be smaller than those for objects. We argue that any multi-class detection system must decode detector outputs to produce final results; this is usually done with non-maximum suppression. We describe a novel decoding procedure that can account accurately for local context without solving difficult inference problems. We show this decoding procedure outperforms the state of the art. Finally, we show that decoding a combination of phrasal and object detectors produces real improvements in detector results.",
"title": ""
},
{
"docid": "fabcb243bff004279cfb5d522a7bed4b",
"text": "Vein pattern is the network of blood vessels beneath person’s skin. Vein patterns are sufficiently different across individuals, and they are stable unaffected by ageing and no significant changed in adults by observing. It is believed that the patterns of blood vein are unique to every individual, even among twins. Finger vein authentication technology has several important features that set it apart from other forms of biometrics as a highly secure and convenient means of personal authentication. This paper presents a finger-vein image matching method based on minutiae extraction and curve analysis. This proposed system is implemented in MATLAB. Experimental results show that the proposed method performs well in improving finger-vein matching accuracy.",
"title": ""
},
{
"docid": "d95b182517307844faa458e3f4edf0ab",
"text": "Scilab and Scicos are open-source and free software packages for design, simulation and realization of industrial process control systems. They can be used as the center of an integrated platform for the complete development process, including running controller with real plant (ScicosHIL: Hardware In the Loop) and automatic code generation for real time embedded platforms (Linux, RTAI/RTAI-Lab, RTAIXML/J-RTAI-Lab). These tools are mature, working alternatives to closed source, proprietary solutions for educational, academic, research and industrial applications. We present, using a working example, a complete development chain, from the design tools to the automatic code generation of stand alone embedded control and user interface program.",
"title": ""
},
{
"docid": "0141a93f93a7cf3c8ee8fd705b0a9657",
"text": "We systematically explore regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. Furthermore, we connect a maximum entropy based confidence penalty to label smoothing through the direction of the KL divergence. We exhaustively evaluate the proposed confidence penalty and label smoothing on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT’14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and the confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyperparameters, suggesting the wide applicability of these regularizers.",
"title": ""
},
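The regularization record above penalizes low-entropy (over-confident) output distributions. The sketch below shows the per-example loss that idea implies, cross-entropy minus a weighted entropy term, in plain NumPy; the beta value and the single-example formulation are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def confidence_penalty_loss(logits, label, beta=0.1):
    """Cross-entropy plus a penalty on over-confident (low-entropy) predictions.

    logits: (C,) unnormalized class scores; label: int class index; beta: penalty weight.
    """
    z = logits - logits.max()
    log_p = z - np.log(np.exp(z).sum())
    p = np.exp(log_p)
    cross_entropy = -log_p[label]
    entropy = -(p * log_p).sum()
    return cross_entropy - beta * entropy   # subtracting beta*H(p) rewards higher entropy
```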
{
"docid": "a99785b0563ca5922da304f69aa370c0",
"text": "Marcel Fritz, Christian Schlereth, Stefan Figge Empirical Evaluation of Fair Use Flat Rate Strategies for Mobile Internet The fair use flat rate is a promising tariff concept for the mobile telecommunication industry. Similar to classical flat rates it allows unlimited usage at a fixed monthly fee. Contrary to classical flat rates it limits the access speed once a certain usage threshold is exceeded. Due to the current global roll-out of the LTE (Long Term Evolution) technology and the related economic changes for telecommunication providers, the application of fair use flat rates needs a reassessment. We therefore propose a simulation model to evaluate different pricing strategies and their contribution margin impact. The key input element of the model is provided by socalled discrete choice experiments that allow the estimation of customer preferences. Based on this customer information and the simulation results, the article provides the following recommendations. Classical flat rates do not allow profitable provisioning of mobile Internet access. Instead, operators should apply fair use flat rates with a lower usage threshold of 1 or 3 GB which leads to an improved contribution margin. Bandwidth and speed are secondary and do merely impact customer preferences. The main motivation for new mobile technologies such as LTE should therefore be to improve the cost structure of an operator rather than using it to skim an assumed higher willingness to pay of mobile subscribers.",
"title": ""
},
{
"docid": "844c75292441af560ed2d2abc1d175f6",
"text": "Completion rates for massive open online classes (MOOCs) are notoriously low, but learner intent is an important factor. By studying students who drop out despite their intent to complete the MOOC, it may be possible to develop interventions to improve retention and learning outcomes. Previous research into predicting MOOC completion has focused on click-streams, demographics, and sentiment analysis. This study uses natural language processing (NLP) to examine if the language in the discussion forum of an educational data mining MOOC is predictive of successful class completion. The analysis is applied to a subsample of 320 students who completed at least one graded assignment and produced at least 50 words in discussion forums. The findings indicate that the language produced by students can predict with substantial accuracy (67.8 %) whether students complete the MOOC. This predictive power suggests that NLP can help us both to understand student retention in MOOCs and to develop automated signals of student success.",
"title": ""
},
{
"docid": "f1a162f64838817d78e97a3c3087fae4",
"text": "Most literature on support vector machines (SVMs) concentrates on the dual optimization problem. In this letter, we point out that the primal problem can also be solved efficiently for both linear and nonlinear SVMs and that there is no reason for ignoring this possibility. On the contrary, from the primal point of view, new families of algorithms for large-scale SVM training can be investigated.",
"title": ""
},
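The SVM record above argues that the primal problem can be optimized directly for both linear and nonlinear SVMs. As a hedged illustration (the paper discusses more sophisticated primal solvers; this is only a subgradient sketch of the same regularized hinge-loss objective, with names and hyperparameters of my own choosing):

```python
import numpy as np

def train_primal_linear_svm(X, y, lam=0.01, lr=0.1, epochs=100):
    """Linear SVM trained directly in the primal via subgradient descent.

    X: (n, d) features; y: (n,) labels in {-1, +1}.
    Minimizes  lam/2 * ||w||^2 + mean(max(0, 1 - y * (X @ w))).
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1                          # points violating the margin
        grad = lam * w - (X[active].T @ y[active]) / n
        w -= lr * grad
    return w
```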
{
"docid": "e29774fe6bd529b769faca8e54202be1",
"text": "The main objective of this research is to develop a n Intelligent System using data mining modeling tec hnique, namely, Naive Bayes. It is implemented as web based applica tion in this user answers the predefined questions. It retrieves hidden data from stored database and compares the u er values with trained data set. It can answer com plex queries for diagnosing heart disease and thus assist healthcare practitioners to make intelligent clinical decisio ns which traditional decision support systems cannot. By providing effec tiv treatments, it also helps to reduce treatment cos s. Keyword: Data mining Naive bayes, heart disease, prediction",
"title": ""
},
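The heart-disease record above describes a Naive Bayes classifier behind a questionnaire-style web application. A minimal stand-in using scikit-learn follows; the attribute set and every number below are invented for illustration and are not the system's actual training data or feature encoding.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy stand-in for stored patient records: [age, resting_bp, cholesterol, max_heart_rate]
X_train = np.array([[63, 145, 233, 150],
                    [37, 130, 250, 187],
                    [56, 120, 236, 178],
                    [57, 140, 241, 123]])
y_train = np.array([1, 0, 0, 1])        # 1 = heart disease, 0 = healthy

model = GaussianNB().fit(X_train, y_train)
new_patient = np.array([[52, 138, 230, 160]])
print(model.predict(new_patient), model.predict_proba(new_patient))
```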
{
"docid": "2795c78d2e81a064173f49887c9b1bb1",
"text": "This paper reports a continuously tunable lumped bandpass filter implemented in a third-order coupled resonator configuration. The filter is fabricated on a Borosilicate glass substrate using a surface micromachining technology that offers hightunable passive components. Continuous electrostatic tuning is achieved using three tunable capacitor banks, each consisting of one continuously tunable capacitor and three switched capacitors with pull-in voltage of less than 40 V. The center frequency of the filter is tuned from 1 GHz down to 600 MHz while maintaining a 3-dB bandwidth of 13%-14% and insertion loss of less than 4 dB. The maximum group delay is less than 10 ns across the entire tuning range. The temperature stability of the center frequency from -50°C to 50°C is better than 2%. The measured tuning speed of the filter is better than 80 s, and the is better than 20 dBm, which are in good agreement with simulations. The filter occupies a small size of less than 1.5 cm × 1.1 cm. The implemented filter shows the highest performance amongst the fully integrated microelectromechanical systems filters operating at sub-gigahertz range.",
"title": ""
}
] |
scidocsrr
|
d7cc3c51c988267a4f7b77b225ebd4a0
|
A Vision for All-Spin Neural Networks: A Device to System Perspective
|
[
{
"docid": "75c5d060d99058585292a77a94e75dba",
"text": "In this paper, the recent progress of synaptic electronics is reviewed. The basics of biological synaptic plasticity and learning are described. The material properties and electrical switching characteristics of a variety of synaptic devices are discussed, with a focus on the use of synaptic devices for neuromorphic or brain-inspired computing. Performance metrics desirable for large-scale implementations of synaptic devices are illustrated. A review of recent work on targeted computing applications with synaptic devices is presented.",
"title": ""
}
] |
[
{
"docid": "187b77bfa5ac3110a7fd91ff17a1b456",
"text": "Educational games have enhanced the value of instruction procedures in institutions and business organizations. Factors that increase students’ adoption of learning games have been widely studied in past; however, the effect of these factors on learners’ performance is yet to be explored. In this study, factors of Enjoyment, Happiness, and Intention to Use were chosen as important attitudes in learning educational games and increasing learning performance. A two-step between group experiment was conducted: the first study compared game-based learning and traditional instruction in order to verify the value of the game. 41 Gymnasium (middle school) students were involved, and the control and experimental groups were formed based on a pretest method. The second study, involving 46 Gymnasium students, empirically evaluates whether and how certain attitudinal factors affect learners’ performance. The results of the two-part experiment showed that a) the game demonstrated good performance (as compared to traditional instruction) concerning the gain of knowledge, b) learners’ enjoyment of the game has a significant relation with their performance, and c) learners’ intention to use and happiness with the game do not have any relation with their performance. Our results suggest that there are attitudinal factors affecting knowledge acquisition gained by a game. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7b6b2a376cae230c888c494f4668bdb0",
"text": "Behavioral studies suggest that children under age 10 process faces using a piecemeal strategy based on individual distinctive facial features, whereas older children use a configural strategy based on the spatial relations among the face's features. The purpose of this study was to determine whether activation of the fusiform gyrus, which is involved in face processing in adults, is greater during face processing in older children (1214 years) than in younger children (8 10 years). Functional MRI scans were obtained while children viewed faces and houses. A developmental change was observed: Older children, but not younger children, showed significantly more activation in bilateral fusiform gyri for faces than for houses. Activation in the fusiform gyrus correlated significantly with age and with a behavioral measure of configural face processing. Regions believed to be involved in processing basic facial features were activated in both younger and older children. Some evidence was also observed for greater activation for houses versus faces for the older children than for the younger children, suggesting that processing of these two stimulus types becomes more differentiated as children age. The current results provide biological insight into changes in visual processing of faces that occur with normal development.",
"title": ""
},
{
"docid": "0e48de6dc8d1f51eb2a7844d4d67b8f5",
"text": "Vygotsky asserted that the student who had mastered algebra had attained “a new higher plane of thought”, a level of abstraction and generalization which transformed the meaning of the lower (arithmetic) level. He also affirmed the importance of the mastery of scientific concepts for the development of the ability to think theoretically, and emphasized the mediating role of semiotic forms and symbol systems in developing this ability. Although historically in mathematics and traditionally in education, algebra followed arithmetic, Vygotskian theory supports the reversal of this sequence in the service of orienting children to the most abstract and general level of understanding initially. This organization of learning activity for the development of algebraic thinking is very different from the introduction of elements of algebra into the study of arithmetic in the early grades. The intended theoretical (algebraic) understanding is attained through appropriation of psychological tools, in the form of specially designed schematics, whose mastery is not merely incidental to but the explicit focus of instruction. The author’s research in implementing Davydov’s Vygotskian-based elementary mathematics curriculum in the U.S. suggests that these characteristics function synergistically to develop algebraic understanding and computational competence as well. Kurzreferat: Vygotsky ging davon aus, dass Lernende, denen es gelingt, Algebra zu beherrschen, „ein höheres gedankliches Niveau” erreicht hätten, eine Ebene von Abstraktion und Generalisierung, welche die Bedeutung der niederen (arithmetischen) Ebene verändert. Er bestätigte auch die Relevanz der Beherrschung von wissenschaftlichen Begriffen für die Entwicklung der Fähigkeit, theoretisch zu denken und betonte dabei die vermittelnde Rolle von semiotischen Formen und Symbolsystemen für die Ausformung dieser Fähigkeit. Obwohl mathematik-his tor isch und t radi t ionel l erziehungswissenschaftlich betrachtet, Algebra der Arithmetik folgte, stützt Vygotski’s Theorie die Umkehrung dieser Sequenz bei dem Bemühen, Kinder an das abstrakteste und allgemeinste Niveau des ersten Verstehens heranzuführen. Diese Organisation von Lernaktivitäten für die Ausbildung algebraischen Denkens unterscheidet sich erheblich von der Einführung von Algebra-Elementen in das Lernen von Arithmetik während der ersten Schuljahre. Das beabsichtigte theoretische (algebraische) Verstehen wird erreicht durch die Aneignung psychologischer Mittel, und zwar in Form von dafür speziell entwickelten Schemata, deren Beherrschung nicht nur beiläufig erfolgt, sondern Schwerpunkt des Unterrichts ist. Die im Beitrag beschriebenen Forschungen zur Implementierung von Davydov’s elementarmathematischen Curriculum in den Vereinigten Staaten, das auf Vygotsky basiert, legt die Vermutung nahe, dass diese Charakteristika bei der Entwicklung von algebraischem Verstehen und von Rechenkompetenzen synergetisch funktionieren. ZDM-Classification: C30, D30, H20 l. Historical Context Russian psychologist Lev Vygotsky stated clearly his perspective on algebraic thinking. Commenting on its development within the structure of the Russian curriculum in the early decades of the twentieth century,",
"title": ""
},
{
"docid": "8503c9989f9706805a74bbd5c964ab07",
"text": "Since the phenomenon of cloud computing was proposed, there is an unceasing interest for research across the globe. Cloud computing has been seen as unitary of the technology that poses the next-generation computing revolution and rapidly becomes the hottest topic in the field of IT. This fast move towards Cloud computing has fuelled concerns on a fundamental point for the success of information systems, communication, virtualization, data availability and integrity, public auditing, scientific application, and information security. Therefore, cloud computing research has attracted tremendous interest in recent years. In this paper, we aim to precise the current open challenges and issues of Cloud computing. We have discussed the paper in three-fold: first we discuss the cloud computing architecture and the numerous services it offered. Secondly we highlight several security issues in cloud computing based on its service layer. Then we identify several open challenges from the Cloud computing adoption perspective and its future implications. Finally, we highlight the available platforms in the current era for cloud research and development.",
"title": ""
},
{
"docid": "13ac8eddda312bd4ef3ba194c076a6ea",
"text": "With the Yahoo Flickr Creative Commons 100 Million (YFCC100m) dataset, a novel dataset was introduced to the computer vision and multimedia research community. To maximize the benefit for the research community and utilize its potential, this dataset has to be made accessible by tools allowing to search for target concepts within the dataset and mechanism to browse images and videos of the dataset. Following best practice from data collections, such as ImageNet and MS COCO, this paper presents means of accessibility for the YFCC100m dataset. This includes a global analysis of the dataset and an online browser to explore and investigate subsets of the dataset in real-time. Providing statistics of the queried images and videos will enable researchers to refine their query successively, such that the users desired subset of interest can be narrowed down quickly. The final set of image and video can be downloaded as URLs from the browser for further processing.",
"title": ""
},
{
"docid": "72f7c13f21c047e4dcdf256fbbbe1b74",
"text": "Programming by Examples (PBE) has the potential to revolutionize end-user programming by enabling end users, most of whom are non-programmers, to create small scripts for automating repetitive tasks. However, examples, though often easy to provide, are an ambiguous specification of the user's intent. Because of that, a key impedance in adoption of PBE systems is the lack of user confidence in the correctness of the program that was synthesized by the system. We present two novel user interaction models that communicate actionable information to the user to help resolve ambiguity in the examples. One of these models allows the user to effectively navigate between the huge set of programs that are consistent with the examples provided by the user. The other model uses active learning to ask directed example-based questions to the user on the test input data over which the user intends to run the synthesized program. Our user studies show that each of these models significantly reduces the number of errors in the performed task without any difference in completion time. Moreover, both models are perceived as useful, and the proactive active-learning based model has a slightly higher preference regarding the users' confidence in the result.",
"title": ""
},
{
"docid": "d0ea12e1aa134a1b033d78e9c94ff59e",
"text": "This introductory tutorial presents an over view of the process of conducting a simulation study of any discrete system. The basic viewpoint is that conducting such a study requires both art and science. Some of the issues addressed are how to get started, the steps to be followed, the issues to be faced at each step, the potential pitfalls occurring at each step, and the most common causes of failures.",
"title": ""
},
{
"docid": "17c8766c5fcc9b6e0d228719291dcea5",
"text": "In this study we examined the social behaviors of 4- to 12-year-old children with autism spectrum disorders (ASD; N = 24) during three tradic interactions with an adult confederate and an interaction partner, where the interaction partner varied randomly among (1) another adult human, (2) a touchscreen computer game, and (3) a social dinosaur robot. Children spoke more in general, and directed more speech to the adult confederate, when the interaction partner was a robot, as compared to a human or computer game interaction partner. Children spoke as much to the robot as to the adult interaction partner. This study provides the largest demonstration of social human-robot interaction in children with autism to date. Our findings suggest that social robots may be developed into useful tools for social skills and communication therapies, specifically by embedding social interaction into intrinsic reinforcers and motivators.",
"title": ""
},
{
"docid": "8b9a7201d3b0ea20705c8aea7751d59f",
"text": "Positive patient outcomes require effective teamwork, communication, and technological literacy. These skills vary among the unprecedented five generations in the nursing workforce, spanning the \"Silent Generation\" nurses deferring retirement to the newest \"iGeneration.\" Nursing professional development educators must understand generational differences; address communication, information technology, and team-building competencies across generations; and promote integration of learner-centered strategies into professional development activities.",
"title": ""
},
{
"docid": "a33ea513421e67eecde4d62de5712531",
"text": "In this paper, we set out to develop a theoretical and conceptual framework for the new field of Radical Embodied Cognitive Neuroscience. This framework should be able to integrate insights from several relevant disciplines: theory on embodied cognition, ecological psychology, phenomenology, dynamical systems theory, and neurodynamics. We suggest that the main task of Radical Embodied Cognitive Neuroscience is to investigate the phenomenon of skilled intentionality from the perspective of the self-organization of the brain-body-environment system, while doing justice to the phenomenology of skilled action. In previous work, we have characterized skilled intentionality as the organism's tendency toward an optimal grip on multiple relevant affordances simultaneously. Affordances are possibilities for action provided by the environment. In the first part of this paper, we introduce the notion of skilled intentionality and the phenomenon of responsiveness to a field of relevant affordances. Second, we use Friston's work on neurodynamics, but embed a very minimal version of his Free Energy Principle in the ecological niche of the animal. Thus amended, this principle is helpful for understanding the embeddedness of neurodynamics within the dynamics of the system \"brain-body-landscape of affordances.\" Next, we show how we can use this adjusted principle to understand the neurodynamics of selective openness to the environment: interacting action-readiness patterns at multiple timescales contribute to the organism's selective openness to relevant affordances. In the final part of the paper, we emphasize the important role of metastable dynamics in both the brain and the brain-body-environment system for adequate affordance-responsiveness. We exemplify our integrative approach by presenting research on the impact of Deep Brain Stimulation on affordance responsiveness of OCD patients.",
"title": ""
},
{
"docid": "70a94ef8bf6750cdb4603b34f0f1f005",
"text": "What does this paper demonstrate. We show that a very simple 2D architecture (in the sense that it does not make any assumption or reasoning about the 3D information of the object) generally used for object classification, if properly adapted to the specific task, can provide top performance also for pose estimation. More specifically, we demonstrate how a 1-vs-all classification framework based on a Fisher Vector (FV) [1] pyramid or convolutional neural network (CNN) based features [2] can be used for pose estimation. In addition, suppressing neighboring viewpoints during training seems key to get good results.",
"title": ""
},
{
"docid": "fdd4295dc3be3ec06c1785f3bdadd00e",
"text": "The paper presents a method for automatically detecting pallets and estimating their position and orientation. For detection we use a sliding window approach with efficient candidate generation, fast integral features and a boosted classifier. Specific information regarding the detection task such as region of interest, pallet dimensions and pallet structure can be used to speed up and validate the detection process. Stereo reconstruction is employed for depth estimation by applying Semi-Global Matching aggregation with Census descriptors. Offline test results show that successful detection is possible under 0.5 seconds.",
"title": ""
},
{
"docid": "35c7cb759c1ee8e7f547d9789e74b0f0",
"text": "This research investigates an axial flux single-rotor single-stator asynchronous motor (AFAM) with aluminum and copper cage windings. In order to avoid using die casting of the rotor cage winding an open rotor slot structure was implemented. In future, this technique allows using copper cage winding avoiding critically high temperature treatment as in the die casting processing of copper material. However, an open slot structure leads to a large equivalent air gap length. Therefore, semi-magnetic wedges should be used to reduce the effect of open slots and consequently to improve the machine performance. The paper aims to investigate the feasibility of using open slot rotor structure (for avoiding die casting) and impact of semi-magnetic wedges to eliminate negative effects of open slots. The results were mainly obtained by 2D finite element method (FEM) simulations. Measurement results of mechanical performance of the prototype (with aluminum cage winding) given in the paper prove the simulated results.",
"title": ""
},
{
"docid": "d439d9562a7643d6441710a95c7ab2db",
"text": "This document reviews 1) the measurement properties of commonly used exercise tests in patients with chronic respiratory diseases and 2) published studies on their utilty and/or evaluation obtained from MEDLINE and Cochrane Library searches between 1990 and March 2015.Exercise tests are reliable and consistently responsive to rehabilitative and pharmacological interventions. Thresholds for clinically important changes in performance are available for several tests. In pulmonary arterial hypertension, the 6-min walk test (6MWT), peak oxygen uptake and ventilation/carbon dioxide output indices appear to be the variables most responsive to vasodilators. While bronchodilators do not always show clinically relevant effects in chronic obstructive pulmonary disease, high-intensity constant work-rate (endurance) tests (CWRET) are considerably more responsive than incremental exercise tests and 6MWTs. High-intensity CWRETs need to be standardised to reduce interindividual variability. Additional physiological information and responsiveness can be obtained from isotime measurements, particularly of inspiratory capacity and dyspnoea. Less evidence is available for the endurance shuttle walk test. Although the incremental shuttle walk test and 6MWT are reliable and less expensive than cardiopulmonary exercise testing, two repetitions are needed at baseline. All exercise tests are safe when recommended precautions are followed, with evidence suggesting that no test is safer than others.",
"title": ""
},
{
"docid": "d655222bf22e35471b18135b67326ac5",
"text": "In this paper we approach the robust motion planning problem through the lens of perception-aware planning, whereby we seek a low-cost motion plan subject to a separate constraint on perception localization quality. To solve this problem we introduce the Multiobjective Perception-Aware Planning (MPAP) algorithm which explores the state space via a multiobjective search, considering both cost and a perception heuristic. This perception-heuristic formulation allows us to both capture the history dependence of localization drift and represent complex modern perception methods. The solution trajectory from this heuristic-based search is then certified via Monte Carlo methods to be robust. The additional computational burden of perception-aware planning is offset through massive parallelization on a GPU. Through numerical experiments the algorithm is shown to find robust solutions in about a second. Finally, we demonstrate MPAP on a quadrotor flying perceptionaware and perception-agnostic plans using Google Tango for localization, finding the quadrotor safely executes the perception-aware plan every time, while crashing over 20% of the time on the perception-agnostic due to loss of localization.",
"title": ""
},
{
"docid": "e3c3f3fb3dd432017bf92e0fe5f7c341",
"text": "This study aimed to evaluate the accuracy of intraoral scanners in full-arch scans. A representative model with 14 prepared abutments was digitized using an industrial scanner (reference scanner) as well as four intraoral scanners (iTero, CEREC AC Bluecam, Lava C.O.S., and Zfx IntraScan). Datasets obtained from different scans were loaded into 3D evaluation software, superimposed, and compared for accuracy. One-way analysis of variance (ANOVA) was implemented to compute differences within groups (precision) as well as comparisons with the reference scan (trueness). A level of statistical significance of p < 0.05 was set. Mean trueness values ranged from 38 to 332.9 μm. Data analysis yielded statistically significant differences between CEREC AC Bluecam and other scanners as well as between Zfx IntraScan and Lava C.O.S. Mean precision values ranged from 37.9 to 99.1 μm. Statistically significant differences were found between CEREC AC Bluecam and Lava C.O.S., CEREC AC Bluecam and iTero, Zfx Intra Scan and Lava C.O.S., and Zfx Intra Scan and iTero (p < 0.05). Except for one intraoral scanner system, all tested systems showed a comparable level of accuracy for full-arch scans of prepared teeth. Further studies are needed to validate the accuracy of these scanners under clinical conditions. Despite excellent accuracy in single-unit scans having been demonstrated, little is known about the accuracy of intraoral scanners in simultaneous scans of multiple abutments. Although most of the tested scanners showed comparable values, the results suggest that the inaccuracies of the obtained datasets may contribute to inaccuracies in the final restorations.",
"title": ""
},
{
"docid": "82335fb368198a2cf7e3021627449058",
"text": "While cancer treatments are constantly advancing, there is still a real risk of relapse after potentially curative treatments. At the risk of adverse side effects, certain adjuvant treatments can be given to patients that are at high risk of recurrence. The challenge, however, is in finding the best tradeoff between these two extremes. Patients that are given more potent treatments, such as chemotherapy, radiation, or systemic treatment, can suffer unnecessary consequences, especially if the cancer does not return. Predictive modeling of recurrence can help inform patients and practitioners on a case-by-case basis, personalized for each patient. For large-scale predictive models to be built, structured data must be captured for a wide range of diverse patients. This paper explores current methods for building cancer recurrence risk models using structured clinical patient data.",
"title": ""
},
{
"docid": "cc52bb9210f400a42b0b8374dde374ab",
"text": "It is always well believed that modeling relationships between objects would be helpful for representing and eventually describing an image. Nevertheless, there has not been evidence in support of the idea on image description generation. In this paper, we introduce a new design to explore the connections between objects for image captioning under the umbrella of attention-based encoder-decoder framework. Specifically, we present Graph Convolutional Networks plus Long Short-Term Memory (dubbed as GCN-LSTM) architecture that novelly integrates both semantic and spatial object relationships into image encoder. Technically, we build graphs over the detected objects in an image based on their spatial and semantic connections. The representations of each region proposed on objects are then refined by leveraging graph structure through GCN. With the learnt region-level features, our GCN-LSTM capitalizes on LSTM-based captioning framework with attention mechanism for sentence generation. Extensive experiments are conducted on COCO image captioning dataset, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, GCN-LSTM increases CIDEr-D performance from 120.1% to 128.7% on COCO testing set.",
"title": ""
},
{
"docid": "3f1967d87d14a1ee652760929ed217d0",
"text": "Existing location-based social networks (LBSNs), e.g. Foursquare, depend mainly on GPS or network-based localization to infer users' locations. However, GPS is unavailable indoors and network-based localization provides coarse-grained accuracy. This limits the accuracy of current LBSNs in indoor environments, where people spend 89% of their time. This in turn affects the user experience, in terms of the accuracy of the ranked list of venues, especially for the small-screens of mobile devices; misses business opportunities; and leads to reduced venues coverage.\n In this paper, we present CheckInside: a system that can provide a fine-grained indoor location-based social network. CheckInside leverages the crowd-sensed data collected from users' mobile devices during the check-in operation and knowledge extracted from current LBSNs to associate a place with its name and semantic fingerprint. This semantic fingerprint is used to obtain a more accurate list of nearby places as well as automatically detect new places with similar signatures. A novel algorithm for handling incorrect check-ins and inferring a semantically-enriched floorplan is proposed as well as an algorithm for enhancing the system performance based on the user implicit feedback.\n Evaluation of CheckInside in four malls over the course of six weeks with 20 participants shows that it can provide the actual user location within the top five venues 99% of the time. This is compared to 17% only in the case of current LBSNs. In addition, it can increase the coverage of current LBSNs by more than 25%.",
"title": ""
},
{
"docid": "4177265c82598af21ba423f11d5e1640",
"text": "Existing self-report measures of schizotypal personality assess only one to three of the nine traits of schizotypal personality disorder. This study describes the development of the Schizotypal Personality Questionnaire (SPQ), a self-report scale modeled on DSM-III-R criteria for schizotypal personality disorder and containing subscales for all nine schizotypal traits. Two samples of normal subjects (n = 302 and n = 195) were used to test replicability of findings. The SPQ was found to have high sampling validity, high internal reliability (0.91), test-retest reliability (0.82), convergent validity (0.59 to 0.81), discriminant validity, and criterion validity (0.63, 0.68), findings which were replicated across samples. Fifty-five percent of subjects scoring in the top 10 percent of SPQ scores had a clinical diagnosis of schizotypal personality disorder. Thus, the SPQ may be useful in screening for schizotypal personality disorder in the general population and also in researching the correlates of individual schizotypal traits.",
"title": ""
}
] |
scidocsrr
|
1742ed9f6563c18fec89920566dca20f
|
Says Who...? Identification of Expert versus Layman Critics' Reviews of Documentary Films
|
[
{
"docid": "5f366ed9a90448be28c1ec9249b4ec96",
"text": "With the rapid growth of the Internet, the ability of users to create and publish content has created active electronic communities that provide a wealth of product information. However, the high volume of reviews that are typically published for a single product makes harder for individuals as well as manufacturers to locate the best reviews and understand the true underlying quality of a product. In this paper, we reexamine the impact of reviews on economic outcomes like product sales and see how different factors affect social outcomes such as their perceived usefulness. Our approach explores multiple aspects of review text, such as subjectivity levels, various measures of readability and extent of spelling errors to identify important text-based features. In addition, we also examine multiple reviewer-level features such as average usefulness of past reviews and the self-disclosed identity measures of reviewers that are displayed next to a review. Our econometric analysis reveals that the extent of subjectivity, informativeness, readability, and linguistic correctness in reviews matters in influencing sales and perceived usefulness. Reviews that have a mixture of objective, and highly subjective sentences are negatively associated with product sales, compared to reviews that tend to include only subjective or only objective information. However, such reviews are rated more informative (or helpful) by other users. By using Random Forest-based classifiers, we show that we can accurately predict the impact of reviews on sales and their perceived usefulness. We examine the relative importance of the three broad feature categories: “reviewer-related” features, “review subjectivity” features, and “review readability” features, and find that using any of the three feature sets results in a statistically equivalent performance as in the case of using all available features. This paper is the first study that integrates econometric, text mining, and predictive modeling techniques toward a more complete analysis of the information captured by user-generated online reviews in order to estimate their helpfulness and economic impact.",
"title": ""
}
] |
[
{
"docid": "349caca78b6d21b5f8853b41a8201429",
"text": "OBJECTIVE\nTo evaluate the effectiveness of a functional thumb orthosis on the dominant hand of patients with rheumatoid arthritis and boutonniere thumb.\n\n\nMETHODS\nForty patients with rheumatoid arthritis and boutonniere deformity of the thumb were randomly distributed into two groups. The intervention group used the orthosis daily and the control group used the orthosis only during the evaluation. Participants were evaluated at baseline as well as after 45 and 90 days. Assessments were preformed using the O'Connor Dexterity Test, Jamar dynamometer, pinch gauge, goniometry and the Health Assessment Questionnaire. A visual analogue scale was used to assess thumb pain in the metacarpophalangeal joint.\n\n\nRESULTS\nPatients in the intervention group experienced a statistically significant reduction in pain. The thumb orthosis did not disrupt grip and pinch strength, function, Health Assessment Questionnaire score or dexterity in either group.\n\n\nCONCLUSION\nThe use of thumb orthosis for type I and type II boutonniere deformities was effective in relieving pain.",
"title": ""
},
{
"docid": "2ddf38f09b92f5137f0a741d4a6e3004",
"text": "Supply chain management must adopt different and more innovative strategies that support a better response to customer needs in an uncertain environment. Supply chains must be more agile and be more capable of coping with disturbances, meaning that supply chains must be more resilient. The simultaneous deployment of agile and resilient approaches will enhance supply chain performance and competitiveness. Accordingly, the main objective of this paper is to propose a conceptual framework for the analysis of relationships between agile and resilient approaches, supply chain competitiveness and performance. Operational and economic performance measures are proposed to facilitate the monitoring of the influence of these practices on supply chain performance. The influence of the proposed agile and resilient practices on supply chain competitiveness is also examined in terms of time to market, product quality and customer service.",
"title": ""
},
{
"docid": "44665a3d2979031aca85010b9ad1ec90",
"text": "Studies in humans and non-human primates have provided evidence for storage of working memory contents in multiple regions ranging from sensory to parietal and prefrontal cortex. We discuss potential explanations for these distributed representations: (i) features in sensory regions versus prefrontal cortex differ in the level of abstractness and generalizability; and (ii) features in prefrontal cortex reflect representations that are transformed for guidance of upcoming behavioral actions. We propose that the propensity to produce persistent activity is a general feature of cortical networks. Future studies may have to shift focus from asking where working memory can be observed in the brain to how a range of specialized brain areas together transform sensory information into a delayed behavioral response.",
"title": ""
},
{
"docid": "49c8cd55ffc5de2fe6064837be2f9816",
"text": "L-theanine acid is an amino acid in tea which affects mental state directly. Along with other most popular tea types; white, green, and black tea, Oolong tea also has sufficient L-theanine to relax the human brain. It apparently can reduce the concern, blood pressure, dissolve the fat in the arteries, and especially slow aging by substances against free radicals. Therefore, this research study about the effect of L-theanine in Oolong Tea on human brain's attention focused on meditation during book reading state rely on each person by using electroencephalograph (EEG) and K-means clustering. An electrophysiological monitoring will properly measure the voltage fluctuation of Alpha rhythm for the understanding of higher attention processes of human brain precisely. K-means clustering investigates and defines that the group of converted waves data has a variable effective level rely on each classified group, which female with lower BMI has a higher effect on L-theanine than male apparently. In conclusion, the results promise the L-theanine significantly affects on meditation by increasing in Alpha waves on each person that beneficially supports production proven of Oolong tea in the future.",
"title": ""
},
{
"docid": "72a400edb24cafdc092e583777a19d21",
"text": "In this paper we propose a publicly available static hand pose database called OUHANDS and protocols for training and evaluating hand pose classification and hand detection methods. A comparison between the OUHANDS database and existing databases is given. Baseline results for both of the protocols are presented.",
"title": ""
},
{
"docid": "8e6debae3b3d3394e87e671a14f8819e",
"text": "Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.",
"title": ""
},
{
"docid": "7cf7a419cf681e9deea42d77e0e9cec2",
"text": "Industrial organizations use Energy Management Systems (EMS) to monitor, control, and optimize their energy consumption. Industrial EMS are complex and expensive systems due to the unique requirements of performance, reliability, and interoperability. Moreover, industry is facing challenges with current EMS implementations such as cross-site monitoring of energy consumption and CO2 emissions, integration between energy and production data, and meaningful energy efficiency benchmarking. Additionally, big data has emerged because of recent advances in field instrumentation that led to the generation of large quantities of machine data, with much more detail and higher sampling rates. This created a challenge for real-time analytics. In order to address all these needs and challenges, we propose a cloud-native industrial EMS solution with cloud computing capabilities. Through this innovative approach we expect to generate useful knowledge in a shorter time period, enabling organizations to react quicker to changes of events and detect hidden patterns that compromise efficiency.",
"title": ""
},
{
"docid": "69201195326d4e8c5cac61d817e4c1f2",
"text": "This paper focuses on the evaluation of theoretical and numerical aspects related to an original DC microgrid power architecture for efficient charging of plug-in electric vehicles (PEVs). The proposed DC microgrid is based on photovoltaic array (PVA) generation, electrochemical storage, and grid connection; it is assumed that PEVs have a direct access to their DC charger input. As opposed to conventional power architecture designs, the PVA is coupled directly on the DC link without a static converter, which implies no DC voltage stabilization, increasing energy efficiency, and reducing control complexity. Based on a real-time rule-based algorithm, the proposed power management allows self-consumption according to PVA power production and storage constraints, and the public grid is seen only as back-up. The first phase of modeling aims to evaluate the main energy flows within the proposed DC microgrid architecture and to identify the control structure and the power management strategies. For this, an original model is obtained by applying the Energetic Macroscopic Representation formalism, which allows deducing the control design using Maximum Control Structure. The second phase of simulation is based on the numerical characterization of the DC microgrid components and the energy management strategies, which consider the power source requirements, charging times of different PEVs, electrochemical storage ageing, and grid power limitations for injection mode. The simulation results show the validity of the model and the feasibility of the proposed DC microgrid power architecture which presents good performance in terms of total efficiency and simplified control. OPEN ACCESS Energies 2015, 8 4336",
"title": ""
},
{
"docid": "34e2eafd055e097e167afe7cb244f99b",
"text": "This paper describes the functional verification effort during a specific hardware development program that included three of the largest ASICs designed at Nortel. These devices marked a transition point in methodology as verification took front and centre on the critical path of the ASIC schedule. Both the simulation and emulation strategies are presented. The simulation methodology introduced new techniques such as ASIC sub-system level behavioural modeling, large multi-chip simulations, and random pattern simulations. The emulation strategy was based on a plan that consisted of integrating parts of the real software on the emulated system. This paper describes how these technologies were deployed, analyzes the bugs that were found and highlights the bottlenecks in functional verification as systems become more complex.",
"title": ""
},
{
"docid": "d6602316a4b1062c177b719fc4985084",
"text": "Agricultural residues, such as lignocellulosic materials (LM), are the most attractive renewable bioenergy sources and are abundantly found in nature. Anaerobic digestion has been extensively studied for the effective utilization of LM for biogas production. Experimental investigation of physiochemical changes that occur during pretreatment is needed for developing mechanistic and effective models that can be employed for the rational design of pretreatment processes. Various-cutting edge pretreatment technologies (physical, chemical and biological) are being tested on the pilot scale. These different pretreatment methods are widely described in this paper, among them, microaerobic pretreatment (MP) has gained attention as a potential pretreatment method for the degradation of LM, which just requires a limited amount of oxygen (or air) supplied directly during the pretreatment step. MP involves microbial communities under mild conditions (temperature and pressure), uses fewer enzymes and less energy for methane production, and is probably the most promising and environmentally friendly technique in the long run. Moreover, it is technically and economically feasible to use microorganisms instead of expensive chemicals, biological enzymes or mechanical equipment. The information provided in this paper, will endow readers with the background knowledge necessary for finding a promising solution to methane production.",
"title": ""
},
{
"docid": "85f126fe22e74e3f5b1f1ad3adec0036",
"text": "Debate is open as to whether social media communities resemble real-life communities, and to what extent. We contribute to this discussion by testing whether established sociological theories of real-life networks hold in Twitter. In particular, for 228,359 Twitter profiles, we compute network metrics (e.g., reciprocity, structural holes, simmelian ties) that the sociological literature has found to be related to parts of one’s social world (i.e., to topics, geography and emotions), and test whether these real-life associations still hold in Twitter. We find that, much like individuals in real-life communities, social brokers (those who span structural holes) are opinion leaders who tweet about diverse topics, have geographically wide networks, and express not only positive but also negative emotions. Furthermore, Twitter users who express positive (negative) emotions cluster together, to the extent of having a correlation coefficient between one’s emotions and those of friends as high as 0.45. Understanding Twitter’s social dynamics does not only have theoretical implications for studies of social networks but also has practical implications, including the design of self-reflecting user interfaces that make people aware of their emotions, spam detection tools, and effective marketing campaigns.",
"title": ""
},
{
"docid": "718261996473ce3c4c1d86719fd2ea1c",
"text": "Volcanic risk assessment using probabilistic models is increasingly desired for risk management, particularly for loss forecasting, critical infrastructure management, land-use planning and evacuation planning. Over the past decades this has motivated the development of comprehensive probabilistic hazard models. However, volcanic vulnerability models of equivalent sophistication have lagged behind hazard modelling because of the lack of evidence, data and, until recently, minimal demand. There is an increasingly urgent need for development of quantitative volcanic vulnerability models, including vulnerability and fragility functions, which provide robust quantitative relationships between volcanic impact (damage and disruption) and hazard intensity. The functions available to date predominantly quantify tephra fall impacts to buildings, driven by life safety concerns. We present a framework for establishing quantitative relationships between volcanic impact and hazard intensity, specifically through the derivation of vulnerability and fragility functions. We use tephra thickness and impacts to key infrastructure sectors as examples to demonstrate our framework. Our framework incorporates impact data sources, different impact intensity scales, preparation and fitting of data, uncertainty analysis and documentation. The primary data sources are post-eruption impact assessments, supplemented by laboratory experiments and expert judgment, with the latter drawing upon a wealth of semi-quantitative and qualitative studies. Different data processing and function fitting techniques can be used to derive functions; however, due to the small datasets currently available, simplified approaches are discussed. We stress that documentation of data processing, assumptions and limitations is the most important aspect of function derivation; documentation provides transparency and allows others to update functions more easily. Following our standardised approach, a volcanic risk scientist can derive a fragility or vulnerability function, which then can be easily compared to existing functions and updated as new data become available. To demonstrate how to apply our framework, we derive fragility and vulnerability functions for discrete tephra fall impacts to electricity supply, water supply, wastewater and transport networks. These functions present the probability of an infrastructure site or network component equalling or exceeding one of four impact states as a function of tephra thickness.",
"title": ""
},
{
"docid": "2a823b6ce1a761e5c2f50f7cd3cb11d7",
"text": "Retrieval practice has been shown to enhance later recall of information reviewed through testing, whereas final-test measures involving making inferences from the learned information have produced mixed results. In four experiments, we examined whether the benefits of retrieval practice could transfer to deductive inferences. Participants studied a set of related premises and then reviewed these premises either by rereading or by taking fill-in-the-blank tests. As was expected, the testing condition produced better final-test recall of the premises. However, performance on multiple-choice inference questions showed no enhancement from retrieval practice.",
"title": ""
},
{
"docid": "c62dfcc83ca24450ea1a7e12a17ac93e",
"text": "Lymphedema and lipedema are chronic progressive disorders for which no causal therapy exists so far. Many general practitioners will rarely see these disorders with the consequence that diagnosis is often delayed. The pathophysiological basis is edematization of the tissues. Lymphedema involves an impairment of lymph drainage with resultant fluid build-up. Lipedema arises from an orthostatic predisposition to edema in pathologically increased subcutaneous tissue. Treatment includes complex physical decongestion by manual lymph drainage and absolutely uncompromising compression therapy whether it is by bandage in the intensive phase to reduce edema or with a flat knit compression stocking to maintain volume.",
"title": ""
},
{
"docid": "28152cab5f477d9620edaab440467de2",
"text": "The ever-increasing density in cloud computing parties, i.e. users, services, providers and data centres, has led to a significant exponential growth in: data produced and transferred among the cloud computing parties; network traffic; and the energy consumed by the cloud computing massive infrastructure, which is required to respond quickly and effectively to users requests. Transferring big data volume among the aforementioned parties requires a high bandwidth connection, which consumes larger amounts of energy than just processing and storing big data on cloud data centres, and hence producing high carbon dioxide emissions. This power consumption is highly significant when transferring big data into a data centre located relatively far from the users geographical location. Thus, it became high-necessity to locate the lowest energy consumption route between the user and the designated data centre, while making sure the users requirements, e.g. response time, are met. The main contribution of this paper is GreeDi, a network-based routing algorithm to find the most energy efficient path to the cloud data centre for processing and storing big data. The algorithm is, first, formalised by the situation calculus. The linear, goal and dynamic programming approaches used to model the algorithm. The algorithm is then evaluated against the baseline shortest path algorithm with minimum number of nodes traversed, using a real Italian ISP physical network topology.",
"title": ""
},
{
"docid": "896fa229bd0ffe9ef6da9fbe0e0866e6",
"text": "In this paper, a cascaded current-voltage control strategy is proposed for inverters to simultaneously improve the power quality of the inverter local load voltage and the current exchanged with the grid. It also enables seamless transfer of the operation mode from stand-alone to grid-connected or vice versa. The control scheme includes an inner voltage loop and an outer current loop, with both controllers designed using the H∞ repetitive control strategy. This leads to a very low total harmonic distortion in both the inverter local load voltage and the current exchanged with the grid at the same time. The proposed control strategy can be used to single-phase inverters and three-phase four-wire inverters. It enables grid-connected inverters to inject balanced clean currents to the grid even when the local loads (if any) are unbalanced and/or nonlinear. Experiments under different scenarios, with comparisons made to the current repetitive controller replaced with a current proportional-resonant controller, are presented to demonstrate the excellent performance of the proposed strategy.",
"title": ""
},
{
"docid": "8c3ec9f28a21a5b1fb7b5b64bed2c49f",
"text": "While struggling to succeed in today’s complex market environment and provide better customer experience and services, enterprises encompass digital transformation as a means for reaching competitiveness and foster value creation. A digital transformation process consists of information technology implementation projects, as well as organizational factors such as top management support, digital transformation strategy, and organizational changes. However, to the best of our knowledge, there is little evidence about digital transformation endeavors in organizations and how they perceive it – is it only about digital technologies adoption or a true organizational shift is needed? In order to address this issue and as the first step in our research project, a literature review is conducted. The analysis included case study papers from Scopus and Web of Science databases. The following attributes are considered for classification and analysis of papers: time component; country of case origin; case industry and; digital transformation concept comprehension, i.e. focus. Research showed that organizations – public, as well as private ones, are aware of change necessity and employ digital transformation projects. Also, the changes concerning digital transformation affect both manufacturing and service-based industries. Furthermore, we discovered that organizations understand that besides technologies implementation, organizational changes must also be adopted. However, with only 29 relevant papers identified, research positioned digital transformation as an unexplored and emerging phenomenon in information systems research. The scarcity of evidence-based papers calls for further examination of this topic on cases from practice. Keywords—Digital strategy, digital technologies, digital transformation, literature review.",
"title": ""
},
{
"docid": "1c3b044d572509e14b11d2ec7cb6a566",
"text": "Animal models point towards a key role of brain-derived neurotrophic factor (BDNF), insulin-like growth factor-I (IGF-I) and vascular endothelial growth factor (VEGF) in mediating exercise-induced structural and functional changes in the hippocampus. Recently, also platelet derived growth factor-C (PDGF-C) has been shown to promote blood vessel growth and neuronal survival. Moreover, reductions of these neurotrophic and angiogenic factors in old age have been related to hippocampal atrophy, decreased vascularization and cognitive decline. In a 3-month aerobic exercise study, forty healthy older humans (60 to 77years) were pseudo-randomly assigned to either an aerobic exercise group (indoor treadmill, n=21) or to a control group (indoor progressive-muscle relaxation/stretching, n=19). As reported recently, we found evidence for fitness-related perfusion changes of the aged human hippocampus that were closely linked to changes in episodic memory function. Here, we test whether peripheral levels of BDNF, IGF-I, VEGF or PDGF-C are related to changes in hippocampal blood flow, volume and memory performance. Growth factor levels were not significantly affected by exercise, and their changes were not related to changes in fitness or perfusion. However, changes in IGF-I levels were positively correlated with hippocampal volume changes (derived by manual volumetry and voxel-based morphometry) and late verbal recall performance, a relationship that seemed to be independent of fitness, perfusion or their changes over time. These preliminary findings link IGF-I levels to hippocampal volume changes and putatively hippocampus-dependent memory changes that seem to occur over time independently of exercise. We discuss methodological shortcomings of our study and potential differences in the temporal dynamics of how IGF-1, VEGF and BDNF may be affected by exercise and to what extent these differences may have led to the negative findings reported here.",
"title": ""
},
{
"docid": "7c666c07fffbd63e17470a74535d4c53",
"text": "A review of the diverse roles of entropy and the second law in computationalthermo– uid dynamics is presented. Entropy computations are related to numerical error, convergence criteria, time-step limitations, and other signi cant aspects of computational uid ow and heat transfer. The importance of the second law as a tool for estimating error bounds and the overall scheme’s robustness is described. As computational methods become more reliable and accurate, emerging applications involving the second law in the design of engineering thermal uid systems are described. Sample numerical results are presented and discussed for a multitude of applications in compressible ows, as well as problems with phase change heat transfer. Advantages and disadvantages of different entropy-based methods are discussed, as well as areas of importance suggested for future research.",
"title": ""
},
{
"docid": "58da9f4a32fe0ea42d12718ff825b9b2",
"text": "Electroencephalography (EEG) is one fundamental tool for functional brain imaging. EEG signals tend to be represented by a vector or a matrix to facilitate data processing and analysis with generally understood methodologies like time-series analysis, spectral analysis and matrix decomposition. Indeed, EEG signals are often naturally born with more than two modes of time and space, and they can be denoted by a multi-way array called as tensor. This review summarizes the current progress of tensor decomposition of EEG signals with three aspects. The first is about the existing modes and tensors of EEG signals. Second, two fundamental tensor decomposition models, canonical polyadic decomposition (CPD, it is also called parallel factor analysis-PARAFAC) and Tucker decomposition, are introduced and compared. Moreover, the applications of the two models for EEG signals are addressed. Particularly, the determination of the number of components for each mode is discussed. Finally, the N-way partial least square and higher-order partial least square are described for a potential trend to process and analyze brain signals of two modalities simultaneously.",
"title": ""
}
] |
scidocsrr
|
312d593cbc1dcc3054d435b97ee9b95c
|
Fast and Space-Efficient Entity Linking for Queries
|
[
{
"docid": "bb85695b909f2c1e2274fc423ce1defc",
"text": "Understanding the intent behind a user's query can help search engine to automatically route the query to some corresponding vertical search engines to obtain particularly relevant contents, thus, greatly improving user satisfaction. There are three major challenges to the query intent classification problem: (1) Intent representation; (2) Domain coverage and (3) Semantic interpretation. Current approaches to predict the user's intent mainly utilize machine learning techniques. However, it is difficult and often requires many human efforts to meet all these challenges by the statistical machine learning approaches. In this paper, we propose a general methodology to the problem of query intent classification. With very little human effort, our method can discover large quantities of intent concepts by leveraging Wikipedia, one of the best human knowledge base. The Wikipedia concepts are used as the intent representation space, thus, each intent domain is represented as a set of Wikipedia articles and categories. The intent of any input query is identified through mapping the query into the Wikipedia representation space. Compared with previous approaches, our proposed method can achieve much better coverage to classify queries in an intent domain even through the number of seed intent examples is very small. Moreover, the method is very general and can be easily applied to various intent domains. We demonstrate the effectiveness of this method in three different applications, i.e., travel, job, and person name. In each of the three cases, only a couple of seed intent queries are provided. We perform the quantitative evaluations in comparison with two baseline methods, and the experimental results shows that our method significantly outperforms other methods in each intent domain.",
"title": ""
},
{
"docid": "6308acaaf504f358de89c75b12b141fc",
"text": "A seed-based framework for textual information extraction allows for weakly supervised extraction of named entities from anonymized Web search queries. The extraction is guided by a small set of seed named entities, without any need for handcrafted extraction patterns or domain-specific knowledge, allowing for the acquisition of named entities pertaining to various classes of interest to Web search users. Inherently noisy search queries are shown to be a highly valuable, albeit little explored, resource for Web-based named entity discovery.",
"title": ""
}
] |
[
{
"docid": "364eb800261105453f36b005ba1faf68",
"text": "This article presents empirically-based large-scale propagation path loss models for fifth-generation cellular network planning in the millimeter-wave spectrum, based on real-world measurements at 28 GHz and 38 GHz in New York City and Austin, Texas, respectively. We consider industry-standard path loss models used for today's microwave bands, and modify them to fit the propagation data measured in these millimeter-wave bands for cellular planning. Network simulations with the proposed models using a commercial planning tool show that roughly three times more base stations are required to accommodate 5G networks (cell radii up to 200 m) compared to existing 3G and 4G systems (cell radii of 500 m to 1 km) when performing path loss simulations based on arbitrary pointing angles of directional antennas. However, when directional antennas are pointed in the single best directions at the base station and mobile, coverage range is substantially improved with little increase in interference, thereby reducing the required number of 5G base stations. Capacity gains for random pointing angles are shown to be 20 times greater than today's fourth-generation Long Term Evolution networks, and can be further improved when using directional antennas pointed in the strongest transmit and receive directions with the help of beam combining techniques.",
"title": ""
},
{
"docid": "a9e27b52ed31b47c23b1281c28556487",
"text": "Nuclear receptors are integrators of hormonal and nutritional signals, mediating changes to metabolic pathways within the body. Given that modulation of lipid and glucose metabolism has been linked to diseases including type 2 diabetes, obesity and atherosclerosis, a greater understanding of pathways that regulate metabolism in physiology and disease is crucial. The liver X receptors (LXRs) and the farnesoid X receptors (FXRs) are activated by oxysterols and bile acids, respectively. Mounting evidence indicates that these nuclear receptors have essential roles, not only in the regulation of cholesterol and bile acid metabolism but also in the integration of sterol, fatty acid and glucose metabolism.",
"title": ""
},
{
"docid": "1f5bcb6bc3fde7bc294240ce652ae4ab",
"text": "Rock climbing has increased in popularity as both a recreational physical activity and a competitive sport. Climbing is physiologically unique in requiring sustained and intermittent isometric forearm muscle contractions for upward propulsion. The determinants of climbing performance are not clear but may be attributed to trainable variables rather than specific anthropometric characteristics.",
"title": ""
},
{
"docid": "f085832faf1a2921eedd3d00e8e592db",
"text": "There are billions of photographs on the Internet, comprising the largest and most diverse photo collection ever assembled. How can computer vision researchers exploit this imagery? This paper explores this question from the standpoint of 3D scene modeling and visualization. We present structure-from-motion and image-based rendering algorithms that operate on hundreds of images downloaded as a result of keyword-based image search queries like “Notre Dame” or “Trevi Fountain.” This approach, which we call Photo Tourism, has enabled reconstructions of numerous well-known world sites. This paper presents these algorithms and results as a first step towards 3D modeling of the world’s well-photographed sites, cities, and landscapes from Internet imagery, and discusses key open problems and challenges for the research community.",
"title": ""
},
{
"docid": "7b98d56c2ebe5dcfb0c4b8a95ca1fba1",
"text": "Over the past three or four years there has been some controversy regarding the applicability of intrusion detection systems (IDS) to the forensic evidence collection process. Two points of view, essentially, have emerged. One perspective views forensic evidence collection and preservation in the case of a computer or network security incident to be inappropriate for an intrusion detection system. Another perspective submits that the IDS is the most likely candidate for collecting forensically pristine evidentiary data in real or near real time. This extended abstract describes, briefly, the framework for a research project intended to explore the applicability of intrusion detection systems to the evidence collection and management process. The project will review the performance and forensic acceptability of several types of intrusion detection systems in a laboratory environment. 1.0 Background and Problem Statement Intrusion detection, as a discipline, is fairly immature. Most of the serious work in intrusion detection is being carried on in the academic, commercial and government research communities. Commercially available examples of successful intrusion detection systems are limited, although the state of the art is progressing rapidly. However, as new approaches to intrusion detection are introduced, there is one question that seems to emerge continuously: should we be using intrusion detection systems to gather forensic evidence in the case of a detected penetration or abuse attempt. The whole concept of mixing investigation with detection of intrusion or abuse attempts begs a number of questions. First, can an IDS perform adequately if it also has to manage evidentiary data appropriately to meet legal standards? Second, what is required to automate the management of data from an evidentiary perspective? Third, what measures need to be added to an IDS to ensure that it not only can perform as an IDS (including performance requirements for the type of system in which it is implemented), but that it can manage evidence appropriately? It is not appropriate to ask any system to do double duty, performing additional tasks which may or may not be related to its primary function, at the expense of the results of its primary mission. This idea – that of combining evidence gathering with system protection – has generated considerable discussion over recent years. There is reasonable conjecture as to whether the presence of an IDS during an attack provides an appropriate evidence gathering mechanism. There appears to be general agreement, informed or otherwise, in the courts that such is the case. Today, in the absence of an alternative, the IDS probably is the best source of information about an attack. Whether that information is forensically pristine or not is an entirely different question. Sommer [SO98], however, reports that the NSTAC Network Group Intrusion Detection Subgroup found in December 1997 that: • “Current intrusion detection systems are not designed to collect and protect the integrity of the type of information required to conduct law enforcement investigations.” • “There is a lack of guidance to employees as to how to respond to intrusions and capture the information required to conduct a law enforcement investigation. 
The subgroup discussed the need to develop guidelines and training materials for end users that will make them aware of what information law enforcement requires and what procedures they use to collect evidence on an intrusion.” This finding implies strongly that there is a disconnect between the use of intrusion detection systems and the collection of forensically appropriate evidence during an intrusion attempt. On the other hand, Yuill et al [YU99] propose that an intrusion detection system can collect enough information during an on-going attack to profile, if not identify, the attacker. The ability of an IDS to gather significant information about an attack in progress without materially affecting the primary mission of the intrusion detection system suggests that an IDS could be deployed that would provide both detection/response and forensically pristine evidence in the case of a security incident. 1.1 Problem Statement Fundamentally, this project seeks to answer the question: “Is it practical and appropriate to combine intrusion detection and response with forensic management of collected data within a single IDS in today’s networks?”. The issue we will address in this research is three-fold. First, can an IDS gather useful forensic evidence during an attack without impacting its primary mission of detect and respond? Second, what is required to provide an acceptable case file of forensic information? And, finally, in a practical implementation, can an IDS be implemented that will accomplish both its primary mission and, at the same time, collect and manage forensically pure evidence that can be used in a legal setting? There are several difficulties in addressing these issues. First, the theoretical requirements of an IDS in terms of performing its primary mission may be at odds with the requirements of collecting and preserving forensic evidence. The primary mission of an IDS is to detect and respond to security incidents. The definition of a security incident should be, at least in part, determined by the organization’s security policy. Therefore, the detailed definition of the IDS’ primary mission is partially determined by the security policy, not by some overarching standard or generic procedure. The result is that there can be a wide disparity among requirements for an IDS from organization to organization. That contrasts significantly with the relatively static set of requirements for developing and managing evidence for use in a legal proceeding. A second difficulty is that the IDS, by design, does not manage its information in the sense that a forensics system does. There is a requirement within a forensic system (automated or not) for, among other things, the maintenance of a chain of custody whereby all evidence can be accounted for and its integrity attested to from the time of its collection to the time of its use in a legal proceeding. The third difficulty deals with the architecture of the IDS. The ability of a program to perform widely disparate tasks (in this case detection and response as well as forensic management of data) implies an architecture that may or may not be present currently in an IDS. Thus, there develops the need for a standard architecture for intrusion detection systems that also are capable of forensic data management.",
"title": ""
},
{
"docid": "67dedca1dbdf5845b32c74e17fc42eb6",
"text": "How much trust a user places in a recommender is crucial to the uptake of the recommendations. Although prior work established various factors that build and sustain user trust, their comparative impact has not been studied in depth. This paper presents the results of a crowdsourced study examining the impact of various recommendation interfaces and content selection strategies on user trust. It evaluates the subjective ranking of nine key factors of trust grouped into three dimensions and examines the differences observed with respect to users' personality traits.",
"title": ""
},
{
"docid": "ff0395e9146ab7a3416cf911f42fcf7f",
"text": "Financial Time Series analysis and prediction is one of the interesting areas in which past data could be used to anticipate and predict data and information about future. There are many artificial intelligence approaches used in the prediction of time series, such as Artificial Neural Networks (ANN) and Hidden Markov Models (HMM). In this paper HMM and HMM approaches for predicting financial time series are presented. ANN and HMM are used to predict time series that consists of highest and lowest Forex index series as input variable. Both of ANN and HMM are trained on the past dataset of the chosen currencies (such as EURO/ USD which is used in this paper). The trained ANN and HMM are used to search for the variable of interest behavioral data pattern from the past dataset. The obtained results was compared with real values from Forex (Foreign Exchange) market database [1]. The power and predictive ability of the two models are evaluated on the basis of Mean Square Error (MSE). The Experimental results obtained are encouraging, and it demonstrate that ANN and HMM can closely predict the currency market, with a small different in predicting performance.",
"title": ""
},
{
"docid": "d6dfa1f279a5df160814e1d378162c02",
"text": "Understanding and forecasting mobile traffic of large scale cellular networks is extremely valuable for service providers to control and manage the explosive mobile data, such as network planning, load balancing, and data pricing mechanisms. This paper targets at extracting and modeling traffic patterns of 9,000 cellular towers deployed in a metropolitan city. To achieve this goal, we design, implement, and evaluate a time series analysis approach that is able to decompose large scale mobile traffic into regularity and randomness components. Then, we use time series prediction to forecast the traffic patterns based on the regularity components. Our study verifies the effectiveness of our utilized time series decomposition method, and shows the geographical distribution of the regularity and randomness component. Moreover, we reveal that high predictability of the regularity component can be achieved, and demonstrate that the prediction of randomness component of mobile traffic data is impossible.",
"title": ""
},
{
"docid": "56eaf0e788e1f1b2c82afee3bfca7b09",
"text": "Automated depression detection is inherently a multimodal problem. Therefore, it is critical that researchers investigate fusion techniques for multimodal design. This paper presents the first ever comprehensive study of fusion techniques for depression detection. In addition, we present novel linguistically-motivated fusion techniques, which we find outperform existing approaches.",
"title": ""
},
{
"docid": "0a0e4219aa1e20886e69cb1421719c4e",
"text": "A wearable two-antenna system to be integrated on a life jacket and connected to Personal Locator Beacons (PLBs) of the Cospas-Sarsat system is presented. Each radiating element is a folded meandered dipole resonating at 406 MHz and includes a planar reflector realized by a metallic foil. The folded dipole and the metallic foil are attached on the opposite sides of the floating elements of the life jacket itself, so resulting in a mechanically stable antenna. The metallic foil improves antenna radiation properties even when the latter is close to the sea surface, shields the human body from EM radiation and makes the radiating system less sensitive to the human body movements. Prototypes have been realized and a measurement campaign has been carried out. The antennas show satisfactory performance also when the life jacket is worn by a user. The proposed radiating elements are intended for the use in a two-antenna scheme in which the transmitter can switch between them in order to meet Cospas-Sarsat system specifications. Indeed, the two antennas provide complementary radiation patterns so that Cospas-Sarsat requirements (satellite constellation coverage and EIRP profile) are fully satisfied.",
"title": ""
},
{
"docid": "da7a2d40d2740e52ac7388fa23f1c797",
"text": "The use of business intelligence tools and other means to generate queries has led to great variety in the size of join queries. While most queries are reasonably small, join queries with up to a hundred relations are not that exotic anymore, and the distribution of query sizes has an incredible long tail. The largest real-world query that we are aware of accesses more than 4,000 relations. This large spread makes query optimization very challenging. Join ordering is known to be NP-hard, which means that we cannot hope to solve such large problems exactly. On the other hand most queries are much smaller, and there is no reason to sacrifice optimality there. This paper introduces an adaptive optimization framework that is able to solve most common join queries exactly, while simultaneously scaling to queries with thousands of joins. A key component there is a novel search space linearization technique that leads to near-optimal execution plans for large classes of queries. In addition, we describe implementation techniques that are necessary to scale join ordering algorithms to these extremely large queries. Extensive experiments with over 10 different approaches show that the new adaptive approach proposed here performs excellent over a huge spectrum of query sizes, and produces optimal or near-optimal solutions for most common queries.",
"title": ""
},
{
"docid": "a57bdfa9c48a76d704258f96874ea700",
"text": "BACKGROUND\nPrevious state-of-the-art systems on Drug Name Recognition (DNR) and Clinical Concept Extraction (CCE) have focused on a combination of text \"feature engineering\" and conventional machine learning algorithms such as conditional random fields and support vector machines. However, developing good features is inherently heavily time-consuming. Conversely, more modern machine learning approaches such as recurrent neural networks (RNNs) have proved capable of automatically learning effective features from either random assignments or automated word \"embeddings\".\n\n\nOBJECTIVES\n(i) To create a highly accurate DNR and CCE system that avoids conventional, time-consuming feature engineering. (ii) To create richer, more specialized word embeddings by using health domain datasets such as MIMIC-III. (iii) To evaluate our systems over three contemporary datasets.\n\n\nMETHODS\nTwo deep learning methods, namely the Bidirectional LSTM and the Bidirectional LSTM-CRF, are evaluated. A CRF model is set as the baseline to compare the deep learning systems to a traditional machine learning approach. The same features are used for all the models.\n\n\nRESULTS\nWe have obtained the best results with the Bidirectional LSTM-CRF model, which has outperformed all previously proposed systems. The specialized embeddings have helped to cover unusual words in DrugBank and MedLine, but not in the i2b2/VA dataset.\n\n\nCONCLUSIONS\nWe present a state-of-the-art system for DNR and CCE. Automated word embeddings has allowed us to avoid costly feature engineering and achieve higher accuracy. Nevertheless, the embeddings need to be retrained over datasets that are adequate for the domain, in order to adequately cover the domain-specific vocabulary.",
"title": ""
},
{
"docid": "3ad47c45135498f6ed94004e28028f6e",
"text": "This paper describes the theory and implementation of Bayesian networks in the context of automatic speech recognition. Bayesian networks provide a succinct and expressive graphical language for factoring joint probability distributions, and we begin by presenting the structures that are appropriate for doing speech recognition training and decoding. This approach is notable because it expresses all the details of a speech recognition system in a uniform way using only the concepts of random variables and conditional probabilities. A powerful set of computational routines complements the representational utility of Bayesian networks, and the second part of this paper describes these algorithms in detail. We present a novel view of inference in general networks – where inference is done via a change-of-variables that renders the network tree-structured and amenable to a very simple form of inference. We present the technique in terms of straightforward dynamic programming recursions analogous to HMM a–b computation, and then extend it to handle deterministic constraints amongst variables in an extremely efficient manner. The paper concludes with a sequence of experimental results that show the range of effects that can be modeled, and that significant reductions in error-rate can be expected from intelligently factored state representations. 2003 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "adabd3971fa0abe5c60fcf7a8bb3f80c",
"text": "The present paper describes the development of a query focused multi-document automatic summarization. A graph is constructed, where the nodes are sentences of the documents and edge scores reflect the correlation measure between the nodes. The system clusters similar texts having related topical features from the graph using edge scores. Next, query dependent weights for each sentence are added to the edge score of the sentence and accumulated with the corresponding cluster score. Top ranked sentence of each cluster is identified and compressed using a dependency parser. The compressed sentences are included in the output summary. The inter-document cluster is revisited in order until the length of the summary is less than the maximum limit. The summarizer has been tested on the standard TAC 2008 test data sets of the Update Summarization Track. Evaluation of the summarizer yielded accuracy scores of 0.10317 (ROUGE-2) and 0.13998 (ROUGE–SU-4).",
"title": ""
},
{
"docid": "3410a896fda40d30959682f773716736",
"text": "In this paper, we propose a CSMA-based medium access control protocol for multihop wireless networks that uses multiple channels and a dynamic channel selection method. The proposed protocol uses one control channel and N data channels, where N is independent of the number of nodes in the network. The source uses an exchange of control packets on the control channel to decide on the best channel to send the data packet on. Channel selection is based on maximizing the signal-to-interference plus noise ratio at the receiver. We present performance evaluations obtained from simulations that demonstrate the effectiveness of the proposed protocol.",
"title": ""
},
{
"docid": "eb7582d78766ce274ba899ad2219931f",
"text": "BACKGROUND\nPrecise determination of breast volume facilitates reconstructive procedures and helps in the planning of tissue removal for breast reduction surgery. Various methods currently used to measure breast size are limited by technical drawbacks and unreliable volume determinations. The purpose of this study was to develop a formula to predict breast volume based on straightforward anthropomorphic measurements.\n\n\nMETHODS\nOne hundred one women participated in this study. Eleven anthropomorphic measurements were obtained on 202 breasts. Breast volumes were determined using a water displacement technique. Multiple stepwise linear regression was used to determine predictive variables and a unifying formula.\n\n\nRESULTS\nMean patient age was 37.7 years, with a mean body mass index of 31.8. Mean breast volumes on the right and left sides were 1328 and 1305 cc, respectively (range, 330 to 2600 cc). The final regression model incorporated the variables of breast base circumference in a standing position and a vertical measurement from the inframammary fold to a point representing the projection of the fold onto the anterior surface of the breast. The derived formula showed an adjusted R of 0.89, indicating that almost 90 percent of the variation in breast size was explained by the model.\n\n\nCONCLUSION\nSurgeons may find this formula a practical and relatively accurate method of determining breast volume.",
"title": ""
},
{
"docid": "923320f7061b9141f2a322a8ff54b0e1",
"text": "Our goal in this paper is to develop a practical framework for obtaining a uniform sample of users in an online social network (OSN) by crawling its social graph. Such a sample allows to estimate any user property and some topological properties as well. To this end, first, we consider and compare several candidate crawling techniques. Two approaches that can produce approximately uniform samples are the Metropolis-Hasting random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the \"ground truth.\" In contrast, using Breadth-First-Search (BFS) or an unadjusted Random Walk (RW) leads to substantially biased results. Second, and in addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process. We show how these diagnostics can be used to effectively determine when a random walk sample is of adequate size and quality. Third, as a case study, we apply the above methods to Facebook and we collect the first, to the best of our knowledge, representative sample of Facebook users. We make it publicly available and employ it to characterize several key properties of Facebook.",
"title": ""
},
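The Metropolis-Hastings random walk described in the passage above can be stated in a few lines. The sketch below is an editorial illustration rather than the authors' crawler: the graph object and its adjacency interface are assumptions (a `networkx`-style graph where `graph[v]` yields the neighbors of `v`), and the proposal/acceptance rule is the standard MHRW construction that targets the uniform distribution over nodes.

```python
import random

def mhrw_sample(graph, start, num_steps, seed=0):
    """Metropolis-Hastings random walk targeting a uniform node distribution.

    `graph` is assumed to expose `graph[v]` as the neighbor collection of v
    (as networkx.Graph does). The returned list of visited nodes can be used
    directly, without re-weighting, to estimate node properties.
    """
    rng = random.Random(seed)
    v = start
    visited = [v]
    for _ in range(num_steps):
        neighbors = list(graph[v])
        w = rng.choice(neighbors)                # propose a uniformly chosen neighbor
        deg_v, deg_w = len(graph[v]), len(graph[w])
        # Accept with probability min(1, deg(v)/deg(w)); this removes the
        # degree bias of a simple random walk.
        if rng.random() <= deg_v / deg_w:
            v = w                                # move to the proposed node
        # on rejection the walk stays at v, and v is counted again
        visited.append(v)
    return visited
```

By contrast, a simple random walk would need post-hoc re-weighting of each visited node by 1/deg(v) — the RWRW estimator mentioned in the passage — to remove its bias toward high-degree users.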
{
"docid": "b4462bf06bac13af9e40023019619a78",
"text": "Successful schools ensure that all students master basic skills such as reading and math and have strong backgrounds in other subject areas, including science, history, and foreign language. Recently, however, educators and parents have begun to support a broader educational agenda – one that enhances teachers’ and students’ social and emotional skills. Research indicates that social and emotional skills are associated with success in many areas of life, including effective teaching, student learning, quality relationships, and academic performance. Moreover, a recent meta-analysis of over 300 studies showed that programs designed to enhance social and emotional learning significantly improve students’ social and emotional competencies as well as academic performance. Incorporating social and emotional learning programs into school districts can be challenging, as programs must address a variety of topics in order to be successful. One organization, the Collaborative for Academic, Social, and Emotional Learning (CASEL), provides leadership for researchers, educators, and policy makers to advance the science and practice of school-based social and emotional learning programs. According to CASEL, initiatives to integrate programs into schools should include training on social and emotional skills for both teachers and students, and should receive backing from all levels of the district, including the superintendent, school principals, and teachers. Additionally, programs should be field-tested, evidence-based, and founded on sound",
"title": ""
},
{
"docid": "858c0aab25ea4ba619034a97bf05e82b",
"text": "This paper presents the detection of centerline crossing in abnormal driving using a CapsNet. The benefit of the CapsNet is that the capsule contains all the data about the status of objects and recognizes objects as vectors; hence, these can be used to classify driving as normal or abnormal. The datasets use the Creative Commons Licenses from YouTube to obtain traffic accident footages and six time-flow images composed of data with our quantitative basis. A comparison of our proposed architecture with the CNN model showed that our method produces better results.",
"title": ""
}
] |
scidocsrr
|
ee802d782e9a88c98e0f97c06da87cd7
|
Fast Decoding in Sequence Models using Discrete Latent Variables
|
[
{
"docid": "cff671af6a7a170fac2daf6acd9d1e3e",
"text": "We show how to learn a deep graphical model of the word-count vectors obtained from a large set of documents. The values of the latent variables in the deepest layer are easy to infer and gi ve a much better representation of each document than Latent Sem antic Analysis. When the deepest layer is forced to use a small numb er of binary variables (e.g. 32), the graphical model performs “semantic hashing”: Documents are mapped to memory addresses in such a way that semantically similar documents are located at near by ddresses. Documents similar to a query document can then be fo und by simply accessing all the addresses that differ by only a fe w bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much fa ster than locality sensitive hashing, which is the fastest curre nt method. By using semantic hashing to filter the documents given to TFID , we achieve higher accuracy than applying TF-IDF to the entir document set.",
"title": ""
},
{
"docid": "5e601792447020020aa02ee539b3a2cf",
"text": "The recently proposed neural network joint model (NNJM) (Devlin et al., 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT. In this paper, we give a more systematic treatment by summarizing the relevant source information through a convolutional architecture guided by the target information. With different guiding signals during decoding, our specifically designed convolution+gating architectures can pinpoint the parts of a source sentence that are relevant to predicting a target word, and fuse them with the context of entire source sentence to form a unified representation. This representation, together with target language words, are fed to a deep neural network (DNN) to form a stronger NNJM. Experiments on two NIST Chinese-English translation tasks show that the proposed model can achieve significant improvements over the previous NNJM by up to +1.01 BLEU points on average.",
"title": ""
}
] |
[
{
"docid": "5d36e3685f024ffcb7a9856f49c4e717",
"text": "§ Action (term-pair) Representation • Dependency Paths between x, y • Wx: Word Embedding of x • Wy: Word Embedding of y • F(x, y): Surface (Ends with, Contains, etc.), Frequency (pattern-based co-occur info), and Generality (edge not too general or narrow) Features § Performance Study on End-to-End Taxonomy Induction: • WordNet (533/144/144 taxonomies for training, validation, and test set, size (10, 50], depth=4, animals, daily necessities, etc.) § Compared methods: • TAXI [3]: pattern-based method that ranked 1st in the SemEval-2016 Task 13 competition • HypeNET [4]: state-of-the-art hypernymy detection method • HypeNET + MST (maximum spanning tree): post-processing of HypeNET to prune the hypernym graph into a tree • Bansal et al. (2014) [1]: state-of-the-art taxonomy induction method • SubSeq [2]: state-of-the-art results on the SemEval-2016 Task 13 • Taxo-RL (RE, with virtual root embedding), Taxo-RL (NR, with new root addition), Taxo-RL (NR) + FG (with frequency and generality features) • Taxo-RL (partial, allows partial taxonomy), Taxo-RL (full, has to use all terms in the vocabulary)",
"title": ""
},
{
"docid": "01cbe4a4f8cfb9e00bc19290462f38f2",
"text": "In 2008, the bilateral Japan-Philippines Economic Partnership Agreement took effect. Contained within this regional free trade agreement are unique provisions allowing exchange of Filipino nurses and healthcare workers to work abroad in Japan. Japan's increasing need for healthcare workers due to its aging demographic and the Philippines need for economic development could have led to shared benefits under the Japan-Philippines Economic Partnership Agreement. However, 4 years following program implementation, results have been disappointing, e.g., only 7% of candidates passing the programs requirements since 2009. These disappointing results represent a policy failure within the current Japan-Philippines Economic Partnership Agreement framework, and point to the need for reform. Hence, amending the current Japan-Philippines Economic Partnership Agreement structure by potentially adopting a USA based approach to licensure examinations and implementing necessary institutional and governance reform measures may be necessary to ensure beneficial healthcare worker migration for both countries.",
"title": ""
},
{
"docid": "68c1cf9be287d2ccbe8c9c2ed675b39e",
"text": "The primary task of the peripheral vasculature (PV) is to supply the organs and extremities with blood, which delivers oxygen and nutrients, and to remove metabolic waste products. In addition, peripheral perfusion provides the basis of local immune response, such as wound healing and inflammation, and furthermore plays an important role in the regulation of body temperature. To adequately serve its many purposes, blood flow in the PV needs to be under constant tight regulation, both on a systemic level through nervous and hormonal control, as well as by local factors, such as metabolic tissue demand and hydrodynamic parameters. As a matter of fact, the body does not retain sufficient blood volume to fill the entire vascular space, and only 25% of the capillary bed is in use during resting state. The importance of microvascular control is clearly illustrated by the disastrous effects of uncontrolled blood pooling in the extremities, such as occurring during certain types of shock. Peripheral vascular disease (PVD) is the general name for a host of pathologic conditions of disturbed PV function. Peripheral vascular disease includes occlusive diseases of the arteries and the veins. An example is peripheral arterial occlusive disease (PAOD), which is the result of a buildup of plaque on the inside of the arterial walls, inhibiting proper blood supply to the organs. Symptoms include pain and cramping in extremities, as well as fatigue; ultimately, PAOD threatens limb vitality. The PAOD is often indicative of atherosclerosis of the heart and brain, and is therefore associated with an increased risk of myocardial infarction or cerebrovascular accident (stroke). Venous occlusive disease is the forming of blood clots in the veins, usually in the legs. Clots pose a risk of breaking free and traveling toward the lungs, where they can cause pulmonary embolism. In the legs, thromboses interfere with the functioning of the venous valves, causing blood pooling in the leg (postthrombotic syndrome) that leads to swelling and pain. Other causes of disturbances in peripheral perfusion include pathologies of the autoregulation of the microvasculature, such as in Reynaud’s disease or as a result of diabetes. To monitor vascular function, and to diagnose and monitor PVD, it is important to be able to measure and evaluate basic vascular parameters, such as arterial and venous blood flow, arterial blood pressure, and vascular compliance. Many peripheral vascular parameters can be assessed with invasive or minimally invasive procedures. Examples are the use of arterial catheters for blood pressure monitoring and the use of contrast agents in vascular X ray imaging for the detection of blood clots. Although they are sensitive and accurate, invasive methods tend to be more cumbersome to use, and they generally bear a greater risk of adverse effects compared to noninvasive techniques. These factors, in combination with their usually higher cost, limit the use of invasive techniques as screening tools. Another drawback is their restricted use in clinical research because of ethical considerations. Although many of the drawbacks of invasive techniques are overcome by noninvasive methods, the latter typically are more challenging because they are indirect measures, that is, they rely on external measurements to deduce internal physiologic parameters. 
Noninvasive techniques often make use of physical and physiologic models, and one has to be mindful of imperfections in the measurements and the models, and their impact on the accuracy of results. Noninvasive methods therefore require careful validation and comparison to accepted, direct measures, which is the reason why these methods typically undergo long development cycles. Even though the genesis of many noninvasive techniques reaches back as far as the late nineteenth century, it was the technological advances of the second half of the twentieth century in such fields as micromechanics, microelectronics, and computing technology that led to the development of practical implementations. The field of noninvasive vascular measurements has undergone a developmental explosion over the last two decades, and it is still very much a field of ongoing research and development. This article describes the most important and most frequently used methods for noninvasive assessment of the peripheral vasculature.",
"title": ""
},
{
"docid": "89e8df51a72309dc99789f90e922d1c5",
"text": "Information is traditionally confined to paper or digitally to a screen. In this paper, we introduce WUW, a wearable gestural interface, which attempts to bring information out into the tangible world. By using a tiny projector and a camera mounted on a hat or coupled in a pendant like wearable device, WUW sees what the user sees and visually augments surfaces or physical objects the user is interacting with. WUW projects information onto surfaces, walls, and physical objects around us, and lets the user interact with the projected information through natural hand gestures, arm movements or interaction with the object itself.",
"title": ""
},
{
"docid": "65fac26fc29ff492eb5a3e43f58ecfb2",
"text": "The introduction of new anticancer drugs into the clinic is often hampered by a lack of qualified biomarkers. Method validation is indispensable to successful biomarker qualification and is also a regulatory requirement. Recently, the fit-for-purpose approach has been developed to promote flexible yet rigorous biomarker method validation, although its full implications are often overlooked. This review aims to clarify many of the scientific and regulatory issues surrounding biomarker method validation and the analysis of samples collected from clinical trial subjects. It also strives to provide clear guidance on validation strategies for each of the five categories that define the majority of biomarker assays, citing specific examples.",
"title": ""
},
{
"docid": "238f3288dc1523229c6bcc3337e233e6",
"text": "The increasing diffusion of smart devices, along with the dynamism of the mobile applications ecosystem, are boosting the production of malware for the Android platform. So far, many different methods have been developed for detecting Android malware, based on either static or dynamic analysis. The main limitations of existing methods include: low accuracy, proneness to evasion techniques, and weak validation, often limited to emulators or modified kernels. We propose an Android malware detection method, based on sequences of system calls, that overcomes these limitations. The assumption is that malicious behaviors (e.g., sending high premium rate SMS, cyphering data for ransom, botnet capabilities, and so on) are implemented by specific system calls sequences: yet, no apriori knowledge is available about which sequences are associated with which malicious behaviors, in particular in the mobile applications ecosystem where new malware and non-malware applications continuously arise. Hence, we use Machine Learning to automatically learn these associations (a sort of \"fingerprint\" of the malware); then we exploit them to actually detect malware. Experimentation on 20000 execution traces of 2000 applications (1000 of them being malware belonging to different malware families), performed on a real device, shows promising results: we obtain a detection accuracy of 97%. Moreover, we show that the proposed method can cope with the dynamism of the mobile apps ecosystem, since it can detect unknown malware.",
"title": ""
},
{
"docid": "4138f62dfaefe49dd974379561fb6fea",
"text": "For a set of 1D vectors, standard singular value decomposition (SVD) is frequently applied. For a set of 2D objects such as images or weather maps, we form 2DSVD, which computes principal eigenvectors of rowrow and column-column covariance matrices, exactly as in the standard SVD. We study optimality properties of 2DSVD as low-rank approximation and show that it provides a framework unifying two recent approaches. Experiments on images and weather maps illustrate the usefulness of 2DSVD.",
"title": ""
},
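For the 2DSVD passage above, the construction can be written down directly with NumPy. This is a schematic sketch based only on the description in the abstract (principal eigenvectors of the row-row and column-column covariance matrices of a set of 2D arrays); the function name, variable names, and rank choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def two_d_svd(images, k, l):
    """Compute rank-(k, l) 2DSVD factors for a list of equally sized 2D arrays.

    Returns U (r x k), V (c x l) and the low-dimensional representations
    M_i = U.T @ A_i @ V for each centered input A_i.
    """
    X = np.stack(images).astype(float)           # shape (n, r, c)
    mean = X.mean(axis=0)
    Xc = X - mean                                 # center the set of 2D objects
    # Row-row and column-column covariance matrices, summed over the set.
    F = sum(A @ A.T for A in Xc)                  # (r x r)
    G = sum(A.T @ A for A in Xc)                  # (c x c)
    # Principal eigenvectors (both matrices are symmetric, so eigh applies).
    _, U = np.linalg.eigh(F)
    _, V = np.linalg.eigh(G)
    U = U[:, ::-1][:, :k]                         # top-k eigenvectors of F
    V = V[:, ::-1][:, :l]                         # top-l eigenvectors of G
    M = [U.T @ A @ V for A in Xc]                 # (k x l) representation per image
    return U, V, M
```

Each image is then approximately reconstructed as `U @ M_i @ V.T + mean`, which is the low-rank approximation the abstract refers to.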
{
"docid": "7cecfd37e44b26a67bee8e9c7dd74246",
"text": "Forecasting hourly spot prices for real-time electricity usage is a challenging task. This paper investigates a series of forecasting methods to 90 and 180 days of load data collection acquired from the Iberian Electricity Market (MIBEL). This dataset was used to train and test multiple forecast models. The Mean Absolute Percentage Error (MAPE) for the proposed Hybrid combination of Auto Regressive Integrated Moving Average (ARIMA) and Generalized Linear Model (GLM) was compared against ARIMA, GLM, Random forest (RF) and Support Vector Machines (SVM) methods. The results indicate significant improvement in MAPE and correlation co-efficient values for the proposed hybrid ARIMA-GLM method.",
"title": ""
},
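The hybrid forecaster in the passage above combines a linear time-series model with a regression model. A common way to realize such a hybrid — not necessarily the exact configuration used by the authors — is to fit ARIMA to the price series, fit a second model on the ARIMA residuals using exogenous features, and add the two forecasts. The sketch below assumes `statsmodels` and `scikit-learn`; the feature matrices, the ARIMA order, and the use of ordinary linear regression as a stand-in for the GLM are all placeholder assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.linear_model import LinearRegression

def hybrid_arima_glm_forecast(prices, X_train, X_future, order=(2, 1, 2)):
    """Forecast len(X_future) steps ahead with an ARIMA + regression hybrid.

    `prices` is a 1-D array of hourly spot prices; `X_train` / `X_future` are
    exogenous feature matrices (e.g., load, hour-of-day dummies) aligned with
    the training period and the forecast horizon, respectively.
    """
    prices = np.asarray(prices, dtype=float)
    arima = ARIMA(prices, order=order).fit()
    residuals = prices - arima.fittedvalues           # what ARIMA failed to explain
    glm = LinearRegression().fit(X_train, residuals)  # model residuals from features
    steps = len(X_future)
    return np.asarray(arima.forecast(steps=steps)) + glm.predict(X_future)
```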
{
"docid": "f32ff72da2f90ed0e5279815b0fb10e0",
"text": "We investigate the application of non-orthogonal multiple access (NOMA) with successive interference cancellation (SIC) in downlink multiuser multiple-input multiple-output (MIMO) cellular systems, where the total number of receive antennas at user equipment (UE) ends in a cell is more than the number of transmit antennas at the base station (BS). We first dynamically group the UE receive antennas into a number of clusters equal to or more than the number of BS transmit antennas. A single beamforming vector is then shared by all the receive antennas in a cluster. We propose a linear beamforming technique in which all the receive antennas can significantly cancel the inter-cluster interference. On the other hand, the receive antennas in each cluster are scheduled on the power domain NOMA basis with SIC at the receiver ends. For inter-cluster and intra-cluster power allocation, we provide dynamic power allocation solutions with an objective to maximizing the overall cell capacity. An extensive performance evaluation is carried out for the proposed MIMO-NOMA system and the results are compared with those for conventional orthogonal multiple access (OMA)-based MIMO systems and other existing MIMO-NOMA solutions. The numerical results quantify the capacity gain of the proposed MIMO-NOMA model over MIMO-OMA and other existing MIMO-NOMA solutions.",
"title": ""
},
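The power-domain NOMA scheme with SIC summarized in the passage above rests on a standard pair of achievable-rate expressions for two users sharing one beam: the strong user decodes and cancels the weak user's signal, while the weak user treats the strong user's signal as interference. The snippet below is a generic, textbook-style illustration of that rate computation, not the paper's clustering or power-allocation algorithm; the channel gains, noise power, and power split are placeholder values.

```python
import math

def two_user_noma_rates(p_total, alpha, g_strong, g_weak, noise=1.0):
    """Achievable rates (bits/s/Hz) for a two-user power-domain NOMA cluster.

    alpha is the fraction of transmit power given to the weak user
    (conventionally alpha > 0.5 so the weak user's signal dominates);
    g_strong and g_weak are channel power gains with g_strong >= g_weak.
    """
    p_weak, p_strong = alpha * p_total, (1 - alpha) * p_total
    # Weak user decodes its own signal, treating the strong user's as noise.
    r_weak = math.log2(1 + p_weak * g_weak / (p_strong * g_weak + noise))
    # Strong user first removes the weak user's signal via SIC, then decodes.
    r_strong = math.log2(1 + p_strong * g_strong / noise)
    return r_strong, r_weak

# Hypothetical example: power budget 10, 70/30 split, channel gains 8.0 and 0.5.
print(two_user_noma_rates(p_total=10.0, alpha=0.7, g_strong=8.0, g_weak=0.5))
```

Dynamic power allocation, as described in the passage, then amounts to choosing alpha (and the per-cluster power) to maximize the sum of such rates subject to fairness or minimum-rate constraints.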
{
"docid": "e8b5fcac441c46e46b67ffbdd4b043e6",
"text": "We present DroidSafe, a static information flow analysis tool that reports potential leaks of sensitive information in Android applications. DroidSafe combines a comprehensive, accurate, and precise model of the Android runtime with static analysis design decisions that enable the DroidSafe analyses to scale to analyze this model. This combination is enabled by accurate analysis stubs, a technique that enables the effective analysis of code whose complete semantics lies outside the scope of Java, and by a combination of analyses that together can statically resolve communication targets identified by dynamically constructed values such as strings and class designators. Our experimental results demonstrate that 1) DroidSafe achieves unprecedented precision and accuracy for Android information flow analysis (as measured on a standard previously published set of benchmark applications) and 2) DroidSafe detects all malicious information flow leaks inserted into 24 real-world Android applications by three independent, hostile Red-Team organizations. The previous state-of-the art analysis, in contrast, detects less than 10% of these malicious flows.",
"title": ""
},
{
"docid": "61e16a9e53c2140d7c39694f83b603ac",
"text": "Object detection in videos is an important task in computer vision for various applications such as object tracking, video summarization and video search. Although great progress has been made in improving the accuracy of object detection in recent years due to the rise of deep neural networks, the state-of-the-art algorithms are highly computationally intensive. In order to address this challenge, we make two important observations in the context of videos: (i) Objects often occupy only a small fraction of the area in each video frame, and (ii) There is a high likelihood of strong temporal correlation between consecutive frames. Based on these observations, we propose Pack and Detect (PaD), an approach to reduce the computational requirements of object detection in videos. In PaD, only selected video frames called anchor frames are processed at full size. In the frames that lie between anchor frames (inter-anchor frames), regions of interest (ROIs) are identified based on the detections in the previous frame. We propose an algorithm to pack the ROIs of each inter-anchor frame together into a reduced-size frame. The computational requirements of the detector are reduced due to the lower size of the input. In order to maintain the accuracy of object detection, the proposed algorithm expands the ROIs greedily to provide additional background around each object to the detector. PaD can use any underlying neural network architecture to process the full-size and reduced-size frames. Experiments using the ImageNet video object detection dataset indicate that PaD can potentially reduce the number of FLOPS required for a frame by 4×. This leads to an overall increase in throughput of 1.25× on a 2.1 GHz Intel Xeon server with a NVIDIA Titan X GPU at the cost of 1.1% drop in accuracy.",
"title": ""
},
{
"docid": "a17e1bf423195ff66d73456f931fa5a1",
"text": "We propose a dialogue state tracker based on long short term memory (LSTM) neural networks. LSTM is an extension of a recurrent neural network (RNN), which can better consider distant dependencies in sequential input. We construct a LSTM network that receives utterances of dialogue participants as input, and outputs the dialogue state of the current utterance. The input utterances are separated into vectors of words with their orders, which are further converted to word embeddings to avoid sparsity problems. In experiments, we combined this system with the baseline system of the dialogue state tracking challenge (DSTC), and achieved improved dialogue state tracking accuracy.",
"title": ""
},
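To make the architecture in the passage above concrete, the sketch below shows a minimal word-embedding + LSTM classifier of the kind the abstract describes, written in PyTorch. It is an illustrative stand-in, not the authors' DSTC system: the vocabulary size, embedding and hidden dimensions, and the number of dialogue-state labels are placeholder hyperparameters.

```python
import torch
import torch.nn as nn

class LSTMStateTracker(nn.Module):
    """Map a sequence of word indices (one utterance) to a dialogue-state label."""

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_states=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # dense word embeddings
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_states)        # scores per dialogue state

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)       # h_n: (num_layers, batch, hidden_dim)
        return self.out(h_n[-1])           # logits over dialogue states

# Hypothetical usage: classify a batch of two utterances of length 12.
model = LSTMStateTracker(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 12)))
predicted_state = logits.argmax(dim=-1)
```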
{
"docid": "9779a328b54e79a34191cec812ded633",
"text": "We present a novel approach to computational modeling of social interactions based on modeling of essential social interaction predicates (ESIPs) such as joint attention and entrainment. Based on sound social psychological theory and methodology, we collect a new “Tower Game” dataset consisting of audio-visual capture of dyadic interactions labeled with the ESIPs. We expect this dataset to provide a new avenue for research in computational social interaction modeling. We propose a novel joint Discriminative Conditional Restricted Boltzmann Machine (DCRBM) model that combines a discriminative component with the generative power of CRBMs. Such a combination enables us to uncover actionable constituents of the ESIPs in two steps. First, we train the DCRBM model on the labeled data and get accurate (76%-49% across various ESIPs) detection of the predicates. Second, we exploit the generative capability of DCRBMs to activate the trained model so as to generate the lower-level data corresponding to the specific ESIP that closely matches the actual training data (with mean square error 0.01-0.1 for generating 100 frames). We are thus able to decompose the ESIPs into their constituent actionable behaviors. Such a purely computational determination of how to establish an ESIP such as engagement is unprecedented.",
"title": ""
},
{
"docid": "7b5d2e7f1475997a49ed9fa820d565fe",
"text": "PURPOSE\nImplementations of health information technologies are notoriously difficult, which is due to a range of inter-related technical, social and organizational factors that need to be considered. In the light of an apparent lack of empirically based integrated accounts surrounding these issues, this interpretative review aims to provide an overview and extract potentially generalizable findings across settings.\n\n\nMETHODS\nWe conducted a systematic search and critique of the empirical literature published between 1997 and 2010. In doing so, we searched a range of medical databases to identify review papers that related to the implementation and adoption of eHealth applications in organizational settings. We qualitatively synthesized this literature extracting data relating to technologies, contexts, stakeholders, and their inter-relationships.\n\n\nRESULTS\nFrom a total body of 121 systematic reviews, we identified 13 systematic reviews encompassing organizational issues surrounding health information technology implementations. By and large, the evidence indicates that there are a range of technical, social and organizational considerations that need to be deliberated when attempting to ensure that technological innovations are useful for both individuals and organizational processes. However, these dimensions are inter-related, requiring a careful balancing act of strategic implementation decisions in order to ensure that unintended consequences resulting from technology introduction do not pose a threat to patients.\n\n\nCONCLUSIONS\nOrganizational issues surrounding technology implementations in healthcare settings are crucially important, but have as yet not received adequate research attention. This may in part be due to the subjective nature of factors, but also due to a lack of coordinated efforts toward more theoretically-informed work. Our findings may be used as the basis for the development of best practice guidelines in this area.",
"title": ""
},
{
"docid": "2d67465fbc2799f815237a05905b8d7a",
"text": "This paper presents a novel way to perform multi-modal face recognition. We use Partial Least Squares (PLS) to linearly map images in different modalities to a common linear subspace in which they are highly correlated. PLS has been previously used effectively for feature selection in face recognition. We show both theoretically and experimentally that PLS can be used effectively across modalities. We also formulate a generic intermediate subspace comparison framework for multi-modal recognition. Surprisingly, we achieve high performance using only pixel intensities as features. We experimentally demonstrate the highest published recognition rates on the pose variations in the PIE data set, and also show that PLS can be used to compare sketches to photos, and to compare images taken at different resolutions.",
"title": ""
},
{
"docid": "e55f8ad65250902a53b1bbfe6f16d26c",
"text": "Automatic key phrase extraction has many important applications including but not limited to summarization, cataloging/indexing, feature extraction for clustering and classification, and data mining. This paper presents a simple, yet effective algorithm (KP-Miner) for achieving this task. The result of an experiment carried out to investigate the effectiveness of this algorithm is also presented. In this experiment the devised algorithm is applied to six different datasets consisting of 481 documents. The results are then compared to two existing sophisticated machine learning based automatic keyphrase extraction systems. The results of this experiment show that the devised algorithm is comparable to both systems",
"title": ""
},
{
"docid": "17953a3e86d3a4396cbd8a911c477f07",
"text": "We introduce Deep Semantic Embedding (DSE), a supervised learning algorithm which computes semantic representation for text documents by respecting their similarity to a given query. Unlike other methods that use singlelayer learning machines, DSE maps word inputs into a lowdimensional semantic space with deep neural network, and achieves a highly nonlinear embedding to model the human perception of text semantics. Through discriminative finetuning of the deep neural network, DSE is able to encode the relative similarity between relevant/irrelevant document pairs in training data, and hence learn a reliable ranking score for a query-document pair. We present test results on datasets including scientific publications and user-generated knowledge base.",
"title": ""
},
{
"docid": "20c3addef683da760967df0c1e83f8e3",
"text": "An RF duplexer has been fabricated on a CMOS IC for use in 3G/4G cellular transceivers. The passive circuit sustains large voltage swings in the transmit path, and isolates the receive path from the transmitter by more than 45 dB across a bandwidth of 200 MHz in 3G/4G bands I, II, III, IV, and IX. A low noise amplifier embedded into the duplexer demonstrates a cascade noise figure of 5 dB with more than 27 dB of gain. The duplexer inserts 2.5 dB of loss between power amplifier and antenna.",
"title": ""
},
{
"docid": "9fc8d85122f1cf22e63ac2401531e448",
"text": "Recognizing multiple labels of images is a fundamental but challenging task in computer vision, and remarkable progress has been attained by localizing semantic-aware image regions and predicting their labels with deep convolutional neural networks. The step of hypothesis regions (region proposals) localization in these existing multi-label image recognition pipelines, however, usually takes redundant computation cost, e.g., generating hundreds of meaningless proposals with nondiscriminative information and extracting their features, and the spatial contextual dependency modeling among the localized regions are often ignored or over-simplified. To resolve these issues, this paper proposes a recurrent attention reinforcement learning framework to iteratively discover a sequence of attentional and informative regions that are related to different semantic objects and further predict label scores conditioned on these regions. Besides, our method explicitly models longterm dependencies among these attentional regions that help to capture semantic label co-occurrence and thus facilitate multilabel recognition. Extensive experiments and comparisons on two large-scale benchmarks (i.e., PASCAL VOC and MSCOCO) show that our model achieves superior performance over existing state-of-the-art methods in both performance and efficiency as well as explicitly identifying image-level semantic labels to specific object regions.",
"title": ""
},
{
"docid": "b13ccc915f81eca45048ffe9d5da5d4f",
"text": "Mobile robots are increasingly being deployed in the real world in response to a heightened demand for applications such as transportation, delivery and inspection. The motion planning systems for these robots are expected to have consistent performance across the wide range of scenarios that they encounter. While state-of-the-art planners, with provable worst-case guarantees, can be employed to solve these planning problems, their finite time performance varies across scenarios. This thesis proposes that the planning module for a robot must adapt its search strategy to the distribution of planning problems encountered to achieve real-time performance. We address three principal challenges of this problem. Firstly, we show that even when the planning problem distribution is fixed, designing a nonadaptive planner can be challenging as the performance of planning strategies fluctuates with small changes in the environment. We characterize the existence of complementary strategies and propose to hedge our bets by executing a diverse ensemble of planners. Secondly, when the distribution is varying, we require a meta-planner that can automatically select such an ensemble from a library of black-box planners. We show that greedily training a list of predictors to focus on failure cases leads to an effective meta-planner. For situations where we have no training data, we show that we can learn an ensemble on-the-fly by adopting algorithms from online paging theory. Thirdly, in the interest of efficiency, we require a white-box planner that directly adapts its search strategy during a planning cycle. We propose an efficient procedure for training adaptive search heuristics in a data-driven imitation learning framework. We also draw a novel connection to Bayesian active learning, and propose algorithms to adaptively evaluate edges of a graph. Our approach leads to the synthesis of a robust real-time planning module that allows a UAV to navigate seamlessly across environments and speed-regimes. We evaluate our framework on a spectrum of planning problems and show closed-loop results on 3 UAV platforms a full-scale autonomous helicopter, a large scale hexarotor and a small quadrotor. While the thesis was motivated by mobile robots, we have shown that the individual algorithms are broadly applicable to other problem domains such as informative path planning and manipulation planning. We also establish novel connections between the disparate fields of motion planning and active learning, imitation learning and online paging which opens doors to several new research problems.",
"title": ""
}
] |
scidocsrr
|
3380e8cbbf97e3956b2b03a019a97acc
|
Fast Context Adaptation via Meta-Learning
|
[
{
"docid": "9ca12c5f314d077093753dc0f3ff9cd5",
"text": "We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning — answering image-related questions which require a multi-step, high-level process — a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-theart error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.",
"title": ""
},
{
"docid": "e28ab50c2d03402686cc9a465e1231e7",
"text": "Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.",
"title": ""
}
] |
[
{
"docid": "5475df204bca627e73b077594af29d47",
"text": "Multilayered artificial neural networks are becoming a pervasive tool in a host of application fields. At the heart of this deep learning revolution are familiar concepts from applied and computational mathematics; notably, in calculus, approximation theory, optimization and linear algebra. This article provides a very brief introduction to the basic ideas that underlie deep learning from an applied mathematics perspective. Our target audience includes postgraduate and final year undergraduate students in mathematics who are keen to learn about the area. The article may also be useful for instructors in mathematics who wish to enliven their classes with references to the application of deep learning techniques. We focus on three fundamental questions: what is a deep neural network? how is a network trained? what is the stochastic gradient method? We illustrate the ideas with a short MATLAB code that sets up and trains a network. We also show the use of state-of-the art software on a large scale image classification problem. We finish with references to the current literature.",
"title": ""
},
{
"docid": "fc3af1e7ebc13605938d8f8238d9c8bd",
"text": "Detecting objects becomes difficult when we need to deal with large shape deformation, occlusion and low resolution. We propose a novel approach to i) handle large deformations and partial occlusions in animals (as examples of highly deformable objects), ii) describe them in terms of body parts, and iii) detect them when their body parts are hard to detect (e.g., animals depicted at low resolution). We represent the holistic object and body parts separately and use a fully connected model to arrange templates for the holistic object and body parts. Our model automatically decouples the holistic object or body parts from the model when they are hard to detect. This enables us to represent a large number of holistic object and body part combinations to better deal with different \"detectability\" patterns caused by deformations, occlusion and/or low resolution. We apply our method to the six animal categories in the PASCAL VOC dataset and show that our method significantly improves state-of-the-art (by 4.1% AP) and provides a richer representation for objects. During training we use annotations for body parts (e.g., head, torso, etc.), making use of a new dataset of fully annotated object parts for PASCAL VOC 2010, which provides a mask for each part.",
"title": ""
},
{
"docid": "ce501e6b012aa9356b59842d50ecf9b6",
"text": "We describe an algorithm, IsoRank, for global alignment of two protein-protein interaction (PPI) networks. IsoRank aims to maximize the overall match between the two networks; in contrast, much of previous work has focused on the local alignment problem— identifying many possible alignments, each corresponding to a local region of similarity. IsoRank is guided by the intuition that a protein should be matched with a protein in the other network if and only if the neighbors of the two proteins can also be well matched. We encode this intuition as an eigenvalue problem, in a manner analogous to Google’s PageRank method. We use IsoRank to compute the first known global alignment between the S. cerevisiae and D. melanogaster PPI networks. The common subgraph has 1420 edges and describes conserved functional components between the two species. Comparisons of our results with those of a well-known algorithm for local network alignment indicate that the globally optimized alignment resolves ambiguity introduced by multiple local alignments. Finally, we interpret the results of global alignment to identify functional orthologs between yeast and fly; our functional ortholog prediction method is much simpler than a recently proposed approach and yet provides results that are more comprehensive.",
"title": ""
},
{
"docid": "ca331150e60e24f038f9c440b8125ddc",
"text": "Class imbalance is one of the challenges of machine learning and data mining fields. Imbalance data sets degrades the performance of data mining and machine learning techniques as the overall accuracy and decision making be biased to the majority class, which lead to misclassifying the minority class samples or furthermore treated them as noise. This paper proposes a general survey for class imbalance problem solutions and the most significant investigations recently introduced by researchers.",
"title": ""
},
{
"docid": "4418314019e47c800894de3d56f1507d",
"text": "One might interpret the locution “the phenomenological mind” as a declaration of a philosophical thesis that the mind is in some sense essentially phenomenological. Authors Gallagher & Zahavi appear to have intended it, however, to refer more to the phenomenological tradition and its methods of analysis. From the subheading of this book, one gains an impression that readers will see how the resources and perspectives from the phenomenological tradition illuminate various issues in philosophy of mind and cognitive science in particular. This impression is reinforced upon finding that many analytic philosophers’ names appear throughout the book. That appearance notwithstanding, as well as the distinctiveness of the book as an introduction, the authors do not sufficiently engage with analytic philosophy.",
"title": ""
},
{
"docid": "ec9fa7d2b0833d1b2f9fb9c7e0d3f350",
"text": "Our goal in this paper is to explore two generic approaches to disrupting dark networks: kinetic and nonkinetic. The kinetic approach involves aggressive and offensive measures to eliminate or capture network members and their supporters, while the non-kinetic approach involves the use of subtle, non-coercive means for combating dark networks. Two strategies derive from the kinetic approach: Targeting and Capacity-building. Four strategies derive from the non-kinetic approach: Institution-Building, Psychological Operations, Information Operations and Rehabilitation. We use network data from Noordin Top’s South East Asian terror network to illustrate how both kinetic and non-kinetic strategies could be pursued depending on a commander’s intent. Using this strategic framework as a backdrop, we strongly advise the use of SNA metrics in developing alterative counter-terrorism strategies that are contextdependent rather than letting SNA metrics define and drive a particular strategy.",
"title": ""
},
{
"docid": "c713e4a5536c065d8d40c1e2482557bc",
"text": "In this paper, we propose a robust and accurate method to detect fingertips of hand palm with a down-looking camera mounted on an eyeglass for the utilization of hand gestures for user interaction between human and computers. To ensure consistent performance under unconstrained environments, we propose a novel method to precisely locate fingertips by combing both statistical information of palm edge distribution and structure information of convex null analysis on palm contour. Briefly, first SVM (support vector machine) with a statistical nine-bin based HOG (histogram of oriented gradient) features is introduced for robust hand detection from video stream. Then, binary image regions are segmented out by an adaptive Cg-Cr model on detected hands. With the prior information of hand contour, it takes a global optimization approach of convex hull analysis to locate hand fingertip. The experimental results have demonstrated that the proposed approach performs well because it can well detect all hand fingertips even under some extreme environments.",
"title": ""
},
{
"docid": "93f9fbfcce5fc90a10f31d971a358a8c",
"text": "The purpose of this study is to find a theoretically grounded, practically applicable and useful granularity level of an algorithmically constructed publication-level classification of research publications (ACPLC). The level addressed is the level of research topics. The methodology we propose uses synthesis papers and their reference articles to construct a baseline classification. A dataset of about 31 million publications, and their mutual citations relations, is used to obtain several ACPLCs of different granularity. Each ACPLC is compared to the baseline classification and the best performing ACPLC is identified. The results of two case studies show that the topics of the cases are closely associated with different classes of the identified ACPLC, and that these classes tend to treat only one topic. Further, the class size variation is moderate, and only a small proportion of the publications belong to very small classes. For these reasons, we conclude that the proposed methodology is suitable to determine the topic granularity level of an ACPLC and that the ACPLC identified by this methodology is useful for bibliometric analyses.",
"title": ""
},
{
"docid": "efa21b9d1cd973fc068074b94b60bf08",
"text": "BACKGROUND\nDermoscopy is useful in evaluating skin tumours, but its applicability also extends into the field of inflammatory skin disorders. Discoid lupus erythematosus (DLE) represents the most common subtype of cutaneous lupus erythematosus. While dermoscopy and videodermoscopy have been shown to aid the differentiation of scalp DLE from other causes of scarring alopecia, limited data exist concerning dermoscopic criteria of DLE in other locations, such as the face, trunk and extremities.\n\n\nOBJECTIVE\nTo describe the dermoscopic criteria observed in a series of patients with DLE located on areas other than the scalp, and to correlate them to the underlying histopathological alterations.\n\n\nMETHODS\nDLE lesions located on the face, trunk and extremities were dermoscopically and histopathologically examined. Selection of the dermoscopic variables included in the evaluation process was based on data in the available literature on DLE of the scalp and on our preliminary observations. Analysis of data was done with SPSS analysis software.\n\n\nRESULTS\nFifty-five lesions from 37 patients with DLE were included in the study. Perifollicular whitish halo, follicular keratotic plugs and telangiectasias were the most common dermoscopic criteria. Statistical analysis revealed excellent correlation between dermoscopic and histopathological findings. Notably, a time-related alteration of dermoscopic features was observed.\n\n\nCONCLUSIONS\nThe present study provides new insights into the dermoscopic variability of DLE located on the face, trunk and extremities.",
"title": ""
},
{
"docid": "bf7335b263742fee9ca2c943e5533d1e",
"text": "Smartphone users increasingly download and install third-party applications from official application repositories. Attackers may use this centralized application delivery architecture as a security and privacy attack vector. This risk increases since application vetting mechanisms are often not in place and the user is delegated to authorize which functionality and protected resources are accessible by third-party applications. In this paper, we mount a survey to explore the security awareness of smartphone users who download applications from official application repositories (e.g. Google Play, Apple’s App Store, etc.). The survey findings suggest a security complacency, as the majority of users trust the app repository, security controls are not enabled or not added, and users disregard security during application selection and installation. As a response to this security complacency, we built a prediction model to indentify users who trust the app repository. Then, the model is assessed, evaluated, and proved to be statistically significant and efficient.",
"title": ""
},
{
"docid": "867a6923a650bdb1d1ec4f04cda37713",
"text": "We examine Gärdenfors’ theory of conceptual spaces, a geometrical form of knowledge representation (Conceptual spaces: The geometry of thought, MIT Press, Cambridge, 2000), in the context of the general Creative Systems Framework introduced by Wiggins (J Knowl Based Syst 19(7):449–458, 2006a; New Generation Comput 24(3):209–222, 2006b). Gärdenfors’ theory offers a way of bridging the traditional divide between symbolic and sub-symbolic representations, as well as the gap between representational formalism and meaning as perceived by human minds. We discuss how both these qualities may be advantageous from the point of view of artificial creative systems. We take music as our example domain, and discuss how a range of musical qualities may be instantiated as conceptual spaces, and present a detailed conceptual space formalisation of musical metre.",
"title": ""
},
{
"docid": "ebb6f9ab7918edc2b0746ee8ee244f4a",
"text": "P u b l i s h e d b y t h e I E E E C o m p u t e r S o c i e t y Pervasive Computing: A Paradigm for the 21st Century In 1991, Mark Weiser, then chief technology officer for Xerox’s Palo Alto Research Center, described a vision for 21st century computing that countered the ubiquity of personal computers. “The most profound technologies are those that disappear,” he wrote. “They weave themselves into the fabric of everyday life until they are indistinguishable from it.” Computing has since mobilized itself beyond the desktop PC. Significant hardware developments—as well as advances in location sensors, wireless communications, and global networking—have advanced Weiser’s vision toward technical and economic viability. Moreover, the Web has diffused some of the psychological barriers that he also thought would have to disappear. However, the integration of information technology into our lives still falls short of Weiser’s concluding vision:",
"title": ""
},
{
"docid": "007791833b15bd3367c11bb17b7abf82",
"text": "When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.",
"title": ""
},
{
"docid": "afa7d0e5c19fea77e1bcb4fce39fbc93",
"text": "Highly Autonomous Driving (HAD) systems rely on deep neural networks for the visual perception of the driving environment. Such networks are train on large manually annotated databases. In this work, a semi-parametric approach to one-shot learning is proposed, with the aim of bypassing the manual annotation step required for training perceptions systems used in autonomous driving. The proposed generative framework, coined Generative One-Shot Learning (GOL), takes as input single one-shot objects, or generic patterns, and a small set of so-called regularization samples used to drive the generative process. New synthetic data is generated as Pareto optimal solutions from one-shot objects using a set of generalization functions built into a generalization generator. GOL has been evaluated on environment perception challenges encountered in autonomous vision.",
"title": ""
},
{
"docid": "0c67afcb351c53c1b9e2b4bcf3b0dc08",
"text": "The Scrum methodology is an agile software development process that works as a project management wrapper around existing engineering practices to iteratively and incrementally develop software. With Scrum, for a developer to receive credit for his or her work, he or she must demonstrate the new functionality provided by a feature at the end of each short iteration during an iteration review session. Such a short-term focus without the checks and balances of sound engineering practices may lead a team to neglect quality. In this paper we present the experiences of three teams at Microsoft using Scrum with an additional nine sound engineering practices. Our results indicate that these teams were able to improve quality, productivity, and estimation accuracy through the combination of Scrum and nine engineering practices.",
"title": ""
},
{
"docid": "7ec6540b44b23a0380dcb848239ccac4",
"text": "There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on information highways. The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures. Note: A full paper extending this study is available at http://arxiv.org/abs/1507.06228, with additional references, experiments and analysis.",
"title": ""
},
{
"docid": "4faa5fd523361d472fc0bea8508c58f8",
"text": "This paper reviews the current state of laser scanning from airborne and terrestrial platforms for geometric reconstruction of object shape and size. The current performance figures of sensor systems are presented in an overview. Next, their calibration and the orientation of the acquired point clouds is discussed. For airborne deployment this is usually one step, whereas in the terrestrial case laboratory calibration and registration of point clouds are (still) two distinct, independent steps. As laser scanning is an active measurement technology, the interaction of the emitted energy with the object surface has influences on the range measurement. This has to be considered in order to explain geometric phenomena in the data. While the problems, e.g. multiple scattering, are understood well, there is currently a lack of remedies. Then, in analogy to the processing chain, segmentation approaches for laser scanning data are reviewed. Segmentation is a task relevant for almost all applications. Likewise, DTM (digital terrain model) reconstruction is relevant for many applications of airborne laser scanning, and is therefore discussed, too. This paper reviews the main processing steps necessary for many applications of laser scanning.",
"title": ""
},
{
"docid": "6fd89ac5ec4cfd0f6c28e01c8d94ff7a",
"text": "This paper describes the development of a student attendance system based on Radio Frequency Identification (RFID) technology. The existing conventional attendance system requires students to manually sign the attendance sheet every time they attend a class. As common as it seems, such system lacks of automation, where a number of problems may arise. This include the time unnecessarily consumed by the students to find and sign their name on the attendance sheet, some students may mistakenly or purposely signed another student's name and the attendance sheet may got lost. Having a system that can automatically capture student's attendance by flashing their student card at the RFID reader can really save all the mentioned troubles. This is the main motive of our system and in addition having an online system accessible anywhere and anytime can greatly help the lecturers to keep track of their students' attendance. Looking at a bigger picture, deploying the system throughout the academic faculty will benefit the academic management as students' attendance to classes is one of the key factor in improving the quality of teaching and monitoring their students' performance. Besides, this system provides valuable online facilities for easy record maintenance offered not only to lecturers but also to related academic management staffs especially for the purpose of students' progress monitoring.",
"title": ""
},
{
"docid": "c2a2e9903859a6a9f9b3db5696cb37ff",
"text": "Depth estimation from a single image is a fundamental problem in computer vision. In this paper, we propose a simple yet effective convolutional spatial propagation network (CSPN) to learn the affinity matrix for depth prediction. Specifically, we adopt an efficient linear propagation model, where the propagation is performed with a manner of recurrent convolutional operation, and the affinity among neighboring pixels is learned through a deep convolutional neural network (CNN). We apply the designed CSPN to two depth estimation tasks given a single image: (1) Refine the depth output from existing state-of-the-art (SOTA) methods; (2) Convert sparse depth samples to a dense depth map by embedding the depth samples within the propagation procedure. The second task is inspired by the availability of LiDAR that provides sparse but accurate depth measurements. We experimented the proposed CSPN over the popular NYU v2 [1] and KITTI [2] datasets, where we show that our proposed approach improves not only quality (e.g., 30% more reduction in depth error), but also speed (e.g., 2 to 5× faster) of depth maps than previous SOTA methods.",
"title": ""
}
] |
scidocsrr
|
d000772a5efdb3234e2dfd38c11e903b
|
Contracting, equal, and expanding learning schedules: the optimal distribution of learning sessions depends on retention interval.
|
[
{
"docid": "3ade96c73db1f06d7e0c1f48a0b33387",
"text": "To achieve enduring retention, people must usually study information on multiple occasions. How does the timing of study events affect retention? Prior research has examined this issue only in a spotty fashion, usually with very short time intervals. In a study aimed at characterizing spacing effects over significant durations, more than 1,350 individuals were taught a set of facts and--after a gap of up to 3.5 months--given a review. A final test was administered at a further delay of up to 1 year. At any given test delay, an increase in the interstudy gap at first increased, and then gradually reduced, final test performance. The optimal gap increased as test delay increased. However, when measured as a proportion of test delay, the optimal gap declined from about 20 to 40% of a 1-week test delay to about 5 to 10% of a 1-year test delay. The interaction of gap and test delay implies that many educational practices are highly inefficient.",
"title": ""
},
{
"docid": "ab05a100cfdb072f65f7dad85b4c5aea",
"text": "Expanding retrieval practice refers to the idea that gradually increasing the spacing interval between repeated tests ought to promote optimal long-term retention. Belief in the superiority of this technique is widespread, but empirical support is scarce. In addition, virtually all research on expanding retrieval has examined the learning of word pairs in paired-associate tasks. We report two experiments in which we examined the learning of text materials with expanding and equally spaced retrieval practice schedules. Subjects studied brief texts and recalled them in an initial learning phase. We manipulated the spacing of the repeated recall tests and examined final recall 1 week later. Overall we found that (1) repeated testing enhanced retention more than did taking a single test, (2) testing with feedback (restudying the passages) produced better retention than testing without feedback, but most importantly (3) there were no differences between expanding and equally spaced schedules of retrieval practice. Repeated retrieval enhanced long-term retention, but how the repeated tests were spaced did not matter.",
"title": ""
},
{
"docid": "04a4996eb5be0d321037cac5cb3c1ad6",
"text": "Repeated retrieval enhances long-term retention, and spaced repetition also enhances retention. A question with practical and theoretical significance is whether there are particular schedules of spaced retrieval (e.g., gradually expanding the interval between tests) that produce the best learning. In the present experiment, subjects studied and were tested on items until they could recall each one. They then practiced recalling the items on 3 repeated tests that were distributed according to one of several spacing schedules. Increasing the absolute (total) spacing of repeated tests produced large effects on long-term retention: Repeated retrieval with long intervals between each test produced a 200% improvement in long-term retention relative to repeated retrieval with no spacing between tests. However, there was no evidence that a particular relative spacing schedule (expanding, equal, or contracting) was inherently superior to another. Although expanding schedules afforded a pattern of increasing retrieval difficulty across repeated tests, this did not translate into gains in long-term retention. Repeated spaced retrieval had powerful effects on retention, but the relative schedule of repeated tests had no discernible impact.",
"title": ""
}
] |
[
{
"docid": "f8c7f0fc1fb365d874766f6d1da2215c",
"text": "Different works have shown that the combination of multiple loss functions is beneficial when training deep neural networks for a variety of prediction tasks. Generally, such multi-loss approaches are implemented via a weighted multi-loss objective function in which each term encodes a different desired inference criterion. The importance of each term is often set using empirically tuned hyper-parameters. In this work, we analyze the importance of the relative weighting between the different terms of a multi-loss function and propose to leverage the model’s uncertainty with respect to each loss as an automatically learned weighting parameter. We consider the application of colon gland analysis from histopathology images for which various multi-loss functions have been proposed. We show improvements in classification and segmentation accuracy when using the proposed uncertainty driven multi-loss function.",
"title": ""
},
{
"docid": "c87b75a335df334c3ae8eb38b7a872cf",
"text": "Image quality is important not only for the viewing experience, but also for the performance of image processing algorithms. Image quality assessment (IQA) has been a topic of intense research in the fields of image processing and computer vision. In this paper, we first analyze the factors that affect two-dimensional (2D) and three-dimensional (3D) image quality, and then provide an up-to-date overview on IQA for each main factor. The main factors that affect 2D image quality are fidelity and aesthetics. Another main factor that affects stereoscopic 3D image quality is visual comfort. We also describe the IQA databases and give the experimental results on representative IQA metrics. Finally, we discuss the challenges for IQA, including the influence of different factors on each other, the performance of IQA metrics in real applications, and the combination of quality assessment, restoration, and enhancement.",
"title": ""
},
{
"docid": "261930bf1b06e5c1e8cc47598e7e8a30",
"text": "Psychological First Aid (PFA) is the recommended immediate psychosocial response during crises. As PFA is now widely implemented in crises worldwide, there are increasing calls to evaluate its effectiveness. World Vision used PFA as a fundamental component of their emergency response following the 2014 conflict in Gaza. Anecdotal reports from Gaza suggest a range of benefits for those who received PFA. Though not intending to undertake rigorous research, World Vision explored learnings about PFA in Gaza through Focus Group Discussions with PFA providers, Gazan women, men and children and a Key Informant Interview with a PFA trainer. The qualitative analyses aimed to determine if PFA helped individuals to feel safe, calm, connected to social supports, hopeful and efficacious - factors suggested by the disaster literature to promote coping and recovery (Hobfoll et al., 2007). Results show positive psychosocial benefits for children, women and men receiving PFA, confirming that PFA contributed to: safety, reduced distress, ability to engage in calming practices and to support each other, and a greater sense of control and hopefulness irrespective of their adverse circumstances. The data shows that PFA formed an important part of a continuum of care to meet psychosocial needs in Gaza and served as a gateway for addressing additional psychosocial support needs. A \"whole-of-family\" approach to PFA showed particularly strong impacts and strengthened relationships. Of note, the findings from World Vision's implementation of PFA in Gaza suggests that future PFA research go beyond a narrow focus on clinical outcomes, to a wider examination of psychosocial, familial and community-based outcomes.",
"title": ""
},
{
"docid": "ab47d6b0ae971a5cf0a24f1934fbee63",
"text": "Deep representations, in particular ones implemented by convolutional neural networks, have led to good progress on many learning problems. However, the learned representations are hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study deep image representations by inverting them with an up-convolutional neural network. Application of this method to a deep network trained on ImageNet provides numerous insights into the properties of the feature representation. Most strikingly, the colors and the rough contours of an input image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.",
"title": ""
},
{
"docid": "cdced5f45620aa620cde9a937692a823",
"text": "Due to a rapid advancement in the electronic commerce technology, the use of credit cards has dramatically increased. As credit card becomes the most popular mode of payment for both online as well as regular purchase, cases of fraud associated with it are also rising. In this paper, we model the sequence of operations in credit card transaction processing using a hidden Markov model (HMM) and show how it can be used for the detection of frauds. An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected. We present detailed experimental results to show the effectiveness of our approach and compare it with other techniques available in the literature.",
"title": ""
},
{
"docid": "baa0bf8fe429c4fe8bfb7ebf78a1ed94",
"text": "The weakly supervised object localization (WSOL) is to locate the objects in an image while only image-level labels are available during the training procedure. In this work, the Selective Feature Category Mapping (SFCM) method is proposed, which introduces the Feature Category Mapping (FCM) and the widely-used selective search method to solve the WSOL task. Our FCM replaces layers after the specific layer in the state-of-the-art CNNs with a set of kernels and learns the weighted pooling for previous feature maps. It is trained with only image-level labels and then map the feature maps to their corresponding categories in the test phase. Together with selective search method, the location of each object is finally obtained. Extensive experimental evaluation on ILSVRC2012 and PASCAL VOC2007 benchmarks shows that SFCM is simple but very effective, and it is able to achieve outstanding classification performance and outperform the state-of-the-art methods in the WSOL task.",
"title": ""
},
{
"docid": "aaab50242d8d40e62491956773fa0cfb",
"text": "Grammatical Evolution (GE) is a population-based evolutionary algorithm, where a formal grammar is used in the genotype to phenotype mapping process. PonyGE2 is an open source implementation of GE in Python, developed at UCD's Natural Computing Research and Applications group. It is intended as an advertisement and a starting-point for those new to GE, a reference for students and researchers, a rapid-prototyping medium for our own experiments, and a Python workout. As well as providing the characteristic genotype to phenotype mapping of GE, a search algorithm engine is also provided. A number of sample problems and tutorials on how to use and adapt PonyGE2 have been developed.",
"title": ""
},
{
"docid": "5e86e48f73283ac321abee7a9f084bec",
"text": "Recent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks. One appealing property of such systems is their generality, as excellent performance can be achieved with a unified architecture and without task-specific feature engineering. However, it is unclear if such systems can be used for tasks without large amounts of training data. In this paper we explore the problem of transfer learning for neural sequence taggers, where a source task with plentiful annotations (e.g., POS tagging on Penn Treebank) is used to improve performance on a target task with fewer available annotations (e.g., POS tagging for microblogs). We examine the effects of transfer learning for deep hierarchical recurrent networks across domains, applications, and languages, and show that significant improvement can often be obtained. These improvements lead to improvements over the current state-ofthe-art on several well-studied tasks.1",
"title": ""
},
{
"docid": "9bc90b182e3acd0fd0cfa10a7abc32f8",
"text": "The advertising industry is seeking to use the unique data provided by the increasing usage of mobile devices and mobile applications (apps) to improve targeting and the experience with apps. As a consequence, understanding user behaviours with apps has gained increased interests from both academia and industry. In this paper we study user app engagement patterns and disruptions of those patterns in a data set unique in its scale and coverage of user activity. First, we provide a detailed account of temporal user activity patterns with apps and compare these to previous studies on app usage behavior. Then, in the second part, and the main contribution of this work, we take advantage of the scale and coverage of our sample and show how app usage behavior is disrupted through major political, social, and sports events.",
"title": ""
},
{
"docid": "0a4f5a46948310cfce44a8749cd479df",
"text": "This paper presents a tutorial introduction to contemporary cryptography. The basic information theoretic and computational properties of classical and modern cryptographic systems are presented, followed by cryptanalytic examination of several important systems and an examination of the application of cryptography to the security of timesharing systems and computer networks. The paper concludes with a guide to the cryptographic literature.",
"title": ""
},
{
"docid": "2472a20493c3319cdc87057cc3d70278",
"text": "Traffic flow prediction is an essential function of traffic information systems. Conventional approaches, using artificial neural networks with narrow network architecture and poor training samples for supervised learning, have been only partially successful. In this paper, a deep-learning neural-network based on TensorFlow™ is suggested for the prediction traffic flow conditions, using real-time traffic data. Until now, no research has applied the TensorFlow™ deep learning neural network model to the estimation of traffic conditions. The suggested supervised model is trained by a deep learning algorithm, which uses real traffic data aggregated every five minutes. Results demonstrate that the model's accuracy rate is around 99%.",
"title": ""
},
{
"docid": "a7c661ce625c60ef7a1ff498795b9020",
"text": "Median filtering technique is often used to remove additive white, salt and pepper noise from a signal or a source image. This filtering method is essential for the processing of digital data representing analog signals in real time. The median filter considers each pixel in the image in turn and looks at its nearby neighbors to determine whether or not it is representative of its surroundings. It replaces the pixel value with the median of neighboring pixel values. The median is calculated by first sorting all the pixel values from the surrounding neighborhood into numerical order and then replacing the pixel being considered with the middle pixel value. We have used graphics processing units (GPUs) to implement the post-processing, performed by NVIDIA Compute Unified Device Architecture (CUDA). Such a system is faster than the CPU version, or other traditional computing, for processing medical applications such as echography or Doppler. This paper shows the effect of the Median Filtering and a comparison of the performance of the CPU and GPU in terms of response time.",
"title": ""
},
{
"docid": "971a0e51042e949214fd75ab6203e36a",
"text": "This paper presents an automatic recognition method for color text characters extracted from scene images, which is robust to strong distortions, complex background, low resolution and non uniform lightning. Based on a specific architecture of convolutional neural networks, the proposed system automatically learns how to recognize characters without making any assumptions, without applying any preprocessing or post-processing and without using tunable parameters. For this purpose, we use a training set of scene text images extracted from the ICDAR 2003 public training database. The proposed method is compared to recent character recognition techniques for scene images based on the ICDAR 2003 public samples dataset in order to contribute to the state-of-the-art method comparison efforts initiated in ICDAR 2003. Experimental results show an encouraging average recognition rate of 84.53%, ranging from 93.47% for clear images to 67.86% for seriously distorted images.",
"title": ""
},
{
"docid": "cab1abfa4e945b3892fa19f3fa030992",
"text": "Catheter-associated urinary tract infections (UTIs) are a significant negative outcome. There are previous studies showing advantages in removing Foleys early but no studies of the effect of using intermittent as opposed to Foley catheterization in a trauma population. This study evaluates the effectiveness of a straight catheter protocol implemented in February 2015. A retrospective chart review was performed on all patients admitted to the trauma service at a single institution who had a UTI one year before and one year after protocol implementation on February 18, 2015. The protocol involved removing Foley catheters early and using straight catheterization. Rates were compared with Fisher's exact test and continuous data were compared using student's t test. There were 1477 patients admitted to the trauma service in the control year and 1707 in the study year. The control year had a total of 43 patients with a UTI, 28 of these met inclusion criteria. The intervention year had a total of 35 patients with a UTI and 17 met inclusion criteria. The rate of patients having a UTI went from 0.019 to 0.010 (p = 0.035). In females this rate went from 0.033 to 0.009 (p = 0.007), whereas in males it went from 0.012 to 0.010 (p = 0.837). This study shows a statistically significant improvement in the rate of UTIs after implementing an intermittent catheterization protocol suggesting that this protocol could improve the rate of UTIs in other trauma centers. We use this for all trauma patients, and it is being looked at for use hospital-wide.",
"title": ""
},
{
"docid": "52cc3f8cd0609b1ceaa7bb9b01643c8d",
"text": "A 24-GHz portable FMCW radar for short-range human tracking is designed, fabricated, and tested. The complete radar system weights 17.3 grams and has a dimension of 65mm×60mm×25mm. It has an on-board chirp generator, which generates a 45.7 Hz sawtooth signal to control the VCO. A 1.8GHz bandwidth ranging from 22.8 GHz to 24.6 GHz is transmitted. A pair of Vivaldi antennas with a bandwidth of 3.8 GHz, ranging from 22.5 GHz to 26.3 GHz, are implemented on the same board with the RF transceiver. A six-port structure is employed to down-convert the RF signal to baseband. Measurement result has validated its promising ability to for short-range human tracking.",
"title": ""
},
{
"docid": "ef84f7f53b60cf38972ff1eb04d0f6a5",
"text": "OBJECTIVE\nThe purpose of this prospective study was to evaluate the efficacy and safety of screw fixation without bone fusion for unstable thoracolumbar and lumbar burst fracture.\n\n\nMETHODS\nNine patients younger than 40 years underwent screw fixation without bone fusion, following postural reduction using a soft roll at the involved vertebra, in cases of burst fracture. Their motor power was intact in spite of severe canal compromise. The surgical procedure included postural reduction for 3 days and screw fixations at one level above, one level below and at the fractured level itself. The patients underwent removal of implants 12 months after the initial operation, due to possibility of implant failure. Imaging and clinical findings, including canal encroachment, vertebral height, clinical outcome, and complications were analyzed.\n\n\nRESULTS\nPrior to surgery, the mean pain score (visual analogue scale) was 8.2, which decreased to 2.2 at 12 months after screw fixation. None of the patients complained of worsening of pain during 6 months after implant removal. All patients were graded as having excellent or good outcomes at 6 months after implant removal. The proportion of canal compromise at the fractured level improved from 55% to 35% at 12 months after surgery. The mean preoperative vertebral height loss was 45.3%, which improved to 20.6% at 6 months after implant removal. There were no neurological deficits related to neural injury. The improved vertebral height and canal compromise were maintained at 6 months after implant removal.\n\n\nCONCLUSION\nShort segment pedicle screw fixation, including fractured level itself, without bone fusion following postural reduction can be an effective and safe operative technique in the management of selected young patients suffering from unstable burst fracture.",
"title": ""
},
{
"docid": "6a0f60881dddc5624787261e0470b571",
"text": "Title of Dissertation: AUTOMATED STRUCTURAL AND SPATIAL COMPREHENSION OF DATA TABLES Marco David Adelfio, Doctor of Philosophy, 2015 Dissertation directed by: Professor Hanan Samet Department of Computer Science Data tables on the Web hold large quantities of information, but are difficult to search, browse, and merge using existing systems. This dissertation presents a collection of techniques for extracting, processing, and querying tables that contain geographic data, by harnessing the coherence of table structures for retrieval tasks. Data tables, including spreadsheets, HTML tables, and those found in rich document formats, are the standard way of communicating structured data for typical computer users. Notably, geographic tables (i.e., those containing names of locations) constitute a large fraction of publicly-available data tables and are ripe for exposure to Internet users who are increasingly comfortable interacting with geographic data using web-based maps. Of particular interest is the creation of a large repository of geographic data tables that would enable novel queries such as “find vacation itineraries geographically similar to mine” for use in trip planning or “find demographic datasets that cover regions X, Y, and Z” for sociological research. In support of these goals, this dissertation identifies several methods for using the structure and context of data tables to improve the interpretation of the contents, even in the presence of ambiguity. First, a method for identifying functional components of data tables is presented, capitalizing on techniques for sequence labeling that are used in natural language processing. Next, a novel automated method for converting place references to physical latitude/longitude values, a process known as geotagging, is applied to tables with high accuracy. A classification procedure for identifying a specific class of geographic table, the travel itinerary, is also described, which borrows inspiration from optimization techniques for the traveling salesman problem (TSP). Finally, methods for querying spatially similar tables are introduced and several mechanisms for visualizing and interacting with the extracted geographic data are explored. AUTOMATED STRUCTURAL AND SPATIAL COMPREHENSION OF DATA TABLES",
"title": ""
},
{
"docid": "18ad179d4817cb391ac332dcbfe13788",
"text": "Many papers have been published on the knowledge base completion task in the past few years. Most of these introduce novel architectures for relation learning that are evaluated on standard datasets such as FB15k and WN18. This paper shows that the accuracy of almost all models published on the FB15k can be outperformed by an appropriately tuned baseline — our reimplementation of the DistMult model. Our findings cast doubt on the claim that the performance improvements of recent models are due to architectural changes as opposed to hyperparameter tuning or different training objectives. This should prompt future research to re-consider how the performance of models is evaluated and reported.",
"title": ""
},
{
"docid": "c71cfc228764fc96e7e747e119445939",
"text": "This review discusses and summarizes the recent developments and advances in the use of biodegradable materials for bone repair purposes. The choice between using degradable and non-degradable devices for orthopedic and maxillofacial applications must be carefully weighed. Traditional biodegradable devices for osteosynthesis have been successful in low or mild load bearing applications. However, continuing research and recent developments in the field of material science has resulted in development of biomaterials with improved strength and mechanical properties. For this purpose, biodegradable materials, including polymers, ceramics and magnesium alloys have attracted much attention for osteologic repair and applications. The next generation of biodegradable materials would benefit from recent knowledge gained regarding cell material interactions, with better control of interfacing between the material and the surrounding bone tissue. The next generations of biodegradable materials for bone repair and regeneration applications require better control of interfacing between the material and the surrounding bone tissue. Also, the mechanical properties and degradation/resorption profiles of these materials require further improvement to broaden their use and achieve better clinical results.",
"title": ""
},
{
"docid": "83ccee768c29428ea8a575b2e6faab7d",
"text": "Audio-based cough detection has become more pervasive in recent years because of its utility in evaluating treatments and the potential to impact the quality of life for individuals with chronic cough. We critically examine the current state of the art in cough detection, concluding that existing approaches expose private audio recordings of users and bystanders. We present a novel algorithm for detecting coughs from the audio stream of a mobile phone. Our system allows cough sounds to be reconstructed from the feature set, but prevents speech from being reconstructed intelligibly. We evaluate our algorithm on data collected in the wild and report an average true positive rate of 92% and false positive rate of 0.5%. We also present the results of two psychoacoustic experiments which characterize the tradeoff between the fidelity of reconstructed cough sounds and the intelligibility of reconstructed speech.",
"title": ""
}
] |
scidocsrr
|
eb01f7efb807309b1d0b746aaf729ae1
|
Comparing Fusion Models for DNN-Based Audiovisual Continuous Speech Recognition
|
[
{
"docid": "1f37b0d252de40c55eee0109c168983b",
"text": "The algorithm may be programmed without multiplication or division instructions and is eficient with respect to speed of execution and memory utilization. This paper describes an algorithm for computer control of a type of digital plotter that is now in common use with digital computers .' The plotter under consideration is capable of executing, in response to an appropriate pulse, any one of the eight linear movements shown in Figure 1. Thus, the plotter can move linearly from a point on a mesh to any adjacent point on the mesh. A typical mesh size is 1/100th of an inch. The data to be plotted are expressed in an (x , y) rectangular coordinate system which has been scaled with respect to the mesh; i.e., the data points lie on mesh points and consequently have integral coordinates. It is assumed that the data include a sufficient number of appropriately selected points to produce a satisfactory representation of the curve by connecting the points with line segments, as illustrated in Figure 2. In Figure 3, the line segment connecting",
"title": ""
},
{
"docid": "c2daec5b85a4e8eea614d855c6549ef0",
"text": "An audio-visual corpus has been collected to support the use of common material in speech perception and automatic speech recognition studies. The corpus consists of high-quality audio and video recordings of 1000 sentences spoken by each of 34 talkers. Sentences are simple, syntactically identical phrases such as \"place green at B 4 now\". Intelligibility tests using the audio signals suggest that the material is easily identifiable in quiet and low levels of stationary noise. The annotated corpus is available on the web for research use.",
"title": ""
}
] |
[
{
"docid": "eda305522dafe8de24ace9c17a1c02f3",
"text": "Steganography plays an important role in the field of information hiding. It is used in wide variety of applications such as internet security, authentication, copyright protection and information assurance etc. In Discrete Wavelet Transform (DWT) based steganography approaches the wavelet coefficients of the cover image are modified to embed the secret message. DWT based algorithm for image data hiding has been proposed in the recent past that embeds the secret message in CH band of cover image. This paper intends to observe the effect of embedding the secret message in different bands such as CH, CV and CD on the performance of stegano image in terms of Peak Signal to Noise Ratio (PSNR). Experimentation has been done using six different attacks. Experimental results reveal that the error block replacement with diagonal detail coefficients (CD) gives better PSNR than doing so with other coefficients.",
"title": ""
},
{
"docid": "83c87294c33601023fdd0624d2dacecc",
"text": "In modern road surveys, hanging power cables are among the most commonly-found geometric features. These cables are catenary curves that are conventionally modelled with three parameters in 2D Cartesian space. With the advent and popularity of the mobile mapping system (MMS), the 3D point clouds of hanging power cables can be captured within a short period of time. These point clouds, similarly to those of planar features, can be used for feature-based self-calibration of the system assembly errors of an MMS. However, to achieve this, a well-defined 3D equation for the catenary curve is needed. This paper proposes three 3D catenary curve models, each having different parameters. The models are examined by least squares fitting of simulated data and real data captured with an MMS. The outcome of the fitting is investigated in terms of the residuals and correlation matrices. Among the proposed models, one of them could estimate the parameters accurately and without any extreme correlation between the variables. This model can also be applied to those transmission lines captured by airborne laser scanning or any other hanging cable-like objects.",
"title": ""
},
{
"docid": "5725d1abf54de1b48f60315dab13e5d4",
"text": "Identifying the optimal set of individuals to first receive information (`seeds') in a social network is a widely-studied question in many settings, such as the diffusion of information, microfinance programs, and new technologies. Numerous studies have proposed various network-centrality based heuristics to choose seeds in a way that is likely to boost diffusion. Here we show that, for some frequently studied diffusion processes, randomly seeding S + x individuals can prompt a larger cascade than optimally targeting the best S individuals, for a small x. We prove our results for large classes of random networks, but also show that they hold in simulations over several real-world networks. This suggests that the returns to collecting and analyzing network information to identify the optimal seeds may not be economically significant. Given these findings, practitioners interested in communicating a message to a large number of people may wish to compare the cost of network-based targeting to that of slightly expanding initial outreach.",
"title": ""
},
{
"docid": "50d0b1e141bcea869352c9b96b0b2ad5",
"text": "In this paper we present the features of a Question/Answering (Q/A) system that had unparalleled performance in the TREC-9 evaluations. We explain the accuracy of our system through the unique characteristics of its architecture: (1) usage of a wide-coverage answer type taxonomy; (2) repeated passage retrieval; (3) lexico-semantic feedback loops; (4) extraction of the answers based on machine learning techniques; and (5) answer caching. Experimental results show the effects of each feature on the overall performance of the Q/A system and lead to general conclusions about Q/A from large text collections.",
"title": ""
},
{
"docid": "2747952e921f9e0c2beb524957edf2a0",
"text": "AngloGold Ashanti is an international gold mining company that has recently implemented an information security awareness program worldwide at all of their operations. Following the implementation, there was a normal business need to evaluate and measure the success and effectiveness of the program. A measuring tool that can be applied globally and that addressed AngloGold Ashanti’s unique requirements was developed and applied at the mining sites located in the West Africa region. The objective of this paper is, firstly, to give a brief overview on the measuring tool developed and, secondly to report on the application and results in the West Africa region.",
"title": ""
},
{
"docid": "ef660ba2575335fbc43c537a6306c7c5",
"text": "Environmental monitoring with mobile robots requires solving the informative path planning problem. A key challenge is how to compute a continuous path over space and time that will allow a robot to best sample the environment for an initially unknown phenomenon. To address this problem we devise a layered Bayesian Optimisation approach that uses two Gaussian Processes, one to model the phenomenon and the other to model the quality of selected paths. By using different acquisition functions over both models we tackle the exploration-exploitation trade off in a principled manner. Our method optimises sampling over continuous paths and allows us to find trajectories that maximise the reward over the path. We test our method on a large scale experiment for modelling ozone concentration in the US, and on a mobile robot modelling the changes in luminosity. Comparisons are presented against information based criteria and point-based strategies demonstrating the benefits of our method.",
"title": ""
},
{
"docid": "ab793edc212dc2a537dbcb4ac9736f9f",
"text": "Much of the abusive supervision research has focused on the supervisor– subordinate dyad when examining the effects of abusive supervision on employee outcomes. Using data from a large multisource field study, we extend this research by testing a trickle-down model of abusive supervision across 3 hierarchical levels (i.e., managers, supervisors, and employees). Drawing on social learning theory and social information processing theory, we find general support for the study hypotheses. Specifically, we find that abusive manager behavior is positively related to abusive supervisor behavior, which in turn is positively related to work group interpersonal deviance. In addition, hostile climate moderates the relationship between abusive supervisor behavior and work group interpersonal deviance such that the relationship is stronger when hostile climate is high. The results provide support for our trickle-down model in that abusive manager behavior was not only related to abusive supervisor behavior but was also associated with employees’ behavior 2 hierarchical levels below the manager.",
"title": ""
},
{
"docid": "5f8b51a4e762928ab46a3ceca6f488e7",
"text": "Variable-flux permanent-magnet machines (VFPM) are of great interest and many different machine topologies have been documented. This paper categorizes VFPM machine topologies with regard to the method of flux variation and further, in the case of hybrid excited machines with field coils, with regard to the location of the excitation sources. The different VFPM machines are reviewed and compared in terms of their torque density, complexity and their ability to vary the flux.",
"title": ""
},
{
"docid": "1b141958df645ecbc230988485386812",
"text": "For many applications of question answering (QA), being able to explain why a given model chose an answer is critical. However, the lack of labeled data for answer justifications makes learning this difficult and expensive. Here we propose an approach that uses answer ranking as distant supervision for learning how to select informative justifications, where justifications serve as inferential connections between the question and the correct answer while often containing little lexical overlap with either. We propose a neural network architecture for QA that reranks answer justifications as an intermediate (and human-interpretable) step in answer selection. Our approach is informed by a set of features designed to combine both learned representations and explicit features to capture the connection between questions, answers, and answer justifications. We show that with this end-to-end approach we are able to significantly improve upon a strong IR baseline in both justification ranking (+9% rated highly relevant) and answer selection (+6% P@1).",
"title": ""
},
{
"docid": "0f50b3dd947b9a04d121079e0fa8f10e",
"text": "Twitter has undoubtedly caught the attention of both the general public, and academia as a microblogging service worthy of study and attention. Twitter has several features that sets it apart from other social media/networking sites, including its 140 character limit on each user's message (tweet), and the unique combination of avenues via which information is shared: directed social network of friends and followers, where messages posted by a user is broadcast to all its followers, and the public timeline, which provides real time access to posts or tweets on specific topics for everyone. While the character limit plays a role in shaping the type of messages that are posted and shared, the dual mode of sharing information (public vs posts to one's followers) provides multiple pathways in which a posting can propagate through the user landscape via forwarding or \"Retweets\", leading us to ask the following questions: How does a message resonate and spread widely among the users on Twitter, and are the resulting cascade dynamics different due to the unique features of Twitter? What role does content of a message play in its popularity? Realizing that tweet content would play a major role in the information propagation dynamics (as borne out by the empirical results reported in this paper), we focused on patterns of information propagation on Twitter by observing the sharing and reposting of messages around a specific topic, i.e. the Iranian election.\n We know that during the 2009 post-election protests in Iran, Twitter and its large community of users played an important role in disseminating news, images, and videos worldwide and in documenting the events. We collected tweets of more than 20 million publicly accessible users on Twitter and analyzed over three million tweets related to the Iranian election posted by around 500K users during June and July of 2009. Our results provide several key insights into the dynamics of information propagation that are special to Twitter. For example, the tweet cascade size distribution is a power-law with exponent of -2.51 and more than 99% of the cascades have depth less than 3. The exponent is different from what one expects from a branching process (usually used to model information cascades) and so is the shallow depth, implying that the dynamics underlying the cascades are potentially different on Twitter. Similarly, we are able to show that while Twitter's Friends-Followers network structure plays an important role in information propagation through retweets (re-posting of another user's message), the search bar and trending topics on Twitter's front page offer other significant avenues for the spread of information outside the explicit Friends-Followers network. We found that at most 63.7% of all retweets in this case were reposts of someone the user was following directly. We also found that at least 7% of retweets are from the public posts, and potentially more than 30% of retweets are from the public timeline. In the end, we examined the context and content of the kinds of information that gained the attention of users and spread widely on Twitter. Our data indicates that the retweet probabilities are highly content dependent.",
"title": ""
},
{
"docid": "56401a83fecb64f2810c4bbc51b912fc",
"text": "This paper presents an approach to vision-based simultaneous localization and mapping (SLAM). Our approach uses the scale invariant feature transform (SIFT) as features and applies a rejection technique to concentrate on a reduced set of distinguishable, stable features. We track detected SIFT features over consecutive frames obtained by a stereo camera and select only those features that appear to be stable from different views. Whenever a feature is selected, we compute a representative feature given the previous observations. This approach is applied within a Rao-Blackwellized particle filter to make the data association easier and furthermore to reduce the number of landmarks that need to be maintained in the map. Our system has been implemented and tested on data gathered with a mobile robot in a typical office environment. Experiments presented in this paper demonstrate that our method improves the data association and in this way leads to more accurate maps",
"title": ""
},
{
"docid": "cb0d9a112e5df0aa50ac54eb28c72311",
"text": "This paper describes the development and testing of a system dynamics model of collaboration, trust building, and knowledge sharing in a complex, intergovernmental information system project. The model building and testing activity was an exploration of the feasibility of applying these modeling methods to a complex interorganizational process about which only qualitative data were available. The process to be modeled was the subject of qualitative field research studying knowledge and information sharing in interorganizational networks. This research had produced a large volume of observational and interview data and analyses about the technology project. In the course of collecting and analyzing data from this project, the researchers noted evidence of what appeared to be important feedback effects. The feedback loops appeared to influence the collaboration and knowledge sharing, critical parts of how the information system design and construction progressed. These observations led to conversations with colleagues who have extensive experience in dynamic modeling. All agreed that applying dynamic modeling methods to this process had considerable potential to yield valuable insights into collaboration. As a novel application of the methods it could yield new modeling insights as well. The modeling experience supported both propositions and was judged a success that will lead to continued exploration of these questions.",
"title": ""
},
{
"docid": "5c6c8834304264b59db5f28021ccbdb8",
"text": "We develop a dynamic optimal control model of a fashion designers challenge of maintaining brand image in the face of short-term pro
t opportunities through expanded sales that risk brand dilution in the longer-run. The key state variable is the brands reputation, and the key decision is sales volume. Depending on the brands capacity to command higher prices, one of two regimes is observed. If the price mark-ups relative to production costs are modest, then the optimal solution may simply be to exploit whatever value can be derived from the brand in the short-run and retire the brand when that capacity is fully diluted. However, if the price markups are more substantial, then an existing brand should be preserved.",
"title": ""
},
{
"docid": "bbdd4ffd6797d00c3547626959118b92",
"text": "A vision system was designed to detect multiple lanes on structured highway using an “estimate and detect” scheme. It detected the lane in which the vehicle was driving (the central lane) and estimated the possible position of two adjacent lanes. Then the detection was made based on these estimations. The vehicle was first recognized if it was driving on a straight road or in a curve using its GPS position and the OpenStreetMap digital map. The two cases were processed differently. For straight road, the central lane was detected in the original image using Hough transformation and a simplified perspective transformation was designed to make estimations. In the case of curve path, a complete perspective transformation was performed and the central lane was detected by scanning at each row in the top view image. The system was able to detected lane marks that were not distinct or even obstructed by other vehicles.",
"title": ""
},
{
"docid": "f61a7e280cffe673a9068cf33fd6f803",
"text": "Enterprise Resource Planning (ERP) systems are highly integrated enterprise-wide information systems that automate core business processes. The ERP packages of vendors such as SAP, Baan, J.D. Edwards, Peoplesoft and Intentia represent more than a standard business platform, they prescribe information blueprints of how an organisation’s business processes should operate. In this paper the scale and strategic importance of ERP systems are identified and the problem of ERP implementation is defined. A Critical Success Factors (CSFs) framework is proposed to aid managers develop an ERP implementation strategy. The framework is illustrated using two case examples from a research sample of eight companies. The case analysis highlights the critical impact of legacy systems upon the implementation process, the importance of selecting an appropriate ERP strategy and identifies the importance of Business Process Change (BPC) and software configuration in addition to factors already cited in the literature. The implications of the results for managerial practice are described and future research opportunities are outlined.",
"title": ""
},
{
"docid": "e9499206f1952f1bcbd2d7bedad1b3f8",
"text": "The Internet of Things (IoT) enables a wide range of application scenarios with potentially critical actuating and sensing tasks, e.g., in the e-health domain. For communication at the application layer, resource-constrained devices are expected to employ the constrained application protocol (CoAP) that is currently being standardized at the Internet Engineering Task Force. To protect the transmission of sensitive information, secure CoAP mandates the use of datagram transport layer security (DTLS) as the underlying security protocol for authenticated and confidential communication. DTLS, however, was originally designed for comparably powerful devices that are interconnected via reliable, high-bandwidth links. In this paper, we present Lithe-an integration of DTLS and CoAP for the IoT. With Lithe, we additionally propose a novel DTLS header compression scheme that aims to significantly reduce the energy consumption by leveraging the 6LoWPAN standard. Most importantly, our proposed DTLS header compression scheme does not compromise the end-to-end security properties provided by DTLS. Simultaneously, it considerably reduces the number of transmitted bytes while maintaining DTLS standard compliance. We evaluate our approach based on a DTLS implementation for the Contiki operating system. Our evaluation results show significant gains in terms of packet size, energy consumption, processing time, and network-wide response times when compressed DTLS is enabled.",
"title": ""
},
{
"docid": "947f17970a81ebc4e8c780b1291aa474",
"text": "Minimally invasive total hip arthroplasty (THA) is claimed to be superior to the standard technique, due to the potential reduction of soft tissue damage via a smaller and tissue-sparing approach. As a result of the lack of objective evidence of fewer muscle and tendon defects, controversy still remains as to whether minimally invasive total hip arthroplasty truly minimizes muscle and tendon damage. Therefore, the objective was to compare the influence of the surgical approach on abductor muscle trauma and to analyze the relevance to postoperative pain and functional recovery. Between June 2006 and July 2007, 44 patients with primary hip arthritis were prospectively included in the study protocol. Patients underwent cementless unilateral total hip arthroplasty either through a minimally invasive anterolateral approach (ALMI) (n = 21) or a modified direct lateral approach (mDL) (n = 16). Patients were evaluated clinically and underwent MR imaging preoperatively and at 3 and 12 months postoperatively. Clinical assessment contained clinical examination, performance of abduction test and the survey of a function score using the Harris Hip Score, a pain score using a numeric rating scale (NRS) of 0–10, as well as a satisfaction score using an NRS of 1–6. Additionally, myoglobin and creatine kinase were measured preoperatively, and 6, 24 and 96 h postoperatively. Evaluation of the MRI images included fatty atrophy (rating scale 0–4), tendon defects (present/absent) and bursal fluid collection of the abductor muscle. Muscle and tendon damage occurred in both groups, but more lateral gluteus medius tendon defects [mDL 3/12mth.: 6 (37%)/4 (25%); ALMI: 3 (14%)/2 (9%)] and muscle atrophy in the anterior part of the gluteus medius [mean-standard (12): 1.75 ± 1.8; mean-MIS (12): 0.98 ± 1.1] were found in patients with the mDL approach. The clinical outcome was also poorer compared to the ALMI group. Significantly, more Trendelenburg’s signs were evident and lower clinical scores were achieved in the mDL group. No differences in muscle and tendon damage were found for the gluteus minimus muscle. A higher serum myoglobin concentration was measured 6 and 24 h postoperatively in the mDL group (6 h: 403 ± 168 μg/l; 24 h: 304 ± 182 μg/l) compared to the ALMI group (6 h: 331 ± 143 μg/l; 24 h: 268 ± 145 μg/l). Abductor muscle and tendon damage occurred in both approaches, but the gluteus medius muscle can be spared more successfully via the minimally invasive approach and is accompanied by a better clinical outcome. Therefore, going through the intermuscular plane, without any detachment or dissection of muscle and tendons, truly minimizes perioperative soft tissue trauma. Furthermore, MRI emerges as an important imaging modality in the evaluation of muscle trauma in THA.",
"title": ""
},
{
"docid": "be6ae3d9324fec5a4a5a5e8b5f0d6e0f",
"text": "ive Summarization Improved by WordNet-based Extractive Sentences Niantao Xie, Sujian Li, Huiling Ren, and Qibin Zhai 1 MOE Key Laboratory of Computational Linguistics, Peking University, China 2 Institute of Medical Information, Chinese Academy of Medical Sciences 3 MOE Information Security Lab, School of Software & Microelectronics, Peking University, China {xieniantao,lisujian}@pku.edu.cn ren.huiling@imicams.ac.cn qibinzhai@ss.pku.edu.cn Abstract. Recently, the seq2seq abstractive summarization models have achieved good results on the CNN/Daily Mail dataset. Still, how to improve abstractive methods with extractive methods is a good research direction, since extractive methods have their potentials of exploiting various efficient features for extracting important sentences in one text. In this paper, in order to improve the semantic relevance of abstractive summaries, we adopt the WordNet based sentence ranking algorithm to extract the sentences which are most semantically to one text. Then, we design a dual attentional seq2seq framework to generate summaries with consideration of the extracted information. At the same time, we combine pointer-generator and coverage mechanisms to solve the problems of out-of-vocabulary (OOV) words and duplicate words which exist in the abstractive models. Experiments on the CNN/Daily Mail dataset show that our models achieve competitive performance with the state-of-theart ROUGE scores. Human evaluations also show that the summaries generated by our models have high semantic relevance to the original text. Recently, the seq2seq abstractive summarization models have achieved good results on the CNN/Daily Mail dataset. Still, how to improve abstractive methods with extractive methods is a good research direction, since extractive methods have their potentials of exploiting various efficient features for extracting important sentences in one text. In this paper, in order to improve the semantic relevance of abstractive summaries, we adopt the WordNet based sentence ranking algorithm to extract the sentences which are most semantically to one text. Then, we design a dual attentional seq2seq framework to generate summaries with consideration of the extracted information. At the same time, we combine pointer-generator and coverage mechanisms to solve the problems of out-of-vocabulary (OOV) words and duplicate words which exist in the abstractive models. Experiments on the CNN/Daily Mail dataset show that our models achieve competitive performance with the state-of-theart ROUGE scores. Human evaluations also show that the summaries generated by our models have high semantic relevance to the original text.",
"title": ""
},
{
"docid": "f8d703c8692b2598d2db46b2c0e6b132",
"text": "Data analytics is proving to be very useful for achieving productivity gains in manufacturing. Predictive analytics (using advanced machine learning) is particularly valuable in manufacturing, as it leads to production improvement with respect to the cost, quantity, quality and sustainability of manufactured products by anticipating changes to the manufacturing system states. Many small and medium manufacturers do not have the infrastructure, technical capability or financial means to take advantage of predictive analytics. A domain-specific language and framework for performing predictive analytics for manufacturing and production frameworks can counter this deficiency. In this paper, we survey some of the applications of predictive analytics in manufacturing and we discuss the challenges that need to be addressed. Then, we propose a core set of abstractions and a domain-specific framework for applying predictive analytics on manufacturing applications. Such a framework will allow manufacturers to take advantage of predictive analytics to improve their production.",
"title": ""
},
{
"docid": "a26717cb49e3886c2b2eaab4c9694183",
"text": "Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations.",
"title": ""
}
] |
scidocsrr
|
c6a1951bbfde0d2311e7a9f8b27ad756
|
Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery
|
[
{
"docid": "228f2487760407daf669676ce3677609",
"text": "The limitation of using low electron doses in non-destructive cryo-electron tomography of biological specimens can be partially offset via averaging of aligned and structurally homogeneous subsets present in tomograms. This type of sub-volume averaging is especially challenging when multiple species are present. Here, we tackle the problem of conformational separation and alignment with a \"collaborative\" approach designed to reduce the effect of the \"curse of dimensionality\" encountered in standard pair-wise comparisons. Our new approach is based on using the nuclear norm as a collaborative similarity measure for alignment of sub-volumes, and by exploiting the presence of symmetry early in the processing. We provide a strict validation of this method by analyzing mixtures of intact simian immunodeficiency viruses SIV mac239 and SIV CP-MAC. Electron microscopic images of these two virus preparations are indistinguishable except for subtle differences in conformation of the envelope glycoproteins displayed on the surface of each virus particle. By using the nuclear norm-based, collaborative alignment method presented here, we demonstrate that the genetic identity of each virus particle present in the mixture can be assigned based solely on the structural information derived from single envelope glycoproteins displayed on the virus surface.",
"title": ""
}
] |
[
{
"docid": "94ea3cbf3df14d2d8e3583cb4714c13f",
"text": "The emergence of computers as an essential tool in scientific research has shaken the very foundations of differential modeling. Indeed, the deeply-rooted abstraction of smoothness, or differentiability, seems to inherently clash with a computer's ability of storing only finite sets of numbers. While there has been a series of computational techniques that proposed discretizations of differential equations, the geometric structures they are supposed to simulate are often lost in the process.",
"title": ""
},
{
"docid": "4181650cd44ab35c2200e88bc631db4b",
"text": "Deep learning has emerged as a strong and efficient framework that can be applied to a broad spectrum of complex learning problems which were difficult to solve using the traditional machine learning techniques in the past. In the last few years, deep learning has advanced radically in such a way that it can surpass human-level performance on a number of tasks. As a consequence, deep learning is being extensively used in most of the recent day-to-day applications. However, security of deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye, but can lead the model to misclassify the output. In recent times, different types of adversaries based on their threat model leverage these vulnerabilities to compromise a deep learning system where adversaries have high incentives. Hence, it is extremely important to provide robustness to deep learning algorithms against these adversaries. However, there are only a few strong countermeasures which can be used in all types of attack scenarios to design a robust deep learning system. In this paper, we attempt to provide a detailed discussion on different types of adversarial attacks with various threat models and also elaborate the efficiency and challenges of recent countermeasures against them.",
"title": ""
},
{
"docid": "6b5e5bb3c1567be115dfd5060370b16f",
"text": "In this paper a system for processing documents that can be grouped into classes is illustrated. We have considered invoices as a case-study. The system is divided into three phases: document analysis, classification, and understanding. We illustrate the analysis and understanding phases. The system is based on knowledge constructed by means of a learning procedure. The experimental results demonstrate the reliability of our document analysis and understanding procedures. They also present evidence that it is possible to use a small learning set of invoices to obtain reliable knowledge for the understanding phase.",
"title": ""
},
{
"docid": "82e170219f7fefdc2c36eb89e44fa0f5",
"text": "The Internet of Things (IOT), the idea of getting real-world objects connected with each other, will change the ways we organize, obtain and consume information radically. Through sensor networks, agriculture can be connected to the IOT, which allows us to create connections among agronomists, farmers and crops regardless of their geographical differences. With the help of the connections, the agronomists will have better understanding of crop growth models and farming practices will be improved as well. This paper reports on the design of the sensor network when connecting agriculture to the IOT. Reliability, management, interoperability, low cost and commercialization are considered in the design. Finally, we share our experiences in both development and deployment.",
"title": ""
},
{
"docid": "4ae4a8a70ba97204f881b805693daa5e",
"text": "This paper discusses automatic determination of case in Arabic. This task is a major source of errors in full diacritization of Arabic. We use a gold-standard syntactic tree, and obtain an error rate of about 4.2%, with a machine learning based system outperforming a system using hand-written rules. A careful error analysis suggests that when we account for annotation errors in the gold standard, the error rate drops to 0.8%, with the hand-written rules outperforming the machine learning-based system.",
"title": ""
},
{
"docid": "6e82e635682cf87a84463f01c01a1d33",
"text": "Finger veins have been proved to be an effective biometric for personal identification in the recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To improve this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.",
"title": ""
},
{
"docid": "12d54528e781e3d1f812887b72e02eec",
"text": "CONTEXT\nIn Israel, the mean annual incidence of hepatitis A disease was 50.4 per 100 000 during 1993-1998. A 2-dose universal hepatitis A immunization program aimed at children aged 18 and 24 months (without a catch-up campaign) was started in 1999.\n\n\nOBJECTIVE\nTo observe the impact of toddlers-only universal vaccination on hepatitis A virus disease in Israel.\n\n\nDESIGN AND SETTING\nOngoing passive national surveillance of hepatitis A cases in Israel has been conducted since 1993 by the Ministry of Health. An active surveillance program in the Jerusalem district in 1999-2003 provided validation for the passive program.\n\n\nMAIN OUTCOME MEASURE\nIncidence of reported hepatitis A disease, 1993-2004.\n\n\nRESULTS\nOverall vaccine coverage in Israel in 2001-2002 was 90% for the first dose and 85% for the second dose. A decline in disease rates was observed before 1999 among the Jewish but not the non-Jewish population. After initiation of the program, a sharp decrease in disease rates was observed in both populations. The annual incidence of 2.2 to 2.5 per 100 000 during 2002-2004 represents a 95% or greater reduction for each year with respect to the mean incidence during 1993-1998 (P<.001). For children aged 1 through 4 years, a 98.2% reduction in disease was observed in 2002-2004, compared with the prevaccination period (P<.001). However, a sharp decline was also observed in all other age groups (84.3% [<1 year], 96.5% [5-9 years], 95.2% [10-14 years], 91.3% [15-44 years], 90.6% [45-64 years], and 77.3% [>or=65 years]). Among the Jewish population in the Jerusalem district, in whom the active surveillance program was successfully conducted, a more than 90% reduction of disease was demonstrated. Of the 433 cases reported nationwide in 2002-2004 in whom vaccination status could be ascertained, 424 (97.9%) received no vaccine and none received 2 doses.\n\n\nCONCLUSION\nThis universal toddlers-only immunization program in Israel demonstrated not only high effectiveness of hepatitis A vaccination but also marked herd protection, challenging the need for catch-up hepatitis A vaccination programs.",
"title": ""
},
{
"docid": "7d7f0968d5c6010542f76273dfd7a353",
"text": "Numerous single image blind deblurring algorithms have been proposed to restore latent sharp images under camera motion. However, these algorithms are mainly evaluated using either synthetic datasets or few selected real blurred images. It is thus unclear how these algorithms would perform on images acquired \"in the wild\" and how we could gauge the progress in the field. In this paper, we aim to bridge this gap. We present the first comprehensive perceptual study and analysis of single image blind deblurring using real-world blurred images. First, we collect a dataset of real blurred images and a dataset of synthetically blurred images. Using these datasets, we conduct a large-scale user study to quantify the performance of several representative state-of-the-art blind deblurring algorithms. Second, we systematically analyze subject preferences, including the level of agreement, significance tests of score differences, and rationales for preferring one method over another. Third, we study the correlation between human subjective scores and several full-reference and noreference image quality metrics. Our evaluation and analysis indicate the performance gap between synthetically blurred images and real blurred image and sheds light on future research in single image blind deblurring.",
"title": ""
},
{
"docid": "ba0e3d6cc397adb6cc9fa901aff1ff22",
"text": "Though deep learning has pushed the boundaries of classification forward, in recent years hints of the limits of standard classification have begun to emerge. Problems such as fooling, adding new classes over time, and the need to retrain learning models only for small changes to the original problem all point to a potential shortcoming in the classic classification regime, where a comprehensive a priori knowledge of the possible classes or concepts is critical. Without such knowledge, classifiers misjudge the limits of their knowledge and overgeneralization therefore becomes a serious obstacle to consistent performance. In response to these challenges, this paper extends the classic regime by reframing classification instead with the assumption that concepts present in the training set are only a sample of the hypothetical final set of concepts. To bring learning models into this new paradigm, a novel elaboration of standard architectures called the competitive overcomplete output layer (COOL) neural network is introduced. Experiments demonstrate the effectiveness of COOL by applying it to fooling, separable concept learning, one-class neural networks, and standard classification benchmarks. The results suggest that, unlike conventional classifiers, the amount of generalization in COOL networks can be tuned to match the problem.",
"title": ""
},
{
"docid": "fdbcf90ffeebf9aab41833df0fff23e6",
"text": "(Under the direction of Anselmo Lastra) For image synthesis in computer graphics, two major approaches for representing a surface's appearance are texture mapping, which provides spatial detail, such as wallpaper, or wood grain; and the 4D bi-directional reflectance distribution function (BRDF) which provides angular detail, telling how light reflects off surfaces. I combine these two modes of variation to form the 6D spatial bi-directional reflectance distribution function (SBRDF). My compact SBRDF representation simply stores BRDF coefficients at each pixel of a map. I propose SBRDFs as a surface appearance representation for computer graphics and present a complete system for their use. I acquire SBRDFs of real surfaces using a device that simultaneously measures the BRDF of every point on a material. The system has the novel ability to measure anisotropy (direction of threads, scratches, or grain) uniquely at each surface point. I fit BRDF parameters using an efficient nonlinear optimization approach specific to BRDFs. SBRDFs can be rendered using graphics hardware. My approach yields significantly more detailed, general surface appearance than existing techniques for a competitive rendering cost. I also propose an SBRDF rendering method for global illumination using prefiltered environment maps. This improves on existing prefiltered environment map techniques by decoupling the BRDF from the environment maps, so a single set of maps may be used to illuminate the unique BRDFs at each surface point. I demonstrate my results using measured surfaces including gilded wallpaper, plant leaves, upholstery fabrics, wrinkled gift-wrapping paper and glossy book covers. iv To Tiffany, who has worked harder and sacrificed more for this than have I. ACKNOWLEDGMENTS I appreciate the time, guidance and example of Anselmo Lastra, my advisor. I'm grateful to Steve Molnar for being my mentor throughout graduate school. I'm grateful to the other members of my committee, Henry Fuchs, Gary Bishop, and Lars Nyland for helping and teaching me and creating an environment that allows research to be done successfully and pleasantly. I am grateful for the effort and collaboration of Ben Cloward, who masterfully modeled the Carolina Inn lobby, patiently worked with my software, and taught me much of how artists use computer graphics. I appreciate the collaboration of Wolfgang Heidrich, who worked hard on this project and helped me get up to speed on shading with graphics hardware. I'm thankful to Steve Westin, for patiently teaching me a great deal about surface appearance and light measurement. I'm grateful for …",
"title": ""
},
{
"docid": "f7c73bc585a76e999ec00cf9b8c3be82",
"text": "The paper describes the enrichment of OntoSenseNeta verbcentric lexical resource for Indian Languages. This resource contains a newly developed Telugu-Telugu dictionary. It is important because native speakers can better annotate the senses when both the word and its meaning are in Telugu. Hence efforts are made to develop a soft copy of Telugu dictionary. Our resource also has manually annotated gold standard corpus consisting 8483 verbs, 253 adverbs and 1673 adjectives. Annotations are done by native speakers according to defined annotation guidelines. In this paper, we provide an overview of the annotation procedure and present the validation of our resource through inter-annotator agreement. Concepts of sense-class and sense-type are discussed. Additionally, we discuss the potential of lexical sense-annotated corpora in improving word sense disambiguation (WSD) tasks. Telugu WordNet is crowd-sourced for annotation of individual words in synsets and is compared with the developed sense-annotated lexicon (OntoSenseNet) to examine the improvement. Also, we present a special categorization (spatio-temporal classification) of adjectives.",
"title": ""
},
{
"docid": "1984fa68b41a0018a02fbcb0cf3737f4",
"text": "In this letter, a new miniaturized broadband 3-dB branchline coupler at the center frequency of 880 MHz is proposed. The proposed coupler is designed by means of integrated miniaturization method, which consists of shunt capacitors, fractal geometry and equivalent miniaturized stubs, to achieve 82% size reduction compared with the referenced coupler. The return loss and isolation are both under −20 dB over a 19% relative bandwidth. The proposed coupler is simulated, fabricated and measured. The measured results agree well with the simulated ones.",
"title": ""
},
{
"docid": "1465b6c38296dfc46f8725dca5179cf1",
"text": "A brief introduction is given to the actual mechanics of simulated annealing, and a simple example from an IC layout is used to illustrate how these ideas can be applied. The complexities and tradeoffs involved in attacking a realistically complex design problem are illustrated by dissecting two very different annealing algorithms for VLSI chip floorplanning. Several current research problems aimed at determining more precisely how and why annealing algorithms work are examined. Some philosophical issues raised by the introduction of annealing are discussed.<<ETX>>",
"title": ""
},
{
"docid": "c35a4278aa4a084d119238fdd68d9eb6",
"text": "ARM TrustZone, which provides a Trusted Execution Environment (TEE), normally plays a role in keeping security-sensitive resources safe. However, to properly control access to the resources, it is not enough to just isolate them from the Rich Execution Environment (REE). In addition to the isolation, secure communication should be guaranteed between security-critical resources in the TEE and legitimate REE processes that are permitted to use them. Even though there is a TEE security solution — namely, a kernel-integrity monitor — it aims to protect the REE kernel’s static regions, not to secure communication between the REE and TEE. We propose SeCReT to ameliorate this problem. SeCReT is a framework that builds a secure channel between the REE and TEE by enabling REE processes to use session keys in the REE that is regarded as unsafe region. SeCReT provides the session key to a requestor process only when the requestor’s code and control flow integrity are verified. To prevent the key from being exposed to an attacker who already compromised the REE kernel, SeCReT flushes the key from the memory every time the processor switches into kernel mode. In this paper, we present the design and implementation of SeCReT to show how it protects the key in the REE. Our prototype is implemented on Arndale board, which offers a Cortex-A15 dual-core processor with TrustZone as its security extension. We performed a security analysis by using a kernel rootkit and also ran LMBench microbenchmark to evaluate the performance overhead imposed by SeCReT.",
"title": ""
},
{
"docid": "f5e38c2f59aeb23b951dbe17ffe5729c",
"text": "The microblogging service Twitter is in the process of being appropriated for conversational interaction and is starting to be used for collaboration, as well. In an attempt to determine how well Twitter supports user-to-user exchanges, what people are using Twitter for, and what usage or design modifications would make it (more) usable as a tool for collaboration, this study analyzes a corpus of naturally-occurring public Twitter messages (tweets), focusing on the functions and uses of the @ sign and the coherence of exchanges. The findings reveal a surprising degree of conversationality, facilitated especially by the use of @ as a marker of addressivity, and shed light on the limitations of Twitter's current design for collaborative use.",
"title": ""
},
{
"docid": "f018db7f20245180d74e4eb07b99e8d3",
"text": "Particle filters can become quite inefficient when being applied to a high-dimensional state space since a prohibitively large number of samples may be required to approximate the underlying density functions with desired accuracy. In this paper, by proposing an adaptive Rao-Blackwellized particle filter for tracking in surveillance, we show how to exploit the analytical relationship among state variables to improve the efficiency and accuracy of a regular particle filter. Essentially, the distributions of the linear variables are updated analytically using a Kalman filter which is associated with each particle in a particle filtering framework. Experiments and detailed performance analysis using both simulated data and real video sequences reveal that the proposed method results in more accurate tracking than a regular particle filter",
"title": ""
},
{
"docid": "2ae1dfeae3c6b8a1ca032198f2989aef",
"text": "This study enhances the existing literature on online trust by integrating the consumers’ product evaluations model and technology adoption model in e-commerce environments. In this study, we investigate how perceived value influences the perceptions of online trust among online buyers and their willingness to repurchase from the same website. This study proposes a research model that compares the relative importance of perceived value and online trust to perceived usefulness in influencing consumers’ repurchase intention. The proposed model is tested using data collected from online consumers of e-commerce. The findings show that although trust and ecommerce adoption components are critical in influencing repurchase intention, product evaluation factors are also important in determining repurchase intention. Perceived quality is influenced by the perceptions of competitive price and website reputation, which in turn influences perceived value; and perceived value, website reputation, and perceived risk influence online trust, which in turn influence repurchase intention. The findings also indicate that the effect of perceived usefulness on repurchase intention is not significant whereas perceived value and online trust are the major determinants of repurchase intention. Major theoretical contributions and practical implications are discussed.",
"title": ""
},
{
"docid": "2c1689a9a6d257f9e2ce8f33a1e30cb9",
"text": "This study examined the use of neural word embeddings for clinical abbreviation disambiguation, a special case of word sense disambiguation (WSD). We investigated three different methods for deriving word embeddings from a large unlabeled clinical corpus: one existing method called Surrounding based embedding feature (SBE), and two newly developed methods: Left-Right surrounding based embedding feature (LR_SBE) and MAX surrounding based embedding feature (MAX_SBE). We then added these word embeddings as additional features to a Support Vector Machines (SVM) based WSD system. Evaluation using the clinical abbreviation datasets from both the Vanderbilt University and the University of Minnesota showed that neural word embedding features improved the performance of the SVMbased clinical abbreviation disambiguation system. More specifically, the new MAX_SBE method outperformed the other two methods and achieved the state-of-the-art performance on both clinical abbreviation datasets.",
"title": ""
},
{
"docid": "5f49c93d7007f0f14f1410ce7805b29a",
"text": "Die Psychoedukation im Sinne eines biopsychosozialen Schmerzmodells zielt auf das Erkennen und Verändern individueller schmerzauslösender und -aufrechterhaltender Faktoren ab. Der Einfluss kognitiver Bewertungen, emotionaler Verarbeitungsprozesse und schmerzbezogener Verhaltensweisen steht dabei im Mittelpunkt. Die Anregung und Anleitung zu einer verbesserten Selbstbeobachtung stellt die Voraussetzung zum Einsatz aktiver Selbstkontrollstrategien und zur Erhöhung der Selbstwirksamkeitserwartung dar. Dazu zählt die Entwicklung und Erarbeitung von Schmerzbewältigungsstrategien wie z. B. Aufmerksamkeitslenkung und Genusstraining. Eine besondere Bedeutung kommt dem Aufbau einer Aktivitätenregulation zur Strukturierung eines angemessenen Verhältnisses von Erholungs- und Anforderungsphasen zu. Interventionsmöglichkeiten stellen hier die Vermittlung von Entspannungstechniken, Problemlösetraining, spezifisches Kompetenztraining sowie Elemente der kognitiven Therapie dar. Der Aufbau alternativer kognitiver und handlungsbezogener Lösungsansätze dient einer verbesserten Bewältigung internaler und externaler Stressoren. Genutzt werden die förderlichen Bedingungen gruppendynamischer Prozesse. Einzeltherapeutische Interventionen dienen der Bearbeitung spezifischer psychischer Komorbiditäten und der individuellen Unterstützung bei der beruflichen und sozialen Wiedereingliederung. Providing the patient with a pain model based on the biopsychosocial approach is one of the most important issues in psychological intervention. Illness behaviour is influenced by pain-eliciting and pain-aggravating thoughts. Identification and modification of these thoughts is essential and aims to change cognitive evaluations, emotional processing, and pain-referred behaviour. Improved self-monitoring concerning maladaptive thoughts, feelings, and behaviour enables functional coping strategies (e.g. attention diversion and learning to enjoy things) and enhances self-efficacy expectancies. Of special importance is the establishment of an appropriate balance between stress and recreation. Intervention options include teaching relaxation techniques, problem-solving strategies, and specific skills as well as applying appropriate elements of cognitive therapy. The development of alternative cognitive and action-based strategies improves the patient’s ability to cope with internal and external stressors. All of the psychological elements are carried out in a group setting. Additionally, individual therapy is offered to treat comorbidities or to support reintegration into the patient’s job.",
"title": ""
},
{
"docid": "05c93e5ddb9cb3e7abd3a1ea38bc32dc",
"text": "BACKGROUND\nThis national study focused on posttreatment outcomes of community treatments of cocaine dependence. Relapse to weekly (or more frequent) cocaine use in the first year after discharge from 3 major treatment modalities was examined in relation to patient problem severity at admission to the treatment program and length of stay.\n\n\nMETHODS\nWe studied 1605 cocaine-dependent patients from 11 cities located throughout the United States using a naturalistic, nonexperimental evaluation design. They were sequentially admitted from November 1991 to December 1993 to 55 community-based treatment programs in the national Drug Abuse Treatment Outcome Studies. Included were 542 patients admitted to 19 long-term residential programs, 458 patients admitted to 24 outpatient drug-free programs, and 605 patients admitted to 12 short-term inpatient programs.\n\n\nRESULTS\nOf 1605 patients, 377 (23.5%) reported weekly cocaine use in the year following treatment (dropping from 73.1% in the year before admission). An additional 18.0% had returned to another drug treatment program. Higher severity of patient problems at program intake and shorter stays in treatment (<90 days) were related to higher cocaine relapse rates.\n\n\nCONCLUSIONS\nPatients with the most severe problems were more likely to enter long-term residential programs, and better outcomes were reported by those treated 90 days or longer. Dimensions of psychosocial problem severity and length of stay are, therefore, important considerations in the treatment of cocaine dependence. Cocaine relapse rates for patients with few problems at program intake were most favorable across all treatment conditions, but better outcomes for patients with medium- to high-level problems were dependent on longer treatment stays.",
"title": ""
}
] |
scidocsrr
|
391c259b1f55f531dd6b1c0ef4c1c06c
|
Is chess the drosophila of artificial intelligence? A social history of an algorithm.
|
[
{
"docid": "178d4712ef7dfa7a770ce1ebb702b24c",
"text": "In this article we present an overview on the state of the art in games solved in the domain of twoperson zero-sum games with perfect information. The results are summarized and some predictions for the near future are given. The aim of the article is to determine which game characteristics are predominant when the solution of a game is the main target. First, it is concluded that decision complexity is more important than state-space complexity as a determining factor. Second, we conclude that there is a trade-off between knowledge-based methods and brute-force methods. It is shown that knowledge-based methods are more appropriate for solving games with a low decision complexity, while brute-force methods are more appropriate for solving games with a low state-space complexity. Third, we found that there is a clear correlation between the first-player’s initiative and the necessary effort to solve a game. In particular, threat-space-based search methods are sometimes able to exploit the initiative to prove a win. Finally, the most important results of the research involved, the development of new intelligent search methods, are described. 2001 Published by Elsevier Science B.V.",
"title": ""
}
] |
[
{
"docid": "306a833c0130678e1b2ece7e8b824d5e",
"text": "In many natural languages, there are clear syntactic and/or intonational differences between declarative sentences, which are primarily used to provide information, and interrogative sentences, which are primarily used to request information. Most logical frameworks restrict their attention to the former. Those that are concerned with both usually assume a logical language that makes a clear syntactic distinction between declaratives and interrogatives, and usually assign different types of semantic values to these two types of sentences. A different approach has been taken in recent work on inquisitive semantics. This approach does not take the basic syntactic distinction between declaratives and interrogatives as its starting point, but rather a new notion of meaning that captures both informative and inquisitive content in an integrated way. The standard way to treat the logical connectives in this approach is to associate them with the basic algebraic operations on these new types of meanings. For instance, conjunction and disjunction are treated as meet and join operators, just as in classical logic. This gives rise to a hybrid system, where sentences can be both informative and inquisitive at the same time, and there is no clearcut division between declaratives and interrogatives. It may seem that these two general approaches in the existing literature are quite incompatible. The main aim of this paper is to show that this is not the case. We develop an inquisitive semantics for a logical language that has a clearcut division between declaratives and interrogatives. We show that this language coincides in expressive power with the hybrid language that is standardly assumed in inquisitive semantics, we establish a sound and complete axiomatization for the associated logic, and we consider a natural enrichment of the system with presuppositional interrogatives.",
"title": ""
},
{
"docid": "7034f49fe75a9152a4e6849d2aacdc0b",
"text": "Sustained hepatic inflammation is an important factor in progression of chronic liver diseases, including hepatitis C or non-alcoholic steatohepatitis. Liver inflammation is regulated by chemokines, which regulate the migration and activities of hepatocytes, Kupffer cells, hepatic stellate cells, endothelial cells, and circulating immune cells. However, the effects of the different chemokines and their receptors vary during pathogenesis of different liver diseases. During development of chronic viral hepatitis, CCL5 and CXCL10 regulate the cytopathic versus antiviral immune responses of T cells and natural killer cells. During development of nonalcoholic steatohepatitis, CCL2 and its receptor are up-regulated in the liver, where they promote macrophage accumulation, inflammation, fibrosis, and steatosis, as well as in adipose tissue. CCL2 signaling thereby links hepatic and systemic inflammation related to metabolic disorders and insulin resistance. Several chemokine signaling pathways also promote hepatic fibrosis. Recent studies have shown that other chemokines and immune cells have anti-inflammatory and antifibrotic activities. Chemokines and their receptors can also contribute to the pathogenesis of hepatocellular carcinoma, promoting proliferation of cancer cells, the inflammatory microenvironment of the tumor, evasion of the immune response, and angiogenesis. We review the roles of different chemokines in the pathogenesis of liver diseases and their potential use as biomarkers or therapeutic targets.",
"title": ""
},
{
"docid": "602f75c04e490d05b82bcb47219ec3da",
"text": "One common form of tampering in digital audio signals is known as splicing, where sections from one audio is inserted to another audio. In this paper, we propose an effective splicing detection method for audios. Our method achieves this by detecting abnormal differences in the local noise levels in an audio signal. This estimation of local noise levels is based on an observed property of audio signals that they tend to have kurtosis close to a constant in the band-pass filtered domain. We demonstrate the efficacy and robustness of the proposed method using both synthetic and realistic audio splicing forgeries.",
"title": ""
},
{
"docid": "651e1c0385dd55e04bb2fe90f0e6dd24",
"text": "Pollution has been recognized as the major threat to sustainability of river in Malaysia. Some of the limitations of existing methods for river monitoring are cost of deployment, non-real-time monitoring, and low resolution both in time and space. To overcome these limitations, a smart river monitoring solution is proposed for river water quality in Malaysia. The proposed method incorporates unmanned aerial vehicle (UAV), internet of things (IoT), low power wide area (LPWA) and data analytic (DA). A setup of the proposed method and preliminary results are presented. The proposed method is expected to deliver an efficient and real-time solution for river monitoring in Malaysia.",
"title": ""
},
{
"docid": "610ec093f08d62548925918d6e64b923",
"text": "Word embeddings encode semantic meanings of words into low-dimension word vectors. In most word embeddings, one cannot interpret the meanings of specific dimensions of those word vectors. Nonnegative matrix factorization (NMF) has been proposed to learn interpretable word embeddings via non-negative constraints. However, NMF methods suffer from scale and memory issue because they have to maintain a global matrix for learning. To alleviate this challenge, we propose online learning of interpretable word embeddings from streaming text data. Experiments show that our model consistently outperforms the state-of-the-art word embedding methods in both representation ability and interpretability. The source code of this paper can be obtained from http: //github.com/skTim/OIWE.",
"title": ""
},
{
"docid": "eff903cb53fc7f7e9719a2372d517ab3",
"text": "The freshwater angelfishes (Pterophyllum) are South American cichlids that have become very popular among aquarists, yet scarce information on their culture and aquarium husbandry exists. We studied Pterophyllum scalare to analyze dietary effects on fecundity, growth, and survival of eggs and larvae during 135 days. Three diets were used: A) decapsulated cysts of Artemia, B) commercial dry fish food, and C) a mix diet of the rotifer Brachionus plicatilis and the cladoceran Daphnia magna. The initial larval density was 100 organisms in each 40 L aquarium. With diet A, larvae reached a maximum weight of 3.80 g, a total length of 6.3 cm, and a height of 5.8 cm; with diet B: 2.80 g, 4.81 cm, and 4.79 cm, and with diet C: 3.00 g, 5.15 cm, and 5.10 cm, respectively. Significant differences were observed between diet A, and diet B and C, but no significantly differences were observed between diets B and C. Fecundity varied from 234 to 1,082 eggs in 20 and 50 g females, respectively. Egg survival ranged from 87.4% up to 100%, and larvae survival (80 larvae/40 L aquarium) from 50% to 66.3% using diet B and A, respectively. Live food was better for growing fish than the commercial balanced food diet. Fecundity and survival are important factors in planning a good production of angelfish.",
"title": ""
},
{
"docid": "d563b025b084b53c30afba4211870f2d",
"text": "Collaborative filtering (CF) techniques recommend items to users based on their historical ratings. In real-world scenarios, user interests may drift over time since they are affected by moods, contexts, and pop culture trends. This leads to the fact that a user’s historical ratings comprise many aspects of user interests spanning a long time period. However, at a certain time slice, one user’s interest may only focus on one or a couple of aspects. Thus, CF techniques based on the entire historical ratings may recommend inappropriate items. In this paper, we consider modeling user-interest drift over time based on the assumption that each user has multiple counterparts over temporal domains and successive counterparts are closely related. We adopt the cross-domain CF framework to share the static group-level rating matrix across temporal domains, and let user-interest distribution over item groups drift slightly between successive temporal domains. The derived method is based on a Bayesian latent factor model which can be inferred using Gibbs sampling. Our experimental results show that our method can achieve state-of-the-art recommendation performance as well as explicitly track and visualize user-interest drift over time.",
"title": ""
},
{
"docid": "45ea01d82897401058492bc2f88369b3",
"text": "Reduction in greenhouse gas emissions from transportation is essential in combating global warming and climate change. Eco-routing enables drivers to use the most eco-friendly routes and is effective in reducing vehicle emissions. The EcoTour system assigns eco-weights to a road network based on GPS and fuel consumption data collected from vehicles to enable ecorouting. Given an arbitrary source-destination pair in Denmark, EcoTour returns the shortest route, the fastest route, and the eco-route, along with statistics for the three routes. EcoTour also serves as a testbed for exploring advanced solutions to a range of challenges related to eco-routing.",
"title": ""
},
{
"docid": "2e6fcd8781e2f4cd7944ce0732e38d7c",
"text": "Hashing has been widely used for approximate nearest neighbor (ANN) search in big data applications because of its low storage cost and fast retrieval speed. The goal of hashing is to map the data points from the original space into a binary-code space where the similarity (neighborhood structure) in the original space is preserved. By directly exploiting the similarity to guide the hashing code learning procedure, graph hashing has attracted much attention. However, most existing graph hashing methods cannot achieve satisfactory performance in real applications due to the high complexity for graph modeling. In this paper, we propose a novel method, called scalable graph hashing with feature transformation (SGH), for large-scale graph hashing. Through feature transformation, we can effectively approximate the whole graph without explicitly computing the similarity graph matrix, based on which a sequential learning method is proposed to learn the hash functions in a bit-wise manner. Experiments on two datasets with one million data points show that our SGH method can outperform the state-of-the-art methods in terms of both accuracy and scalability.",
"title": ""
},
{
"docid": "b81825b9d529d0b8c1b7d68b6e9758fb",
"text": "We present KADABRA, a new algorithm to approximate betweenness centrality in directed and undirected graphs, which significantly outperforms all previous approaches on real-world complex networks. The efficiency of the new algorithm relies on two new theoretical contributions, of independent interest. The first contribution focuses on sampling shortest paths, a subroutine used by most algorithms that approximate betweenness centrality. We show that, on realistic random graph models, we can perform this task in time |E| 1 2 +o(1) with high probability, obtaining a significant speedup with respect to the Θ(|E|) worst-case performance. We experimentally show that this new technique achieves similar speedups on real-world complex networks, as well. The second contribution is a new rigorous application of the adaptive sampling technique. This approach decreases the total number of shortest paths that need to be sampled to compute all betweenness centralities with a given absolute error, and it also handles more general problems, such as computing the k most central nodes. Furthermore, our analysis is general, and it might be extended to other settings, as well.",
"title": ""
},
{
"docid": "7448b45dd5809618c3b6bb667cb1004f",
"text": "We first provide criteria for assessing informed consent online. Then we examine how cookie technology and Web browser designs have responded to concerns about informed consent. Specifically, we document relevant design changes in Netscape Navigator and Internet Explorer over a 5-year period, starting in 1995. Our retrospective analyses leads us to conclude that while cookie technology has improved over time regarding informed consent, some startling problems remain. We specify six of these problems and offer design remedies. This work fits within the emerging field of Value-Sensitive Design.",
"title": ""
},
{
"docid": "44017678b3da8c8f4271a9832280201e",
"text": "Data warehouses are users driven; that is, they allow end-users to be in control of the data. As user satisfaction is commonly acknowledged as the most useful measurement of system success, we identify the underlying factors of end-user satisfaction with data warehouses and develop an instrument to measure these factors. The study demonstrates that most of the items in classic end-user satisfaction measure are still valid in the data warehouse environment, and that end-user satisfaction with data warehouses depends heavily on the roles and performance of organizational information centers. # 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "a9a3c033b6467464b1f926ed9119a1cc",
"text": "Media mix modeling is a statistical analysis on historical data to measure the return on investment (ROI) on advertising and other marketing activities. Current practice usually utilizes data aggregated at a national level, which often suffers from small sample size and insufficient variation in the media spend. When sub-national data is available, we propose a geo-level Bayesian hierarchical media mix model (GBHMMM), and demonstrate that the method generally provides estimates with tighter credible intervals compared to a model with national level data alone. This reduction in error is due to having more observations and useful variability in media spend, which can protect advertisers from unsound reallocation decisions. Under some weak conditions, the geo-level model can reduce ad targeting bias. When geo-level data is not available for all the media channels, the geo-level model estimates generally deteriorate as more media variables are imputed using the national level data.",
"title": ""
},
{
"docid": "8bd9a5cf3ca49ad8dd38750410a462b0",
"text": "Most regional anesthesia in breast surgeries is performed as postoperative pain management under general anesthesia, and not as the primary anesthesia. Regional anesthesia has very few cardiovascular or pulmonary side-effects, as compared with general anesthesia. Pectoral nerve block is a relatively new technique, with fewer complications than other regional anesthesia. We performed Pecs I and Pec II block simultaneously as primary anesthesia under moderate sedation with dexmedetomidine for breast conserving surgery in a 49-year-old female patient with invasive ductal carcinoma. Block was uneventful and showed no complications. Thus, Pecs block with sedation could be an alternative to general anesthesia for breast surgeries.",
"title": ""
},
{
"docid": "13c7c1cbce6067185412cba7477519bf",
"text": "For any space mission, safety and reliability are the most important issues. To tackle this problem, we have studied anomaly detection and fault diagnosis methods for spacecraft systems based on machine learning (ML) and data mining (DM) technology. In these methods, the knowledge or model which is necessary for monitoring a spacecraft system is (semi-)automatically acquired from the spacecraft telemetry data. In this paper, we first overview the anomaly detection/diagnosis problem in the spacecraft systems and conventional techniques such as limit-check, expert systems and model-based diagnosis. Then we explain the concept of ML/DM-based approach to this problem, and introduce several anomaly detection/diagnosis methods which have been developed by us",
"title": ""
},
{
"docid": "b95de5287e9f65eff25d2550d4c71c19",
"text": "The syntax of application layer protocols carries valuable information for network intrusion detection. Hence, the majority of modern IDS perform some form of protocol analysis to refine their signatures with application layer context. Protocol analysis, however, has been mainly used for misuse detection, which limits its application for the detection of unknown and novel attacks. In this contribution we address the issue of incorporating application layer context into anomaly-based intrusion detection. We extend a payload-based anomaly detection method by incorporating structural information obtained from a protocol analyzer. The basis for our extension is computation of similarity between attributed tokens derived from a protocol grammar. The enhanced anomaly detection method is evaluated in experiments on detection of web attacks, yielding an improvement of detection accuracy of 49%. While byte-level anomaly detection is sufficient for detection of buffer overflow attacks, identification of recent attacks such as SQL and PHP code injection strongly depends on the availability of application layer context.",
"title": ""
},
{
"docid": "a8a71b2adae5a1aee0bc20c004c2baa2",
"text": "To stay competitive within the market, organizations need to understand the skills and competencies of their human resources in order to best utilize them. This paper focuses on the problem of modeling human resources in a dynamic environment, and presents a formal ontology for representing, inferring, and validating skills and competencies over time.",
"title": ""
},
{
"docid": "adc51e9fdbbb89c9a47b55bb8823c7fe",
"text": "State-of-the-art model counters are based on exhaustive DPLL algorithms, and have been successfully used in probabilistic reasoning, one of the key problems in AI. In this article, we present a new exhaustive DPLL algorithm with a formal semantics, a proof of correctness, and a modular design. The modular design is based on the separation of the core model counting algorithm from SAT solving techniques. We also show that the trace of our algorithm belongs to the language of Sentential Decision Diagrams (SDDs), which is a subset of Decision-DNNFs, the trace of existing state-of-the-art model counters. Still, our experimental analysis shows comparable results against state-of-the-art model counters. Furthermore, we obtain the first top-down SDD compiler, and show orders-of-magnitude improvements in SDD construction time against the existing bottom-up SDD compiler.",
"title": ""
},
{
"docid": "35470a422cdb3a287d45797e39c04637",
"text": "In this paper, we propose a method to recognize food images which include multiple food items considering co-occurrence statistics of food items. The proposed method employs a manifold ranking method which has been applied to image retrieval successfully in the literature. In the experiments, we prepared co-occurrence matrices of 100 food items using various kinds of data sources including Web texts, Web food blogs and our own food database, and evaluated the final results obtained by applying manifold ranking. As results, it has been proved that co-occurrence statistics obtained from a food photo database is very helpful to improve the classification rate within the top ten candidates.",
"title": ""
}
] |
scidocsrr
|
35381eab28443c122155525660e6bc23
|
AliMe Chat: A Sequence to Sequence and Rerank based Chatbot Engine
|
[
{
"docid": "fbc9572bc079b98ffe68cc461f33859d",
"text": "Dialog state tracking is a key component of many modern dialog systems, most of which are designed with a single, welldefined domain in mind. This paper shows that dialog data drawn from different dialog domains can be used to train a general belief tracking model which can operate across all of these domains, exhibiting superior performance to each of the domainspecific models. We propose a training procedure which uses out-of-domain data to initialise belief tracking models for entirely new domains. This procedure leads to improvements in belief tracking performance regardless of the amount of in-domain data available for training the model.",
"title": ""
}
] |
[
{
"docid": "81fd8d4c38a65c5d0df0c849e8c080fc",
"text": "The paper presents two types of one cycle current control method for Triple Active Bridge(TAB) phase-shifted DC-DC converter integrating Renewable Energy Source(RES), Energy Storage System(ESS) and a output dc bus. The main objective of the current control methods is to control the transformer current in each cycle so that dc transients are eliminated during phase angle change from one cycle to the next cycle. In the proposed current control methods, the transformer currents are sampled within a switching cycle and the phase shift angles for the next switching cycle are generated based on sampled current values and current references. The discussed one cycle control methods also provide an inherent power decoupling feature for the three port phase shifted triple active bridge converter. Two different methods, (a) sampling and updating twice in a switching cycle and (b) sampling and updating once in a switching cycle, are explained in this paper. The current control methods are experimentally verified using digital implementation technique on a laboratory made hardware prototype.",
"title": ""
},
{
"docid": "3f3ba8970ad046686a4c0fe11820da07",
"text": "Agriculture contributes to a major portion of India's GDP. Two major issues in modern agriculture are water scarcity and high labor costs. These issues can be resolved using agriculture task automation, which encourages precision agriculture. Considering abundance of sunlight in India, this paper discusses the design and development of an IoT based solar powered Agribot that automates irrigation task and enables remote farm monitoring. The Agribot is developed using an Arduino microcontroller. It harvests solar power when not performing irrigation. While executing the task of irrigation, it moves along a pre-determined path of a given farm, and senses soil moisture content and temperature at regular points. At each sensing point, data acquired from multiple sensors is processed locally to decide the necessity of irrigation and accordingly farm is watered. Further, Agribot acts as an IoT device and transmits the data collected from multiple sensors to a remote server using Wi-Fi link. At the remote server, raw data is processed using signal processing operations such as filtering, compression and prediction. Accordingly, the analyzed data statistics are displayed using an interactive interface, as per user request.",
"title": ""
},
{
"docid": "ad1cf5892f7737944ba23cd2e44a7150",
"text": "The ‘blockchain’ is the core mechanism for the Bitcoin digital payment system. It embraces a set of inter-related technologies: the blockchain itself as a distributed record of digital events, the distributed consensus method to agree whether a new block is legitimate, automated smart contracts, and the data structure associated with each block. We propose a permanent distributed record of intellectual effort and associated reputational reward, based on the blockchain that instantiates and democratises educational reputation beyond the academic community. We are undertaking initial trials of a private blockchain or storing educational records, drawing also on our previous research into reputation management for educational systems.",
"title": ""
},
{
"docid": "bbf764205f770481b787e76db5a3b614",
"text": "A∗ is a popular path-finding algorithm, but it can only be applied to those domains where a good heuristic function is known. Inspired by recent methods combining Deep Neural Networks (DNNs) and trees, this study demonstrates how to train a heuristic represented by a DNN and combine it with A∗ . This new algorithm which we call א∗ can be used efficiently in domains where the input to the heuristic could be processed by a neural network. We compare א∗ to N-Step Deep QLearning (DQN Mnih et al. 2013) in a driving simulation with pixel-based input, and demonstrate significantly better performance in this scenario.",
"title": ""
},
{
"docid": "e1001ebf3a30bcb2599fae6dae8f83e9",
"text": "The notion of the \"stakeholders\" of the firm has drawn ever-increasing attention since Freeman published his seminal book on Strategic Management: A Stakeholder Approach in 1984. In the understanding of most scholars in the field, stakeholder theory is not a special theory on a firm's constituencies but sets out to replace today's prevailing neoclassical economic concept of the firm. As such, it is seen as the superior theory of the firm. Though stakeholder theory explicitly is a theory on the firm, that is, on a private sector entity, some scholars try to apply it to public sector organizations, and, in particular, to e-government settings. This paper summarizes stakeholder theory, discusses its premises and justifications, compares its tracks, sheds light on recent attempts to join the two tracks, and discusses the benefits and limits of its practical applicability to the public sector using the case of a recent e-government initiative in New York State.",
"title": ""
},
{
"docid": "4510492476ae812905d22b567cfe1716",
"text": "Different language markers can be used to reveal the differences between structures of truthful and deceptive (fake) news. Two experiments are held: the first one is based on lexics level markers, the second one on discourse level is based on rhetorical relations categories (frequencies). Corpus consists of 174 truthful and deceptive news stories in Russian. Support Vector Machines and Random Forest Classifier were used for text classification. The best results for lexical markers we got by using Support Vector Ma-chines with rbf kernel (f-measure 0.65). The model could be developed and be used as a preliminary filter for fake news detection.",
"title": ""
},
{
"docid": "82d62feaa0c88789c44bbdc745ab21dc",
"text": "This paper proposes a new approach to solve the problem of real-time vision-based hand gesture recognition with the combination of statistical and syntactic analyses. The fundamental idea is to divide the recognition problem into two levels according to the hierarchical property of hand gestures. The lower level of the approach implements the posture detection with a statistical method based on Haar-like features and the AdaBoost learning algorithm. With this method, a group of hand postures can be detected in real time with high recognition accuracy. The higher level of the approach implements the hand gesture recognition using the syntactic analysis based on a stochastic context-free grammar. The postures that are detected by the lower level are converted into a sequence of terminal strings according to the grammar. Based on the probability that is associated with each production rule, given an input string, the corresponding gesture can be identified by looking for the production rule that has the highest probability of generating the input string.",
"title": ""
},
{
"docid": "73b150681d7de50ada8e046a3027085f",
"text": "We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children’s Book Test, where it obtains competitive performance, reading the story in a single pass.",
"title": ""
},
{
"docid": "9d26e9a4b5694588c7957067d9586df5",
"text": "Converters for telecom DC/DC power supply applications often require an output voltage somewhere within a wide range of input voltages. While the design of traditional converters will come with a heavy penalty in terms of component stresses and losses, and with the restrictions on the output voltage. Besides that, the high efficiency around the nominal input is another restriction for traditional converters. A controlling scheme for the four switch buck-boost converter is proposed to achieve high efficiency within the line range and the highest efficiency around the nominal input. A 48 V(36-75 V) input 12 V@25 A output two-stage prototype composed of the proposed converter and a full bridge converter is built in the lab. The experimental results verified the analysis.",
"title": ""
},
{
"docid": "6cd7a2c4767aeefd3a1cd8815e7d8c39",
"text": "We propose a new type of materialized view called a partially materialized view. A partially materialized view only materializes some of the rows, for example, the most frequently accessed rows, which reduces storage space and view maintenance effort. One or more control tables are associated with the view and define which rows are currently materialized. As a result, one can easily change which rows of the view are stored and maintained. We show how to extend view matching and maintenance algorithms to partially materialized views and outline several potential applications of the new view type. Experimental results in Microsoft SQL Server show that compared with fully materialized views, partially materialized views have lower storage requirements, better buffer pool efficiency, better query performance, and significantly lower maintenance costs.",
"title": ""
},
{
"docid": "d8ab6b84d093ae9eaa4735cda3a98728",
"text": "This paper presents a high-efficiency GaN Doherty power amplifier (PA) with 100-MHz instantaneous bandwidth for 3.5-GHz long-term-evolution (LTE)-advanced application. A modified load modulation network, employing an enlarged peaking amplifier to carrier amplifier power ratio and moderately increased load impedance of the carrier amplifier, is proposed for enhancing efficiency and achieving improved load modulation. To increase the power ratio and alleviate the influence of slight impedance mismatch, a proposed load impedance strategy and corresponding stepped-impedance matching network are adopted for high-efficiency and wideband operation. By tuning the carrier offset line, the inconsistency of efficiency, gain, and output power in the operation band can be alleviated. Measurement results show that the Doherty PA has a drain efficiency of approximately 40% with gain fluctuation less than 0.5 dB at 9-dB back-off power, and maximum efficiency of about 60% at saturation in the signal band of 3.4-3.5 GHz. By using the digital pre-distortion (DPD) technique, the Doherty PA achieves adjacent channel leakage ratio of about -48 dBc at an average output power of 40.4 dBm with efficiency of 42.5%, when driven by 100-MHz LTE-advanced signal. To the best of the authors' knowledge, this is the first high-performance result of linearization using conventional DPD technique with 100-MHz bandwidth signals for the GaN Doherty PA at 3.5-GHz frequency band thus far.",
"title": ""
},
{
"docid": "00b0d7caab0f6cf03356b70c4970532c",
"text": "The objective of this work is to provide a set of most significant content descriptive feature parameters to identify and classify the kidney disorders with ultrasound scan. The ultrasound images are initially pre-processed to preserve the pixels of interest prior to feature extraction. In total 28 features are extracted, the analysis of features value shows that 13 features are highly significant in discrimination. This resultant feature vector is used to train the multilayer back propagation network. The network is tested with the unknown samples. The outcome of multi-layer back propagation network is verified with medical experts and this confirms classification efficiency of 90.47%, 86.66%, and 85.71% for the classes considered respectively. The study shows that feature extraction after pre-processing followed by ANN based classification significantly enhance objective diagnosis and provides the possibility of developing computer-aided diagnosis system",
"title": ""
},
{
"docid": "e29c6d0c4d5b82d7e968ab48d076a7ba",
"text": "In recent years, a large number of researchers are endeavoring to develop wireless sensing and related applications as Wi-Fi devices become ubiquitous. As a significant research branch, gesture recognition has become one of the research hotspots. In this paper, we propose WiCatch, a novel device free gesture recognition system which utilizes the channel state information to recognize the motion of hands. First of all, with the aim of catching the weak signals reflected from hands, a novel data fusion-based interference elimination algorithm is proposed to diminish the interference caused by signals reflected from stationary objects and the direct signal from transmitter to receiver. Second, the system catches the signals reflected from moving hands and rebuilds the motion locus of the gesture by constructing the virtual antenna array based on signal samples in time domain. Finally, we adopt support vector machines to complete the classification. The extensive experimental results demonstrate that the WiCatch can achieves a recognition accuracy over 0.96. Furthermore, the WiCatch can be applied to two-hand gesture recognition and reach a recognition accuracy of 0.95.",
"title": ""
},
{
"docid": "cd286f4dfd11ee4585436d34bb756867",
"text": "BACKGROUND\nTargeted therapies have markedly changed the treatment of cancer over the past 10 years. However, almost all tumors acquire resistance to systemic treatment as a result of tumor heterogeneity, clonal evolution, and selection. Although genotyping is the most currently used method for categorizing tumors for clinical decisions, tumor tissues provide only a snapshot, or are often difficult to obtain. To overcome these issues, methods are needed for a rapid, cost-effective, and noninvasive identification of biomarkers at various time points during the course of disease. Because cell-free circulating tumor DNA (ctDNA) is a potential surrogate for the entire tumor genome, the use of ctDNA as a liquid biopsy may help to obtain the genetic follow-up data that are urgently needed.\n\n\nCONTENT\nThis review includes recent studies exploring the diagnostic, prognostic, and predictive potential of ctDNA as a liquid biopsy in cancer. In addition, it covers biological and technical aspects, including recent advances in the analytical sensitivity and accuracy of DNA analysis as well as hurdles that have to be overcome before implementation into clinical routine.\n\n\nSUMMARY\nAlthough the analysis of ctDNA is a promising area, and despite all efforts to develop suitable tools for a comprehensive analysis of tumor genomes from plasma DNA, the liquid biopsy is not yet routinely used as a clinical application. Harmonization of preanalytical and analytical procedures is needed to provide clinical standards to validate the liquid biopsy as a clinical biomarker in well-designed and sufficiently powered multicenter studies.",
"title": ""
},
{
"docid": "a0fe4a04b2fd17f7df86cd2768fdf80c",
"text": "Line labelling has been used to determine whether a two-dimensional (2D) line drawing object is a possible or impossible representation of a three-dimensional (3D) solid object. However, the results are not sufficiently robust because the existing line labelling methods do not have any validation method to verify their own result. In this research paper, the concept of graph colouring is applied to a validation technique for a labelled 2D line drawing. As a result, a graph colouring algorithm for validating labelled 2D line drawings is presented. A high-level programming language, MATLAB R2009a, and two primitive 2D line drawing classes, prism and pyramid are used to show how the algorithms can be implemented. The proposed algorithm also shows that the minimum number of colours needed to colour the labelled 2D line drawing object is equal to 3 for prisms and 1 n − for pyramids, where n is the number of vertices (junctions) in the pyramid objects.",
"title": ""
},
{
"docid": "759ceec0060ffb1a9da5d04e2ee1cb50",
"text": "Advanced computing systems embed spintronic devices to improve the leakage performance of conventional CMOS systems. High speed, low power, and infinite endurance are important properties of magnetic tunnel junction (MTJ), a spintronic device, which assures its use in memories and logic circuits. This paper presents a PentaMTJ-based logic gate, which provides easy cascading, self-referencing, less voltage headroom problem in precharge sense amplifier and low area overhead contrary to existing MTJ-based gates. PentaMTJ is used here because it provides guaranteed disturbance free reading and increased tolerance to process variations along with compatibility with CMOS process. The logic gate is validated by simulation at the 45-nm technology node using a VerilogA model of the PentaMTJ.",
"title": ""
},
{
"docid": "06044ef2950f169eba39687cd3e723c1",
"text": "Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is neovascularisation, the growth of abnormal new vessels. This paper describes an automated method for the detection of new vessels in retinal images. Two vessel segmentation approaches are applied, using the standard line operator and a novel modified line operator. The latter is designed to reduce false responses to non-vessel edges. Both generated binary vessel maps hold vital information which must be processed separately. This is achieved with a dual classification system. Local morphology features are measured from each binary vessel map to produce two separate feature sets. Independent classification is performed for each feature set using a support vector machine (SVM) classifier. The system then combines these individual classification outcomes to produce a final decision. Sensitivity and specificity results using a dataset of 60 images are 0.862 and 0.944 respectively on a per patch basis and 1.00 and 0.90 respectively on a per image basis.",
"title": ""
},
{
"docid": "3292b81ad4fe83c2aa634766f9751318",
"text": "Artificial bee colony (ABC) is a swarm optimization algorithmwhich has been shown to be more effective than the other population based algorithms such as genetic algorithm (GA), particle swarm optimization (PSO) and ant colony optimization (ACO). Since it was invented, it has received significant interest from researchers studying in different fields because of having fewer control parameters, high global search ability and ease of implementation. Although ABC is good at exploration, the main drawback is its poor exploitation which results in an issue on convergence speed in some cases. Inspired by particle swarm optimization, we propose a modified ABC algorithm called VABC, to overcome this insufficiency by applying a new search equation in the onlooker phase, which uses the PSO search strategy to guide the search for candidate solutions. The experimental results tested on numerical benchmark functions show that the VABC has good performance compared with PSO and ABC. Moreover, the performance of the proposed algorithm is also compared with those of state-of-the-art hybrid methods and the results demonstrate that the proposed method has a higher convergence speed and better search ability for almost all functions. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9294034c69854d75579299ff8572166c",
"text": "Machine learning has already been exploited as a useful tool for detecting malicious executable files. Data retrieved from malware samples, such as header fields, instruction sequences, or even raw bytes, is leveraged to learn models that discriminate between benign and malicious software. However, it has also been shown that machine learning and deep neural networks can be fooled by evasion attacks (also known as adversarial examples), i.e., small changes to the input data that cause misclassification at test time. In this work, we investigate the vulnerability of malware detection methods that use deep networks to learn from raw bytes. We propose a gradient-based attack that is capable of evading a recently-proposed deep network suited to this purpose by only changing few specific bytes at the end of each mal ware sample, while preserving its intrusive functionality. Promising results show that our adversarial malware binaries evade the targeted network with high probability, even though less than 1 % of their bytes are modified.",
"title": ""
},
{
"docid": "d468dd481b4c48465fedb89b12f8ef0e",
"text": "Owing to its exceptional ability to efficiently promote plant growth, protection and stress tolerance, a mycorrhiza like endophytic Agaricomycetes fungus Piriformospora indica has received a great attention over the last few decades. P. indica is an axenically cultiviable fungus which exhibits its versatility for colonizing/hosting a broad range of plant species through directly manipulating plant hormone-signaling pathway during the course of mutualism. P. indica-root colonization leads to a better plant performance in all respect, including enhanced root proliferation by indole-3-acetic acid production which in turn results into better nutrient-acquisition and subsequently to improved crop growth and productivity. Additionally, P. indica can induce both local and systemic resistance to fungal and viral plant diseases through signal transduction. P. indica-mediated stimulation in antioxidant defense system components and expressing stress-related genes can confer crop/plant stress tolerance. Therefore, P. indica can biotize micropropagated plantlets and also help these plants to overcome transplantation shock. Nevertheless, it can also be involved in a more complex symbiotic relationship, such as tripartite symbiosis and can enhance population dynamic of plant growth promoting rhizobacteria. In brief, P. indica can be utilized as a plant promoter, bio-fertilizer, bioprotector, bioregulator, and biotization agent. The outcome of the recent literature appraised herein will help us to understand the physiological and molecular bases of mechanisms underlying P. indica-crop plant mutual relationship. Together, the discussion will be functional to comprehend the usefulness of crop plant-P. indica association in both achieving new insights into crop protection/improvement as well as in sustainable agriculture production.",
"title": ""
}
] |
scidocsrr
|
523726f25660be73f8a546e519941e57
|
CASAS: A Smart Home in a Box
|
[
{
"docid": "099ced7b083a6610305587a17392cb5d",
"text": "In activity recognition, one major challenge is how to reduce the labeling effort one needs to make when recognizing a new set of activities. In this paper, we analyze the possibility of transferring knowledge from the available labeled data on a set of existing activities in one domain to help recognize the activities in another different but related domain. We found that such a knowledge transfer process is possible, provided that the recognized activities from the two domains are related in some way. We develop a bridge between the activities in two domains by learning a similarity function via Web search, under the condition that the sensor readings are from the same feature space. Based on the learned similarity measure, our algorithm interprets the data from the source domain as ‘‘pseudo training data’’ in the target domain with different confidence levels, which are in turn fed into supervised learning algorithms for training the classifier. We show that after using this transfer learning approach, the performance of activity recognition in the new domain is increased several fold as compared to when no knowledge transfer is done. Our algorithm is evaluated on several real-world datasets to demonstrate its effectiveness. In the experiments, our algorithm could achieve a 60% accuracy most of the time with no or very few training data in the target domain, which easily outperforms the supervised learning methods. © 2010 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "c5d7a62efc3a50caf28ff6d725a65227",
"text": "This paper examines and reviews research methods applied within the field of mobile human-computer interaction. The purpose is to provide a snapshot of current practice for studying mobile HCI to identify shortcomings in the way research is conducted and to propose opportunities for future approaches. 102 publications on mobile human-computer interaction research were categorized in a matrix relating their research methods and purpose. The matrix revealed a number of significant trends with a clear bias towards building systems and evaluating them only in laboratory settings, if at all. Also, gaps in the distribution of research approaches and purposes were identified; action research, case studies, field studies and basic research being applied very infrequently. Consequently, we argue that the bias towards building systems and a lack of research for understanding design and use limits the development of cumulative knowledge on mobile human computer interaction. This in turn inhibits future development of the research field as a whole.",
"title": ""
},
{
"docid": "a7c6c8cb92f8cb35c3826b5dc5a86f03",
"text": "Software Defined Satellite Network (SDSN) is a novel framework which brings Software Defined Network (SDN) technologies in the satellite networks. It has great potential to achieve effective and flexible management in the satellite networks. However, the frequent handovers will lead to an increase in the flow table size in SDSN. Due to the limited flow table space, a lot of flows will be dropped if the flow table is full during the handover. This is a burning issue to be solved for mobility management in SDSN. In this paper, we propose a heuristic Timeout Strategy-based Mobility Management algorithm for SDSN, named TSMM. TSMM aims to reduce the drop-flows during handover by considering two key points, the limited flow table space and satellite link handover. We implement TSMM mechanism and conduct contrast experiments. The experimental results verify the good performance in terms of transmission quality, an 8.2%-9.9% decrease in drop-flow rate, and a 6.9%–11.18% decrease in flow table size during the handover.",
"title": ""
},
{
"docid": "664a2f7213b27087970305544d83d78f",
"text": "We give a new construction of overconvergent modular forms of arbitrary weights, defining them in terms of functions on certain affinoid subsets of Scholze’s infinite-level modular curve. These affinoid subsets, and a certain canonical coordinate on them, play a role in our construction which is strongly analogous with the role of the upper half-plane and its coordinate ‘z’ in the classical analytic theory of modular forms. As one application of these ideas, we define and study an overconvergent Eichler-Shimura map in the context of compact Shimura curves over Q, proving stronger analogues of results of Andreatta-Iovita-Stevens.",
"title": ""
},
{
"docid": "f6669d0b53dd0ca789219874d35bf14e",
"text": "Saliva in the mouth is a biofluid produced mainly by three pairs of major salivary glands--the submandibular, parotid and sublingual glands--along with secretions from many minor submucosal salivary glands. Salivary gland secretion is a nerve-mediated reflex and the volume of saliva secreted is dependent on the intensity and type of taste and on chemosensory, masticatory or tactile stimulation. Long periods of low (resting or unstimulated) flow are broken by short periods of high flow, which is stimulated by taste and mastication. The nerve-mediated salivary reflex is modulated by nerve signals from other centers in the central nervous system, which is most obvious as hyposalivation at times of anxiety. An example of other neurohormonal influences on the salivary reflex is the circadian rhythm, which affects salivary flow and ionic composition. Cholinergic parasympathetic and adrenergic sympathetic autonomic nerves evoke salivary secretion, signaling through muscarinic M3 and adrenoceptors on salivary acinar cells and leading to secretion of fluid and salivary proteins. Saliva gland acinar cells are chloride and sodium secreting, and the isotonic fluid produced is rendered hypotonic by salivary gland duct cells as it flows to the mouth. The major proteins present in saliva are secreted by salivary glands, creating viscoelasticity and enabling the coating of oral surfaces with saliva. Salivary films are essential for maintaining oral health and regulating the oral microbiome. Saliva in the mouth contains a range of validated and potential disease biomarkers derived from epithelial cells, neutrophils, the microbiome, gingival crevicular fluid and serum. For example, cortisol levels are used in the assessment of stress, matrix metalloproteinases-8 and -9 appear to be promising markers of caries and periodontal disease, and a panel of mRNA and proteins has been proposed as a marker of oral squamous cell carcinoma. Understanding the mechanisms by which components enter saliva is an important aspect of validating their use as biomarkers of health and disease.",
"title": ""
},
{
"docid": "cb4855d39d21bd525bd929b551dede7e",
"text": "There is an established and growing body of evidence highlighting that music can influence behavior across a range of diverse domains (Miell, MacDonald, & Hargreaves 2005). One area of interest is the monitoring of \"internal timing mechanisms\", with features such as tempo, liking, perceived affective nature and everyday listening contexts implicated as important (North & Hargreaves, 2008). The current study addresses these issues by comparing the effects of self-selected and experimenter-selected music (fast and slow) on actual and perceived performance of a driving game activity. Seventy participants completed three laps of a driving game in seven sound conditions: (1) silence; (2) car sounds; (3) car sounds with self-selected music, and car sounds with experimenter-selected music; (4) high-arousal (70 bpm); (5) high-arousal (130 bpm); (6) low-arousal (70 bpm); and (7) low-arousal (130 bpm) music. Six performance measures (time, accuracy, speed, and retrospective perception of these), and four experience measures (perceived distraction, liking, appropriateness and enjoyment) were taken. Exposure to self-selected music resulted in overestimation of elapsed time and inaccuracy, while benefiting accuracy and experience. In contrast, exposure to experimenter-selected music resulted in poorest performance and experience. Increasing the tempo of experimenter-selected music resulted in faster performance and increased inaccuracy for high-arousal music, but did not impact experience. It is suggested that personal meaning and subjective associations connected to self-selected music promoted increased engagement with the activity, overriding detrimental effects attributed to unfamiliar, less liked and less appropriate experimenter-selected music.",
"title": ""
},
{
"docid": "0801ef431c6e4dab6158029262a3bf82",
"text": "A hallmark of human intelligence is the ability to ask rich, creative, and revealing questions. Here we introduce a cognitive model capable of constructing humanlike questions. Our approach treats questions as formal programs that, when executed on the state of the world, output an answer. The model specifies a probability distribution over a complex, compositional space of programs, favoring concise programs that help the agent learn in the current context. We evaluate our approach by modeling the types of open-ended questions generated by humans who were attempting to learn about an ambiguous situation in a game. We find that our model predicts what questions people will ask, and can creatively produce novel questions that were not present in the training set. In addition, we compare a number of model variants, finding that both question informativeness and complexity are important for producing human-like questions.",
"title": ""
},
{
"docid": "a3d10348d5f6e51fefb3f642098be32e",
"text": "We propose a Convolutional Neural Network (CNN) based algorithm – StuffNet – for object detection. In addition to the standard convolutional features trained for region proposal and object detection [33], StuffNet uses convolutional features trained for segmentation of objects and 'stuff' (amorphous categories such as ground and water). Through experiments on Pascal VOC 2010, we show the importance of features learnt from stuff segmentation for improving object detection performance. StuffNet improves performance from 18.8% mAP to 23.9% mAP for small objects. We also devise a method to train StuffNet on datasets that do not have stuff segmentation labels. Through experiments on Pascal VOC 2007 and 2012, we demonstrate the effectiveness of this method and show that StuffNet also significantly improves object detection performance on such datasets.",
"title": ""
},
{
"docid": "36434d9a36dceb2b3c838f9d8d3ba56f",
"text": "Long-duration missions challenge ground robot systems with respect to energy storage and efficient conversion to power on demand. Ground robot systems can contain multiple power sources such as fuel cell, battery and/or ultra capacitor. This paper presents a hybrid systems framework for collectively modeling the dynamics and switching between these different power components. The hybrid system allows modeling power source on/off switching and different regimes of operation, together with continuous parameters such as state of charge, temperature, and power output. We apply this modeling framework to a fuel cell/battery power system applicable to unmanned ground vehicles such as Packbot or TALON. A simulation comparison of different control strategies is presented. These strategies are compared based on maximizing energy efficiency and meeting thermal constraints.",
"title": ""
},
{
"docid": "b7f53aa4b1e68f05bee2205dd55b975a",
"text": "We study the problem of off-policy evaluation (OPE) in reinforcement learning (RL), where the goal is to estimate the performance of a policy from the data generated by another policy(ies). In particular, we focus on the doubly robust (DR) estimators that consist of an importance sampling (IS) component and a performance model, and utilize the low (or zero) bias of IS and low variance of the model at the same time. Although the accuracy of the model has a huge impact on the overall performance of DR, most of the work on using the DR estimators in OPE has been focused on improving the IS part, and not much on how to learn the model. In this paper, we propose alternative DR estimators, called more robust doubly robust (MRDR), that learn the model parameter by minimizing the variance of the DR estimator. We first present a formulation for learning the DR model in RL. We then derive formulas for the variance of the DR estimator in both contextual bandits and RL, such that their gradients w.r.t. the model parameters can be estimated from the samples, and propose methods to efficiently minimize the variance. We prove that the MRDR estimators are strongly consistent and asymptotically optimal. Finally, we evaluate MRDR in bandits and RL benchmark problems, and compare its performance with the existing methods.",
"title": ""
},
{
"docid": "095dbdc1ac804487235cdd0aeffe8233",
"text": "Sentiment analysis is the task of identifying whether the opinion expressed in a document is positive or negative about a given topic. Unfortunately, many of the potential applications of sentiment analysis are currently infeasible due to the huge number of features found in standard corpora. In this paper we systematically evaluate a range of feature selectors and feature weights with both Naı̈ve Bayes and Support Vector Machine classifiers. This includes the introduction of two new feature selection methods and three new feature weighting methods. Our results show that it is possible to maintain a state-of-the art classification accuracy of 87.15% while using less than 36% of the features.",
"title": ""
},
{
"docid": "3c55948ba5466b04c7b3c1005d4f749f",
"text": "Energy harvesting is a key technique that can be used to overcome the barriers that prevent the real world deployment of wireless sensor networks (WSNs). In particular, solar energy harvesting has been commonly used to overcome this barrier. However, it should be noted that WSNs operating on solar power suffer form energy shortage during nighttimes. Therefore, to solve this problem, we exploit the use of TV broadcasts airwaves as energy sources to power wireless sensor nodes. We measured the output of a rectenna continuously for 7 days; from the results of this measurement, we showed that Radio Frequency (RF) energy can always be harvested. We developed an RF energy harvesting WSN prototype to show the effectiveness of RF energy harvesting for the usage of a WSN. We also proposed a duty cycle determination method for our system, and verified the validity of this method by implementing our system. This RF energy harvesting method is effective in a long period measurement application that do not require high power consumption.",
"title": ""
},
{
"docid": "3013a8b320cbbfc1ac8fed7c06d6996f",
"text": "Security and privacy are among the most pressing concerns that have evolved with the Internet. As networks expanded and became more open, security practices shifted to ensure protection of the ever growing Internet, its users, and data. Today, the Internet of Things (IoT) is emerging as a new type of network that connects everything to everyone, everywhere. Consequently, the margin of tolerance for security and privacy becomes narrower because a breach may lead to large-scale irreversible damage. One feature that helps alleviate the security concerns is authentication. While different authentication schemes are used in vertical network silos, a common identity and authentication scheme is needed to address the heterogeneity in IoT and to integrate the different protocols present in IoT. We propose in this paper an identity-based authentication scheme for heterogeneous IoT. The correctness of the proposed scheme is tested with the AVISPA tool and results showed that our scheme is immune to masquerade, man-in-the-middle, and replay attacks.",
"title": ""
},
{
"docid": "6a830c56c38d1b4998ad5bb5ff75472f",
"text": "With the advent of Web 2.0, the Web is acting as a platform which enables end-user content generation. As a major type of social media in Web 2.0, Web forums facilitate intensive interactions among participants. International Jihadist groups often use Web forums to promote violence and distribute propaganda materials. These Dark Web forums are heterogeneous and widely distributed. Therefore, how to access and analyze the forum messages and interactions among participants is becoming an issue. This paper presents a general framework for Web forum data integration. Specifically, a Web-based knowledge portal, the Dark Web Forums Portal, is built based on the framework. The portal incorporates the data collected from different international Jihadist forums and provides several important analysis functions, including forum browsing and searching (in single forum and across multiple forums), forum statistics analysis, multilingual translation, and social network visualization. Preliminary results of our user study show that the Dark Web Forums Portal helps users locate information quickly and effectively. Users found the forum statistics analysis, multilingual translation, and social network visualization functions of the portal to be particularly valuable.",
"title": ""
},
{
"docid": "db76ba085f43bc826f103c6dd4e2ebb5",
"text": "It has been shown that Chinese poems can be successfully generated by sequence-to-sequence neural models, particularly with the attention mechanism. A potential problem of this approach, however, is that neural models can only learn abstract rules, while poem generation is a highly creative process that involves not only rules but also innovations for which pure statistical models are not appropriate in principle. This work proposes a memory-augmented neural model for Chinese poem generation, where the neural model and the augmented memory work together to balance the requirements of linguistic accordance and aesthetic innovation, leading to innovative generations that are still rule-compliant. In addition, it is found that the memory mechanism provides interesting flexibility that can be used to generate poems with different styles.",
"title": ""
},
{
"docid": "81b9bc89940dfd93d4c10ad011ba6d68",
"text": "The emerging vehicular applications demand for a lot more computing and communication capacity to excel in their compute-intensive and latency-sensitive tasks. Fog computing, which focuses on moving computing resources to the edge of networks, complements cloud computing by solving the latency constraints and reducing ingress traffic to the cloud. This paper presents a visionary concept on vehicular fog computing that turns connected vehicles into mobile fog nodes and utilises mobility of vehicles for providing cost-effective and on-demand fog computing for vehicular applications. Besides system design, this paper also discusses the remained technical challenges.",
"title": ""
},
{
"docid": "da6778e5c07e3f933edcae9f07178050",
"text": "Forest fires are a major environmental issue, creating economical and ecological damage while endangering human lives. Fast detection is a key element for controlling such phenomenon. To achieve this, one alternative is to use automatic tools based on local sensors, such as provided by meteorological stations. In effect, meteorological conditions (e.g. temperature, wind) are known to influence forest fires and several fire indexes, such as the forest Fire Weather Index (FWI), use such data. In this work, we explore a Data Mining (DM) approach to predict the burned area of forest fires. Five different DM techniques, e.g. Support Vector Machines (SVM) and Random Forests, and four distinct feature selection setups (using spatial, temporal, FWI components and weather attributes), were tested on recent real-world data collected from the northeast region of Portugal. The best configuration uses a SVM and four meteorological inputs (i.e. temperature, relative humidity, rain and wind) and it is capable of predicting the burned area of small fires, which are more frequent. Such knowledge is particularly useful for improving firefighting resource management (e.g. prioritizing targets for air tankers and ground crews).",
"title": ""
},
{
"docid": "aa0bd00ca5240e462e49df3d1bd3487e",
"text": "The choice of the CMOS logic to be used for implementation of a given specification is usually dependent on the optimization and the performance constraints that the finished chip is required to meet. This paper presents a comparative study of CMOS static and dynamic logic. Effect of voltage variation on power and delay of static and dynamic CMOS logic styles studied. The performance of static logic is better than dynamic logic for designing basic logic gates like NAND and NOR. But the dynamic casecode voltage switch logic (CVSL) achieves better performance. 75% lesser power delay product is achieved than that of static CVSL. However, it observed that dynamic logic performance is better for higher fan in and complex logic circuits.",
"title": ""
},
{
"docid": "34fa7e6d5d4f1ab124e3f12462e92805",
"text": "Natural image modeling plays a key role in many vision problems such as image denoising. Image priors are widely used to regularize the denoising process, which is an ill-posed inverse problem. One category of denoising methods exploit the priors (e.g., TV, sparsity) learned from external clean images to reconstruct the given noisy image, while another category of methods exploit the internal prior (e.g., self-similarity) to reconstruct the latent image. Though the internal prior based methods have achieved impressive denoising results, the improvement of visual quality will become very difficult with the increase of noise level. In this paper, we propose to exploit image external patch prior and internal self-similarity prior jointly, and develop an external patch prior guided internal clustering algorithm for image denoising. It is known that natural image patches form multiple subspaces. By utilizing Gaussian mixture models (GMMs) learning, image similar patches can be clustered and the subspaces can be learned. The learned GMMs from clean images are then used to guide the clustering of noisy-patches of the input noisy images, followed by a low-rank approximation process to estimate the latent subspace for image recovery. Numerical experiments show that the proposed method outperforms many state-of-the-art denoising algorithms such as BM3D and WNNM.",
"title": ""
},
{
"docid": "768ed187f94163727afd011817a306c6",
"text": "Although interest regarding the role of dispositional affect in job behaviors has surged in recent years, the true magnitude of affectivity's influence remains unknown. To address this issue, the authors conducted a qualitative and quantitative review of the relationships between positive and negative affectivity (PA and NA, respectively) and various performance dimensions. A series of meta-analyses based on 57 primary studies indicated that PA and NA predicted task performance in the hypothesized directions and that the relationships were strongest for subjectively rated versus objectively rated performance. In addition, PA was related to organizational citizenship behaviors but not withdrawal behaviors, and NA was related to organizational citizenship behaviors, withdrawal behaviors, counterproductive work behaviors, and occupational injury. Mediational analyses revealed that affect operated through different mechanisms in influencing the various performance dimensions. Regression analyses documented that PA and NA uniquely predicted task performance but that extraversion and neuroticism did not, when the four were considered simultaneously. Discussion focuses on the theoretical and practical implications of these findings. (PsycINFO Database Record (c) 2009 APA, all rights reserved).",
"title": ""
},
{
"docid": "ae9bc4e21d6e2524f09e5f5fbb9e4251",
"text": "Arvaniti, Ladd and Mennen (1998) reported a phenomenon of ‘segmental anchoring’: the beginning and end of a linguistically significant pitch movement are anchored to specific locations in segmental structure, which means that the slope and duration of the pitch movement vary according to the segmental material with which it is associated. This finding has since been replicated and extended in several languages. One possible analysis is that autosegmental tones corresponding to the beginning and end of the pitch movement show secondary association with points in structure; however, problems with this analysis have led some authors to cast doubt on the ‘hypothesis’ of segmental anchoring. I argue here that segmental anchoring is not a hypothesis expressed in terms of autosegmental phonology, but rather an empirical phonetic finding. The difficulty of describing segmental anchoring as secondary association does not disprove the ‘hypothesis’, but shows the error of using a symbolic phonological device (secondary association) to represent gradient differences of phonetic detail that should be expressed quantitatively. I propose that treating pitch movements as gestures (in the sense of Articulatory Phonology) goes some way to resolving some of the theoretical questions raised by segmental anchoring, but suggest that pitch gestures have a variety of ‘domains’ which are in need of empirical study before we can successfully integrate segmental anchoring into our understanding of speech production.",
"title": ""
}
] |
scidocsrr
|
9b6d11853f39b10b4762ff718a3dbb0b
|
Outsource Photo Sharing and Searching for Mobile Devices With Privacy Protection
|
[
{
"docid": "945cf1645df24629842c5e341c3822e7",
"text": "Cloud computing economically enables the paradigm of data service outsourcing. However, to protect data privacy, sensitive cloud data have to be encrypted before outsourced to the commercial public cloud, which makes effective data utilization service a very challenging task. Although traditional searchable encryption techniques allow users to securely search over encrypted data through keywords, they support only Boolean search and are not yet sufficient to meet the effective data utilization need that is inherently demanded by large number of users and huge amount of data files in cloud. In this paper, we define and solve the problem of secure ranked keyword search over encrypted cloud data. Ranked search greatly enhances system usability by enabling search result relevance ranking instead of sending undifferentiated results, and further ensures the file retrieval accuracy. Specifically, we explore the statistical measure approach, i.e., relevance score, from information retrieval to build a secure searchable index, and develop a one-to-many order-preserving mapping technique to properly protect those sensitive score information. The resulting design is able to facilitate efficient server-side ranking without losing keyword privacy. Thorough analysis shows that our proposed solution enjoys “as-strong-as-possible” security guarantee compared to previous searchable encryption schemes, while correctly realizing the goal of ranked keyword search. Extensive experimental results demonstrate the efficiency of the proposed solution.",
"title": ""
},
{
"docid": "6d766690805f74495c5b29b889320908",
"text": "With cloud storage services, it is commonplace for data to be not only stored in the cloud, but also shared across multiple users. However, public auditing for such shared data - while preserving identity privacy - remains to be an open challenge. In this paper, we propose the first privacy-preserving mechanism that allows public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification information needed to audit the integrity of shared data. With our mechanism, the identity of the signer on each block in shared data is kept private from a third party auditor (TPA), who is still able to verify the integrity of shared data without retrieving the entire file. Our experimental results demonstrate the effectiveness and efficiency of our proposed mechanism when auditing shared data.",
"title": ""
}
] |
[
{
"docid": "5025766e66589289ccc31e60ca363842",
"text": "The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.",
"title": ""
},
{
"docid": "089d74cd4c98cf9695a54ea068c7d957",
"text": "Wireless Body Area Networks (WBANs) have developed as an effective solution for a wide range of healthcare, military and sports applications. Most of the proposed works studied efficient data collection from individual and traditional WBANs. Cloud computing is a new computing model that is continuously evolving and spreading. This paper presents a novel cloudlet-based efficient data collection system in WBANs. The goal is to have a large scale of monitored data of WBANs to be available at the end user or to the service provider in reliable manner. A prototype of WBANs, including Virtual Machine (VM) and Virtualized Cloudlet (VC) has been proposed for simulation characterizing efficient data collection in WBANs. Using the prototype system, we provide a scalable storage and processing infrastructure for large scale WBANs system. This infrastructure will be efficiently able to handle the large size of data generated by the WBANs system, by storing these data and performing analysis operations on it. The proposed model is fully supporting for WBANs system mobility using cost effective communication technologies of WiFi and cellular which are supported by WBANs and VC systems. This is in contrast of many of available mHealth solutions that is limited for high cost communication technology, such as 3G and LTE. Performance of the proposed prototype is evaluated via an extended version of CloudSim simulator. It is shown that the average power consumption and delay of the collected data is tremendously decreased by increasing the number of VMs and VCs. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5a5c71b56cf4aa6edff8ecc57298a337",
"text": "The learning process of a multilayer perceptron requires the optimization of an error function E(y,t) comparing the predicted output, y, and the observed target, t. We review some usual error functions, analyze their mathematical properties for data classification purposes, and introduce a new one, E(Exp), inspired by the Z-EDM algorithm that we have recently proposed. An important property of E(Exp) is its ability to emulate the behavior of other error functions by the sole adjustment of a real-valued parameter. In other words, E(Exp) is a sort of generalized error function embodying complementary features of other functions. The experimental results show that the flexibility of the new, generalized, error function allows one to obtain the best results achievable with the other functions with a performance improvement in some cases.",
"title": ""
},
{
"docid": "d82c1a529aa8e059834bc487fcfebd24",
"text": "Web attacks are nowadays one of the major threats on the Internet, and several studies have analyzed them, providing details on how they are performed and how they spread. However, no study seems to have sufficiently analyzed the typical behavior of an attacker after a website has been",
"title": ""
},
{
"docid": "541c8e3826545980b1b4e2b41ec1f976",
"text": "The appearance of the so-called recommender systems has led to the possibility of reducing the information overload experienced by individuals searching among online resources. One of the areas of application of recommender systems is the online tourism domain where sites like TripAdvisor allow people to post reviews of various hotels to help others make a good choice when planning their trip. As the number of such reviews grows in size every day, clearly it is impractical for the individual to go through all of them. We propose the TWIN (\"Tell me What I Need\") Personality-based Recommender System that analyzes the textual content of the reviews and estimates the personality of the user according to the Big Five model to suggest the reviews written by \"twin-minded\" people. In this paper we compare a number of algorithms to select the better option for personality estimation in the task of user profile construction.",
"title": ""
},
{
"docid": "d9d754d6ef106b4c421b5a4022cd3c9a",
"text": "This paper presents the research agenda that has been proposed to develop an integrated model to explain technology adoption of SMEs in Malaysia. SMEs form over 90% of all business entities in Malaysia and they have been contributing to the development of the nation. Technology adoption has been a thorn issue among SMEs as they require big outlay which might not be available to the SMEs. Although resource has been an issue among SMEs they cannot lie low and ignore the technological advancements that are taking place at a rapid pace. With that in mind this paper proposes a model to explain the technology adoption issue among SMEs. Keywords-Technology adoption, integrated model, Small and Medium Enterprises (SME), Malaysia",
"title": ""
},
{
"docid": "c20e31ddee311a1703fb5ff3687a1215",
"text": "The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges in point cloud classification. Most works in deep learning have achieved a great success on regular input representations, but they are hard to be directly applied to classify point clouds due to the irregularity and inhomogeneity of the data. In this paper, a deep neural network with spatial pooling (DNNSP) is proposed to classify large-scale point clouds without rasterization. The DNNSP first obtains the point-based feature descriptors of all points in each point cluster. The distance minimum spanning tree-based pooling is then applied in the point feature representation to describe the spatial information among the points in the point clusters. The max pooling is next employed to aggregate the point-based features into the cluster-based features. To assure the DNNSP is invariant to the point permutation and sizes of the point clusters, the point-based feature representation is determined by the multilayer perception (MLP) and the weight sharing for each point is retained, which means that the weight of each point in the same layer is the same. In this way, the DNNSP can learn the features of points scaled from the entire regions to the centers of the point clusters, which makes the point cluster-based feature representations robust and discriminative. Finally, the cluster-based features are input to another MLP for point cloud classification. We have evaluated qualitatively and quantitatively the proposed method using several airborne laser scanning and terrestrial laser scanning point cloud data sets. The experimental results have demonstrated the effectiveness of our method in improving classification accuracy.",
"title": ""
},
{
"docid": "7c6a40af29c1bd8af4b9031ef95a92cf",
"text": "A broadband radial waveguide power amplifier has been designed and fabricated using a spatial power dividing/combining technique. A simple electromagnetic model of this power-dividing/combining structure has been developed. Analysis based on equivalent circuits gives the design formula for perfect power-dividing/ combining circuits. The measured small-signal gain of the eight-device power amplifier is 12 –16.5 dB over a broadband from 7 to 15 GHz. The measured maximum output power at 1-dB compression is 28.6 dBm at 10 GHz, with a power-combining efficiency of about 91%. Furthermore, the performance degradation of this power amplifier because of device failures has also been measured.",
"title": ""
},
{
"docid": "8553a5d062f48f47de899cc5d23e2059",
"text": "A systems approach to studying biology uses a variety of mathematical, computational, and engineering tools to holistically understand and model properties of cells, tissues, and organisms. Building from early biochemical, genetic, and physiological studies, systems biology became established through the development of genome-wide methods, high-throughput procedures, modern computational processing power, and bioinformatics. Here, we highlight a variety of systems approaches to the study of biological rhythms that occur with a 24-h period-circadian rhythms. We review how systems methods have helped to elucidate complex behaviors of the circadian clock including temperature compensation, rhythmicity, and robustness. Finally, we explain the contribution of systems biology to the transcription-translation feedback loop and posttranslational oscillator models of circadian rhythms and describe new technologies and \"-omics\" approaches to understand circadian timekeeping and neurophysiology.",
"title": ""
},
{
"docid": "ab1b9b18163d3e732a2f8fc8b4e04ab1",
"text": "We measure the knowledge flows between countries by analysing publication and citation data, arguing that not all citations are equally important. Therefore, in contrast to existing techniques that utilize absolute citation counts to quantify knowledge flows between different entities, our model employs a citation context analysis technique, using a machine-learning approach to distinguish between important and non-important citations. We use 14 novel features (including context-based, cue words-based and text-based) to train a Support Vector Machine (SVM) and Random Forest classifier on an annotated dataset of 20,527 publications downloaded from the Association for Computational Linguistics anthology (http://allenai.org/data.html). Our machine-learning models outperform existing state-of-the-art citation context approaches, with the SVM model reaching up to 61% and the Random Forest model up to a very encouraging 90% Precision–Recall Area Under the Curve, with 10-fold cross-validation. Finally, we present a case study to explain our deployed method for datasets of PLoS ONE full-text publications in the field of Computer and Information Sciences. Our results show that a significant volume of knowledge flows from the United States, based on important citations, are consumed by the international scientific community. Of the total knowledge flow from China, we find a relatively smaller proportion (only 4.11%) falling into the category of knowledge flow based on important citations, while The Netherlands and Germany show the highest proportions of knowledge flows based on important citations, at 9.06 and 7.35% respectively. Among the institutions, interestingly, the findings show that at the University of Malaya more than 10% of the knowledge produced falls into the category of important. We believe that such analyses are helpful to understand the dynamics of the relevant knowledge flows across nations and institutions.",
"title": ""
},
{
"docid": "251c2f5ebc0d2c784b01802f8cd25e89",
"text": "Reduced frequency range in vowel production is a well documented speech characteristic of individuals with psychological and neurological disorders. Affective disorders such as depression and post-traumatic stress disorder (PTSD) are known to influence motor control and in particular speech production. The assessment and documentation of reduced vowel space and reduced expressivity often either rely on subjective assessments or on analysis of speech under constrained laboratory conditions (e.g. sustained vowel production, reading tasks). These constraints render the analysis of such measures expensive and impractical. Within this work, we investigate an automatic unsupervised machine learning based approach to assess a speaker's vowel space. Our experiments are based on recordings of 253 individuals. Symptoms of depression and PTSD are assessed using standard self-assessment questionnaires and their cut-off scores. The experiments show a significantly reduced vowel space in subjects that scored positively on the questionnaires. We show the measure's statistical robustness against varying demographics of individuals and articulation rate. The reduced vowel space for subjects with symptoms of depression can be explained by the common condition of psychomotor retardation influencing articulation and motor control. These findings could potentially support treatment of affective disorders, like depression and PTSD in the future.",
"title": ""
},
{
"docid": "d96c9204c552181e4d00ed961b18c665",
"text": "We present a new tool, named DART, for automatically testing software that combines three main techniques: (1) automated extraction of the interface of a program with its external environment using static source-code parsing; (2) automatic generation of a test driver for this interface that performs random testing to simulate the most general environment the program can operate in; and (3) dynamic analysis of how the program behaves under random testing and automatic generation of new test inputs to direct systematically the execution along alternative program paths. Together, these three techniques constitute Directed Automated Random Testing, or DART for short. The main strength of DART is thus that testing can be performed completely automatically on any program that compiles -- there is no need to write any test driver or harness code. During testing, DART detects standard errors such as program crashes, assertion violations, and non-termination. Preliminary experiments to unit test several examples of C programs are very encouraging.",
"title": ""
},
{
"docid": "be9ebd1cd6f51ed22ac04d5dd9d99202",
"text": "We present a new garbled circuit construction for two-party secure function evaluation (SFE). In our one-round protocol, XOR gates are evaluated “for free”, which results in the corresponding improvement over the best garbled circuit implementations (e.g. Fairplay [19]). We build permutation networks [26] and Universal Circuits (UC) [25] almost exclusively of XOR gates; this results in a factor of up to 4 improvement (in both computation and communication) of their SFE. We also improve integer addition and equality testing by factor of up to 2. We rely on the Random Oracle (RO) assumption. Our constructions are proven secure in the semi-honest model.",
"title": ""
},
{
"docid": "b3874f8390e284c119635e7619e7d952",
"text": "Since a vehicle logo is the clearest indicator of a vehicle manufacturer, most vehicle manufacturer recognition (VMR) methods are based on vehicle logo recognition. Logo recognition can be still a challenge due to difficulties in precisely segmenting the vehicle logo in an image and the requirement for robustness against various imaging situations simultaneously. In this paper, a convolutional neural network (CNN) system has been proposed for VMR that removes the requirement for precise logo detection and segmentation. In addition, an efficient pretraining strategy has been introduced to reduce the high computational cost of kernel training in CNN-based systems to enable improved real-world applications. A data set containing 11 500 logo images belonging to 10 manufacturers, with 10 000 for training and 1500 for testing, is generated and employed to assess the suitability of the proposed system. An average accuracy of 99.07% is obtained, demonstrating the high classification potential and robustness against various poor imaging situations.",
"title": ""
},
{
"docid": "14276adf4f5b3538f95cfd10902825ef",
"text": "Subband adaptive filtering (SAF) techniques play a prominent role in designing active noise control (ANC) systems. They reduce the computational complexity of ANC algorithms, particularly, when the acoustic noise is a broadband signal and the system models have long impulse responses. In the commonly used uniform-discrete Fourier transform (DFT)-modulated (UDFTM) filter banks, increasing the number of subbands decreases the computational burden but can introduce excessive distortion, degrading performance of the ANC system. In this paper, we propose a new UDFTM-based adaptive subband filtering method that alleviates the degrading effects of the delay and side-lobe distortion introduced by the prototype filter on the system performance. The delay in filter bank is reduced by prototype filter design and the side-lobe distortion is compensated for by oversampling and appropriate stacking of subband weights. Experimental results show the improvement of performance and computational complexity of the proposed method in comparison to two commonly used subband and block adaptive filtering algorithms.",
"title": ""
},
{
"docid": "34e24b3cc63b1e774c6b6bf33a14ad9a",
"text": "As computers are becoming more powerful, the critical bottleneck in their use is often in the user interface, not in the computer processing. Research in human-computer interaction that seeks to increase the communication bandwidth between the user and the machine by using input from the user's eye movement is discussed. The speed potential, processing stages, interaction techniques, and problems associated with these eye-gaze interfaces are described.<<ETX>>",
"title": ""
},
{
"docid": "d719fb1fe0faf76c14d24f7587c5345f",
"text": "This paper describes a framework for the estimation of shape from sparse or incomplete range data. It uses a shape representation called blending, which allows for the geometric combination of shapes into a unified model— selected regions of the component shapes are cut-out and glued together. Estimation of shape using this representation is realized using a physics-based framework, and also includes a process for deciding how to adapt the structure and topology of the model to improve the fit. The blending representation helps avoid abrupt changes in model geometry during fitting by allowing the smooth evolution of the shape, which improves the robustness of the technique. We demonstrate this framework with a series of experiments showing its ability to automatically extract structured representations from range data given both structurally and topologically complex objects. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-97-12. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/47 (appeared inIEEE Transactions on Pattern Analysis and Machine Intelligence , Vol. 20, No. 11, pp. 1186-1205, November 1998) Shape Evolution with Structural and Topological Changes using Blending Douglas DeCarlo and Dimitris Metaxas †",
"title": ""
},
{
"docid": "8db35bf9fd2969c579594e726370700d",
"text": "Wireless Sensor Networks (WSNs), in recent times, have become one of the most promising network solutions with a wide variety of applications in the areas of agriculture, environment, healthcare and the military. Notwithstanding these promising applications, sensor nodes in WSNs are vulnerable to different security attacks due to their deployment in hostile and unattended areas and their resource constraints. One of such attacks is the DoS jamming attack that interferes and disrupts the normal functions of sensor nodes in a WSN by emitting radio frequency signals to jam legitimate signals to cause a denial of service. In this work we propose a step-wise approach using a statistical process control technique to detect these attacks. We deploy an exponentially weighted moving average (EWMA) to detect anomalous changes in the intensity of a jamming attack event by using the packet inter-arrival feature of the received packets from the sensor nodes. Results obtained from a trace-driven simulation show that the proposed solution can efficiently and accurately detect jamming attacks in WSNs with little or no overhead.",
"title": ""
},
{
"docid": "006e11d03b1cdf8dcf85ba3967373d8d",
"text": "Collaboration in three-dimensional space: “spatial workspace collaboration” is introduced and an approach supporting its use via a video mediated communication system is described. Verbal expression analysis is primarily focused on. Based on experiment results, movability of a focal point, sharing focal points, movability of a shared workspace, and the ability to confirm viewing intentions and movements were determined to be system requirements necessary to support spatial workspace collaboration. A newly developed SharedView system having the capability to support spatial workspace collaboration is also introduced, tested, and some experimental results described.",
"title": ""
},
{
"docid": "030b25a7c93ca38dec71b301843c7366",
"text": "Simple grippers with one or two degrees of freedom are commercially available prosthetic hands; these pinch type devices cannot grasp small cylinders and spheres because of their small degree of freedom. This paper presents the design and prototyping of underactuated five-finger prosthetic hand for grasping various objects in daily life. Underactuated mechanism enables the prosthetic hand to move fifteen compliant joints only by one ultrasonic motor. The innovative design of this prosthetic hand is the underactuated mechanism optimized to distribute grasping force like those of humans who can grasp various objects robustly. Thanks to human like force distribution, the prototype of prosthetic hand could grasp various objects in daily life and heavy objects with the maximum ejection force of 50 N that is greater than other underactuated prosthetic hands.",
"title": ""
}
] |
scidocsrr
|
5a0974404945550533eaddf005901623
|
Intelligent Critic System for Architectural Design
|
[
{
"docid": "44f41d363390f6f079f2e67067ffa36d",
"text": "The research described in this paper was supported in part by the National Science Foundation under Grants IST-g0-12418 and IST-82-10564. and in part by the Office of Naval Research under Grant N00014-80-C-0197. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0001-0782/83/1100.0832 75¢",
"title": ""
}
] |
[
{
"docid": "5820a54cf9235a08fbf3d6221c42f1d0",
"text": "Restoring nasal lining is one of the essential parts during reconstruction of full-thickness defects of the nose. Without a sufficient nasal lining the whole reconstruction will fail. Nasal lining has to sufficiently cover the shaping subsurface framework. But in addition, lining must not compromise or even block nasal ventilation. This article demonstrates different possibilities of lining reconstruction. The use of composite grafts for small rim defects is described. The limits and technical components for application of skin grafts are discussed. Then the advantages and limitations of endonasal, perinasal, and hingeover flaps are demonstrated. Strategies to restore lining with one or two forehead flaps are presented. Finally, the possibilities and technical aspects to reconstruct nasal lining with a forearm flap are demonstrated. Technical details are explained by intraoperative pictures. Clinical cases are shown to illustrate the different approaches and should help to understand the process of decision making. It is concluded that although the lining cannot be seen after reconstruction of the cover it remains one of the key components for nasal reconstruction. When dealing with full-thickness nasal defects, there is no way to avoid learning how to restore nasal lining.",
"title": ""
},
{
"docid": "815b2b5d22d84810d33c1fd39761367d",
"text": "Analyzing Collaborative Learning Processes Automatically Running Head: ANALYZING COLLABORATIVE LEARNING PROCESSES AUTOMATICALLY Analyzing Collaborative Learning Processes Automatically: Exploiting the Advances of Computational Linguistics in Computer-Supported Collaborative Learning Carolyn Rosé, Yi-Chia Wang, Yue Cui, Jaime Arguello Carnegie Mellon University, USA Frank Fischer, Armin Weinberger, Karsten Stegmann University of Munich, Germany Corresponding author: Carolyn Rosé, Language Technologies Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh PA, 15213, Phone: ++1 (412) 268-7130, Fax: ++1 (412) 268-6298, email: cprose@cs.cmu.edu",
"title": ""
},
{
"docid": "7c98ac06ea8cb9b83673a9c300fb6f4c",
"text": "Heart rate monitoring from wrist-type photoplethysmographic (PPG) signals during subjects' intensive exercise is a difficult problem, since the PPG signals are contaminated by extremely strong motion artifacts caused by subjects' hand movements. In this work, we formulate the heart rate estimation problem as a sparse signal recovery problem, and use a sparse signal recovery algorithm to calculate high-resolution power spectra of PPG signals, from which heart rates are estimated by selecting corresponding spectrum peaks. To facilitate the use of sparse signal recovery, we propose using bandpass filtering, singular spectrum analysis, and temporal difference operation to partially remove motion artifacts and sparsify PPG spectra. The proposed method was tested on PPG recordings from 10 subjects who were fast running at the peak speed of 15km/hour. The results showed that the averaged absolute estimation error was only 2.56 Beats/Minute, or 1.94% error compared to ground-truth heart rates from simultaneously recorded ECG.",
"title": ""
},
{
"docid": "c3584e660a8f6c88f24d3c9dd3c08913",
"text": "Conditional Random Rields (CRF) have been widely applied in image segmentations. While most studies rely on handcrafted features, we here propose to exploit a pre-trained large convolutional neural network (CNN) to generate deep features for CRF learning. The deep CNN is trained on the ImageNet dataset and transferred to image segmentations here for constructing potentials of superpixels. Then the CRF parameters are learnt using a structured support vector machine (SSVM). To fully exploit context information in inference, we construct spatially related co-occurrence pairwise potentials and incorporate them into the energy function. This prefers labelling of object pairs that frequently co-occur in a certain spatial layout and at the same time avoids implausible labellings during the inference. Extensive experiments on binary and multi-class segmentation benchmarks demonstrate the promise of the proposed method. We thus provide new baselines for the segmentation performance on the Weizmann horse, Graz-02, MSRC-21, Stanford Background and PASCAL VOC 2011 datasets.",
"title": ""
},
{
"docid": "f3f441c2cf1224746c0bfbb6ce02706d",
"text": "This paper addresses the task of finegrained opinion extraction – the identification of opinion-related entities: the opinion expressions, the opinion holders, and the targets of the opinions, and the relations between opinion expressions and their targets and holders. Most existing approaches tackle the extraction of opinion entities and opinion relations in a pipelined manner, where the interdependencies among different extraction stages are not captured. We propose a joint inference model that leverages knowledge from predictors that optimize subtasks of opinion extraction, and seeks a globally optimal solution. Experimental results demonstrate that our joint inference approach significantly outperforms traditional pipeline methods and baselines that tackle subtasks in isolation for the problem of opinion extraction.",
"title": ""
},
{
"docid": "bfba2d1f26b3ac66630d81ab5bf10347",
"text": "Authcoin is an alternative approach to the commonly used public key infrastructures such as central authorities and the PGP web of trust. It combines a challenge response-based validation and authentication process for domains, certificates, email accounts and public keys with the advantages of a block chain-based storage system. As a result, Authcoin does not suffer from the downsides of existing solutions and is much more resilient to sybil attacks.",
"title": ""
},
{
"docid": "afdfde71813649df64fb510e19c47b4e",
"text": "In this paper, we propose an approach for affective ranking of movie scenes based on the emotions that are actually felt by spectators. Such a ranking can be used for characterizing the affective, or emotional, content of video clips. The ranking can for instance help determine which video clip from a database elicits, for a given user, the most joy. This in turn will permit video indexing and retrieval based on affective criteria corresponding to a personalized user affective profile.\n A dataset of 64 different scenes from 8 movies was shown to eight participants. While watching, their physiological responses were recorded; namely, five peripheral physiological signals (GSR - galvanic skin resistance, EMG - electromyograms, blood pressure, respiration pattern, skin temperature) were acquired. After watching each scene, the participants were asked to self-assess their felt arousal and valence for that scene. In addition, movie scenes were analyzed in order to characterize each with various audio- and video-based features capturing the key elements of the events occurring within that scene.\n Arousal and valence levels were estimated by a linear combination of features from physiological signals, as well as by a linear combination of content-based audio and video features. We show that a correlation exists between arousal- and valence-based rankings provided by the spectator's self-assessments, and rankings obtained automatically from either physiological signals or audio-video features. This demonstrates the ability of using physiological responses of participants to characterize video scenes and to rank them according to their emotional content. This further shows that audio-visual features, either individually or combined, can fairly reliably be used to predict the spectator's felt emotion for a given scene. The results also confirm that participants exhibit different affective responses to movie scenes, which emphasizes the need for the emotional profiles to be user-dependant.",
"title": ""
},
{
"docid": "e797fbf7b53214df32d5694527ce5ba3",
"text": "One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model 1 employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results.",
"title": ""
},
{
"docid": "100b664ee1bba4ecf2694ec4c60d4346",
"text": "This paper explores two modulation techniques for power factor corrector (PFC) based on critical conduction mode (CRM) and proposes a new modulation technique which has the benefits of CRM and allows quasi constant switching frequency also. The converter is designed for MHz range switching frequency as high frequency reduces the size of the EMI filter. However at high frequency, the switching losses become the dominant losses making soft switching a necessity. CRM allows zero current switching (ZCS) turn-on but it's not able to achieve zero voltage switching (ZVS) turn-on when the input voltage is greater than half of the output voltage. To achieve ZVS turn-on over the entire mains cycle, triangular current mode (TCM) was proposed by Marxgut, C. et al.[1] but both these methods have the drawback of variable switching frequency. The new method proposed modifies TCM so that the switching frequency is quasi constant and ZVS turn-on is also achieved over the entire mains cycle. Based on analytical loss model of Cascode GaN transistor, the efficiency of the three modulation techniques is compared. Also, the parameters of the EMI filter required are compared based on simulated noise measurement. As variable switching frequency is not preferred in three phase systems, the quasi constant frequency approach finds its benefits in three phase PFC.",
"title": ""
},
{
"docid": "c7ffe60db85e66f4b9a3c7de1f48fe3f",
"text": "Changes in body posture, musculoskeletal disorders and somatic dysfunctions are frequently observed during pregnancy especially ligament, joint and myofascial impairment. The aim of the paper is to present the use of osteopathic manipulative treatment (OMT) for back and pelvic pain in pregnancy on the basis of a review of the available literature. MEDLINE and Cochrane Library were searched in January 2014 for relevant reports, randomized controlled trials, clinical and case studies of OMT use in pregnant women. Each eligible source was verified and analyzed by two independent reviewers. OMT procedures appear to be effective and safe for pelvic and spinal pain management in the lumbosacral area in pregnant women.",
"title": ""
},
{
"docid": "9fa8ba9da6f6303278d479666916bd13",
"text": "UART (Universal Asynchronous Receiver Transmitter) is used for serial communication. It is used for long distance and low cost process for transfer of data between pc and its devices. In general a UART operated with specific baud rate. To meet the complex communication demands it is not sufficient. To overcome this difficulty a multi channel UART is proposed in this paper. And the whole design is simulated with modelsim and synthesized with Xilinx software",
"title": ""
},
{
"docid": "4c4c25aba1600869f7899e20446fd75f",
"text": "This paper presents GRAPE, a parallel system for graph computations. GRAPE differs from prior systems in its ability to parallelize existing sequential graph algorithms as a whole. Underlying GRAPE are a simple programming model and a principled approach, based on partial evaluation and incremental computation. We show that sequential graph algorithms can be \"plugged into\" GRAPE with minor changes, and get parallelized. As long as the sequential algorithms are correct, their GRAPE parallelization guarantees to terminate with correct answers under a monotonic condition. Moreover, we show that algorithms in MapReduce, BSP and PRAM can be optimally simulated on GRAPE. In addition to the ease of programming, we experimentally verify that GRAPE achieves comparable performance to the state-of-the-art graph systems, using real-life and synthetic graphs.",
"title": ""
},
{
"docid": "ea5a455bca9ff0dbb1996bd97d89dfe5",
"text": "Single exon genes (SEG) are archetypical of prokaryotes. Hence, their presence in intron-rich, multi-cellular eukaryotic genomes is perplexing. Consequently, a study on SEG origin and evolution is important. Towards this goal, we took the first initiative of identifying and counting SEG in nine completely sequenced eukaryotic organisms--four of which are unicellular (E. cuniculi, S. cerevisiae, S. pombe, P. falciparum) and five of which are multi-cellular (C. elegans, A. thaliana, D. melanogaster, M. musculus, H. sapiens). This exercise enabled us to compare their proportion in unicellular and multi-cellular genomes. The comparison suggests that the SEG fraction decreases with gene count (r = -0.80) and increases with gene density (r = 0.88) in these genomes. We also examined the distribution patterns of their protein lengths in different genomes.",
"title": ""
},
{
"docid": "c40323714f74de29dd487c922c06ac70",
"text": "The increase in the number of SDN-based deployments in production networks is triggering the need to consider fault-tolerant designs of controller architectures. Commercial SDN controller solutions incorporate fault tolerance, but there has been little discussion in the SDN literature on the design of such systems and the tradeoffs involved. To fill this gap, we present a by-construction design of a fault-tolerant controller, and materialize it by proposing and formalizing a practical architecture for small to medium-sized scale networks. A central component of our particular design is a replicated shared database that stores all network state. Contrary to the more common primary-backup approaches, the proposed design guarantees a smooth transition in case of failures and avoids the need of an additional coordination service. Our preliminary results show that the performance of our solution fulfills the demands of the target networks. We hope this paper to be a first step in what we consider a necessary discussion on how to build robust SDNs.",
"title": ""
},
{
"docid": "2adde1812974f2d5d35d4c7e31ca7247",
"text": "All currently available network intrusion detection (ID) systems rely upon a mechanism of data collection---passive protocol analysis---which is fundamentally flawed. In passive protocol analysis, the intrusion detection system (IDS) unobtrusively watches all traffic on the network, and scrutinizes it for patterns of suspicious activity. We outline in this paper two basic problems with the reliability of passive protocol analysis: (1) there isn't enough information on the wire on which to base conclusions about what is actually happening on networked machines, and (2) the fact that the system is passive makes it inherently \"fail-open,\" meaning that a compromise in the availability of the IDS doesn't compromise the availability of the network. We define three classes of attacks which exploit these fundamental problems---insertion, evasion, and denial of service attacks --and describe how to apply these three types of attacks to IP and TCP protocol analysis. We present the results of tests of the efficacy of our attacks against four of the most popular network intrusion detection systems on the market. All of the ID systems tested were found to be vulnerable to each of our attacks. This indicates that network ID systems cannot be fully trusted until they are fundamentally redesigned. Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection http://www.robertgraham.com/mirror/Ptacek-Newsham-Evasion-98.html (1 of 55) [17/01/2002 08:32:46 p.m.]",
"title": ""
},
{
"docid": "0683dbfa548d90b1fcbd3d793d194e6c",
"text": "Ayurvedic medicine is an ancient Indian form of healing. It is gaining popularity as part of the growing interest in New Age spirituality and in complementary and alternative medicine (CAM). There is no cure for Asthma as per the Conventional Medical Science. Ayurvedic medicines can be a potential and effective alternative for the treatment against the bronchial asthma. Ayurvedic medicines are used for the treatment of diseases globally. The present study was a review on the management of Tamaka-Shwasa based on Ayurvedic drugs including the respiratory tonics and naturally occurring bronchodilator and immune-modulators. This study result concluded that a systematic combination of herbal and allopathic medicines is required for management of asthma.",
"title": ""
},
{
"docid": "414cc87e3bdd4b070030282b0f192078",
"text": "The performance of a classifier trained on data coming from a specific domain typically degrades when applied to a related but different one. While annotating many samples from the new domain would address this issue, it is often too expensive or impractical. Domain Adaptation has therefore emerged as a solution to this problem; It leverages annotated data from a source domain, in which it is abundant, to train a classifier to operate in a target domain, in which it is either sparse or even lacking altogether. In this context, the recent trend consists of learning deep architectures whose weights are shared for both domains, which essentially amounts to learning domain invariant features. Here, we show that it is more effective to explicitly model the shift from one domain to the other. To this end, we introduce a two-stream architecture, where one operates in the source domain and the other in the target domain. In contrast to other approaches, the weights in corresponding layers are related but not shared. We demonstrate that this both yields higher accuracy than state-of-the-art methods on several object recognition and detection tasks and consistently outperforms networks with shared weights in both supervised and unsupervised settings.",
"title": ""
},
{
"docid": "41f4b0c55392ed3a2b59e4bbaec7566f",
"text": "Lithium-ion (Li-ion) batteries are ubiquitous sources of energy for portable electronic devices. Compared to alternative battery technologies, Li-ion batteries provide one of the best energy-to-weight ratios, exhibit no memory effect, and have low self-discharge when not in use. These beneficial properties, as well as decreasing costs, have established Li-ion batteries as a leading candidate for the next generation of automotive and aerospace applications. In the automotive sector, increasing demand for hybrid electric vehicles (HEVs), plug-in HEVs (PHEVs), and EVs has pushed manufacturers to the limits of contemporary automotive battery technology. This limitation is gradually forcing consideration of alternative battery technologies, such as Li-ion batteries, as a replacement for existing leadacid and nickel-metal-hydride batteries. Unfortunately, this replacement is a challenging task since automotive applications demand large amounts of energy and power and must operate safely, reliably, and durably at these scales. The article presents a detailed description and model of a Li-ion battery. It begins the section \"Intercalation-Based Batteries\" by providing an intuitive explanation of the fundamentals behind storing energy in a Li-ion battery. In the sections \"Modeling Approach\" and \"Li-Ion Battery Model,\" it present equations that describe a Li-ion cell's dynamic behavior. This modeling is based on using electrochemical principles to develop a physics-based model in contrast to equivalent circuit models. A goal of this article is to present the electrochemical model from a controls perspective.",
"title": ""
},
{
"docid": "eadc50aebc6b9c2fbd16f9ddb3094c00",
"text": "Instance segmentation is the problem of detecting and delineating each distinct object of interest appearing in an image. Current instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here we propose a new instance segmentation paradigm consisting in an end-to-end method that learns how to segment instances sequentially. The model is based on a recurrent neural network that sequentially finds objects and their segmentations one at a time. This net is provided with a spatial memory that keeps track of what pixels have been explained and allows occlusion handling. In order to train the model we designed a principled loss function that accurately represents the properties of the instance segmentation problem. In the experiments carried out, we found that our method outperforms recent approaches on multiple person segmentation, and all state of the art approaches on the Plant Phenotyping dataset for leaf counting.",
"title": ""
},
{
"docid": "c60b80296d66f762b935c3c40d82a520",
"text": "Subjects The study sample was composed of 172 adults (U.S. Commissioned Corps and Air Force officers) recruited from dental clinics in military bases in Rockville, Maryland, San Antonio, Texas, and Biloxi, Mississippi. Patients were eligible for the study if they were to be treated with at least one dental composite restoration. Patients were excluded if they had received composite restorations or pit-and-fissure sealants within the last 3 months or wore removable dental appliances, such as orthodontic retainers or partial dentures. The mean age of participants was 43.9 years (standard deviation, SD, 1.1 years), and the sample was evenly distributed by gender (50.3% male, 49.7% female). Subjects were followed a maximum of 30 h after receiving the dental composite restoration, which was adequate to assess short-term changes in chemical concentrations of urine and saliva samples. The authors did not report the years of subject recruitment or data collection.",
"title": ""
}
] |
scidocsrr
|
f9b7d215e550e185353cf679080a888b
|
Interaction-aware occupancy prediction of road vehicles
|
[
{
"docid": "fb8518678126415b58f1b934235ccc79",
"text": "One significant barrier in introducing autonomous driving is the liability issue of a collision; e.g. when two autonomous vehicles collide, it is unclear which vehicle should be held accountable. To solve this issue, we view traffic rules from legal texts as requirements for autonomous vehicles. If we can prove that an autonomous vehicle always satisfies these requirements during its operation, then it cannot be held responsible in a collision. We present our approach by formalising a subset of traffic rules from the Vienna Convention on Road Traffic for highway scenarios in Isabelle/HOL.",
"title": ""
}
] |
[
{
"docid": "854d06ba08492ad68ea96c73908f81ca",
"text": "We describe Swapout, a new stochastic training method, that outperforms ResNets of identical network structure yielding impressive results on CIFAR-10 and CIFAR100. Swapout samples from a rich set of architectures including dropout [20], stochastic depth [7] and residual architectures [5, 6] as special cases. When viewed as a regularization method swapout not only inhibits co-adaptation of units in a layer, similar to dropout, but also across network layers. We conjecture that swapout achieves strong regularization by implicitly tying the parameters across layers. When viewed as an ensemble training method, it samples a much richer set of architectures than existing methods such as dropout or stochastic depth. We propose a parameterization that reveals connections to exiting architectures and suggests a much richer set of architectures to be explored. We show that our formulation suggests an efficient training method and validate our conclusions on CIFAR-10 and CIFAR-100 matching state of the art accuracy. Remarkably, our 32 layer wider model performs similar to a 1001 layer ResNet model.",
"title": ""
},
{
"docid": "c011b2924151df9c4e90865d8ab8d856",
"text": "The growing demand for food poses major challenges to humankind. We have to safeguard both biodiversity and arable land for future agricultural food production, and we need to protect genetic diversity to safeguard ecosystem resilience. We must produce more food with less input, while deploying every effort to minimize risk. Agricultural sustainability is no longer optional but mandatory. There is still an on-going debate among researchers and in the media on the best strategy to keep pace with global population growth and increasing food demand. One strategy favors the use of genetically modified (GM) crops, while another strategy focuses on agricultural biodiversity. Here, we discuss two obstacles to sustainable agriculture solutions. The first obstacle is the claim that genetically modified crops are necessary if we are to secure food production within the next decades. This claim has no scientific support, but is rather a reflection of corporate interests. The second obstacle is the resultant shortage of research funds for agrobiodiversity solutions in comparison with funding for research in genetic modification of crops. Favoring biodiversity does not exclude any future biotechnological contributions, but favoring biotechnology threatens future biodiversity resources. An objective review of current knowledge places GM crops far down the list of potential solutions in the coming decades. We conclude that much of the research funding currently available for the development of GM crops would be much better spent in other research areas of plant science, e.g., nutrition, policy research, governance, and solutions close to local market conditions if the goal is to provide sufficient food for the world’s growing population in a sustainable way.",
"title": ""
},
{
"docid": "ee8ac41750c7d1545af54e812d7f2d9c",
"text": "The demand for more sophisticated Location-Based Services (LBS) in terms of applications variety and accuracy is tripling every year since the emergence of the smartphone a few years ago. Equally, smartphone manufacturers are mounting several wireless communication and localization technologies, inertial sensors as well as powerful processing capability, to cater to such LBS applications. A hybrid of wireless technologies is needed to provide seamless localization solutions and to improve accuracy, to reduce time to fix, and to reduce power consumption. The review of localization techniques/technologies of this emerging field is therefore important. This article reviews the recent research-oriented and commercial localization solutions on smartphones. The focus of this article is on the implementation challenges associated with utilizing these positioning solutions on Android-based smartphones. Furthermore, the taxonomy of smartphone-location techniques is highlighted with a special focus on the detail of each technique and its hybridization. The article compares the indoor localization techniques based on accuracy, utilized wireless technology, overhead, and localization technique used. The pursuit of achieving ubiquitous localization outdoors and indoors for critical LBS applications such as security and safety shall dominate future research efforts.",
"title": ""
},
{
"docid": "8ed2bb129f08657b896f5033c481db8f",
"text": "simple and fast reflectional symmetry detection algorithm has been developed in this Apaper. The algorithm employs only the original gray scale image and the gradient information of the image, and it is able to detect multiple reflectional symmetry axes of an object in the image. The directions of the symmetry axes are obtained from the gradient orientation histogram of the input gray scale image by using the Fourier method. Both synthetic and real images have been tested using the proposed algorithm.",
"title": ""
},
{
"docid": "3105a48f0b8e45857e8d48e26b258e04",
"text": "Dominated by the behavioral science approach for a long time, information systems research increasingly acknowledges design science as a complementary approach. While primarily information systems instantiations, but also constructs and models have been discussed quite comprehensively, the design of methods is addressed rarely. But methods appear to be of utmost importance particularly for organizational engineering. This paper justifies method construction as a core approach to organizational engineering. Based on a discussion of fundamental scientific positions in general and approaches to information systems research in particular, appropriate conceptualizations of 'method' and 'method construction' are presented. These conceptualizations are then discussed regarding their capability of supporting organizational engineering. Our analysis is located on a meta level: Method construction is conceptualized and integrated from a large number of references. Method instantiations or method engineering approaches however are only referenced and not described in detail.",
"title": ""
},
{
"docid": "4147fee030667122923f420ab55e38f7",
"text": "In this paper we propose a replacement algorithm, SF-LRU (second chance-frequency - least recently used) that combines the LRU (least recently used) and the LFU (least frequently used) using the second chance concept. A comprehensive comparison is made between our algorithm and both LRU and LFU algorithms. Experimental results show that the SF-LRU significantly reduces the number of cache misses compared the other two algorithms. Simulation results show that our algorithm can provide a maximum value of approximately 6.3% improvement in the miss ratio over the LRU algorithm in data cache and approximately 9.3% improvement in miss ratio in instruction cache. This performance improvement is attributed to the fact that our algorithm provides a second chance to the block that may be deleted according to LRU's rules. This is done by comparing the frequency of the block with the block next to it in the set.",
"title": ""
},
{
"docid": "5a46d347e83aec7624dde84ecdd5302c",
"text": "This paper presents a new algorithm to automatically solve algebra word problems. Our algorithm solves a word problem via analyzing a hypothesis space containing all possible equation systems generated by assigning the numbers in the word problem into a set of equation system templates extracted from the training data. To obtain a robust decision surface, we train a log-linear model to make the margin between the correct assignments and the false ones as large as possible. This results in a quadratic programming (QP) problem which can be efficiently solved. Experimental results show that our algorithm achieves 79.7% accuracy, about 10% higher than the state-of-the-art baseline (Kushman et al., 2014).",
"title": ""
},
{
"docid": "d0e5ddcc0aa85ba6a3a18796c335dcd2",
"text": "A novel planar end-fire circularly polarized (CP) complementary Yagi array antenna is proposed. The antenna has a compact and complementary structure, and exhibits excellent properties (low profile, single feed, broadband, high gain, and CP radiation). It is based on a compact combination of a pair of complementary Yagi arrays with a common driven element. In the complementary structure, the vertical polarization is contributed by a microstrip patch Yagi array, while the horizontal polarization is yielded by a strip dipole Yagi array. With the combination of the two orthogonally polarized Yagi arrays, a CP antenna with high gain and wide bandwidth is obtained. With a profile of <inline-formula> <tex-math notation=\"LaTeX\">$0.05\\lambda _{\\mathrm{0}}$ </tex-math></inline-formula> (3 mm), the antenna has a gain of about 8 dBic, an impedance bandwidth (<inline-formula> <tex-math notation=\"LaTeX\">$\\vert S_{11}\\vert < -10 $ </tex-math></inline-formula> dB) of 13.09% (4.57–5.21 GHz) and a 3-dB axial-ratio bandwidth of 10.51% (4.69–5.21 GHz).",
"title": ""
},
{
"docid": "b19cbe5e99f2edb701ba22faa7406073",
"text": "There are many wireless monitoring and control applications for industrial and home markets which require longer battery life, lower data rates and less complexity than available from existing wireless standards. These standards provide higher data rates at the expense of power consumption, application complexity and cost. What these markets need, in many cases, is a standardsbased wireless technology having the performance characteristics that closely meet the requirements for reliability, security, low power and low cost. This standards-based, interoperable wireless technology will address the unique needs of low data rate wireless control and sensor-based networks.",
"title": ""
},
{
"docid": "701d822e68ed2c74670f6a7a8d06631a",
"text": "With the increasing dependence of enterprises on IT, and with the widely spreading use of e-business, IT governance is attracting increasing worldwide attention. A proper IT governance would promote enterprise performance through intelligent and efficient utilization of IT. In addition, standard IT governance practices would provide a suitable open environment for e-business that provides compatibility for inter-enterprise interaction. This review is concerned with introducing the current state of IT governance, in four main steps. First, the review identifies what is meant by IT governance, and presents the main organizations concerned with its development, namely: ISACA (Information Systems Audit and Control Association) and ITGI (Information Technology Governance Institute). Secondly, the review highlights COBIT (Control Objectives for Information and related Technologies) the widely acknowledged IT governance framework, produced by ITGI. Thirdly, the current state of COBIT use is addressed using a recent global survey. Finally, comments and recommendations on the future development of IT governance are concluded. Understanding IT governance The word governance brings attention to the more familiar word government. To Webster's dictionary [1], both are of the same meaning. The dictionary defines the word government as \"the individual or body that exercises administrative power\". The word is known to be of Greek origin, and means \"to steer\" [2]. Currently, the two words are usually used to mean two related, but different, meanings. While the word government is defined as",
"title": ""
},
{
"docid": "a532dcd3dbaf3ba784d1f5f8623b600c",
"text": "Our long term interest is in building inference algorithms capable of answering questions and producing human-readable explanations by aggregating information from multiple sources and knowledge bases. Currently information aggregation (also referred to as “multi-hop inference”) is challenging for more than two facts due to “semantic drift”, or the tendency for natural language inference algorithms to quickly move off-topic when assembling long chains of knowledge. In this paper we explore the possibility of generating large explanations with an average of six facts by automatically extracting common explanatory patterns from a corpus of manually authored elementary science explanations represented as lexically-connected explanation graphs grounded in a semi-structured knowledge base of tables. We empirically demonstrate that there are sufficient common explanatory patterns in this corpus that it is possible in principle to reconstruct unseen explanation graphs by merging multiple explanatory patterns, then adapting and/or adding to their knowledge. This may ultimately provide a mechanism to allow inference algorithms to surpass the two-fact “aggregation horizon” in practice by using common explanatory patterns as constraints to limit the search space during information aggregation.",
"title": ""
},
{
"docid": "9b96a97426917b18dab401423e777b92",
"text": "Anatomical and biophysical modeling of left atrium (LA) and proximal pulmonary veins (PPVs) is important for clinical management of several cardiac diseases. Magnetic resonance imaging (MRI) allows qualitative assessment of LA and PPVs through visualization. However, there is a strong need for an advanced image segmentation method to be applied to cardiac MRI for quantitative analysis of LA and PPVs. In this study, we address this unmet clinical need by exploring a new deep learning-based segmentation strategy for quantification of LA and PPVs with high accuracy and heightened efficiency. Our approach is based on a multi-view convolutional neural network (CNN) with an adaptive fusion strategy and a new loss function that allows fast and more accurate convergence of the backpropagation based optimization. After training our network from scratch by using more than 60K 2D MRI images (slices), we have evaluated our segmentation strategy to the STACOM 2013 cardiac segmentation challenge benchmark. Qualitative and quantitative evaluations, obtained from the segmentation challenge, indicate that the proposed method achieved the state-of-the-art sensitivity (90%), specificity (99%), precision (94%), and efficiency levels (10 seconds in GPU, and 7.5 minutes in CPU).",
"title": ""
},
{
"docid": "78c567177285309ca3100fb15d6ee113",
"text": "The ability to discover the topic of a large set of text documents using relevant keyphrases is usually regarded as a very tedious task if done by hand. Automatic keyphrase extraction from multi-document data sets or text clusters provides a very compact summary of the contents of the clusters, which often helps in locating information easily. We introduce an algorithm for topic discovery using keyphrase extraction from multi-document sets and clusters based on frequent and significant shared phrases between documents. The keyphrases extracted by the algorithm are highly accurate and fit the cluster topic. The algorithm is independent of the domain of the documents. Subjective as well as quantitative evaluation show that the algorithm outperforms keyword-based cluster-labeling algorithms, and is capable of accurately discovering the topic, and often ranking it in the top one or two extracted keyphrases.",
"title": ""
},
{
"docid": "fb3018d852c2a7baf96fb4fb1233b5e5",
"text": "The term twin spotting refers to phenotypes characterized by the spatial and temporal co-occurrence of two (or more) different nevi arranged in variable cutaneous patterns, and can be associated with extra-cutaneous anomalies. Several examples of twin spotting have been described in humans including nevus vascularis mixtus, cutis tricolor, lesions of overgrowth, and deficient growth in Proteus and Elattoproteus syndromes, epidermolytic hyperkeratosis of Brocq, and the so-called phacomatoses pigmentovascularis and pigmentokeratotica. We report on a 28-year-old man and a 15-year-old girl, who presented with a previously unrecognized association of paired cutaneous vascular nevi of the telangiectaticus and anemicus types (naevus vascularis mixtus) distributed in a mosaic pattern on the face (in both patients) and over the entire body (in the man) and a complex brain malformation (in both patients) consisting of cerebral hemiatrophy, hypoplasia of the cerebral vessels and homolateral hypertrophy of the skull and sinuses (known as Dyke-Davidoff-Masson malformation). Both patients had facial asymmetry and the young man had facial dysmorphism, seizures with EEG anomalies, hemiplegia, insulin-dependent diabetes mellitus (IDDM), autoimmune thyroiditis, a large hepatic cavernous vascular malformation, and left Legg-Calvé-Perthes disease (LCPD) [LCPD-like presentation]. Array-CGH analysis and mutation analysis of the RASA1 gene were normal in both patients.",
"title": ""
},
{
"docid": "e55b0182c47c7aba4d65fac1ad3a3fa2",
"text": "117 © 2009 EMDR International Association DOI: 10.1891/1933-3196.3.3.117 “Experiencing trauma is an essential part of being human; history is written in blood” (van der Kolk & McFarlane, 1996, p. 3). As humans, however, we do have an extraordinary ability to adapt to trauma, and resilience is our most common response (Bonanno, 2005). Nonetheless, traumatic experiences can alter one’s social, psychological, and biological equilibrium, and for years memories of the event can taint experiences in the present. Despite advances in our knowledge of posttraumatic stress disorder (PTSD) and the development of psychosocial treatments, almost half of those who engage in treatment for PTSD fail to fully recover (Bradley, Greene, Russ, Dutra, & Westen, 2005). Furthermore, no theory as yet provides an adequate account of all the complex phenomena and processes involved in PTSD, and our understanding of the mechanisms that underlie effective treatment, such as eye movement desensitization and reprocessing (EMDR) and exposure therapy remains unclear. Historical Overview of PTSD",
"title": ""
},
{
"docid": "63ab6c486aa8025c38bd5b7eadb68cfa",
"text": "The demands on a natural language understanding system used for spoken language differ somewhat from the demands of text processing. For processing spoken language, there is a tension between the system being as robust as necessary, and as constrained as possible. The robust system will a t tempt to find as sensible an interpretation as possible, even in the presence of performance errors by the speaker, or recognition errors by the speech recognizer. In contrast, in order to provide language constraints to a speech recognizer, a system should be able to detect that a recognized string is not a sentence of English, and disprefer that recognition hypothesis from the speech recognizer. If the coupling is to be tight, with parsing and recognition interleaved, then the parser should be able to enforce as many constraints as possible for partial utterances. The approach taken in Gemini is to tightly constrain language recognition to limit overgeneration, but to extend the language analysis to recognize certain characteristic patterns of spoken utterances (but not generally thought of as part of grammar) and to recognize specific types of performance errors by the speaker.",
"title": ""
},
{
"docid": "7bf8b7e4698bd0ef951879f68083fd7e",
"text": "Brain injury induced by fluid percussion in rats caused a marked elevation in extracellular glutamate and aspartate adjacent to the trauma site. This increase in excitatory amino acids was related to the severity of the injury and was associated with a reduction in cellular bioenergetic state and intracellular free magnesium. Treatment with the noncompetitive N-methyl-D-aspartate (NMDA) antagonist dextrophan or the competitive antagonist 3-(2-carboxypiperazin-4-yl)propyl-1-phosphonic acid limited the resultant neurological dysfunction; dextrorphan treatment also improved the bioenergetic state after trauma and increased the intracellular free magnesium. Thus, excitatory amino acids contribute to delayed tissue damage after brain trauma; NMDA antagonists may be of benefit in treating acute head injury.",
"title": ""
},
{
"docid": "3c83e3b5484cada8b2cfe8943c9ce5f7",
"text": "Automatic human gesture recognition from camera images is an interesting topic for developing intelligent vision systems. In this paper, we propose a convolution neural network (CNN) method to recognize hand gestures of human task activities from a camera image. To achieve the robustness performance, the skin model and the calibration of hand position and orientation are applied to obtain the training and testing data for the CNN. Since the light condition seriously affects the skin color, we adopt a Gaussian Mixture model (GMM) to train the skin model which is used to robustly filter out non-skin colors of an image. The calibration of hand position and orientation aims at translating and rotating the hand image to a neutral pose. Then the calibrated images are used to train the CNN. In our experiment, we provided a validation of the proposed method on recognizing human gestures which shows robust results with various hand positions and orientations and light conditions. Our experimental evaluation of seven subjects performing seven hand gestures with average recognition accuracies around 95.96% shows the feasibility and reliability of the proposed method.",
"title": ""
},
{
"docid": "e53678707c57dce8d2e91afa04e99aaa",
"text": "Forensic facial identification examiners are required to match the identity of faces in images that vary substantially, owing to changes in viewing conditions and in a person's appearance. These identifications affect the course and outcome of criminal investigations and convictions. Despite calls for research on sources of human error in forensic examination, existing scientific knowledge of face matching accuracy is based, almost exclusively, on people without formal training. Here, we administered three challenging face matching tests to a group of forensic examiners with many years' experience of comparing face images for law enforcement and government agencies. Examiners outperformed untrained participants and computer algorithms, thereby providing the first evidence that these examiners are experts at this task. Notably, computationally fusing responses of multiple experts produced near-perfect performance. Results also revealed qualitative differences between expert and non-expert performance. First, examiners' superiority was greatest at longer exposure durations, suggestive of more entailed comparison in forensic examiners. Second, experts were less impaired by image inversion than non-expert students, contrasting with face memory studies that show larger face inversion effects in high performers. We conclude that expertise in matching identity across unfamiliar face images is supported by processes that differ qualitatively from those supporting memory for individual faces.",
"title": ""
},
{
"docid": "5d82469913da465c7445359dcdbbc89b",
"text": "There is increasing interest in using synthetic aperture radar (SAR) images in automated target recognition and decision-making tasks. The success of such tasks depends on how well the reconstructed SAR images exhibit certain features of the underlying scene. Based on the observation that typical underlying scenes usually exhibit sparsity in terms of such features, this paper presents an image formation method that formulates the SAR imaging problem as a sparse signal representation problem. For problems of complex-valued nature, such as SAR, a key challenge is how to choose the dictionary and the representation scheme for effective sparse representation. Since features of the SAR reflectivity magnitude are usually of interest, the approach is designed to sparsely represent the magnitude of the complex-valued scattered field. This turns the image reconstruction problem into a joint optimisation problem over the representation of magnitude and phase of the underlying field reflectivities. The authors develop the mathematical framework for this method and propose an iterative solution for the corresponding joint optimisation problem. The experimental results demonstrate the superiority of this method over previous approaches in terms of both producing high-quality SAR images and exhibiting robustness to uncertain or limited data.",
"title": ""
}
] |
scidocsrr
|
7778be7b2bb00f830e92712c62d8e2ea
|
A Practical Guide to Sentiment Annotation: Challenges and Solutions
|
[
{
"docid": "0037e02f769ff4487b10a6453114062b",
"text": "Access to word–sentiment associations is useful for many applications, including sentiment analysis, stance detection, and linguistic analysis. However, manually assigning finegrained sentiment association scores to words has many challenges with respect to keeping annotations consistent. We apply the annotation technique of Best–Worst Scaling to obtain real-valued sentiment association scores for words and phrases in three different domains: general English, English Twitter, and Arabic Twitter. We show that on all three domains the ranking of words by sentiment remains remarkably consistent even when the annotation process is repeated with a different set of annotators. We also, for the first time, determine the minimum difference in sentiment association that is perceptible to native speakers of a language.",
"title": ""
},
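The record above relies on Best–Worst Scaling annotations being aggregated into real-valued sentiment scores. The snippet below is a minimal sketch of the standard counting procedure often used with BWS (score = fraction of times an item is chosen best minus fraction of times it is chosen worst); it is illustrative and not necessarily the exact scoring used by the authors.

```python
# Minimal sketch of the common BWS counting procedure; scores fall in [-1, 1].
from collections import Counter

def bws_scores(annotations):
    """annotations: iterable of (items, best, worst), where items is the tuple shown
    to an annotator and best/worst are the items they selected."""
    best, worst, seen = Counter(), Counter(), Counter()
    for items, b, w in annotations:
        seen.update(items)
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}

# Example: 'terrible' is picked as most negative more often than the other terms.
data = [
    (("good", "bad", "terrible", "okay"), "good", "terrible"),
    (("bad", "terrible", "fine", "nice"), "nice", "terrible"),
]
print(bws_scores(data))
```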
{
"docid": "c8dbc63f90982e05517bbdb98ebaeeb5",
"text": "Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotionannotation questions, and show that asking if a term is associated with an emotion leads to markedly higher inter-annotator agreement than that obtained by asking if a term evokes an emotion.",
"title": ""
},
{
"docid": "f3cb18c15459dd7a9c657e32442bd289",
"text": "The advent of crowdsourcing has created a variety of new opportunities for improving upon traditional methods of data collection and annotation. This in turn has created intriguing new opportunities for data-driven machine learning (ML). Convenient access to crowd workers for simple data collection has further generalized to leveraging more arbitrary crowd-based human computation (von Ahn 2005) to supplement automated ML. While new potential applications of crowdsourcing continue to emerge, a variety of practical and sometimes unexpected obstacles have already limited the degree to which its promised potential can be actually realized in practice. This paper considers two particular aspects of crowdsourcing and their interplay, data quality control (QC) and ML, reflecting on where we have been, where we are, and where we might go from here.",
"title": ""
},
{
"docid": "ae5142ef32fde6096ea4e4a41ba60cb6",
"text": "Social media is playing a growing role in elections world-wide. Thus, automatically analyzing electoral tweets has applications in understanding how public sentiment is shaped, tracking public sentiment and polarization with respect to candidates and issues, understanding the impact of tweets from various entities, etc. Here, for the first time, we automatically annotate a set of 2012 US presidential election tweets for a number of attributes pertaining to sentiment, emotion, purpose, and style by crowdsourcing. Overall, more than 100,000 crowdsourced responses were obtained for 13 questions on emotions, style, and purpose. Additionally, we show through an analysis of these annotations that purpose, even though correlated with emotions, is significantly different. Finally, we describe how we developed automatic classifiers, using features from state-of-the-art sentiment analysis systems, to predict emotion and purpose labels, respectively, in new unseen tweets. These experiments establish baseline results for automatic systems on this new data.",
"title": ""
}
] |
[
{
"docid": "336c787fe3a3b81b8ee4193802499376",
"text": "In this document, a real-time fog detection system using an on-board low cost b&w camera, for a driving application, is presented. This system is based on two clues: estimation of the visibility distance, which is calculated from the camera projection equations and the blurring due to the fog. Because of the water particles floating in the air, sky light gets diffuse and, focus on the road zone, which is one of the darkest zones on the image. The apparent effect is that some part of the sky introduces in the road. Also in foggy scenes, the border strength is reduced in the upper part of the image. These two sources of information are used to make this system more robust. The final purpose of this system is to develop an automatic vision-based diagnostic system for warning ADAS of possible wrong working conditions. Some experimental results and the conclusions about this work are presented.",
"title": ""
},
{
"docid": "14fb6228827657ba6f8d35d169ad3c63",
"text": "In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.",
"title": ""
},
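The AMP passage above summarizes an iterative thresholding recursion for sparse recovery. The following NumPy sketch shows the basic AMP iteration with soft thresholding and the Onsager correction term; the threshold schedule and stopping rule here are simplifying assumptions rather than the paper's derivation.

```python
# Minimal sketch (not the authors' code) of AMP for sparse recovery y = A x + noise.
import numpy as np

def soft_threshold(v, theta):
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def amp(A, y, n_iter=30, threshold_scale=1.5):
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        pseudo_data = x + A.T @ z
        theta = threshold_scale * np.sqrt(np.mean(z ** 2))  # heuristic threshold
        x_new = soft_threshold(pseudo_data, theta)
        # The Onsager correction term is what distinguishes AMP from plain
        # iterative soft thresholding.
        onsager = (z / m) * np.count_nonzero(x_new)
        z = y - A @ x_new + onsager
        x = x_new
    return x
```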
{
"docid": "80394c124d823e7639af06fd33ef99c1",
"text": "We investigate whether income inequality affects subsequent growth in a cross-country sample for 1965-90, using the models of Barro (1997), Bleaney and Nishiyama (2002) and Sachs and Warner (1997), with negative results. We then investigate the evolution of income inequality over the same period and its correlation with growth. The dominating feature is inequality convergence across countries. This convergence has been significantly faster amongst developed countries. Growth does not appear to influence the evolution of inequality over time. Outline",
"title": ""
},
{
"docid": "dfa5334f77bba5b1eeb42390fed1bca3",
"text": "Personality was studied as a conditioner of the effects of stressful life events on illness onset. Two groups of middle and upper level executives had comparably high degrees of stressful life events in the previous 3 years, as measured by the Holmes and Rahe Schedule of Recent Life Events. One group (n = 86) suffered high stress without falling ill, whereas the other (n = 75) reported becoming sick after their encounter with stressful life events. Illness was measured by the Wyler, Masuda, and Holmes Seriousness of Illness Survey. Discriminant function analysis, run on half of the subjects in each group and cross-validated on the remaining cases, supported the prediction that high stress/low illness executives show, by comparison with high stress/high illness executives, more hardiness, that is, have a stronger commitment to self, an attitude of vigorousness toward the environment, a sense of meaningfulness, and an internal locus of control.",
"title": ""
},
{
"docid": "c13cbc9d7b4098cb392ba8293b692a37",
"text": "This paper introduces the first stiffness controller for continuum robots. The control law is based on an accurate approximation of a continuum robot's coupled kinematic and static force model. To implement a desired tip stiffness, the controller drives the actuators to positions corresponding to a deflected robot configuration that produces the required tip force for the measured tip position. This approach provides several important advantages. First, it enables the use of robot deflection sensing as a means to both sense and control tip forces. Second, it enables stiffness control to be implemented by modification of existing continuum robot position controllers. The proposed controller is demonstrated experimentally in the context of a concentric tube robot. Results show that the stiffness controller achieves the desired stiffness in steady state, provides good dynamic performance, and exhibits stability during contact transitions.",
"title": ""
},
{
"docid": "6d1f374686b98106ab4221066607721b",
"text": "How does one instigate a scientific revolution, or more modestly, a shift of scientific paradigm? This must have been on the minds of the organizers of the two conferences \"The Economy as an Evolving Complex System, I and II\" and the research program in economics at the Santa Fe Institute documented in the present volume and its predecessor of ten years ago.(1) Their strategy might be reconstructed as follows. First, the stranglehold of neoclassical economics on the Anglo-Saxon academic community since World War II is at least partly due to the ascendancy of mathematical rigor as the touchstone of serious economic theorizing. Thus if one could beat the prevailing paradigm at its own game one would immediately have a better footing in the community than the heretics, mostly from the left or one of the variousìnstitu-tional' camps, who had been sniping at it from the sidelines all the while but were never above the suspicion of not being mathematically up to comprehending it in the first place. Second, one could enlist both prominent representatives and path-breaking methods from the natural sciences to legitimize the introduction of (to economists) fresh and in some ways disturbing approaches to the subject. This was particularly the tack taken in 1987, where roughly equal numbers of scientists and economists were brought together in an extensive brain storming session. Physics has always been the role model for other aspiring`hard' sciences, and physicists seem to have succeeded in institutional-izing a `permanent revolution' in their own methodology , i.e., they are relatively less dogmatic and willing to be more eclectic in the interests of getting results. The fact that, with the exception of a brief chapter by Philip Anderson in the present volume, physicists as representatives of their discipline are no longer present, presumably indicates that their services can now be dispensed with in this enterprise.(2) Finally, one should sponsor research of the highest caliber, always laudable in itself, and make judicious use of key personalities. Care should also be taken that the work is of a form and style which, rather than explicitly provoking the profession, makes it appear as if it were the natural generalization of previous mainstream research and thus reasonably amenable to inclusion in the canon. This while tacitly encouraging and profiting from a wave of publicity in the popular media , a difficult line to tread if one does not want to appear …",
"title": ""
},
{
"docid": "ec19c40473bb1316b9390b6d7bcaae7f",
"text": "Online crowdfunding platforms like DonorsChoose.org and Kickstarter allow specific projects to get funded by targeted contributions from a large number of people. Critical for the success of crowdfunding communities is recruitment and continued engagement of donors. With donor attrition rates above 70%, a significant challenge for online crowdfunding platforms as well as traditional offline non-profit organizations is the problem of donor retention. We present a large-scale study of millions of donors and donations on DonorsChoose.org, a crowdfunding platform for education projects. Studying an online crowdfunding platform allows for an unprecedented detailed view of how people direct their donations. We explore various factors impacting donor retention which allows us to identify different groups of donors and quantify their propensity to return for subsequent donations. We find that donors are more likely to return if they had a positive interaction with the receiver of the donation. We also show that this includes appropriate and timely recognition of their support as well as detailed communication of their impact. Finally, we discuss how our findings could inform steps to improve donor retention in crowdfunding communities and non-profit organizations.",
"title": ""
},
{
"docid": "17de31cccc12b401a949ff5660d4f4c6",
"text": "In this paper we propose a system that automates the whole process of taking attendance and maintaining its records in an academic institute. Managing people is a difficult task for most of the organizations, and maintaining the attendance record is an important factor in people management. When considering academic institutes, taking the attendance of students on daily basis and maintaining the records is a major task. Manually taking the attendance and maintaining it for a long time adds to the difficulty of this task as well as wastes a lot of time. For this reason an efficient system is designed. This system takes attendance electronically with the help of a fingerprint sensor and all the records are saved on a computer server. Fingerprint sensors and LCD screens are placed at the entrance of each room. In order to mark the attendance, student has to place his/her finger on the fingerprint sensor. On identification student’s attendance record is updated in the database and he/she is notified through LCD screen. No need of all the stationary material and special personal for keeping the records. Furthermore an automated system replaces the manual system.",
"title": ""
},
{
"docid": "0f42ee3de2d64956fc8620a2afc20f48",
"text": "In 4 experiments, the authors addressed the mechanisms by which grammatical gender (in Italian and German) may come to affect meaning. In Experiments 1 (similarity judgments) and 2 (semantic substitution errors), the authors found Italian gender effects for animals but not for artifacts; Experiment 3 revealed no comparable effects in German. These results suggest that gender effects arise as a generalization from an established association between gender of nouns and sex of human referents, extending to nouns referring to sexuated entities. Across languages, such effects are found when the language allows for easy mapping between gender of nouns and sex of human referents (Italian) but not when the mapping is less transparent (German). A final experiment provided further constraints: These effects during processing arise at a lexical-semantic level rather than at a conceptual level.",
"title": ""
},
{
"docid": "f3abf5a6c20b6fff4970e1e63c0e836b",
"text": "We demonstrate a physically-based technique for predicting the drape of a wide variety of woven fabrics. The approach exploits a theoretical model that explicitly represents the microstructure of woven cloth with interacting particles, rather than utilizing a continuum approximation. By testing a cloth sample in a Kawabata fabric testing device, we obtain data that is used to tune the model's energy functions, so that it reproduces the draping behavior of the original material. Photographs, comparing the drape of actual cloth with visualizations of simulation results, show that we are able to reliably model the unique large-scale draping characteristics of distinctly different fabric types.",
"title": ""
},
{
"docid": "140d81bc2d9d125ed43946ddee94d2e4",
"text": "Cluster analysis plays an important role in decision-making process for many knowledge-based systems. There exist a wide variety of different approaches for clustering applications including the heuristic techniques, probabilistic models, and traditional hierarchical algorithms. In this paper, a novel heuristic approach based on big bang–big crunch algorithm is proposed for clustering problems. The proposed method not only takes advantage of heuristic nature to alleviate typical clustering algorithms such as k-means, but it also benefits from the memory-based scheme as compared to its similar heuristic techniques. Furthermore, the performance of the proposed algorithm is investigated based on several benchmark test functions as well as on the well-known datasets. The experimental results show the significant superiority of the proposed method over the similar algorithms.",
"title": ""
},
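The abstract above applies the big bang–big crunch heuristic to clustering. Below is a minimal sketch of one BB–BC iteration under assumed choices (candidates encode k cluster centers, fitness is the within-cluster sum of squared distances, and the scatter radius shrinks with the iteration count); it is an illustration, not the paper's implementation.

```python
# Illustrative BB-BC step for clustering (assumed encoding and schedule).
import numpy as np

def fitness(centers, data):
    # Within-cluster sum of squared distances for one candidate center set.
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return np.sum(np.min(d, axis=1) ** 2)

def bbbc_step(candidates, data, iteration, alpha=1.0, rng=np.random.default_rng(0)):
    """candidates: (n_candidates, k, dim) array of cluster-center sets."""
    f = np.array([fitness(c, data) for c in candidates])
    w = 1.0 / (f + 1e-12)                                   # lower fitness -> larger weight
    center_of_mass = np.tensordot(w, candidates, axes=1) / w.sum()   # the "crunch"
    spread = alpha * (data.max(0) - data.min(0)) / (iteration + 1)   # shrinks over time
    noise = rng.standard_normal(candidates.shape) * spread           # the "bang"
    return center_of_mass + noise, center_of_mass
```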
{
"docid": "993590032de592f4bb69d9c906ff76a8",
"text": "The evolution toward 5G mobile networks will be characterized by an increasing number of wireless devices, increasing device and service complexity, and the requirement to access mobile services ubiquitously. Two key enablers will allow the realization of the vision of 5G: very dense deployments and centralized processing. This article discusses the challenges and requirements in the design of 5G mobile networks based on these two key enablers. It discusses how cloud technologies and flexible functionality assignment in radio access networks enable network densification and centralized operation of the radio access network over heterogeneous backhaul networks. The article describes the fundamental concepts, shows how to evolve the 3GPP LTE a",
"title": ""
},
{
"docid": "2706d3b3774cf238d07c1796c1901b95",
"text": "Domestic induction appliances require power converters that feature high efficiency and accurate power control in a wide range of operating conditions. To achieve this modulation techniques play a key role to optimize the power converter operation. In this paper, a series resonant inverter featuring reverse-blocking insulated gate bipolar transistors and an optimized modulation technique are proposed. An analytical study of the converter operation is performed, and the main simulation results are shown. The proposed topology reduces both conduction and switching losses, increasing significantly the power converter efficiency. Moreover, the proposed modulation technique achieves linear output power control, improving the final appliance performance. The results derived from this analysis are tested by means of an experimental prototype, verifying the feasibility of the proposed converter and modulation technique.",
"title": ""
},
{
"docid": "3f98e2683b83a7312dc4dd6bf1f717aa",
"text": "How do comments on student writing from peers compare to those from subject-matter experts? This study examined the types of comments that reviewers produce as well as their perceived helpfulness. Comments on classmates’ papers were collected from two undergraduate and one graduate-level psychology course. The undergraduate papers in one of the courses were also commented on by an independent psychology instructor experienced in providing feedback to students on similar writing tasks. The comments produced by students at both levels were shorter than the instructor’s. The instructor’s comments were predominantly directive and rarely summative. The undergraduate peers’ comments were more mixed in type; directive and praise comments were the most frequent. Consistently, undergraduate peers found directive and praise comments helpful. The helpfulness of the directive comments was also endorsed by a writing expert.",
"title": ""
},
{
"docid": "68fe4f62d48270395ca3f257bbf8a18a",
"text": "Adjectives like warm, hot, and scalding all describe temperature but differ in intensity. Understanding these differences between adjectives is a necessary part of reasoning about natural language. We propose a new paraphrasebased method to automatically learn the relative intensity relation that holds between a pair of scalar adjectives. Our approach analyzes over 36k adjectival pairs from the Paraphrase Database under the assumption that, for example, paraphrase pair really hot↔ scalding suggests that hot < scalding. We show that combining this paraphrase evidence with existing, complementary patternand lexicon-based approaches improves the quality of systems for automatically ordering sets of scalar adjectives and inferring the polarity of indirect answers to yes/no questions.",
"title": ""
},
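The passage above infers relative adjective intensity from paraphrase pairs such as really hot ↔ scalding. The sketch below shows the core pattern in a hypothetical form: a paraphrase between "<intensifier> X" and a single word Y is counted as weak evidence that X < Y in intensity; the intensifier list and the evidence counting are assumptions, not the authors' full pipeline.

```python
# Minimal sketch of the intensifier-paraphrase pattern for scalar adjectives.
from collections import defaultdict

INTENSIFIERS = {"really", "very", "extremely", "so"}

def intensity_evidence(paraphrase_pairs):
    """paraphrase_pairs: iterable of (phrase, phrase), e.g. ("really hot", "scalding")."""
    evidence = defaultdict(int)   # evidence[(weaker, stronger)] = count
    for a, b in paraphrase_pairs:
        for left, right in ((a, b), (b, a)):
            tokens = left.split()
            if len(tokens) == 2 and tokens[0] in INTENSIFIERS and " " not in right:
                evidence[(tokens[1], right)] += 1
    return dict(evidence)

# "really hot" <-> "scalding" suggests hot < scalding; "very mild" <-> "warm" suggests mild < warm.
print(intensity_evidence([("really hot", "scalding"), ("warm", "very mild")]))
```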
{
"docid": "95513348196c70bb6242137685a6fbe5",
"text": "People speak at different levels of specificity in different situations.1 A conversational agent should have this ability and know when to be specific and when to be general. We propose an approach that gives a neural network–based conversational agent this ability. Our approach involves alternating between data distillation and model training : removing training examples that are closest to the responses most commonly produced by the model trained from the last round and then retrain the model on the remaining dataset. Dialogue generation models trained with different degrees of data distillation manifest different levels of specificity. We then train a reinforcement learning system for selecting among this pool of generation models, to choose the best level of specificity for a given input. Compared to the original generative model trained without distillation, the proposed system is capable of generating more interesting and higher-quality responses, in addition to appropriately adjusting specificity depending on the context. Our research constitutes a specific case of a broader approach involving training multiple subsystems from a single dataset distinguished by differences in a specific property one wishes to model. We show that from such a set of subsystems, one can use reinforcement learning to build a system that tailors its output to different input contexts at test time. Depending on their knowledge, interlocutors, mood, etc.",
"title": ""
},
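The record above alternates data distillation with retraining: training pairs whose responses resemble the model's most common (generic) outputs are removed before the next round. The snippet below sketches that filtering step with a simple string-similarity stand-in; the actual similarity measure, thresholds, and models used in the paper may differ.

```python
# Illustrative sketch of the data-distillation filtering step (assumed similarity measure).
from difflib import SequenceMatcher

def distill(pairs, frequent_responses, max_similarity=0.8):
    """pairs: list of (context, response) training examples.
    frequent_responses: responses most commonly generated by the last model."""
    def similar(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= max_similarity
    return [
        (ctx, resp) for ctx, resp in pairs
        if not any(similar(resp, generic) for generic in frequent_responses)
    ]

# Example: generic "i don't know" style targets are removed before retraining.
data = [("how old are you ?", "i don't know ."), ("where are you from ?", "i grew up in ohio .")]
print(distill(data, ["i don't know .", "i'm not sure ."]))
```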
{
"docid": "0ce82ead0954b99d811b9f50eee76abc",
"text": "Convolutional Neural Networks (CNNs) dominate various computer vision tasks since Alex Krizhevsky showed that they can be trained effectively and reduced the top-5 error from 26.2 % to 15.3 % on the ImageNet large scale visual recognition challenge. Many aspects of CNNs are examined in various publications, but literature about the analysis and construction of neural network architectures is rare. This work is one step to close this gap. A comprehensive overview over existing techniques for CNN analysis and topology construction is provided. A novel way to visualize classification errors with confusion matrices was developed. Based on this method, hierarchical classifiers are described and evaluated. Additionally, some results are confirmed and quantified for CIFAR-100. For example, the positive impact of smaller batch sizes, averaging ensembles, data augmentation and test-time transformations on the accuracy. Other results, such as the positive impact of learned color transformation on the test accuracy could not be confirmed. A model which has only one million learned parameters for an input size of 32× 32× 3 and 100 classes and which beats the state of the art on the benchmark dataset Asirra, GTSRB, HASYv2 and STL-10 was developed.",
"title": ""
},
{
"docid": "0cf97758f5f7dab46e969af14bb36db9",
"text": "The design complexity of modern high performance processors calls for innovative design methodologies for achieving time-to-market goals. New design techniques are also needed to curtail power increases that inherently arise from ever increasing performance targets. This paper describes new design approaches employed by the POWER8 processor design team to address complexity and power consumption challenges. Improvements in productivity are attained by leveraging a new and more synthesis-centric design methodology. New optimization strategies for synthesized macros allow power reduction without sacrificing performance. These methodology innovations contributed to the industry leading performance of the POWER8 processor. Overall, POWER8 delivers a 2.5x increase in per-socket performance over its predecessor, POWER7+, while maintaining the same power dissipation.",
"title": ""
},
{
"docid": "beedcc735e6e0c2e58ede1dc042e9979",
"text": "Paleolimnological studies which included analyses of diatoms, fossil pigments and physico-chemical characteristics of bottom sediments have been used to describe the limnological history of Racze Lake. The influx of terrigenous material into the lake have been determined on the basis of stratigraphy of elements associated with mineral content. The successively eroded soils as well as process of chemical erosion caused increase leaching of metals Mg, Fe, Al into the lake basin. However the concentration of these metals finally deposited in bottom sediments was also effected by the oxygen regime at the sediment-water interface. Both ratios, chlorophyll derivatives to total carotenoids (CD:TC) and Fe:Mn indicated hypolimnetic oxygen depletion in the middle part of the profile. The development of blue-green algal population, estimated by the ratio epiphasic to hypophasic carotenoids (EC:HC) was correlated with periods of redox conditions in the lake. The pH changes ranged from 6.5 to 7.7. The most important factors effecting pH changes were inflow of mineral matter from the watershed and structural changes in the littoral biocenosis.",
"title": ""
},
{
"docid": "91abbad1c392dd4fcaf9c75b468c5e2d",
"text": "Face alignment is very crucial to the task of face attributes recognition. The performance of face attributes recognition would notably degrade if the fiducial points of the original face images are not precisely detected due to large lighting, pose and occlusion variations. In order to alleviate this problem, we propose a spatial transform based deep CNNs to improve the performance of face attributes recognition. In this approach, we first learn appropriate transformation parameters by a carefully designed spatial transformer network called LoNet to align the original face images, and then recognize the face attributes based on the aligned face images using a deep network called ClNet. To the best of our knowledge, this is the first attempt to use spatial transformer network in face attributes recognition task. Extensive experiments on two large and challenging databases (CelebA and LFWA) clearly demonstrate the effectiveness of the proposed approach over the current state-of-the-art.",
"title": ""
}
] |
scidocsrr
|
f7114efd1f36deffcd23d8bf5b41709d
|
A Novel Six-Axis Force/Torque Sensor for Robotic Applications
|
[
{
"docid": "ef785a3eadaa01a7b45d978f63583513",
"text": "This paper presents a laparoscopic grasping tool for minimally invasive surgery with the capability of multiaxis force sensing. The tool is able to sense three-axis Cartesian manipulation force and a single-axis grasping force. The forces are measured by a wrist force sensor located at the distal end of the tool, and two torque sensors at the tool base, respectively. We propose an innovative design of a miniature force sensor achieving structural simplicity and potential cost effectiveness. A prototype is manufactured and experiments are conducted in a simulated surgical environment by using an open platform for surgical robot research, called Raven-II.",
"title": ""
}
] |
[
{
"docid": "e0ee4f306bb7539d408f606d3c036ac5",
"text": "Despite the growing popularity of mobile web browsing, the energy consumed by a phone browser while surfing the web is poorly understood. We present an infrastructure for measuring the precise energy used by a mobile browser to render web pages. We then measure the energy needed to render financial, e-commerce, email, blogging, news and social networking sites. Our tools are sufficiently precise to measure the energy needed to render individual web elements, such as cascade style sheets (CSS), Javascript, images, and plug-in objects. Our results show that for popular sites, downloading and parsing cascade style sheets and Javascript consumes a significant fraction of the total energy needed to render the page. Using the data we collected we make concrete recommendations on how to design web pages so as to minimize the energy needed to render the page. As an example, by modifying scripts on the Wikipedia mobile site we reduced by 30% the energy needed to download and render Wikipedia pages with no change to the user experience. We conclude by estimating the point at which offloading browser computations to a remote proxy can save energy on the phone.",
"title": ""
},
{
"docid": "761ff3bbbb50ae44243f6f6ff60349a0",
"text": "Memristor technology is regarded as a potential solution to the memory bottleneck in Von Neumann Architecture by putting storage and computation integrated in the same physical location. In this paper, we proposed a nonvolatile exclusive-OR (XOR) logic gate with 5 memristors, which can execute operation in a single step. Moreover, based on the XOR logic gate, a full adder was presented and simulated by SPICE. Compared to other logic gate and full adder, the proposed circuits have benefits of simpler architecture, higher speed and lower power consumption. This paper provides a memristor-based element as a solution to the future alternative Computation-In-Memory architecture.",
"title": ""
},
{
"docid": "1561273ef56ca08c8a4d68f6eeffc399",
"text": "Blockchain, the underlying technology that powers cryptocurrencies such as Bitcoin and Ethereum, is gaining so much attention from different industry stakeholders, governments and research communities. Its application is extending beyond cryptocurrencies and has been exploited in different domains such as finance, E-commerce, Internet of Things (IoT), healthcare, and governance. Some key attributes of the technology are decentralization, immutability, security and transparency. This paper aims to describe how permissioned Blockchain can be applied to a specific educational use case — decentralized verification of academic credentials. The proposed Blockchain-based solution, named ‘CredenceLedger’, is a system that stores compact data proofs of digital academic credentials in Blockchain ledger that are easily verifiable for education stakeholders and interested third party organizations.",
"title": ""
},
{
"docid": "277652d76d68b547b76b3476e0b3ad05",
"text": "A theory of inductive learning is presented that characterizes it as a heuristic search through a space of symbolic descriptions, generated by an application of certain inference rules to the initial observational statements (the teacher-provided examples of some concepts, or facts about a class of objects or a phenomenon). The inference rules include generalization rules, which perform generalizing transformations on descriptions, and conventional truth-preserving deductive rules (specialization and reformulation rules). The application of the inference rules to descriptions is constrained by problem background knowledge, and guided by criteria evaluating the 'quality' of generated inductive assertions. Based on this theory, a general methodology for learning structural descriptions from examples, called STAR, is described and illustrated by a problem from the area of conceptual data analysis. \" . . . scientific knowledge through demonstration 1 is impossible unless a man knows the primary immediate premises . . . . \" \" . . . we must get to know the primary premises by induction; for the method by which even senseperception implants the universal is induc t ive . . . \"",
"title": ""
},
{
"docid": "18b6fe3cbf66ede3467fe3b5bbc4a9d6",
"text": "Dermoscopy image as a non-invasive diagnosis technique plays an important role for early diagnosis of malignant melanoma. Even for experienced dermatologists, however, diagnosis by human vision can be subjective, inaccurate and non-reproducible. This is attributed to the challenging image characteristics including varying lesion sizes and their shapes, fuzzy lesion boundaries, different skin color types and presence of hair. To aid in the image interpretation, automatic classification of dermoscopy images have been shown to be a valuable aid in the clinical decision making. Existing methods however have problems in representing and differentiating skin lesions due to high degree of similarities between melanoma and non-melanoma images and large variations inherited from skin lesion images. To overcome these limitations, this study proposes a new automatic melanoma detection method for dermoscopy images via multi-scale lesion-biased representation (MLR) and joint reverse classification (JRC). Our proposed MLR representation enable us to represent skin lesions using multiple closely related histograms derived from different rotations and scales while traditional methods can only represent skin lesion using a single-scale histogram. The MLR representation was then used with JRC for melanoma detection. The proposed JRC model allows us to use a set of closely related histograms to derive additional information for melanoma detection, where existing methods mainly rely on histogram itself. Our method was evaluated on a public dataset of dermoscopy images, and we demonstrate superior classification performance compared to the current state-of-the-art methods.",
"title": ""
},
{
"docid": "924c7216c12771f52a69b03d0883c10a",
"text": "A quantum computer, if built, will be to an ordinary computer as a hydrogen bomb is to gunpowder, at least for some types of computations. Today no quantum computer exists, beyond laboratory prototypes capable of solving only tiny problems, and many practical problems remain to be solved. Yet the theory of quantum computing has advanced significantly in the past decade, and is becoming a significant discipline in itself. This article explains the concepts and basic mathematics behind quantum computers and some of the promising approaches for building them. We also discuss quantum communication, an essential component of future quantum information processing, and quantum cryptography, widely expected to be the first practical application for quantum information technology.",
"title": ""
},
{
"docid": "a34f658fcc70e6a0bc4eddacf5b8123f",
"text": "Recently, Radio Frequency Identification (RFID) technique has been widely deployed in many applications, such as medical drugs management in hospitals and missing children searching in amusement parks. The applications basically can be classified into two types: non-public key cryptosystem (PKC)-based and PKC-based. However, many of them have been found to be flawed in the aspect of privacy problem. Therefore, many researchers tried to resolve this problem. They mainly investigated on how low-cost RFID tags can be used in large-scale systems. However, after analyses, we found those studies have some problems, such as suffering physical attack or de-synch attack. Hence, in this paper, we try to design an efficient RFID scheme based on Elliptic Curve Cryptography (ECC) to avoid these problems. After analyses, we conclude that our scheme not only can resist various kinds of attacks but also outperforms the other ECC based RFID schemes in security requirements, with needing only little extra elliptic curve point multiplications.",
"title": ""
},
{
"docid": "7f7a67af972d26746ce1ae0c7ec09499",
"text": "We describe Microsoft's conversational speech recognition system, in which we combine recent developments in neural-network-based acoustic and language modeling to advance the state of the art on the Switchboard recognition task. Inspired by machine learning ensemble techniques, the system uses a range of convolutional and recurrent neural networks. I-vector modeling and lattice-free MMI training provide significant gains for all acoustic model architectures. Language model rescoring with multiple forward and backward running RNNLMs, and word posterior-based system combination provide a 20% boost. The best single system uses a ResNet architecture acoustic model with RNNLM rescoring, and achieves a word error rate of 6.9% on the NIST 2000 Switchboard task. The combined system has an error rate of 6.2%, representing an improvement over previously reported results on this benchmark task.",
"title": ""
},
{
"docid": "19c8893f9e27e48c9d31b759735936ec",
"text": "Advanced driver assistance systems (ADAS) can be significantly improved with effective driver action prediction (DAP). Predicting driver actions early and accurately can help mitigate the effects of potentially unsafe driving behaviors and avoid possible accidents. In this paper, we formulate driver action prediction as a timeseries anomaly prediction problem. While the anomaly (driver actions of interest) detection might be trivial in this context, finding patterns that consistently precede an anomaly requires searching for or extracting features across multi-modal sensory inputs. We present such a driver action prediction system, including a real-time data acquisition, processing and learning framework for predicting future or impending driver action. The proposed system incorporates camera-based knowledge of the driving environment and the driver themselves, in addition to traditional vehicle dynamics. It then uses a deep bidirectional recurrent neural network (DBRNN) to learn the correlation between sensory inputs and impending driver behavior achieving accurate and high horizon action prediction. The proposed system performs better than other existing systems on driver action prediction tasks and can accurately predict key driver actions including acceleration, braking, lane change and turning at durations of 5sec before the action is executed by the driver. Keywords— timeseries modeling, driving assistant system, driver action prediction, driver intent estimation, deep recurrent neural network",
"title": ""
},
{
"docid": "83c99801d4dd18d6fa4f725e2c2e3e51",
"text": "Aging is a complex phenomenon and its impact is becoming more relevant due to the rising life expectancy and because aging itself is the basis for the development of age-related diseases such as cancer, neurodegenerative diseases and type 2 diabetes. Recent years of scientific research have brought up different theories that attempt to explain the aging process. So far, there is no single theory that fully explains all facets of aging. The damage accumulation theory is one of the most accepted theories due to the large body of evidence found over the years. Damage accumulation is thought to be driven, among others, by oxidative stress. This condition results in an excess attack of oxidants on biomolecules, which lead to damage accumulation over time and contribute to the functional involution of cells, tissues and organisms. If oxidative stress persists, cellular senescence is a likely outcome and an important hallmark of aging. Therefore, it becomes crucial to understand how senescent cells function and how they contribute to the aging process. This review will cover cellular senescence features related to the protein pool such as morphological and molecular hallmarks, how oxidative stress promotes protein modifications, how senescent cells cope with them by proteostasis mechanisms, including antioxidant enzymes and proteolytic systems. We will also highlight the nutritional status of senescent cells and aged organisms (including human clinical studies) by exploring trace elements and micronutrients and on their importance to develop strategies that might increase both, life and health span and postpone aging onset.",
"title": ""
},
{
"docid": "88f9e84042ac81606a5853bb2229eb64",
"text": "In binary convolutional neural networks (BCNN), arithmetic operations are replaced by bitwise operations and the required memory size is greatly reduced, which is a good opportunity to accelerate training or inference on FPGAs. This paper proposes a BCNN architecture with a single engine that achieves high resource utilization. The proposed design deploys a large number of processing elements in parallel to increase throughput, and a forwarding scheme to increase resource utilization on the existing engine. In addition, we demonstrate a novel reuse scheme to make fully-connected layers exploit the same engine. The proposed design is combined with an inference environment for comparison and implemented on a Xilinx XCVU190 FPGA. The implemented design uses 61k look-up tables (LUTs), 45k flip-flops (FFs), and 13.9Mbit block RAM (BRAM). In addition, it achieves 61.6 GOPS/kLUT at 240MHz, which is 1.16 times higher than that of the best prior BCNN design, even though it uses a single engine without optimal configurations on each layer.",
"title": ""
},
{
"docid": "4f03fc6c2f1d042758a536c753aaeb37",
"text": "Detecting humans in films and videos is a challenging problem owing to the motion of the subjects, the camera and the background and to variations in pose, appearance, clothing, illumination and background clutter. We develop a detector for standing and moving people in videos with possibly moving cameras and backgrounds, testing several different motion coding schemes and showing empirically that orientated histograms of differential optical flow give the best overall performance. These motion-based descriptors are combined with our Histogram of Oriented Gradient appearance descriptors. The resulting detector is tested on several databases including a challenging test set taken from feature films and containing wide ranges of pose, motion and background variations, including moving cameras and backgrounds. We validate our results on two challenging test sets containing more than 4400 human examples. The combined detector reduces the false alarm rate by a factor of 10 relative to the best appearance-based detector, for example giving false alarm rates of 1 per 20,000 windows tested at 8% miss rate on our Test Set 1.",
"title": ""
},
{
"docid": "1a0c3cd8fc62326da3a87692455e62a5",
"text": "One of the most important tasks of conference organizers is the assignment of papers to reviewers. Reviewers’ assessments of papers is a crucial step in determining the conference program, and in a certain sense to shape the direction of a field. However this is not a simple task: large conferences typically have to assign hundreds of papers to hundreds of reviewers, and time constraints make the task impossible for one person to accomplish. Furthermore other constraints, such as reviewer load have to be taken into account, preventing the process from being completely distributed. We built the first version of a system to suggest reviewer assignments for the NIPS 2010 conference, followed, in 2012, by a release that better integrated our system with Microsoft’s popular Conference Management Toolkit (CMT). Since then our system has been widely adopted by the leading conferences in both the machine learning and computer vision communities. This paper provides an overview of the system, a summary of learning models and methods of evaluation that we have been using, as well as some of the recent progress and open issues.",
"title": ""
},
{
"docid": "400a56ea0b2c005ed16500f0d7818313",
"text": "Real estate appraisal, which is the process of estimating the price for real estate properties, is crucial for both buyers and sellers as the basis for negotiation and transaction. Traditionally, the repeat sales model has been widely adopted to estimate real estate prices. However, it depends on the design and calculation of a complex economic-related index, which is challenging to estimate accurately. Today, real estate brokers provide easy access to detailed online information on real estate properties to their clients. We are interested in estimating the real estate price from these large amounts of easily accessed data. In particular, we analyze the prediction power of online house pictures, which is one of the key factors for online users to make a potential visiting decision. The development of robust computer vision algorithms makes the analysis of visual content possible. In this paper, we employ a recurrent neural network to predict real estate prices using the state-of-the-art visual features. The experimental results indicate that our model outperforms several other state-of-the-art baseline algorithms in terms of both mean absolute error and mean absolute percentage error.",
"title": ""
},
{
"docid": "f65c3e60dbf409fa2c6e58046aad1e1c",
"text": "The gut microbiota is essential for the development and regulation of the immune system and the metabolism of the host. Germ-free animals have altered immunity with increased susceptibility to immunologic diseases and show metabolic alterations. Here, we focus on two of the major immune-mediated microbiota-influenced components that signal far beyond their local environment. First, the activation or suppression of the toll-like receptors (TLRs) by microbial signals can dictate the tone of the immune response, and they are implicated in regulation of the energy homeostasis. Second, we discuss the intestinal mucosal surface is an immunologic component that protects the host from pathogenic invasion, is tightly regulated with regard to its permeability and can influence the systemic energy balance. The short chain fatty acids are a group of molecules that can both modulate the intestinal barrier and escape the gut to influence systemic health. As modulators of the immune response, the microbiota-derived signals influence functions of distant organs and can change susceptibility to metabolic diseases.",
"title": ""
},
{
"docid": "b9bc1b10d144e6680de682273dbced00",
"text": "We propose a new and, arguably, a very simple reduction of instance segmentation to semantic segmentation. This reduction allows to train feed-forward non-recurrent deep instance segmentation systems in an end-to-end fashion using architectures that have been proposed for semantic segmentation. Our approach proceeds by introducing a fixed number of labels (colors) and then dynamically assigning object instances to those labels during training (coloring). A standard semantic segmentation objective is then used to train a network that can color previously unseen images. At test time, individual object instances can be recovered from the output of the trained convolutional network using simple connected component analysis. In the experimental validation, the coloring approach is shown to be capable of solving diverse instance segmentation tasks arising in autonomous driving (the Cityscapes benchmark), plant phenotyping (the CVPPP leaf segmentation challenge), and high-throughput microscopy image analysis. The source code is publicly available: https://github.com/kulikovv/DeepColoring.",
"title": ""
},
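The instance-segmentation-by-coloring abstract above recovers individual instances at test time with connected-component analysis on the predicted label map. The sketch below illustrates that step under assumed conventions (integer color labels with 0 as background); it is not the authors' code.

```python
# Illustrative recovery of instances from a predicted "coloring" (semantic label map).
import numpy as np
from scipy import ndimage

def instances_from_coloring(color_map, background_label=0):
    """color_map: (H, W) int array of predicted colors; returns a list of boolean masks."""
    instances = []
    for color in np.unique(color_map):
        if color == background_label:
            continue
        components, n = ndimage.label(color_map == color)  # connected components per color
        instances.extend(components == i for i in range(1, n + 1))
    return instances
```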
{
"docid": "be1c48183fbba677f9dd3d262b70b9b8",
"text": "The goal of our research is to investigate whether a Cognitive Tutor can be made more effective by extending it to help students acquire help-seeking skills. We present a preliminary model of help-seeking behavior that will provide the basis for a Help-Seeking Tutor Agent. The model, implemented by 57 production rules, captures both productive and unproductive help-seeking behavior. As a first test of the model’s efficacy, we used it off-line to evaluate students’ help-seeking behavior in an existing data set of student-tutor interactions, We found that 72% of all student actions represented unproductive help-seeking behavior. Consistent with some of our earlier work (Aleven & Koedinger, 2000) we found a proliferation of hint abuse (e.g., using hints to find answers rather than trying to understand). We also found that students frequently avoided using help when it was likely to be of benefit and often acted in a quick, possibly undeliberate manner. Students’ help-seeking behavior accounted for as much variance in their learning gains as their performance at the cognitive level (i.e., the errors that they made with the tutor). These findings indicate that the help-seeking model needs to be adjusted, but they also underscore the importance of the educational need that the Help-Seeking Tutor Agent aims to address.",
"title": ""
},
{
"docid": "4dd92ba65219dd86b5de47a4170d4fda",
"text": "In a private database query system a client issues queries to a database server and obtains the results without learning anything else about the database and without the server learning the query. In this work we develop tools for implementing private database queries using somewhat-homomorphic encryption (SWHE), that is, using an encryption system that supports only limited computations on encrypted data. We show that a polynomial encoding of the database enables an efficient implementation of several different query types using only low-degree computations on ciphertexts. Specifically, we study two separate settings that offer different privacy/efficiency tradeoffs. In the basic client-server setting, we show that additive homomorphisms are sufficient to implement conjunction and threshold queries. We obtain further efficiency improvements using an additive system that also supports a single homomorphic multiplication on ciphertexts. This implementation hides all aspects of the client’s query from the server, and reveals nothing to the client on non-matching records. To improve performance further we turn to the “Isolated-Box” architecture of De Cristofaro et al. In that architecture the role of the database server is split between two non-colluding parties. The server encrypts and pre-processes the n-record database and also prepares an encrypted inverted index. The server sends the encrypted database and inverted index to a proxy, but keeps the decryption keys to itself. The client interacts with both server and proxy for every query and privacy holds as long as the server and proxy do not collude. We show that using a system that supports only log(n) multiplications on encrypted data it is possible to implement conjunctions and threshold queries efficiently. We implemented our protocols for the Isolated-box architecture using the somewhat homomorphic encryption system by Brakerski, and compared it to a simpler implementation that only uses Paillier’s additively homomorphic encryption system. The implementation using somewhat homomorphic encryption was able to handle a query with a few thousand matches out of a million-record database in just a few minutes, far outperforming the implementation using additively homomorphic encryption.",
"title": ""
},
{
"docid": "25aee378edb95f74b650ea79f4fb7293",
"text": "Traffic light control is a challenging problem in many cities. This is due to the large number of vehicles and the high dynamics of the traffic system. Poor traffic systems are the big reason for of accidents, time losses. This system proposed, in this paper aims at reducing waiting times of the vehicles at traffic signals. Traffic Light Control (TLC) system also based on microcontroller and microprocessor. But the disadvantage of with microcontroller or microprocessor is that it works on fixed time, which is functioning according to the program that does not have the flexibility of modification on real time basis. This proposed system using FPGA with traffic sensors to control traffic according requirement means designer can change the program if it require and thus reduces the waiting time. The hardware design has been developed using Verilog Hardware Description Language (HDL) programming. The output of system has been tested using Xilinx. The implementation of traffic Light Controller also through Application Specific Integrated Circuit. But implementation with FPGA is less expensive compared to ASIC design. This paper presents the FPGA implemented low cost advanced TLC system. Coding of the design is done using Verilog HDL and the design is tested and simulated on Spartan-3E FPGA development kit.",
"title": ""
},
{
"docid": "eb31d3d6264e3a6aba0753b5ba14f572",
"text": "Using aggregate product search data from Amazon.com, we jointly estimate consumer information search and online demand for consumer durable goods. To estimate the demand and search primitives, we introduce an optimal sequential search process into a model of choice and treat the observed marketlevel product search data as aggregations of individual-level optimal search sequences. The model builds on the dynamic programming framework by Weitzman (1979) and combines it with a choice model. It can accommodate highly complex demand patterns at the market level. At the individual level, the model has a number of attractive properties in estimation, including closed-form expressions for the probability distribution of alternative sets of searched goods and breaking the curse of dimensionality. Using numerical experiments, we verify the model's ability to identify the heterogeneous consumer tastes and search costs from product search data. Empirically, the model is applied to the online market for camcorders and is used to answer manufacturer questions about market structure and competition, and to address policy maker issues about the e ect of selectively lowered search costs on consumer surplus outcomes. We nd that consumer search for camcorders at Amazon.com is typically limited to little over 10 choice options, and that this a ects the estimates of own and cross elasticities. In a policy simulation, we also nd that the vast majority of the households bene t from the Amazon.com's product recommendations via lower search costs.",
"title": ""
}
] |
scidocsrr
|
6e80cfea0c00378b27064947f922debd
|
Small Sample Learning in Big Data Era
|
[
{
"docid": "a70d1e15dfb814ded7667d9758b54069",
"text": "The aim of this paper1 is to give an overview of domain adaptation and transfer learning with a specific view on visual applications. After a general motivation, we first position domain adaptation in the larger transfer learning problem. Second, we try to address and analyze briefly the state-of-the-art methods for different types of scenarios, first describing the historical shallow methods, addressing both the homogeneous and the heterogeneous domain adaptation methods. Third, we discuss the effect of the success of deep convolutional architectures which led to new type of domain adaptation methods that integrate the adaptation within the deep architecture. Fourth, we overview the methods that go beyond image categorization, such as object detection or image segmentation, video analyses or learning visual attributes. Finally, we conclude the paper with a section where we relate domain adaptation to other machine learning solutions.",
"title": ""
},
{
"docid": "4c7e66c0447f7eb527396c369dfdeb19",
"text": "What are the functions of curiosity? What are the mechanisms of curiosity-driven learning? We approach these questions about the living using concepts and tools from machine learning and developmental robotics. We argue that curiosity-driven learning enables organisms to make discoveries to solve complex problems with rare or deceptive rewards. By fostering exploration and discovery of a diversity of behavioural skills, and ignoring these rewards, curiosity can be efficient to bootstrap learning when there is no information, or deceptive information, about local improvement towards these problems. We also explain the key role of curiosity for efficient learning of world models. We review both normative and heuristic computational frameworks used to understand the mechanisms of curiosity in humans, conceptualizing the child as a sense-making organism. These frameworks enable us to discuss the bi-directional causal links between curiosity and learning, and to provide new hypotheses about the fundamental role of curiosity in self-organizing developmental structures through curriculum learning. We present various developmental robotics experiments that study these mechanisms in action, both supporting these hypotheses to understand better curiosity in humans and opening new research avenues in machine learning and artificial intelligence. Finally, we discuss challenges for the design of experimental paradigms for studying curiosity in psychology and cognitive neuroscience.",
"title": ""
},
{
"docid": "259dd03ebaa3293ad947e91bf4531361",
"text": "Zero-shot learning (ZSL) aims to recognize objects of unseen classes with available training data from another set of seen classes. Existing solutions are focused on exploring knowledge transfer via an intermediate semantic embedding (e.g., attributes) shared between seen and unseen classes. In this paper, we propose a novel projection framework based on matrix tri-factorization with manifold regularizations. Specifically, we learn the semantic embedding projection by decomposing the visual feature matrix under the guidance of semantic embedding and class label matrices. By additionally introducing manifold regularizations on visual data and semantic embeddings, the learned projection can effectively captures the geometrical manifold structure residing in both visual and semantic spaces. To avoid the projection domain shift problem, we devise an effective prediction scheme by exploiting the test-time manifold structure. Extensive experiments on four benchmark datasets show that our approach significantly outperforms the state-of-the-arts, yielding an average improvement ratio by 7.4% and 31.9% for the recognition and retrieval task, respectively.",
"title": ""
},
{
"docid": "7709fa95a26a1d8a45250cf850c92755",
"text": "Metric learning aims to learn a distance function to measure the similarity of samples, which plays an important role in many visual understanding applications. Generally, the optimal similarity functions for different visual understanding tasks are task specific because the distributions for data used in different tasks are usually different. It is generally believed that learning a metric from training data can obtain more encouraging performances than handcrafted metrics [1]-[3], e.g., the Euclidean and cosine distances. A variety of metric learning methods have been proposed in the literature [2]-[5], and many of them have been successfully employed in visual understanding tasks such as face recognition [6], [7], image classification [2], [3], visual search [8], [9], visual tracking [10], [11], person reidentification [12], cross-modal matching [13], image set classification [14], and image-based geolocalization [15]-[17].",
"title": ""
}
] |
[
{
"docid": "5384fb9496219b66a8deca4748bc711f",
"text": "We uses the backslide procedure to determine the Noteworthiness Score of the sentences in a paper and then uses the Integer Linear Programming Algorithms system to create well-organized slides by selecting and adjusting key expressions and sentences. Evaluated result based on a certain set of 200 arrangements of papers and slide assemble on the web displays in our proposed structure of PPSGen can create slides with better quality and quick. Paper talks about a technique for consequently getting outline slides from a content, contemplating the programmed era of presentation slides from a specialized paper and also examines the challenging task of continuous creating presentation slides from academic paper. The created slide can be used as a draft to help moderator setup their systematic slides in a quick manners. This paper introduces novel systems called PPSGen to help moderators create such slide. A customer study also exhibits that PPSGen has obvious advantage over baseline method and speed is fast for creations. . Keyword : Artificial Support Vector Regression (SVR), Integer Linear Programming (ILP), Abstract methods, texts mining, Classification etc....",
"title": ""
},
{
"docid": "7b2e02c62c06f244d24fb798a5998725",
"text": "This paper presents Integrated Information Theory (IIT) of consciousness 3.0, which incorporates several advances over previous formulations. IIT starts from phenomenological axioms: information says that each experience is specific--it is what it is by how it differs from alternative experiences; integration says that it is unified--irreducible to non-interdependent components; exclusion says that it has unique borders and a particular spatio-temporal grain. These axioms are formalized into postulates that prescribe how physical mechanisms, such as neurons or logic gates, must be configured to generate experience (phenomenology). The postulates are used to define intrinsic information as \"differences that make a difference\" within a system, and integrated information as information specified by a whole that cannot be reduced to that specified by its parts. By applying the postulates both at the level of individual mechanisms and at the level of systems of mechanisms, IIT arrives at an identity: an experience is a maximally irreducible conceptual structure (MICS, a constellation of concepts in qualia space), and the set of elements that generates it constitutes a complex. According to IIT, a MICS specifies the quality of an experience and integrated information ΦMax its quantity. From the theory follow several results, including: a system of mechanisms may condense into a major complex and non-overlapping minor complexes; the concepts that specify the quality of an experience are always about the complex itself and relate only indirectly to the external environment; anatomical connectivity influences complexes and associated MICS; a complex can generate a MICS even if its elements are inactive; simple systems can be minimally conscious; complicated systems can be unconscious; there can be true \"zombies\"--unconscious feed-forward systems that are functionally equivalent to conscious complexes.",
"title": ""
},
{
"docid": "c63d32013627d0bcea22e1ad62419e62",
"text": "According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine the development process of a major open source application, the Apache web server. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution interval for this OSS project. This analysis reveals a unique process, which performs well on important measures. We conclude that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high performance software processes.",
"title": ""
},
{
"docid": "995a5523c131e09f8a8f04a3cf304045",
"text": "Topic models are often applied in industrial settings to discover user profiles from activity logs where documents correspond to users and words to complex objects such as web sites and installed apps. Standard topic models ignore the content-based similarity structure between these objects largely because of the inability of the Dirichlet prior to capture such side information of word-word correlation. Several approaches were proposed to replace the Dirichlet prior with more expressive alternatives. However, this added expressivity comes with a heavy premium: inference becomes intractable and sparsity is lost which renders these alternatives not suitable for industrial scale applications. In this paper we take a radically different approach to incorporating word-word correlation in topic models by applying this side information at the posterior level rather than at the prior level. We show that this choice preserves sparsity and results in a graph-based sampler for LDA whose computational complexity is asymptotically on bar with the state of the art Alias base sampler for LDA \\cite{aliasLDA}. We illustrate the efficacy of our approach over real industrial datasets that span up to billion of users, tens of millions of words and thousands of topics. To the best of our knowledge, our approach provides the first practical and scalable solution to this important problem.",
"title": ""
},
{
"docid": "b206560e0c9f3e59c8b9a8bec6f12462",
"text": "A symmetrical microstrip directional coupler design using the synthesis technique without prior knowledge of the physical geometry of the directional coupler is analytically given. The introduced design method requires only the information of the port impedances, the coupling level, and the operational frequency. The analytical results are first validated by using a planar electromagnetic simulation tool and then experimentally verified. The error between the experimental and analytical results is found to be within 3% for the worst case. The design charts that give all the physical dimensions, including the length of the directional coupler versus frequency and different coupling levels, are given for alumina, Teflon, RO4003, FR4, and RF-60, which are widely used in microwave applications. The complete design of symmetrical two-line microstrip directional couplers can be obtained for the first time using our results in this paper.",
"title": ""
},
{
"docid": "970626f1586f8053ea2d7d9a3a0c723d",
"text": "The aim of this paper is to provide an efficient frequency-domain method for bifurcation analysis of nonlinear dynamical systems. The proposed method consists in directly tracking the bifurcation points when a system parameter such as the excitation or nonlinearity level is varied. To this end, a so-called extended system comprising the equation of motion and an additional equation characterizing the bifurcation of interest is solved by means of the Harmonic Balance Method coupled with an arc-length continuation technique. In particular, an original extended system for the detection and tracking of Neimark-Sacker (secondary Hopf) bifurcations is introduced. By applying the methodology to a nonlinear energy sink and to a rotor-stator rubbing system, it is shown that the bifurcation tracking can be used to efficiently compute the boundaries of stability and/or dynamical regimes, i.e., safe operating zones.",
"title": ""
},
{
"docid": "ebb70af20b550c911a63757b754c6619",
"text": "This paper presents a vehicle price prediction system by using the supervised machine learning technique. The research uses multiple linear regression as the machine learning prediction method which offered 98% prediction precision. Using multiple linear regression, there are multiple independent variables but one and only one dependent variable whose actual and predicted values are compared to find precision of results. This paper proposes a system where price is dependent variable which is predicted, and this price is derived from factors like vehicle’s model, make, city, version, color, mileage, alloy rims and power steering.",
"title": ""
},
{
"docid": "666919bbe7e63d99e3314657aecb3e02",
"text": " Real-time Upper-body Human Pose Estimation using a Depth Camera Himanshu Prakash Jain, Anbumani Subramanian HP Laboratories HPL-2010-190 Haar cascade based detection, template matching, weighted distance transform and pose estimation Automatic detection and pose estimation of humans is an important task in HumanComputer Interaction (HCI), user interaction and event analysis. This paper presents a model based approach for detecting and estimating human pose by fusing depth and RGB color data from monocular view. The proposed system uses Haar cascade based detection and template matching to perform tracking of the most reliably detectable parts namely, head and torso. A stick figure model is used to represent the detected body parts. Then, the fitting is performed independently for each limb, using the weighted distance transform map. The fact that each limb is fitted independently speeds-up the fitting process and makes it robust, avoiding the combinatorial complexity problems that are common with these types of methods. The output is a stick figure model consistent with the pose of the person in the given input image. The algorithm works in real-time and is fully automatic and can detect multiple non-intersecting people. External Posting Date: November 21, 2010 [Fulltext] Approved for External Publication Internal Posting Date: November 21, 2010 [Fulltext] Copyright 2010 Hewlett-Packard Development Company, L.P.",
"title": ""
},
{
"docid": "49dcfa6459c83b20f731c61f3a1ed7cf",
"text": "The number of unmanned vehicles and devices deployed underwater is increasing. New communication systems and networking protocols are required to handle this growth. Underwater free-space optical communication is poised to augment acoustic communication underwater, especially for short-range, mobile, multi-user environments in future underwater systems. Existing systems are typically point-to-point links with strict pointing and tracking requirements. In this paper we demonstrate compact smart transmitters and receivers for underwater free-space optical communications. The receivers have segmented wide field of view and are capable of estimating angle of arrival of signals. The transmitters are highly directional with individually addressable LEDs for electronic switched beamsteering, and are capable of estimating water quality from its backscattered light collected by its co-located receiver. Together they form enabling technologies for non-traditional networking schemes in swarms of unmanned vehicles underwater.",
"title": ""
},
{
"docid": "6c76fcf20405c6826060821ac7c662e8",
"text": "A perception system for pedestrian detection in urban scenarios using information from a LIDAR and a single camera is presented. Two sensor fusion architectures are described, a centralized and a decentralized one. In the former, the fusion process occurs at the feature level, i.e., features from LIDAR and vision spaces are combined in a single vector for posterior classification using a single classifier. In the latter, two classifiers are employed, one per sensor-feature space, which were offline selected based on information theory and fused by a trainable fusion method applied over the likelihoods provided by the component classifiers. The proposed schemes for sensor combination, and more specifically the trainable fusion method, lead to enhanced detection performance and, in addition, maintenance of false-alarms under tolerable values in comparison with singlebased classifiers. Experimental results highlight the performance and effectiveness of the proposed pedestrian detection system and the related sensor data combination strategies.",
"title": ""
},
{
"docid": "b7b9826b831131401d21b735249d8314",
"text": "The existing approaches for trajectory prediction (TP) are primarily concerned with discovering frequent trajectory patterns (FTPs) from historical movement data. Moreover, most of these approaches work by using a linear TP model to depict the positions of objects, which does not lend itself to the complexities of most real-world applications. In this research, we propose a three-in-one TP model in road-constrained transportation networks called TraPlan. TraPlan contains three essential techniques: 1) constrained network R-tree (CNR-tree), which is a two-tiered dynamic index structure of moving objects based on transportation networks; 2) a region-of-interest (RoI) discovery algorithm is employed to partition a large number of trajectory points into distinct clusters; and 3) a FTP-tree-based TP approach, called FTP-mining, is proposed to discover FTPs to infer future locations of objects moving within RoIs. In order to evaluate the results of the proposed CNR-tree index structure, we conducted experiments on synthetically generated data sets taken from real-world transportation networks. The results show that the CNR-tree can reduce the time cost of index maintenance by an average gap of about 40% when compared with the traditional NDTR-tree, as well as reduce the time cost of trajectory queries. Moreover, compared with fixed network R-Tree (FNR-trees), the accuracy of range queries has shown an on average improvement of about 32%. Furthermore, the experimental results show that the TraPlan demonstrates accurate and efficient prediction of possible motion curves of objects in distinct trajectory data sets by over 80% on average. Finally, we evaluate these results and the performance of the TraPlan model in regard to TP by comparing it with other TP algorithms.",
"title": ""
},
{
"docid": "c2816721fa6ccb0d676f7fdce3b880d4",
"text": "Due to the achievements in the Internet of Things (IoT) field, Smart Objects are often involved in business processes. However, the integration of IoT with Business Process Management (BPM) is far from mature: problems related to process compliance and Smart Objects configuration with respect to the process requirements have not been fully addressed yet; also, the interaction of Smart Objects with multiple business processes that belong to different stakeholders is still under investigation. My PhD thesis aims to fill this gap by extending the BPM lifecycle, with particular focus on the design and analysis phase, in order to explicitly support IoT and its requirements.",
"title": ""
},
{
"docid": "2ca724c035515a7e5a4369fae856f8a1",
"text": "This paper presents a model-based planner called the Probabilistic Sulu Planner or the p-Sulu Planner, which controls stochastic systems in a goal directed manner within user-specified risk bounds. The objective of the p-Sulu Planner is to allow users to command continuous, stochastic systems, such as unmanned aerial and space vehicles, in a manner that is both intuitive and safe. To this end, we first develop a new plan representation called a chance-constrained qualitative state plan (CCQSP), through which users can specify the desired evolution of the plant state as well as the acceptable level of risk. An example of a CCQSP statement is “go to A through B within 30 minutes, with less than 0.001% probability of failure.” We then develop the p-Sulu Planner, which can tractably solve a CCQSP planning problem. In order to enable CCQSP planning, we develop the following two capabilities in this paper: 1) risk-sensitive planning with risk bounds, and 2) goal-directed planning in a continuous domain with temporal constraints. The first capability is to ensures that the probability of failure is bounded. The second capability is essential for the planner to solve problems with a continuous state space such as vehicle path planning. We demonstrate the capabilities of the p-Sulu Planner by simulations on two real-world scenarios: the path planning and scheduling of a personal aerial vehicle as well as the space rendezvous of an autonomous cargo spacecraft.",
"title": ""
},
{
"docid": "1cba225a1f9de1576a5fdfb16c101bff",
"text": "Electromagnetic trackers have many favorable characteristics but are notorious for their sensitivity to magnetic field distortions resulting from metal and electronic equipment in the environment. We categorize existing tracker calibration methods and present an improved technique for reducing the static position and orientation errors that are inherent to these devices. A quaternion-based formulation provides a simple and fast computational framework for representing orientation errors. Our experimental apparatus consists of a 6-DOF mobile platform and an optical position measurement system, allowing the collection of full-pose data at nearly arbitrary orientations of the receiver. A polynomial correction technique is applied and evaluated using a Polhemus Fastrak resulting in a substantial improvement of tracking accuracy. Finally, we apply advanced visualization algorithms to give new insight into the nature of the magnetic distortion field.",
"title": ""
},
{
"docid": "8bd619e8d1816dd5c692317a8fb8e0ed",
"text": "The data mining field in computer science specializes in extracting implicit information that is distributed across the stored data records and/or exists as associations among groups of records. Criminal databases contain information on the crimes themselves, the offenders, the victims as well as the vehicles that were involved in the crime. Among these records lie groups of crimes that can be attributed to serial criminals who are responsible for multiple criminal offenses and usually exhibit patterns in their operations, by specializing in a particular crime category (i.e., rape, murder, robbery, etc.), and applying a specific method for implementing their crimes. Discovering serial criminal patterns in crime databases is, in general, a clustering activity in the area of data mining that is concerned with detecting trends in the data by classifying and grouping similar records. In this paper, we report on the different statistical and neural network approaches to the clustering problem in data mining in general, and as it applies to our crime domain in particular. We discuss our approach of using a cascaded network of Kohonen neural networks followed by heuristic processing of the networks outputs that best simulated the experts in the field. We address the issues in this project and the reasoning behind this approach, including: the choice of neural networks, in general, over statistical algorithms as the main tool, and the use of Kohonen networks in particular, the choice for the cascaded approach instead of the direct approach, and the choice of a heuristics subsystem as a back-end subsystem to the neural networks. We also report on the advantages of this approach over both the traditional approach of using a single neural network to accommodate all the attributes, and that of applying a single clustering algorithm on all the data attributes.",
"title": ""
},
{
"docid": "3dfd3093b6abb798474dec6fb9cfca36",
"text": "This paper proposes a new image representation for texture categorization, which is based on extension of local binary patterns (LBP). As we know LBP can achieve effective description ability with appearance invariance and adaptability of patch matching based methods. However, LBP only thresholds the differential values between neighborhood pixels and the focused one to 0 or 1, which is very sensitive to noise existing in the processed image. This study extends LBP to local ternary patterns (LTP), which considers the differential values between neighborhood pixels and the focused one as negative or positive stimulus if the absolute differential value is large; otherwise no stimulus (set as 0). With the ternary values of all neighbored pixels, we can achieve a pattern index for each local patch, and then extract the pattern histogram for image representation. Experiments on two texture datasets: Brodats32 and KTH TIPS2-a validate that the robust LTP can achieve much better performances than the conventional LBP and the state-of-the-art methods.",
"title": ""
},
{
"docid": "d2b30e1c74a7be4d5fec404797f2d3eb",
"text": "User intent detection plays a critical role in question-answering and dialog systems. Most previous works treat intent detection as a classification problem where utterances are labeled with predefined intents. However, it is labor-intensive and time-consuming to label users’ utterances as intents are diversely expressed and novel intents will continually be involved. Instead, we study the zero-shot intent detection problem, which aims to detect emerging user intents where no labeled utterances are currently available. We propose two capsule-based architectures: INTENTCAPSNET that extracts semantic features from utterances and aggregates them to discriminate existing intents, and INTENTCAPSNET-ZSL which gives INTENTCAPSNET the zero-shot learning ability to discriminate emerging intents via knowledge transfer from existing intents. Experiments on two real-world datasets show that our model not only can better discriminate diversely expressed existing intents, but is also able to discriminate emerging intents when no labeled utterances are available.",
"title": ""
},
{
"docid": "99a8926f31f4e357608b10040c2415ee",
"text": "Adolescence is a time of tremendous change in physical appearance. Many adolescents report dissatisfaction with their body shape and size. Forming one's body image is a complex process, influenced by family, peers, and media messages. Increasing evidence shows that the combination of ubiquitous ads for foods and emphasis on female beauty and thinness in both advertising and programming leads to confusion and dissatisfaction for many young people. Sociocultural factors, specifically media exposure, play an important role in the development of disordered body image. Of significant concern, studies have revealed a link between media exposure and the likelihood of having symptoms of disordered eating or a frank eating disorder. Pediatricians and other adults must work to promote media education and make media healthier for young people. More research is needed to identify the most vulnerable children and adolescents.",
"title": ""
},
{
"docid": "80d9439987b7eac8cf021be7dc533ec9",
"text": "While previous studies have investigated the determinants and consequences of online trust, online distrust has seldom been studied. Assuming that the positive antecedents of online trust are necessarily negative antecedents of online distrust or that positive consequences of online trust are necessarily negatively affected by online distrust is inappropriate. This study examines the different antecedents of online trust and distrust in relation to consumer and website characteristics. Moreover, this study further examines whether online trust and distrust asymmetrically affect behaviors with different risk levels. A model is developed and tested using a survey of 1,153 online consumers. LISREL was employed to test the proposed model. Overall, different consumer and website characteristics influence online trust and distrust, and online trust engenders different behavioral outcomes to online distrust. The authors also discuss the theoretical and managerial implications of the study findings.",
"title": ""
},
{
"docid": "068935eccad836eefae34908e15467b7",
"text": "We study the problem of k-means clustering in the presence of outliers. The goal is to cluster a set of data points to minimize the variance of the points assigned to the same cluster, with the freedom of ignoring a small set of data points that can be labeled as outliers. Clustering with outliers has received a lot of attention in the data processing community, but practical, efficient, and provably good algorithms remain unknown for the most popular k-means objective. Our work proposes a simple local search-based algorithm for k-means clustering with outliers. We prove that this algorithm achieves constant-factor approximate solutions and can be combined with known sketching techniques to scale to large data sets. Using empirical evaluation on both synthetic and large-scale real-world data, we demonstrate that the algorithm dominates recently proposed heuristic approaches for the problem.",
"title": ""
}
] |
scidocsrr
|
741a4c9cd025906c95e5be9a5638607f
|
Two improvements to detect duplicates in Stack Overflow
|
[
{
"docid": "c58e773f5505cf1591f1667749d528a7",
"text": "Stack Overflow is a popular question answering site that is focused on programming problems. Despite efforts to prevent asking questions that have already been answered, the site contains duplicate questions. This may cause developers to unnecessarily wait for a question to be answered when it has already been asked and answered. The site currently depends on its moderators and users with high reputation to manually mark those questions as duplicates, which not only results in delayed responses but also requires additional efforts. In this paper, we first perform a manual investigation to understand why users submit duplicate questions in Stack Overflow. Based on our manual investigation we propose a classification technique that uses a number of carefully chosen features to identify duplicate questions. Evaluation using a large number of questions shows that our technique can detect duplicate questions with reasonable accuracy. We also compare our technique with DupPredictor, a state-of-the-art technique for detecting duplicate questions, and we found that our proposed technique has a better recall-rate than that technique.",
"title": ""
},
{
"docid": "9e1636893734e56e8cd507778ca04669",
"text": "Stack Overflow is a popular on-line question and answer site for software developers to share their experience and expertise. Among the numerous questions posted in Stack Overflow, two or more of them may express the same point and thus are duplicates of one another. Duplicate questions make Stack Overflow site maintenance harder, waste resources that could have been used to answer other questions, and cause developers to unnecessarily wait for answers that are already available. To reduce the problem of duplicate questions, Stack Overflow allows questions to be manually marked as duplicates of others. Since there are thousands of questions submitted to Stack Overflow every day, manually identifying duplicate questions is a difficult work. Thus, there is a need for an automated approach that can help in detecting these duplicate questions. To address the above-mentioned need, in this paper, we propose an automated approach named DupPredictor that takes a new question as input and detects potential duplicates of this question by considering multiple factors. DupPredictor extracts the title and description of a question and also tags that are attached to the question. These pieces of information (title, description, and a few tags) are mandatory information that a user needs to input when posting a question. DupPredictor then computes the latent topics of each question by using a topic model. Next, for each pair of questions, it computes four similarity scores by comparing their titles, descriptions, latent topics, and tags. These four similarity scores are finally combined together to result in a new similarity score that comprehensively considers the multiple factors. To examine the benefit of DupPredictor, we perform an experiment on a Stack Overflow dataset which contains a total of more than two million questions. The result shows that DupPredictor can achieve a recall-rate@20 score of 63.8%. We compare our approach with the standard search engine of Stack Overflow, and DupPredictor improves its recall-rate@10 score by 40.63%. We also compare our approach with approaches that only use title, description, topic, and tag similarity and Runeson et al.’s approach that has been used to detect duplicate bug reports, and DupPredictor improves their recall-rate@10 scores by 27.2%, 97.4%, 746.0%, 231.1%, and 16.4% respectively.",
"title": ""
}
] |
[
{
"docid": "8868fe4e0907fc20cc6cbc2b01456707",
"text": "Tracking multiple objects is a challenging task when objects move in groups and occlude each other. Existing methods have investigated the problems of group division and group energy-minimization; however, lacking overall objectgroup topology modeling limits their ability in handling complex object and group dynamics. Inspired with the social affinity property of moving objects, we propose a Graphical Social Topology (GST) model, which estimates the group dynamics by jointly modeling the group structure and the states of objects using a topological representation. With such topology representation, moving objects are not only assigned to groups, but also dynamically connected with each other, which enables in-group individuals to be correctly associated and the cohesion of each group to be precisely modeled. Using well-designed topology learning modules and topology training, we infer the birth/death and merging/splitting of dynamic groups. With the GST model, the proposed multi-object tracker can naturally facilitate the occlusion problem by treating the occluded object and other in-group members as a whole while leveraging overall state transition. Experiments on both RGB and RGB-D datasets confirm that the proposed multi-object tracker improves the state-of-the-arts especially in crowded scenes.",
"title": ""
},
{
"docid": "bef6c1e237e52d9a40c78856126a9be8",
"text": "An approach to robotics called layered evolution and merging features from the subsumption architecture into evolutionary robotics is presented, and its advantages are discussed. This approach is used to construct a layered controller for a simulated robot that learns which light source to approach in an environment with obstacles. The evolvability and performance of layered evolution on this task is compared to (standard) monolithic evolution, incremental and modularised evolution. To corroborate the hypothesis that a layered controller performs at least as well as an integrated one, the evolved layers are merged back into a single network. On the grounds of the test results, it is argued that layered evolution provides a superior approach for many tasks, and it is suggested that this approach may be the key to scaling up evolutionary robotics.",
"title": ""
},
{
"docid": "5b79a02ccfbbcab32113abf1477bbb59",
"text": "Using a large sample of publicly traded US firms over 16 years, we investigate the impact of corporate socially responsible (CSR) strategies on security analysts’ recommendations. Socially responsible firms receive more favorable recommendations in recent years relative to earlier ones, documenting a changing perception of the value of such strategies by the analysts. Moreover, we find that firms with higher visibility receive more favorable recommendations for their CSR strategies and that analysts with more experience, broader CSR awareness or those with more resources at their disposal, are more likely to perceive the value of CSR strategies more favorably. Our results document how CSR strategies can affect value creation in public equity markets through analyst recommendations. 1 Assistant Professor of Strategic and International Management, London Business School, Regent’s Park, NW1 4SA, London, United Kingdom. Email: iioannou@london.edu, Ph: +44 20 7000 8748, Fx: +44 20 7000 7001. 2 Assistant Professor of Business Administration, Harvard Business School, Soldiers’ Field Road, Morgan Hall 381, 02163 Boston, MA, USA. Email:gserafeim@hbs.edu, Ph: +1 617 495 6548, Fx: +1 617 496 7387. We are grateful to Constantinos Markides, and seminar participants at the research brown bag (SIM area) of the London Business School, the academic conference on Social Responsibility at University of Washington Tacoma, the 2010 European Academy of Management Conference, and the 2010 Academy of Management Conference. Ioannou acknowledges financial support from the Research and Materials Development Fund (RAMD) at the London Business School. All remaining errors are our own.",
"title": ""
},
{
"docid": "e3d9d30900b899bcbf54cbd1b5479713",
"text": "A new test method has been implemented for testing the EMC performance of small components like small connectors and IC's, mainly used in mobile applications. The test method is based on the EMC-stripline method. Both emission and immunity can be tested up to 6GHz, based on good RF matching conditions and with high field strengths.",
"title": ""
},
{
"docid": "c48a33e4688d2997c0ac31efd178919a",
"text": "The digital information age has generated new outlets for content creators to publish so-called “fake news”, a new form of propaganda that is intentionally designed to mislead the reader. With the widespread effects of the fast dissemination of fake news, efforts have been made to automate the process of fake news detection. A promising solution that has come up recently is to use machine learning to detect patterns in the news sources and articles, specifically deep neural networks, which have been successful in natural language processing. However, deep networks come with lack of transparency in the decision-making process, i.e. the “black-box problem”, which obscures its reliability. In this paper, we open this “black-box” and we show that the emergent representations from deep neural networks capture subtle but consistent differences in the language of fake and real news: signatures of exaggeration and other forms of rhetoric. Unlike previous work, we test the transferability of the learning process to novel news topics. Our results demonstrate the generalization capabilities of deep learning to detect fake news in novel subjects only from language patterns.1",
"title": ""
},
{
"docid": "9b1643284b783f2947be11f16ae8d942",
"text": "We investigate the task of modeling opendomain, multi-turn, unstructured, multiparticipant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant’s history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.",
"title": ""
},
{
"docid": "87396c917dd760eddc2d16e27a71e81d",
"text": "We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism-neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.",
"title": ""
},
{
"docid": "573b563cfc7eb96552a906fb9263ea6d",
"text": "Supply chain is complex today. Multi-echelon, highly disjointed, and geographically spread are some of the cornerstones of today’s supply chain. All these together with different governmental policies and human behavior make it almost impossible to probe incidents and trace events in case of supply chain disruptions. In effect, an end-to-end supply chain, from the most basic raw material to the final product in a customer’s possession, is opaque. The inherent cost involved in managing supply chain intermediaries, their reliability, traceability, and transparency further complicate the supply chain. The solution to such complicated problems lies in improving supply chain transparency. This is now possible with the concept of blockchain. The usage of blockchain in a financial transaction is well known. This paper reviews blockchain technology, which is changing the face of supply chain and bringing in transparency and authenticity. This paper first discusses the history and evolution of blockchain from the bitcoin network, and goes on to explore the protocols. The author takes a deep dive into the design of blockchain, exploring its five pillars and three-layered architecture, which enables most of the blockchains today. With the architecture, the author focuses on the applications, use cases, road map, and challenges for blockchain in the supply chain domain as well as the synergy of blockchain with enterprise applications. It analyzes the integration of the enterprise resource planning (ERP) system of the supply chain domain with blockchain. It also explores the three distinct growth areas: ERP-blockchain supply chain use cases, the middleware for connecting the blockchain with ERP, and blockchain as a service (BaaS). The paper ends with a brief conclusion and a discussion.",
"title": ""
},
{
"docid": "991ba1b12ee067a33424a711cf9e1d4d",
"text": "The most popular and actively researched class of quad remeshing techniques is the family of parametrization based quad meshing methods. They all strive to generate an integer-grid map, i.e. a parametrization of the input surface into R2 such that the canonical grid of integer iso-lines forms a quad mesh when mapped back onto the surface in R3. An essential, albeit broadly neglected aspect of these methods is the quad extraction step, i.e. the materialization of an actual quad mesh from the mere \"quad texture\". Quad (mesh) extraction is often believed to be a trivial matter but quite the opposite is true: numerous special cases, ambiguities induced by numerical inaccuracies and limited solver precision, as well as imperfections in the maps produced by most methods (unless costly countermeasures are taken) pose significant challenges to the quad extractor. We present a method to sanitize a provided parametrization such that it becomes numerically consistent even in a limited precision floating point representation. Based on this we are able to provide a comprehensive and sound description of how to perform quad extraction robustly and without the need for any complex tolerance thresholds or disambiguation rules. On top of that we develop a novel strategy to cope with common local fold-overs in the parametrization. This allows our method, dubbed QEx, to generate all-quadrilateral meshes where otherwise holes, non-quad polygons or no output at all would have been produced. We thus enable the practical use of an entire class of maps that was previously considered defective. Since state of the art quad meshing methods spend a significant share of their run time solely to prevent local fold-overs, using our method it is now possible to obtain quad meshes significantly quicker than before. We also provide libQEx, an open source C++ reference implementation of our method and thus significantly lower the bar to enter the field of quad meshing.",
"title": ""
},
{
"docid": "d6a6cadd782762e4591447b7dd2c870a",
"text": "OBJECTIVE\nThe objective of this study was to assess the effects of participation in a mindfulness meditation-based stress reduction program on mood disturbance and symptoms of stress in cancer outpatients.\n\n\nMETHODS\nA randomized, wait-list controlled design was used. A convenience sample of eligible cancer patients enrolled after giving informed consent and were randomly assigned to either an immediate treatment condition or a wait-list control condition. Patients completed the Profile of Mood States and the Symptoms of Stress Inventory both before and after the intervention. The intervention consisted of a weekly meditation group lasting 1.5 hours for 7 weeks plus home meditation practice.\n\n\nRESULTS\nNinety patients (mean age, 51 years) completed the study. The group was heterogeneous in type and stage of cancer. Patients' mean preintervention scores on dependent measures were equivalent between groups. After the intervention, patients in the treatment group had significantly lower scores on Total Mood Disturbance and subscales of Depression, Anxiety, Anger, and Confusion and more Vigor than control subjects. The treatment group also had fewer overall Symptoms of Stress; fewer Cardiopulmonary and Gastrointestinal symptoms; less Emotional Irritability, Depression, and Cognitive Disorganization; and fewer Habitual Patterns of stress. Overall reduction in Total Mood Disturbance was 65%, with a 31% reduction in Symptoms of Stress.\n\n\nCONCLUSIONS\nThis program was effective in decreasing mood disturbance and stress symptoms in both male and female patients with a wide variety of cancer diagnoses, stages of illness, and ages. cancer, stress, mood, intervention, mindfulness.",
"title": ""
},
{
"docid": "1315247aa0384097f5f9e486bce09bd4",
"text": "We give an overview of the scripting languages used in existing cryptocurrencies, and in particular we review in some detail the scripting languages of Bitcoin, Nxt and Ethereum, in the context of a high-level overview of Distributed Ledger Technology and cryptocurrencies. We survey different approaches, and give an overview of critiques of existing languages. We also cover technologies that might be used to underpin extensions and innovations in scripting and contracts, including technologies for verification, such as zero knowledge proofs, proof-carrying code and static analysis, as well as approaches to making systems more efficient, e.g. Merkelized Abstract Syntax Trees.",
"title": ""
},
{
"docid": "e5b05292bee316cbc5cb6da35bd615a2",
"text": "Blockchain technology has emerged as a primary enabler for verification-driven transactions between parties that do not have complete trust among themselves. Bitcoin uses this technology to provide a provenance-driven verifiable ledger that is based on consensus. Nevertheless, the use of blockchain as a transaction service in non-cryptocurrency applications, for example, business networks, is at a very nascent stage. While the blockchain supports transactional provenance, the datamanagement community and other scientific and industrial communities are assessing how blockchain can be used to enable certain key capabilities for business applications. We have reviewed a number of proof of concepts and early adoptions of blockchain solutions that we have been involved spanning diverse use cases to draw common data life cycle, persistence as well as analytics patterns used in real-world applications with the ultimate aim to identify new frontier of exciting research in blockchain data management and analytics. In this paper, we discuss several open topics that researchers could increase focus on: (1) leverage existing capabilities of mature data and information systems, (2) enhance data security and privacy assurances, (3) enable analytics services on blockchain as well as across off-chain data, and (4) make blockchain-based systems active-oriented and intelligent.",
"title": ""
},
{
"docid": "51f9661061bf69f8d9303101c00558ec",
"text": "In this paper we introduce an architecture maturity model for the domain of enterprise architecture. The model differs from other existing models in that it departs from the standard 5-level approach. It distinguishes 18 factors, called key areas, which are relevant to developing an architectural practice. Each key area has its own maturity development path that is balanced against the maturity development paths of the other key areas. Two real-life case studies are presented to illustrate the use of the model. Usage of the model in these cases shows that the model delivers recognizable results, that the results can be traced back to the basic approach to architecture taken by the organizations investigated and that the key areas chosen bear relevance to the architectural practice of the organizations. 1 MATURITY IN ENTERPRISE",
"title": ""
},
{
"docid": "7159c79664f69f7ebe95a12babfee1f5",
"text": "In information visualization, interaction is commonly carried out by using traditional input devices, and visual feedback is usually given on desktop displays. By contrast, recent advances in interactive surface technology suggest combining interaction and display functionality in a single device for a more direct interaction. With our work, we contribute to the seamless integration of interaction and display devices and introduce new ways of visualizing and directly interacting with information. Rather than restricting the interaction to the display surface alone, we explicitly use the physical three-dimensional space above it for natural interaction with multiple displays. For this purpose, we introduce tangible views as spatially aware lightweight displays that can be interacted with by moving them through the physical space on or above a tabletop display's surface. Tracking the 3D movement of tangible views allows us to control various parameters of a visualization with more degrees of freedom. Tangible views also facilitate making multiple -- previously virtual -- views physically \"graspable\". In this paper, we introduce a number of interaction and visualization patterns for tangible views that constitute the vocabulary for performing a variety of common visualization tasks. Several implemented case studies demonstrate the usefulness of tangible views for widely used information visualization approaches and suggest the high potential of this novel approach to support interaction with complex visualizations.",
"title": ""
},
{
"docid": "b10ad91ce374a772790666da5a79616c",
"text": "Photophobia is a common yet debilitating symptom seen in many ophthalmic and neurologic disorders. Despite its prevalence, it is poorly understood and difficult to treat. However, the past few years have seen significant advances in our understanding of this symptom. We review the clinical characteristics and disorders associated with photophobia, discuss the anatomy and physiology of this phenomenon, and conclude with a practical approach to diagnosis and treatment.",
"title": ""
},
{
"docid": "ddd4ccf3d68d12036ebb9e5b89cb49b8",
"text": "This paper presents a modified FastSLAM approach for the specific application of radar sensors using the Doppler information to increase the localization and map accuracy. The developed approach is based on the FastSLAM 2.0 algorithm. It is shown how the FastSLAM 2.0 approach can be significantly improved by taking the Doppler information into account. Therefore, the modelled, so-called expected Doppler, and the measured Doppler are compared for every detection. Both, simulations and experiments on real world data show the increase in accuracy of the modified FastSLAM approach by incorporating the Doppler measurements of automotive radar sensors. The proposed algorithm is compared to the state-of-the-art FastSLAM 2.0 algorithm and the vehicle odometry, whereas profiles of an Automotive Dynamic Motion Analyzer serve as the reference.",
"title": ""
},
{
"docid": "261796369653e128821136f327056894",
"text": "Automatic note-level transcription is considered one of the most challenging tasks in music information retrieval. The specific case of flamenco singing transcription poses a particular challenge due to its complex melodic progressions, intonation inaccuracies, the use of a high degree of ornamentation, and the presence of guitar accompaniment. In this study, we explore the limitations of existing state of the art transcription systems for the case of flamenco singing and propose a specific solution for this genre: We first extract the predominant melody and apply a novel contour filtering process to eliminate segments of the pitch contour which originate from the guitar accompaniment. We formulate a set of onset detection functions based on volume and pitch characteristics to segment the resulting vocal pitch contour into discrete note events. A quantised pitch label is assigned to each note event by combining global pitch class probabilities with local pitch contour statistics. The proposed system outperforms state of the art singing transcription systems with respect to voicing accuracy, onset detection, and overall performance when evaluated on flamenco singing datasets.",
"title": ""
},
{
"docid": "d86325c91717683b5332bedd8e20639f",
"text": "English. This article describes a Twitter corpus of social media contents in the Subjective Well-Being domain. A multilayered manual annotation for exploring attitudes on fertility and parenthood has been applied. The corpus was further analysed by using sentiment and emotion lexicons in order to highlight relationships between the use of affective language and specific sub-topics in the domain. This analysis is useful to identify features for the development of an automatic tool for sentiment-related classification tasks in this domain. The gold standard is available to the community. Italiano. L’articolo descrive la creazione di un corpus tratto da Twitter sui temi del Subjective Well-Being, fertilità e genitorialità. Un’analisi lessicale ha mostrato il legame tra l’uso di linguaggio affettivo e specifiche categorie di messaggi. Questo esame è utile per se e per l’addestramento di sistemi di classificazione automatica sul dominio. Il gold standard è disponibile su",
"title": ""
},
{
"docid": "67f716403b420fcd14c057dcf3be97e3",
"text": "In this paper, the answer selection problem in community question answering (CQA) is regarded as an answer sequence labeling task, and a novel approach is proposed based on the recurrent architecture for this problem. Our approach applies convolution neural networks (CNNs) to learning the joint representation of questionanswer pair firstly, and then uses the joint representation as input of the long shortterm memory (LSTM) to learn the answer sequence of a question for labeling the matching quality of each answer. Experiments conducted on the SemEval 2015 CQA dataset shows the effectiveness of our approach.",
"title": ""
},
{
"docid": "106add4e66ec7f673450c226b86b9b76",
"text": "Three different algorithms for finding blood pressure through the oscillometric method were researched and assessed. It is shown that these algorithms are based on two different underlying approaches. The estimated values of systolic and diastolic blood pressure are compared against the nurse readings. The best two approaches turned out to be the linear approximation algorithm and the points of rapidly increasing/decreasing slope algorithm. Future work on combining these two algorithms using algorithm fusion is envisaged.",
"title": ""
}
] |
scidocsrr
|
02f341c7704ed1ebcbca1b81dc057d6e
|
Integrating Function, Geometry, Appearance for Scene Parsing
|
[
{
"docid": "a5989c562f4c14a67e9effadad92550f",
"text": "We address the problem of understanding an indoor scene from a single image in terms of recovering the room geometry (floor, ceiling, and walls) and furniture layout. A major challenge of this task arises from the fact that most indoor scenes are cluttered by furniture and decorations, whose appearances vary drastically across scenes, thus can hardly be modeled (or even hand-labeled) consistently. In this paper we tackle this problem by introducing latent variables to account for clutter, so that the observed image is jointly explained by the room and clutter layout. Model parameters are learned from a training set of images that are only labeled with the layout of the room geometry. Our approach enables taking into account and inferring indoor clutter without hand-labeling of the clutter in the training set, which is often inaccurate. Yet it outperforms the state-of-the-art method of Hedau et al. that requires clutter labels. As a latent variable based method, our approach has an interesting feature that latent variables are used in direct correspondence with a concrete visual concept (clutter in the room) and thus interpretable.",
"title": ""
},
{
"docid": "2a56585a288405b9adc7d0844980b8bf",
"text": "In this paper we propose the first exact solution to the problem of estimating the 3D room layout from a single image. This problem is typically formulated as inference in a Markov random field, where potentials count image features (e.g ., geometric context, orientation maps, lines in accordance with vanishing points) in each face of the layout. We present a novel branch and bound approach which splits the label space in terms of candidate sets of 3D layouts, and efficiently bounds the potentials in these sets by restricting the contribution of each individual face. We employ integral geometry in order to evaluate these bounds in constant time, and as a consequence, we not only obtain the exact solution, but also in less time than approximate inference tools such as message-passing. We demonstrate the effectiveness of our approach in two benchmarks and show that our bounds are tight, and only a few evaluations are necessary.",
"title": ""
}
] |
[
{
"docid": "49c19e5417aa6a01c59f666ba7cc3522",
"text": "The effect of various drugs on the extracellular concentration of dopamine in two terminal dopaminergic areas, the nucleus accumbens septi (a limbic area) and the dorsal caudate nucleus (a subcortical motor area), was studied in freely moving rats by using brain dialysis. Drugs abused by humans (e.g., opiates, ethanol, nicotine, amphetamine, and cocaine) increased extracellular dopamine concentrations in both areas, but especially in the accumbens, and elicited hypermotility at low doses. On the other hand, drugs with aversive properties (e.g., agonists of kappa opioid receptors, U-50,488, tifluadom, and bremazocine) reduced dopamine release in the accumbens and in the caudate and elicited hypomotility. Haloperidol, a neuroleptic drug, increased extracellular dopamine concentrations, but this effect was not preferential for the accumbens and was associated with hypomotility and sedation. Drugs not abused by humans [e.g., imipramine (an antidepressant), atropine (an antimuscarinic drug), and diphenhydramine (an antihistamine)] failed to modify synaptic dopamine concentrations. These results provide biochemical evidence for the hypothesis that stimulation of dopamine transmission in the limbic system might be a fundamental property of drugs that are abused.",
"title": ""
},
{
"docid": "3ba9e91a4d2ff8cb1fe479f5dddc86c1",
"text": "Researchers have shown that program analyses that drive software development and maintenance tools supporting search, traceability and other tasks can benefit from leveraging the natural language information found in identifiers and comments. Accurate natural language information depends on correctly splitting the identifiers into their component words and abbreviations. While conventions such as camel-casing can ease this task, conventions are not well-defined in certain situations and may be modified to improve readability, thus making automatic splitting more challenging. This paper describes an empirical study of state-of-the-art identifier splitting techniques and the construction of a publicly available oracle to evaluate identifier splitting algorithms. In addition to comparing current approaches, the results help to guide future development and evaluation of improved identifier splitting approaches.",
"title": ""
},
{
"docid": "d614eb429aa62e7d568acbba8ac7fe68",
"text": "Four women, who previously had undergone multiple unsuccessful in vitro fertilisation (IVF) cycles because of failure of implantation of good quality embryos, were identified as having coexisting uterine adenomyosis. Endometrial biopsies showed that adenomyosis was associated with a prominent aggregation of macrophages within the superficial endometrial glands, potentially interfering with embryo implantation. The inactivation of adenomyosis by an ultra-long pituitary downregulation regime promptly resulted in successful pregnancy for all women in this case series.",
"title": ""
},
{
"docid": "dcfb5ebabf07e87843668338d8d9927a",
"text": "Click Fraud Bots pose a significant threat to the online economy. To-date efforts to filter bots have been geared towards identifiable useragent strings, as epitomized by the IAB's Robots and Spiders list. However bots designed to perpetrate malicious activity or fraud, are designed to avoid detection with these kinds of lists, and many use very sophisticated schemes for cloaking their activities. In order to combat this emerging threat, we propose the creation of Bot Signatures for training and evaluation of candidate Click Fraud Detection Systems. Bot signatures comprise keyed records connected to case examples. We demonstrate the technique by developing 8 simulated examples of Bots described in the literature including Click Bot A.",
"title": ""
},
{
"docid": "ce3e480e50ffc7a79c3dbc71b07ec9f7",
"text": "A relatively recent advance in cognitive neuroscience has been multi-voxel pattern analysis (MVPA), which enables researchers to decode brain states and/or the type of information represented in the brain during a cognitive operation. MVPA methods utilize machine learning algorithms to distinguish among types of information or cognitive states represented in the brain, based on distributed patterns of neural activity. In the current investigation, we propose a new approach for representation of neural data for pattern analysis, namely a Mesh Learning Model. In this approach, at each time instant, a star mesh is formed around each voxel, such that the voxel corresponding to the center node is surrounded by its p-nearest neighbors. The arc weights of each mesh are estimated from the voxel intensity values by least squares method. The estimated arc weights of all the meshes, called Mesh Arc Descriptors (MADs), are then used to train a classifier, such as Neural Networks, k-Nearest Neighbor, Naïve Bayes and Support Vector Machines. The proposed Mesh Model was tested on neuroimaging data acquired via functional magnetic resonance imaging (fMRI) during a recognition memory experiment using categorized word lists, employing a previously established experimental paradigm (Öztekin & Badre, 2011). Results suggest that the proposed Mesh Learning approach can provide an effective algorithm for pattern analysis of brain activity during cognitive processing.",
"title": ""
},
{
"docid": "4e95abd0786147e5e9f4195f2c7a8ff7",
"text": "The contour tree is an abstraction of a scalar field that encodes the nesting relationships of isosurfaces. We show how to use the contour tree to represent individual contours of a scalar field, how to simplify both the contour tree and the topology of the scalar field, how to compute and store geometric properties for all possible contours in the contour tree, and how to use the simplified contour tree as an interface for exploratory visualization.",
"title": ""
},
{
"docid": "fc63dbad7a3c6769ee1a1df19da6e235",
"text": "For global companies that compete in high-velocity industries, business strategies and initiatives change rapidly, and thus the CIO struggles to keep the IT organization aligned with a moving target. In this paper we report on research-in-progress that focuses on how the CIO attempts to meet this challenge. Specifically, we are conducting case studies to closely examine how toy industry CIOs develop their IT organizations’ assets, competencies, and dynamic capabilities in alignment with their companies’ evolving strategy and business priorities (which constitute the “moving target”). We have chosen to study toy industry CIOs, because their companies compete in a global, high-velocity environment, yet this industry has been largely overlooked by the information systems research community. Early findings reveal that four IT application areas are seen as holding strong promise: supply chain management, knowledge management, data mining, and eCommerce, and that toy CIO’s are attempting to both cope with and capitalize on the current financial crisis by more aggressively pursuing offshore outsourcing than heretofore. We conclude with a discussion of next steps as the study proceeds.",
"title": ""
},
{
"docid": "ef39209e61597136d5a954c70fcecbfe",
"text": "We introduce the Android Security Framework (ASF), a generic, extensible security framework for Android that enables the development and integration of a wide spectrum of security models in form of code-based security modules. The design of ASF reflects lessons learned from the literature on established security frameworks (such as Linux Security Modules or the BSD MAC Framework) and intertwines them with the particular requirements and challenges from the design of Android's software stack. ASF provides a novel security API that supports authors of Android security extensions in developing their modules. This overcomes the current unsatisfactory situation to provide security solutions as separate patches to the Android software stack or to embed them into Android's mainline codebase. This system security extensibility is of particular benefit for enterprise or government solutions that require deployment of advanced security models, not supported by vanilla Android. We present a prototypical implementation of ASF and demonstrate its effectiveness and efficiency by modularizing different security models from related work, such as dynamic permissions, inlined reference monitoring, and type enforcement.",
"title": ""
},
{
"docid": "de4d63a3ac8767c715f09aa5a61ecc07",
"text": "In this paper we challenge current definitions of mobile learning and suggest that the direction of progress, both in theoretical/applied research as well as its role as a tool that serves social transformation and development, will be determined and even dictated by the availability of an adequate definition. A new framework for the definition of mobile learning is proposed, one that considers a repertoire of domains, and which embraces not only technical, methodological and educational aspects, but also considers social and philosophical dimensions.",
"title": ""
},
{
"docid": "2d94bc7459304885c60c7bf29341fa5d",
"text": "Bayesian optimization schemes often rely on Gaussian processes (GP). GP models are very flexible, but are known to scale poorly with the number of training points. While several efficient sparse GP models are known, they have limitations when applied in optimization settings. We propose a novel Bayesian optimization framework that uses sparse online Gaussian processes. We introduce a new updating scheme for the online GP that accounts for our preference during optimization for regions with better performance. We apply this method to optimize the performance of a free-electron laser, and demonstrate empirically that the weighted updating scheme leads to substantial improvements to performance in optimization.",
"title": ""
},
{
"docid": "4161b52b832c0b80d0815b9e80a5dda0",
"text": "Machine Comprehension (MC) is a challenging task in Natural Language Processing field, which aims to guide the machine to comprehend a passage and answer the given question. Many existing approaches on MC task are suffering the inefficiency in some bottlenecks, such as insufficient lexical understanding, complex question-passage interaction, incorrect answer extraction and so on. In this paper, we address these problems from the viewpoint of how humans deal with reading tests in a scientific way. Specifically, we first propose a novel lexical gating mechanism to dynamically combine the words and characters representations. We then guide the machines to read in an interactive way with attention mechanism and memory network. Finally we add a checking layer to refine the answer for insurance. The extensive experiments on two popular datasets SQuAD and TriviaQA show that our method exceeds considerable performance than most stateof-the-art solutions at the time of submission.",
"title": ""
},
{
"docid": "64fbffe75209359b540617fac4930c44",
"text": "Recent developments in information technology have enabled collection and processing of vast amounts of personal data, such as criminal records, shopping habits, credit and medical history, and driving records. This information is undoubtedly very useful in many areas, including medical research, law enforcement and national security. However, there is an increasing public concern about the individuals' privacy. Privacy is commonly seen as the right of individuals to control information about themselves. The appearance of technology for Knowledge Discovery and Data Mining (KDDM) has revitalized concern about the following general privacy issues: • secondary use of the personal information, • handling misinformation, and • granulated access to personal information. They demonstrate that existing privacy laws and policies are well behind the developments in technology, and no longer offer adequate protection. We also discuss new privacy threats posed KDDM, which includes massive data collection, data warehouses, statistical analysis and deductive learning techniques. KDDM uses vast amounts of data to generate hypotheses and discover general patterns. KDDM poses the following new challenges to privacy.",
"title": ""
},
{
"docid": "be29c412c17f9a87829cfe86fd3b1040",
"text": "Nowadays there is a continuously increasing worldwide concern for the development of wastewater treatment technologies. The utilization of iron oxide nanomaterials has received much attention due to their unique properties, such as extremely small size, high surface-area-to-volume ratio, surface modifiability, excellent magnetic properties and great biocompatibility. A range of environmental clean-up technologies have been proposed in wastewater treatment which applied iron oxide nanomaterials as nanosorbents and photocatalysts. Moreover, iron oxide based immobilization technology for enhanced removal efficiency tends to be an innovative research point. This review outlined the latest applications of iron oxide nanomaterials in wastewater treatment, and gaps which limited their large-scale field applications. The outlook for potential applications and further challenges, as well as the likely fate of nanomaterials discharged to the environment were discussed.",
"title": ""
},
{
"docid": "84baffce37a20423b88a52f086431425",
"text": "When designing tabletop digital games, designers often draw inspiration from board games because of their similarities (e.g., spatial structure, social setting, and physical interaction). As part of our tabletop handheld augmented reality (THAR) games research, in which computer graphics content is rendered and registered on top of the players’ view of the physical world, we are motivated to understand how social play unfolds in board games with the purpose of informing design decisions for THAR games. In this paper we report an empirical study of recorded video from a series of board game play sessions. We present five categories of social interactions based on how each interaction is initiated, among which we believe that the category of “chores” (interactions arising from the bookkeeping activities required to maintain and update game state) provides opportunities and support for four other kinds of social interaction, namely, “Reflection on Gameplay” (reacting to and reflecting on gameplay after a move); “Strategies” (deciding how to play before a move); “Out-of-game” (reacting to and talking about out-of-game subjects); and “Game itself” (commenting on and reacting to the game as an artifact of interest). We note that “chores” in board games (e.g. waiting for a turn, rule learning and enforcement, maneuvering physical objects), which at first appear to be merely functional, are critical for supporting players’ engagement with each other. Although most of these chores can be automated using technology, we argue that this is often not the best choice when designing social interactions with digital media. Based on our experience with THAR games, we discuss several design choices related to “chores”. To understand the connection between game design elements and social experience, we apply Interaction Ritual (IR) theory from micro-sociology to interpret our data.",
"title": ""
},
{
"docid": "df5df8eb9b7bdd4dbbcaa4469486fec6",
"text": "The human population generates vast quantities of waste material. Macro (>1 mm) and microscopic (<1 mm) fragments of plastic debris represent a substantial contamination problem. Here, we test hypotheses about the influence of wind and depositional regime on spatial patterns of micro- and macro-plastic debris within the Tamar Estuary, UK. Debris was identified to the type of polymer using Fourier-transform infrared spectroscopy (FT-IR) and categorized according to density. In terms of abundance, microplastic accounted for 65% of debris recorded and mainly comprised polyvinylchloride, polyester, and polyamide. Generally, there were greater quantities of plastic at downwind sites. For macroplastic, there were clear patterns of distribution for less dense items, while for microplastic debris, clear patterns were for denser material. Small particles of sediment and plastic are both likely to settle slowly from the water-column and are likely to be transported by the flow of water and be deposited in areas where the movements of water are slower. There was, however, no relationship between the abundance of microplastic and the proportion of clay in sediments from the strandline. These results illustrate how FT-IR spectroscopy can be used to identify the different types of plastic and in this case was used to indicate spatial patterns, demonstrating habitats that are downwind acting as potential sinks for the accumulation of debris.",
"title": ""
},
{
"docid": "f9bd24894ed3eace01f51966c61f2a5d",
"text": "Ethanolic extract from the fruits of Pimpinella anisoides, an aromatic plant and a spice, exhibited activity against AChE and BChE, with IC(50) values of 227.5 and 362.1 microg/ml, respectively. The most abundant constituents of the extract were trans-anethole, (+)-limonene and (+)-sabinene. trans-Anethole exhibited the highest activity against AChE and BChE with IC(50) values of 134.7 and 209.6 microg/ml, respectively. The bicyclic monoterpene (+)-sabinene exhibited a promising activity against AChE (IC(50) of 176.5 microg/ml) and BChE (IC(50) of 218.6 microg/ml).",
"title": ""
},
{
"docid": "dd38dfd7214b4baafa8ecdf72dc8ca6f",
"text": "Bottom-Up (BU) saliency models do not perform well in complex interactive environments where humans are actively engaged in tasks (e.g., sandwich making and playing the video games). In this paper, we leverage Reinforcement Learning (RL) to highlight task-relevant locations of input frames. We propose a soft attention mechanism combined with the Deep Q-Network (DQN) model to teach an RL agent how to play a game and where to look by focusing on the most pertinent parts of its visual input. Our evaluations on several Atari 2600 games show that the soft attention based model could predict fixation locations significantly better than bottom-up models such as Itti-Kochs saliency and Graph-Based Visual Saliency (GBVS) models.",
"title": ""
},
{
"docid": "00dfecba30f7c6e3a1f9f98e53e58528",
"text": "In this study a novel electronic health information system that integrates the functions of medical recording, reporting and data utilization is presented. The goal of this application is to provide synchronized operation and auto-generated reports to improve the efficiency and accuracy for physicians working at regional clinics and health centers in China, where paper record is the dominant way for diagnosis and medicine prescription. The database design offers high efficiency for operations such as data mining on the medical data collected by the system during diagnosis. The result of data mining can be applied on inventory planning, diagnosis assistance, clinical research and disease control and prevention. Compared with electronic health and medical information system used in urban hospitals, the system presented here is light-weighted, with simpler database structure, self-explanatory webpage display, and tag-oriented navigations. These features makes the system more accessible and affordable for regional clinics and health centers such as university clinics and community hospitals, which have a much more lagging development with limited funding and resources than urban hospitals while they are playing an increasingly important role in the health care system in China.",
"title": ""
},
{
"docid": "bd700aba43a8a8de5615aa1b9ca595a7",
"text": "Cloud computing has formed the conceptual and infrastructural basis for tomorrow’s computing. The global computing infrastructure is rapidly moving towards cloud based architecture. While it is important to take advantages of could based computing by means of deploying it in diversified sectors, the security aspects in a cloud based computing environment remains at the core of interest. Cloud based services and service providers are being evolved which has resulted in a new business trend based on cloud technology. With the introduction of numerous cloud based services and geographically dispersed cloud service providers, sensitive information of different entities are normally stored in remote servers and locations with the possibilities of being exposed to unwanted parties in situations where the cloud servers storing those information are compromised. If security is not robust and consistent, the flexibility and advantages that cloud computing has to offer will have little credibility. This paper presents a review on the cloud computing concepts as well as security issues inherent within the context of cloud computing and cloud",
"title": ""
},
{
"docid": "cc4458a843a2a6ffa86b4efd1956ffca",
"text": "There is a growing interest in the use of chronic deep brain stimulation (DBS) for the treatment of medically refractory movement disorders and other neurological and psychiatric conditions. Fundamental questions remain about the physiologic effects and safety of DBS. Previous basic research studies have focused on the direct polarization of neuronal membranes by electrical stimulation. The goal of this paper is to provide information on the thermal effects of DBS using finite element models to investigate the magnitude and spatial distribution of DBS induced temperature changes. The parameters investigated include: stimulation waveform, lead selection, brain tissue electrical and thermal conductivity, blood perfusion, metabolic heat generation during the stimulation. Our results show that clinical deep brain stimulation protocols will increase the temperature of surrounding tissue by up to 0.8degC depending on stimulation/tissue parameters",
"title": ""
}
] |
scidocsrr
|
16119b1770d801487c4f955fdd066fc4
|
Live Speech Driven Head-and-Eye Motion Generators
|
[
{
"docid": "10ca113b333bf891beff38bd84914324",
"text": "In multi-agent, multi-user environments, users as well as agents should have a means of establishing who is talking to whom. In this paper, we present an experiment aimed at evaluating whether gaze directional cues of users could be used for this purpose. Using an eye tracker, we measured subject gaze at the faces of conversational partners during four-person conversations. Results indicate that when someone is listening or speaking to individuals, there is indeed a high probability that the person looked at is the person listened (p=88%) or spoken to (p=77%). We conclude that gaze is an excellent predictor of conversational attention in multiparty conversations. As such, it may form a reliable source of input for conversational systems that need to establish whom the user is speaking or listening to. We implemented our findings in FRED, a multi-agent conversational system that uses eye input to gauge which agent the user is listening or speaking to.",
"title": ""
}
] |
[
{
"docid": "2f110c5f312ceefdf6c1ea1fd78a361f",
"text": "Enrollments in introductory computer science courses are growing rapidly, thereby taxing scarce teaching resources and motivating the increased use of automated tools for program grading. Such tools commonly rely on regression testing methods from industry. However, the goals of automated grading differ from those of testing for software production. In academia, a primary motivation for testing is to provide timely and accurate feedback to students so that they can understand and fix defects in their programs. Testing strategies for program grading are therefore distinct from those of traditional software testing. This paper enumerates and describes a number of testing strategies that improve the quality of feedback for different types of programming assignments.",
"title": ""
},
{
"docid": "04549adc3e956df0f12240c4d9c02bd7",
"text": "Gamification, applying game mechanics to nongame contexts, has recently become a hot topic across a wide range of industries, and has been presented as a potential disruptive force in education. It is based on the premise that it can promote motivation and engagement and thus contribute to the learning process. However, research examining this assumption is scarce. In a set of studies we examined the effects of points, a basic element of gamification, on performance in a computerized assessment of mastery and fluency of basic mathematics concepts. The first study, with adult participants, found no effect of the point manipulation on accuracy of responses, although the speed of responses increased. In a second study, with 6e8 grade middle school participants, we found the same results for the two aspects of performance. In addition, middle school participants' reactions to the test revealed higher likeability ratings for the test under the points condition, but only in the first of the two sessions, and perceived effort during the test was higher in the points condition, but only for eighth grade students. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "64e57a5382411ade7c0ad4ef7f094aa9",
"text": "In this paper we present the techniques used for the University of Montréal's team submissions to the 2013 Emotion Recognition in the Wild Challenge. The challenge is to classify the emotions expressed by the primary human subject in short video clips extracted from feature length movies. This involves the analysis of video clips of acted scenes lasting approximately one-two seconds, including the audio track which may contain human voices as well as background music. Our approach combines multiple deep neural networks for different data modalities, including: (1) a deep convolutional neural network for the analysis of facial expressions within video frames; (2) a deep belief net to capture audio information; (3) a deep autoencoder to model the spatio-temporal information produced by the human actions depicted within the entire scene; and (4) a shallow network architecture focused on extracted features of the mouth of the primary human subject in the scene. We discuss each of these techniques, their performance characteristics and different strategies to aggregate their predictions. Our best single model was a convolutional neural network trained to predict emotions from static frames using two large data sets, the Toronto Face Database and our own set of faces images harvested from Google image search, followed by a per frame aggregation strategy that used the challenge training data. This yielded a test set accuracy of 35.58%. Using our best strategy for aggregating our top performing models into a single predictor we were able to produce an accuracy of 41.03% on the challenge test set. These compare favorably to the challenge baseline test set accuracy of 27.56%.",
"title": ""
},
{
"docid": "2c9200b9897219cfaf7bbcc953f33886",
"text": "The concept of precise point positioning (PPP) is currently associated with global networks. Precise orbit and clock solutions are used to enable absolute positioning of a single receiver. However, it is restricted in ambiguity resolution, in convergence time and in accuracy. Precise point positioning based on RTK networks (PPP-RTK) as presented overcomes these limitations and gives centimeter-accuracy in a few seconds. The primary task in RTK networks using the Geo++ GNSMART software is the precise monitoring and representation of all individual GNSS error components using state-space modeling. The advantages of state-space modeling are well known for PPP applications. It is much closer to the physical error sources and can thus better represent the error characteristics. It allows to better separate the various error sources to improve performance and can lead to much less bandwidth for transmission. With RTK networks based on GNSMART it is possible to apply the PPP concept with high accuracy. Ambiguity resolution within the RTK network is mandatory and allows the precise modeling of the system state. Since the integer nature of the carrier phase ambiguities is maintained, all error components can be consistently modeled and give full accuracy in an ambiguity fixing GNSS application. For today's realtime applications, observations of a reference station together with network derived parameters to describe distance dependent errors or a virtual reference station are transmitted to GNSS users in the field using the RTCM standards. This can be termed as representation in observation space (Observation Space Representation: OSR). In contrast to this, also the actual state-space data Presented at the 18th International Technical Meeting, ION GNSS-05, September 13-16, 2005, Long Beach, California. can be used for the representation of the complete GNSS state (State Space Representation: SSR). Hence, precise absolute positioning based on a RTK network (PPP-RTK) using state-space data is a practicable concept. In principle, the concept can be applied to small, regional and global networks. A reference station separation of several 100 km to achieve ambiguity resolution and therefore the key-issue to PPP-RTK is already possible with GNSMART. The complete transition from observation-space to statespace requires the definition of adequate formats and standardized models to provide the state-space data for GNSS application. A single receiver then can position itself with centimeter-accuracy within a few seconds in post-processing and realtime applications. In between, state-space data can still be used to generate data in observation-space, e.g. RTCM or RINEX format, through a conversion algorithm. The state-space concept and pre-requisites are discussed. The benefits of state space representation of GNSS errors and their applications are pointed out.",
"title": ""
},
{
"docid": "5980e6111c145db3e1bfc5f47df7ceaf",
"text": "Traffic signs are characterized by a wide variability in their visual appearance in real-world environments. For example, changes of illumination, varying weather conditions and partial occlusions impact the perception of road signs. In practice, a large number of different sign classes needs to be recognized with very high accuracy. Traffic signs have been designed to be easily readable for humans, who perform very well at this task. For computer systems, however, classifying traffic signs still seems to pose a challenging pattern recognition problem. Both image processing and machine learning algorithms are continuously refined to improve on this task. But little systematic comparison of such systems exist. What is the status quo? Do today's algorithms reach human performance? For assessing the performance of state-of-the-art machine learning algorithms, we present a publicly available traffic sign dataset with more than 50,000 images of German road signs in 43 classes. The data was considered in the second stage of the German Traffic Sign Recognition Benchmark held at IJCNN 2011. The results of this competition are reported and the best-performing algorithms are briefly described. Convolutional neural networks (CNNs) showed particularly high classification accuracies in the competition. We measured the performance of human subjects on the same data-and the CNNs outperformed the human test persons.",
"title": ""
},
{
"docid": "3efb43150881649d020a0c721dc39ae5",
"text": "Six studies explore the role of goal shielding in self-regulation by examining how the activation of focal goals to which the individual is committed inhibits the accessibility of alternative goals. Consistent evidence was found for such goal shielding, and a number of its moderators were identified: Individuals' level of commitment to the focal goal, their degree of anxiety and depression, their need for cognitive closure, and differences in their goal-related tenacity. Moreover, inhibition of alternative goals was found to be more pronounced when they serve the same overarching purpose as the focal goal, but lessened when the alternative goals facilitate focal goal attainment. Finally, goal shielding was shown to have beneficial consequences for goal pursuit and attainment.",
"title": ""
},
{
"docid": "dcf4278becbc530d9648b5df4a64ec53",
"text": "Variable speed operation is essential for large wind turbines in order to optimize the energy capture under variable wind speed conditions. Variable speed wind turbines require a power electronic interface converter to permit connection with the grid. The power electronics can be either partially-rated or fully-rated [1]. A popular interface method for large wind turbines that is based on a partiallyrated interface is the doubly-fed induction generator (DFIG) system [2]. In the DFIG system, the power electronic interface controls the rotor currents in order to control the electrical torque and thus the rotational speed. Because the power electronics only process the rotor power, which is typically less than 25% of the overall output power, the DFIG offers the advantages of speed control for a reduction in cost and power losses. This report presents a DFIG wind turbine system that is modeled in PLECS and Simulink. A full electrical model that includes the switching converter implementation for the rotor-side power electronics and a dq model of the induction machine is given. The aerodynamics of the wind turbine and the mechanical dynamics of the induction machine are included to extend the use of the model to simulating system operation under variable wind speed conditions. For longer simulations that include these slower mechanical and wind dynamics, an averaged PWM converter model is presented. The averaged electrical model offers improved simulation speed at the expense of neglecting converter switching detail.",
"title": ""
},
{
"docid": "04b961d5ec0c60f0d6e7eccaded62172",
"text": "Hand rehabilitation after stroke is essential for restoring functional independent lifestyles. After stroke, patients often have flexor hypertonia, making it difficult to open their hand for functional grasp. The development and initial testing of a passive hand rehabilitation device is discussed. The device, Hand Spring Operated Movement Enhancer (HandSOME), assists with opening the patient's hand using a series of bungee cords that apply extension torques to the finger joints that compensate for the flexor hypertonia. This results in significant increase in range of motion and functional use when wearing HandSOME, even in severely impaired subjects. Device design, calibration, and range of motion are described as well as functional and usability testing with stroke subjects.",
"title": ""
},
{
"docid": "273bf17fa1e6ad901a1bf7dbb540ba76",
"text": "BAHARAV, AND ARIEH BORUT. Running in cheetahs, gazelles, and goats: energy cost and limb conjguration. Am. J. Physiol. 227(4) : 848-850. 1974.-Functional anatomists have argued that an animal can be built to run cheaply by lightening the distal parts of the limbs and/or by concentrating the muscle mass of the limbs around their pivot points. These arguments assume .that much of the energy expended as animals run at a constant speed goes into alternately accelerating and decelerating the limbs. Gazelles, goats, and cheetahs offer a nice gradation of limb configurations in animals of similar total mass and limb length and, therefore, provide the opportunity to quantify the effect of limb design on the energy cost of running. We found that, despite large differences in limb configuration, the energetic cost of running in cheetahs, gazelles, and goats of about the same mass was nearly identical over a wide range of speeds. Also, the observed energetic cost of running was almost the same as that predicted on the basis of body weight for all three species: cheetah, 0.14 ml 02 (g l km)-’ observed vs. 0.13 ml 02 (g *km)-l predicted; gazelle, 0.16 ml 02 (g *km)-’ observed vs. 0.15 ml 02 (g *km)-’ predicted; and goat, 0.18 ml 02 (g . km)-’ observed vs. 0.14 ml 02 (g *km)-’ predicted. Thus the relationship between body weight and energetic cost of running apparently applies to animals with very different limb configurations and is more general than anticipated. This suggests that most of the energy expended in running at a constant speed is not used to accelerate and decelerate the limbs.",
"title": ""
},
{
"docid": "f3574f1e3f0ef3a5e1d20cb15b040105",
"text": "Composed of tens of thousands of tiny devices with very limited resources (\"motes\"), sensor networks are subject to novel systems problems and constraints. The large number of motes in a sensor network means that there will often be some failing nodes; networks must be easy to repopulate. Often there is no feasible method to recharge motes, so energy is a precious resource. Once deployed, a network must be reprogrammable although physically unreachable, and this reprogramming can be a significant energy cost.We present Maté, a tiny communication-centric virtual machine designed for sensor networks. Maté's high-level interface allows complex programs to be very short (under 100 bytes), reducing the energy cost of transmitting new programs. Code is broken up into small capsules of 24 instructions, which can self-replicate through the network. Packet sending and reception capsules enable the deployment of ad-hoc routing and data aggregation algorithms. Maté's concise, high-level program representation simplifies programming and allows large networks to be frequently reprogrammed in an energy-efficient manner; in addition, its safe execution environment suggests a use of virtual machines to provide the user/kernel boundary on motes that have no hardware protection mechanisms.",
"title": ""
},
{
"docid": "e2991def3d4b03340b0fc9b708aa1efc",
"text": "Author Samuli Laine Title Efficient Physically-Based Shadow Algorithms This research focuses on developing efficient algorithms for computing shadows in computer-generated images. A distinctive feature of the shadow algorithms presented in this thesis is that they produce correct, physicallybased results, instead of giving approximations whose quality is often hard to ensure or evaluate. Light sources that are modeled as points without any spatial extent produce hard shadows with sharp boundaries. Shadow mapping is a traditional method for rendering such shadows. A shadow map is a depth buffer computed from the scene, using a point light source as the viewpoint. The finite resolution of the shadow map requires that its contents are resampled when determining the shadows on visible surfaces. This causes various artifacts such as incorrect self-shadowing and jagged shadow boundaries. A novel method is presented that avoids the resampling step, and provides exact shadows for every point visible in the image. The shadow volume algorithm is another commonly used algorithm for real-time rendering of hard shadows. This algorithm gives exact results and does not suffer from any resampling problems, but it tends to consume a lot of fillrate, which leads to performance problems. This thesis presents a new technique for locally choosing between two previous shadow volume algorithms with different performance characteristics. A simple criterion for making the local choices is shown to yield better performance than using either of the algorithms alone. Light sources with nonzero spatial extent give rise to soft shadows with smooth boundaries. A novel method is presented that transposes the classical processing order for soft shadow computation in offline rendering. Instead of casting shadow rays, the algorithm first conceptually collects every ray that would need to be cast, and then processes the shadow-casting primitives one by one, hierarchically finding the rays that are blocked. Another new soft shadow algorithm takes a different point of view into computing the shadows. Only the silhouettes of the shadow casters are used for determining the shadows, and an unintrusive execution model makes the algorithm practical for production use in offline rendering. The proposed techniques accelerate the computing of physically-based shadows in real-time and offline rendering. These improvements make it possible to use correct, physically-based shadows in a broad range of scenes that previous methods cannot handle efficiently enough. UDC 004.925, 004.383.5",
"title": ""
},
{
"docid": "e2a5031e29948f4def8cf445b31951ba",
"text": "We describe our entry, C2L2, to the CoNLL 2017 shared task on parsing Universal Dependencies from raw text. Our system features an ensemble of three global parsing paradigms, one graph-based and two transition-based. Each model leverages character-level bidirectional LSTMs as lexical feature extractors to encode morphological information. Though relying on baseline tokenizers and focusing only on parsing, our system ranked second in the official end-toend evaluation with a macro-average of 75.00 LAS F1 score over 81 test treebanks. In addition, we had the top average performance on the four surprise languages and on the small treebank subset.",
"title": ""
},
{
"docid": "7af1ddcefae86ffa989ddd106f032002",
"text": "In this paper, we study counterfactual fairness in text classification, which asks the question: How would the prediction change if the sensitive attribute referenced in the example were different? Toxicity classifiers demonstrate a counterfactual fairness issue by predicting that “Some people are gay” is toxic while “Some people are straight” is nontoxic. We offer a metric, counterfactual token fairness (CTF), for measuring this particular form of fairness in text classifiers, and describe its relationship with group fairness. Further, we offer three approaches, blindness, counterfactual augmentation, and counterfactual logit pairing (CLP), for optimizing counterfactual token fairness during training, bridging the robustness and fairness literature. Empirically, we find that blindness and CLP address counterfactual token fairness. The methods do not harm classifier performance, and have varying tradeoffs with group fairness. These approaches, both for measurement and optimization, provide a new path forward for addressing fairness concerns in text classification.",
"title": ""
},
{
"docid": "d815e254478a9503f1063b5595f48e0f",
"text": "•We present an approach to this unpaired image captioning problem by language pivoting. •Our method can effectively capture the characteristics of an image captioner from the pivot language (Chinese) and align it to the target language (English) using another pivot-target (Chinese-English) parallel corpus. •Quantitative comparisons against several baseline approaches demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "e0d274835d8e4a621960bcd884544d73",
"text": "Systemic sclerosis is a chronic multi-system disorder predominantly affecting the skin, musculoskeletal, gastrointestinal, pulmonary, and renal systems. Although the exact etiology is unknown, recent evidence suggests that immune activation play a pivotal role in the pathogenesis. Ocular involvement in systemic sclerosis has been documented; however, due to the rare nature of the disease, most papers have been single case reports or small case series. This review paper aims to consolidate the findings of previous papers with a view to providing a comprehensive review of the ocular manifestations of systemic sclerosis.",
"title": ""
},
{
"docid": "41ac115647c421c44d7ef1600814dc3e",
"text": "PURPOSE\nThe bony skeleton serves as the scaffolding for the soft tissues of the face; however, age-related changes of bony morphology are not well defined. This study sought to compare the anatomic relationships of the facial skeleton and soft tissue structures between young and old men and women.\n\n\nMETHODS\nA retrospective review of CT scans of 100 consecutive patients imaged at Duke University Medical Center between 2004 and 2007 was performed using the Vitrea software package. The study population included 25 younger women (aged 18-30 years), 25 younger men, 25 older women (aged 55-65 years), and 25 older men. Using a standardized reference line, the distances from the anterior corneal plane to the superior orbital rim, lateral orbital rim, lower eyelid fat pad, inferior orbital rim, anterior cheek mass, and pyriform aperture were measured. Three-dimensional bony reconstructions were used to record the angular measurements of 4 bony regions: glabellar, orbital, maxillary, and pyriform aperture.\n\n\nRESULTS\nThe glabellar (p = 0.02), orbital (p = 0.0007), maxillary (p = 0.0001), and pyriform (p = 0.008) angles all decreased with age. The maxillary pyriform (p = 0.003) and infraorbital rim (p = 0.02) regressed with age. Anterior cheek mass became less prominent with age (p = 0.001), but the lower eyelid fat pad migrated anteriorly over time (p = 0.007).\n\n\nCONCLUSIONS\nThe facial skeleton appears to remodel throughout adulthood. Relative to the globe, the facial skeleton appears to rotate such that the frontal bone moves anteriorly and inferiorly while the maxilla moves posteriorly and superiorly. This rotation causes bony angles to become more acute and likely has an effect on the position of overlying soft tissues. These changes appear to be more dramatic in women.",
"title": ""
},
{
"docid": "b60dbd2b871e03a38931a54f87b4789b",
"text": "Neural networks have shown promising results for relation extraction. State-ofthe-art models cast the task as an end-toend problem, solved incrementally using a local classifier. Yet previous work using statistical models have demonstrated that global optimization can achieve better performances compared to local classification. We build a globally optimized neural model for end-to-end relation extraction, proposing novel LSTM features in order to better learn context representations. In addition, we present a novel method to integrate syntactic information to facilitate global learning, yet requiring little background on syntactic grammars thus being easy to extend. Experimental results show that our proposed model is highly effective, achieving the best performances on two standard benchmarks.",
"title": ""
},
{
"docid": "29cc6e8a51d03861aa4915f2842a3902",
"text": "Pointing, a cornerstone of our graphical user interfaces, has been conceptualized and implemented so far as the act of selecting pixels in bitmap displays. We show that the current technique, which we call bitmap pointing (BMP), is often sub-optimal as it requires continuous information from the mouse while the system often just needs the discrete specification of objects. The paper introduces object pointing (OP), a novel interaction technique based on a special screen cursor that skips empty spaces, thus drastically reducing the waste of input information. We report data from 1D and 2D Fitts’ law experiments showing that OP outperforms BMP and that the performance facilitation increases with the task’s index of difficulty. We discuss the implementation of OP in current interfaces.",
"title": ""
},
{
"docid": "462a0746875e35116f669b16d851f360",
"text": "We previously have applied deep autoencoder (DAE) for noise reduction and speech enhancement. However, the DAE was trained using only clean speech. In this study, by using noisyclean training pairs, we further introduce a denoising process in learning the DAE. In training the DAE, we still adopt greedy layer-wised pretraining plus fine tuning strategy. In pretraining, each layer is trained as a one-hidden-layer neural autoencoder (AE) using noisy-clean speech pairs as input and output (or transformed noisy-clean speech pairs by preceding AEs). Fine tuning was done by stacking all AEs with pretrained parameters for initialization. The trained DAE is used as a filter for speech estimation when noisy speech is given. Speech enhancement experiments were done to examine the performance of the trained denoising DAE. Noise reduction, speech distortion, and perceptual evaluation of speech quality (PESQ) criteria are used in the performance evaluations. Experimental results show that adding depth of the DAE consistently increase the performance when a large training data set is given. In addition, compared with a minimum mean square error based speech enhancement algorithm, our proposed denoising DAE provided superior performance on the three objective evaluations.",
"title": ""
}
] |
scidocsrr
|
2e81cc281938346b0d5febf953ca23dc
|
Sign Language Learning System with Image Sampling and Convolutional Neural Network
|
[
{
"docid": "a9346f8d40a8328e963774f2604da874",
"text": "Abstract-Sign language is a lingua among the speech and the hearing impaired community. It is hard for most people who are not familiar with sign language to communicate without an interpreter. Sign language recognition appertains to track and recognize the meaningful emotion of human made with fingers, hands, head, arms, face etc. The technique that has been proposed in this work, transcribes the gestures from a sign language to a spoken language which is easily understood by the hearing. The gestures that have been translated include alphabets, words from static images. This becomes more important for the people who completely rely on the gestural sign language for communication tries to communicate with a person who does not understand the sign language. We aim at representing features which will be learned by a technique known as convolutional neural networks (CNN), contains four types of layers: convolution layers, pooling/subsampling layers, nonlinear layers, and fully connected layers. The new representation is expected to capture various image features and complex non-linear feature interactions. A softmax layer will be used to recognize signs. Keywords-Convolutional Neural Networks, Softmax (key words) __________________________________________________*****_________________________________________________",
"title": ""
},
{
"docid": "ee9c0e79b29fbe647e3e0ccb168532b5",
"text": "We propose an effective approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame-level and scores them with a combination of static and motion CNN features. It then tracks high-scoring proposals throughout the video using a tracking-by-detection approach. Our tracker relies simultaneously on instance-level and class-level detectors. The tracks are scored using a spatio-temporal motion histogram, a descriptor at the track level, in combination with the CNN features. Finally, we perform temporal localization of the action using a sliding-window approach at the track level. We present experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB and UCF-101 action localization datasets, where our approach outperforms the state of the art with a margin of 15%, 7% and 12% respectively in mAP.",
"title": ""
},
{
"docid": "0ccd9c5f54d679320baf7547ae312c24",
"text": "Sign language (SL) recognition, although has been explored for many years, is still a challenging problem for real practice. The complex background and illumination conditions affect the hand tracking and make the SL recognition very difficult. Fortunately, Kinect is able to provide depth and color data simultaneously, based on which the hand and body action can be tracked more accurate and easier. Therefore, 3D motion trajectory of each sign language vocabulary is aligned and matched between probe and gallery to get the recognized result. This demo will show our primary efforts on sign language recognition and translation with Kinect. Keywords-sign language; hand tracking; 3D motion trajectory",
"title": ""
}
] |
[
{
"docid": "9fdaddce26965be59f9d46d06fa0296a",
"text": "Using emotion detection technologies from biophysical signals, this study explored how emotion evolves during learning process and how emotion feedback could be used to improve learning experiences. This article also described a cutting-edge pervasive e-Learning platform used in a Shanghai online college and proposed an affective e-Learning model, which combined learners’ emotions with the Shanghai e-Learning platform. The study was guided by Russell’s circumplex model of affect and Kort’s learning spiral model. The results about emotion recognition from physiological signals achieved a best-case accuracy (86.3%) for four types of learning emotions. And results from emotion revolution study showed that engagement and confusion were the most important and frequently occurred emotions in learning, which is consistent with the findings from AutoTutor project. No evidence from this study validated Kort’s learning spiral model. An experimental prototype of the affective e-Learning model was built to help improve students’ learning experience by customizing learning material delivery based on students’ emotional state. Experiments indicated the superiority of emotion aware over non-emotion-aware with a performance increase of 91%.",
"title": ""
},
{
"docid": "f95e568513847369eba15e154461a3c1",
"text": "We address the problem of identifying the domain of onlinedatabases. More precisely, given a set F of Web forms automaticallygathered by a focused crawler and an online databasedomain D, our goal is to select from F only the formsthat are entry points to databases in D. Having a set ofWebforms that serve as entry points to similar online databasesis a requirement for many applications and techniques thataim to extract and integrate hidden-Web information, suchas meta-searchers, online database directories, hidden-Webcrawlers, and form-schema matching and merging.We propose a new strategy that automatically and accuratelyclassifies online databases based on features that canbe easily extracted from Web forms. By judiciously partitioningthe space of form features, this strategy allows theuse of simpler classifiers that can be constructed using learningtechniques that are better suited for the features of eachpartition. Experiments using real Web data in a representativeset of domains show that the use of different classifiersleads to high accuracy, precision and recall. This indicatesthat our modular classifier composition provides an effectiveand scalable solution for classifying online databases.",
"title": ""
},
{
"docid": "b93ab92ac82a34d3a83240e251cf714e",
"text": "Short text is becoming ubiquitous in many modern information systems. Due to the shortness and sparseness of short texts, there are less informative word co-occurrences among them, which naturally pose great difficulty for classification tasks on such data. To overcome this difficulty, this paper proposes a new way for effectively classifying the short texts. Our method is based on a key observation that there usually exists ordered subsets in short texts, which is termed ``information path'' in this work, and classification on each subset based on the classification results of some pervious subsets can yield higher overall accuracy than classifying the entire data set directly. We propose a method to detect the information path and employ it in short text classification. Different from the state-of-art methods, our method does not require any external knowledge or corpus that usually need careful fine-tuning, which makes our method easier and more robust on different data sets. Experiments on two real world data sets show the effectiveness of the proposed method and its superiority over the existing methods.",
"title": ""
},
{
"docid": "40e06996a22e1de4220a09e65ac1a04d",
"text": "Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty is capturing microexpressions, i.e., short, involuntary expressions that last for only a fraction of a second: at a micro-temporal scale, there are so many other subtle face and body movements that do not convey semantically meaningful information. We present a novel approach to this problem by exploiting the sparsity of the frequent micro-temporal motion patterns. Local space-time features are extracted over the face and body region for a very short time period, e.g., few milliseconds. A codebook of microexpressions is learned from the data and used to encode the features in a sparse manner. This allows us to obtain a representation that captures the most salient motion patterns of the face and body at a micro-temporal scale. Experiments performed on the AVEC 2012 dataset show our approach achieving the best published performance on the arousal dimension based solely on visual features. We also report experimental results on audio-visual emotion recognition, comparing early and late data fusion techniques.",
"title": ""
},
{
"docid": "7bbd6adfe5950e390de479ced1860ef5",
"text": "In this work, we investigate the impact of social influence of a Facebook fan page on movie box offices. We aim to enhance the accuracy of predicting the box office by leveraging the social influence among users in the fan page. We develop the Global Influence Model to compute the user influence and predict the engagements between the fan page and users. In addition, we propose the Linear Box Office Revenue Prediction Model to bridge the gap between Facebook fan pages and the box offices by utilizing the social influence and some statistics obtained from Facebook fan pages. By considering the social influence, the accuracy of forecasting box offices for movies can be improved significantly.",
"title": ""
},
{
"docid": "9eabecdc7c013099c0bcb266b43fa0dc",
"text": "Aging influences how a person is perceived on multiple dimensions (e.g., physical power). Here we examined how facial structure informs these evolving social perceptions. Recent work examining young adults' faces has revealed the impact of the facial width-to-height ratio (fWHR) on perceived traits, such that individuals with taller, thinner faces are perceived to be less aggressive, less physically powerful, and friendlier. These perceptions are similar to those stereotypically associated with older adults. Examining whether fWHR might contribute to these changing perceptions over the life span, we found that age provides a shifting context through which fWHR differentially impacts aging-related social perceptions (Study 1). In addition, archival analyses (Study 2) established that fWHR decreases across age, and a subsequent study found that fWHR mediated the relationship between target age and multiple aging-related perceptions (Study 3). The findings provide evidence that fWHR decreases across age and influences stereotypical perceptions that change with age.",
"title": ""
},
{
"docid": "755f2d11ad9653806f26e5ae7beaf49b",
"text": "Deep Neural Networks (DNNs) have shown remarkable success in pattern recognition tasks. However, parallelizing DNN training across computers has been difficult. We present the Deep Stacking Network (DSN), which overcomes the problem of parallelizing learning algorithms for deep architectures. The DSN provides a method of stacking simple processing modules in buiding deep architectures, with a convex learning problem in each module. Additional fine tuning further improves the DSN, while introducing minor non-convexity. Full learning in the DSN is batch-mode, making it amenable to parallel training over many machines and thus be scalable over the potentially huge size of the training data. Experimental results on both the MNIST (image) and TIMIT (speech) classification tasks demonstrate that the DSN learning algorithm developed in this work is not only parallelizable in implementation but it also attains higher classification accuracy than the DNN.",
"title": ""
},
{
"docid": "768edb95e76c9a1ffda7806b9b930832",
"text": "The modern software development landscape has seen a shift in focus toward mobile applications as tablets and smartphones near ubiquitous adoption. Due to this trend, the complexity of these “apps” has been increasing, making development and maintenance challenging. Additionally, current bug tracking systems are not able to effectively support construction of reports with actionable information that directly lead to a bug’s resolution. To address the need for an improved reporting system, we introduce a novel solution, called FUSION, that helps users auto-complete reproduction steps in bug reports for mobile apps. FUSION links user-provided information to program artifacts extracted through static and dynamic analysis performed before testing or release. The approach that FUSION employs is generalizable to other current mobile software platforms, and constitutes a new method by which off-device bug reporting can be conducted for mobile software projects. In a study involving 28 participants we applied FUSION to support the maintenance tasks of reporting and reproducing defects from 15 real-world bugs found in 14 open source Android apps while qualitatively and qualitatively measuring the user experience of the system. Our results demonstrate that FUSION both effectively facilitates reporting and allows for more reliable reproduction of bugs from reports compared to traditional issue tracking systems by presenting more detailed contextual app information.",
"title": ""
},
{
"docid": "0e387b0ce86b00123ed6dd69459033e8",
"text": "3-D hand pose estimation is an essential problem for human–computer interaction. Most of the existing depth-based hand pose estimation methods consume 2-D depth map or 3-D volume via 2-D/3-D convolutional neural networks. In this paper, we propose a deep semantic hand pose regression network (SHPR-Net) for hand pose estimation from point sets, which consists of two subnetworks: a semantic segmentation subnetwork and a hand pose regression subnetwork. The semantic segmentation network assigns semantic labels for each point in the point set. The pose regression network integrates the semantic priors with both input and late fusion strategy and regresses the final hand pose. Two transformation matrices are learned from the point set and applied to transform the input point cloud and inversely transform the output pose, respectively, which makes the SHPR-Net more robust to geometric transformations. Experiments on NYU, ICVL, and MSRA hand pose data sets demonstrate that our SHPR-Net achieves high performance on par with the start-of-the-art methods. We also show that our method can be naturally extended to hand pose estimation from the multi-view depth data and achieves further improvement on the NYU data set.",
"title": ""
},
{
"docid": "168a959b617dc58e6355c1b0ab46c3fc",
"text": "Detection of true human emotions has attracted a lot of interest in the recent years. The applications range from e-retail to health-care for developing effective companion systems with reliable emotion recognition. This paper proposes heart rate variability (HRV) features extracted from photoplethysmogram (PPG) signal obtained from a cost-effective PPG device such as Pulse Oximeter for detecting and recognizing the emotions on the basis of the physiological signals. The HRV features obtained from both time and frequency domain are used as features for classification of emotions. These features are extracted from the entire PPG signal obtained during emotion elicitation and baseline neutral phase. For analyzing emotion recognition, using the proposed HRV features, standard video stimuli are used. We have considered three emotions namely, happy, sad and neutral or null emotions. Support vector machines are used for developing the models and features are explored to achieve average emotion recognition of 83.8% for the above model and listed features.",
"title": ""
},
{
"docid": "fbe0c6e8cbaf6c419990c1a7093fe2a9",
"text": "Deep learning is quickly becoming the leading methodology for medical image analysis. Given a large medical archive, where each image is associated with a diagnosis, efficient pathology detectors or classifiers can be trained with virtually no expert knowledge about the target pathologies. However, deep learning algorithms, including the popular ConvNets, are black boxes: little is known about the local patterns analyzed by ConvNets to make a decision at the image level. A solution is proposed in this paper to create heatmaps showing which pixels in images play a role in the image-level predictions. In other words, a ConvNet trained for image-level classification can be used to detect lesions as well. A generalization of the backpropagation method is proposed in order to train ConvNets that produce high-quality heatmaps. The proposed solution is applied to diabetic retinopathy (DR) screening in a dataset of almost 90,000 fundus photographs from the 2015 Kaggle Diabetic Retinopathy competition and a private dataset of almost 110,000 photographs (e-ophtha). For the task of detecting referable DR, very good detection performance was achieved: Az=0.954 in Kaggle's dataset and Az=0.949 in e-ophtha. Performance was also evaluated at the image level and at the lesion level in the DiaretDB1 dataset, where four types of lesions are manually segmented: microaneurysms, hemorrhages, exudates and cotton-wool spots. For the task of detecting images containing these four lesion types, the proposed detector, which was trained to detect referable DR, outperforms recent algorithms trained to detect those lesions specifically, with pixel-level supervision. At the lesion level, the proposed detector outperforms heatmap generation algorithms for ConvNets. This detector is part of the Messidor® system for mobile eye pathology screening. Because it does not rely on expert knowledge or manual segmentation for detecting relevant patterns, the proposed solution is a promising image mining tool, which has the potential to discover new biomarkers in images.",
"title": ""
},
{
"docid": "b80bb16e8f5bff921304908c5731c158",
"text": "Internet and networks applications are growing very fast, so the needs to protect such applications are increased. Encryption algorithms play a main role in information security systems. . In this paper, we compare the various cryptographic algorithms. On the basis of parameter taken as time various cryptographic algorithms are evaluated on different video files. Different video files are having different processing speed on which various size of file are processed. Calculation of time for encryption and decryption in different video file format such as .vob, and .DAT, having file size for audio and for video 1MB to 1100MB respectively. Encryption processing time and decryption processing time are compared between various cryptographic algorithms which come out to be not too much. Overall time depend on the corresponding file size. Throughput analysis also done.",
"title": ""
},
{
"docid": "76dcd35124d95bffe47df5decdc5926a",
"text": "While kernel drivers have long been know to poses huge security risks, due to their privileged access and lower code quality, bug-finding tools for drivers are still greatly lacking both in quantity and effectiveness. This is because the pointer-heavy code in these drivers present some of the hardest challenges to static analysis, and their tight coupling with the hardware make dynamic analysis infeasible in most cases. In this work, we present DR. CHECKER, a soundy (i.e., mostly sound) bug-finding tool for Linux kernel drivers that is based on well-known program analysis techniques. We are able to overcome many of the inherent limitations of static analysis by scoping our analysis to only the most bug-prone parts of the kernel (i.e., the drivers), and by only sacrificing soundness in very few cases to ensure that our technique is both scalable and precise. DR. CHECKER is a fully-automated static analysis tool capable of performing general bug finding using both pointer and taint analyses that are flow-sensitive, context-sensitive, and fieldsensitive on kernel drivers. To demonstrate the scalability and efficacy of DR. CHECKER, we analyzed the drivers of nine production Linux kernels (3.1 million LOC), where it correctly identified 158 critical zero-day bugs with an overall precision of 78%.",
"title": ""
},
{
"docid": "9be1489eb73687b9a98c23f46dae6421",
"text": "Smart consumer devices are omnipresent in our everyday lives. We find novel interesting applications of smart devices, such as mobile phones, tablets, smartwatches and smartglasses, in monitoring personal health, tracking sporting performances, identifying physical activities and obtaining navigation information among many others. These novel applications make use of a large variety of the smart devices' sensors, such as accelerometer, gyroscope and GPS. However, the usefulness of these applications relies often primarily on the abilities of interpreting noisy, and often biased, measurements from said sensors in order to extract high-level context information, such as the activity currently performed by the user.",
"title": ""
},
{
"docid": "875e165e70000d15b11d724607be1917",
"text": "Internet-based Chat environments such as Internet relay Chat and instant messaging pose a challenge for data mining and information retrieval systems due to the multi-threaded, overlapping nature of the dialog and the nonstandard usage of language. In this paper we present preliminary methods of topic detection and topic thread extraction that augment a typical TF-IDF-based vector space model approach with temporal relationship information between posts of the Chat dialog combined with WordNet hypernym augmentation. We show results that promise better performance than using only a TF-IDF bag-of-words vector space model.",
"title": ""
},
{
"docid": "6deff83de8ad1e0d08565129c5cefb8a",
"text": "Correlations between prototypical usability metrics from 90 distinct usability tests were strong when measured at the task-level (r between .44 and .60). Using test-level satisfaction ratings instead of task-level ratings attenuated the correlations (r between .16 and .24). The method of aggregating data from a usability test had a significant effect on the magnitude of the resulting correlations. The results of principal components and factor analyses on the prototypical usability metrics provided evidence for an underlying construct of general usability with objective and subjective factors.",
"title": ""
},
{
"docid": "57666e9d9b7e69c38d7530633d556589",
"text": "In this paper, we investigate the utility of linguistic features for detecting the sentiment of Twitter messages. We evaluate the usefulness of existing lexical resources as well as features that capture information about the informal and creative language used in microblogging. We take a supervised approach to the problem, but leverage existing hashtags in the Twitter data for building training data.",
"title": ""
},
{
"docid": "70e89d5d0b886b1c32b1f1b8c01db99b",
"text": "In clinical dictation, speakers try to be as concise as possible to save time, often resulting in utterances without explicit punctuation commands. Since the end product of a dictated report, e.g. an out-patient letter, does require correct orthography, including exact punctuation, the latter need to be restored, preferably by automated means. This paper describes a method for punctuation restoration based on a stateof-the-art stack of NLP and machine learning techniques including B-RNNs with an attention mechanism and late fusion, as well as a feature extraction technique tailored to the processing of medical terminology using a novel vocabulary reduction model. To the best of our knowledge, the resulting performance is superior to that reported in prior art on similar tasks.",
"title": ""
},
{
"docid": "13e9a722f59a1497fc6eaf4ee877a007",
"text": "The ability to generate specific genetic modifications in mice provides a powerful approach to assess gene function. When genetic modifications have been generated in the germ line, however, the resulting phenotype often only reflects the first time a gene has an influence on - or is necessary for - a particular biological process. Therefore, systems allowing conditional genetic modification have been developed (for a review, see [1]); for example, inducible forms of the Cre recombinase from P1 phage have been generated that can catalyse intramolecular recombination between target recognition sequences (loxP sites) in response to ligand [2] [3] [4] [5]. Here, we assessed whether a tamoxifen-inducible form of Cre recombinase (Cre-ERTM) could be used to modify gene activity in the mouse embryo in utero. Using the enhancer of the Wnt1 gene to restrict the expression of Cre-ERTM to the embryonic neural tube, we found that a single injection of tamoxifen into pregnant mice induced Cre-mediated recombination within the embryonic central nervous system, thereby activating expression of a reporter gene. Induction was ligand dependent, rapid and efficient. The results demonstrate that tamoxifen-inducible recombination can be used to effectively modify gene function in the mouse embryo.",
"title": ""
},
{
"docid": "57974e76bf29edb7c2ae54462aab839f",
"text": "UWB is a very attractive technology for many applications. It provides many advantages such as fine resolution and high power efficiency. Our interest in the current study is the use of UWB radar technique in microwave medical imaging systems, especially for early breast cancer detection. The Federal Communications Commission FCC allowed frequency bandwidth of 3.1 to 10.6 GHz for this purpose. In this paper we suggest an UWB Bowtie slot antenna with enhanced bandwidth. Effects of varying the geometry of the antenna on its performance and bandwidth are studied. The proposed antenna is simulated in CST Microwave Studio. Details of antenna design and simulation results such as return loss and radiation patterns are discussed in this paper. The final antenna structure exhibits good UWB characteristics and has surpassed the bandwidth requirements. Keywords—Ultra Wide Band (UWB), microwave imaging system, Bowtie antenna, return loss, impedance bandwidth enhancement.",
"title": ""
}
] |
scidocsrr
|
daf04d5cd7a4234fd82c6efda2a688ae
|
Particle PHD Filter Based Multiple Human Tracking Using Online Group-Structured Dictionary Learning
|
[
{
"docid": "0d25072b941ee3e8690d9bd274623055",
"text": "The task of tracking multiple targets is often addressed with the so-called tracking-by-detection paradigm, where the first step is to obtain a set of target hypotheses for each frame independently. Tracking can then be regarded as solving two separate, but tightly coupled problems. The first is to carry out data association, i.e., to determine the origin of each of the available observations. The second problem is to reconstruct the actual trajectories that describe the spatio-temporal motion pattern of each individual target. The former is inherently a discrete problem, while the latter should intuitively be modeled in continuous space. Having to deal with an unknown number of targets, complex dependencies, and physical constraints, both are challenging tasks on their own and thus most previous work focuses on one of these subproblems. Here, we present a multi-target tracking approach that explicitly models both tasks as minimization of a unified discrete-continuous energy function. Trajectory properties are captured through global label costs, a recent concept from multi-model fitting, which we introduce to tracking. Specifically, label costs describe physical properties of individual tracks, e.g., linear and angular dynamics, or entry and exit points. We further introduce pairwise label costs to describe mutual interactions between targets in order to avoid collisions. By choosing appropriate forms for the individual energy components, powerful discrete optimization techniques can be leveraged to address data association, while the shapes of individual trajectories are updated by gradient-based continuous energy minimization. The proposed method achieves state-of-the-art results on diverse benchmark sequences.",
"title": ""
}
] |
[
{
"docid": "d452700b9c919ba62156beecb0d50b91",
"text": "In this paper we propose a solution to the problem of body part segmentation in noisy silhouette images. In developing this solution we revisit the issue of insufficient labeled training data, by investigating how synthetically generated data can be used to train general statistical models for shape classification. In our proposed solution we produce sequences of synthetically generated images, using three dimensional rendering and motion capture information. Each image in these sequences is labeled automatically as it is generated and this labeling is based on the hand labeling of a single initial image.We use shape context features and Hidden Markov Models trained based on this labeled synthetic data. This model is then used to segment silhouettes into four body parts; arms, legs, body and head. Importantly, in all the experiments we conducted the same model is employed with no modification of any parameters after initial training.",
"title": ""
},
{
"docid": "d54aff38bab1a8877877ddba9e20e88d",
"text": "SiMultaneous Acquisition of Spatial Harmonics (SMASH) is a new fast-imaging technique that increases MR image acquisition speed by an integer factor over existing fast-imaging methods, without significant sacrifices in spatial resolution or signal-to-noise ratio. Image acquisition time is reduced by exploiting spatial information inherent in the geometry of a surface coil array to substitute for some of the phase encoding usually produced by magnetic field gradients. This allows for partially parallel image acquisitions using many of the existing fast-imaging sequences. Unlike the data combination algorithms of prior proposals for parallel imaging, SMASH reconstruction involves a small set of MR signal combinations prior to Fourier transformation, which can be advantageous for artifact handling and practical implementation. A twofold savings in image acquisition time is demonstrated here using commercial phased array coils on two different MR-imaging systems. Larger time savings factors can be expected for appropriate coil designs.",
"title": ""
},
{
"docid": "9c38ad75ac16b2b6ee41144aceb373ea",
"text": "Although end-to-end neural text-to-speech (TTS) methods (such as Tacotron2) are proposed and achieve state-of-theart performance, they still suffer from two problems: 1) low efficiency during training and inference; 2) hard to model long dependency using current recurrent neural networks (RNNs). Inspired by the success of Transformer network in neural machine translation (NMT), in this paper, we introduce and adapt the multi-head attention mechanism to replace the RNN structures and also the original attention mechanism in Tacotron2. With the help of multi-head self-attention, the hidden states in the encoder and decoder are constructed in parallel, which improves training efficiency. Meanwhile, any two inputs at different times are connected directly by a self-attention mechanism, which solves the long range dependency problem effectively. Using phoneme sequences as input, our Transformer TTS network generates mel spectrograms, followed by a WaveNet vocoder to output the final audio results. Experiments are conducted to test the efficiency and performance of our new network. For the efficiency, our Transformer TTS network can speed up the training about 4.25 times faster compared with Tacotron2. For the performance, rigorous human tests show that our proposed model achieves state-of-the-art performance (outperforms Tacotron2 with a gap of 0.048) and is very close to human quality (4.39 vs 4.44 in MOS).",
"title": ""
},
{
"docid": "a35a564a2f0e16a21e0ef5e26601eab9",
"text": "The social media revolution has created a dynamic shift in the digital marketing landscape. The voice of influence is moving from traditional marketers towards consumers through online social interactions. In this study, we focus on two types of online social interactions, namely, electronic word of mouth (eWOM) and observational learning (OL), and explore how they influence consumer purchase decisions. We also examine how receiver characteristics, consumer expertise and consumer involvement, moderate consumer purchase decision process. Analyzing panel data collected from a popular online beauty forum, we found that consumer purchase decisions are influenced by their online social interactions with others and that action-based OL information is more influential than opinion-based eWOM. Further, our results show that both consumer expertise and consumer involvement play an important moderating role, albeit in opposite direction: Whereas consumer expertise exerts a negative moderating effect, consumer involvement is found to have a positive moderating effect. The study makes important contributions to research and practice.",
"title": ""
},
{
"docid": "a0df97ad89f6f2b58a5e31eace4a72fa",
"text": "Ultra high voltage (UHV) AC Gas Insulated Switchgear (GIS) spacer is one of the most important components in GIS. In the process of long-term operation, partial discharge often occurs at the interface between the center conductor and epoxy resin (EP) basin body if this interface has some defects, and the discharge sometimes even leads to the crack of basin. It has become a bottleneck for the safe and stable operation of GIS equipment. In this paper, the interface material with EP, hydroxy-terminated butadiene nitrile liquid rubber (HTBN) and conductive carbon black was created, and its parameters were measured. With three-dimensional finite element method, the interfacial structure with semi-conductive coatings were designed, the calculation models of GIS spacer which had interfacial layer structure were built. The influences of semi-conductive coatings on the electric field distribution of the interface between center conductor and basin body were studied. To different defects on the conductor surface, the improvement of the semi-conductive coatings to the electric field distribution was analyzed. Compared with the interfacial layer without coatings, there is an 80 percent decrease of electric field intensity of surface defects than that without coatings, which proves its significant shielding effect. The rationality of GIS spacer's performance of material and structure of semi-conductive coatings are verified, which is helpful to improve the uneven distribution of electric field at the interface of UHV electrical equipment.",
"title": ""
},
{
"docid": "37dbfc84d3b04b990d8b3b31d2013f77",
"text": "Large projects such as kernels, drivers and libraries follow a code style, and have recurring patterns. In this project, we explore learning based code recommendation, to use the project context and give meaningful suggestions. Using word vectors to model code tokens, and neural network based learning techniques, we are able to capture interesting patterns, and predict code that that cannot be predicted by a simple grammar and syntax based approach as in conventional IDEs. We achieve a total prediction accuracy of 56.0% on Linux kernel, a C project, and 40.6% on Twisted, a Python networking library.",
"title": ""
},
{
"docid": "1cc962ab0d15a47725858ed5ff5872f6",
"text": "Although spontaneous remyelination does occur in multiple sclerosis lesions, its extent within the global population with this disease is presently unknown. We have systematically analysed the incidence and distribution of completely remyelinated lesions (so-called shadow plaques) or partially remyelinated lesions (shadow plaque areas) in 51 autopsies of patients with different clinical courses and disease durations. The extent of remyelination was variable between cases. In 20% of the patients, the extent of remyelination was extensive with 60-96% of the global lesion area remyelinated. Extensive remyelination was found not only in patients with relapsing multiple sclerosis, but also in a subset of patients with progressive disease. Older age at death and longer disease duration were associated with significantly more remyelinated lesions or lesion areas. No correlation was found between the extent of remyelination and either gender or age at disease onset. These results suggest that the variable and patient-dependent extent of remyelination must be considered in the design of future clinical trials aimed at promoting CNS repair.",
"title": ""
},
{
"docid": "7925100b85dce273b92f4d9f52253cda",
"text": "Named entities such as people, locations, and organizations play a vital role in characterizing online content. They often reflect information of interest and are frequently used in search queries. Although named entities can be detected reliably from textual content, extracting relations among them is more challenging, yet useful in various applications (e.g., news recommending systems). In this paper, we present a novel model and system for learning semantic relations among named entities from collections of news articles. We model each named entity occurrence with sparse structured logistic regression, and consider the words (predictors) to be grouped based on background semantics. This sparse group LASSO approach forces the weights of word groups that do not influence the prediction towards zero. The resulting sparse structure is utilized for defining the type and strength of relations. Our unsupervised system yields a named entities’ network where each relation is typed, quantified, and characterized in context. These relations are the key to understanding news material over time and customizing newsfeeds for readers. Extensive evaluation of our system on articles from TIME magazine and BBC News shows that the learned relations correlate with static semantic relatedness measures like WLM, and capture the evolving relationships among named entities over time.",
"title": ""
},
{
"docid": "6a4595e71ad1c4e6196f17af20c8c1ef",
"text": "We propose a novel regularizer to improve the training of Generative Adversarial Networks (GANs). The motivation is that when the discriminatorD spreads out its model capacity in the right way, the learning signals given to the generator G are more informative and diverse. These in turn help G to explore better and discover the real data manifold while avoiding large unstable jumps due to the erroneous extrapolation made by D . Our regularizer guides the rectifier discriminator D to better allocate its model capacity, by encouraging the binary activation patterns on selected internal layers of D to have a high joint entropy. Experimental results on both synthetic data and real datasets demonstrate improvements in stability and convergence speed of the GAN training, as well as higher sample quality. The approach also leads to higher classification accuracies in semi-supervised learning.",
"title": ""
},
{
"docid": "b7cc4a094988643e65d80d4989276d98",
"text": "In this paper, we describe the design and layout of an automotive radar sensor demonstrator for 77 GHz with a SiGe chipset and a fully parallel receiver architecture which is capable of digital beamforming and superresolution direction of arrival estimation methods in azimuth. Additionally, we show measurement results of this radar sensor mounted on a test vehicle.",
"title": ""
},
{
"docid": "90beb588be7ed7db4831fb5b9de485ce",
"text": "Collaboration in visual sensor networks is essential not only to compensate for the limitations of each sensor node but also to tolerate inaccurate information generated by faulty sensors. This article focuses on the design of a collaborative target localization algorithm that is resilient to sensor faults. We first develop a distributed solution to fault-tolerant target localization based on a so-called certainty map. To tolerate potential sensor faults, a voting mechanism is adopted and a threshold value needs to be specified which is the key to the realization of the distributed solution. Analytical study is conducted to derive the lower and upper bounds for the threshold such that the probability of faulty sensors negatively impacts the localization performance is less than a small value. Second, we focus on the detection and correction of one type of sensor faults, error in camera orientation. We construct a generative image model in each camera based on the detected target location to estimate camera's orientation, detect inaccuracies in camera orientations and correct them before they cascade. Based on results obtained from both simulation and real experiments, we show that the proposed method is effective in localization accuracy as well as fault detection and correction performance.",
"title": ""
},
{
"docid": "dd13c22807017c7d26a4112debfae58b",
"text": "Over the last years, streaming of multimedia content has become more prominent than ever. To meet increasing user requirements, the concept of HTTP Adaptive Streaming (HAS) has recently been introduced. In HAS, video content is temporally divided into multiple segments, each encoded at several quality levels. A rate adaptation heuristic selects the quality level for every segment, allowing the client to take into account the observed available bandwidth and the buffer filling level when deciding the most appropriate quality level for every new video segment. Despite the ability of HAS to deal with changing network conditions, a low average quality and a large camera-to-display delay are often observed in live streaming scenarios. In the meantime, the HTTP/2 protocol was standardized in February 2015, providing new features which target a reduction of the page loading time in web browsing. In this paper, we propose a novel push-based approach for HAS, in which HTTP/2’s push feature is used to actively push segments from server to client. Using this approach with video segments with a sub-second duration, referred to as super-short segments, it is possible to reduce the startup time and end-to-end delay in HAS live streaming. Evaluation of the proposed approach, through emulation of a multi-client scenario with highly variable bandwidth and latency, shows that the startup time can be reduced with 31.2% compared to traditional solutions over HTTP/1.1 in mobile, high-latency networks. Furthermore, the end-to-end delay in live streaming scenarios can be reduced with 4 s, while providing the content at similar video quality.",
"title": ""
},
{
"docid": "3fa16d5e442bc4a2398ba746d6aaddfe",
"text": "Although many users create predictable passwords, the extent to which users realize these passwords are predictable is not well understood. We investigate the relationship between users' perceptions of the strength of specific passwords and their actual strength. In this 165-participant online study, we ask participants to rate the comparative security of carefully juxtaposed pairs of passwords, as well as the security and memorability of both existing passwords and common password-creation strategies. Participants had serious misconceptions about the impact of basing passwords on common phrases and including digits and keyboard patterns in passwords. However, in most other cases, participants' perceptions of what characteristics make a password secure were consistent with the performance of current password-cracking tools. We find large variance in participants' understanding of how passwords may be attacked, potentially explaining why users nonetheless make predictable passwords. We conclude with design directions for helping users make better passwords.",
"title": ""
},
{
"docid": "3d8df2c8fcbdc994007104b8d21d7a06",
"text": "The purpose of this research was to analysis the efficiency of global strategies. This paper identified six key strategies necessary for firms to be successful when expanding globally. These strategies include differentiation, marketing, distribution, collaborative strategies, labor and management strategies, and diversification. Within this analysis, we chose to focus on the Coca-Cola Company because they have proven successful in their international operations and are one of the most recognized brands in the world. We performed an in-depth review of how effectively or ineffectively Coca-Cola has used each of the six strategies. The paper focused on Coca-Cola's operations in the United States, China, Belarus, Peru, and Morocco. The author used electronic journals from the various countries to determine how effective Coca-Cola was in these countries. The paper revealed that Coca-Cola was very successful in implementing strategies regardless of the country. However, the author learned that Coca-Cola did not effectively utilize all of the strategies in each country.",
"title": ""
},
{
"docid": "52eb676c79c797f99e811daf9fe3ef71",
"text": "SWI-Prolog version 7 extends the Prolog language as a general purpose programming language that can be used as ‘glue’ between components written in different languages. Taking this role rather than that of a domain specific language (DSL) inside other IT components has always been the design objective of SWI-Prolog as illustrated by XPCE (its object oriented communication to the OS and graphics), the HTTP server library and the many interfaces to external systems and file formats. In recent years, we started extending the language itself, notably to accommodate expressing syntactic constructs of other languages such a HTML and JavaScript. This resulted in an extended notion of operators and quasi quotations. SWI-Prolog version 7 takes this one step further by extending the primitive data types of Prolog. This article describes and motivates these extensions.",
"title": ""
},
{
"docid": "cce465180d48695a6ed150c7024fbbf2",
"text": "The Convolutional Neural Network (CNN) has significantly improved the state-of-the-art in person re-identification (re-ID). In the existing available identification CNN model, the softmax loss function is employed as the supervision signal to train the CNN model. However, the softmax loss only encourages the separability of the learned deep features between different identities. The distinguishing intra-class variations have not been considered during the training process of CNN model. In order to minimize the intra-class variations and then improve the discriminative ability of CNN model, this paper combines a new supervision signal with original softmax loss for person re-ID. Specifically, during the training process, a center of deep features is learned for each pedestrian identity and the deep features are subtracted from the corresponding identity centers, simultaneously. So that, the deep features of the same identity to the center will be pulled efficiently. With the combination of loss functions, the inter-class dispersion and intra-class aggregation can be constrained as much as possible. In this way, a more discriminative CNN model, which has two key learning objectives, can be learned to extract deep features for person re-ID task. We evaluate our method in two identification CNN models (i.e., CaffeNet and ResNet-50). It is encouraging to see that our method has a stable improvement compared with the baseline and yields a competitive performance to the state-of-the-art person re-ID methods on three important person re-ID benchmarks (i.e., Market-1501, CUHK03 and MARS).",
"title": ""
},
{
"docid": "6386c0ef0d7cc5c33e379d9c4c2ca019",
"text": "BACKGROUND\nEven after negative sentinel lymph node biopsy (SLNB) for primary melanoma, patients who develop in-transit (IT) melanoma or local recurrences (LR) can have subclinical regional lymph node involvement.\n\n\nSTUDY DESIGN\nA prospective database identified 33 patients with IT melanoma/LR who underwent technetium 99m sulfur colloid lymphoscintigraphy alone (n = 15) or in conjunction with lymphazurin dye (n = 18) administered only if the IT melanoma/LR was concurrently excised.\n\n\nRESULTS\nSeventy-nine percent (26 of 33) of patients undergoing SLNB in this study had earlier removal of lymph nodes in the same lymph node basin as the expected drainage of the IT melanoma or LR at the time of diagnosis of their primary melanoma. Lymphoscintography at time of presentation with IT melanoma/LR was successful in 94% (31 of 33) cases, and at least 1 sentinel lymph node was found intraoperatively in 97% (30 of 31) cases. The SLNB was positive in 33% (10 of 30) of these cases. Completion lymph node dissection was performed in 90% (9 of 10) of patients. Nine patients with negative SLNB and IT melanoma underwent regional chemotherapy. Patients in this study with a positive sentinel lymph node at the time the IT/LR was mapped had a considerably shorter time to development of distant metastatic disease compared with those with negative sentinel lymph nodes.\n\n\nCONCLUSIONS\nIn this study, we demonstrate the technical feasibility and clinical use of repeat SLNB for recurrent melanoma. Performing SLNB cannot only optimize local, regional, and systemic treatment strategies for patients with LR or IT melanoma, but also appears to provide important prognostic information.",
"title": ""
},
{
"docid": "5565f51ad8e1aaee43f44917befad58a",
"text": "We explore the application of deep residual learning and dilated convolutions to the keyword spotting task, using the recently-released Google Speech Commands Dataset as our benchmark. Our best residual network (ResNet) implementation significantly outperforms Google's previous convolutional neural networks in terms of accuracy. By varying model depth and width, we can achieve compact models that also outperform previous small-footprint variants. To our knowledge, we are the first to examine these approaches for keyword spotting, and our results establish an open-source state-of-the-art reference to support the development of future speech-based interfaces.",
"title": ""
},
{
"docid": "c3e037cb49fb639217142437ed3e8e04",
"text": "Machine learning models are now used extensively for decision making in diverse applications, but for non-experts they are essentially black boxes. While there has been some work on the explanation of classifications, these are targeted at the expert user. For the non-expert, a better model is one of justification not detailing how the model made its decision, but justifying it to the human user on his or her terms. In this paper we introduce the idea of a justification narrative: a simple model-agnostic mapping of the essential values underlying a classification to a semantic space. We present a package that automatically produces these narratives and realizes them visually or textually.",
"title": ""
},
{
"docid": "4912a90f30127d2e70a2bbcb3733d524",
"text": "To better understand procrastination, researchers have sought to identify cognitive personality factors associated with it. The study reported here attempts to extend previous research by exploring the application of explanatory style to academic procrastination. Findings of the study are discussed from the perspective of employers of this new generation.",
"title": ""
}
] |
scidocsrr
|
1a193cb8d09c241241c067693c24a0c9
|
ARTICULATED WHEELED ROBOTS : EXPLOITING RECONFIGURABILITY AND REDUNDANCY
|
[
{
"docid": "c8db1af44dccc23bf0e06dcc8c43bca6",
"text": "A reconfigurable mechanism for varying the footprint of a four-wheeled omnidirectional vehicle is developed and applied to wheelchairs. The variable footprint mechanism consists of a pair of beams intersecting at a pivotal point in the middle. Two pairs of ball wheels at the diagonal positions of the vehicle chassis are mounted, respectively, on the two beams intersecting in the middle. The angle between the two beams varies actively so that the ratio of the wheel base to the tread may change. Four independent servo motors driving the four ball wheels allow the vehicle to move in an arbitrary direction from an arbitrary configuration as well as to change the angle between the two beams and thereby change the footprint. The objective of controlling the beam angle is threefold. One is to augment static stability by varying the footprint so that the mass centroid of the vehicle may be kept within the footprint at all times. The second is to reduce the width of the vehicle when going through a narrow doorway. The third is to apparently change the gear ratio relating the vehicle speed to individual actuator speeds. First the concept of the varying footprint mechanism is described, and its kinematic behavior is analyzed, followed by the three control algorithms for varying the footprint. A prototype vehicle for an application as a wheelchair platform is designed, built, and tested.",
"title": ""
},
{
"docid": "e077bb23271fbc056290be84b39a9fcc",
"text": "Rovers will continue to play an important role in planetary exploration. Plans include the use of the rocker-bogie rover configuration. Here, models of the mechanics of this configuration are presented. Methods for solving the inverse kinematics of the system and quasi-static force analysis are described. Also described is a simulation based on the models of the rover’s performance. Experimental results confirm the validity of the models.",
"title": ""
},
{
"docid": "1914a215d1b937f544a60890f167dd49",
"text": "In our paper we present an innovative locomotion concept for rough terrain based on six motorized wheels. Using rhombus configuration, the rover named Shrimp has a steering wheel in the front and the rear, and two wheels arranged on a bogie on each side. The front wheel has a spring suspension to guarantee optimal ground contact of all wheels at any time. The steering of the rover is realized by synchronizing the steering of the front and rear wheels and the speed difference of the bogie wheels. This allows for precision maneuvers and even turning on the spot with minimum slippage. The use of parallel articulations for the front wheel and the bogies enables to set a virtual center of rotation at the level of or below the wheel axis. This insures maximum stability and climbing abilities even for very low friction coefficients between the wheel and the ground. A well functioning prototype has been designed and manufactured. It shows excellent performance surpassing our expectations. The robot, measuring only about 60 cm in length and 20 cm in height, is able to passively overcome obstacles of up to two times its wheel diameter and can climb stairs with steps of over 20 cm. © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "9414f4f7164c69f67b4bf200da9f1358",
"text": "Experience replay is one of the most commonly used approaches to improve the sample efficiency of reinforcement learning algorithms. In this work, we propose an approach to select and replay sequences of transitions in order to accelerate the learning of a reinforcement learning agent in an off-policy setting. In addition to selecting appropriate sequences, we also artificially construct transition sequences using information gathered from previous agent-environment interactions. These sequences, when replayed, allow value function information to trickle down to larger sections of the state/state-action space, thereby making the most of the agent's experience. We demonstrate our approach on modified versions of standard reinforcement learning tasks such as the mountain car and puddle world problems and empirically show that it enables faster, and more accurate learning of value functions as compared to other forms of experience replay. Further, we briefly discuss some of the possible extensions to this work, as well as applications and situations where this approach could be particularly useful.",
"title": ""
},
{
"docid": "e6788f228c52f48107804622aab297c4",
"text": "Scholarly publishing increasingly requires automated systems that semantically enrich documents in order to support management and quality assessment of scientific output. However, contextual information, such as the authors’ affiliations, references, and funding agencies, is typically hidden within PDF files. To access this information we have developed a processing pipeline that analyses the structure of a PDF document incorporating a diverse set of machine learning techniques. First, unsupervised learning is used to extract contiguous text blocks from the raw character stream as the basic logical units of the article. Next, supervised learning is employed to classify blocks into different meta-data categories, including authors and affiliations. Then, a set of heuristics are applied to detect the reference section at the end of the paper and segment it into individual reference strings. Sequence classification is then utilised to categorise the tokens of individual references to obtain information such as the journal and the year of the reference. Finally, we make use of named entity recognition techniques to extract references to research grants, funding agencies, and EU projects. Our system is modular in nature. Some parts rely on models learnt on training data, and the overall performance scales with the quality of these data sets.",
"title": ""
},
{
"docid": "98b0ce9e943ab1a22c4168ba1c79ceb6",
"text": "Along with rapid advancement of power semiconductors, voltage multipliers have introduced new series of pulsed power generators. In this paper, current topologies of capacitor-diode voltage multipliers (CDVM) are investigated. Alternative structures for voltage multiplier based on power electronics switches are presented in high voltage pulsed power supplies application. The new topology is able to generate the desired high voltage output without increasing the voltage rating of semiconductor devices as well as capacitors. Finally, a comparative analysis is carried out between different CDVM topologies. Experimental and simulation results are presented to verify the analysis.",
"title": ""
},
{
"docid": "59c68b4e5399fbfd3f74952258c807b0",
"text": "Quaternions have been a popular tool in 3D computer graphics for more than 20 years. However, classical quaternions are restricted to the representation of rotations, whereas in graphical applications we typically work with rotation composed with translation (i.e., a rigid transformation). Dual quaternions represent rigid transformations in the same way as classical quaternions represent rotations. In this paper we show how to generalize established techniques for blending of rotations to include all rigid transformations. Algorithms based on dual quaternions are computationally more efficient than previous solutions and have better properties (constant speed, shortest path and coordinate invariance). For the specific example of skinning, we demonstrate that problems which required considerable research effort recently are trivial to solve using our dual quaternion formulation. However, skinning is only one application of dual quaternions, so several further promising research directions are suggested in the paper. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling – Geometric Transformations— [I.3.7]: Computer Graphics—Three-Dimensional Graphics and Realism – Animation",
"title": ""
},
{
"docid": "4a5959a7bcfaa0c7768d9a0d742742be",
"text": "In this paper, we are interested in understanding the interrelationships between mainstream and social media in forming public opinion during mass crises, specifically in regards to how events are framed in the mainstream news and on social networks and to how the language used in those frames may allow to infer political slant and partisanship. We study the lingual choices for political agenda setting in mainstream and social media by analyzing a dataset of more than 40M tweets and more than 4M news articles from the mass protests in Ukraine during 2013-2014 — known as \"Euromaidan\" — and the post-Euromaidan conflict between Russian, pro-Russian and Ukrainian forces in eastern Ukraine and Crimea. We design a natural language processing algorithm to analyze at scale the linguistic markers which point to a particular political leaning in online media and show that political slant in news articles and Twitter posts can be inferred with a high level of accuracy. These findings allow us to better understand the dynamics of partisan opinion formation during mass crises and the interplay between mainstream and social media in such circumstances.",
"title": ""
},
{
"docid": "3ce605f4f8e512fd94d373902f4159f9",
"text": "In this paper, we extensively tune and then compare the performance of web servers based on three different server architectures. The μserver utilizes an event-driven architecture, Knot uses the highly-efficient Capriccio thread library to implement a thread-per-connection model, and WatPipe uses a hybrid of events and threads to implement a pipeline-based server that is similar in spirit to a staged event-driven architecture (SEDA) server like Haboob.\n We describe modifications made to the Capriccio thread library to use Linux's zero-copy sendfile interface. We then introduce the SY mmetric Multi-Processor Event Driven (SYMPED) architecture in which relatively minor modifications are made to a single process event-driven (SPED) server (the μserver) to allow it to continue processing requests in the presence of blocking due to disk accesses. Finally, we describe our C++ implementation of WatPipe, which although utilizing a pipeline-based architecture, excludes the dynamic controls over event queues and thread pools used in SEDA. When comparing the performance of these three server architectures on the workload used in our study, we arrive at different conclusions than previous studies. In spite of recent improvements to threading libraries and our further improvements to Capriccio and Knot, both the event-based μserver and pipeline-based Wat-Pipe server provide better throughput (by about 18%). We also observe that when using blocking sockets to send data to clients, the performance obtained with some architectures is quite good and in one case is noticeably better than when using non-blocking sockets.",
"title": ""
},
{
"docid": "9fb9db5835c860fb376949fc24da9318",
"text": "This paper describes an implementation to enable interaction between smart home solutions and Smart Meter Gateways (SMGWs). This is conducted in the example of the approach of the AnyPLACE project to interconnect openHAB with the HAN interface of the SMGW. Furthermore, security issues in the combination of those two realms are addressed, answered and tested so that in addition to the open character of the solution, it is still secure. 1 Smart Home and Smart Metering in Europe 1.1 Challenges for Interconnecting Smart Home and Smart Metering In a time of highly volatile electricity generation, the need for a dynamic energy system and thus Smart Grids is expected [1]. Potentially, also end users with significant load or distributed energy resources can participate in the smart energy distribution by using home energy management systems or smart metering concepts which involve interactions with external market entities. One of two main challenges for interconnecting those components is the demand to support a wide range of different technologies and solutions in the background of proprietary smart home solutions. A second major challenge is the handling of private meter data according to EU requirements on smart metering as well as country specific regulations derived from them. Due to EU requirements being rather high-level, the communication and security requirements differ in each EU member country. © IFIP International Federation for Information Processing 2017 Published by Springer International Publishing AG 2017. All Rights Reserved C. Derksen and C. Weber (Eds.): SmartER Europe 2016/2017, IFIP AICT 495, pp. 136–146, 2017. DOI: 10.1007/978-3-319-66553-5_10 1.2 Approach for an Interoperable Solution The European research project AnyPLACE is developing a smart metering platform with management and control functionalities. The aim is to create a solution which interconnects in-home appliances, smart meters and also external services, and which can be applied in any European country – in “any place”. For making the solution highly interoperable, AnyPLACE is designed to have a common basis as well as adaptable elements. The generic part comprises e.g. a graphical user interface and energy management algorithms. The adaptable elements are realized in the following approach to connect the AnyPLACE core functionalities with other devices and systems. An existing open-source smart home framework openHAB [2] has been chosen to interconnect a broad variety of different technologies, systems and products. One of its core features is the possibility to amend it with new functionalities e.g. adding the support of new protocols to add new kinds of devices. This can be done by adding optional packages, which can be selected from awide range of already existing add-ons developed for different smart home appliances and systems. In the AnyPLACE project, additional country specific packages have been designed to connect meters to the smart home system, taking into account respective technological, privacy as well as security requirements which were analyzed for each addressed country. Further details about the requirements which were identified for the different European countries are described in [3]. The present paper focuses on the application environment and thus requirements of the German market and the derived solutions. At first, the regulations for the German smart metering infrastructure as well as possible resulting functionalities are sketched. 
Afterwards, the implementation of solutions to enable an interaction between this infrastructure and smart home systems is described in details. Finally, the paper gives insights of how those solutions for an interaction between the German smart metering infrastructure and smart home solutions shall be tested regarding security considerations. 2 German Smart Metering Infrastructure Functionalities 2.1 BSI Smart Metering Infrastructure Offers Platform for Connection of Subsystems In Germany, the smart meter rollout has recently been initialized by law and will start in 2017. The Federal Office for Information Security (BSI) prescribes the security architecture for a secure and transparent handling of the end users’ private meter data [4–6]. The Smart Meter Gateway (SMGW) is a core element in this architecture as it is depicted in Fig. 1. Its name suggests that the only purpose of the “Smart Meter Gateway” is to serve as a gateway to transfer meter data from smart meters to the respective energy supplier. But it is not limited to this functionality. It does provide secure communications to meters in the Local Metrological Network (LMN), but also to external service providers (EMT) in the Wide Area Network (WAN) as well as to the Home Area Network (HAN). Due to the communication with those networks, the SMGW serves as a platform to interconnect sub-systems that enable several additional functionalities. Amending the Security of the BSI Smart Metering Infrastructure 137",
"title": ""
},
{
"docid": "4aec63cb23b43f4d1d2f7ab53cedbff9",
"text": "Presently, there is no recommendation on how to assess functional status of chronic obstructive pulmonary disease (COPD) patients. This study aimed to summarize and systematically evaluate these measures.Studies on measures of COPD patients' functional status published before the end of January 2015 were included using a search filters in PubMed and Web of Science, screening reference lists of all included studies, and cross-checking against some relevant reviews. After title, abstract, and main text screening, the remaining was appraised using the Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN) 4-point checklist. All measures from these studies were rated according to best-evidence synthesis and the best-rated measures were selected.A total of 6447 records were found and 102 studies were reviewed, suggesting 44 performance-based measures and 14 patient-reported measures. The majority of the studies focused on internal consistency, reliability, and hypothesis testing, but only 21% of them employed good or excellent methodology. Their common weaknesses include lack of checks for unidimensionality, inadequate sample sizes, no prior hypotheses, and improper methods. On average, patient-reported measures perform better than performance-based measures. The best-rated patient-reported measures are functional performance inventory (FPI), functional performance inventory short form (FPI-SF), living with COPD questionnaire (LCOPD), COPD activity rating scale (CARS), University of Cincinnati dyspnea questionnaire (UCDQ), shortness of breath with daily activities (SOBDA), and short-form pulmonary functional status scale (PFSS-11), and the best-rated performance-based measures are exercise testing: 6-minute walk test (6MWT), endurance treadmill test, and usual 4-meter gait speed (usual 4MGS).Further research is needed to evaluate the reliability and validity of performance-based measures since present studies failed to provide convincing evidence. FPI, FPI-SF, LCOPD, CARS, UCDQ, SOBDA, PFSS-11, 6MWT, endurance treadmill test, and usual 4MGS performed well and are preferable to assess functional status of COPD patients.",
"title": ""
},
{
"docid": "8f47cd3066eefb2a4ceb279ba884a8a9",
"text": "BACKGROUND\nEndothelin (ET)-1 is a potent vasoconstrictor that contributes to vascular remodeling in hypertension and other cardiovascular diseases. Endogenous ET-1 is produced predominantly by vascular endothelial cells. To directly test the role of endothelium-derived ET-1 in cardiovascular pathophysiology, we specifically targeted expression of the human preproET-1 gene to the endothelium by using the Tie-2 promoter in C57BL/6 mice.\n\n\nMETHODS AND RESULTS\nTen-week-old male C57BL/6 transgenic (TG) and nontransgenic (wild type; WT) littermates were studied. TG mice exhibited 3-fold higher vascular tissue ET-1 mRNA and 7-fold higher ET-1 plasma levels than did WT mice but no significant elevation in blood pressure. Despite the absence of significant blood pressure elevation, TG mice exhibited marked hypertrophic remodeling and oxidant excess-dependent endothelial dysfunction of resistance vessels, altered ET-1 and ET-3 vascular responses, and significant increases in ET(B) expression compared with WT littermates. Moreover, TG mice generated significantly higher oxidative stress, possibly through increased activity and expression of vascular NAD(P)H oxidase than did their WT counterparts.\n\n\nCONCLUSIONS\nIn this new murine model of endothelium-restricted human preproET-1 overexpression, ET-1 caused structural remodeling and endothelial dysfunction of resistance vessels, consistent with a direct nonhemodynamic effect of ET-1 on the vasculature, at least in part through the activation of vascular NAD(P)H oxidase.",
"title": ""
},
{
"docid": "ef130d13d27903181f5337d03b8f88b6",
"text": "In this paper we address reduction of complexity in management of scientific computations in distributed computing environments. We explore an approach based on separation of computation design (application development) and distributed execution of computations, and investigate best practices for construction of virtual infrastructures for computational science - software systems that abstract and virtualize the processes of managing scientific computations on heterogeneous distributed resource systems. As a result we present StratUm, a toolkit for management of eScience computations. To illustrate use of the toolkit, we present it in the context of a case study where we extend the capabilities of an existing kinetic Monte Carlo software framework to utilize distributed computational resources. The case study illustrates a viable design pattern for construction of virtual infrastructures for distributed scientific computing. The resulting infrastructure is evaluated using a computational experiment from molecular systems biology.",
"title": ""
},
{
"docid": "df4146f0b223b9bc7a983a4198589b48",
"text": "Since its official introduction in 2012, the Robot Web Tools project has grown tremendously as an open-source community, enabling new levels of interoperability and portability across heterogeneous robot systems, devices, and front-end user interfaces. At the heart of Robot Web Tools is the rosbridge protocol as a general means for messaging ROS topics in a client-server paradigm suitable for wide area networks, and human-robot interaction at a global scale through modern web browsers. Building from rosbridge, this paper describes our efforts with Robot Web Tools to advance: 1) human-robot interaction through usable client and visualization libraries for more efficient development of front-end human-robot interfaces, and 2) cloud robotics through more efficient methods of transporting high-bandwidth topics (e.g., kinematic transforms, image streams, and point clouds). We further discuss the significant impact of Robot Web Tools through a diverse set of use cases that showcase the importance of a generic messaging protocol and front-end development systems for human-robot interaction.",
"title": ""
},
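The rosbridge protocol mentioned in the passage above (docid df4146f0...) exchanges JSON messages over a WebSocket connection. As a rough illustration (not code from the paper), the sketch below builds the subscribe and publish operations in Python; the topic name, message payload, and the default port 9090 are assumptions based on typical rosbridge setups.

```python
import json

# Typical rosbridge endpoint (assumption: default rosbridge_server WebSocket port).
ROSBRIDGE_URL = "ws://localhost:9090"

# Ask rosbridge to forward messages published on a ROS topic to this client.
subscribe_msg = {
    "op": "subscribe",
    "topic": "/chatter",            # hypothetical topic name
    "type": "std_msgs/String",
}

# Publish a message to a ROS topic through rosbridge.
publish_msg = {
    "op": "publish",
    "topic": "/chatter",
    "msg": {"data": "hello from a web client"},
}

# These JSON strings would be sent over a WebSocket connection to ROSBRIDGE_URL,
# e.g. with the websocket-client or roslibpy packages.
print(json.dumps(subscribe_msg))
print(json.dumps(publish_msg))
```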
{
"docid": "4e9438fede70ff0aa1c87cdcd64f0bac",
"text": "This paper presents a novel formulation for detecting objects with articulated rigid bodies from high-resolution monitoring images, particularly engineering vehicles. There are many pixels in high-resolution monitoring images, and most of them represent the background. Our method first detects object patches from monitoring images using a coarse detection process. In this phase, we build a descriptor based on histograms of oriented gradient, which contain color frequency information. Then we use a linear support vector machine to rapidly detect many image patches that may contain object parts, with a low false negative rate and a high false positive rate. In the second phase, we apply a refinement classification to determine the patches that actually contain objects. In this stage, we increase the size of the image patches so that they include the complete object using models of the object parts. Then an accelerated and improved salient mask is used to improve the performance of the dense scale-invariant feature transform descriptor. The detection process returns the absolute position of positive objects in the original images. We have applied our methods to three datasets to demonstrate their effectiveness.",
"title": ""
},
{
"docid": "113cf34bf2a86a8f1a041cfd366c00b7",
"text": "People perceive and conceive of activity in terms of discrete events. Here the authors propose a theory according to which the perception of boundaries between events arises from ongoing perceptual processing and regulates attention and memory. Perceptual systems continuously make predictions about what will happen next. When transient errors in predictions arise, an event boundary is perceived. According to the theory, the perception of events depends on both sensory cues and knowledge structures that represent previously learned information about event parts and inferences about actors' goals and plans. Neurological and neurophysiological data suggest that representations of events may be implemented by structures in the lateral prefrontal cortex and that perceptual prediction error is calculated and evaluated by a processing pathway, including the anterior cingulate cortex and subcortical neuromodulatory systems.",
"title": ""
},
{
"docid": "4ad106897a19830c80a40e059428f039",
"text": "In 1972, and later in 1979, at the peak of the golden era of Good Old Fashioned Artificial Intelligence (GOFAI), the voice of philosopher Hubert Dreyfus made itself heard as one of the few calls against the hubristic programme of modelling the human mind as a mechanism of symbolic information processing (Dreyfus, 1979). He did not criticise particular solutions to specific problems; instead his deep concern was with the very foundations of the programme. His critical stance was unusual, at least for most GOFAI practitioners, in that it did not rely on technical issues, but on a philosophical position emanating from phenomenology and existentialism, a fact contributing to his claims being largely ignored or dismissed for a long time by the AI community. But, for the most part, he was eventually proven right. AI’s over-reliance on worldmodelling and planning went against the evidence provided by phenomenology of human activity as situated and with a clear and ever-present focus of practical concern – the body and not some algorithm is the originating locus of intelligent activity (if by intelligent we understand intentional, directed and flexible), and the world is not the sum total of all available facts, but the world-as-it-is-for-this-body. Such concerns were later vindicated by the Brooksian revolution in autonomous robotics with its foundations on embodiment, situatedness and de-centralised mechanisms (Brooks, 1991). Brooks’ practical and methodological preoccupations – building robots largely based on biologically plausible principles and capable of acting in the real world – proved parallel, despite his claim that his approach was not “German philosophy”, to issues raised by Dreyfus. Putting robotics back as the acid test of AI, as oppossed to playing chess and proving theorems, is now often seen as a positive response to Dreyfus’ point that AI was unable to capture true meaning by the summing of meaningless processes. This criticism was later devastatingly recast in Searle’s Chinese Room argument (1980), and extended by Harnad’s Symbol Grounding Problem (1990). Meaningful activity – that is, meaningful for the agent and not only for the designer – must obtain through sensorimotor grounding in the agent’s world, and for this both a body and world are needed. Following these developments, work in autonomous robotics and new AI since the 1990s rebelled against pure connectionism because of its lack of biological plausibility and also because most of connectionist research was carried out in vacuo – it was compellingly argued that neural network models as simple input/output processing units are meaningless for modelling the cognitive capabilities of insects, let alone humans, unless they are embedded in a closed sensorimotor loop of interaction with a world (Cliff, 1991). Objective meaning, that is meaningful internal states and states of the world, can only obtain in an embodied agent whose effector and sensor activities become coordinated",
"title": ""
},
{
"docid": "d3df310f37045f4e85235623d7539ba4",
"text": "The aim of this paper is to review the available literature on goal scoring in elite male football leagues. A systematic search of two electronic databases (SPORTDiscus with Full Text and ISI Web Knowledge All Databases) was conducted and of the 610 studies initially identified, 19 were fully analysed. Studies that fitted all the inclusion criteria were organised according to the research approach adopted (static or dynamic). The majority of these studies were conducted in accordance with the static approach (n=15), where the data were collected without considering dynamic of performance during matches and were analysed using standard statistical methods for data analysis. They focused predominantly on a description of key performance indicators (technical and tactical). Meanwhile, in a few studies the dynamic approach (n=4) was adopted, where performance variables were recorded taking into account the chronological and sequential order in which they occurred. Different advanced analysis techniques for assessing performance evolution over time during the match were used in this second group of studies. The strengths and limitations of both approaches in terms of providing the meaningful information for coaches are discussed in the present study.",
"title": ""
},
{
"docid": "deb1c65a6e2dfb9ab42f28c74826309c",
"text": "Large knowledge bases consisting of entities and relationships between them have become vital sources of information for many applications. Most of these knowledge bases adopt the Semantic-Web data model RDF as a representation model. Querying these knowledge bases is typically done using structured queries utilizing graph-pattern languages such as SPARQL. However, such structured queries require some expertise from users which limits the accessibility to such data sources. To overcome this, keyword search must be supported. In this paper, we propose a retrieval model for keyword queries over RDF graphs. Our model retrieves a set of subgraphs that match the query keywords, and ranks them based on statistical language models. We show that our retrieval model outperforms the-state-of-the-art IR and DB models for keyword search over structured data using experiments over two real-world datasets.",
"title": ""
},
{
"docid": "640ba15172b56373b3a6bdfe9f5f6cd4",
"text": "This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. To effectively scale these algorithms beyond a trivial number of agents, we combine them with a multi-agent variant of curriculum learning. The algorithms are benchmarked on a suite of cooperative control tasks, including tasks with discrete and continuous actions, as well as tasks with dozens of cooperating agents. We report the performance of the algorithms using different neural architectures, training procedures, and reward structures. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods and that curriculum learning is vital to scaling reinforcement learning algorithms in complex multiagent domains.",
"title": ""
},
{
"docid": "e9e215d1aa04c0448ced37779590f583",
"text": "We introduce a framework to transfer knowledge acquired from a repository of (heterogeneous) supervised datasets to new unsupervised datasets. Our perspective avoids the subjectivity inherent in unsupervised learning by reducing it to supervised learning, and provides a principled way to evaluate unsupervised algorithms. We demonstrate the versatility of our framework via rigorous agnostic bounds on a variety of unsupervised problems. In the context of clustering, our approach helps choose the number of clusters and the clustering algorithm, remove the outliers, and provably circumvent Kleinberg’s impossibility result. Experiments across hundreds of problems demonstrate improvements in performance on unsupervised data with simple algorithms despite the fact our problems come from heterogeneous domains. Additionally, our framework lets us leverage deep networks to learn common features across many small datasets, and perform zero shot learning.",
"title": ""
},
{
"docid": "7a4bf293b22a405c4b3c41a914bc7f3f",
"text": "Sutton, Szepesvári and Maei (2009) recently introduced the first temporal-difference learning algorithm compatible with both linear function approximation and off-policy training, and whose complexity scales only linearly in the size of the function approximator. Although their gradient temporal difference (GTD) algorithm converges reliably, it can be very slow compared to conventional linear TD (on on-policy problems where TD is convergent), calling into question its practical utility. In this paper we introduce two new related algorithms with better convergence rates. The first algorithm, GTD2, is derived and proved convergent just as GTD was, but uses a different objective function and converges significantly faster (but still not as fast as conventional TD). The second new algorithm, linear TD with gradient correction, or TDC, uses the same update rule as conventional TD except for an additional term which is initially zero. In our experiments on small test problems and in a Computer Go application with a million features, the learning rate of this algorithm was comparable to that of conventional TD. This algorithm appears to extend linear TD to off-policy learning with no penalty in performance while only doubling computational requirements.",
"title": ""
}
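The passage above (docid 7a4bf293...) describes TDC as the conventional linear TD update plus a correction term driven by a second weight vector that starts at zero. Below is a minimal sketch of one TDC step with linear function approximation; the feature vectors, step sizes, and reward are placeholders, and the update follows the commonly cited formulation rather than any code from the paper.

```python
import numpy as np

def tdc_update(theta, w, phi, phi_next, reward, gamma=0.99, alpha=0.01, beta=0.05):
    """One linear TD-with-gradient-correction (TDC) step.

    theta    : main weight vector (value-function parameters)
    w        : auxiliary weight vector driving the correction term
    phi      : feature vector of the current state
    phi_next : feature vector of the next state
    """
    delta = reward + gamma * (phi_next @ theta) - (phi @ theta)   # TD error
    # Conventional TD step plus the correction term (zero at the start, since w begins at 0).
    theta = theta + alpha * (delta * phi - gamma * phi_next * (phi @ w))
    # Auxiliary weights track the expected TD error as a linear function of the features.
    w = w + beta * (delta - phi @ w) * phi
    return theta, w

# Tiny usage example with random placeholder features.
rng = np.random.default_rng(0)
theta, w = np.zeros(8), np.zeros(8)
for _ in range(100):
    phi, phi_next = rng.random(8), rng.random(8)
    theta, w = tdc_update(theta, w, phi, phi_next, reward=rng.random())
```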
] |
scidocsrr
|
f177518fa8695384cb8ef7b0647e7236
|
Practical Byzantine Group Communication
|
[
{
"docid": "fc1c3291c631562a6d1b34d5b5ccd27e",
"text": "There are many methods for making a multicast protocol “reliable.” At one end of the spectrum, a reliable multicast protocol might offer tomicity guarantees, such as all-or-nothing delivery, delivery ordering, and perhaps additional properties such as virtually synchronous addressing. At the other are protocols that use local repair to overcome transient packet loss in the network, offering “best effort” reliability. Yet none of this prior work has treated stability of multicast delivery as a basic reliability property, such as might be needed in an internet radio, television, or conferencing application. This article looks at reliability with a new goal: development of a multicast protocol which is reliable in a sense that can be rigorously quantified and includes throughput stability guarantees. We characterize this new protocol as a “bimodal multicast” in reference to its reliability model, which corresponds to a family of bimodal probability distributions. Here, we introduce the protocol, provide a theoretical analysis of its behavior, review experimental results, and discuss some candidate applications. These confirm that bimodal multicast is reliable, scalable, and that the protocol provides remarkably stable delivery throughput.",
"title": ""
}
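Bimodal multicast, summarized in the passage above (docid fc1c3291...), pairs a best-effort multicast with a periodic anti-entropy phase in which processes gossip digests of received messages to a few random peers and recover anything they missed. The fragment below is only a schematic of such a gossip round; the fan-out, data structures, and push/pull details are assumptions for illustration, not the protocol's actual implementation.

```python
import random

FANOUT = 2  # number of peers gossiped to per round (assumed value)

def gossip_round(processes):
    """One anti-entropy round over a dict: pid -> set of received message ids."""
    for pid, received in processes.items():
        peers = random.sample([p for p in processes if p != pid],
                              min(FANOUT, len(processes) - 1))
        for peer in peers:
            # Send a digest of what pid holds; the peer requests anything it is missing.
            missing_at_peer = received - processes[peer]
            processes[peer] |= missing_at_peer
            # Symmetrically, pid can pull messages it missed from the peer's digest.
            processes[pid] |= processes[peer] - received

# Example: message 42 initially reaches only process 0 (lossy best-effort multicast),
# then spreads to everyone within a few gossip rounds.
procs = {i: set() for i in range(8)}
procs[0].add(42)
for _ in range(4):
    gossip_round(procs)
print(all(42 in s for s in procs.values()))
```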
] |
[
{
"docid": "4bc65e3c420fae22b2b78de36b8b7bf3",
"text": "This paper presents a tuturial introduction to predictions of stock time series. The various approaches of technical and fundamental analysis is presented and the prediction problem is formulated as a special case of inductive learning. The problems with performance evaluation of near-random-walk processes are illustrated with examples together with guidelines for avoiding the risk of data-snooping. The connections to concepts like \"the bias/variance dilemma\", overtraining and model complexity are further covered. Existing benchmarks and testing metrics are surveyed and some new measures are introduced.",
"title": ""
},
{
"docid": "55631b81d46fc3dcaad8375176cb1c68",
"text": "UNLABELLED\nThe need for long-term retention to prevent post-treatment tooth movement is now widely accepted by orthodontists. This may be achieved with removable retainers or permanent bonded retainers. This article aims to provide simple guidance for the dentist on how to maintain and repair both removable and fixed retainers.\n\n\nCLINICAL RELEVANCE\nThe general dental practitioner is more likely to review patients over time and needs to be aware of the need for long-term retention and how to maintain and repair the retainers.",
"title": ""
},
{
"docid": "2f6b866048c302b93b7f4eaf0907c7e1",
"text": "This study aimed to determine differences in speech perception and subjective preference after upgrade from the FSP coding strategy to the FS4 or FS4p coding strategies. Subjects were tested at the point of upgrade (n=10), and again at 1-(n=10), 3-(n=8), 6-(n=8) and 12 months (n=8) after the upgrade to the FS4 or FS4p coding strategy. In between test intervals patients had to use the FS4 or FS4p strategy in everyday life. Primary outcome measures, chosen to best evaluate individual speech understanding, were the Freiburg Monosyllable Test in quiet, the Oldenburg Sentence Test (OLSA) in noise, and the Hochmair-Schulz-Moser (HSM) Sentence Test in noise. To measure subjective sound quality the Hearing Implant Sound Quality Index was used. Subjects with the FS4/FS4p strategy performed as well as subjects with the FSP coding strategy in the speech tests. The subjective perception of subjects showed that subjects perceived a ‘moderate’ or ‘poor’ auditory benefit with the FS4/FS4p coding strategy. Subjects with the FS4 or FS4p coding strategies perform well in everyday situations. Both coding strategies offer another tool to individualize the fitting of audio processors and grant access to satisfying sound quality and speech perception.",
"title": ""
},
{
"docid": "b03b41f27b3046156a922f858349d4ed",
"text": "Charophytes are macrophytic green algae, occurring in standing and running waters throughout the world. Species descriptions of charophytes are contradictive and different determination keys use various morphologic characters for species discrimination. Chara intermedia Braun, C. baltica Bruzelius and C. hispida Hartman are treated as three species by most existing determination keys, though their morphologic differentiation is based on different characteristics. Amplified fragment length polymorphism (AFLP) was used to detect genetically homogenous groups within the C. intermedia-C. baltica-C. hispida-cluster, by the analysis of 122 C. intermedia, C. baltica and C. hispida individuals from central and northern Europe. C. hispida clustered in a distinct genetic group in the AFLP analysis and could be determined morphologically by its aulacanthous cortification. However, for C. intermedia and C. baltica no single morphologic character was found that differentiated the two genetic groups, thus C. intermedia and C. baltica are considered as cryptic species. All C. intermedia specimen examined came from freshwater habitats, whereas the second group, C. baltica, grew in brackish water. We conclude that the species differentiation between C. intermedia and C. baltica, which is assumed to be reflected by the genetic discrimination groups, corresponds more with ecological (salinity preference) than morphologic characteristics. Based on the genetic analysis three differing colonization models of the Baltic Sea and the Swedish lakes with C. baltica and C. intermedia were discussed. As samples of C. intermedia and C. baltica have approximately the same Jaccard coefficient for genetic similarity, we suggest that C. baltica colonized the Baltic Sea after the last glacial maximum from refugia along the Atlantic and North Sea coasts. Based on the similarity of C. intermedia intermediate individuals of Central Europe and Sweden we assume a colonization of the Swedish lakes from central Europe.",
"title": ""
},
{
"docid": "af0178d0bb154c3995732e63b94842ca",
"text": "Cyborg intelligence is an emerging kind of intelligence paradigm. It aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via neural interfaces, enhancing strength by combining the biological cognition capability with the machine computational capability. Cyborg intelligence is considered to be a new way to augment living beings with machine intelligence. In this paper, we build rat cyborgs to demonstrate how they can expedite the maze escape task with integration of machine intelligence. We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in fourteen diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains.",
"title": ""
},
{
"docid": "d59c6a2dd4b6bf7229d71f3ae036328a",
"text": "Community search over large graphs is a fundamental problem in graph analysis. Recent studies propose to compute top-k influential communities, where each reported community not only is a cohesive subgraph but also has a high influence value. The existing approaches to the problem of top-k influential community search can be categorized as index-based algorithms and online search algorithms without indexes. The index-based algorithms, although being very efficient in conducting community searches, need to pre-compute a specialpurpose index and only work for one built-in vertex weight vector. In this paper, we investigate online search approaches and propose an instance-optimal algorithm LocalSearch whose time complexity is linearly proportional to the size of the smallest subgraph that a correct algorithm needs to access without indexes. In addition, we also propose techniques to make LocalSearch progressively compute and report the communities in decreasing influence value order such that k does not need to be specified. Moreover, we extend our framework to the general case of top-k influential community search regarding other cohesiveness measures. Extensive empirical studies on real graphs demonstrate that our algorithms outperform the existing online search algorithms by several orders of magnitude.",
"title": ""
},
{
"docid": "a0787399eaca5b59a87ed0644da10fc6",
"text": "This work faces the problem of combining the outputs of two co-siting BTS, one operating with 2G networks and the other with 3G (or 4G) networks. This requirement is becoming more and more frequent because many operators, for increasing the capacity for data and voice signal transmission, have overlaid the new network in 3G or 4G technology to the existing 2G infrastructure. The solution here proposed is constituted by a low loss combiner realized through a directional double single-sided filtering system, which manages both TX and RX signals from each BTS output. The design approach for the combiner architecture is described with a particular emphasis on the synthesis of the double single-sided filters (realized by means of extracted pole technique). A prototype of the low-loss combiner has been designed and fabricated for validating the proposed approach. The results obtained are here discussed making into evidence the pros & cons of the proposed solution.",
"title": ""
},
{
"docid": "b7e8da8733a2edd31d1fe53236f5eedf",
"text": "Cancer stem cell (CSC) biology and tumor immunology have shaped our understanding of tumorigenesis. However, we still do not fully understand why tumors can be contained but not eliminated by the immune system and whether rare CSCs are required for tumor propagation. Long latency or recurrence periods have been described for most tumors. Conceptually, this requires a subset of malignant cells which is capable of initiating tumors, but is neither eliminated by immune cells nor able to grow straight into overt tumors. These criteria would be fulfilled by CSCs. Stem cells are pluripotent, immune-privileged, and long-living, but depend on specialized niches. Thus, latent tumors may be maintained by a niche-constrained reservoir of long-living CSCs that are exempt from immunosurveillance while niche-independent and more immunogenic daughter cells are constantly eliminated. The small subpopulation of CSCs is often held responsible for tumor initiation, metastasis, and recurrence. Experimentally, this hypothesis was supported by the observation that only this subset can propagate tumors in non-obese diabetic/scid mice, which lack T and B cells. Yet, the concept was challenged when an unexpectedly large proportion of melanoma cells were found to be capable of seeding complex tumors in mice which further lack NK cells. Moreover, the link between stem cell-like properties and tumorigenicity was not sustained in these highly immunodeficient animals. In humans, however, tumor-propagating cells must also escape from immune-mediated destruction. The ability to persist and to initiate neoplastic growth in the presence of immunosurveillance - which would be lost in a maximally immunodeficient animal model - could hence be a decisive criterion for CSCs. Consequently, integrating scientific insight from stem cell biology and tumor immunology to build a new concept of \"CSC immunology\" may help to reconcile the outlined contradictions and to improve our understanding of tumorigenesis.",
"title": ""
},
{
"docid": "f1d0fc62f47c5fd4f47716a337fd9ed0",
"text": "We present the system architecture of a mobile outdoor augmented reality system for the Archeoguide project. We begin with a short introduction to the project. Then we present the hardware we chose for the mobile system and we describe the system architecture we designed for the software implementation. We conclude this paper with the first results obtained from experiments we made during our trials at ancient Olympia in Greece.",
"title": ""
},
{
"docid": "40fda9cba754c72f1fba17dd3a5759b2",
"text": "Humans can easily recognize handwritten words, after gaining basic knowledge of languages. This knowledge needs to be transferred to computers for automatic character recognition. The work proposed in this paper tries to automate recognition of handwritten hindi isolated characters using multiple classifiers. For feature extraction, it uses histogram of oriented gradients as one feature and profile projection histogram as another feature. The performance of various classifiers has been evaluated using theses features experimentally and quadratic SVM has been found to produce better results.",
"title": ""
},
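The passage above (docid 40fda9cb...) extracts a histogram-of-oriented-gradients feature and a profile projection histogram and feeds them to a quadratic SVM. A rough sketch of that pipeline follows; the image size, HOG parameters, thresholding, and the use of scikit-image and scikit-learn are assumptions, with a degree-2 polynomial kernel standing in for the "quadratic SVM".

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def extract_features(char_img):
    """char_img: 2-D grayscale array of a single isolated character (assumed 32x32)."""
    hog_feat = hog(char_img, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))
    # Profile projection histograms: sums of foreground pixels along rows and columns.
    binary = (char_img > char_img.mean()).astype(float)
    proj_feat = np.concatenate([binary.sum(axis=0), binary.sum(axis=1)])
    return np.concatenate([hog_feat, proj_feat])

def train(char_imgs, labels):
    """char_imgs: list of character images; labels: their class labels (placeholders)."""
    X = np.stack([extract_features(img) for img in char_imgs])
    clf = SVC(kernel="poly", degree=2)   # degree-2 polynomial kernel as the "quadratic" SVM
    clf.fit(X, labels)
    return clf
```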
{
"docid": "8387c06436e850b4fb00c6b5e0dcf19f",
"text": "Since the beginning of the epidemic, human immunodeficiency virus (HIV) has infected around 70 million people worldwide, most of whom reside is sub-Saharan Africa. There have been very promising developments in the treatment of HIV with anti-retroviral drug cocktails. However, drug resistance to anti-HIV drugs is emerging, and many people infected with HIV have adverse reactions or do not have ready access to currently available HIV chemotherapies. Thus, there is a need to discover new anti-HIV agents to supplement our current arsenal of anti-HIV drugs and to provide therapeutic options for populations with limited resources or access to currently efficacious chemotherapies. Plant-derived natural products continue to serve as a reservoir for the discovery of new medicines, including anti-HIV agents. This review presents a survey of plants that have shown anti-HIV activity, both in vitro and in vivo.",
"title": ""
},
{
"docid": "32097bd3faa683f451ae982554f8ef5b",
"text": "According to the growth of the Internet technology, there is a need to develop strategies in order to maintain security of system. One of the most effective techniques is Intrusion Detection System (IDS). This system is created to make a complete security in a computerized system, in order to pass the Intrusion system through the firewall, antivirus and other security devices detect and deal with it. The Intrusion detection techniques are divided into two groups which includes supervised learning and unsupervised learning. Clustering which is commonly used to detect possible attacks is one of the branches of unsupervised learning. Fuzzy sets play an important role to reduce spurious alarms and Intrusion detection, which have uncertain quality.This paper investigates k-means fuzzy and k-means algorithm in order to recognize Intrusion detection in system which both of the algorithms use clustering method.",
"title": ""
},
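The passage above (docid 32097bd3...) compares k-means with its fuzzy variant for clustering-based intrusion detection. The sketch below shows the core of both on placeholder feature vectors: plain k-means via scikit-learn and a hand-rolled fuzzy membership/centre update; the fuzzifier m, cluster count, and data are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def fuzzy_cmeans(X, k=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns soft memberships U (n x k) and centres C (k x d)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(k), size=len(X))          # random initial memberships
    for _ in range(iters):
        # Centres are membership-weighted means of the data.
        C = (U.T ** m @ X) / (U.T ** m).sum(axis=1, keepdims=True)
        # Distances from every point to every centre.
        D = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        # Standard fuzzy membership update, then row-normalize.
        U = 1.0 / (D ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return U, C

# Placeholder "records": a handful of numeric features per observation.
X = np.random.default_rng(1).random((200, 5))
hard_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
soft_memberships, centres = fuzzy_cmeans(X, k=2)
```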
{
"docid": "63b2bc943743d5b8ef9220fd672df84f",
"text": "In multiagent systems, we often have a set of agents each of which have a preference ordering over a set of items and one would like to know these preference orderings for various tasks, for example, data analysis, preference aggregation, voting etc. However, we often have a large number of items which makes it impractical to ask the agents for their complete preference ordering. In such scenarios, we usually elicit these agents’ preferences by asking (a hopefully small number of) comparison queries — asking an agent to compare two items. Prior works on preference elicitation focus on unrestricted domain and the domain of single peaked preferences and show that the preferences in single peaked domain can be elicited by much less number of queries compared to unrestricted domain. We extend this line of research and study preference elicitation for single peaked preferences on trees which is a strict superset of the domain of single peaked preferences. We show that the query complexity crucially depends on the number of leaves, the path cover number, and the distance from path of the underlying single peaked tree, whereas the other natural parameters like maximum degree, diameter, pathwidth do not play any direct role in determining query complexity. We then investigate the query complexity for finding a weak Condorcet winner for preferences single peaked on a tree and show that this task has much less query complexity than preference elicitation. Here again we observe that the number of leaves in the underlying single peaked tree and the path cover number of the tree influence the query complexity of the problem.",
"title": ""
},
{
"docid": "d8802a7fcdbd306bd474f3144bc688a4",
"text": "Shape from defocus (SFD) is one of the most popular techniques in monocular 3D vision. While most SFD approaches require two or more images of the same scene captured at a fixed view point, this paper presents an efficient approach to estimate absolute depth from a single defocused image. Instead of directly measuring defocus level of each pixel, we propose to design a sequence of aperture-shape filters to segment a defocused image by defocus level. A boundary-weighted belief propagation algorithm is employed to obtain a smooth depth map. We also give an estimation of depth error. Extensive experiments show that our approach outperforms the state-of-the-art single-image SFD approaches both in precision of the estimated absolute depth and running time.",
"title": ""
},
{
"docid": "eb8d681fcfd5b18c15dd09738ab4717c",
"text": "Building a dialogue agent to fulfill complex tasks, such as travel planning, is challenging because the agent has to learn to collectively complete multiple subtasks. For example, the agent needs to reserve a hotel and book a flight so that there leaves enough time for commute between arrival and hotel check-in. This paper addresses this challenge by formulating the task in the mathematical framework of options over Markov Decision Processes (MDPs), and proposing a hierarchical deep reinforcement learning approach to learning a dialogue manager that operates at different temporal scales. The dialogue manager consists of (1) a top-level dialogue policy that selects among subtasks or options, (2) a low-level dialogue policy that selects primitive actions to complete the subtask given by the top-level policy, and (3) a global state tracker that helps ensure all cross-subtask constraints be satisfied. Experiments on a travel planning task with simulated and real users show that our approach leads to significant improvements over two baselines, one based on handcrafted rules and the other based on flat deep reinforcement learning.",
"title": ""
},
{
"docid": "e236a7cd184bbd09c9ffd90ad4cfd636",
"text": "It has been a challenge for financial economists to explain some stylized facts observed in securities markets, among them, high levels of trading volume. The most prominent explanation of excess volume is overconfidence. High market returns make investors overconfident and as a consequence, these investors trade more subsequently. The aim of our paper is to study the impact of the phenomenon of overconfidence on the trading volume and its role in the formation of the excess volume on the Tunisian stock market. Based on the work of Statman, Thorley and Vorkink (2006) and by using VAR models and impulse response functions, we find little evidence of the overconfidence hypothesis when we use volume (shares traded) as proxy of trading volume.",
"title": ""
},
{
"docid": "957863eafec491fae0710dd33c043ba8",
"text": "In this paper, we present an automated behavior analysis system developed to assist the elderly and individuals with disabilities who live alone, by learning and predicting standard behaviors to improve the efficiency of their healthcare. Established behavioral patterns have been recorded using wireless sensor networks composed by several event-based sensors that captured raw measures of the actions of each user. Using these data, behavioral patterns of the residents were extracted using Bayesian statistics. The behavior was statistically estimated based on three probabilistic features we introduce, namely sensor activation likelihood, sensor sequence likelihood, and sensor event duration likelihood. Real data obtained from different home environments were used to verify the proposed method in the individual analysis. The results suggest that the monitoring system can be used to detect anomalous behavior signs which could reflect changes in health status of the user, thus offering an opportunity to intervene if required.",
"title": ""
},
{
"docid": "9e7ff381dc439d9129ba936c7f067189",
"text": "We present a method for the extraction of synonyms for German particle verbs based on a word-aligned German-English parallel corpus: by translating the particle verb to a pivot, which is then translated back, a set of synonym candidates can be extracted and ranked according to the respective translation probabilities. In order to deal with separated particle verbs, we apply re-ordering rules to the German part of the data. In our evaluation against a gold standard, we compare different pre-processing strategies (lemmatized vs. inflected forms) and introduce language model scores of synonym candidates in the context of the input particle verb as well as distributional similarity as additional re-ranking criteria. Our evaluation shows that distributional similarity as a re-ranking feature is more robust than language model scores and leads to an improved ranking of the synonym candidates. In addition to evaluating against a gold standard, we also present a small-scale manual evaluation.",
"title": ""
},
{
"docid": "4fd421bbe92b40e85ffd66cf0084b1b8",
"text": "Real-time performance of adaptive digital signal processing algorithms is required in many applications but it often means a high computational load for many conventional processors. In this paper, we present a configurable hardware architecture for adaptive processing of noisy signals for target detection based on Constant False Alarm Rate (CFAR) algorithms. The architecture has been designed to deal with parallel/pipeline processing and to be configured for three version of CFAR algorithms, the Cell-Average, the Max and the Min CFAR. The proposed architecture has been implemented on a Field Programmable Gate Array (FPGA) device providing good performance improvements over software implementations. FPGA implementation results are presented and discussed.",
"title": ""
},
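The Cell-Averaging CFAR variant supported by the architecture above (docid 4fd421bb...) estimates the local noise level from training cells around the cell under test and scales it to hold the false-alarm rate constant. Below is a small software reference of CA-CFAR on a 1-D range profile; the window sizes and the exponential-noise scaling factor are generic textbook choices, not parameters from the paper or its FPGA design.

```python
import numpy as np

def ca_cfar(power, num_train=16, num_guard=2, pfa=1e-3):
    """Return a boolean detection mask for a 1-D array of received power samples."""
    n = len(power)
    half = num_train // 2
    # Scaling factor for CA-CFAR assuming exponentially distributed noise power.
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for i in range(half + num_guard, n - half - num_guard):
        # Training cells on both sides of the cell under test, skipping the guard cells.
        lead = power[i - num_guard - half : i - num_guard]
        lag = power[i + num_guard + 1 : i + num_guard + 1 + half]
        noise = (lead.sum() + lag.sum()) / num_train
        detections[i] = power[i] > alpha * noise
    return detections

# Example: a synthetic target at index 100 buried in exponential noise.
rng = np.random.default_rng(0)
signal = rng.exponential(1.0, 256)
signal[100] += 30.0
print(np.flatnonzero(ca_cfar(signal)))
```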
{
"docid": "183df189a37dc4c4a174792fb8464d3d",
"text": "Rule engines form an essential component of most service execution frameworks in a Service Oriented Architecture (SOA) ecosystem. The efficiency of a service execution framework critically depends on the performance of the rule engine it uses to manage it's operations. Most common rule engines suffer from the fundamental performance issues of the Rete algorithm that they internally use for faster matching of rules against incoming facts. In this paper, we present the design of a scalable architecture of a service rule engine, where a rule clustering and hashing based mechanism is employed for lazy loading of relevant service rules and a prediction based technique for rule evaluation is used for faster actuation of the rules. We present experimental results to demonstrate the efficacy of the proposed rule engine framework over contemporary ones.",
"title": ""
}
] |
scidocsrr
|
e106d7c60ba7fb7cf13d7339cc7771fe
|
Brief Announcement: ZeroBlock: Timestamp-Free Prevention of Block-Withholding Attack in Bitcoin
|
[
{
"docid": "937d93600ad3d19afda31ada11ea1460",
"text": "Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools -- the game is called a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. As a case study, we analyze the Bitcoin crypto currency which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the \"block withholding attack\". This attack is a topic of debate, initially thought to be ill-incentivized in today's pool protocols: i.e., causing a net loss to the attacker, and later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long-run, but may not be so for a short duration. This implies that existing pool protocols are insecure, and if the attack is conducted systematically, Bitcoin pools could lose millions of dollars worth in months. The equilibrium state is a mixed strategy -- that is -- in equilibrium all clients are incentivized to probabilistically attack to maximize their payoffs rather than participate honestly. As a result, the Bitcoin network is incentivized to waste a part of its resources simply to compete.",
"title": ""
},
{
"docid": "cadafd50eba3e60d8133520ff15fcfb8",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Security of Electronic Payment Systems: A Comprehensive Survey Siamak Solat",
"title": ""
}
] |
[
{
"docid": "89e25ae1d0f5dbe3185a538c2318b447",
"text": "This paper presents a fully-integrated 3D image radar engine utilizing beamforming for electrical scanning and precise ranging technique for distance measurement. Four transmitters and four receivers form a sensor frontend with phase shifters and power combiners adjusting the beam direction. A built-in 31.3 GHz clock source and a frequency tripler provide both RF carrier and counting clocks for the distance measurement. Flip-chip technique with low-temperature co-fired ceramic (LTCC) antenna design creates a miniature module as small as 6.5 × 4.4 × 0.8 cm3. Designed and fabricated in 65 nm CMOS technology, the transceiver array chip dissipates 960 mW from a 1.2-V supply and occupies chip area of 3.6 × 2.1 mm 2. This prototype achieves ±28° scanning range, 2-m maximum distance, and 1 mm depth resolution.",
"title": ""
},
{
"docid": "5f4c9518ad93c7916010efcae888cefe",
"text": "Honeypots and similar sorts of decoys represent only the most rudimentary uses of deception in protection of information systems. But because of their relative popularity and cultural interest, they have gained substantial attention in the research and commercial communities. In this paper we will introduce honeypots and similar sorts of decoys, discuss their historical use in defense of information systems, and describe some of their uses today. We will then go into a bit of the theory behind deceptions, discuss their limitations, and put them in the greater context of information protection. 1. Background and History Honeypots and other sorts of decoys are systems or components intended to cause malicious actors to attack the wrong targets. Along the way, they produce potentially useful information for defenders. 1.1 Deception fundamentals According to the American Heritage Dictionary of the English Language (1981): \"deception\" is defined as \"the act of deceit\" \"deceit\" is defined as \"deception\". Fundamentally, deception is about exploiting errors in cognitive systems for advantage. History shows that deception is achieved by systematically inducing and suppressing signals entering the target cognitive system. There have been many approaches to the identification of cognitive errors and methods for their exploitation, and some of these will be explored here. For more thorough coverage, see [68]. Honeypots and decoys achieve this by presenting targets that appear to be useful targets for attackers. To quote Jesus Torres, who worked on honeypots as part of his graduate degree at the Naval Postgradua te School: “For a honeypot to work, it needs to have some honey” Honeypots work by providing something that appears to be desirable to the attacker. The attacker, in searching for the honey of interest, comes across the honeypot, and starts to taste of its wares. If they are appealing enough, the attacker spends significant time and effort getting at the honey provided. If the attacker has finite resources, the time spent going after the honeypot is time not spent going after other things the honeypot is intended to protect. If the attacker uses tools and techniques in attacking the honeypot, some aspects of those tools and techniques are revealed to the defender in the attack on the honeypot. Decoys, like the chaff used to cause information systems used in missiles to go after the wrong objective, induce some signals into the cognitive system of their target (the missile) that, if successful, causes the missile to go after the chaff instead of their real objective. While some readers might be confused for a moment about the relevance of military operations to normal civilian use of deceptions, this example is particularly useful because it shows how information systems are used to deceive other information systems and it is an example in which only the induction of signals is applied. Of course in tactical situations, the real object of the missile attack may also take other actions to suppress its own signals, and this makes the analogy even better suited for this use. Honeypots and decoys only induce signals, they do not suppress them. While other deceptions that suppress signals may be used in concert with honeypots and decoys, the remainder of this paper will focus on signal induction as a deceptive technique and shy away from signal suppression and combinations of signal suppression and induction. 1.2 Historical Deceptions Since long before 800 B.C. 
when Sun Tzu wrote \"The Art of War\" [28] deception has been key to success in warfare. Similarly, information protection as a field of study has been around for at least 4,000 years [41]. And long before humans documented the use of deceptions, even before humans existed, deception was common in nature. Just as baboons beat their chests, so did early humans, and of course who has not seen the films of Khrushchev at the United Nations beating his shoe on the table and stating “We will bury you!”. While this article is about deceptions involving computer systems, understanding cognitive issues in deception is fundamental to understanding any deception. 1.3 Cognitive Deception Background Many authors have examined facets of deception from both an experiential and cognitive perspective. Chuck Whitlock has built a large part of his career on identifying and demonstrating these sorts of deceptions. [12] His book includes detailed descriptions and examples of scores of common street deceptions. Fay Faron points out that most such confidence efforts are carried out as specific 'plays' and details the anatomy of a 'con' [30]. Bob Fellows [13] takes a detailed approach to how 'magic' and similar techniques exploit human fallibility and cognitive limits to deceive people. Thomas Gilovich [14] provides in-depth analysis of human reasoning fallibility by presenting evidence from psychological studies that demonstrate a number of human reasoning mechanisms resulting in erroneous conclusions. Charles K. West [32] describes the steps in psychological and social distortion of information and provides detailed support for cognitive limits leading to deception. Al Seckel [15] provides about 100 excellent examples of various optical illusions, many of which work regardless of the knowledge of the observer, and some of which are defeated after the observer sees them only once. Donald D. Hoffman [36] expands this into a detailed examination of visual intelligence and how the brain processes visual information. It is particularly noteworthy that the visual cortex consumes a great deal of the total human brain space and that it has a great deal of effect on cognition. Deutsch [47] provides a series of demonstrations of interpretation and misinterpretation of audio information. First Karrass [33] then Cialdini [34] have provided excellent summaries of negotiation strategies and the use of influence to gain advantage. Both also explain how to defend against influence tactics. Cialdini [34] provides a simple structure for influence and asserts that much of the effect of influence techniques is built in and occurs below the conscious level for most people. Robertson and Powers [31] have worked out a more detailed low-level theoretical model of cognition based on \"Perceptual Control Theory\" (PCT), but extensions to higher levels of cognition have been highly speculative to date. They define a set of levels of cognition in terms of their order in the control system, but beyond the lowest few levels they have inadequate basis for asserting that these are orders of complexity in the classic control theoretical sense. Their higher level analysis results have also not been shown to be realistic representations of human behaviors. David Lambert [2] provides an extensive collection of examples of deceptions and deceptive techniques mapped into a cognitive model intended for modeling deception in military situations. These are categorized into cognitive levels in Lambert's cognitive model.
Charles Handy [37] discusses organizational structures and behaviors and the roles of power and influence within organizations. The National Research Council (NRC) [38] discusses models of human and organizational behavior and how automation has been applied in this area. The NRC report includes scores of examples of modeling techniques and details of simulation implementations based on those models and their applicability to current and future needs. Greene [46] describes the 48 laws of power and, along the way, demonstrates 48 methods that exert compliance forces in an organization. These can be traced to cognitive influences and mapped out using models like Lambert's, Cialdini's, and the one we describe later in this paper. Closely related to the subject of deception is the work done by the CIA on the MKULTRA project. [52] A good summary of some of the pre-1990 results on psychological aspects of self deception is provided in Heuer's CIA book on the psychology of intelligence analysis. [49] Heuer goes one step further in trying to start assessing ways to counter deception, and concludes that intelligence analysts can make improvements in their presentation and analysis process. Several other papers on deception detection have been written and substantially summarized in Vrij's book on the subject.[50] All of these books and papers are summarized in more detail in “A Framework for Deception” [68] which provides much of the basis for the historical issues in this paper as well as other related issues in deception not limited to honeypots, decoys, and signal induction deceptions. In addition, most of the computer deception background presented next is derived from this paper. 1.4 Computer Deception Background The most common example of a computer security mechanism based on deception is the response to attempted logins on most modern computer systems. When a user first attempts to access a system, they are asked for a user identification (UID) and password. Regardless of whether the cause of a failed access attempt was the result of a nonexistent UID or an invalid password for that UID, a failed attempt is met with the same message. In text based access methods, the UID is typically requested first and, even if no such UID exists in the system, a password is requested. Clearly, in such systems, the computer can identify that no such UID exists without asking for a password. And yet these systems intentionally suppress the information that no such UID exists and induce a message designed to indicate that the UID does exist. In earlier systems where this was not done, attackers exploited the result so as to gain additional information about which UIDs were on the system and this dramatically reduced their difficulty in attack. This is a very widely accepted practice, and when presented as a deception, many people who otherwise object to deceptions in computer systems indicate that this somehow doesn’t count as a d",
"title": ""
},
{
"docid": "483880f697329701db9412f5569b802f",
"text": "Online consumer reviews (OCR) have helped consumers to know about the strengths and weaknesses of different products and find the ones that best suit their needs. This research investigates the predictors of readership and helpfulness of OCR using a sentiment mining approach. Our findings show that reviews with higher levels of positive sentiment in the title receive more readerships. Sentimental reviews with neutral polarity in the text are also perceived to be more helpful. The length and longevity of a review positively influence both its readership and helpfulness. Our findings suggest that the current methods used for sorting OCR may bias both their readership and helpfulness. This study can be used by online vendors to develop scalable automated systems for sorting and classification of OCR which will benefit both vendors and consumers.",
"title": ""
},
{
"docid": "e91f0323df84e4c79e26822a799d54fd",
"text": "Researchers have renewed an interest in the harmful consequences of poverty on child development. This study builds on this work by focusing on one mechanism that links material hardship to child outcomes, namely the mediating effect of maternal depression. Using data from the National Maternal and Infant Health Survey, we found that maternal depression and poverty jeopardized the development of very young boys and girls, and to a certain extent, affluence buffered the deleterious consequences of depression. Results also showed that chronic maternal depression had severe implications for both boys and girls, whereas persistent poverty had a strong effect for the development of girls. The measures of poverty and maternal depression used in this study generally had a greater impact on measures of cognitive development than motor development.",
"title": ""
},
{
"docid": "bfd23678afff2ac4cd4650cf46195590",
"text": "The Islamic State of Iraq and ash-Sham (ISIS) continues to use social media as an essential element of its campaign to motivate support. On Twitter, ISIS' unique ability to leverage unaffiliated sympathizers that simply retweet propaganda has been identified as a primary mechanism in their success in motivating both recruitment and \"lone wolf\" attacks. The present work explores a large community of Twitter users whose activity supports ISIS propaganda diffusion in varying degrees. Within this ISIS supporting community, we observe a diverse range of actor types, including fighters, propagandists, recruiters, religious scholars, and unaffiliated sympathizers. The interaction between these users offers unique insight into the people and narratives critical to ISIS' sustainment. In their entirety, we refer to this diverse set of users as an online extremist community or OEC. We present Iterative Vertex Clustering and Classification (IVCC), a scalable analytic approach for OEC detection in annotated heterogeneous networks, and provide an illustrative case study of an online community of over 22,000 Twitter users whose online behavior directly advocates support for ISIS or contibutes to the group's propaganda dissemination through retweets.",
"title": ""
},
{
"docid": "84d2cb7c4b8e0f835dab1cd3971b60c5",
"text": "Ambient intelligence (AmI) deals with a new world of ubiquitous computing devices, where physical environments interact intelligently and unobtrusively with people. These environments should be aware of people's needs, customizing requirements and forecasting behaviors. AmI environments can be diverse, such as homes, offices, meeting rooms, schools, hospitals, control centers, vehicles, tourist attractions, stores, sports facilities, and music devices. Artificial intelligence research aims to include more intelligence in AmI environments, allowing better support for humans and access to the essential knowledge for making better decisions when interacting with these environments. This article, which introduces a special issue on AmI, views the area from an artificial intelligence perspective.",
"title": ""
},
{
"docid": "83393c9a0392249409a057914c71b1a0",
"text": "Recent achievement of the learning-based classification leads to the noticeable performance improvement in automatic polyp detection. Here, building large good datasets is very crucial for learning a reliable detector. However, it is practically challenging due to the diversity of polyp types, expensive inspection, and labor-intensive labeling tasks. For this reason, the polyp datasets usually tend to be imbalanced, i.e., the number of non-polyp samples is much larger than that of polyp samples, and learning with those imbalanced datasets results in a detector biased toward a non-polyp class. In this paper, we propose a data sampling-based boosting framework to learn an unbiased polyp detector from the imbalanced datasets. In our learning scheme, we learn multiple weak classifiers with the datasets rebalanced by up/down sampling, and generate a polyp detector by combining them. In addition, for enhancing discriminability between polyps and non-polyps that have similar appearances, we propose an effective feature learning method using partial least square analysis, and use it for learning compact and discriminative features. Experimental results using challenging datasets show obvious performance improvement over other detectors. We further prove effectiveness and usefulness of the proposed methods with extensive evaluation.",
"title": ""
},
{
"docid": "46da9277d034aadc784f550ece3c3789",
"text": "Wireless communication has attracted considerable interest in the research community, and many wireless networks are evaluated using discrete event simulators like OMNeT++. Although OMNeT++ provides a powerful and clear simulation framework, it lacks of direct support and a concise modeling chain for wireless communication. Both is provided by MiXiM. MiXiM joins and extends several existing simulation frameworks developed for wireless and mobile simulations in OMNeT++. It provides detailed models of the wireless channel (fading, etc.), wireless connectivity, mobility models, models for obstacles and many communication protocols especially at the Medium Access Control (MAC) level. Further, it provides a user-friendly graphical representation of wireless and mobile networks in OMNeT++, supporting debugging and defining even complex wireless scenarios. Though still in development, MiXiM already is a powerful tool for performance analysis of wireless networks. Its extensive functionality and clear concept may motivate researches to contribute to this open-source project [4].",
"title": ""
},
{
"docid": "8d6ebefca528255bc14561e1106522af",
"text": "Constant power loads may yield instability due to the well-known negative impedance characteristic. This paper analyzes the factors that cause instability of a dc microgrid with multiple dc–dc converters. Two stabilization methods are presented for two operation modes: 1) constant voltage source mode; and 2) droop mode, and sufficient conditions for the stability of the dc microgrid are obtained by identifying the eigenvalues of the Jacobian matrix. The key is to transform the eigenvalue problem to a quadratic eigenvalue problem. When applying the methods in practical engineering, the salient feature is that the stability parameter domains can be estimated by the available constraints, such as the values of capacities, inductances, maximum load power, and distances of the cables. Compared with some classical methods, the proposed methods have wider stability region. The simulation results based on MATLAB/simulink platform verify the feasibility of the methods.",
"title": ""
},
{
"docid": "d5007c061227ec76a4e8ea795471db00",
"text": "The ramp loss is a robust but non-convex loss for classification. Compared with other non-convex losses, a local minimum of the ramp loss can be effectively found. The effectiveness of local search comes from the piecewise linearity of the ramp loss. Motivated by the fact that the `1-penalty is piecewise linear as well, the `1-penalty is applied for the ramp loss, resulting in a ramp loss linear programming support vector machine (rampLPSVM). The proposed ramp-LPSVM is a piecewise linear minimization problem and the related optimization techniques are applicable. Moreover, the `1-penalty can enhance the sparsity. In this paper, the corresponding misclassification error and convergence behavior are discussed. Generally, the ramp loss is a truncated hinge loss. Therefore ramp-LPSVM possesses some similar properties as hinge loss SVMs. A local minimization algorithm and a global search strategy are discussed. The good optimization capability of the proposed algorithms makes ramp-LPSVM perform well in numerical experiments: the result of rampLPSVM is more robust than that of hinge SVMs and is sparser than that of ramp-SVM, which consists of the ‖ · ‖K-penalty and the ramp loss.",
"title": ""
},
{
"docid": "d55343250b7e13caa787c5b6db52d305",
"text": "Analysis of the face is an essential component of facial plastic surgery. In training, we are taught standards and ideals based on neoclassical models of beauty from Greek and Roman art and architecture. In practice, we encounter a wide range of variation in patient desires and perceptions of beauty. Our goals seem to be ever shifting, yet our education has provided us with a foundation from which to draw ideals of beauty. Plastic surgeons must synthesize classical ideas of beauty with patient desires, cultural nuances, and ethnic considerations all the while maintaining a natural appearance and result. This article gives an overview of classical models of facial proportions and relationships, while also discussing unique ethnic and cultural considerations which may influence the goal for the individual patient.",
"title": ""
},
{
"docid": "373c89beb40ce164999892be2ccb8f46",
"text": "Recent advances in mobile technologies (esp., smart phones and tablets with built-in cameras, GPS and Internet access) made augmented reality (AR ) applications available for the broad public. While many researchers have examined the af fordances and constraints of AR for teaching and learning, quantitative evidence for it s effectiveness is still scarce. To contribute to filling this research gap, we designed and condu cted a pretest-posttest crossover field experiment with 101 participants at a mathematics exh ibition to measure the effect of AR on acquiring and retaining mathematical knowledge in a n informal learning environment. We hypothesized that visitors acquire more knowledge f rom augmented exhibits than from exhibits without AR. The theoretical rationale for our h ypothesis is that AR allows for the efficient and effective implementation of a subset of the des ign principles defined in the cognitive theory of multimedia. The empirical results we obtaine d show that museum visitors performed better on knowledge acquisition and retention tests related to augmented exhibits than to nonaugmented exhibits and that they perceived AR as a valuable and desirable add-on for museum exhibitions.",
"title": ""
},
{
"docid": "f64390896e5529f676484b9b0f4eab84",
"text": "Identifying the object that attracts human visual attention is an essential function for automatic services in smart environments. However, existing solutions can compute the gaze direction without providing the distance to the target. In addition, most of them rely on special devices or infrastructure support. This paper explores the possibility of using a smartphone to detect the visual attention of a user. By applying the proposed VADS system, acquiring the location of the intended object only requires one simple action: gazing at the intended object and holding up the smartphone so that the object as well as user's face can be simultaneously captured by the front and rear cameras. We extend the current advances of computer vision to develop efficient algorithms to obtain the distance between the camera and user, the user's gaze direction, and the object's direction from camera. The object's location can then be computed by solving a trigonometric problem. VADS has been prototyped on commercial off-the-shelf (COTS) devices. Extensive evaluation results show that VADS achieves low error (about 1.5° in angle and 0.15m in distance for objects within 12m) as well as short latency. We believe that VADS enables a large variety of applications in smart environments.",
"title": ""
},
{
"docid": "74d2d780291e9dbf2e725b55ccadd278",
"text": "Organizational climate and organizational culture theory and research are reviewed. The article is first framed with definitions of the constructs, and preliminary thoughts on their interrelationships are noted. Organizational climate is briefly defined as the meanings people attach to interrelated bundles of experiences they have at work. Organizational culture is briefly defined as the basic assumptions about the world and the values that guide life in organizations. A brief history of climate research is presented, followed by the major accomplishments in research on the topic with regard to levels issues, the foci of climate research, and studies of climate strength. A brief overview of the more recent study of organizational culture is then introduced, followed by samples of important thinking and research on the roles of leadership and national culture in understanding organizational culture and performance and culture as a moderator variable in research in organizational behavior. The final section of the article proposes an integration of climate and culture thinking and research and concludes with practical implications for the management of effective contemporary organizations. Throughout, recommendations are made for additional thinking and research.",
"title": ""
},
{
"docid": "a61c1e5c1eafd5efd8ee7021613cf90d",
"text": "A millimeter-wave (mmW) bandpass filter (BPF) using substrate integrated waveguide (SIW) is proposed in this work. A BPF with three resonators is formed by etching slots on the top metal plane of the single SIW cavity. The filter is investigated with the theory of electric coupling mechanism. The design procedure and design curves of the coupling coefficient (K) and quality factor (Q) are given and discussed here. The extracted K and Q are used to determine the filter circuit dimensions. In order to prove the validity, a SIW BPF operating at 140 GHz is fabricated in a single circuit layer using low temperature co-fired ceramic (LTCC) technology. The measured insertion loss is 1.913 dB at 140 GHz with a fractional bandwidth of 13.03%. The measured results are in good agreement with simulated results in such high frequency.",
"title": ""
},
{
"docid": "e5dc07c94c7519f730d03aa6c53ca98e",
"text": "Brown adipose tissue (BAT) is specialized to dissipate chemical energy in the form of heat as a defense against cold and excessive feeding. Interest in the field of BAT biology has exploded in the past few years because of the therapeutic potential of BAT to counteract obesity and obesity-related diseases, including insulin resistance. Much progress has been made, particularly in the areas of BAT physiology in adult humans, developmental lineages of brown adipose cell fate, and hormonal control of BAT thermogenesis. As we enter into a new era of brown fat biology, the next challenge will be to develop strategies for activating BAT thermogenesis in adult humans to increase whole-body energy expenditure. This article reviews the recent major advances in this field and discusses emerging questions.",
"title": ""
},
{
"docid": "f23ce789f76fe15e78a734caa5d2bc53",
"text": "The importance of location based services (LBS) is steadily increasing with progressive automation and interconnectedness of systems and processes. However, a comprehensive localization and navigation solution is still part of research. Especially for dynamic and harsh indoor environments, accurate and affordable localization and navigation remains a challenge. In this paper, we present a hybrid localization system providing position information and navigation aid to pedestrian in dynamic indoor environments, like construction sites, by combining an IMU and a spatial non-uniform UWB-network. The key contribution of this paper is a hybrid localization concept and experimental results, demonstrating in an application near scenario the enhancements introduced by the combination of an inertial navigation system (INS) and a spatial non-uniform UWB-network.",
"title": ""
},
{
"docid": "303098fa8e5ccd7cf50a955da7e47f2e",
"text": "This paper describes the SALSA corpus, a large German corpus manually annotated with role-semantic information, based on the syntactically annotated TIGER newspaper corpus (Brants et al., 2002). The first release, comprising about 20,000 annotated predicate instances (about half the TIGER corpus), is scheduled for mid-2006. In this paper we discuss the frame-semantic annotation framework and its cross-lingual applicability, problems arising from exhaustive annotation, strategies for quality control, and possible applications.",
"title": ""
},
{
"docid": "cdb87a9db48b78e193d9229282bd3b67",
"text": "While large-scale automatic grading of student programs for correctness is widespread, less effort has focused on automating feedback for good programming style:} the tasteful use of language features and idioms to produce code that is not only correct, but also concise, elegant, and revealing of design intent. We hypothesize that with a large enough (MOOC-sized) corpus of submissions to a given programming problem, we can observe a range of stylistic mastery from naïve to expert, and many points in between, and that we can exploit this continuum to automatically provide hints to learners for improving their code style based on the key stylistic differences between a given learner's submission and a submission that is stylistically slightly better. We are developing a methodology for analyzing and doing feature engineering on differences between submissions, and for learning from instructor-provided feedback as to which hints are most relevant. We describe the techniques used to do this in our prototype, which will be deployed in a residential software engineering course as an alpha test prior to deploying in a MOOC later this year.",
"title": ""
},
{
"docid": "6d80c1d1435f016b124b2d61ef4437a5",
"text": "Recent high profile developments of autonomous learning thermostats by companies such as Nest Labs and Honeywell have brought to the fore the possibility of ever greater numbers of intelligent devices permeating our homes and working environments into the future. However, the specific learning approaches and methodologies utilised by these devices have never been made public. In fact little information is known as to the specifics of how these devices operate and learn about their environments or the users who use them. This paper proposes a suitable learning architecture for such an intelligent thermostat in the hope that it will benefit further investigation by the research community. Our architecture comprises a number of different learning methods each of which contributes to create a complete autonomous thermostat capable of controlling a HVAC system. A novel state action space formalism is proposed to enable a Reinforcement Learning agent to successfully control the HVAC system by optimising both occupant comfort and energy costs. Our results show that the learning thermostat can achieve cost savings of 10% over a programmable thermostat, whilst maintaining high occupant comfort standards.",
"title": ""
}
] |
scidocsrr
|
824d1d9d894e467e28e2bb48f0f4bdf0
|
Bag-of-Audio-Words Approach for Multimedia Event Classification
|
[
{
"docid": "5f351dc1334f43ce1c80a1e78581d0f9",
"text": "Based on keypoints extracted as salient image patches, an image can be described as a \"bag of visual words\" and this representation has been used in scene classification. The choice of dimension, selection, and weighting of visual words in this representation is crucial to the classification performance but has not been thoroughly studied in previous work. Given the analogy between this representation and the bag-of-words representation of text documents, we apply techniques used in text categorization, including term weighting, stop word removal, feature selection, to generate image representations that differ in the dimension, selection, and weighting of visual words. The impact of these representation choices to scene classification is studied through extensive experiments on the TRECVID and PASCAL collection. This study provides an empirical basis for designing visual-word representations that are likely to produce superior classification performance.",
"title": ""
}
] |
[
{
"docid": "d1a8e3a67181cd43429a98dc38affd35",
"text": "Deep belief nets (DBNs) with multiple artificial neural networks (ANNs) have attracted many researchers recently. In this paper, we propose to compose restricted Boltzmann machine (RBM) and multi-layer perceptron (MLP) as a DBN to predict chaotic time series data, such as the Lorenz chaos and the Henon map. Experiment results showed that in the sense of prediction precision, the novel DBN performed better than the conventional DBN with RBMs.",
"title": ""
},
{
"docid": "4c8412dca4cbc9f65d29fffa95dee288",
"text": "This paper deals with fundamental change processes in socio-technical systems. It offers a typology of changes based on a multi-level perspective of innovation. Three types of change processes are identified: reproduction, transformation and transition. ‘Reproduction’ refers to incremental change along existing trajectories. ‘Transformation’ refers to a change in the direction of trajectories, related to a change in rules that guide innovative action. ‘Transition’ refers to a discontinuous shift to a new trajectory and system. Using the multi-level perspective, the underlying mechanisms of these change processes are identified. The transformation and transition processes are empirically illustrated by two contrasting case studies: the hygienic transition from cesspools to integrated sewer systems (1870–1930) and the transformation in waste management (1960–2000) in the Netherlands. r 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "767de215cc843a255aa31ee3b45cc373",
"text": "Breast cancer is the most frequently diagnosed cancer and leading cause of cancer-related death among females worldwide. In this article, we investigate the applicability of densely connected convolutional neural networks to the problems of histology image classification and whole slide image segmentation in the area of computer-aided diagnoses for breast cancer. To this end, we study various approaches for transfer learning and apply them to the data set from the 2018 grand challenge on breast cancer histology images (BACH).",
"title": ""
},
{
"docid": "eb3b1550daa111b1977ee7e4a3ec6e43",
"text": "This paper introduces an inexpensive prosthetic hand control system designed to reduce the cognitive burden on amputees. It is designed around a vision-based object recognition system with an embedded camera that automates grasp selection and switching, and an inexpensive mechanomyography (MMG) sensor for hand opening and closing. A prototype has been developed and implemented to select between two different grasp configurations for the Bebionic V2 hand, developed by RSLSteeper. Pick and place experiments on 6 different objects in `Power' and `Pinch' grasps were used to assess feasibility on which to base full system development. Experimentation demonstrated an overall accuracy of 84.4% for grasp selection between pairs of objects. The results showed that it was more difficult to classify larger objects due to their size relative to the camera resolution. The grasping task became more accurate with time, indicating learning capability when estimating the position and trajectory of the hand for correct grasp selection; however further experimentation is required to form a conclusion. The limitation of this involves the use of unnatural reaching trajectories for correct grasp selection. The success in basic experimentation provides the proof of concept required for further system development.",
"title": ""
},
{
"docid": "3dd6682c4307567e49b025d11b36b8a5",
"text": "Deep generative architectures provide a way to model not only images, but also complex, 3-dimensional objects, such as point clouds. In this work, we present a novel method to obtain meaningful representations of 3D shapes that can be used for clustering and reconstruction. Contrary to existing methods for 3D point cloud generation that train separate decoupled models for representation learning and generation, our approach is the first end-to-end solution that allows to simultaneously learn a latent space of representation and generate 3D shape out of it. To achieve this goal, we extend a deep Adversarial Autoencoder model (AAE) to accept 3D input and create 3D output. Thanks to our end-to-end training regime, the resulting method called 3D Adversarial Autoencoder (3dAAE) obtains either binary or continuous latent space that covers much wider portion of training data distribution, hence allowing smooth interpolation between the shapes. Finally, our extensive quantitative evaluation shows that 3dAAE provides state-of-theart results on a set of benchmark tasks.",
"title": ""
},
{
"docid": "d6d275b719451982fa67d442c55c186c",
"text": "Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems are for example to cope with change and that defects all too often are detected too late in the software development process. However, many of the problems mentioned in literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.",
"title": ""
},
{
"docid": "0b4251957cb90fd04fa6edc08334b0dd",
"text": "Support vector regression (SVR) employs the support vector machine (SVM) to tackle problems of function approximation and regression estimation. SVR has been shown to have good robust properties against noise. When the parameters used in SVR are improperly selected, overfitting phenomena may still occur. However, the selection of various parameters is not straightforward. Besides, in SVR, outliers may also possibly be taken as support vectors. Such an inclusion of outliers in support vectors may lead to seriously overfitting phenomena. In this paper, a novel regression approach, termed as the robust support vector regression (RSVR) network, is proposed to enhance the robust capability of SVR. In the approach, traditional robust learning approaches are employed to improve the learning performance for any selected parameters. From the simulation results, our RSVR can always improve the performance of the learned systems for all cases. Besides, it can be found that even the training lasted for a long period, the testing errors would not go up. In other words, the overfitting phenomenon is indeed suppressed.",
"title": ""
},
{
"docid": "79a0ebde3638b17709e3f92f44ab715b",
"text": "The objective of this work is to provide analytical guidelines and financial justification for the design of shared-vehicle mobility-on-demand systems. Specifically, we consider the fundamental issue of determining the appropriate number of vehicles to field in the fleet, and estimate the financial benefits of several models of car sharing. As a case study, we consider replacing all modes of personal transportation in a city such as Singapore with a fleet of shared automated vehicles, able to drive themselves, e.g., to move to a customer’s location. Using actual transportation data, our analysis suggests a shared-vehicle mobility solution can meet the personal mobility needs of the entire population with a fleet whose size is approximately 1/3 of the total number of passenger vehicles currently in operation.",
"title": ""
},
{
"docid": "833e729e31f4d39984bf799c983080d6",
"text": "Recent work has shown that there may be disadvantages in the use of the chi-square-like goodness-of-fit tests for the logistic regression model proposed by Hosmer and Lemeshow that use fixed groups of the estimated probabilities. A particular concern with these grouping strategies based on estimated probabilities, fitted values, is that groups may contain subjects with widely different values of the covariates. It is possible to demonstrate situations where one set of fixed groups shows the model fits while the test rejects fit using a different set of fixed groups. We compare the performance by simulation of these tests to tests based on smoothed residuals proposed by le Cessie and Van Houwelingen and Royston, a score test for an extended logistic regression model proposed by Stukel, the Pearson chi-square and the unweighted residual sum-of-squares. These simulations demonstrate that all but one of Royston's tests have the correct size. An examination of the performance of the tests when the correct model has a quadratic term but a model containing only the linear term has been fit shows that the Pearson chi-square, the unweighted sum-of-squares, the Hosmer-Lemeshow decile of risk, the smoothed residual sum-of-squares and Stukel's score test, have power exceeding 50 per cent to detect moderate departures from linearity when the sample size is 100 and have power over 90 per cent for these same alternatives for samples of size 500. All tests had no power when the correct model had an interaction between a dichotomous and continuous covariate but only the continuous covariate model was fit. Power to detect an incorrectly specified link was poor for samples of size 100. For samples of size 500 Stukel's score test had the best power but it only exceeded 50 per cent to detect an asymmetric link function. The power of the unweighted sum-of-squares test to detect an incorrectly specified link function was slightly less than Stukel's score test. We illustrate the tests within the context of a model for factors associated with low birth weight.",
"title": ""
},
{
"docid": "5a071ee0aec4cc4d2f67384695a43df8",
"text": "The emerging field of soft robotics makes use of many classes of materials including metals, low glass transition temperature (Tg) plastics, andhighTgelastomers.Dependent on the specific design, all of these materials may result in extrinsically soft robots. Organic elastomers, however, have elastic moduli ranging from tens ofmegapascals down to kilopascals; robots composed of suchmaterials are intrinsically soft they are always compliant independent of their shape. This class of soft machines has been used to reduce control complexity and manufacturing cost of robots, while enabling sophisticated and novel functionalities often in direct contact with humans. This review focuses on a particular type of intrinsically soft, elastomeric robot those powered via fluidic pressurization.",
"title": ""
},
{
"docid": "6d0c19165e0ac33b39c9a39d706f7128",
"text": "dbDedup is a similarity-based deduplication scheme for on-line database management systems (DBMSs). Beyond block-level compression of individual database pages or operation log (oplog) messages, as used in today's DBMSs, dbDedup uses byte-level delta encoding of individual records within the database to achieve greater savings. dbDedup's single-pass encoding method can be integrated into the storage and logging components of a DBMS to provide two benefits: (1) reduced size of data stored on disk beyond what traditional compression schemes provide, and (2) reduced amount of data transmitted over the network for replication services. To evaluate our work, we implemented dbDedup in a distributed NoSQL DBMS and analyzed its properties using four real datasets. Our results show that dbDedup achieves up to 37x reduction in the storage size and replication traffic of the database on its own and up to 61x reduction when paired with the DBMS's block-level compression. dbDedup provides both benefits with negligible effect on DBMS throughput or client latency (average and tail).",
"title": ""
},
{
"docid": "b50c6702253a3b56acf42fca6d4af883",
"text": "Infusion therapy is one of the largest practised therapies in any healthcare organisation, and infusion pumps are used to deliver millions of infusions every year in the NHS. The aircraft industry downloads information from 'black boxes' to help design better systems and reduce risk; however, the same cannot be said about error logs and data logs from infusion pumps. This study downloaded and analysed approximately 360 000 hours of infusion pump error logs from 131 infusion pumps used for up to 2 years in one large acute hospital. Staff had to manage 260 129 alarms; this accounted for approximately 5% of total infusion time, costing about £1000 per pump per year. This paper describes many such insights, including numerous technical errors, propensity for certain alarms in clinical conditions, logistical issues and how infrastructure problems can lead to an increase in alarm conditions. Routine use of error log analysis, combined with appropriate management of pumps to help identify improved device design, use and application is recommended.",
"title": ""
},
{
"docid": "6c2ac0d096c1bcaac7fd70bd36a5c056",
"text": "The purpose of this review is to illustrate the ways in which molecular neurobiological investigations will contribute to an improved understanding of drug addiction and, ultimately, to the development of more effective treatments. Such molecular studies of drug addiction are needed to establish two general types of information: (1) mechanisms of pathophysiology, identification of the changes that drugs of abuse produce in the brain that lead to addiction; and (2) mechanisms of individual risk, identification of specific genetic and environmental factors that increase or decrease an individual's vulnerability for addiction. This information will one day lead to fundamentally new approaches to the treatment and prevention of addictive disorders.",
"title": ""
},
{
"docid": "d43b9ddc3f5d8b589190a5111a2f9d0e",
"text": "Crowdfunding represents an attractive new option for funding research projects, especially for students and early-career scientists or in the absence of governmental aid in some countries. The number of successful science-related crowdfunding campaigns is growing, which demonstrates the public's willingness to support and participate in scientific projects. Putting together a crowdfunding campaign is not trivial, however, so here is a guide to help you make yours a success.",
"title": ""
},
{
"docid": "6ff6dda12f07fd37be4027b41c4f5e58",
"text": "In this paper, a compact waveguide magic-T for high-power solid-state power combining is proposed. The coplanar arms of the <inline-formula> <tex-math notation=\"LaTeX\">$E$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$H$ </tex-math></inline-formula> ports are realized by the <inline-formula> <tex-math notation=\"LaTeX\">$E$ </tex-math></inline-formula>-plane power divider and ridge waveguide coupling structure. The input port of the <inline-formula> <tex-math notation=\"LaTeX\">$E$ </tex-math></inline-formula>-plane power divider is used to realize the difference port of the magic-T, and the ridge waveguide port is utilized to realize the sum port. Based on a theoretical analysis, a modified magic-T with two coaxial ports, one ridge port, and one rectangular port is designed and fabricated. Low-power tests show that from 7.8 to 9.4 GHz, when the difference port and the sum port are excited, the insertion loss of the magic-T is less than 0.2 dB. The isolation between the sum/difference ports and the two input ports is better than 40 and 26 dB. As for the in-phase and out-of-phase excitation, the amplitude and phase imbalances are less than ±0.05 dB and 1°. High-power experiments indicate that the power capacity is no less than 14 kW with a 1-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{s}$ </tex-math></inline-formula> pulsewidth. The measured results agree with the simulations.",
"title": ""
},
{
"docid": "0fc50684d7bb4b4eba85bbd474a6548e",
"text": "Failure of corollary discharge, a mechanism for distinguishing self-generated from externally generated percepts, has been posited to underlie certain positive symptoms of schizophrenia, including auditory hallucinations. Although originally described in the visual system, corollary discharge may exist in the auditory system, whereby signals from motor speech commands prepare auditory cortex for self-generated speech. While associated with sensorimotor systems, it might also apply to inner speech or thought, regarded as our most complex motor act. In this paper, we describe the results of a series of studies in which we have shown that: (1) event-related brain potentials (ERPs) can be used to demonstrate the corollary discharge phenomenon during talking, (2) corollary discharge is abnormal in patients with schizophrenia, (3) EEG gamma band coherence between frontal and temporal lobes is greater during talking than listening and is disrupted by distorted feedback during talking in normals, and (4) patients with schizophrenia do not show this pattern for EEG gamma coherence. While these studies have identified ERPs and EEG gamma coherence indices of the efference copy/corollary discharge system and documented abnormalities in these systems in patients with schizophrenia, we have so far had limited success in establishing a relationship between these neurobiologic indicators of corollary discharge abnormality and reports of hallucinations in patients.",
"title": ""
},
{
"docid": "5b1c38fccbd591e6ab00a66ef636eb5d",
"text": "There is a great thrust in industry toward the development of more feasible and viable tools for storing fast-growing volume, velocity, and diversity of data, termed ‘big data’. The structural shift of the storage mechanism from traditional data management systems to NoSQL technology is due to the intention of fulfilling big data storage requirements. However, the available big data storage technologies are inefficient to provide consistent, scalable, and available solutions for continuously growing heterogeneous data. Storage is the preliminary process of big data analytics for real-world applications such as scientific experiments, healthcare, social networks, and e-business. So far, Amazon, Google, and Apache are some of the industry standards in providing big data storage solutions, yet the literature does not report an in-depth survey of storage technologies available for big data, investigating the performance and magnitude gains of these technologies. The primary objective of this paper is to conduct a comprehensive investigation of state-of-the-art storage technologies available for big data. A well-defined taxonomy of big data storage technologies is presented to assist data analysts and researchers in understanding and selecting a storage mechanism that better fits their needs. To evaluate the performance of different storage architectures, we compare and analyze the existing approaches using Brewer’s CAP theorem. The significance and applications of storage technologies and support to other categories are discussed. Several future research challenges are highlighted with the intention to expedite the deployment of a reliable and scalable storage system.",
"title": ""
},
{
"docid": "76aacf8fd5c24f64211015ce9c196bf0",
"text": "In industrially relevant Cu/ZnO/Al2 O3 catalysts for methanol synthesis, the strong metal support interaction between Cu and ZnO is known to play a key role. Here we report a detailed chemical transmission electron microscopy study on the nanostructural consequences of the strong metal support interaction in an activated high-performance catalyst. For the first time, clear evidence for the formation of metastable \"graphite-like\" ZnO layers during reductive activation is provided. The description of this metastable layer might contribute to the understanding of synergistic effects between the components of the Cu/ZnO/Al2 O3 catalysts.",
"title": ""
},
{
"docid": "765b524fe24c51360a921957333b2bb1",
"text": "A number of ontology repositories provide access to the growing collection of ontologies on the Semantic Web. Some repositories collect ontologies automatically by crawling the Web; in other repositories, users submit ontologies themselves. In addition to providing search across multiple ontologies, the added value of ontology repositories lies in the metadata that they may contain. This metadata may include information provided by ontology authors, such as ontologies’ scope and intended use; feedback provided by users such as their experiences in using the ontologies or reviews of the content; and mapping metadata that relates concepts from different ontologies. In this paper, we focus on the ontology-mapping metadata and on community-based method to collect ontology mappings. More specifically, we develop a model for representing mappings collected from the user community and the metadata associated with the mapping. We use the model to bring together more than 30,000 mappings from 7 sources. We also validate the model by extending BioPortal—a repository of biomedical ontologies that we have developed—to enable users to create single concept-toconcept mappings in its graphical user interface, to upload and download mappings created with other tools, to comment on the mappings and to discuss them, and to visualize the mappings and the corresponding metadata. 1 Ontology Mapping and the Wisdom of the Crowds As the number of ontologies available for Semantic Web applications grows, so does the number of ontology repositories that index and organize the ontologies. Some repositories crawl the Web to collect ontologies (e.g., Swoogle [4], Watson [3] and OntoSelect [2]). In other repositories, users submit their ontologies themselves (e.g., the DAML ontology library1 and SchemaWeb2). These repositories provide a gateway for users and application developers who need to find ontologies to use in their work. In our laboratory, we have developed BioPortal3—an open repository of biomedical ontologies. Researchers in biomedical informatics submit their ontologies to BioPortal and others can access the ontologies through the BioPortal user interface or through web services. The BioPortal users can browse and search the ontologies, update the ontologies in the repository by uploading new versions, comment on any ontology (or portion of an ontology) in the repository, evaluate it, describe their experience in using the ontology, 1 http://www.daml.org/ontologies/ 2 http://www.schemaweb.info/ 3 http://alpha.bioontology.org or make suggestions to ontology developers. At the time of this writing, BioPortal has 72 biomedical ontologies with more than 300,000 classes. While the BioPortal content focuses on the biomedical domain, the BioPortal technology is domain-independent. Ontologies in BioPortal, as in almost any ontology repository, overlap in coverage. Thus, mappings among ontologies in a repository constitute a key component that enables the use of the ontologies for data and information integration. For example, researchers can use the mappings to relate their data, which had been annotated with concepts from one ontology, to concepts in another ontology. We view ontology mappings as an essential part of the ontology repository: Mappings between ontology concepts are first-class objects in the BioPortal repository. Users can browse the mappings, create new mappings, upload the mappings created with other tools, download mappings that BioPortal has, or comment on the mappings and discuss them. 
The mapping repository in BioPortal addresses two key problems in ontology mapping. First, our implementation enables and encourages community participation in mapping creation. We enable users to add as many or as few mappings as they like or feel qualified to do. Users can use the discussion facilities that we integrated in the repository to reach consensus on controversial mappings or to understand the differences between their points of view. Most researchers agree that, even though there has been steady progress in the performance of the automatic alignment tools [5], experts will need to be involved in the mapping task for the foreseeable future. By enabling community participation in mapping creation, we hope to have more people contributing mappings and, hence, to get closer to the critical mass of users that we need to create and verify the mappings. Second, the integration of an ontology repository with a mapping repository provides users with a one-stop shop for ontology resources. The BioPortal system integrates ontologies, ontology metadata, peer reviews of ontologies, resources annotated with ontology terms, and ontology mappings, adding value to each of the individual components. The services that use one of the resources can rely on the other resources in the system. For instance, we can use mappings when searching through OBR. Alternatively, we can use the OBR data to suggest new mappings. The BioPortal mapping repository contains not only the mappings created by the BioPortal users, but also (and, at the time of this writing, mostly) mappings created elsewhere and by other tools, and uploaded in bulk to BioPortal. In recent years, Semantic Web researchers have explored community-based approaches to creating various ontology-based resources [16]. For example, SOBOLEO [26] uses an approach that is similar to collaborative tagging to have users create a simple ontology. Collaborative Protégé [15] enables users to create OWL ontologies collaboratively, discussing their design decisions, putting forward proposals, and reaching consensus. BioPortal harnesses collective intelligence to provide peer reviews of ontologies and to have users comment on ontologies and ontology components [21]. Researchers have also proposed using community-based approaches to create mappings [14]. For example, McCann and colleagues [12] asked users to identify mappings between database schemas as a \"payment\" for accessing some services on their web site. The authors then used these mappings to improve the performance of their mapping algorithms. They analyzed different characteristics of the user community and",
"title": ""
},
{
"docid": "a0892e2b1f63368211922325add9dfb5",
"text": "The probabilistic prediction of quantum theory is mystery. In this paper I solved the mystery by Torus Theory. In this theory an absolute value of a wave function is a radius of a new space. This suggests the unification between quantum theory and the theory of relativity.",
"title": ""
}
] |
scidocsrr
|
63a8336428573c7ebc00658f69108a58
|
Measuring Calorie and Nutrition From Food Image
|
[
{
"docid": "f4fbd925fb46f05c526b228993f5e326",
"text": "Obesity in the world has spread to epidemic proportions. In 2008 the World Health Organization (WHO) reported that 1.5 billion adults were suffering from some sort of overweightness. Obesity treatment requires constant monitoring and a rigorous control and diet to measure daily calorie intake. These controls are expensive for the health care system, and the patient regularly rejects the treatment because of the excessive control over the user. Recently, studies have suggested that the usage of technology such as smartphones may enhance the treatments of obesity and overweight patients; this will generate a degree of comfort for the patient, while the dietitian can count on a better option to record the food intake for the patient. In this paper we propose a smart system that takes advantage of the technologies available for the Smartphones, to build an application to measure and monitor the daily calorie intake for obese and overweight patients. Via a special technique, the system records a photo of the food before and after eating in order to estimate the consumption calorie of the selected food and its nutrient components. Our system presents a new instrument in food intake measuring which can be more useful and effective.",
"title": ""
}
] |
[
{
"docid": "bfcb1fd882a328daab503a7dd6b6d0a6",
"text": "The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability — via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several non-trivial examples.",
"title": ""
},
{
"docid": "3ebe9aecd4c84e9b9ed0837bd294b4ed",
"text": "A bond graph model of a hybrid electric vehicle (HEV) powertrain test cell is proposed. The test cell consists of a motor/generator coupled to a HEV powertrain and powered by a bidirectional power converter. Programmable loading conditions, including positive and negative resistive and inertial loads of any magnitude are modeled, avoiding the use of mechanical inertial loads involved in conventional test cells. The dynamics and control equations of the test cell are derived directly from the bond graph models. The modeling and simulation results of the dynamics of the test cell are validated through experiments carried out on a scaled-down system.",
"title": ""
},
{
"docid": "2313822a08269b3dd125190c4874b808",
"text": "General-purpose knowledge bases are increasingly growing in terms of depth (content) and width (coverage). Moreover, algorithms for entity linking and entity retrieval have improved tremendously in the past years. These developments give rise to a new line of research that exploits and combines these developments for the purposes of text-centric information retrieval applications. This tutorial focuses on a) how to retrieve a set of entities for an ad-hoc query, or more broadly, assessing relevance of KB elements for the information need, b) how to annotate text with such elements, and c) how to use this information to assess the relevance of text. We discuss different kinds of information available in a knowledge graph and how to leverage each most effectively.\n We start the tutorial with a brief overview of different types of knowledge bases, their structure and information contained in popular general-purpose and domain-specific knowledge bases. In particular, we focus on the representation of entity-centric information in the knowledge base through names, terms, relations, and type taxonomies. Next, we will provide a recap on ad-hoc object retrieval from knowledge graphs as well as entity linking and retrieval. This is essential technology, which the remainder of the tutorial builds on. Next we will cover essential components within successful entity linking systems, including the collection of entity name information and techniques for disambiguation with contextual entity mentions. We will present the details of four previously proposed systems that successfully leverage knowledge bases to improve ad-hoc document retrieval. These systems combine the notion of entity retrieval and semantic search on one hand, with text retrieval models and entity linking on the other. Finally, we also touch on entity aspects and links in the knowledge graph as it can help to understand the entities' context.\n This tutorial is the first to compile, summarize, and disseminate progress in this emerging area and we provide both an overview of state-of-the-art methods and outline open research problems to encourage new contributions.",
"title": ""
},
{
"docid": "876dd0a985f00bb8145e016cc8593a84",
"text": "This paper presents how to synthesize a texture in a procedural way that preserves the features of the input exemplar. The exemplar is analyzed in both spatial and frequency domains to be decomposed into feature and non-feature parts. Then, the non-feature parts are reproduced as a procedural noise, whereas the features are independently synthesized. They are combined to output a non-repetitive texture that also preserves the exemplar’s features. The proposed method allows the user to control the extent of extracted features and also enables a texture to edited quite effectively.",
"title": ""
},
{
"docid": "9df6a4c0143cfc3a0b1263b1fa07e810",
"text": "In this paper, we propose a new fast dehazing method from single image based on filtering. The basic idea is to compute an accurate atmosphere veil that is not only smoother, but also respect with depth information of the underlying image. We firstly obtain an initial atmosphere scattering light through median filtering, then refine it by guided joint bilateral filtering to generate a new atmosphere veil which removes the abundant texture information and recovers the depth edge information. Finally, we solve the scene radiance using the atmosphere attenuation model. Compared with exiting state of the art dehazing methods, our method could get a better dehazing effect at distant scene and places where depth changes abruptly. Our method is fast with linear complexity in the number of pixels of the input image; furthermore, as our method can be performed in parallel, thus it can be further accelerated using GPU, which makes our method applicable for real-time requirement.",
"title": ""
},
{
"docid": "6aaabe17947bc455d940047745ed7962",
"text": "In this paper, we want to study how natural and engineered systems could perform complex optimizations with limited computational and communication capabilities. We adopt a continuous-time dynamical system view rooted in early work on optimization and more recently in network protocol design, and merge it with the dynamic view of distributed averaging systems. We obtain a general approach, based on the control system viewpoint, that allows to analyze and design (distributed) optimization systems converging to the solution of given convex optimization problems. The control system viewpoint provides many insights and new directions of research. We apply the framework to a distributed optimal location problem and demonstrate the natural tracking and adaptation capabilities of the system to changing constraints.",
"title": ""
},
{
"docid": "6c647c3260c0a31cac1a3cd412919aad",
"text": "Twitter is a micro-blogging site that allows users and companies to post brief pieces of information called Tweets . Some of the tweets contain keywords such as Hashtags denoted with a # , essentially one word summaries of either the topic or emotion of the tweet. The goal of this paper is to examine an approach to perform hashtag discovery on Twitter posts that do not contain user labeled hashtags. The process described in this paper is geared to be as automatic as possible, taking advantage of web information, sentiment analysis, geographic location, basic filtering and classification processes, to generate hashtags for tweets. Hashtags provide users and search queries a fast and simple basis to filter and find information that they are interested in.",
"title": ""
},
{
"docid": "a20a03fcb848c310cb966f6e6bc37c86",
"text": "A broad class of problems at the core of computational imaging, sensing, and low-level computer vision reduces to the inverse problem of extracting latent images that follow a prior distribution, from measurements taken under a known physical image formation model. Traditionally, hand-crafted priors along with iterative optimization methods have been used to solve such problems. In this paper we present unrolled optimization with deep priors, a principled framework for infusing knowledge of the image formation into deep networks that solve inverse problems in imaging, inspired by classical iterative methods. We show that instances of the framework outperform the state-of-the-art by a substantial margin for a wide variety of imaging problems, such as denoising, deblurring, and compressed sensing magnetic resonance imaging (MRI). Moreover, we conduct experiments that explain how the framework is best used and why it outperforms previous methods.",
"title": ""
},
{
"docid": "d5a772fa54c9a0d40e7a879831f79654",
"text": "It is almost certainly the case that many populations have always existed as metapopulations, leading to the conclusion that local extinctions are common and normally balanced by migrations. This conclusion has major consequences for biodiversity conservation in fragmented tropical forests and the agricultural matrices in which they are embedded. Here we make the argument that the conservation paradigm that focuses on setting aside pristine forests while ignoring the agricultural landscape is a failed strategy in light of what is now conventional wisdom in ecology. Given the fragmented nature of most tropical ecosystems, agricultural landscapes should be an essential component of any conservation strategy. We review the literature on biodiversity in tropical agricultural landscapes and present evidence that many tropical agricultural systems have high levels of biodiversity (planned and associated). These systems represent, not only habitat for biodiversity, but also a high-quality matrix that permits the movement of forest organisms among patches of natural vegetation. We review a variety of agroecosystem types and conclude that diverse, low-input systems using agroecological principles are probably the best option for a high-quality matrix. Such systems are most likely to be constructed by small farmers with land titles, who, in turn, are normally the consequence of grassroots social movements. Therefore, the new conservation paradigm should incorporate a landscape approach in which small farmers, through their social organizations, work with conservationists to create a landscape matrix dominated by productive agroecological systems that facilitate interpatch migration while promoting a sustainable and dignified livelihood for rural communities.",
"title": ""
},
{
"docid": "89dd97465c8373bb9dabf3cbb26a4448",
"text": "Unidirectional connections from the cortex to the matrix of the corpus striatum initiate the cortico-basal ganglia (BG)-thalamocortical loop, thought to be important in momentary action selection and in longer-term fine tuning of behavioural repertoire; a discrete set of striatal compartments, striosomes, has the complementary role of registering or anticipating reward that shapes corticostriatal plasticity. Re-entrant signals traversing the cortico-BG loop impact predominantly frontal cortices, conveyed through topographically ordered output channels; by contrast, striatal input signals originate from a far broader span of cortex, and are far more divergent in their termination. The term ‘disclosed loop’ is introduced to describe this organisation: a closed circuit that is open to outside influence at the initial stage of cortical input. The closed circuit component of corticostriatal afferents is newly dubbed ‘operative’, as it is proposed to establish the bid for action selection on the part of an incipient cortical action plan; the broader set of converging corticostriatal afferents is described as contextual. A corollary of this proposal is that every unit of the striatal volume, including the long, C-shaped tail of the caudate nucleus, should receive a mandatory component of operative input, and hence include at least one area of BG-recipient cortex amongst the sources of its corticostriatal afferents. Individual operative afferents contact twin classes of GABAergic striatal projection neuron (SPN), distinguished by their neurochemical character, and onward circuitry. This is the basis of the classic direct and indirect pathway model of the cortico-BG loop. Each pathway utilises a serial chain of inhibition, with two such links, or three, providing positive and negative feedback, respectively. Operative co-activation of direct and indirect SPNs is, therefore, pictured to simultaneously promote action, and to restrain it. The balance of this rival activity is determined by the contextual inputs, which summarise the external and internal sensory environment, and the state of ongoing behavioural priorities. Notably, the distributed sources of contextual convergence upon a striatal locus mirror the transcortical network harnessed by the origin of the operative input to that locus, thereby capturing a similar set of contingencies relevant to determining action. The disclosed loop formulation of corticostriatal and subsequent BG loop circuitry, as advanced here, refines the operating rationale of the classic model and allows the integration of more recent anatomical and physiological data, some of which can appear at variance with the classic model. Equally, it provides a lucid functional context for continuing cellular studies of SPN biophysics and mechanisms of synaptic plasticity.",
"title": ""
},
{
"docid": "0b1bb42b175ed925b357112d869d3ddd",
"text": "While location is one of the most important context information in mobile and ubiquitous computing, large-scale deployment of indoor localization system remains elusive.\n In this work, we propose PiLoc, an indoor localization system that utilizes opportunistically sensed data contributed by users. Our system does not require manual calibration, prior knowledge and infrastructure support. The key novelty of PiLoc is that it merges walking segments annotated with displacement and signal strength information from users to derive a map of walking paths annotated with radio signal strengths.\n We evaluate PiLoc over 4 different indoor areas. Evaluation shows that our system can achieve an average localization error of 1.5m.",
"title": ""
},
{
"docid": "27408da448d237ec9bfe7f2eeb94743c",
"text": "Background: Vaccinium arctostaphylos L. (Caucasian whortleberry) fruit is used as an antihyperglycemic agent for treatment of diabetes mellitus. Objective: The effects of whortleberry fruit and leaf extracts on the blood levels of fasting glucose, HbA1c (glycosylated hemoglobin), insulin, creatinine and liver enzymes SGOT and SGPT in alloxan-diabetic rats as well as LD50s of the extracts in rats were studied. Methods: The effects of 2 months daily gavage of each extract at the doses of 250 mg/kg, 500 mg/kg and 1000 mg/kg on the parameters after single alloxan intraperitoneal injection at a dose of 125 mg/kg in the rats were evaluated. To calculate LD50 (median lethal dose), each extract was gavaged to groups of 30 healthy male and female Wistar rats at various doses once and the number of dead animals in each group within 72 hours was determined. Results: Alloxan injection resulted in significant increase of fasting glucose and HbA1c levels but decreased insulin levels significantly. Oral administration of whortleberry fruit and leaf extracts (each at the doses of 250, 500 and 1000 mg/kg) significantly reduced the fasting glucose and HbA1c levels but significantly increased the insulin levels without any significant effects on the SGOT, SGPT and creatinine levels in the diabetic rats compared with the control diabetic rats. The LD50s of the extracts were more than 15 g/kg. Conclusion: Whortleberry fruits and leaves may have anti-hyperglycemic and blood insulin level elevating effects without hepatic and renal toxicities in the alloxan-diabetic rats and are relatively nontoxic in rats.",
"title": ""
},
{
"docid": "42c297b74abd95bbe70bb00ddb0aa925",
"text": "IMPASS (Intelligent Mobility Platform with Active Spoke System) is a novel locomotion system concept that utilizes rimless wheels with individually actuated spokes to provide the ability to step over large obstacles like legs, adapt to uneven surfaces like tracks, yet retaining the speed and simplicity of wheels. Since it lacks the complexity of legs and has a large effective (wheel) diameter, this highly adaptive system can move over extreme terrain with ease while maintaining respectable travel speeds. This paper presents the concept, preliminary kinematic analyses and design of an IMPASS based robot with two actuated spoke wheels and an articulated tail. The actuated spoke wheel concept allows multiple modes of motion, which give it the ability to assume a stable stance using three contact points per wheel, walk with static stability with two contact points per wheel, or stride quickly using one contact point per wheel. Straight-line motion and considerations for turning are discussed for the oneand two-point contact schemes followed by the preliminary design and recommendations for future study. Index Terms – IMPASS, rimless wheel, actuated spoke wheel, mobility, locomotion.",
"title": ""
},
{
"docid": "5cfc2b3a740d0434cf0b3c2812bd6e7a",
"text": "Well, someone can decide by themselves what they want to do and need to do but sometimes, that kind of person will need some a logical approach to discrete math references. People with open minded will always try to seek for the new things and information from many sources. On the contrary, people with closed mind will always think that they can do it by their principals. So, what kind of person are you?",
"title": ""
},
{
"docid": "8980bdf92581e8a0816364362fec409b",
"text": "OBJECTIVE\nPrenatal exposure to inappropriate levels of glucocorticoids (GCs) and maternal stress are putative mechanisms for the fetal programming of later health outcomes. The current investigation examined the influence of prenatal maternal cortisol and maternal psychosocial stress on infant physiological and behavioral responses to stress.\n\n\nMETHODS\nThe study sample comprised 116 women and their full term infants. Maternal plasma cortisol and report of stress, anxiety and depression were assessed at 15, 19, 25, 31 and 36 + weeks' gestational age. Infant cortisol and behavioral responses to the painful stress of a heel-stick blood draw were evaluated at 24 hours after birth. The association between prenatal maternal measures and infant cortisol and behavioral stress responses was examined using hierarchical linear growth curve modeling.\n\n\nRESULTS\nA larger infant cortisol response to the heel-stick procedure was associated with exposure to elevated concentrations of maternal cortisol during the late second and third trimesters. Additionally, a slower rate of behavioral recovery from the painful stress of a heel-stick blood draw was predicted by elevated levels of maternal cortisol early in pregnancy as well as prenatal maternal psychosocial stress throughout gestation. These associations could not be explained by mode of delivery, prenatal medical history, socioeconomic status or child race, sex or birth order.\n\n\nCONCLUSIONS\nThese data suggest that exposure to maternal cortisol and psychosocial stress exerts programming influences on the developing fetus with consequences for infant stress regulation.",
"title": ""
},
{
"docid": "57e70bca420ca75412758ef8591c99ab",
"text": "We present graph partition neural networks (GPNN), an extension of graph neural networks (GNNs) able to handle extremely large graphs. GPNNs alternate between locally propagating information between nodes in small subgraphs and globally propagating information between the subgraphs. To efficiently partition graphs, we experiment with spectral partitioning and also propose a modified multi-seed flood fill for fast processing of large scale graphs. We extensively test our model on a variety of semi-supervised node classification tasks. Experimental results indicate that GPNNs are either superior or comparable to state-of-the-art methods on a wide variety of datasets for graph-based semi-supervised classification. We also show that GPNNs can achieve similar performance as standard GNNs with fewer propagation steps.",
"title": ""
},
{
"docid": "2b9bc83596deb55302bb6f4314410269",
"text": "Collaborative Filtering: A Machine Learning Perspective Benjamin Marlin Master of Science Graduate Department of Computer Science University of Toronto 2004 Collaborative filtering was initially proposed as a framework for filtering information based on the preferences of users, and has since been refined in many different ways. This thesis is a comprehensive study of rating-based, pure, non-sequential collaborative filtering. We analyze existing methods for the task of rating prediction from a machine learning perspective. We show that many existing methods proposed for this task are simple applications or modifications of one or more standard machine learning methods for classification, regression, clustering, dimensionality reduction, and density estimation. We introduce new prediction methods in all of these classes. We introduce a new experimental procedure for testing stronger forms of generalization than has been used previously. We implement a total of nine prediction methods, and conduct large scale prediction accuracy experiments. We show interesting new results on the relative performance of these methods.",
"title": ""
},
{
"docid": "115b89c782465a740e5e7aa2cae52669",
"text": "Japan discards approximately 18 million tonnes of food annually, an amount that accounts for 40% of national food production. In recent years, a number of measures have been adopted at the institutional level to tackle this issue, showing increasing commitment of the government and other organizations. Along with the aim of environmental sustainability, food waste recycling, food loss prevention and consumer awareness raising in Japan are clearly pursuing another common objective. Although food loss and waste problems have been publicly acknowledged only very recently, strong implications arise from the economic and cultural history of the Japanese food system. Specific national concerns over food security have accompanied the formulation of current national strategies whose underlying causes and objectives add a unique facet to Japan’s efforts with respect to those of other developed countries’. Fighting Food Loss and Food Waste in Japan",
"title": ""
},
{
"docid": "c4dbfff3966e2694727aa171e29fa4bd",
"text": "The ability to recognize known places is an essential competence of any intelligent system that operates autonomously over longer periods of time. Approaches that rely on the visual appearance of distinct scenes have recently been developed and applied to large scale SLAM scenarios. FAB-Map is maybe the most successful of these systems. Our paper proposes BRIEF-Gist, a very simplistic appearance-based place recognition system based on the BRIEF descriptor. BRIEF-Gist is much more easy to implement and more efficient compared to recent approaches like FAB-Map. Despite its simplicity, we can show that it performs comparably well as a front-end for large scale SLAM. We benchmark our approach using two standard datasets and perform SLAM on the 66 km long urban St. Lucia dataset.",
"title": ""
}
] |
scidocsrr
|
6c13e8d7c22145c3e67adbfea4d3453f
|
Engaging Engineering Students with Gamification
|
[
{
"docid": "1ed692fd2da9c4f6d75fe3c15c7a3492",
"text": "The objective of this preliminary study is to investigate whether educational video games can be integrated into a classroom with positive effects for the teacher and students. The challenges faced when introducing a video game into a classroom are twofold: overcoming the notion that a \"toy\" does not belong in the school and developing software that has real educational value while stimulating the learner. We conducted an initial pilot study with 39 second grade students using our mathematic drill software Skills Arena. Early data from the pilot suggests that not only do teachers and students enjoy using Skills Arena, students have exceeded our expectations by doing three times more math problems in 19 days than they would have using traditional worksheets. Based on this encouraging qualitative study, future work that focuses on quantitative benefits should likely uncover additional positive results.",
"title": ""
}
] |
[
{
"docid": "5962b5655d389bbdc5274650d365cd37",
"text": "Swelling of the upper lip can result from various diseases such as salivary tumors, infectious and inflammatory diseases and cysts. Among the latter, dentigerous cysts, typically involving unerupted teeth, are sometimes associated with supernumerary teeth in the maxillary anterior incisors region called the mesiodens. We report an unusual case of a large dentigerous cyst associated with an impacted mesiodens in a 42-year-old male who presented with a slow-growing swelling in the upper lip.",
"title": ""
},
{
"docid": "1a5dd535269efb0fa31ef851655b2b13",
"text": "MIL-STD-1553 is a military standard that defines the physical and logical layers, and a command/response time division multiplexing of a communication bus used in military and aerospace avionic platforms for more than 40 years. As a legacy platform, MIL-STD-1553 was designed for high level of fault tolerance while less attention was taken with regard to security. Recent studies already addressed the impact of successful cyber attacks on aerospace vehicles that are implementing MIL-STD-1553. In this study we present a security analysis of MIL-STD-1553. In addition, we present a method for anomaly detection in MIL-STD-1553 communication bus and its performance in the presence of several attack scenarios implemented in a testbed, as well as results on real system data. Moreover, we propose a general approach towards an intrusion detection system (IDS) for a MIL-STD-1553 communication bus.",
"title": ""
},
{
"docid": "dd741d612ee466aecbb03f5e1be89b90",
"text": "To date, many of the methods for information extraction of biological information from scientific articles are restricted to the abstract of the article. However, full text articles in electronic version, which offer larger sources of data, are currently available. Several questions arise as to whether the effort of scanning full text articles is worthy, or whether the information that can be extracted from the different sections of an article can be relevant. In this work we addressed those questions showing that the keyword content of the different sections of a standard scientific article (abstract, introduction, methods, results, and discussion) is very heterogeneous. Although the abstract contains the best ratio of keywords per total of words, other sections of the article may be a better source of biologically relevant data.",
"title": ""
},
{
"docid": "6d8e78d8c48aab17aef0b9e608f13b99",
"text": "Optimal real-time distributed V2G and G2V management of electric vehicles Sonja Stüdli, Emanuele Crisostomi, Richard Middleton & Robert Shorten a Centre for Complex Dynamic Systems and Control, The University of Newcastle, New South Wales, Australia b Department of Energy, Systems, Territory and Constructions, University of Pisa, Pisa, Italy c IBM Research, Dublin, Ireland Accepted author version posted online: 10 Dec 2013.Published online: 05 Feb 2014.",
"title": ""
},
{
"docid": "ce2ff18063f16dca4c5d3aee414def8d",
"text": "Understanding 3D object structure from a single image is an important but challenging task in computer vision, mostly due to the lack of 3D object annotations to real images. Previous research tackled this problem by either searching for a 3D shape that best explains 2D annotations, or training purely on synthetic data with ground truth 3D information. In this work, we propose 3D INterpreter Networks (3D-INN), an end-to-end trainable framework that sequentially estimates 2D keypoint heatmaps and 3D object skeletons and poses. Our system learns from both 2D-annotated real images and synthetic 3D data. This is made possible mainly by two technical innovations. First, heatmaps of 2D keypoints serve as an intermediate representation to connect real and synthetic data. 3D-INN is trained on real images to estimate 2D keypoint heatmaps from an input image; it then predicts 3D object structure from heatmaps using knowledge learned from synthetic 3D shapes. By doing so, 3D-INN benefits from the variation and abundance of synthetic 3D objects, without suffering from the domain difference between real and synthesized images, often due to imperfect rendering. Second, we propose a Projection Layer, mapping estimated 3D structure back to 2D. During training, it ensures 3D-INN to predict 3D structure whose projection is consistent with the 2D annotations to real images. Experiments show that the proposed system performs well on both 2D keypoint estimation and 3D structure recovery. We also demonstrate that the recovered 3D information has wide vision applications, such as image retrieval.",
"title": ""
},
{
"docid": "c194e9c91d4a921b42ddacfc1d5a214f",
"text": "Smartphone applications' energy efficiency is vital, but many Android applications suffer from serious energy inefficiency problems. Locating these problems is labor-intensive and automated diagnosis is highly desirable. However, a key challenge is the lack of a decidable criterion that facilitates automated judgment of such energy problems. Our work aims to address this challenge. We conducted an in-depth study of 173 open-source and 229 commercial Android applications, and observed two common causes of energy problems: missing deactivation of sensors or wake locks, and cost-ineffective use of sensory data. With these findings, wepropose an automated approach to diagnosing energy problems in Android applications. Our approach explores an application's state space by systematically executing the application using Java PathFinder (JPF). It monitors sensor and wake lock operations to detect missing deactivation of sensors and wake locks. It also tracks the transformation and usage of sensory data and judges whether they are effectively utilized by the application using our state-sensitive data utilization metric. In this way, our approach can generate detailed reports with actionable information to assist developers in validating detected energy problems. We built our approach as a tool, GreenDroid, on top of JPF. Technically, we addressed the challenges of generating user interaction events and scheduling event handlers in extending JPF for analyzing Android applications. We evaluated GreenDroid using 13 real-world popular Android applications. GreenDroid completed energy efficiency diagnosis for these applications in a few minutes. It successfully located real energy problems in these applications, and additionally found new unreported energy problems that were later confirmed by developers.",
"title": ""
},
{
"docid": "db02adcb4f8ace13ab1f6f4a79bf7232",
"text": "This paper presents a spectral and time-frequency analysis of EEG signals recorded on seven healthy subjects walking on a treadmill at three different speeds. An accelerometer was placed on the head of the subjects in order to record the shocks undergone by the EEG electrodes during walking. Our results indicate that up to 15 harmonics of the fundamental stepping frequency may pollute EEG signals, depending on the walking speed and also on the electrode location. This finding may call into question some conclusions drawn in previous EEG studies where low-delta band (especially around 1 Hz, the fundamental stepping frequency) had been announced as being the seat of angular and linear kinematics control of the lower limbs during walk. Additionally, our analysis reveals that EEG and accelerometer signals exhibit similar time-frequency properties, especially in frequency bands extending up to 150 Hz, suggesting that previous conclusions claiming the activation of high-gamma rhythms during walking may have been drawn on the basis of insufficiently cleaned EEG signals. Our results are put in perspective with recent EEG studies related to locomotion and extensively discussed in particular by focusing on the low-delta and high-gamma bands.",
"title": ""
},
{
"docid": "3b4607a6b0135eba7c4bb0852b78dda9",
"text": "Heart rate variability for the treatment of major depression is a novel, alternative approach that can offer symptom reduction with minimal-to-no noxious side effects. The following material will illustrate some of the work being conducted at our laboratory to demonstrate the efficacy of heart rate variability. Namely, results will be presented regarding our published work on an initial open-label study and subsequent results of a small, unfinished randomized controlled trial.",
"title": ""
},
{
"docid": "81ea96fd08b41ce6e526d614e9e46a7e",
"text": "BACKGROUND\nChronic alcoholism is known to impair the functioning of episodic and working memory, which may consequently reduce the ability to learn complex novel information. Nevertheless, semantic and cognitive procedural learning have not been properly explored at alcohol treatment entry, despite its potential clinical relevance. The goal of the present study was therefore to determine whether alcoholic patients, immediately after the weaning phase, are cognitively able to acquire complex new knowledge, given their episodic and working memory deficits.\n\n\nMETHODS\nTwenty alcoholic inpatients with episodic memory and working memory deficits at alcohol treatment entry and a control group of 20 healthy subjects underwent a protocol of semantic acquisition and cognitive procedural learning. The semantic learning task consisted of the acquisition of 10 novel concepts, while subjects were administered the Tower of Toronto task to measure cognitive procedural learning.\n\n\nRESULTS\nAnalyses showed that although alcoholic subjects were able to acquire the category and features of the semantic concepts, albeit slowly, they presented impaired label learning. In the control group, executive functions and episodic memory predicted semantic learning in the first and second halves of the protocol, respectively. In addition to the cognitive processes involved in the learning strategies invoked by controls, alcoholic subjects seem to attempt to compensate for their impaired cognitive functions, invoking capacities of short-term passive storage. Regarding cognitive procedural learning, although the patients eventually achieved the same results as the controls, they failed to automate the procedure. Contrary to the control group, the alcoholic groups' learning performance was predicted by controlled cognitive functions throughout the protocol.\n\n\nCONCLUSION\nAt alcohol treatment entry, alcoholic patients with neuropsychological deficits have difficulty acquiring novel semantic and cognitive procedural knowledge. Compared with controls, they seem to use more costly learning strategies, which are nonetheless less efficient. These learning disabilities need to be considered when treatment requiring the acquisition of complex novel information is envisaged.",
"title": ""
},
{
"docid": "fe8696477881ab694ee3ecfcc92bf81a",
"text": "Hallervorden-Spatz syndrome (HSS) is an autosomal recessive neurodegenerative disorder associated with iron accumulation in the brain. Clinical features include extrapyramidal dysfunction, onset in childhood, and a relentlessly progressive course. Histologic study reveals iron deposits in the basal ganglia. In this respect, HSS may serve as a model for complex neurodegenerative diseases, such as Parkinson disease, Alzheimer disease, Huntington disease and human immunodeficiency virus (HIV) encephalopathy, in which pathologic accumulation of iron in the brain is also observed. Thus, understanding the biochemical defect in HSS may provide key insights into the regulation of iron metabolism and its perturbation in this and other neurodegenerative diseases. Here we show that HSS is caused by a defect in a novel pantothenate kinase gene and propose a mechanism for oxidative stress in the pathophysiology of the disease.",
"title": ""
},
{
"docid": "c14b9a0092ed8ba6d59e741422dfa586",
"text": "An elaboration on (Das et al., 2010), this report formalizes frame-semantic parsing as a structure prediction problem and describes an implemented parser that transforms an English sentence into a frame-semantic representation. SEMAFOR 1.0 finds words that evoke FrameNet frames, selects frames for them, and locates the arguments for each frame. The system uses two feature-based, discriminative probabilistic (log-linear) models, one with latent variables to permit disambiguation of new predicate words. The parser is demonstrated to significantly outperform previously published results and is released for public use.",
"title": ""
},
{
"docid": "395362cb22b0416e8eca67ec58907403",
"text": "This paper presents an approach for labeling objects in 3D scenes. We introduce HMP3D, a hierarchical sparse coding technique for learning features from 3D point cloud data. HMP3D classifiers are trained using a synthetic dataset of virtual scenes generated using CAD models from an online database. Our scene labeling system combines features learned from raw RGB-D images and 3D point clouds directly, without any hand-designed features, to assign an object label to every 3D point in the scene. Experiments on the RGB-D Scenes Dataset v.2 demonstrate that the proposed approach can be used to label indoor scenes containing both small tabletop objects and large furniture pieces.",
"title": ""
},
{
"docid": "227fa1a36ba6b664e37e8c93e133dfd0",
"text": "The notion of complex number is intimately related to the Fundamental Theorem of Algebra and is therefore at the very foundation of mathematical analysis. The development of complex algebra, however, has been far from straightforward.1 The human idea of ‘number’ has evolved together with human society. The natural numbers (1, 2, . . . ∈ N) are straightforward to accept, and they have been used for counting in many cultures, irrespective of the actual base of the number system used. At a later stage, for sharing, people introduced fractions in order to answer a simple problem such as ‘if we catch U fish, I will have two parts 5 U and you will have three parts 3 5 U of the whole catch’. The acceptance of negative numbers and zero has been motivated by the emergence of economy, for dealing with profit and loss. It is rather impressive that ancient civilisations were aware of the need for irrational numbers such as √ 2 in the case of the Babylonians [77] and π in the case of the ancient Greeks.2 The concept of a new ‘number’ often came from the need to solve a specific practical problem. For instance, in the above example of sharing U number of fish caught, we need to solve for 2U = 5 and hence to introduce fractions, whereas to solve x2 = 2 (diagonal of a square) irrational numbers needed to be introduced. Complex numbers came from the necessity to solve equations such as x2 = −1.",
"title": ""
},
{
"docid": "0a7db914781aacb79a7139f3da41efbb",
"text": "This work studies the reliability behaviour of gate oxides grown by in situ steam generation technology. A comparison with standard steam oxides is performed, investigating interface and bulk properties. A reduced conduction at low fields and an improved reliability is found for ISSG oxide. The initial lower bulk trapping, but with similar degradation rate with respect to standard oxides, explains the improved reliability results. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a3b1e2499142514614a7ab01d1227827",
"text": "In this paper, we propose a simple but robust scheme to detect denial of service attacks (including distributed denial of service attacks) by monitoring the increase of new IP addresses. Unlike previous proposals for bandwidth attack detection schemes which are based on monitoring the traffic volume, our scheme is very effective for highly distributed denial of service attacks. Our scheme exploits an inherent feature of DDoS attacks, which makes it hard for the attacker to counter this detection scheme by changing their attack signature. Our scheme uses a sequential nonparametric change point detection method to improve the detection accuracy without requiring a detailed model of normal and attack traffic. We demonstrate that we can achieve high detection accuracy on a range of different network packet traces.",
"title": ""
},
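The abstract above detects bandwidth attacks by monitoring the increase of new source IP addresses with a sequential nonparametric change-point method. The sketch below illustrates the general idea with a simple CUSUM-style detector over per-interval counts of previously unseen IPs; the drift and threshold values are illustrative assumptions, not the paper's parameters.

```python
def new_ip_counts(packet_batches):
    """Count previously unseen source IPs in each time interval."""
    seen, counts = set(), []
    for batch in packet_batches:              # each batch: iterable of source-IP strings
        fresh = {ip for ip in batch if ip not in seen}
        seen |= fresh
        counts.append(len(fresh))
    return counts

def cusum_alarms(counts, drift=5.0, threshold=30.0):
    """Nonparametric CUSUM: accumulate deviations above the allowed drift and
    raise an alarm when the running sum crosses the threshold."""
    s, alarms = 0.0, []
    for t, x in enumerate(counts):
        s = max(0.0, s + x - drift)
        if s > threshold:
            alarms.append(t)
            s = 0.0                           # reset after an alarm
    return alarms
```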
{
"docid": "b5d18b82e084042a6f31cb036ee83af5",
"text": "In this paper, signal and power integrity of complete High Definition Multimedia Interface (HDMI) channel with IBIS-AMI model is presented. Gigahertz serialization and deserialization (SERDES) has become a leading inter-chip and inter-board data transmission technique in high-end computing devices. The IBIS-AMI model is used for circuit simulation of high-speed serial interfaces. A 3D frequency-domain simulator (FEM) was used to estimate the channel loss for data bus and HDMI connector. Compliance testing is performed for HDMI channels to ensure channel parameters are meeting HDMI specifications.",
"title": ""
},
{
"docid": "751c50f54e0292c0bd144704774fba65",
"text": "BACKGROUND\nThe medial hamstring muscle has the potential to prevent excessive dynamic valgus and external rotation of the knee joint during sports. Thus, specific training targeting the medial hamstring muscle seems important to avoid knee injuries.\n\n\nOBJECTIVE\nThe aim was to investigate the medial and lateral hamstring muscle activation balance during 14 selected therapeutic exercises.\n\n\nSTUDY DESIGN\nThe study design involved single-occasion repeated measures in a randomised manner. Sixteen female elite handball and soccer players with a mean (SD) age of 23 (3) years and no previous history of knee injury participated in the present study. Electromyographic (EMG) activity of the lateral (biceps femoris - BF) and medial (semitendinosus - ST) hamstring muscle was measured during selected strengthening and balance/coordination exercises, and normalised to EMG during isometric maximal voluntary contraction (MVC). A two-way analysis of variance was performed using the mixed procedure to determine whether differences existed in normalised EMG between exercises and muscles.\n\n\nRESULTS\nKettlebell swing and Romanian deadlift targeted specifically ST over BF (Δ17-22%, p<0.05) at very high levels of normalised EMG (73-115% of MVC). In contrast, the supine leg curl and hip extension specifically targeted the BF over the ST (Δ 20-23%, p<0.05) at very high levels of normalised EMG (75-87% of MVC).\n\n\nCONCLUSION\nSpecific therapeutic exercises targeting the hamstrings can be divided into ST dominant or BF dominant hamstring exercises. Due to distinct functions of the medial and lateral hamstring muscles, this is an important knowledge in respect to prophylactic training and physical therapist practice.",
"title": ""
},
{
"docid": "b0da1e769baab5585f33a1cc6ecd261d",
"text": "We explore the finite sample properties of several semiparametric estimators of average treatment effects, including propensity score reweighting, matching, double robust, and control function estimators. When there is good overlap in the distribution of propensity scores for treatment and control units, reweighting estimators are preferred on bias grounds and attain the semiparametric efficiency bound even for samples of size 100. Pair matching exhibits similarly good performance in terms of bias, but has notably higher variance. Local linear and ridge matching are competitive with reweighting in terms of bias and variance, but only once n = 500. Nearest-neighbor, kernel, and blocking matching are not competitive. When overlap is close to failing, none of the estimators examined perform well and √ n -asymptotics may be a poor guide to finite sample performance. Trimming rules, commonly used in the face of problems with overlap, are effective only in settings with homogeneous treatment effects. JEL Classification: C14, C21, C52.",
"title": ""
},
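Among the estimators compared in the abstract above, propensity-score reweighting is the simplest to state. A minimal sketch is shown below, assuming a logistic-regression propensity model and a naive clipping rule for poor overlap; it is an illustration of the estimator, not the study's evaluation code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, treat, y, clip=0.01):
    """Normalized inverse-propensity-weighting estimate of the average treatment
    effect: X (n, d) covariates, treat (n,) in {0, 1}, y (n,) outcomes."""
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    ps = np.clip(ps, clip, 1 - clip)          # guard against extreme propensity scores
    w1, w0 = treat / ps, (1 - treat) / (1 - ps)
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)
```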
{
"docid": "301373338fe35426f5186f400f63dbd3",
"text": "OBJECTIVE\nThis paper describes state of the art, scientific publications and ongoing research related to the methods of analysis of respiratory sounds.\n\n\nMETHODS AND MATERIAL\nReview of the current medical and technological literature using Pubmed and personal experience.\n\n\nRESULTS\nThe study includes a description of the various techniques that are being used to collect auscultation sounds, a physical description of known pathologic sounds for which automatic detection tools were developed. Modern tools are based on artificial intelligence and on technics such as artificial neural networks, fuzzy systems, and genetic algorithms…\n\n\nCONCLUSION\nThe next step will consist in finding new markers so as to increase the efficiency of decision aid algorithms and tools.",
"title": ""
},
{
"docid": "fbd390ed58529fc5dc552d7550168546",
"text": "Recently, tuple-stores have become pivotal structures in many information systems. Their ability to handle large datasets makes them important in an era with unprecedented amounts of data being produced and exchanged. However, these tuple-stores typically rely on structured peer-to-peer protocols which assume moderately stable environments. Such assumption does not always hold for very large scale systems sized in the scale of thousands of machines. In this paper we present a novel approach to the design of a tuple-store. Our approach follows a stratified design based on an unstructured substrate. We focus on this substrate and how the use of epidemic protocols allow reaching high dependability and scalability.",
"title": ""
}
] |
scidocsrr
|
f1eadbcbafc358d629e62fbde229f7f6
|
The Recovery of Repeated-Sprint Exercise Is Associated with PCr Resynthesis, while Muscle pH and EMG Amplitude Remain Depressed
|
[
{
"docid": "7a5117c3f9add3198d262d8ac33817d9",
"text": "The relationship between changes in muscle metabolites and the contraction capacity was investigated in humans. Subjects (n = 13) contracted (knee extension) at a target force of 66% of the maximal voluntary contraction force (MVC) to fatigue, and the recovery in MVC and endurance (time to fatigue) were measured. Force recovered rapidly [half-time (t 1/2) less than 15 s] and after 2 min of recovery was not significantly different (P greater than 0.05) from the precontraction value. Endurance recovered more slowly (t 1/2 approximately 1.2 min) and was still significantly depressed after 2 and 4 min of recovery (P less than 0.05). In separate experiments (n = 10) muscle biopsy specimens were taken from the quadriceps femoris muscle before and after two successive contractions to fatigue at 66% of MVC with a recovery period of 2 or 4 min in between. The muscle content of high-energy phosphates and lactate was similar at fatigue after both contractions, whereas glucose 6-phosphate was lower after the second contraction (P less than 0.05). During recovery, muscle lactate decreased and was 74 and 43% of the value at fatigue after an elapsed period of 2 and 4 min, respectively. The decline in H+ due to lactate disappearance is balanced, however, by a release of H+ due to resynthesis of phosphocreatine, and after 2 min of recovery calculated muscle pH was found to remain at a low level similar to that at fatigue.(ABSTRACT TRUNCATED AT 250 WORDS)",
"title": ""
}
] |
[
{
"docid": "e20fd63eac8226c829efefdea5680228",
"text": "Optic disc segmentation in retinal fundus images plays a critical rule in diagnosing a variety of pathologies and abnormalities related to eye retina. Most of the abnormalities that are related to optic disc lead to structural changes in the inner and outer zones of optic disc. Optic disc segmentation on the level of whole retina image degrades the detection sensitivity for these zones. In this paper, we present an automated technique for the Region-Of-Interest segmentation of optic disc region in retinal images. Our segmentation technique reduces the processing area required for optic disc segmentation techniques leading to notable performance enhancement and reducing the amount of the required computational cost for each retinal image. DRIVE, DRISHTI-GS and DiaRetDB1 datasets were used to test and validate our proposed pre-processing technique.",
"title": ""
},
{
"docid": "fb4d8685bd880f44b489d7d13f5f36ed",
"text": "With the advancement in digitalization vast amount of Image data is uploaded and used via Internet in today’s world. With this revolution in uses of multimedia data, key problem in the area of Image processing, Computer vision and big data analytics is how to analyze, effectively process and extract useful information from such data. Traditional tactics to process such a data are extremely time and resource intensive. Studies recommend that parallel and distributed computing techniques have much more potential to process such data in efficient manner. To process such a complex task in efficient manner advancement in GPU based processing is also a candidate solution. This paper we introduce Hadoop-Mapreduce (Distributed system) and CUDA (Parallel system) based image processing. In our experiment using satellite images of different dimension we had compared performance or execution speed of canny edge detection algorithm. Performance is compared for CPU and GPU based Time Complexity.",
"title": ""
},
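As a point of reference for the abstract above, the single-machine baseline of the same Canny edge detection step can be timed with OpenCV as sketched below; OpenCV builds compiled with CUDA also expose a GPU variant, but that is not assumed here. The file path and thresholds are placeholders.

```python
import time
import cv2

def canny_cpu(path, low=100, high=200):
    """Run Canny edge detection on one image and report the elapsed time in seconds."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    t0 = time.perf_counter()
    edges = cv2.Canny(img, low, high)
    return edges, time.perf_counter() - t0
```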
{
"docid": "7146615b79dd39e358dd148e57a01fdb",
"text": "Graphs are one of the key data structures for many real-world computing applications and the importance of graph analytics is ever-growing. While existing software graph processing frameworks improve programmability of graph analytics, underlying general purpose processors still limit the performance and energy efficiency of graph analytics. We architect a domain-specific accelerator, Graphicionado, for high-performance, energy-efficient processing of graph analytics workloads. For efficient graph analytics processing, Graphicionado exploits not only data structure-centric datapath specialization, but also memory subsystem specialization, all the while taking advantage of the parallelism inherent in this domain. Graphicionado augments the vertex programming paradigm, allowing different graph analytics applications to be mapped to the same accelerator framework, while maintaining flexibility through a small set of reconfigurable blocks. This paper describes Graphicionado pipeline design choices in detail and gives insights on how Graphicionado combats application execution inefficiencies on general-purpose CPUs. Our results show that Graphicionado achieves a 1.76-6.54x speedup while consuming 50-100x less energy compared to a state-of-the-art software graph analytics processing framework executing 32 threads on a 16-core Haswell Xeon processor.",
"title": ""
},
{
"docid": "b93c26fd45a733aca8729a1faa148135",
"text": "In this paper the implementation of PID controllers for the development of passive rehabilitation exercises are presented. For which it is designed and built an ankle rehabilitation prototype based on a structure of parallel robot with a mechanism of the type 2-RRSP (two closed kinematic chains and consisting of joints: revolute-revolute-sphere in slot- fixed post with sphere). Free software is used to develop the computer programs associated with rehabilitation exercises. Regarding the laboratory testing stage, the results of passive exercises are reported, in these exercises path planning is included and PID control is used, for inversion-eversion, dorsal-plantar flexion and combined movements.",
"title": ""
},
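The abstract above relies on PID control for the passive ankle exercises. A minimal discrete PID loop of the kind typically used for such joint set-point tracking is sketched below; the gains and sampling time are illustrative assumptions, not the prototype's tuning.

```python
class PID:
    """Minimal discrete PID controller for tracking a single joint set-point."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a rehabilitation loop the controller is called once per sampling period with the planned joint angle and the measured angle, and its output is sent to the actuator.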
{
"docid": "f60297a06f32255ffd4e4d3bcf93e958",
"text": "We describe two direct quasilinear methods for camera pose (absolute orientation) and calibration from a single image of 4 or 5 known 3D points. They generalize the 6 point ‘Direct Linear Transform’ method by incorporating partial prior camera knowledge, while still allowing some unknown calibration parameters to be recovered. Only linear algebra is required, the solution is unique in non-degenerate cases, and additional points can be included for improved stability. Both methods fail for coplanar points, but we give an experimental eigendecomposition based one that handles both planar and nonplanar cases. Our methods use recent polynomial solving technology, and we give a brief summary of this. One of our aims was to try to understand the numerical behaviour of modern polynomial solvers on some relatively simple test cases, with a view to other vision applications.",
"title": ""
},
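The abstract above generalizes the 6-point Direct Linear Transform; the classical DLT baseline it builds on is compact enough to sketch, as below (an illustration of that baseline, not the proposed quasilinear methods).

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Estimate the 3x4 projection matrix P (up to scale) from n >= 6 point
    correspondences: X (n, 3) world points, x (n, 2) image points."""
    rows = []
    for Xi, (u, v) in zip(X, x):
        Xh = np.append(Xi, 1.0)                          # homogeneous world point
        rows.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)                          # null-space direction of the constraints
```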
{
"docid": "c72940e6154fa31f6bedca17336f8a94",
"text": "Following on from ecological theories of perception, such as the one proposed by [Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin] this paper reviews the literature on the multisensory interactions underlying the perception of flavor in order to determine the extent to which it is really appropriate to consider flavor perception as a distinct perceptual system. We propose that the multisensory perception of flavor may be indicative of the fact that the taxonomy currently used to define our senses is simply not appropriate. According to the view outlined here, the act of eating allows the different qualities of foodstuffs to be combined into unified percepts; and flavor can be used as a term to describe the combination of tastes, smells, trigeminal, and tactile sensations as well as the visual and auditory cues, that we perceive when tasting food.",
"title": ""
},
{
"docid": "5c7678fae587ef784b4327d545a73a3e",
"text": "The vision of Future Internet based on standard communication protocols considers the merging of computer networks, Internet of Things (IoT), Internet of People (IoP), Internet of Energy (IoE), Internet of Media (IoM), and Internet of Services (IoS), into a common global IT platform of seamless networks and networked “smart things/objects”. However, with the widespread deployment of networked, intelligent sensor technologies, an Internet of Things (IoT) is steadily evolving, much like the Internet decades ago. In the future, hundreds of billions of smart sensors and devices will interact with one another without human intervention, on a Machine-to-Machine (M2M) basis. They will generate an enormous amount of data at an unprecedented scale and resolution, providing humans with information and control of events and objects even in remote physical environments. This paper will provide an overview of performance evaluation, challenges and opportunities of IOT results for machine learning presented by this new paradigm.",
"title": ""
},
{
"docid": "54637f78527032fef8f3bbc7c7766199",
"text": "In this paper, we study the resource allocation and user scheduling problem for a downlink non-orthogonal multiple access network where the base station allocates spectrum and power resources to a set of users. We aim to jointly optimize the sub-channel assignment and power allocation to maximize the weighted total sum-rate while taking into account user fairness. We formulate the sub-channel allocation problem as equivalent to a many-to-many two-sided user-subchannel matching game in which the set of users and sub-channels are considered as two sets of players pursuing their own interests. We then propose a matching algorithm, which converges to a two-side exchange stable matching after a limited number of iterations. A joint solution is thus provided to solve the sub-channel assignment and power allocation problems iteratively. Simulation results show that the proposed algorithm greatly outperforms the orthogonal multiple access scheme and a previous non-orthogonal multiple access scheme.",
"title": ""
},
{
"docid": "9bba22f8f70690bee5536820567546e6",
"text": "Graph clustering involves the task of dividing nodes into clusters, so that the edge density is higher within clusters as opposed to across clusters. A natural, classic, and popular statistical setting for evaluating solutions to this problem is the stochastic block model, also referred to as the planted partition model. In this paper, we present a new algorithm-a convexified version of maximum likelihood-for graph clustering. We show that, in the classic stochastic block model setting, it outperforms existing methods by polynomial factors when the cluster size is allowed to have general scalings. In fact, it is within logarithmic factors of known lower bounds for spectral methods, and there is evidence suggesting that no polynomial time algorithm would do significantly better. We then show that this guarantee carries over to a more general extension of the stochastic block model. Our method can handle the settings of semirandom graphs, heterogeneous degree distributions, unequal cluster sizes, unaffiliated nodes, partially observed graphs, planted clique/coloring, and so on. In particular, our results provide the best exact recovery guarantees to date for the planted partition, planted k-disjoint-cliques and planted noisy coloring models with general cluster sizes; in other settings, we match the best existing results up to logarithmic factors.",
"title": ""
},
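For the planted-partition setting described above, the simplest point of comparison is spectral bisection of the adjacency matrix; a sketch of that baseline (not the paper's convexified maximum-likelihood method) is given below for two equal-sized clusters.

```python
import numpy as np

def spectral_bisection(A):
    """Split the nodes of a symmetric adjacency matrix A into two groups by the
    sign of the eigenvector of its second-largest eigenvalue."""
    _, vecs = np.linalg.eigh(A)               # eigenvalues returned in ascending order
    v2 = vecs[:, -2]                          # eigenvector of the second-largest eigenvalue
    return (v2 > 0).astype(int)
```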
{
"docid": "e35dade31a197b35fbbb0691a78e4c00",
"text": "Plenoptic images are obtained from the projection of the light crossing a matrix of microlens arrays which replicates the scene from different direction into a camera device sensor. Plenoptic images have a different structure with respect to regular digital images, and novel algorithms for data compression are currently under research. This paper proposes an algorithm for the compression of plenoptic images. The micro images composing a plenoptic image are processed by an adaptive prediction tool, aiming at reducing data correlation before entropy coding takes place. The algorithm is compared with state-of-the-art image compression algorithms, namely, JPEG 2000 and JPEG XR. Obtained results demonstrate that the proposed algorithm improves the coding efficiency.",
"title": ""
},
{
"docid": "cb363cd47b5cdb3c9364a51d487de7cd",
"text": "Crowdsourcing has been part of the IR toolbox as a cheap and fast mechanism to obtain labels for system development and evaluation. Successful deployment of crowdsourcing at scale involves adjusting many variables, a very important one being the number of workers needed per human intelligence task (HIT). We consider the crowdsourcing task of learning the answer to simple multiple-choice HITs, which are representative of many relevance experiments. In order to provide statistically significant results, one often needs to ask multiple workers to answer the same HIT. A stopping rule is an algorithm that, given a HIT, decides for any given set of worker answers to stop and output an answer or iterate and ask one more worker. In contrast to other solutions that try to estimate worker performance and answer at the same time, our approach assumes the historical performance of a worker is known and tries to estimate the HIT difficulty and answer at the same time. The difficulty of the HIT decides how much weight to give to each worker's answer. In this paper we investigate how to devise better stopping rules given workers' performance quality scores. We suggest adaptive exploration as a promising approach for scalable and automatic creation of ground truth. We conduct a data analysis on an industrial crowdsourcing platform, and use the observations from this analysis to design new stopping rules that use the workers' quality scores in a non-trivial manner. We then perform a number of experiments using real-world datasets and simulated data, showing that our algorithm performs better than other approaches.",
"title": ""
},
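A toy version of a quality-aware stopping rule in the spirit of the abstract above is sketched below: each worker's vote on a multiple-choice HIT is weighted by the log-odds of their historical accuracy, and querying stops once one option dominates. The confidence threshold and worker cap are illustrative assumptions, not the rules proposed in the paper.

```python
import math

def adaptive_answer(answer_stream, quality, options, conf=0.95, max_workers=10):
    """answer_stream yields (worker_id, answer); quality[worker_id] is that
    worker's historical accuracy estimate in (0, 1)."""
    scores = {o: 0.0 for o in options}
    asked = 0
    for worker, answer in answer_stream:
        q = min(max(quality[worker], 1e-3), 1 - 1e-3)
        scores[answer] += math.log(q / (1 - q))   # more reliable workers count more
        asked += 1
        exps = {o: math.exp(s) for o, s in scores.items()}
        total = sum(exps.values())
        best = max(exps, key=exps.get)
        if exps[best] / total >= conf or asked >= max_workers:
            return best, asked                    # stop asking further workers
    return max(scores, key=scores.get), asked
```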
{
"docid": "6f410e93fa7ab9e9c4a7a5710fea88e2",
"text": "We propose a fast, scalable locality-sensitive hashing method for the problem of retrieving similar physiological waveform time series. When compared to the naive k-nearest neighbor search, the method vastly speeds up the retrieval time of similar physiological waveforms without sacrificing significant accuracy. Our result shows that we can achieve 95% retrieval accuracy or better with up to an order of magnitude of speed-up. The extra time required in advance to create the optimal data structure is recovered when query quantity equals 15% of the repository, while the method incurs a trivial additional memory cost. We demonstrate the effectiveness of this method on an arterial blood pressure time series dataset extracted from the ICU physiological waveform repository of the MIMIC-II database.",
"title": ""
},
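The abstract above uses locality-sensitive hashing to avoid exhaustive k-nearest-neighbour search over waveform windows. A minimal sign-random-projection LSH index of the kind commonly used for this is sketched below; the bit and table counts are illustrative, and this is not the paper's exact scheme.

```python
import numpy as np

def build_index(X, n_bits=16, n_tables=8, seed=0):
    """Hash each fixed-length waveform window (rows of X) into n_tables hash tables."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_tables, n_bits, X.shape[1]))
    tables = [{} for _ in range(n_tables)]
    for idx, w in enumerate(X):
        for t in range(n_tables):
            key = tuple((planes[t] @ w > 0).astype(np.int8))
            tables[t].setdefault(key, []).append(idx)
    return planes, tables

def candidates(query, planes, tables):
    """Indices colliding with the query in at least one table (to be re-ranked exactly)."""
    out = set()
    for t, table in enumerate(tables):
        key = tuple((planes[t] @ query > 0).astype(np.int8))
        out.update(table.get(key, []))
    return out
```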
{
"docid": "10bac3f3ea70b341fd363e64859c1049",
"text": "A new maximum likelihood estimation approach for blind channel equalization, using variational autoencoders (VAEs), is introduced. Significant and consistent improvements in the error rate of the reconstructed symbols, compared to constant modulus equalizers, are demonstrated. In fact, for the channels that were examined, the performance of the new VAE blind channel equalizer was close to the performance of a nonblind adaptive linear minimum mean square error equalizer. The new equalization method enables a significantly lower latency channel acquisition compared to the constant modulus algorithm (CMA). The VAE uses a convolutional neural network with two layers and a very small number of free parameters. Although the computational complexity of the new equalizer is higher compared to CMA, it is still reasonable, and the number of free parameters to estimate is small.",
"title": ""
},
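The abstract above benchmarks the VAE equalizer against the constant modulus algorithm; the CMA baseline itself is compact enough to sketch, as below. The step size, tap count and target modulus are illustrative assumptions, and this is the classical baseline rather than the proposed VAE method.

```python
import numpy as np

def cma_equalize(received, n_taps=11, mu=1e-3, r2=1.0):
    """Classical blind CMA: adapt complex FIR taps so the output modulus approaches
    r2, without knowledge of the transmitted symbols. `received` is a complex array."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # centre-spike initialisation
    out = np.zeros(len(received) - n_taps + 1, dtype=complex)
    for k in range(len(out)):
        x = received[k:k + n_taps][::-1]
        y = np.vdot(w, x)                     # filter output w^H x
        out[k] = y
        e = (abs(y) ** 2 - r2) * y            # constant-modulus error term
        w -= mu * np.conj(e) * x              # stochastic-gradient tap update
    return out, w
```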
{
"docid": "ac2d4f4e6c73c5ab1734bfeae3a7c30a",
"text": "While neural, encoder-decoder models have had significant empirical success in text generation, there remain several unaddressed problems with this style of generation. Encoderdecoder models are largely (a) uninterpretable, and (b) difficult to control in terms of their phrasing or content. This work proposes a neural generation system using a hidden semimarkov model (HSMM) decoder, which learns latent, discrete templates jointly with learning to generate. We show that this model learns useful templates, and that these templates make generation both more interpretable and controllable. Furthermore, we show that this approach scales to real data sets and achieves strong performance nearing that of encoderdecoder text generation models.",
"title": ""
},
{
"docid": "ac8aea4d68b3a8e0a294d2b520412cd5",
"text": "Forest autotrophic respiration (R(a)) plays an important role in the carbon balance of forest ecosystems. However, its drivers at the global scale are not well known. Based on a global forest database, we explore the relationships of annual R(a) with mean annual temperature (MAT) and biotic factors including net primary productivity (NPP), total biomass, stand age, mean tree height, and maximum leaf area index (LAI). The results show that the spatial patterns of forest annual R(a) at the global scale are largely controlled by temperature. R(a) is composed of growth (R(g)) and maintenance respiration (R(m)). We used a modified Arrhenius equation to express the relationship between R(a) and MAT. This relationship was calibrated with our data and shows that a 10 degrees C increase in MAT will result in an increase of annual R(m) by a factor of 1.9-2.5 (Q10). We also found that the fraction of total assimilation (gross primary production, GPP) used in R(a) is lowest in the temperate regions characterized by a MAT of approximately 11 degrees C. Although we could not confirm a relationship between the ratio of R(a) to GPP and age across all forest sites, the R(a) to GPP ratio tends to significantly increase in response to increasing age for sites with MAT between 8 degrees and 12 degrees C. At the plant scale, direct up-scaled R(a) estimates were found to increase as a power function with forest total biomass; however, the coefficient of the power function (0.2) was much smaller than that expected from previous studies (0.75 or 1). At the ecosystem scale, R(a) estimates based on both GPP - NPP and TER - R(h) (total ecosystem respiration - heterotrophic respiration) were not significantly correlated with forest total biomass (P > 0.05) with either a linear or a power function, implying that the previous individual-based metabolic theory may be not suitable for the application at ecosystem scale.",
"title": ""
},
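The temperature dependence reported above can be illustrated with the plain Q10 form of the relationship (the paper itself fits a modified Arrhenius equation, which is not reproduced here); the reference temperature and Q10 value below are assumptions chosen inside the reported 1.9-2.5 range.

```python
def maintenance_respiration(rm_ref, mat, mat_ref=10.0, q10=2.2):
    """Scale annual maintenance respiration from a reference value at mat_ref (deg C)
    to a site's mean annual temperature mat using a Q10 relationship."""
    return rm_ref * q10 ** ((mat - mat_ref) / 10.0)
```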
{
"docid": "160058dae12ea588352f5015483081fc",
"text": "Semiotics is the study of signs. Signs take the form of words, images, sounds, odours, flavours, acts or objects but such things have no intrinsic meaning and become signs only when we invest them with meaning. ‘Nothing is a sign unless it is interpreted as a sign,’ declares Peirce (Peirce, 1931). The two dominant models of a sign are the linguist Ferdinand de Saussure and the philosopher Charles Sanders Peirce. This paper attempts to study the role of semiotics in linguistics. How signs play an important role in studying the language? Index: Semioticstheory of signs and symbols Semanticsstudy of sentences Denotataan actual object referred to by a linguistic expression Divergentmove apart in different directions Linguisticsscientific study of language --------------------------------------------------------------------------------------------Introduction: Semiotics or semiology is the study of sign processes or signification and communication, signs and symbols. It is divided into the three following branches: Semantics: Relation between signs and the things to which they refer; their denotata Syntactics: Relations among signs in formal structures Pragmatics: Relation between signs and their effects on people who use them Syntactics is the branch of semiotics that deals with the formal properties of signs and symbols. It deals with the rules that govern how words are combined to form phrases and sentences. According to Charles Morris “semantics deals with the relation of signs to their designate and the objects which they may or do denote” (Foundations of the theory of science, 1938); and, pragmatics deals with the biotic aspects of semiosis, that is, with all the psychological, biological and sociological phenomena which occur in the functioning of signs. The term, which was spelled semeiotics was first used in English by Henry Stubbes in a very precise sense to denote the branch of medical science relating to the interpretation of signs. Semiotics is not widely institutionalized as an academic discipline. It is a field of study involving many different theoretical stances and methodological tools. One of the broadest definitions is that of Umberto Eco, who states that ‘semiotics is concerned with everything that can be taken as a sign’ (A Theory of Semiotics, 1979). Semiotics involves the study not only of what we refer to as ‘signs’ in everyday speech, but of anything which ‘stands for’ something else. In a semiotic sense, signs take the form of words, images, sounds, gestures and objects. Whilst for the linguist Saussure, ‘semiology’ was ‘a science which studies the role of signs as part of social life’, (Nature of the linguistics sign, 1916) for the philosopher Charles Pierce ‘semiotic’ was the ‘formal doctrine of signs’ which was closely related to logic. For him, ‘a sign... is something which stands to somebody for something in some respect or capacity’. He declared that ‘every thought is a sign.’ Literature review: Semiotics is often employed in the analysis of texts, although it is far more than just a mode of textual analysis. Here it should perhaps be noted that a ‘text’ can IJSER International Journal of Scientific & Engineering Research, Volume 6, Issue 1, January-2015 2135",
"title": ""
},
{
"docid": "f53dc3977a9e8c960e0232ef59c0e7fd",
"text": "The interest in action and gesture recognition has grown considerably in the last years. In this paper, we present a survey on current deep learning methodologies for action and gesture recognition in image sequences. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. We review the details of the proposed architectures, fusion strategies, main datasets, and competitions. We summarize and discuss the main works proposed so far with particular interest on how they treat the temporal dimension of data, discussing their main features and identify opportunities and challenges for future research.",
"title": ""
},
{
"docid": "bb6dfed56811136cb3efbb5e3939a386",
"text": "Advancements in IC manufacturing technologies allow for building very large devices with billions of transistors and with complex interactions between them encapsulated in a huge number of design rules. To ease designers' efforts in dealing with electrical and manufacturing problems, regular layout style seems to be a viable option. In this paper we analyze regular layouts in an IC manufacturability context and define their desired properties. We introduce the OPC-free IC design methodology and study properties of cells designed for this layout style that have various degrees of regularity.",
"title": ""
},
{
"docid": "6b4e1e45ef1b91b7694c62bd5d3cd9fc",
"text": "Recently, academia and law enforcement alike have shown a strong demand for data that is collected from online social networks. In this work, we present a novel method for harvesting such data from social networking websites. Our approach uses a hybrid system that is based on a custom add-on for social networks in combination with a web crawling component. The datasets that our tool collects contain profile information (user data, private messages, photos, etc.) and associated meta-data (internal timestamps and unique identifiers). These social snapshots are significant for security research and in the field of digital forensics. We implemented a prototype for Facebook and evaluated our system on a number of human volunteers. We show the feasibility and efficiency of our approach and its advantages in contrast to traditional techniques that rely on application-specific web crawling and parsing. Furthermore, we investigate different use-cases of our tool that include consensual application and the use of sniffed authentication cookies. Finally, we contribute to the research community by publishing our implementation as an open-source project.",
"title": ""
},
{
"docid": "1f4985ca0e188bfbf9145875cd7acfc5",
"text": "Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on ‘mind-less morality’ we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the ‘Method of Abstraction’ for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The ‘Method of Abstraction’ is explained in terms of an ‘interface’ or set of features or observables at a given ‘LoA’. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the ‘transition rules’ by which state is changed) at a given LoA. Morality may be thought of as a ‘threshold’ defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it. That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary ‘cost’ of this facility is the extension of the class of agents and moral agents to embrace AAs.",
"title": ""
}
] |
scidocsrr
|
644cf986a676fbbb7de0053c3e3346da
|
Open Information Extraction with Tree Kernels
|
[
{
"docid": "cbda9744930c6d7282bca3f0083da8a3",
"text": "Open Information Extraction extracts relations from text without requiring a pre-specified domain or vocabulary. While existing techniques have used only shallow syntactic features, we investigate the use of semantic role labeling techniques for the task of Open IE. Semantic role labeling (SRL) and Open IE, although developed mostly in isolation, are quite related. We compare SRL-based open extractors, which perform computationally expensive, deep syntactic analysis, with TextRunner, an open extractor, which uses shallow syntactic analysis but is able to analyze many more sentences in a fixed amount of time and thus exploit corpus-level statistics. Our evaluation answers questions regarding these systems, including, can SRL extractors, which are trained on PropBank, cope with heterogeneous text found on the Web? Which extractor attains better precision, recall, f-measure, or running time? How does extractor performance vary for binary, n-ary and nested relations? How much do we gain by running multiple extractors? How do we select the optimal extractor given amount of data, available time, types of extractions desired?",
"title": ""
},
{
"docid": "5f2818d3a560aa34cc6b3dbfd6b8f2cc",
"text": "Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, stateof-the-art Open IE systems such as REVERB and WOE share two important weaknesses – (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents OLLIE, a substantially improved Open IE system that addresses both these limitations. First, OLLIE achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. OLLIE obtains 2.7 times the area under precision-yield curve (AUC) compared to REVERB and 1.9 times the AUC of WOE.",
"title": ""
}
] |
[
{
"docid": "e8dcfb4a327890a965557517a2336f24",
"text": "Wine industry is hungry for item-level UHF RFID solutions. In this paper, a conformal UHF tag antenna directly mountable on the wine bottle neck is proposed. 360-degree readability is achieved by taking advantage of the antenna curving, and good impedance matching is realized by following a co-design approach. Simulation and measurement results demonstrate that the tag antenna provides a 10-dB bandwidth of 96 MHz, and a read range of 3~7 m around a full winebottle.",
"title": ""
},
{
"docid": "c9ca8d6f38c44bde6983e401a967c399",
"text": "The validation and verification of cognitive skills of highly automated vehicles is an important milestone for legal and public acceptance of advanced driver assistance systems (ADAS). In this paper, we present an innovative data-driven method in order to create critical traffic situations from recorded sensor data. This concept is completely contrary to previous approaches using parametrizable simulation models. We demonstrate our concept at the example of parametrizing lane change maneuvers: Firstly, the road layout is automatically derived from observed vehicle trajectories. The road layout is then used in order to detect vehicle maneuvers, which is shown exemplarily on lane change maneuvers. Then, the maneuvers are parametrized using data operators in order to create critical traffic scenarios. Finally, we demonstrate our concept using LIDAR-captured traffic situations on urban and highway scenes, creating critical scenarios out of safely recorded data.",
"title": ""
},
{
"docid": "4bf9ec9d1600da4eaffe2bfcc73ee99f",
"text": "Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. Nowadays, large amount of data and information are available, Data can now be stored in many different kinds of databases and information repositories, being available on the Internet. There is a need for powerful techniques for better interpretation of these data that exceeds the human's ability for comprehension and making decision in a better way. There are data mining, web mining and knowledge discovery tools and software packages such as WEKA Tool and RapidMiner tool. The work deals with analysis of WEKA, RapidMiner and NetTools spider tools KNIME and Orange. There are various tools available for data mining and web mining. Therefore awareness is required about the quantitative investigation of these tools. This paper focuses on various functional, practical, cognitive as well as analysis aspects that users may be looking for in the tools. Complete study addresses the usefulness and importance of these tools including various aspects. Analysis presents various benefits of these data mining tools along with desired aspects and the features of current tools. KEYWORDSData Mining, KDD, Data Mining Tools.",
"title": ""
},
{
"docid": "9228b9d51d15830316b820db6abb5a22",
"text": "The growing interest of the requirements engineering (RE) community to elicit user requirements from large amounts of available online user feedback about software-intensive products resulted in identification of such data as a sensible source of user requirements. Some researchers proposed automation approaches for extracting the requirements from user reviews. Although there is a common assumption that manually analyzing large amounts of user reviews is challenging, no benchmarking has yet been performed that compares the manual and the automated approaches conderning their efficiency. We performed an expert-based manual analysis of 4,006 sentences from typical user feedback contents and formats and measured the amount of time required for each step. Then, we conducted an automated analysis of the same dataset to identify the degree to which automation makes the analysis more scalable. We found that a manual analysis indeed does not scale well and that an automated analysis is many times faster, and scales well to increasing numbers of user reviews.",
"title": ""
},
{
"docid": "f1e293b4b896547b17b5becb1e06cb47",
"text": "Occupational therapy has been an invisible profession, largely because the public has had difficulty grasping the concept of occupation. The emergence of occupational science has the potential of improving this situation. Occupational science is firmly rooted in the founding ideas of occupational therapy. In the future, the nature of human occupation will be illuminated by the development of a basic theory of occupational science. Occupational science, through research and theory development, will guide the practice of occupational therapy. Applications of occupational science to the practice of pediatric occupational therapy are presented. Ultimately, occupational science will prepare pediatric occupational therapists to better meet the needs of parents and their children.",
"title": ""
},
{
"docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db",
"text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.",
"title": ""
},
{
"docid": "446557efe628a8683832351d6bd706fd",
"text": "After inequalities in the health sector are measured, a natural next step is to seek to explain them. Why do inequalities in health exist between the poor and better-off in many countries despite health systems explicitly aimed at eliminating inequalities in access to health care? Why is inequality in the incidence of health sector subsidies greater in one country than in another? Why has the distribution of health or health care changed over time? In this chapter and the next, we consider methods of decomposing inequality in health or health care into contributing factors. The core idea is to explain the distribution of the outcome variable in question by a set of factors that vary systematically with socioeconomic status. For example, variations in health may be explained by variations in education, income, insurance coverage, distance to health facilities, and quality of care at local facilities. Even if policy makers have managed to eliminate inequalities in some of these dimensions, inequalities between the poor and better-off may remain in others. The decomposition methods reveal how far inequalities in health can be explained by inequalities in, say, insurance coverage rather than inequalities in, say, distance to health facilities. The decompositions in this chapter and the next are based on regression analysis of the relationships between the health variable of interest and its correlates. Such analyses are usually purely descriptive, revealing the associations that characterize the health inequality, but if data are suffi cient to allow the estimation of causal effects, then it is possible to identify the factors that generate inequality in the variable of interest. In cases in which causal effects have not been obtained, the decomposition provides an explanation in the statistical sense, and the results will not necessarily be a good guide to policy making. For example, the results will not help us predict how inequalities in Y would change if policy makers were to reduce inequalities in X, or reduce the effect of X and Y (e.g., by expanding facilities serving remote populations if X were distance to provider). By contrast, if causal effects have been obtained, the decomposition results ought to shed light on such issues. The decomposition method outlined in this chapter, known as the Oaxaca decomposition (Oaxaca 1973), explains the gap in the means of an outcome variable between two groups (e.g., between the poor and the nonpoor). The gap is decomposed into that part …",
"title": ""
},
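The two-fold version of the decomposition described in the preceding abstract can be sketched in a few lines of Python. This is a generic illustration, not the chapter's own code: the choice of group B's coefficients as the reference, the synthetic covariates, and the use of plain OLS are assumptions made only for the example.

```python
import numpy as np

def oaxaca_twofold(X_a, y_a, X_b, y_b):
    """Two-fold Blinder-Oaxaca decomposition of the gap in mean outcomes
    between group A and group B (e.g., better-off vs. poor).
    X_a, X_b: (n, k) covariate matrices without a constant column.
    y_a, y_b: (n,) outcome vectors (e.g., a health score)."""
    # Add intercepts and fit separate OLS regressions for each group.
    Xa = np.column_stack([np.ones(len(y_a)), X_a])
    Xb = np.column_stack([np.ones(len(y_b)), X_b])
    beta_a, *_ = np.linalg.lstsq(Xa, y_a, rcond=None)
    beta_b, *_ = np.linalg.lstsq(Xb, y_b, rcond=None)

    xbar_a, xbar_b = Xa.mean(axis=0), Xb.mean(axis=0)
    gap = y_a.mean() - y_b.mean()
    # Explained part: differences in average characteristics, valued at group B's coefficients.
    explained = (xbar_a - xbar_b) @ beta_b
    # Unexplained part: differences in coefficients, valued at group A's characteristics.
    unexplained = xbar_a @ (beta_a - beta_b)
    return gap, explained, unexplained

# Tiny synthetic example: schooling and income explain part of a health gap.
rng = np.random.default_rng(0)
X_a = rng.normal([12, 8], 1.0, size=(500, 2))   # better-off group
X_b = rng.normal([9, 5], 1.0, size=(500, 2))    # poorer group
y_a = 1.0 + 0.3 * X_a[:, 0] + 0.2 * X_a[:, 1] + rng.normal(0, 0.5, 500)
y_b = 0.5 + 0.3 * X_b[:, 0] + 0.2 * X_b[:, 1] + rng.normal(0, 0.5, 500)
print(oaxaca_twofold(X_a, y_a, X_b, y_b))
```

With OLS and intercepts, the explained and unexplained parts sum exactly to the total gap, which is the accounting identity the chapter builds on.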
{
"docid": "8dfeae1304eb97bc8f7d872af7aaa795",
"text": "Encouraged by the recent progress in pedestrian detection, we investigate the gap between current state-of-the-art methods and the \"perfect single frame detector\". We enable our analysis by creating a human baseline for pedestrian detection (over the Caltech dataset), and by manually clustering the recurrent errors of a top detector. Our results characterise both localisation and background-versusforeground errors. To address localisation errors we study the impact of training annotation noise on the detector performance, and show that we can improve even with a small portion of sanitised training data. To address background/foreground discrimination, we study convnets for pedestrian detection, and discuss which factors affect their performance. Other than our in-depth analysis, we report top performance on the Caltech dataset, and provide a new sanitised set of training and test annotations.",
"title": ""
},
{
"docid": "2ecd0bf132b3b77dc1625ef8d09c925b",
"text": "This paper presents an efficient algorithm to compute time-to-x (TTX) criticality measures (e.g. time-to-collision, time-to-brake, time-to-steer). Such measures can be used to trigger warnings and emergency maneuvers in driver assistance systems. Our numerical scheme finds a discrete time approximation of TTX values in real time using a modified binary search algorithm. It computes TTX values with high accuracy by incorporating realistic vehicle dynamics and using realistic emergency maneuver models. It is capable of handling complex object behavior models (e.g. motion prediction based on DGPS maps). Unlike most other methods presented in the literature, our approach enables decisions in scenarios with multiple static and dynamic objects in the scene. The flexibility of our method is demonstrated on two exemplary applications: intersection assistance for left-turn-across-path scenarios and pedestrian protection by automatic steering.",
"title": ""
},
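A toy version of the time-to-x idea from the preceding abstract can be sketched as follows: a forward simulation answers "does braking at time t still avoid the collision?", and a binary search over the discretized time axis finds the last safe braking time (time-to-brake). The 1-D constant-deceleration model, the parameter values, and the assumption that later braking is never safer are simplifications for illustration; the paper's method handles realistic vehicle dynamics, steering maneuvers, and multiple static and dynamic objects.

```python
def collides(t_brake, dt=0.01, horizon=6.0,
             v_ego=20.0, a_brake=-8.0, x_obs=50.0, v_obs=5.0):
    """Forward-simulate a 1-D following scenario in which the ego vehicle starts
    emergency braking at t_brake; return True if it still reaches the slower
    lead object within the horizon."""
    x, v, xo, t = 0.0, v_ego, x_obs, 0.0
    while t < horizon:
        a = a_brake if t >= t_brake else 0.0
        v = max(0.0, v + a * dt)
        x += v * dt
        xo += v_obs * dt
        if x >= xo:
            return True
        t += dt
    return False

def time_to_brake(dt=0.01, horizon=6.0):
    """Largest brake-initiation time that still avoids the collision,
    found by a binary search on the discretized time axis."""
    lo, hi = 0.0, horizon
    if collides(lo, dt=dt, horizon=horizon):   # even immediate braking fails
        return None
    while hi - lo > dt:
        mid = 0.5 * (lo + hi)
        if collides(mid, dt=dt, horizon=horizon):
            hi = mid                           # braking at mid is too late
        else:
            lo = mid                           # braking at mid still avoids it
    return lo

print("time-to-brake ~ %.2f s" % time_to_brake())
```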
{
"docid": "7e0329d95d2d1c46eeaf136b06fdf267",
"text": "The National Renewable Energy Laboratory has recently publicly released its second-generation advanced vehicle simulator called ADVISOR 2.0. This software program was initially developed four years ago, and after several years of in-house usage and evolution, this powerful tool is now available to the public through a new vehicle systems analysis World Wide Web page. ADVISOR has been applied to many different systems analysis problems, such as helping to develop the SAE J1711 test procedure for hybrid vehicles and helping to evaluate new technologies as part of the Partnership for a New Generation of Vehicles (PNGV) technology selection process. The model has been and will continue to be benchmarked and validated with other models and with real vehicle test data. After two months of being available on the Web, more than 100 users have downloaded ADVISOR. ADVISOR 2.0 has many new features, including an easy-to-use graphical user interface, a detailed exhaust aftertreatment thermal model, and complete browser-based documentation. Future work will include adding to the library of components available in ADVISOR, including optimization functionality, and linking with a more detailed fuel cell model.",
"title": ""
},
{
"docid": "7cb5e9779a857636e32e7bd753d25315",
"text": "The effect of 10 wk of protein-supplement timing on strength, power, and body composition was examined in 33 resistance-trained men. Participants were randomly assigned to a protein supplement either provided in the morning and evening (n = 13) or provided immediately before and immediately after workouts (n = 13). In addition, 7 participants agreed to serve as a control group and did not use any protein or other nutritional supplement. During each testing session participants were assessed for strength (one-repetition-maximum [1RM] bench press and squat), power (5 repetitions performed at 80% of 1RM in both the bench press and the squat), and body composition. A significant main effect for all 3 groups in strength improvement was seen in 1RM bench press (120.6 +/- 20.5 kg vs. 125.4 +/- 16.7 at Week 0 and Week 10 testing, respectively) and 1RM squat (154.5 +/- 28.4 kg vs. 169.0 +/- 25.5 at Week 0 and Week 10 testing, respectively). However, no significant between-groups interactions were seen in 1RM squat or 1RM bench press. Significant main effects were also seen in both upper and lower body peak and mean power, but no significant differences were seen between groups. No changes in body mass or percent body fat were seen in any of the groups. Results indicate that the time of protein-supplement ingestion in resistance-trained athletes during a 10-wk training program does not provide any added benefit to strength, power, or body-composition changes.",
"title": ""
},
{
"docid": "b8c92f2be87e0e7bb270a966f829d561",
"text": "In order to enhance the instantaneity of SLAM for indoor mobile robot, a RGBD SLAM method based on Kinect was proposed. In the method, oriented FAST and rotated BRIEF(ORB) algorithm was combined with progressive sample consensus(PROSAC) algorithm to execute feature extracting and matching. More specifically, ORB algorithm which has better property than many other feature descriptors was used for extracting feature. At the same time, ICP algorithm was adopted for coarse registration of the point clouds, and PROSAC algorithm which is superior than RANSAC in outlier removal was employed to eliminate incorrect matching. To make the result more accurate, pose-graph optimization was achieved based on g2o framework. In the end, a 3D volumetric map which can be directly used to the navigation of robots was created.",
"title": ""
},
{
"docid": "3c7f1dac6a7f2f73e29a29788acb02ce",
"text": "Rhinitis can be divided into allergic and non-allergic rhinitis. Rhinitis, particularly allergic rhinitis, has been shown to be associated with obstructive sleep apnea; a condition characterized by repetitive upper airway obstruction during sleep. Allergic rhinitis increases the risk of developing obstructive sleep apnea by two major mechanisms: 1) increase in airway resistance due to higher nasal resistance and 2) reduction in pharyngeal diameter from mouth breathing that moves the mandible inferiorly. Other inflammatory mediators including histamine, CysLTs, IL 1β and IL-4 found in high levels in allergic rhinitis, have also been shown to worsen sleep quality in obstructive sleep apnea. Prior studies have shown that treatment of allergic rhinitis, particularly when intranasal steroid are used, improved obstructive sleep apnea. Leukotriene receptor antagonists were also associated with positive results on obstructive sleep apnea in adult patients with concomitant allergic rhinitis but current data are limited in the case of children.",
"title": ""
},
{
"docid": "e546f1bc6476a0d427caf6563aa41ac5",
"text": "Analysis and reconstruction of range images usually focuses on complex objects completely contained in the field of view; little attention has been devoted so far to the reconstruction of partially occluded simple-shaped wide areas like parts of a wall hidden behind furniture pieces in an indoor range image. The work in this paper is aimed at such reconstruction. First of all the range image is partitioned and surfaces are fitted to these partitions. A further step lo cates possibly occluded areas, while a final step determines which areas are actually occluded. The reconstruction of data occurs in this last step.",
"title": ""
},
{
"docid": "62a0b14c86df32d889d43eb484eadcda",
"text": "Common spatial pattern (CSP) is a popular feature extraction method for electroencephalogram (EEG) classification. Most of existing CSP-based methods exploit covariance matrices on a subject-by-subject basis so that inter-subject information is neglected. In this paper we present modifications of CSP for subject-to-subject transfer, where we exploit a linear combination of covariance matrices of subjects in consideration. We develop two methods to determine a composite covariance matrix that is a weighted sum of covariance matrices involving subjects, leading to composite CSP. Numerical experiments on dataset IVa in BCI competition III confirm that our composite CSP methods improve classification performance over the standard CSP (on a subject-by-subject basis), especially in the case of subjects with fewer number of training samples.",
"title": ""
},
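The composite-covariance idea in the preceding abstract lends itself to a short sketch. The Python code below is not the paper's implementation: the fixed weight vector stands in for the paper's two weighting methods (which are not reproduced here), and the channel count and random covariance matrices are made up for illustration.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(cov_class1, cov_class2, n_filters=6):
    """Solve the generalized eigenvalue problem C1 w = lambda (C1 + C2) w and
    keep the filters with the largest and smallest eigenvalues."""
    vals, vecs = eigh(cov_class1, cov_class1 + cov_class2)
    order = np.argsort(vals)
    picks = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
    return vecs[:, picks].T                      # (n_filters, n_channels)

def composite_covariances(covs_c1, covs_c2, weights):
    """Composite covariance per class: a weighted sum over subjects'
    class-conditional covariance matrices."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    C1 = sum(wi * Ci for wi, Ci in zip(w, covs_c1))
    C2 = sum(wi * Ci for wi, Ci in zip(w, covs_c2))
    return C1, C2

# Toy usage: 3 subjects, 8 EEG channels; the target subject gets the largest weight.
rng = np.random.default_rng(1)
def rand_cov(c=8):
    A = rng.normal(size=(c, c))
    return A @ A.T / c
covs_c1 = [rand_cov() for _ in range(3)]
covs_c2 = [rand_cov() for _ in range(3)]
C1, C2 = composite_covariances(covs_c1, covs_c2, weights=[0.6, 0.2, 0.2])
W = csp_filters(C1, C2)
print(W.shape)   # (6, 8)
```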
{
"docid": "6b1c17b9c4462aebbe7f908f4c88381b",
"text": "This study examined neural activity associated with establishing causal relationships across sentences during on-line comprehension. ERPs were measured while participants read and judged the relatedness of three-sentence scenarios in which the final sentence was highly causally related, intermediately related, and causally unrelated to its context. Lexico-semantic co-occurrence was matched across the three conditions using a Latent Semantic Analysis. Critical words in causally unrelated scenarios evoked a larger N400 than words in both highly causally related and intermediately related scenarios, regardless of whether they appeared before or at the sentence-final position. At midline sites, the N400 to intermediately related sentence-final words was attenuated to the same degree as to highly causally related words, but otherwise the N400 to intermediately related words fell in between that evoked by highly causally related and intermediately related words. No modulation of the late positivity/P600 component was observed across conditions. These results indicate that both simple and complex causal inferences can influence the earliest stages of semantically processing an incoming word. Further, they suggest that causal coherence, at the situation level, can influence incremental word-by-word discourse comprehension, even when semantic relationships between individual words are matched.",
"title": ""
},
{
"docid": "db0b55cd4064799b9d7c52c6f3da6aac",
"text": "Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-toend to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.",
"title": ""
},
{
"docid": "f5b02bdd74772ff2454a475e44077c8e",
"text": "This paper presents a new method - adversarial advantage actor-critic (Adversarial A2C), which significantly improves the efficiency of dialogue policy learning in task-completion dialogue systems. Inspired by generative adversarial networks (GAN), we train a discriminator to differentiate responses/actions generated by dialogue agents from responses/actions by experts. Then, we incorporate the discriminator as another critic into the advantage actor-critic (A2C) framework, to encourage the dialogue agent to explore state-action within the regions where the agent takes actions similar to those of the experts. Experimental results in a movie-ticket booking domain show that the proposed Adversarial A2C can accelerate policy exploration efficiently.",
"title": ""
},
{
"docid": "d24fcc42617f19f06fbc7ed65134c3b6",
"text": "With the advent of social network sites (SNSs), people can efficiently maintain preexisting social relationships and make online friendships without offline encounters. While such technological features of SNSs hold a variety of potential for individual and collective benefits, some scholars warn that use of SNSs might lead to socially negative consequences, such as social isolation, erosion of social cohesion, or SNS addiction. This study distinguishes types of SNS relationships, and investigates their relationships with social isolation, interpersonal trust, and SNS addiction. We classify SNS relationships into two types: (a) social relationships based on reciprocity between a user and his/her friends, and (b) parasocial relationships in which an ordinary user is aware of activities of a celebrity (e.g., famous actors, athletes, and others) but not vice versa. Based on achievements in studies of media effect and social psychology, we constructed a set of hypotheses, and tested them using a subsample of SNS users drawn from representative survey data in South Korea. We found that dependency on parasocial relationships is positively related with loneliness but negatively correlated with interpersonal distrust, while dependency on social relationship is negatively correlated with loneliness but positively related with trust. However, more dependency on both social and parasocial relationships are positively related with SNS addiction. Implications based on findings are also discussed.",
"title": ""
},
{
"docid": "7f06370a81e7749970cd0359c5b5f993",
"text": "The use of virtualization technologies in high performance computing (HPC) environments has traditionally been avoided due to their inherent performance overhead. However, with the rise of container-based virtualization implementations, such as Linux VServer, OpenVZ and Linux Containers (LXC), it is possible to obtain a very low overhead leading to near-native performance. In this work, we conducted a number of experiments in order to perform an in-depth performance evaluation of container-based virtualization for HPC. We also evaluated the trade-off between performance and isolation in container-based virtualization systems and compared them with Xen, which is a representative of the traditional hypervisor-based virtualization systems used today.",
"title": ""
}
] |
scidocsrr
|
69293a4cb7fbac3fa383d843b25f69ca
|
The impact of industrial wearable system on industry 4.0
|
[
{
"docid": "be3721ebf2c55972146c3e87aee475ba",
"text": "Advances in computation and communication are taking shape in the form of the Internet of Things, Machine-to-Machine technology, Industry 4.0, and Cyber-Physical Systems (CPS). The impact on engineering such systems is a new technical systems paradigm based on ensembles of collaborating embedded software systems. To successfully facilitate this paradigm, multiple needs can be identified along three axes: (i) online configuring an ensemble of systems, (ii) achieving a concerted function of collaborating systems, and (iii) providing the enabling infrastructure. This work focuses on the collaborative function dimension and presents a set of concrete examples of CPS challenges. The examples are illustrated based on a pick and place machine that solves a distributed version of the Towers of Hanoi puzzle. The system includes a physical environment, a wireless network, concurrent computing resources, and computational functionality such as, service arbitration, various forms of control, and processing of streaming video. The pick and place machine is of medium-size complexity. It is representative of issues occurring in industrial systems that are coming online. The entire study is provided at a computational model level, with the intent to contribute to the model-based research agenda in terms of design methods and implementation technologies necessary to make the next generation systems a reality.",
"title": ""
}
] |
[
{
"docid": "cd4e04370b1e8b1f190a3533c3f4afe2",
"text": "Perception of depth is a central problem m machine vision. Stereo is an attractive technique for depth perception because, compared with monocular techniques, it leads to more direct, unambiguous, and quantitative depth measurements, and unlike \"active\" approaches such as radar and laser ranging, it is suitable in almost all application domains. Computational stereo is broadly defined as the recovery of the three-dimensional characteristics of a scene from multiple images taken from different points of view. First, each of the functional components of the computational stereo paradigm--image acquLsition, camera modeling, feature acquisition, image matching, depth determination, and interpolation--is identified and discussed. Then, the criteria that are important for evaluating the effectiveness of various computational stereo techniques are presented. Finally a representative sampling of computational stereo research is provided.",
"title": ""
},
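Since the survey above decomposes stereo into functional components rather than prescribing one algorithm, the sketch below only illustrates the image-matching and depth-determination steps with the simplest area-based matcher (sum of squared differences over fixed blocks). It is not taken from the survey; the image sizes, block size, and disparity range are arbitrary choices for the example.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=32, block=5):
    """Naive SSD block matching on a rectified grayscale pair.
    Returns an integer disparity map (0 where no match is evaluated)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_cost = 0, np.inf
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.sum((patch - cand) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Tiny synthetic test: shift a random texture by 7 pixels (true disparity = 7).
rng = np.random.default_rng(2)
right = rng.random((40, 80))
left = np.roll(right, 7, axis=1)
print(np.median(block_match_disparity(left, right)[10:30, 40:70]))
```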
{
"docid": "9d33565dbd5148730094a165bb2e968f",
"text": "The demand for greater battery life in low-power consumer electronics and implantable medical devices presents a need for improved energy efficiency in the management of small rechargeable cells. This paper describes an ultra-compact analog lithium-ion (Li-ion) battery charger with high energy efficiency. The charger presented here utilizes the tanh basis function of a subthreshold operational transconductance amplifier to smoothly transition between constant-current and constant-voltage charging regimes without the need for additional area- and power-consuming control circuitry. Current-domain circuitry for end-of-charge detection negates the need for precision-sense resistors in either the charging path or control loop. We show theoretically and experimentally that the low-frequency pole-zero nature of most battery impedances leads to inherent stability of the analog control loop. The circuit was fabricated in an AMI 0.5-μm complementary metal-oxide semiconductor process, and achieves 89.7% average power efficiency and an end voltage accuracy of 99.9% relative to the desired target 4.2 V, while consuming 0.16 mm2 of chip area. To date and to the best of our knowledge, this design represents the most area-efficient and most energy-efficient battery charger circuit reported in the literature.",
"title": ""
},
{
"docid": "b37db75dcd62cc56977d1a28a81be33e",
"text": "In this article we report on a new digital interactive self-report method for the measurement of human affect. The AffectButton (Broekens & Brinkman, 2009) is a button that enables users to provide affective feedback in terms of values on the well-known three affective dimensions of Pleasure (Valence), Arousal and Dominance. The AffectButton is an interface component that functions and looks like a medium-sized button. The button presents one dynamically changing iconic facial expression that changes based on the coordinates of the user’s pointer in the button. To give affective feedback the user selects the most appropriate expression by clicking the button, effectively enabling 1-click affective self-report on 3 affective dimensions. Here we analyze 5 previously published studies, and 3 novel large-scale studies (n=325, n=202, n=128). Our results show the reliability, validity, and usability of the button for acquiring three types of affective feedback in various domains. The tested domains are holiday preferences, real-time music annotation, emotion words, and textual situation descriptions (ANET). The types of affective feedback tested are preferences, affect attribution to the previously mentioned stimuli, and self-reported mood. All of the subjects tested were Dutch and aged between 15 and 56 years. We end this article with a discussion of the limitations of the AffectButton and of its relevance to areas including recommender systems, preference elicitation, social computing, online surveys, coaching and tutoring, experimental psychology and psychometrics, content annotation, and game consoles.",
"title": ""
},
{
"docid": "c11fe7d0d9786845cadf633a8ceea46d",
"text": "Introduction. Circumcision is a common procedure carried out around the world. Due to religious reasons, it is routinely done in Bangladesh, by both traditional as well as medically trained circumcisers. Complications include excessive bleeding, loss of foreskin, infection, and injury to the glans penis. Myiasis complicating male circumcision appears to be very rare. Case Presentation. In 2010, a 10-year-old boy presented to the OPD of Dhaka Medical College Hospital with severe pain in his penile region following circumcision 7-days after. The procedure was carried out by a traditional circumciser using unsterilized instruments and dressing material. After examination, unhealthy granulation tissue was seen and maggots started coming out from the site of infestation, indicating presence of more maggots underneath the skin. An emergency operation was carried out to remove the maggots and reconstruction was carried out at the plastic surgery department. Conclusion. There is scarcity of literature regarding complications following circumcision in developing countries. Most dangerous complications are a result of procedure carried out by traditional circumcisers who are inadequately trained. Incidence of such complications can be prevented by establishing a link between the formal and informal sections of healthcare to improve the safety of the procedure.",
"title": ""
},
{
"docid": "0b97ba6017a7f94ed34330555095f69a",
"text": "In response to stress, the brain activates several neuropeptide-secreting systems. This eventually leads to the release of adrenal corticosteroid hormones, which subsequently feed back on the brain and bind to two types of nuclear receptor that act as transcriptional regulators. By targeting many genes, corticosteroids function in a binary fashion, and serve as a master switch in the control of neuronal and network responses that underlie behavioural adaptation. In genetically predisposed individuals, an imbalance in this binary control mechanism can introduce a bias towards stress-related brain disease after adverse experiences. New candidate susceptibility genes that serve as markers for the prediction of vulnerable phenotypes are now being identified.",
"title": ""
},
{
"docid": "386feb461948b94809c0cc075e2b4002",
"text": "GPflow is a Gaussian process library that uses TensorFlow for its core computations and Python for its front end. The distinguishing features of GPflow are that it uses variational inference as the primary approximation method, provides concise code through the use of automatic differentiation, has been engineered with a particular emphasis on software testing and is able to exploit GPU hardware. 1. GPflow and TensorFlow are available as open source software under the Apache 2.0 license. c ©2017 Alexander G. de G. Matthews, Mark van der Wilk, Tom Nickson, Keisuke Fujii, Alexis Boukouvalas, Pablo León-Villagrá, Zoubin Ghahramani, and James Hensman. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v18/16-537.html.",
"title": ""
},
{
"docid": "1af5c5e20c1ce827f899dc70d0495bdc",
"text": "High power sources and high sensitivity detectors are highly in demand for terahertz imaging and sensing systems. Use of nano-antennas and nano-plasmonic light concentrators in photoconductive terahertz sources and detectors has proven to offer significantly higher terahertz radiation powers and detection sensitivities by enhancing photoconductor quantum efficiency while maintaining its ultrafast operation. This is because of the unique capability of nano-antennas and nano-plasmonic structures in manipulating the concentration of photo-generated carriers within the device active area, allowing a larger number of photocarriers to efficiently contribute to terahertz radiation and detection. An overview of some of the recent advancements in terahertz optoelectronic devices through use of various types of nano-antennas and nano-plasmonic light concentrators is presented in this article.",
"title": ""
},
{
"docid": "828d88119a34b73044ce407de98e37f8",
"text": "We propose a novel modular underwater robot which can self-reconfigure by stacking and unstacking its component modules. Applications for this robot include underwater monitoring, exploration, and surveillance. Our current prototype is a single module which contains several subsystems that later will be segregated into different modules. This robot functions as a testbed for the subsystems which are needed in the modular implementation. We describe the module design and discuss the propulsion, docking, and optical ranging subsystems in detail. Experimental results demonstrate depth control, linear motion, target module detection, and docking capabilities.",
"title": ""
},
{
"docid": "22629b96f1172328e654ea6ed6dccd92",
"text": "This paper uses the case of contract manufacturing in the electronics industry to illustrate an emergent American model of industrial organization, the modular production network. Lead firms in the modular production network concentrate on the creation, penetration, and defense of markets for end products—and increasingly the provision of services to go with them—while manufacturing capacity is shifted out-of-house to globally-operating turn-key suppliers. The modular production network relies on codified inter-firm links and the generic manufacturing capacity residing in turn-key suppliers to reduce transaction costs, build large external economies of scale, and reduce risk for network actors. I test the modular production network model against some of the key theoretical tools that have been developed to predict and explain industry structure: Joseph Schumpeter's notion of innovation in the giant firm, Alfred Chandler's ideas about economies of speed and the rise of the modern corporation, Oliver Williamson's transaction cost framework, and a range of other production network models that appear in the literature. I argue that the modular production network yields better economic performance in the context of globalization than more spatially and socially embedded network models. I view the emergence of the modular production network as part of a historical process of industrial transformation in which nationally-specific models of industrial organization co-evolve in intensifying rounds of competition, diffusion, and adaptation.",
"title": ""
},
{
"docid": "7afa24cc5aa346b79436c1b9b7b15b23",
"text": "Humans demonstrate remarkable abilities to predict physical events in complex scenes. Two classes of models for physical scene understanding have recently been proposed: “Intuitive Physics Engines”, or IPEs, which posit that people make predictions by running approximate probabilistic simulations in causal mental models similar in nature to video-game physics engines, and memory-based models, which make judgments based on analogies to stored experiences of previously encountered scenes and physical outcomes. Versions of the latter have recently been instantiated in convolutional neural network (CNN) architectures. Here we report four experiments that, to our knowledge, are the first rigorous comparisons of simulation-based and CNN-based models, where both approaches are concretely instantiated in algorithms that can run on raw image inputs and produce as outputs physical judgments such as whether a stack of blocks will fall. Both approaches can achieve super-human accuracy levels and can quantitatively predict human judgments to a similar degree, but only the simulation-based models generalize to novel situations in ways that people do, and are qualitatively consistent with systematic perceptual illusions and judgment asymmetries that people show.",
"title": ""
},
{
"docid": "8d67dab61a3085c98e5baba614ad0930",
"text": "In this paper, we propose a vehicle type classification method using a semisupervised convolutional neural network from vehicle frontal-view images. In order to capture rich and discriminative information of vehicles, we introduce sparse Laplacian filter learning to obtain the filters of the network with large amounts of unlabeled data. Serving as the output layer of the network, the softmax classifier is trained by multitask learning with small amounts of labeled data. For a given vehicle image, the network can provide the probability of each type to which the vehicle belongs. Unlike traditional methods by using handcrafted visual features, our method is able to automatically learn good features for the classification task. The learned features are discriminative enough to work well in complex scenes. We build the challenging BIT-Vehicle dataset, including 9850 high-resolution vehicle frontal-view images. Experimental results on our own dataset and a public dataset demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "5b6dab3a50df0cbbc086a46caf7f32d2",
"text": "Traveling to unfamiliar regions require a significant effort from novice travelers to plan where to go within a limited duration. In this paper, we propose a smart recommendation for highly efficient and balanced itineraries based on multiple user-generated GPS trajectories. Users only need to provide a minimal query composed of a start point, an end point and travel duration to receive an itinerary recommendation. To differentiate good itinerary candidates from less fulfilling ones, we describe how we model and define itinerary in terms of several characteristics mined from user-generated GPS trajectories. Further, we evaluated the efficiency of our method based on 17,745 user-generated GPS trajectories contributed by 125 users in Beijing, China. Also we performed a user study where current residents of Beijing used our system to review and give ratings to itineraries generated by our algorithm and baseline algorithms for comparison.",
"title": ""
},
{
"docid": "cc23c9f5d2c717a0e4c8f97668029abc",
"text": "We introduce a new representation learning algorithm suited to the context of domain adaptation, in which data at training and test time come from similar but different distributions. Our algorithm is directly inspired by theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on a data representation that cannot discriminate between the training (source) and test (target) domains. We propose a training objective that implements this idea in the context of a neural network, whose hidden layer is trained to be predictive of the classification task, but uninformative as to the domain of the input. Our experiments on a sentiment analysis classification benchmark, where the target domain data available at training time is unlabeled, show that our neural network for domain adaption algorithm has better performance than either a standard neural network or an SVM, even if trained on input features extracted with the state-of-theart marginalized stacked denoising autoencoders of Chen et al. (2012).",
"title": ""
},
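One common way to implement the objective in the preceding abstract (used in closely related domain-adversarial work) is a gradient-reversal layer; the paper itself may instead alternate explicit min/max updates, so treat the following PyTorch sketch as an illustrative variant rather than the authors' method. The network sizes, random data, and single training step are assumptions made only for the example.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the
    backward pass, so the feature extractor learns to fool the domain head."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainAdversarialNet(nn.Module):
    def __init__(self, in_dim=5000, hidden=100, n_classes=2, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.label_head = nn.Linear(hidden, n_classes)   # task (sentiment) classifier
        self.domain_head = nn.Linear(hidden, 2)          # source vs. target classifier

    def forward(self, x):
        h = self.features(x)
        y_logits = self.label_head(h)
        d_logits = self.domain_head(GradReverse.apply(h, self.lambd))
        return y_logits, d_logits

# One illustrative training step on random data.
model = DomainAdversarialNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
xs, ys = torch.randn(8, 5000), torch.randint(0, 2, (8,))   # labeled source batch
xt = torch.randn(8, 5000)                                   # unlabeled target batch
y_s, d_s = model(xs)
_, d_t = model(xt)
loss = (nn.functional.cross_entropy(y_s, ys)
        + nn.functional.cross_entropy(d_s, torch.zeros(8, dtype=torch.long))
        + nn.functional.cross_entropy(d_t, torch.ones(8, dtype=torch.long)))
opt.zero_grad()
loss.backward()
opt.step()
```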
{
"docid": "b7fa50099584f8d59b3bfb0cf35674fa",
"text": "A new modified ultra-wideband in-phase power divider for frequency band 2-18 GHz has been designed, successfully fabricated and tested. The power divider is based on the coupled strip-lines. Only two balanced resistors are used in the proposed structure. So the power divider has very low insertion loss. The capacitive strips placed over the resistors have been introduced in the suggested design as the novel elements. Due to the introduced capacitive strips the isolation and impedance matching of the divider outputs were improved at the high frequencies. The manufactured power divider shows very high measured performances (amplitude imbalance is ±0.2 dB, phase imbalance is 5°, insertion loss is 0.4 dB, isolation is -18 dB, VSWR = 1.5.",
"title": ""
},
{
"docid": "9d9086fbdfa46ded883b14152df7f5a5",
"text": "This paper presents a low power continuous time 2nd order Low Pass Butterworth filter operating at power supply of 0.5V suitably designed for biomedical applications. A 3-dB bandwidth of 100 Hz using technology node of 0.18μm is achieved. The operational transconductance amplifier is a significant building block in continuous time filter design. To achieve necessary voltage headroom a pseudo-differential architecture is used to design bulk driven transconductor. In contrast, to the gate-driven OTA bulk-driven have the ability to operate over a wide input range. The output common mode voltage of the transconductor is set by a Common Mode Feedback (CMFB) circuit. The simulation results show that the filter has a peak-to-peak signal swing of 150mV (differential) for 1% THD, a dynamic range of 74.62 dB and consumes a total power of 0.225μW when operating at a supply voltage of 0.5V. The Figure of Merit (FOM) achieved by the filter is 0.055 fJ, lowest among similar low-voltage filters found in the literature.",
"title": ""
},
{
"docid": "27b2148c05febeb1051c1d1229a397d6",
"text": "Modern database management systems essentially solve the problem of accessing and managing large volumes of related data on a single platform, or on a cluster of tightly-coupled platforms. But many problems remain when two or more databases need to work together. A fundamental problem is raised by semantic heterogeneity the fact that data duplicated across multiple databases is represented differently in the underlying database schemas. This tutorial describes fundamental problems raised by semantic heterogeneity and surveys theoretical frameworks that can provide solutions for them. The tutorial considers the following topics: (1) representative architectures for supporting database interoperation; (2) notions for comparing the “information capacity” of database schemas; (3) providing support for read-only integrated views of data, including the .virtual and materialized approaches; (4) providing support for read-write integrated views of data, including the issue of workflows on heterogeneous databases; and (5) research and tools for accessing and effectively using meta-data, e.g., to identify the relationships between schemas of different databases.",
"title": ""
},
{
"docid": "e0217457b00d4c1ba86fc5d9faede342",
"text": "This paper reviews the first challenge on efficient perceptual image enhancement with the focus on deploying deep learning models on smartphones. The challenge consisted of two tracks. In the first one, participants were solving the classical image super-resolution problem with a bicubic downscaling factor of 4. The second track was aimed at real-world photo enhancement, and the goal was to map low-quality photos from the iPhone 3GS device to the same photos captured with a DSLR camera. The target metric used in this challenge combined the runtime, PSNR scores and solutions’ perceptual results measured in the user study. To ensure the efficiency of the submitted models, we additionally measured their runtime and memory requirements on Android smartphones. The proposed solutions significantly improved baseline results defining the state-of-the-art for image enhancement on smartphones.",
"title": ""
},
{
"docid": "780095276d7ac3cae1b95b7a1ceee8b3",
"text": "This work presents a systematic study toward the design and first demonstration of high-performance n-type monolayer tungsten diselenide (WSe2) field effect transistors (FET) by selecting the contact metal based on understanding the physics of contact between metal and monolayer WSe2. Device measurements supported by ab initio density functional theory (DFT) calculations indicate that the d-orbitals of the contact metal play a key role in forming low resistance ohmic contacts with monolayer WSe2. On the basis of this understanding, indium (In) leads to small ohmic contact resistance with WSe2 and consequently, back-gated In-WSe2 FETs attained a record ON-current of 210 μA/μm, which is the highest value achieved in any monolayer transition-metal dichalcogenide- (TMD) based FET to date. An electron mobility of 142 cm(2)/V·s (with an ON/OFF current ratio exceeding 10(6)) is also achieved with In-WSe2 FETs at room temperature. This is the highest electron mobility reported for any back gated monolayer TMD material till date. The performance of n-type monolayer WSe2 FET was further improved by Al2O3 deposition on top of WSe2 to suppress the Coulomb scattering. Under the high-κ dielectric environment, electron mobility of Ag-WSe2 FET reached ~202 cm(2)/V·s with an ON/OFF ratio of over 10(6) and a high ON-current of 205 μA/μm. In tandem with a recent report of p-type monolayer WSe2 FET ( Fang , H . et al. Nano Lett. 2012 , 12 , ( 7 ), 3788 - 3792 ), this demonstration of a high-performance n-type monolayer WSe2 FET corroborates the superb potential of WSe2 for complementary digital logic applications.",
"title": ""
},
{
"docid": "fe116849575dd91759a6c1ef7ed239f3",
"text": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
"title": ""
},
{
"docid": "bdc82fead985055041171d63415f9dde",
"text": "We introduce a new corpus of sentence-level agreement and disagreement annotations over LiveJournal and Wikipedia threads. This is the first agreement corpus to offer full-document annotations for threaded discussions. We provide a methodology for coding responses as well as an implemented tool with an interface that facilitates annotation of a specific response while viewing the full context of the thread. Both the results of an annotator questionnaire and high inter-annotator agreement statistics indicate that the annotations collected are of high quality.",
"title": ""
}
] |
scidocsrr
|
a98a2b03bc862283e749f32b3a52ede4
|
Advancing NLP via a distributed-messaging approach
|
[
{
"docid": "02d8ad18b07d08084764d124dc74a94c",
"text": "The large number of potential applications from bridging web data with knowledge bases have led to an increase in the entity linking research. Entity linking is the task to link entity mentions in text with their corresponding entities in a knowledge base. Potential applications include information extraction, information retrieval, and knowledge base population. However, this task is challenging due to name variations and entity ambiguity. In this survey, we present a thorough overview and analysis of the main approaches to entity linking, and discuss various applications, the evaluation of entity linking systems, and future directions.",
"title": ""
},
{
"docid": "c7f1e26d27c87bfa0da637c28dbcdeda",
"text": "There has recently been an increased interest in named entity recognition and disambiguation systems at major conferences such as WWW, SIGIR, ACL, KDD, etc. However, most work has focused on algorithms and evaluations, leaving little space for implementation details. In this paper, we discuss some implementation and data processing challenges we encountered while developing a new multilingual version of DBpedia Spotlight that is faster, more accurate and easier to configure. We compare our solution to the previous system, considering time performance, space requirements and accuracy in the context of the Dutch and English languages. Additionally, we report results for 9 additional languages among the largest Wikipedias. Finally, we present challenges and experiences to foment the discussion with other developers interested in recognition and disambiguation of entities in natural language text.",
"title": ""
},
{
"docid": "020799a5f143063b843aaf067f52cf29",
"text": "In this paper we propose a novel entity annotator for texts which hinges on TagME's algorithmic technology, currently the best one available. The novelty is twofold: from the one hand, we have engineered the software in order to be modular and more efficient; from the other hand, we have improved the annotation pipeline by re-designing all of its three main modules: spotting, disambiguation and pruning. In particular, the re-design has involved the detailed inspection of the performance of these modules by developing new algorithms which have been in turn tested over all publicly available datasets (i.e. AIDA, IITB, MSN, AQUAINT, and the one of the ERD Challenge). This extensive experimentation allowed us to derive the best combination which achieved on the ERD development dataset an F1 score of 74.8%, which turned to be 67.2% F1 for the test dataset. This final result was due to an impressive precision equal to 87.6%, but very low recall 54.5%. With respect to classic TagME on the development dataset the improvement ranged from 1% to 9% on the D2W benchmark, depending on the disambiguation algorithm being used. As a side result, the final software can be interpreted as a flexible library of several parsing/disambiguation and pruning modules that can be used to build up new and more sophisticated entity annotators. We plan to release our library to the public as an open-source project.",
"title": ""
}
] |
[
{
"docid": "2ddf38f09b92f5137f0a741d4a6e3004",
"text": "Supply chain management must adopt different and more innovative strategies that support a better response to customer needs in an uncertain environment. Supply chains must be more agile and be more capable of coping with disturbances, meaning that supply chains must be more resilient. The simultaneous deployment of agile and resilient approaches will enhance supply chain performance and competitiveness. Accordingly, the main objective of this paper is to propose a conceptual framework for the analysis of relationships between agile and resilient approaches, supply chain competitiveness and performance. Operational and economic performance measures are proposed to facilitate the monitoring of the influence of these practices on supply chain performance. The influence of the proposed agile and resilient practices on supply chain competitiveness is also examined in terms of time to market, product quality and customer service.",
"title": ""
},
{
"docid": "af88e11c296ba61cbcb65c7c586dd6ac",
"text": "This essay draws from the emerging positive psychology movement and the author’s recent articles on the need for and meaning of a positive approach to organizational behavior. Specifically, the argument is made that at this time, the OB field needs a proactive, positive approach emphasizing strengths, rather than continuing in the downward spiral of negativity trying to fix weaknesses. However, to avoid the surface positivity represented by the non-sustainable best-sellers, the case is made for positive organizational behavior (POB) to take advantage of the OB field’s strength of being theory and research driven. Additional criteria for this version of POB are to identify unique, state-like psychological capacities that can not only be validly measured, but also be open to development and performance management. Confidence, hope, and resiliency are offered as meeting such POB inclusion criteria. The overall intent of the essay is to generate some positive thinking and excitement for the OB field and ‘hopefully’ stimulate some new theory building, research, and effective application.",
"title": ""
},
{
"docid": "ae3ccd3698a5b96243a223d41ee4ece4",
"text": "In this paper, we introduce a new approach to image retrieval. This new approach takes the best from two worlds, combines image features (content) and words from collateral text (context) into one semantic space. Our approach uses Latent Semantic Indexing, a method that uses co-occurrence statistics to uncover hidden semantics. This paper shows how this method, that has proven successful in both monolingual and cross lingual text retrieval, can be used for multi-modal and cross-modal information retrieval. Experiments with an on-line newspaper archive show that Latent Semantic Indexing can outperform both content based and context based approaches and that it is a promising approach for indexing visual and multi-modal data.",
"title": ""
},
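A rough sketch of the core idea in the preceding abstract (a single latent space for words from collateral text and image features, built with a truncated SVD) is shown below. The feature construction, dimensions, and folding-in of queries are illustrative assumptions; the paper's actual indexing of the newspaper archive is not reproduced.

```python
import numpy as np

# Rows = documents (news photos with captions); columns = concatenated features:
# term counts from the collateral text followed by quantized visual features.
# Values and dimensions are made up for illustration.
rng = np.random.default_rng(3)
n_docs, n_terms, n_visual = 200, 500, 64
X = np.hstack([rng.poisson(0.1, (n_docs, n_terms)).astype(float),
               rng.random((n_docs, n_visual))])

# Latent Semantic Indexing: a truncated SVD projects text and image features
# into one shared k-dimensional "semantic" space.
k = 20
U, s, Vt = np.linalg.svd(X, full_matrices=False)
doc_vecs = U[:, :k] * s[:k]            # documents in the latent space

def query(feature_vec, top=5):
    """Fold a query (text-only, image-only, or both) into the latent space and
    rank documents by cosine similarity; this enables cross-modal retrieval."""
    q = feature_vec @ Vt[:k].T / s[:k]
    sims = (doc_vecs @ q) / (np.linalg.norm(doc_vecs, axis=1)
                             * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:top]

# Text-only query: a bag-of-words vector padded with zeros for the visual block.
q_text = np.zeros(n_terms + n_visual)
q_text[[3, 17, 42]] = 1.0
print(query(q_text))
```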
{
"docid": "b40b81e25501b08a07c64f68c851f4a6",
"text": "Variable reluctance (VR) resolver is widely used in traction motor for battery electric vehicle as well as hybrid electric vehicle as a rotor position sensor. VR resolver generates absolute position signal by using resolver-to-digital converter (RDC) in order to deliver exact position of permanent magnets in a rotor of traction motor to motor controller. This paper deals with fault diagnosis of VR resolver by using co-simulation analysis with RDC for position angle detection. As fault conditions, eccentricity of VR resolver, short circuit condition of excitation coil and output signal coils, and material problem of silicon steel in a view point of permeability are considered. 2D FEM is used for the output signal waveforms of SIN, COS and these waveforms are converted into absolute position angle by using the algorithm of RDC. For the verification of proposed analysis results, experiment on fault conditions was conducted and compared with simulation ones.",
"title": ""
},
{
"docid": "dd0a1a3d6de377efc0a97004376749b6",
"text": "Time series often have a temporal hierarchy, with information that is spread out over multiple time scales. Common recurrent neural networks, however, do not explicitly accommodate such a hierarchy, and most research on them has been focusing on training algorithms rather than on their basic architecture. In this paper we study the effect of a hierarchy of recurrent neural networks on processing time series. Here, each layer is a recurrent network which receives the hidden state of the previous layer as input. This architecture allows us to perform hierarchical processing on difficult temporal tasks, and more naturally capture the structure of time series. We show that they reach state-of-the-art performance for recurrent networks in character-level language modeling when trained with simple stochastic gradient descent. We also offer an analysis of the different emergent time scales.",
"title": ""
},
{
"docid": "b12925c3dd50b2d350c705b8bbc982a3",
"text": "K-means clustering has been widely used in processing large datasets in many fields of studies. Advancement in many data collection techniques has been generating enormous amount of data, leaving scientists with the challenging task of processing them. Using General Purpose Processors or GPPs to process large datasets may take a long time, therefore many acceleration methods have been proposed in the literature to speed-up the processing of such large datasets. In this work, we propose a parameterized Field Programmable Gate Array (FPGA) implementation of the Kmeans algorithm and compare it with previous FPGA implementation as well as recent implementations on Graphics Processing Units (GPUs) and with GPPs. The proposed FPGA implementation has shown higher performance in terms of speed-up over previous FPGA GPU and GPP implementations, and is more energy efficient.",
"title": ""
},
{
"docid": "faf53f190fe226ce14f32f9d44d551b5",
"text": "We present a study of how Linux kernel developers respond to bug reports issued by a static analysis tool. We found that developers prefer to triage reports in younger, smaller, and more actively-maintained files ( §2), first address easy-to-fix bugs and defer difficult (but possibly critical) bugs ( §3), and triage bugs in batches rather than individually (§4). Also, although automated tools cannot find many types of bugs, they can be effective at directing developers’ attentions towards parts of the codebase that contain up to 3X more user-reported bugs ( §5). Our insights into developer attitudes towards static analysis tools allow us to make suggestions for improving their usability and effectiveness. We feel that it could be effective to run static analysis tools continuously while programming and before committing code, to rank reports so that those most likely to be triaged are shown to developers first, to show the easiest reports to new developers, to perform deeper analysis on more actively-maintained code, and to use reports as indirect indicators of code quality and importance.",
"title": ""
},
{
"docid": "a2e2117e3d2a01f2f28835350ba1d732",
"text": "Previously, several natural integral transforms of Minkowski question mark function F (x) were introduced by the author. Each of them is uniquely characterized by certain regularity conditions and the functional equation, thus encoding intrinsic information about F (x). One of them the dyadic period function G(z) was defined via certain transcendental integral. In this paper we introduce a family of “distributions” Fp(x) for R p ≥ 1, such that F1(x) is the question mark function and F2(x) is a discrete distribution with support on x = 1. Thus, all the aforementioned integral transforms are calculated for such p. As a consequence, the generating function of moments of F p(x) satisfies the three term functional equation. This has an independent interest, though our main concern is the information it provides about F (x). This approach yields certain explicit series for G(z). This also solves the problem in expressing the moments of F (x) in closed form.",
"title": ""
},
{
"docid": "df38d14091d6a350d8f04f8cb061428c",
"text": "Although semi-supervised variational autoencoder (SemiVAE) works in image classification task, it fails in text classification task if using vanilla LSTM as its decoder. From a perspective of reinforcement learning, it is verified that the decoder’s capability to distinguish between different categorical labels is essential. Therefore, Semi-supervised Sequential Variational Autoencoder (SSVAE) is proposed, which increases the capability by feeding label into its decoder RNN at each time-step. Two specific decoder structures are investigated and both of them are verified to be effective. Besides, in order to reduce the computational complexity in training, a novel optimization method is proposed, which estimates the gradient of the unlabeled objective function by sampling, along with two variance reduction techniques. Experimental results on Large Movie Review Dataset (IMDB) and AG’s News corpus show that the proposed approach significantly improves the classification accuracy compared with pure-supervised classifiers, and achieves competitive performance against previous advanced methods. State-of-the-art results can be obtained by integrating other pretraining-based methods.",
"title": ""
},
{
"docid": "6d8239638a5581958071f4fb78f0596b",
"text": "This article presents the formal semantics of a large subset of the C language called Clight. Clight includes pointer arithmetic, struct and union types, C loops and structured switch statements. Clight is the source language of the CompCert verified compiler. The formal semantics of Clight is a big-step operational semantics that observes both terminating and diverging executions and produces traces of input/output events. The formal semantics of Clight is mechanized using the Coq proof assistant. In addition to the semantics of Clight, this article describes its integration in the CompCert verified compiler and several ways by which the semantics was validated.",
"title": ""
},
{
"docid": "a0a01e96e9fe31a797c55b94e9a12cea",
"text": "This thesis broadens the space of rich yet practical models for structured prediction. We introduce a general framework for modeling with four ingredients: (1) latent variables, (2) structural constraints, (3) learned (neural) feature representations of the inputs, and (4) training that takes the approximations made during inference into account. The thesis builds up to this framework through an empirical study of three NLP tasks: semantic role labeling, relation extraction, and dependency parsing—obtaining state-of-the-art results on the former two. We apply the resulting graphical models with structured and neural factors, and approximation-aware learning to jointly model part-of-speech tags, a syntactic dependency parse, and semantic roles in a low-resource setting where the syntax is unobserved. We present an alternative view of these models as neural networks with a topology inspired by inference on graphical models that encode our intuitions about the data.",
"title": ""
},
{
"docid": "31be3d5db7d49d1bfc58c81efec83bdc",
"text": "Electromagnetic elements such as inductance are not used in switched-capacitor converters to convert electrical power. In contrast, capacitors are used for storing and transforming the electrical power in these new topologies. Lower volume, higher power density, and more integration ability are the most important features of these kinds of converters. In this paper, the most important switched-capacitor converters topologies, which have been developed in the last decade as new topologies in power electronics, are introduced, analyzed, and compared with each other, in brief. Finally, a 100 watt double-phase half-mode resonant converter is simulated to convert 48V dc to 24 V dc for light weight electrical vehicle applications. Low output voltage ripple (0.4%), and soft switching for all power diodes and switches are achieved under the worst-case conditions.",
"title": ""
},
{
"docid": "910b687e300134fd6f56415c32be20ff",
"text": "We introduce a new methodology for intrinsic evaluation of word representations. Specifically, we identify four fundamental criteria based on the characteristics of natural language that pose difficulties to NLP systems; and develop tests that directly show whether or not representations contain the subspaces necessary to satisfy these criteria. Current intrinsic evaluations are mostly based on the overall similarity or full-space similarity of words and thus view vector representations as points. We show the limits of these point-based intrinsic evaluations. We apply our evaluation methodology to the comparison of a count vector model and several neural network models and demonstrate important properties of these models.",
"title": ""
},
{
"docid": "5f20df3abf9a4f7944af6b3afd16f6f8",
"text": "An important step towards the successful integration of information and communication technology (ICT) in schools is to facilitate their capacity to develop a school-based ICT policy resulting in an ICT policy plan. Such a plan can be defined as a school document containing strategic and operational elements concerning the integration of ICT in education. To write such a plan in an efficient way is challenging for schools. Therefore, an online tool [Planning for ICT in Schools (pICTos)] has been developed to guide schools in this process. A multiple case study research project was conducted with three Flemish primary schools to explore the process of developing a school-based ICT policy plan and the supportive role of pICTos within this process. Data from multiple sources (i.e. interviews with school leaders and ICT coordinators, school policy documents analysis and a teacher questionnaire) were collected and analysed. The results indicate that schools shape their ICT policy based on specific school data collected and presented by the pICTos environment. School teams learned about the actual and future place of ICT in teaching and learning. Consequently, different policy decisions were made according to each school’s vision on ‘good’ education and ICT integration.",
"title": ""
},
{
"docid": "fb898ef1b13d68ca3b5973b77237de74",
"text": "We present a nonrigid alignment algorithm for aligning high-resolution range data in the presence of low-frequency deformations, such as those caused by scanner calibration error. Traditional iterative closest points (ICP) algorithms, which rely on rigid-body alignment, fail in these cases because the error appears as a nonrigid warp in the data. Our algorithm combines the robustness and efficiency of ICP with the expressiveness of thin-plate splines to align high-resolution scanned data accurately, such as scans from the Digital Michelangelo Project [M. Levoy et al. (2000)]. This application is distinguished from previous uses of the thin-plate spline by the fact that the resolution and size of warping are several orders of magnitude smaller than the extent of the mesh, thus requiring especially precise feature correspondence.",
"title": ""
},
{
"docid": "5abc2b1536d989ff77e23ee9db22f625",
"text": "Fine operating rules for security control and an automatic system for their online discovery were developed to adapt to the development of smart grids. The automatic system uses the real-time system state to determine critical flowgates, and then a continuation power flow-based security analysis is used to compute the initial transfer capability of critical flowgates. Next, the system applies the Monte Carlo simulations to expected short-term operating condition changes, feature selection, and a linear least squares fitting of the fine operating rules. The proposed system was validated both on an academic test system and on a provincial power system in China. The results indicated that the derived rules provide accuracy and good interpretability and are suitable for real-time power system security control. The use of high-performance computing systems enables these fine operating rules to be refreshed online every 15 min.",
"title": ""
},
{
"docid": "76fa3d86cbadce895d7553cfd1214cbe",
"text": "a r t i c l e i n f o This study examines a comprehensive model comprising of various relationships between transformational and transactional leadership, knowledge management (KM) process, and organizational performance. Data are collected from human resource managers and general managers working in 119 service firms. Exploratory factor analysis and hierarchical regression analysis are used to analyze the proposed hypotheses. The results indicate that transformational leadership has strong and positive effects on KM process and organizational performance after controlling for the effects of transactional leadership. Further, KM process partially mediates the relationship between transformational leadership and organizational performance after controlling for the effects of transactional leadership. Implications and directions for future research are also discussed. Researchers always emphasized the importance of developing unique knowledge within firms to deliver new products/services and to distinguish it from competitors for achieving advantage (Menguc, Auh, & Shih, 2007). Delivering unique products/services to customers helps to improve customer satisfaction and sales volume, and so firms have observed the influence of knowledge development over performance (Bogner & Bansal, 2007; Tanriverdi, 2005). Since knowledge resides within the brain of employees, firms develop various strategies to create organizational knowledge through leveraging employees' knowledge. Human resource managers are get involved in the activities of finding suitable leadership style that supports implementation of knowledge management (KM) programs to augment organizational performance. Identification of suitable leadership style is essential in this turbulent environment since researchers have reported that different leadership styles have varying impacts on implementation of KM process (Bryant, 2003). Transformational leadership theory postulates that leaders exhibit certain behaviors that accelerate employees' level of innovative thinking through which they improve individual employee performance, organizational innovation, and organizational perfor-Since transformational leaders greatly influence employees, whose engagement is enormously required for implementation of KM process, the role of transformational leadership is focused on the implementation of KM process to improve organizational performance. To date, scholars have empirically investigated the positive impacts Though these studies explained the direct impact on organizational performance, the following research questions are still unanswered: (1) Do transformational leadership behaviors influence performance of service firms after controlling for transactional leadership behaviors?; (2) Do transformational leadership behaviors help to implement KM process in service firms after controlling for transactional leadership behaviors?; and (3) Will KM process mediate the relationship between transformational leadership and organizational performance in the service firms after controlling for transactional leadership? In order to answer …",
"title": ""
},
{
"docid": "fbac8859e581fd1622bad0b50ac0a3f5",
"text": "OBJECTIVE\nThis preliminary study sought to determine whether the imagery perspective used during mental practice (MP) differentially influenced performance outcomes after stroke.\n\n\nMETHOD\nNineteen participants with unilateral subacute stroke (9 men and 10 women, ages 28-77) were randomly allocated to one of three groups. All groups received 30-min occupational therapy sessions 2×/wk for 6 wk. Experimental groups received MP training in functional tasks using either an internal or an external perspective; the control group received relaxation imagery training. Participants were pre- and posttested using the Fugl-Meyer Motor Assessment (FMA), the Jebsen-Taylor Test of Hand Function (JTTHF), and the Canadian Occupational Performance Measure (COPM).\n\n\nRESULTS\nAt posttest, the internal and external experimental groups showed statistically similar improvements on the FMA and JTTHF (p < .05). All groups improved on the COPM (p < .05).\n\n\nCONCLUSION\nMP combined with occupational therapy improves upper-extremity recovery after stroke. MP does not appear to enhance self-perception of performance. This preliminary study suggests that imagery perspective may not be an important variable in MP interventions.",
"title": ""
},
{
"docid": "8fc05d9e26c0aa98ffafe896d8c5a01b",
"text": "We describe our clinical question answering system implemented for the Text Retrieval Conference (TREC 2016) Clinical Decision Support (CDS) track. We submitted five runs using a combination of knowledge-driven (based on a curated knowledge graph) and deep learning-based (using key-value memory networks) approaches to retrieve relevant biomedical articles for answering generic clinical questions (diagnoses, treatment, and test) for each clinical scenario provided in three forms: notes, descriptions, and summaries. The submitted runs were varied based on the use of notes, descriptions, or summaries in association with different diagnostic inferencing methodologies applied prior to biomedical article retrieval. Evaluation results demonstrate that our systems achieved best or close to best scores for 20% of the topics and better than median scores for 40% of the topics across all participants considering all evaluation measures. Further analysis shows that on average our clinical question answering system performed best with summaries using diagnostic inferencing from the knowledge graph whereas our key-value memory network model with notes consistently outperformed the knowledge graph-based system for notes and descriptions. ∗The author is also affiliated with Worcester Polytechnic Institute (szhao@wpi.edu). †The author is also affiliated with Northwestern University (kathy.lee@eecs.northwestern.edu). ‡The author is also affiliated with Brandeis University (aprakash@brandeis.edu).",
"title": ""
},
{
"docid": "c6a30835ce21b418f5f097e6e4533332",
"text": "© 2000 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.",
"title": ""
}
] |
scidocsrr
|
6a9cd02b0170a9e0ebe6eacbe15697fb
|
An Investigation of Performance Analysis of Anomaly Detection Techniques for Big Data in SCADA Systems
|
[
{
"docid": "f35d164bd1b19f984b10468c41f149e3",
"text": "Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and require more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.",
"title": ""
}
] |
[
{
"docid": "18dfd9865271ae6994d4c9f84ffa49c3",
"text": "Clustering is a division of data into groups of similar objects. Each group, called a cluster, consists of objects that are similar between themselves and dissimilar compared to objects of other groups. This paper is intended to study and compare different data clustering algorithms. The algorithms under investigation are: k-means algorithm, hierarchical clustering algorithm, self-organizing maps algorithm, and expectation maximization clustering algorithm. All these algorithms are compared according to the following factors: size of dataset, number of clusters, type of dataset and type of software used. Some conclusions that are extracted belong to the performance, quality, and accuracy of the clustering algorithms.",
"title": ""
},
{
"docid": "e490c50919edc27d828428d230a5e25d",
"text": "Database auditing is a prerequisite in the process of database forensics. Log files of different types and purposes are used in correlating evidence related to forensic investigation. In this paper, a new framework is proposed to explore and implement auditing features and DBMS-specific built-in utilities to aid in carrying out database forensics. The new framework is implemented in three phases, where ideal forensic auditing settings are suggested, techniques and approaches to conduct forensics are evaluated, and finally database forensic tools are investigated and evaluated. The research findings serve as guidelines toward focusing on database forensics. There is a crucial need to fill in the gap where forensic tools are few and not database specific.",
"title": ""
},
{
"docid": "b8f66ef5e046f0c9e7772b2233571594",
"text": "Cascaded classifiers have been widely used in pedestrian detection and achieved great success. These classifiers are trained sequentially without joint optimization. In this paper, we propose a new deep model that can jointly train multi-stage classifiers through several stages of back propagation. It keeps the score map output by a classifier within a local region and uses it as contextual information to support the decision at the next stage. Through a specific design of the training strategy, this deep architecture is able to simulate the cascaded classifiers by mining hard samples to train the network stage-by-stage. Each classifier handles samples at a different difficulty level. Unsupervised pre-training and specifically designed stage-wise supervised training are used to regularize the optimization problem. Both theoretical analysis and experimental results show that the training strategy helps to avoid over fitting. Experimental results on three datasets (Caltech, ETH and TUD-Brussels) show that our approach outperforms the state-of-the-art approaches.",
"title": ""
},
{
"docid": "443fb61dbb3cc11060104ed6ed0c645c",
"text": "An interactive framework for soft segmentation and matting of natural images and videos is presented in this paper. The proposed technique is based on the optimal, linear time, computation of weighted geodesic distances to user-provided scribbles, from which the whole data is automatically segmented. The weights are based on spatial and/or temporal gradients, considering the statistics of the pixels scribbled by the user, without explicit optical flow or any advanced and often computationally expensive feature detectors. These could be naturally added to the proposed framework as well if desired, in the form of weights in the geodesic distances. An automatic localized refinement step follows this fast segmentation in order to further improve the results and accurately compute the corresponding matte function. Additional constraints into the distance definition permit to efficiently handle occlusions such as people or objects crossing each other in a video sequence. The presentation of the framework is complemented with numerous and diverse examples, including extraction of moving foreground from dynamic background in video, natural and 3D medical images, and comparisons with the recent literature.",
"title": ""
},
{
"docid": "d5a6e7add07b104e2a285c139ae1b727",
"text": "Breakfast skipping is common in adolescents, but research on the effects of breakfast skipping on school performance is scarce. This current cross-sectional survey study of 605 adolescents aged 11–18 years investigated whether adolescents who habitually skip breakfast have lower endof-term grades than adolescents who eat breakfast daily. Additionally, the roles of sleep behavior, namely chronotype, and attention were explored. Results showed that breakfast skippers performed lower at school than breakfast eaters. The findings were similar for younger and older adolescents and for boys and girls. Adolescents with an evening chronotype were more likely to skip breakfast, but chronotype was unrelated to school performance. Furthermore, attention problems partially mediated the relation between breakfast skipping and school performance. This large-scale study emphasizes the importance of breakfast as a determinant for school performance. The results give reason to investigate the mechanisms underlying the relation between skipping breakfast, attention, and school performance in more detail. Proper nutrition is commonly believed to be important for school performance; it is considered to be an essential prerequisite for the potential to learn in children (Taras, 2005). In the Western world, where most school-aged children are well nourished, emphasis is placed on eating breakfast for optimal school performance. Eating breakfast might be particularly important during adolescence. Adolescents have 1Department of Educational Neuroscience, VU University Amsterdam 2Centre for Learning Sciences and Technology, Open Universiteit Nederland 3School for Mental Health and Neuroscience, Maastricht University Address correspondence to Annemarie Boschloo, Faculty of Psychology and Education, VU University Amsterdam, Van der Boechorststraat 1, 1081 BT Amsterdam, The Netherlands; e-mail: a.m.boschloo@vu.nl. high nutritional needs, due to brain development processes and physical growth, while at the same time they have the highest rate of breakfast skipping among school-aged children (Hoyland, Dye, & Lawton, 2009; Rampersaud, 2009). However, not much is known about the effects of breakfast skipping on their school performance. Reviews indicate that only few studies have investigated the relationship between breakfast skipping and school performance in adolescents (Ells et al., 2008; Hoyland et al., 2009; Rampersaud, 2009; Taras, 2005). Therefore, the current study investigated the relation between habitual breakfast consumption and school performance in adolescents attending secondary school (age range 11–18 years). In addition, we explored two potentially important mechanisms underlying this relationship by investigating the roles of sleep behavior and attention. Depending on the definition of breakfast skipping, 10–30% of the adolescents (age range 11–18 years) can be classified as breakfast skippers (Rampersaud, Pereira, Girard, Adams, & Metzl, 2005). Adolescent breakfast skippers are more often girls and more often have a lower level of education (Keski-Rahkonen, Kaprio, Rissanen, Virkkunen, & Rose, 2003; Rampersaud et al., 2005; Shaw, 1998). Adolescent breakfast skippers are characterized by an unhealthy lifestyle, with behaviors such as smoking, irregular exercise, and alcohol and drug use. They make more unhealthy food choices and have a higher body mass index than breakfast eaters. Furthermore, they show more disinhibited behavior (Keski-Rahkonen et al., 2003; Rampersaud et al., 2005). 
Reasons adolescents give for skipping breakfast are that they are not hungry or do not have enough time (Shaw, 1998), although dieting seems to play a role as well (Rampersaud et al., 2005; Shaw, 1998). Experimental studies have investigated the relationship between breakfast skipping and cognitive functioning, which is assumed to underlie school performance. Breakfast skipping in children and adolescents appeared to affect memory and attention, especially toward the end of the morning (Ells et al., 2008; Hoyland et al., 2009; Rampersaud et al., 2005).",
"title": ""
},
{
"docid": "7a1541f523273fa0e8fbb53bf4259698",
"text": "We derive the proper form of the Akaike information criterion for variable selection for mixture cure models, which are often fit via the expectation-maximization algorithm. Separate covariate sets may be used in the mixture components. The selection criteria are applicable to survival models for right-censored data with multiple competing risks and allow for the presence of an insusceptible group. The method is illustrated on credit loan data, with pre-payment and default as events and maturity as the insusceptible case and is used in a simulation study.",
"title": ""
},
{
"docid": "ab2b079d7f8c4cda71fb713601c7c41b",
"text": "This article examines the relationship between abstract and morphological case, arguing that morphological case realizes abstract Case features in a postsyntactic morphology, according to the Elsewhere Condition. A class of prima facie ergative-absolutive languages is identified wherein intransitive subjects receive abstract nominative Case and transitive objects receive abstract accusative Case; these are realized through a morphological default, which is often mislabeled as absolutive. Further support comes from split ergativity based on a nominal hierarchy, which is shown to have a morphological source. Proposals that case and agreement are purely morphological phenomena are critiqued.",
"title": ""
},
{
"docid": "2059f42692358bb141fc716cc58510d2",
"text": "Airline ticket purchase timing is a strategic problem that requires both historical data and domain knowledge to solve consistently. Even with some historical information (often a feature of modern travel reservation web sites), it is difficult for consumers to make true cost-minimizing decisions. To address this problem, we introduce an automated agent which is able to optimize purchase timing on behalf of customers and provide performance estimates of its computed action policy based on past performance. We apply machine learning to recent ticket price quotes from many competing airlines for the target flight route. Our novelty lies in extending this using a systematic feature extraction technique incorporating elementary user-provided domain knowledge that greatly enhances the performance of machine learning algorithms. Using this technique, our agent achieves much closer to the optimal purchase policy than other proposed decision theoretic approaches for this domain.",
"title": ""
},
{
"docid": "7ffa0866be2de7d111cd763f4e6e3dce",
"text": "The research of imbalance data classification is the hot point in the field of data mining. Conventional classifiers are not suitable to the imbalanced learning tasks since they tend to classify the instances to the majority class which is the less important class. This paper pays close attention to the uniqueness of uneven data distribution in imbalance classification problems. Without change the original imbalance training data, this paper indicated the advantages of proximal classifier for imbalance data classification. In order to improve the accuracy of classification, this paper proposed a new model named LSNPPC, based the classical proximal SVM models which find two nonparallel planes for data classification. The LS-NPPC model is applied to six UCI datasets and one real application. The results indicate the effectiveness of the proposed model for imbalanced data classification problems.",
"title": ""
},
{
"docid": "aeccfd7c8f77e58375890a7c2bb87de7",
"text": "The aim of this paper is to determine acceptance of internet banking systems among potential young users, specifically future marketers, who significantly affect the continuous usage of internet banking service. It attempted to examine the impact of Computer Self Efficacy (CSE) and extended Technology Acceptance Model (TAM) on the behavioral intention (BI) to use the internet banking systems. Measure of CSE was based on the Self Service Technology, as proposed by Compaeu and Higgins. A technology acceptance model for internet banking system was developed based on the modified version of TAM to examine the effects of Perceived Usefulness (PU), Perceived Ease of use (PE) and Perceived Credibility (PC) of extended TAM on the BI to use the internet banking systems. PU and PE are the established dimensions of classical TAM and Perceived credibility (PC) is the additional dimension to be included in the conceptual model of this study. Data were obtained from 222 undergraduates marketing students in a Malaysia’s public university. The finding indicated that CSE, PU, PE and PC of extended TAM were determinants of users’ acceptance of internet banking systems. PU, PE and PC were significantly affected BI, and respondents’ perceived credibility of the internet banking system had the strongest impact on their intention to use the system. This research validated that PU, PE and PC of the extended TAM were good predictors in understanding individual responses to information technology systems. The result of this study highlighted that issues of privacy and security of PC are important in the study of information systems acceptance, suggesting that internet banking providers need to address these issues effectively to convince potential users to use internet banking service. This study also validated the critical role of CSE in predicting individual responses to information technology systems. The finding unveiled that indirect relationship existed between CSE and BI through PU, PE and PC of TAM.",
"title": ""
},
{
"docid": "920748fbdcaf91346a40e3bf5ae53d42",
"text": "This sketch presents an improved formalization of automatic caricature that extends a standard approach to account for the population variance of facial features. Caricature is generally considered a rendering that emphasizes the distinctive features of a particular face. A formalization of this idea, which we term “Exaggerating the Difference from the Mean” (EDFM), is widely accepted among caricaturists [Redman 1984] and was first implemented in a groundbreaking computer program by [Brennan 1985]. Brennan’s “Caricature generator” program produced caricatures by manually defining a polyline drawing with topology corresponding to a frontal, mean, face-shape drawing, and then displacing the vertices by a constant factor away from the mean shape. Many psychological studies have applied the “Caricature Generator” or EDFM idea to investigate caricaturerelated issues in face perception [Rhodes 1997].",
"title": ""
},
{
"docid": "9c8f54b087d90a2bcd9e3d7db1aabd02",
"text": "The \"new Dark Silicon\" model benchmarks transistor technologies at the architectural level for multi-core processors.",
"title": ""
},
{
"docid": "cc63fa999bed5abf05a465ae7313c053",
"text": "In this paper, we consider the development of a rotorcraft micro aerial vehicle (MAV) system capable of vision-based state estimation in complex environments. We pursue a systems solution for the hardware and software to enable autonomous flight with a small rotorcraft in complex indoor and outdoor environments using only onboard vision and inertial sensors. As rotorcrafts frequently operate in hover or nearhover conditions, we propose a vision-based state estimation approach that does not drift when the vehicle remains stationary. The vision-based estimation approach combines the advantages of monocular vision (range, faster processing) with that of stereo vision (availability of scale and depth information), while overcoming several disadvantages of both. Specifically, our system relies on fisheye camera images at 25 Hz and imagery from a second camera at a much lower frequency for metric scale initialization and failure recovery. This estimate is fused with IMU information to yield state estimates at 100 Hz for feedback control. We show indoor experimental results with performance benchmarking and illustrate the autonomous operation of the system in challenging indoor and outdoor environments.",
"title": ""
},
{
"docid": "627ae11427c75be4e49f189c2d59f881",
"text": "BACKGROUND\nHypopigmented mycosis fungoides (HMF) is a rare subtype of mycosis fungoides (MF). We compared patients with exclusive hypopigmented lesions with a group of MF patients with concomitant different lesions.\n\n\nMETHODS\n20 patients with HMF only and 14 patients with hypopigmented lesions concomitant with other types of lesions (mixed MF, MMF) were selected. Clinical-epidemiological analysis as well as histological and immunohistochemical studies were performed.\n\n\nRESULTS\nHMF and MMF preserve some similarities, like predilection for dark-skinned persons and slow progression, but they also present differences: the exclusive variant is associated with early onset and a clear CD8+ immunophenotype, whereas MMF patients tend to present a predominance of CD4+ cell infiltrates. Histological analysis revealed similar findings; relapsing courses were common.\n\n\nCONCLUSION\nWhether patients are suffering from exclusive HMF or MMF, the presence of hypopigmented lesions can be considered a marker of good prognosis in MF, since both groups presented similar data, such as staging and disease duration.",
"title": ""
},
{
"docid": "42c304823c8ea070a3f4e2714e9f09dc",
"text": "This paper presents a novel permanent-magnet (PM) machine for wind power generation. In order to achieve high power/torque density as well as get rid of the nuisances aroused by the mechanical gearbox, a coaxial magnetic gear (CMG) is engaged. Different from the existing integrated machine in which armature windings are deployed in the inner bore of the CMG as an individual part, stator windings are directly inserted among the slots between the ferromagnetic segments in this proposed machine. Thus, it can offer several merits, such as simpler mechanical structure, better utilization of PM materials and lower manufacturing cost. Moreover, by artfully designing the connection of the armature windings, the electromagnetic coupling between the windings and the outer rotor PMs can be dramatically decreased, and the electromechanical energy conversion can be achieved by the field interaction between the inner rotor PMs and the armature windings. Received 16 December 2010, Accepted 1 February 2011, Scheduled 8 February 2011 Corresponding author: Linni Jian (ln.jian@siat.ac.cn).",
"title": ""
},
{
"docid": "fd4bddf9a5ff3c3b8577c46249bec915",
"text": "In order for neural networks to learn complex languages or grammars, they must have sufficient computational power or resources to recognize or generate such languages. Though many approaches have been discussed, one obvious approach to enhancing the processing power of a recurrent neural network is to couple it with an external stack memory in effect creating a neural network pushdown automata (NNPDA). This paper discusses in detail this NNPDA its construction, how it can be trained and how useful symbolic information can be extracted from the trained network. In order to couple the external stack to the neural network, an optimization method is developed which uses an error function that connects the learning of the state automaton of the neural network to the learning of the operation of the external stack. To minimize the error function using gradient descent learning, an analog stack is designed such that the action and storage of information in the stack are continuous. One interpretation of a continuous stack is the probabilistic storage of and action on data. After training on sample strings of an unknown source grammar, a quantization procedure extracts from the analog stack and neural network a discrete pushdown automata (PDA). Simulations show that in learning deterministic context-free grammars the balanced parenthesis language, 1 n0n, and the deterministic Palindrome the extracted PDA is correct in the sense that it can correctly recognize unseen strings of arbitrary length. In addition, the extracted PDAs can be shown to be identical or equivalent to the PDAs of the source grammars which were used to generate the training strings.",
"title": ""
},
{
"docid": "6a2d7b29a0549e99cdd31dbd2a66fc0a",
"text": "We consider data transmissions in a full duplex (FD) multiuser multiple-input multiple-output (MU-MIMO) system, where a base station (BS) bidirectionally communicates with multiple users in the downlink (DL) and uplink (UL) channels on the same system resources. The system model of consideration has been thought to be impractical due to the self-interference (SI) between transmit and receive antennas at the BS. Interestingly, recent advanced techniques in hardware design have demonstrated that the SI can be suppressed to a degree that possibly allows for FD transmission. This paper goes one step further in exploring the potential gains in terms of the spectral efficiency (SE) and energy efficiency (EE) that can be brought by the FD MU-MIMO model. Toward this end, we propose low-complexity designs for maximizing the SE and EE, and evaluate their performance numerically. For the SE maximization problem, we present an iterative design that obtains a locally optimal solution based on a sequential convex approximation method. In this way, the nonconvex precoder design problem is approximated by a convex program at each iteration. Then, we propose a numerical algorithm to solve the resulting convex program based on the alternating and dual decomposition approaches, where analytical expressions for precoders are derived. For the EE maximization problem, using the same method, we first transform it into a concave-convex fractional program, which then can be reformulated as a convex program using the parametric approach. We will show that the resulting problem can be solved similarly to the SE maximization problem. Numerical results demonstrate that, compared to a half duplex system, the FD system of interest with the proposed designs achieves a better SE and a slightly smaller EE when the SI is small.",
"title": ""
},
{
"docid": "4c4f50f94de0c5a451f978472bd72b60",
"text": "BACKGROUND\nIt has been suggested that finger length may correlate with function or disorders of the male reproductive system. This is based on the HOXA and HOXD genes' common embryological control of finger development and differentiation of the genital bud. The objective of this study was to explore the association between the ratio of 2nd to 4th finger length (2D:4D ratio) and testis function in a sample of young Danish men from the general population.\n\n\nMETHODS\nSemen samples and finger measurements were obtained from a total of 360 young Danish men in addition to blood samples for sex hormone analysis to describe the possible association between 2D:4D and semen and sex-hormone parameters.\n\n\nRESULTS\nA statistically significant inverse association with the 2D:4D was found only in relation to hormone levels of FSH in the group of young men with a 2D:4D >1 (P = 0.036) and a direct association with the total sperm count in the group of young men with a 2D:4D < or = 1 (P = 0.045).\n\n\nCONCLUSION\nThe statistically significant results may be 'false positives' (type I error) rather than representing true associations. This relatively large study of young, normal Danish men shows no reliable association between 2D:4D finger ratio and testicular function. Measurements of finger lengths do not have the power to predict the testicular function of adult men.",
"title": ""
},
{
"docid": "1985426e69de04b451dcc0b207101bcb",
"text": "To seamlessly integrate into the human physical and social environment, robots must display appropriate proxemic behavior - that is, follow societal norms in establishing their physical and psychological distancing with people. Social-scientific theories suggest competing models of human proxemic behavior, but all conclude that individuals' proxemic behavior is shaped by the proxemic behavior of others and the individual's psychological closeness to them. The present study explores whether these models can also explain how people physically and psychologically distance themselves from robots and suggest guidelines for future design of proxemic behaviors for robots. In a controlled laboratory experiment, participants interacted with Wakamaru to perform two tasks that examined physical and psychological distancing of the participants. We manipulated the likeability (likeable/dislikeable) and gaze behavior (mutual gaze/averted gaze) of the robot. Our results on physical distancing showed that participants who disliked the robot compensated for the increase in the robot's gaze by maintaining a greater physical distance from the robot, while participants who liked the robot did not differ in their distancing from the robot across gaze conditions. The results on psychological distancing suggest that those who disliked the robot also disclosed less to the robot. Our results offer guidelines for the design of appropriate proxemic behaviors for robots so as to facilitate effective human-robot interaction.",
"title": ""
},
{
"docid": "11f0041c1f4889ebd186465eed934093",
"text": "A neural network method is adopted to predict the football game's winning rate of two teams according to their previous stage's official statistical data of 2006 World Cup Football Game. The adopted prediction model is based on multi-layer perceptron (MLP) with back propagation learning rule. The input data are transformed to the relative ratios between two teams of each game. New training samples are added to the training samples at the previous stages. By way of experimental results, the determined neural network architecture for MLP is 8 inputs, 11 hidden nodes, and 1 output (8-11-1). The learning rate and momentum coefficient are sequentially determined by experiments as well. Based on the adopted MLP prediction method, the prediction accuracy can achieve 76.9% if the draw games are excluded.",
"title": ""
}
] |
scidocsrr
|
25e4022a3e87c3a311aeae668f6d9a94
|
Pixel bar charts: a visualization technique for very large multi-attribute data sets?
|
[
{
"docid": "6073601ab6d6e1dbba7a42c346a29436",
"text": "We present a new focus+Context (fisheye) technique for visualizing and manipulating large hierarchies. Our technique assigns more display space to a portion of the hierarchy while still embedding it in the context of the entire hierarchy. The essence of this scheme is to layout the hierarchy in a uniform way on a hyperbolic plane and map this plane onto a circular display region. This supports a smooth blending between focus and context, as well as continuous redirection of the focus. We have developed effective procedures for manipulating the focus using pointer clicks as well as interactive dragging, and for smoothly animating transitions across such manipulation. A laboratory experiment comparing the hyperbolic browser with a conventional hierarchy browser was conducted.",
"title": ""
}
] |
[
{
"docid": "186ba2180a44b8a4a52ffba6f46751c4",
"text": "Affective characteristics are crucial factors that influence human behavior, and often, the prevalence of either emotions or reason varies on each individual. We aim to facilitate the development of agents’ reasoning considering their affective characteristics. We first identify core processes in an affective BDI agent, and we integrate them into an affective agent architecture (GenIA3). These tasks include the extension of the BDI agent reasoning cycle to be compliant with the architecture, the extension of the agent language (Jason) to support affect-based reasoning, and the adjustment of the equilibrium between the agent’s affective and rational sides.",
"title": ""
},
{
"docid": "d70ea405a182c4de3f50858599f84ad8",
"text": "Oral lichen planus (OLP) has a prevalence of approximately 1%. The etiopathogenesis is poorly understood. The annual malignant transformation is less than 0.5%. There are no effective means to either predict or to prevent such event. Oral lesions may occur that to some extent look like lichen planus but lacking the characteristic features of OLP, or that are indistinguishable from OLP clinically but having a distinct cause, e.g. amalgam restoration associated. Such lesions are referred to as oral lichenoid lesions (OLLs). The management of OLP and the various OLLs may be different. Therefore, accurate diagnosis should be aimed at.",
"title": ""
},
{
"docid": "cf52fd01af4e01f28eeb14e0c6bce7e9",
"text": "Most applications manipulate persistent data, yet traditional systems decouple data manipulation from persistence in a two-level storage model. Programming languages and system software manipulate data in one set of formats in volatile main memory (DRAM) using a load/store interface, while storage systems maintain persistence in another set of formats in non-volatile memories, such as Flash and hard disk drives in traditional systems, using a file system interface. Unfortunately, such an approach suffers from the system performance and energy overheads of locating data, moving data, and translating data between the different formats of these two levels of storage that are accessed via two vastly different interfaces. Yet today, new non-volatile memory (NVM) technologies show the promise of storage capacity and endurance similar to or better than Flash at latencies comparable to DRAM, making them prime candidates for providing applications a persistent single-level store with a single load/store interface to access all system data. Our key insight is that in future systems equipped with NVM, the energy consumed executing operating system and file system code to access persistent data in traditional systems becomes an increasingly large contributor to total energy. The goal of this work is to explore the design of a Persistent Memory Manager that coordinates the management of memory and storage under a single hardware unit in a single address space. Our initial simulation-based exploration shows that such a system with a persistent memory can improve energy efficiency and performance by eliminating the instructions and data movement traditionally used to perform I/O operations.",
"title": ""
},
{
"docid": "3e1ff2ac72da8525d358c5dcf160c4b4",
"text": "Esthetic management of extensively decayed primary maxillary anterior teeth requiring full coronal coverage restoration is usually challenging to the pediatric dentists especially in very young children. Many esthetic options have been tried over the years each having its own advantages, disadvantages and associated technical, functional or esthetic limitations. Zirconia crowns have provided a treatment alternative to address the esthetic concerns and ease of placement of extra-coronal restorations on primary anterior teeth. The present article presents a case where grossly decayed maxillary primary incisors were restored esthetically and functionally with ready made zirconia crowns (ZIRKIZ, HASS Corp; Korea). After endodontic treatment the decayed teeth were restored with zirconia crowns. Over a 30 months period, the crowns have demonstrated good retention and esthetic results. Dealing with esthetic needs in children with extensive loss of tooth structure, using Zirconia crowns would be practical and successful. The treatment described is simple and effective and represents a promising alternative for rehabilitation of decayed primary teeth.",
"title": ""
},
{
"docid": "4b2a16c023937db4f417d52b070cc2cc",
"text": "Endosomal protein trafficking is an essential cellular process that is deregulated in several diseases and targeted by pathogens. Here, we describe a role for ubiquitination in this process. We find that the E3 RING ubiquitin ligase, MAGE-L2-TRIM27, localizes to endosomes through interactions with the retromer complex. Knockdown of MAGE-L2-TRIM27 or the Ube2O E2 ubiquitin-conjugating enzyme significantly impaired retromer-mediated transport. We further demonstrate that MAGE-L2-TRIM27 ubiquitin ligase activity is required for nucleation of endosomal F-actin by the WASH regulatory complex, a known regulator of retromer-mediated transport. Mechanistic studies showed that MAGE-L2-TRIM27 facilitates K63-linked ubiquitination of WASH K220. Significantly, disruption of WASH ubiquitination impaired endosomal F-actin nucleation and retromer-dependent transport. These findings provide a cellular and molecular function for MAGE-L2-TRIM27 in retrograde transport, including an unappreciated role of K63-linked ubiquitination and identification of an activating signal of the WASH regulatory complex.",
"title": ""
},
{
"docid": "6720ae7a531d24018bdd1d3d1c7eb28b",
"text": "This study investigated the effects of mobile phone text-messaging method (predictive and multi-press) and experience (in texters and non-texters) on children’s textism use and understanding. It also examined popular claims that the use of text-message abbreviations, or textese spelling, is associated with poor literacy skills. A sample of 86 children aged 10 to 12 years read and wrote text messages in conventional English and in textese, and completed tests of spelling, reading, and non-word reading. Children took significantly longer, and made more errors, when reading messages written in textese than in conventional English. Further, they were no faster at writing messages in textese than in conventional English, regardless of texting method or experience. Predictive texters were faster at reading and writing messages than multi-press texters, and texting experience increased writing, but not reading, speed. General spelling and reading scores did not differ significantly with usual texting method. However, better literacy skills were associated with greater textese reading speed and accuracy. These findings add to the growing evidence for a positive relationship between texting proficiency and traditional literacy skills. Children’s text-messaging and literacy skills 3 The advent of mobile phones, and of text-messaging in particular, has changed the way that people communicate, and adolescents and children seem especially drawn to such technology. Australian surveys have revealed that 19% of 8to 11-year-olds and 76% of 12to 14-year-olds have their own mobile phone (Cupitt, 2008), and that 69% of mobile phone users aged 14 years and over use text-messaging (Australian Government, 2008), with 90% of children in Grades 7-12 sending a reported average of 11 texts per week (ABS, 2008). Text-messaging has also been the catalyst for a new writing style: textese. Described as a hybrid of spoken and written English (Plester & Wood, 2009), textese is a largely soundbased, or phonological, form of spelling that can reduce the time and cost of texting (Leung, 2007). Common abbreviations, or textisms, include letter and number homophones (c for see, 2 for to), contractions (txt for text), and non-conventional spellings (skool for school) (Plester, Wood, & Joshi, 2009; Thurlow, 2003). Estimates of the proportion of textisms that children use in their messages range from 21-47% (increasing with age) in naturalistic messages (Wood, Plester, & Bowyer, 2009), to 34% for messages elicited by a given scenario (Plester et al., 2009), to 50-58% for written messages that children ‘translated’ to and from textese (Plester, Wood, & Bell, 2008). One aim of the current study was to examine the efficiency of using textese for both the message writer and the reader, in order to understand the reasons behind (Australian) children’s use of textisms. The spread of textese has been attributed to texters’ desire to overcome the confines of the alphanumeric mobile phone keypad (Crystal, 2008). Since several letters are assigned to each number, the multi-press style of texting requires the somewhat laborious pressing of the same button one to four times to type each letter (Taylor & Vincent, 2005). The use of textese thus has obvious savings for multi-press texters, of both time and screen-space (as message character count cannot exceed 160). 
However, there is evidence, discussed below, that reading textese can be relatively slow and difficult for the message recipient, compared to Children’s text-messaging and literacy skills 4 reading conventional English. Since the use of textese is now widespread, it is important to examine the potential advantages and disadvantages that this form of writing may have for message senders and recipients, especially children, whose knowledge of conventional English spelling is still developing. To test the potential advantages of using textese for multi-press texters, Neville (2003) examined the speed and accuracy of textese versus conventional English in writing and reading text messages. British girls aged 11-16 years were dictated two short passages to type into a mobile phone: one using conventional English spelling, and the other “as if writing to a friend”. They also read two messages aloud from the mobile phone, one in conventional English, and the other in textese. The proportion of textisms produced is not reported, but no differences in textese use were observed between texters and non-texters. Writing time was significantly faster for textese than conventional English messages, with greater use of textisms significantly correlated with faster message typing times. However, participants were significantly faster at reading messages written in conventional English than in textese, regardless of their usual texting frequency. Kemp (2010) largely followed Neville’s (2003) design, but with 61 Australian undergraduates (mean age 22 years), all regular texters. These adults, too, were significantly faster at writing, but slower at reading, messages written in textese than in conventional English, regardless of their usual messaging frequency. Further, adults also made significantly more reading errors for messages written in textese than conventional English. These findings converge on the important conclusion that while the use of textisms makes writing more efficient for the message sender, it costs the receiver more time to read it. However, both Neville (2003) and Kemp (2010) examined only multi-press method texting, and not the predictive texting method now also available. Predictive texting requires only a single key-press per letter, and a dictionary-based system suggests one or more likely words Children’s text-messaging and literacy skills 5 based on the combinations entered (Taylor & Vincent, 2005). Textese may be used less by predictive texters than multi-press texters for two reasons. Firstly, predictive texting requires fewer key-presses than multi-press texting, which reduces the need to save time by taking linguistic short-cuts. Secondly, the dictionary-based predictive system makes it more difficult to type textisms that are not pre-programmed into the dictionary. Predictive texting is becoming increasingly popular, with recent studies reporting that 88% of Australian adults (Kemp, in press), 79% of Australian 13to 15-year-olds (De Jonge & Kemp, in press) and 55% of British 10to 12-year-olds (Plester et al., 2009) now use this method. Another aim of this study was thus to compare the reading and writing of textese and conventional English messages in children using their typical input method: predictive or multi-press texting, as well as in children who do not normally text. 
Finally, this study sought to investigate the popular assumption that exposure to unconventional word spellings might compromise children’s conventional literacy skills (e.g., Huang, 2008; Sutherland, 2002), with media articles revealing widespread disapproval of this communication style (Thurlow, 2006). In contrast, some authors have suggested that the use of textisms might actually improve children’s literacy skills (e.g., Crystal, 2008). Many textisms commonly used by children rely on the ability to distinguish, blend, and/or delete letter sounds (Plester et al., 2008, 2009). Practice at reading and creating textisms may therefore lead to improved phonological awareness (Crystal, 2008), which consistently predicts both reading and spelling prowess (e.g., Bradley & Bryant, 1983; Lundberg, Frost, & Petersen, 1988). Alternatively, children who use more textisms may do so because they have better phonological awareness, or poorer spellers may be drawn to using textisms to mask weak spelling ability (e.g., Sutherland, 2002). Thus, studying children’s textism use can provide further information on the links between the component skills that constitute both conventional and alternative, including textism-based, literacy. Children’s text-messaging and literacy skills 6 There is evidence for a positive link between the use of textisms and literacy skills in preteen children. Plester et al. (2008) asked 10to 12-year-old British children to translate messages from standard English to textese, and vice versa, with pen and paper. They found a significant positive correlation between textese use and verbal reasoning scores (Study 1) and spelling scores (Study 2). Plester et al. (2009) elicited text messages from a similar group of children by asking them to write messages in response to a given scenario. Again, textism use was significantly positively associated with word reading ability and phonological awareness scores (although not with spelling scores). Neville (2003) found that the number of textisms written, and the number read accurately, as well as the speed with which both conventional and textese messages were read and written, all correlated significantly with general spelling skill in 11to 16-year-old girls. The cross-sectional nature of these studies, and of the current study, means that causal relationships cannot be firmly established. However, Wood et al. (2009) report on a longitudinal study in which 8to 12-year-old children’s use of textese at the beginning of the school year predicted their skills in reading ability and phonological awareness at the end of the year, even after controlling for verbal IQ. These results provide the first support for the idea that textism use is driving the development of literacy skills, and thus that this use of technology can improve learning in the area of language and literacy. Taken together, these findings also provide important evidence against popular media claims that the use of textese is harming children’s traditional literacy skills. No similar research has yet been published with children outside the UK. The aim of the current study was thus to examine the speed and proficiency of textese use in Australian 10to 12-year-olds and, for the first time, to compare the r",
"title": ""
},
{
"docid": "3eeb8af163f02e8ab5f709bf75bc20d6",
"text": "The connection between part-of-speech (POS) categories and morphological properties is well-documented in linguistics but underutilized in text processing systems. This paper proposes a novel model for morphological segmentation that is driven by this connection. Our model learns that words with common affixes are likely to be in the same syntactic category and uses learned syntactic categories to refine the segmentation boundaries of words. Our results demonstrate that incorporating POS categorization yields substantial performance gains on morphological segmentation of Arabic. 1",
"title": ""
},
{
"docid": "72b67938df75b1668218e290dc2e1478",
"text": "Forensic entomology, the use of insects and other arthropods in forensic investigations, is becoming increasingly more important in such investigations. To ensure its optimal use by a diverse group of professionals including pathologists, entomologists and police officers, a common frame of guidelines and standards is essential. Therefore, the European Association for Forensic Entomology has developed a protocol document for best practice in forensic entomology, which includes an overview of equipment used for collection of entomological evidence and a detailed description of the methods applied. Together with the definitions of key terms and a short introduction to the most important methods for the estimation of the minimum postmortem interval, the present paper aims to encourage a high level of competency in the field of forensic entomology.",
"title": ""
},
{
"docid": "15ea58d6c00210a2da15ce91a9a482f6",
"text": "Video processing systems such as HEVC requiring low energy consumption needed for the multimedia market has lead to extensive development in fast algorithms for the efficient approximation of 2-D DCT transforms. The DCT is employed in a multitude of compression standards due to its remarkable energy compaction properties. Multiplier-free approximate DCT transforms have been proposed that offer superior compression performance at very low circuit complexity. Such approximations can be realized in digital VLSI hardware using additions and subtractions only, leading to significant reductions in chip area and power consumption compared to conventional DCTs and integer transforms. In this paper, we introduce a novel 8-point DCT approximation that requires only 14 addition operations and no multiplications. The proposed transform possesses low computational complexity and is compared to state-of-the-art DCT approximations in terms of both algorithm complexity and peak signal-to-noise ratio. The proposed DCT approximation is a candidate for reconfigurable video standards such as HEVC. The proposed transform and several other DCT approximations are mapped to systolic-array digital architectures and physically realized as digital prototype circuits using FPGA technology and mapped to 45 nm CMOS technology.",
"title": ""
},
{
"docid": "a6b5f49b8161b45540bdd333d8588cd8",
"text": "Personality inconsistency is one of the major problems for chit-chat sequence to sequence conversational agents. Works studying this problem have proposed models with the capability of generating personalized responses, but there is not an existing evaluation method for measuring the performance of these models on personality. This thesis develops a new evaluation method based on the psychological study of personality, in particular the Big Five personality traits. With the new evaluation method, the thesis examines if the responses generated by personalized chit-chat sequence to sequence conversational agents are distinguished for speakers with different personalities. The thesis also proposes a new model that generates distinguished responses based on given personalities. The results of our experiments in the thesis show that: for both the existing personalized model and the new model that we propose, the generated responses for speakers with different personalities are significantly more distinguished than a random baseline; specially for our new model, it has the capability of generating distinguished responses for different types of personalities measured by the Big Five personality traits.",
"title": ""
},
{
"docid": "7eb2ed61c99e03080250c23b1259280d",
"text": "-------------------------------------------------------------------ABSTRACT------------------------------------------------------------Over the past decade there has been a rapid growth in higher education system. A lot of new institutions have come up both from public and private sector offering variety of courses for under graduating and post graduating students. The rates of enrolments for higher education has also increased but not as much as the number of higher institutions are increasing. It is a concern for today’s education system and this gap has to be identified and properly addressed to the learning community. Hence it has become important to understand the requirement of students and their academic progression. Educational Data Mining helps in a big way to answer the issues of predictions and profiling of not only students but other stake holders of education sectors. This paper discusses the application of various Data Mining tools and techniques that can be effectively used in answering the issues of predictions of student’s performance and their profiling.",
"title": ""
},
{
"docid": "beea84b0d96da0f4b29eabf3b242a55c",
"text": "Recent years have seen a growing interest in creating virtual agents to populate the cast of characters for interactive narrative. A key challenge posed by interactive characters for narrative environments is devising expressive dialogue generators. To be effective, character dialogue generators must be able to simultaneously take into account multiple sources of information that bear on dialogue, including character attributes, plot development, and communicative goals. Building on the narrative theory of character archetypes, we propose an archetype-driven character dialogue generator that uses a probabilistic unification framework to generate dialogue motivated by character personality and narrative history to achieve communicative goals. The generator’s behavior is illustrated with character dialogue generation in a narrative-centered learning environment, CRYSTAL ISLAND.",
"title": ""
},
{
"docid": "f2a1e5d8e99977c53de9f2a82576db69",
"text": "During the last years, several masking schemes for AES have been proposed to secure hardware implementations against DPA attacks. In order to investigate the effectiveness of these countermeasures in practice, we have designed and manufactured an ASIC. The chip features an unmasked and two masked AES-128 encryption engines that can be attacked independently. In addition to conventional DPA attacks on the output of registers, we have also mounted attacks on the output of logic gates. Based on simulations and physical measurements we show that the unmasked and masked implementations leak side-channel information due to glitches at the output of logic gates. It turns out that masking the AES S-Boxes does not prevent DPA attacks, if glitches occur in the circuit.",
"title": ""
},
{
"docid": "269e1c0d737beafd10560360049c6ee3",
"text": "There is no doubt that Social media has gained wider acceptability and usability and is also becoming probably the most important communication tools among students especially at the higher level of educational pursuit. As much as social media is viewed as having bridged the gap in communication that existed. Within the social media Facebook, Twitter and others are now gaining more and more patronage. These websites and social forums are way of communicating directly with other people socially. Social media has the potentials of influencing decision-making in a very short time regardless of the distance. On the bases of its influence, benefits and demerits this study is carried out in order to highlight the potentials of social media in the academic setting by collaborative learning and improve the students' academic performance. The results show that collaborative learning positively and significantly with interactive with peers, interactive with teachers and engagement which impact the students’ academic performance.",
"title": ""
},
{
"docid": "5d102aec00c21891f97c8c083045e0c3",
"text": "A simple method for detecting salient regions in images is proposed. It requires only edge detection, threshold decomposition, the distance transform, and thresholding. Moreover, it avoids the need for setting any parameter values. Experiments show that the resulting regions are relatively coarse, but overall the method is surprisingly effective, and has the benefit of easy implementation. Quantitative tests were carried out on Liu et al.’s dataset of 5000 images. Although the ratings of our simple method were not as good as their approach which involved an extensive training stage, they were comparable to several other popular methods from the literature. Further tests on Kootstra and Schomaker’s dataset of 99 images also showed promising results.",
"title": ""
},
{
"docid": "90cbb02beb09695320d7ab72d709b70e",
"text": "Domain adaptation learning aims to solve the classification problems of unlabeled target domain by using rich labeled samples in source domain, but there are three main problems: negative transfer, under adaptation and under fitting. Aiming at these problems, a domain adaptation network based on hypergraph regularized denoising autoencoder (DAHDA) is proposed in this paper. To better fit the data distribution, the network is built with denoising autoencoder which can extract more robust feature representation. In the last feature and classification layers, the marginal and conditional distribution matching terms between domains are obtained via maximum mean discrepancy measurement to solve the under adaptation problem. To avoid negative transfer, the hypergraph regularization term is introduced to explore the high-order relationships among data. The classification performance of the model can be improved by preserving the statistical property and geometric structure simultaneously. Experimental results of 16 cross-domain transfer tasks verify that DAHDA outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "7edb8a803734f4eb9418b8c34b1bf07c",
"text": "Building automation systems (BAS) provide automatic control of the conditions of indoor environments. The historical root and still core domain of BAS is the automation of heating, ventilation and air-conditioning systems in large functional buildings. Their primary goal is to realize significant savings in energy and reduce cost. Yet the reach of BAS has extended to include information from all kinds of building systems, working toward the goal of \"intelligent buildings\". Since these systems are diverse by tradition, integration issues are of particular importance. When compared with the field of industrial automation, building automation exhibits specific, differing characteristics. The present paper introduces the task of building automation and the systems and communications infrastructure necessary to address it. Basic requirements are covered as well as standard application models and typical services. An overview of relevant standards is given, including BACnet, LonWorks and EIB/KNX as open systems of key significance in the building automation domain.",
"title": ""
},
{
"docid": "e44f67fec39390f215b5267c892d1a26",
"text": "Primary progressive aphasia (PPA) may be the onset of several neurodegenerative diseases. This study evaluates a cohort of patients with PPA to assess their progression to different clinical syndromes, associated factors that modulate this progression, and patterns of cerebral metabolism linked to different clinical evolutionary forms. Thirty-five patients meeting PPA criteria underwent a clinical and neuroimaging 18F-Fluorodeoxyglucose PET evaluation. Survival analysis was performed using time from clinical onset to the development of a non-language symptom or deficit (PPA-plus). Cerebral metabolism was analyzed using Statistical Parametric Mapping. Patients classified into three PPA variants evolved to atypical parkinsonism, behavioral disorder and motor neuron disease in the agrammatic variant; to behavioral disorder in the semantic; and to memory impairment in the logopenic. Median time from the onset of symptoms to PPA-plus was 36 months (31–40, 95 % confidence interval). Right laterality, and years of education were associated to a lower risk of progression, while logopenic variant to a higher risk. Different regions of hypometabolism were identified in agrammatic PPA with parkinsonism, motor neuron disease and logopenic PPA-plus. Clinical course of PPA differs according to each variant. Left anterior temporal and frontal medial hypometabolism in agrammatic variant is linked to motor neuron disease and atypical parkinsonism, respectively. PPA variant, laterality and education may be associated to the risk of progression. These results suggest the possibility that clinical and imaging data could help to predict the clinical course of PPA.",
"title": ""
},
{
"docid": "fc38e1e16370a3e94618e8c7e2887635",
"text": "We present a short survey and exposition of some of the important aspects of Turkish that have proved to be interesting and challenging for natural language and speech processing. Most of the challenges stem from the complex morphology of Turkish and how morphology interacts with syntax. Finally we provide a short overview of the major tools and resources developed for Turkish over the last two decades. (Parts of this chapter were previously published as Oflazer (Lang Resour Eval 48(4):639–653, 2014).)",
"title": ""
},
{
"docid": "ee9ca88d092538a399d192cf1b9e9df6",
"text": "The new user problem in recommender systems is still challenging, and there is not yet a unique solution that can be applied in any domain or situation. In this paper we analyze viable solutions to the new user problem in collaborative filtering (CF) that are based on the exploitation of user personality information: (a) personality-based CF, which directly improves the recommendation prediction model by incorporating user personality information, (b) personality-based active learning, which utilizes personality information for identifying additional useful preference data in the target recommendation domain to be elicited from the user, and (c) personality-based cross-domain recommendation, which exploits personality information to better use user preference data from auxiliary domains which can be used to compensate the lack of user preference data in the target domain. We benchmark the effectiveness of these methods on large datasets that span several domains, namely movies, music and books. Our results show that personality-aware methods achieve performance improvements that range from 6 to 94 % for users completely new to the system, while increasing the novelty of the recommended items by 3–40 % with respect to the non-personalized popularity baseline. We also discuss the limitations of our approach and the situations in which the proposed methods can be better applied, hence providing guidelines for researchers and practitioners in the field.",
"title": ""
}
] |
scidocsrr
|
9b364368adef3019c0ee62758ecbfadd
|
A tale of three next generation sequencing platforms: comparison of Ion Torrent, Pacific Biosciences and Illumina MiSeq sequencers
|
[
{
"docid": "9cf429c5398ac440438cb27b44fb710f",
"text": "In vivo protein-DNA interactions connect each transcription factor with its direct targets to form a gene network scaffold. To map these protein-DNA interactions comprehensively across entire mammalian genomes, we developed a large-scale chromatin immunoprecipitation assay (ChIPSeq) based on direct ultrahigh-throughput DNA sequencing. This sequence census method was then used to map in vivo binding of the neuron-restrictive silencer factor (NRSF; also known as REST, for repressor element-1 silencing transcription factor) to 1946 locations in the human genome. The data display sharp resolution of binding position [+/-50 base pairs (bp)], which facilitated our finding motifs and allowed us to identify noncanonical NRSF-binding motifs. These ChIPSeq data also have high sensitivity and specificity [ROC (receiver operator characteristic) area >/= 0.96] and statistical confidence (P <10(-4)), properties that were important for inferring new candidate interactions. These include key transcription factors in the gene network that regulates pancreatic islet cell development.",
"title": ""
}
] |
[
{
"docid": "af7c62cba99c426e6108d164939b44de",
"text": "The hippocampal formation can encode relative spatial location, without reference to external cues, by the integration of linear and angular self-motion (path integration). Theoretical studies, in conjunction with recent empirical discoveries, suggest that the medial entorhinal cortex (MEC) might perform some of the essential underlying computations by means of a unique, periodic synaptic matrix that could be self-organized in early development through a simple, symmetry-breaking operation. The scale at which space is represented increases systematically along the dorsoventral axis in both the hippocampus and the MEC, apparently because of systematic variation in the gain of a movement-speed signal. Convergence of spatially periodic input at multiple scales, from so-called grid cells in the entorhinal cortex, might result in non-periodic spatial firing patterns (place fields) in the hippocampus.",
"title": ""
},
{
"docid": "b9f0d1d80ba7f8c304a601d179730951",
"text": "A critical part of developing a reliable software system is testing its recovery code. This code is traditionally difficult to test in the lab, and, in the field, it rarely gets to run; yet, when it does run, it must execute flawlessly in order to recover the system from failure. In this article, we present a library-level fault injection engine that enables the productive use of fault injection for software testing. We describe automated techniques for reliably identifying errors that applications may encounter when interacting with their environment, for automatically identifying high-value injection targets in program binaries, and for producing efficient injection test scenarios. We present a framework for writing precise triggers that inject desired faults, in the form of error return codes and corresponding side effects, at the boundary between applications and libraries. These techniques are embodied in LFI, a new fault injection engine we are distributing http://lfi.epfl.ch. This article includes a report of our initial experience using LFI. Most notably, LFI found 12 serious, previously unreported bugs in the MySQL database server, Git version control system, BIND name server, Pidgin IM client, and PBFT replication system with no developer assistance and no access to source code. LFI also increased recovery-code coverage from virtually zero up to 60% entirely automatically without requiring new tests or human involvement.",
"title": ""
},
{
"docid": "32539e8223b95b25f01f81b29a7e1dc1",
"text": "In a ciphertext-policy attribute-based encryption (CP-ABE) system, decryption keys are defined over attributes shared by multiple users. Given a decryption key, it may not be always possible to trace to the original key owner. As a decryption privilege could be possessed by multiple users who own the same set of attributes, malicious users might be tempted to leak their decryption privileges to some third parties, for financial gain as an example, without the risk of being caught. This problem severely limits the applications of CP-ABE. Several traceable CP-ABE (T-CP-ABE) systems have been proposed to address this problem, but the expressiveness of policies in those systems is limited where only and gate with wildcard is currently supported. In this paper we propose a new T-CP-ABE system that supports policies expressed in any monotone access structures. Also, the proposed system is as efficient and secure as one of the best (non-traceable) CP-ABE systems currently available, that is, this work adds traceability to an existing expressive, efficient, and secure CP-ABE scheme without weakening its security or setting any particular trade-off on its performance.",
"title": ""
},
{
"docid": "4e8a53361e8ad0b0fef64136b2d6d43a",
"text": "Real-time crime forecasting is important. However, accurate prediction of when and where the next crime will happen is difficult. No known physical model provides a reasonable approximation to such a complex system. Historical crime data are sparse in both space and time and the signal of interests is weak. In this work, we first present a proper representation of crime data. We then adapt the spatial temporal residual network on the well represented data to predict the distribution of crime in Los Angeles at the scale of hours in neighborhood-sized parcels. These experiments as well as comparisons with several existing approaches to prediction demonstrate the superiority of the proposed model in terms of accuracy. Finally, we present a ternarization technique to address the resource consumption issue for its deployment in real world. This work is an extension of our short conference proceeding paper [Wang et al, Arxiv 1707.03340].",
"title": ""
},
{
"docid": "8f9b348eed632aa05a33b7810d7988f6",
"text": "Text classification models are becoming increasingly complex and opaque, however for many applications it is essential that the models are interpretable. Recently, a variety of approaches have been proposed for generating local explanations. While robust evaluations are needed to drive further progress, so far it is unclear which evaluation approaches are suitable. This paper is a first step towards more robust evaluations of local explanations. We evaluate a variety of local explanation approaches using automatic measures based on word deletion. Furthermore, we show that an evaluation using a crowdsourcing experiment correlates moderately with these automatic measures and that a variety of other factors also impact the human judgements.",
"title": ""
},
{
"docid": "6dddd252eec80ec4f3535a82e25809cf",
"text": "The design and construction of truly humanoid robots that can perceive and interact with the environment depends significantly on their perception capabilities. In this paper we present the Karlsruhe Humanoid Head, which has been designed to be used both as part of our humanoid robots ARMAR-IIIa and ARMAR-IIIb and as a stand-alone robot head for studying various visual perception tasks in the context of object recognition and human-robot interaction. The head has seven degrees of freedom (DoF). The eyes have a common tilt and can pan independently. Each eye is equipped with two digital color cameras, one with a wide-angle lens for peripheral vision and one with a narrow-angle lens for foveal vision to allow simple visuo-motor behaviors. Among these are tracking and saccadic motions towards salient regions, as well as more complex visual tasks such as hand-eye coordination. We present the mechatronic design concept, the motor control system, the sensor system and the computational system. To demonstrate the capabilities of the head, we present accuracy test results, and the implementation of both open-loop and closed-loop control on the head.",
"title": ""
},
{
"docid": "a7b0f0455482765efd3801c3ae9f85b7",
"text": "The Business Process Modelling Notation (BPMN) is a standard for capturing business processes in the early phases of systems development. The mix of constructs found in BPMN makes it possible to create models with semantic errors. Such errors are especially serious, because errors in the early phases of systems development are among the most costly and hardest to correct. The ability to statically check the semantic correctness of models is thus a desirable feature for modelling tools based on BPMN. Accordingly, this paper proposes a mapping from BPMN to a formal language, namely Petri nets, for which efficient analysis techniques are available. The proposed mapping has been implemented as a tool that, in conjunction with existing Petri net-based tools, enables the static analysis of BPMN models. The formalisation also led to the identification of deficiencies in the BPMN standard specification.",
"title": ""
},
{
"docid": "28877487175f704ea3c56d8b69863018",
"text": "In this paper, we attempt to make a formal analysis of the performance in automatic part of speech tagging. Lower and upper bounds in tagging precision using existing taggers or their combination are provided. Since we show that with existing taggers, automatic perfect tagging is not possible, we offer two solutions for applications requiring very high precision: (1) a solution involving minimum human intervention for a precision of over 98.7%, and (2) a combination of taggers using a memory based learning algorithm that succeeds in reducing the error rate with 11.6% with respect to the best tagger involved.",
"title": ""
},
{
"docid": "78815a2c4b7d3d3a9aa86936e4aef18e",
"text": "The purpose of this paper is to critically review the current trend in automobile engineering toward automation of many of the functions previously performed by the driver. Working on the assumption that automation in aviation represents the basic model for driver automation, the costs and benefits of automation in aviation are explored as a means of establishing where automation of drivers tasks are likely to yield benefits. It is concluded that there are areas where automation can provide benefits to the driver, but there are other areas where this is unlikely to be the case. Automation per se does not guarantee success, and therefore it becomes vital to involve Human Factors into design to identify where automation of driver functions can be allocated with a beneficial outcome for driving performance.",
"title": ""
},
{
"docid": "780e49047bdacda9862c51338aa1397f",
"text": "We consider stochastic volatility models under parameter uncertainty and investigate how model derived prices of European options are affected. We let the pricing parameters evolve dynamically in time within a specified region, and formalise the problem as a control problem where the control acts on the parameters to maximise/minimise the option value. Through a dual representation with backward stochastic differential equations, we obtain explicit equations for Heston’s model and investigate several numerical solutions thereof. In an empirical study, we apply our results to market data from the S&P 500 index where the model is estimated to historical asset prices. We find that the conservative model-prices cover 98% of the considered market-prices for a set of European call options.",
"title": ""
},
{
"docid": "448d49f87f2464851dfe61af7e66402a",
"text": "The success of CNNs is accompanied by deep models and heavy storage costs. For compressing CNNs, we propose an efficient and robust pruning approach, cross-entropy pruning (CEP). Given a trained CNN model, connections were divided into groups in a group-wise way according to their corresponding output neurons. All connections with their cross-entropy errors below a grouping threshold were then removed. A sparse model was obtained and the number of parameters in the baseline model significantly reduced. This letter also presents a highest cross-entropy pruning (HCEP) method that keeps a small portion of weights with the highest CEP. This method further improves the accuracy of CEP. To validate CEP, we conducted the experiments on low redundant networks that are hard to compress. For the MNIST data set, CEP achieves an 0.08% accuracy drop required by LeNet-5 benchmark with only 16% of original parameters. Our proposed CEP also reduces approximately 75% of the storage cost of AlexNet on the ILSVRC 2012 data set, increasing the top-1 errorby only 0.4% and top-5 error by only 0.2%. Compared with three existing methods on LeNet-5, our proposed CEP and HCEP perform significantly better than the existing methods in terms of the accuracy and stability. Some computer vision tasks on CNNs such as object detection and style transfer can be computed in a high-performance way using our CEP and HCEP strategies.",
"title": ""
},
{
"docid": "98df036ec06b4a1de727e1c0dd87993d",
"text": "Coccidiosis is the bane of the poultry industry causing considerable economic loss. Eimeria species are known as protozoan parasites to cause morbidity and death in poultry. In addition to anticoccidial chemicals and vaccines, natural products are emerging as an alternative and complementary way to control avian coccidiosis. In this review, we update recent advances in the use of anticoccidial phytoextracts and phytocompounds, which cover 32 plants and 40 phytocompounds, following a database search in PubMed, Web of Science, and Google Scholar. Four plant products commercially available for coccidiosis are included and discussed. We also highlight the chemical and biological properties of the plants and compounds as related to coccidiosis control. Emphasis is placed on the modes of action of the anticoccidial plants and compounds such as interference with the life cycle of Eimeria, regulation of host immunity to Eimeria, growth regulation of gut bacteria, and/or multiple mechanisms. Biological actions, mechanisms, and prophylactic/therapeutic potential of the compounds and extracts of plant origin in coccidiosis are summarized and discussed.",
"title": ""
},
{
"docid": "fd2107d499f0c5e37f1cde23c691b0d3",
"text": "Spin-transfer torque magnetic random access memory (STT-MRAM) is a promising emerging memory technology due to its various advantageous features such as scalability, nonvolatility, density, endurance, and fast speed. However, the reliability of STT-MRAM is severely impacted by environmental disturbances because radiation strike on the access transistor could introduce potential write and read failures for 1T1MTJ cells. In this paper, a comprehensive approach is proposed to evaluate the radiation-induced soft errors spanning from device modeling to circuit level analysis. The simulation based on 3-D metal-oxide-semiconductor transistor modeling is first performed to capture the radiation-induced transient current pulse. Then a compact switching model of magnetic tunneling junction (MTJ) is developed to analyze the various mechanisms of STT-MRAM write failures. The probability of failure of 1T1MTJ is characterized and built as look-up-tables. This approach enables designers to consider the effect of different factors such as radiation strength, write current magnitude and duration time on soft error rate of STT-MRAM memory arrays. Meanwhile, comprehensive write and sense circuits are evaluated for bit error rate analysis under random radiation effects and transistors process variation, which is critical for performance optimization of practical STT-MRAM read and sense circuits.",
"title": ""
},
{
"docid": "295d94e49b08e5a4bae0ba3cdcd3ba05",
"text": "Imitation learning (IL) consists of a set of tools that leverage expert demonstrations to quickly learn policies. However, if the expert is suboptimal, IL can yield policies with inferior performance compared to reinforcement learning (RL). In this paper, we aim to provide an algorithm that combines the best aspects of RL and IL. We accomplish this by formulating several popular RL and IL algorithms in a common mirror descent framework, showing that these algorithms can be viewed as a variation on a single approach. We then propose LOKI, a strategy for policy learning that first performs a small but random number of IL iterations before switching to a policy gradient RL method. We show that if the switching time is properly randomized, LOKI can learn to outperform a suboptimal expert and converge faster than running policy gradient from scratch. Finally, we evaluate the performance of LOKI experimentally in several simulated environments.",
"title": ""
},
{
"docid": "7b99361ec595958457819fd2c4c67473",
"text": "At present, touchscreens can differentiate multiple points of contact, but not who is touching the device. In this work, we consider how the electrical properties of humans and their attire can be used to support user differentiation on touchscreens. We propose a novel sensing approach based on Swept Frequency Capacitive Sensing, which measures the impedance of a user to the environment (i.e., ground) across a range of AC frequencies. Different people have different bone densities and muscle mass, wear different footwear, and so on. This, in turn, yields different impedance profiles, which allows for touch events, including multitouch gestures, to be attributed to a particular user. This has many interesting implications for interactive design. We describe and evaluate our sensing approach, demonstrating that the technique has considerable promise. We also discuss limitations, how these might be overcome, and next steps.",
"title": ""
},
{
"docid": "c0a75bf3a2d594fb87deb7b9f58a8080",
"text": "For WikiText-103 we swept over LSTM hidden sizes {1024, 2048, 4096}, no. LSTM layers {1, 2}, embedding dropout {0, 0.1, 0.2, 0.3}, use of layer norm (Ba et al., 2016b) {True,False}, and whether to share the input/output embedding parameters {True,False} totalling 96 parameters. A single-layer LSTM with 2048 hidden units with tied embedding parameters and an input dropout rate of 0.3 was selected, and we used this same model configuration for the other language corpora. We trained the models on 8 P100 Nvidia GPUs by splitting the batch size into 8 sub-batches, sending them to each GPU and summing the resulting gradients. The total batch size used was 512 and a sequence length of 100 was chosen. Gradients were clipped to a maximum norm value of 0.1. We did not pass the state of the LSTM between sequences during training, however the state is passed during evaluation.",
"title": ""
},
{
"docid": "f2f90ede8ea53894a2a19a8f4ad62fe1",
"text": "Subgraph/supergraph queries although central to graph analytics, are costly as they entail the NP-Complete problem of subgraph isomorphism. We present a fresh solution, the novel principle of which is to acquire and utilize knowledge from the results of previously executed queries. Our approach, iGQ, encompasses two component subindexes to identify if a new query is a subgraph/supergraph of previously executed queries and stores related key information. iGQ comes with novel query processing and index space management algorithms, including graph replacement policies. The end result is a system that leads to significant reduction in the number of required subgraph isomorphism tests and speedups in query processing time. iGQ can be incorporated into any sub/supergraph query processing method and help improve performance. In fact, it is the only contribution that can speedup significantly both subgraph and supergraph query processing. We establish the principles of iGQ and formally prove its correctness. We have implemented iGQ and have incorporated it within three popular recent state of the art index-based graph query processing solutions. We evaluated its performance using real-world and synthetic graph datasets with different characteristics, and a number of query workloads, showcasing its benefits.",
"title": ""
},
{
"docid": "5a83cb0ef928b6cae6ce1e0b21d47f60",
"text": "Software defined networking, characterized by a clear separation of the control and data planes, is being adopted as a novel paradigm for wired networking. With SDN, network operators can run their infrastructure more efficiently, supporting faster deployment of new services while enabling key features such as virtualization. In this article, we adopt an SDN-like approach applied to wireless mobile networks that will not only benefit from the same features as in the wired case, but will also leverage on the distinct features of mobile deployments to push improvements even further. We illustrate with a number of representative use cases the benefits of the adoption of the proposed architecture, which is detailed in terms of modules, interfaces, and high-level signaling. We also review the ongoing standardization efforts, and discuss the potential advantages and weaknesses, and the need for a coordinated approach.",
"title": ""
},
{
"docid": "2fb5f1e17e888049bd0f506f3a37f377",
"text": "While the Semantic Web has evolved to support the meaningful exchange of heterogeneous data through shared and controlled conceptualisations, Web 2.0 has demonstrated that large-scale community tagging sites can enrich the semantic web with readily accessible and valuable knowledge. In this paper, we investigate the integration of a movies folksonomy with a semantic knowledge base about usermovie rentals. The folksonomy is used to enrich the knowledge base with descriptions and categorisations of movie titles, and user interests and opinions. Using tags harvested from the Internet Movie Database, and movie rating data gathered by Netflix, we perform experiments to investigate the question that folksonomy-generated movie tag-clouds can be used to construct better user profiles that reflect a user’s level of interest in different kinds of movies, and therefore, provide a basis for prediction of their rating for a previously unseen movie.",
"title": ""
}
] |
scidocsrr
|
e407b28c5f59b875dea39f72b4ab2463
|
Cutting circles and polygons from area-minimizing rectangles
|
[
{
"docid": "9554640e49aea8bec5283463d5a2be1f",
"text": "In this paper, we study the problem of packing unequal circle s into a2D rectangular container. We solve this problem by proposing two greedy algorithms. Th e first algorithm, denoted by B1.0, selects the next circle to place according to the maximum hole degree rule , which is inspired from human activity in packing. The second algorithm, denoted by B1.5, improves B1.0 with aself look-ahead search strategy . The comparisons with the published methods on several inst ances taken from the literature show the good performance of our ap p oach.",
"title": ""
}
] |
[
{
"docid": "f5b372607a89ea6595683276e48d6dce",
"text": "In this paper, we present YAMAMA, a multi-dialect Arabic morphological analyzer and disambiguator. Our system is almost five times faster than the state-of-the-art MADAMIRA system with a slightly lower quality. In addition to speed, YAMAMA outputs a rich representation which allows for a wider spectrum of use. In this regard, YAMAMA transcends other systems, such as FARASA, which is faster but provides specific outputs catering to specific applications.",
"title": ""
},
{
"docid": "d969dfa0584101410fd2868f8de918bb",
"text": "Although fluxgates may have resolution of 50 pT and absolute precission of 1 nT, their accuracy is often degraded by crossfield response, non-linearities, hysteresis and perming effects. The trends are miniaturization, lower power consumption and production cost, non-linear tuning and digital processing. New core shapes and signal processing methods have been suggested.",
"title": ""
},
{
"docid": "f8d09d619442b3fbbd7b59370e7b6d54",
"text": "The Internet is a powerful tool that has changed the way people work. However, the ubiquity of the Internet has led to a new workplace threat to productivity-cyberloafing. Building on the ego depletion model of self-regulation, we examine how lost and low-quality sleep influence employee cyberloafing behaviors and how individual differences in conscientiousness moderate these effects. We also demonstrate that the shift to Daylight Saving Time (DST) results in a dramatic increase in cyberloafing behavior at the national level. We first tested the DST-cyberloafing relation through a national quasi-experiment, then directly tested the relation between sleep and cyberloafing in a closely controlled laboratory setting. We discuss the implications of our findings for theory, practice, and future research.",
"title": ""
},
{
"docid": "676540e4b0ce65a71e86bf346f639f22",
"text": "Methylation is a prevalent posttranscriptional modification of RNAs. However, whether mammalian microRNAs are methylated is unknown. Here, we show that the tRNA methyltransferase NSun2 methylates primary (pri-miR-125b), precursor (pre-miR-125b), and mature microRNA 125b (miR-125b) in vitro and in vivo. Methylation by NSun2 inhibits the processing of pri-miR-125b2 into pre-miR-125b2, decreases the cleavage of pre-miR-125b2 into miR-125, and attenuates the recruitment of RISC by miR-125, thereby repressing the function of miR-125b in silencing gene expression. Our results highlight the impact of miR-125b function via methylation by NSun2.",
"title": ""
},
{
"docid": "7644e24f667b221d2e5f47d71ce4e408",
"text": "Considerable adverse side effects and cytotoxicity of highly potent drugs for healthy tissue require the development of novel drug delivery systems to improve pharmacokinetics and result in selective distribution of the loaded agent. The introduction of targeted liposomal formulations has provided potential solutions for improved drug delivery to cancer cells, penetrating delivery through blood-brain barrier and gene therapy. A large number of investigations have been developed over the past few decades to overcome pharmacokinetics and unfavorable side effects limitations. These improvements have enabled targeted liposome to meet criteria for successful and improved potent drug targeting. Promising in vitro and in vivo results for liposomal-directed delivery systems appear to be effective for vast variety of highly potent therapeutics. This review will focus on the past decade's potential use and study of highly potent drugs using targeted liposomes.",
"title": ""
},
{
"docid": "73beec89ce06abfe10edb9e446b8b2f8",
"text": "Pinching is an important capability for mobile robots handling small items or tools. Successful pinching requires force-closure and, in underwater applications, gentle suction flow at the fingertips can dramatically improve the handling of light objects by counteracting the negative effects of water lubrication and enhancing friction. In addition, monitoring the flow gives a measure of suction-engagement and can act as a binary tactile sensor. Although a suction system adds complexity, elastic tubes can double as passive spring elements for desired finger kinematics.",
"title": ""
},
{
"docid": "5ead5040d26d424ab6ce9ce5c8cb87b1",
"text": "Nowadays, information technologies play an important role in education. In education, mobile and TV applications can be considered a support tool in the teaching learning process, however, relevant and appropriate mobile and TV applications are not always available; teachers can only judge applications by reviews or anecdotes instead of testing them. These reasons lead to the needs and benefits for creating one’s own mobile application for teaching and learning. In this work, we present a cloud-based platform for multi-device educational software generation (smartphones, tablets, Web, Android-based TV boxes, and smart TV devices) called AthenaCloud. It is important to mention that an open cloud-based platform allows teachers to create their own multi-device software by using a personal computer with Internet access. The goal of this platform is to provide a software tool to help educators upload their electronic contents – or use existing contents in an open repository – and package them in the desired setup file for one of the supported devices and operating systems.",
"title": ""
},
{
"docid": "13af177013f8ba7738b322146d0786f2",
"text": "Edge computing pushes application logic and the underlying data to the edge of the network, with the aim of improving availability and scalability. As the edge servers are not necessarily secure, there must be provisions for validating their outputs. This paper proposes a mechanism that creates a verification object (VO) for checking the integrity of each query result produced by an edge server - that values in the result tuples are not tampered with, and that no spurious tuples are introduced. The primary advantages of our proposed mechanism are that the VO is independent of the database size, and that relational operations can still be fulfilled by the edge servers. These advantages reduce transmission load and processing at the clients. We also show how insert and delete transactions can be supported.",
"title": ""
},
{
"docid": "0867eb365ca19f664bd265a9adaa44e5",
"text": "We present VI-DSO, a novel approach for visual-inertial odometry, which jointly estimates camera poses and sparse scene geometry by minimizing photometric and IMU measurement errors in a combined energy functional. The visual part of the system performs a bundle-adjustment like optimization on a sparse set of points, but unlike key-point based systems it directly minimizes a photometric error. This makes it possible for the system to track not only corners, but any pixels with large enough intensity gradients. IMU information is accumulated between several frames using measurement preintegration, and is inserted into the optimization as an additional constraint between keyframes. We explicitly include scale and gravity direction into our model and jointly optimize them together with other variables such as poses. As the scale is often not immediately observable using IMU data this allows us to initialize our visual-inertial system with an arbitrary scale instead of having to delay the initialization until everything is observable. We perform partial marginalization of old variables so that updates can be computed in a reasonable time. In order to keep the system consistent we propose a novel strategy which we call “dynamic marginalization”. This technique allows us to use partial marginalization even in cases where the initial scale estimate is far from the optimum. We evaluate our method on the challenging EuRoC dataset, showing that VI-DSO outperforms the state of the art.",
"title": ""
},
{
"docid": "c12d595a944aa592fd3a1414fa873f93",
"text": "Central nervous system cytotoxicity is linked to neurodegenerative disorders. The objective of the study was to investigate whether monosodium glutamate (MSG) neurotoxicity can be reversed by natural products, such as ginger or propolis, in male rats. Four different groups of Wistar rats were utilized in the study. Group A served as a normal control, whereas group B was orally administered with MSG (100 mg/kg body weight, via oral gavage). Two additional groups, C and D, were given MSG as group B along with oral dose (500 mg/kg body weight) of either ginger or propolis (600 mg/kg body weight) once a day for two months. At the end, the rats were sacrificed, and the brain tissue was excised and levels of neurotransmitters, ß-amyloid, and DNA oxidative marker 8-OHdG were estimated in the brain homogenates. Further, formalin-fixed and paraffin-embedded brain sections were used for histopathological evaluation. The results showed that MSG increased lipid peroxidation, nitric oxide, neurotransmitters, and 8-OHdG as well as registered an accumulation of ß-amyloid peptides compared to normal control rats. Moreover, significant depletions of glutathione, superoxide dismutase, and catalase as well as histopathological alterations in the brain tissue of MSG-treated rats were noticed in comparison with the normal control. In contrast, treatment with ginger greatly attenuated the neurotoxic effects of MSG through suppression of 8-OHdG and β-amyloid accumulation as well as alteration of neurotransmitter levels. Further improvements were also noticed based on histological alterations and reduction of neurodegeneration in the brain tissue. A modest inhibition of the neurodegenerative markers was observed by propolis. The study clearly indicates a neuroprotective effect of ginger and propolis against MSG-induced neurodegenerative disorders and these beneficial effects could be attributed to the polyphenolic compounds present in these natural products.",
"title": ""
},
{
"docid": "d5284538412222101f084fee2dc1acc4",
"text": "The hand is an integral component of the human body, with an incredible spectrum of functionality. In addition to possessing gross and fine motor capabilities essential for physical survival, the hand is fundamental to social conventions, enabling greeting, grooming, artistic expression and syntactical communication. The loss of one or both hands is, thus, a devastating experience, requiring significant psychological support and physical rehabilitation. The majority of hand amputations occur in working-age males, most commonly as a result of work-related trauma or as casualties sustained during combat. For millennia, humans have used state-of-the-art technology to design clever devices to facilitate the reintegration of hand amputees into society. The present article provides a historical overview of the progress in replacing a missing hand, from early iron hands intended primarily for use in battle, to today's standard body-powered and myoelectric prostheses, to revolutionary advancements in the restoration of sensorimotor control with targeted reinnervation and hand transplantation.",
"title": ""
},
{
"docid": "0d722ecc5bd9de4151efa09b55de7b8a",
"text": "As international research studies become more commonplace, the importance of developing multilingual research instruments continues to increase and with it that of translated materials. It is therefore not unexpected that assessing the quality of translated materials (e.g., research instruments, questionnaires, etc.) has become essential to cross-cultural research, given that the reliability and validity of the research findings crucially depend on the translated instruments. In some fields (e.g., public health and medicine), the quality of translated instruments can also impact the effectiveness and success of interventions and public campaigns. Back-translation (BT) is a commonly used quality assessment tool in cross-cultural research. This quality assurance technique consists of (a) translation (target text [TT1]) of the source text (ST), (b) translation (TT2) of TT1 back into the source language, and (c) comparison of TT2 with ST to make sure there are no discrepancies. The accuracy of the BT with respect to the source is supposed to reflect equivalence/accuracy of the TT. This article shows how the use of BT as a translation quality assessment method can have a detrimental effect on a research study and proposes alternatives to BT. One alternative is illustrated on the basis of the translation and quality assessment methods used in a research study on hearing loss carried out in a border community in the southwest of the United States.",
"title": ""
},
{
"docid": "f68a026a494b441f4acff215290397ce",
"text": "A high performance quad-ridged orthomode transducer (OMT) with small size was designed based on the traditional quad-ridged OMT for C band receiver of shanghai 65m radio telescope. The structure and working principle of the improved OMT were introduced, detail size and main performance characteristics were given, and FEM and FIT methods were used to verify the reliability of the improved quad-ridged OMT. Finally, the tolerance of the OMT is analyzed.",
"title": ""
},
{
"docid": "f72aa1beaddfaa8efefeb6c7f6d7f18e",
"text": "Computational modeling is an important tool to understand and stabilize transient turbulent fluid flow in the continuous casting of steel in order to minimize defects. The current work combines the predictions of two steady Reynolds-averaged Navier-Stokes (RANS) models, a “filtered” unsteady RANS model, and two large eddy simulation models with ultrasonic Doppler velocimetry (UDV) measurements in a small scale liquid GaInSn model of the continuous casting mold region fed by a bifurcated well-bottom nozzle with horizontal ports. Both mean and transient features of the turbulent flow are investigated.",
"title": ""
},
{
"docid": "61ae61d0950610ee2ad5e07f64f9b983",
"text": "We present Searn, an algorithm for integrating search and learning to solve complex structured prediction problems such as those that occur in natural language, speech, computational biology, and vision. Searn is a meta-algorithm that transforms these complex problems into simple classification problems to which any binary classifier may be applied. Unlike current algorithms for structured learning that require decomposition of both the loss function and the feature functions over the predicted structure, Searn is able to learn prediction functions for any loss function and any class of features. Moreover, Searn comes with a strong, natural theoretical guarantee: good performance on the derived classification problems implies good performance on the structured prediction problem.",
"title": ""
},
{
"docid": "76f3e27672340476675f7417bdc34148",
"text": "Increased encephalization, or larger brain volume relative to body mass, is a repeated theme in vertebrate evolution. Here we present an extensive sampling of relative brain sizes in fossil and extant taxa in the mammalian order Carnivora (cats, dogs, bears, weasels, and their relatives). By using Akaike Information Criterion model selection and endocranial volume and body mass data for 289 species (including 125 fossil taxa), we document clade-specific evolutionary transformations in encephalization allometries. These evolutionary transformations include multiple independent encephalization increases and decreases in addition to a remarkably static basal Carnivora allometry that characterizes much of the suborder Feliformia and some taxa in the suborder Caniformia across much of their evolutionary history, emphasizing that complex processes shaped the modern distribution of encephalization across Carnivora. This analysis also permits critical evaluation of the social brain hypothesis (SBH), which predicts a close association between sociality and increased encephalization. Previous analyses based on living species alone appeared to support the SBH with respect to Carnivora, but those results are entirely dependent on data from modern Canidae (dogs). Incorporation of fossil data further reveals that no association exists between sociality and encephalization across Carnivora and that support for sociality as a causal agent of encephalization increase disappears for this clade.",
"title": ""
},
{
"docid": "aeaae00768283aaded98858ba482aa5c",
"text": "In the early 1990s, computer scientists became motivated by the idea of rendering human-computer interactions more humanlike and natural for their users in order to both address complaints that technologies impose a mechanical (sometimes even anti-social) aesthetic to their everyday environment, and also investigate innovative ways to manage system-environment complexity. With the recent development of the field of Social Robotics and particularly HumanRobot Interaction, the integration of intentional emotional mechanisms in a system’s control architecture becomes inevitable. Unfortunately, this presents significant issues that must be addressed for a successful emotional artificial system to be developed. This paper provides an additional dimension to documented arguments for and against the introduction of emotions into artificial systems by highlighting some fundamental paradoxes, mistakes, and proposes guidelines for how to develop successful affective intelligent social machines.",
"title": ""
},
{
"docid": "5de0fcb624f4c14b1a0fe43c60d7d4ad",
"text": "State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.",
"title": ""
},
{
"docid": "a3fa64c1f6553a46cfd9f88e9a802bb2",
"text": "With the increasing use of liquid crystal-based displays in everyday life, led both by the development of new portable electronic devices and the desire to minimize the use of printed paper, Nematic Liquid Crystals [4] (NLCs) are now hugely important industrial materials; and research into ways to engineer more efficient display technologies is crucial. Modern electronic display technology mostly relies on the ability of NLC materials to rotate the plane of polarized light (birefringence). The degree to which they can do this depends on the orientation of the molecules within the liquid crystal, and this in turn is affected by factors such as an applied electric field (the molecules, which are typically long and thin, line up in an applied field), or by boundary effects (a phenomenon known as surface anchoring). Most devices currently available use the former effect: an electric field is applied to control the molecular orientation of a thin film of nematic liquid crystal between crossed polarizers (which are also the electrodes), and this in turn controls the optical effect when light passes through the layer (figure 1). The main disadvantage of this set-up is that the electric field must be applied constantly in order for the display to maintain its configuration – if the field is removed, the molecules of the NLC relax into the unique, stable, field-free state (giving no contrast between pixels, and a monochrome display). This is expensive in terms of power consumption, leading to generally short battery lifetimes. On the other hand, if one could somehow exploit the fact that the bounding surfaces of a cell affect the molecular configuration – the anchoring effect, which can, to a large extent, be controlled by mechanical or chemical treatments [1]– then one might be able to engineer a bistable system, with two (or more) stable field-free states, giving two optically-distinct stable steady states of the device, without any electric field required to sustain them. Power is required only to change the state of the cell from one steady state to the other (and this issue of “switchability”, which can be hard to achieve, is really the challenging part of the design). Such technology is particularly appropriate for LCDs that change only infrequently, e.g. “electronic paper” applications such as e-books, e-newspapers, and so on. Certain technologies for bistable devices already exist; and most use the surface anchoring effect, combined with a clever choice of bounding surface geometry. The goal of this project will be to investigate simpler designs for liquid crystal devices that exhibit bistability. With planar surface topography, but different anchoring conditions at the two bounding surfaces, bistability is possible [2,3]; and a device of this kind should be easier to manufacture. Two different modeling approaches can be taken depending on what design aspect is to be optimized. A simple approach is to study only steady states of the system. Such states will be governed by (nonlinear) ODEs, and stability can be investigated as the electric field strength is varied. In a system with several steady states, loss of stability of one state at a critical field would mean a bifurcation of the solution, and a switch to a different state. Such an analysis could give information about how to achieve switching at low critical fields, for example; or at physically-realistic material parameter values; but would say nothing about how fast the switching might be. 
Speed of switching would need to be investigated by studying a simple PDE model for the system. We can explore both approaches here, and attempt to come up with some kind of “optimal” design – whatever that means!",
"title": ""
}
] |
scidocsrr
|
e8f05b9a6293252f356badd893f09021
|
Cyber Threat Intelligence
|
[
{
"docid": "e8c54c34e4944cbc65698cdd5b0966d4",
"text": "Bitcoin, a decentralized cryptographic currency that has experienced proliferating popularity over the past few years, is the common denominator in a wide variety of cybercrime. We perform a measurement analysis of CryptoLocker, a family of ransomware that encrypts a victim's files until a ransom is paid, within the Bitcoin ecosystem from September 5, 2013 through January 31, 2014. Using information collected from online fora, such as reddit and BitcoinTalk, as an initial starting point, we generate a cluster of 968 Bitcoin addresses belonging to CryptoLocker. We provide a lower bound for CryptoLocker's economy in Bitcoin and identify 795 ransom payments totalling 1,128.40 BTC ($310,472.38), but show that the proceeds could have been worth upwards of $1.1 million at peak valuation. By analyzing ransom payment timestamps both longitudinally across CryptoLocker's operating period and transversely across times of day, we detect changes in distributions and form conjectures on CryptoLocker that corroborate information from previous efforts. Additionally, we construct a network topology to detail CryptoLocker's financial infrastructure and obtain auxiliary information on the CryptoLocker operation. Most notably, we find evidence that suggests connections to popular Bitcoin services, such as Bitcoin Fog and BTC-e, and subtle links to other cybercrimes surrounding Bitcoin, such as the Sheep Marketplace scam of 2013. We use our study to underscore the value of measurement analyses and threat intelligence in understanding the erratic cybercrime landscape.",
"title": ""
},
{
"docid": "9bafd07082066235a6b99f00e360b0d2",
"text": "Mobile devices have become a significant part of people’s lives, leading to an increasing number of users involved with such technology. The rising number of users invites hackers to generate malicious applications. Besides, the security of sensitive data available on mobile devices is taken lightly. Relying on currently developed approaches is not sufficient, given that intelligent malware keeps modifying rapidly and as a result becomes more difficult to detect. In this paper, we propose an alternative solution to evaluating malware detection using the anomaly-based approach with machine learning classifiers. Among the various network traffic features, the four categories selected are basic information, content based, time based and connection based. The evaluation utilizes two datasets: public (i.e. MalGenome) and private (i.e. self-collected). Based on the evaluation results, both the Bayes network and random forest classifiers produced more accurate readings, with a 99.97 % true-positive rate (TPR) as opposed to the multi-layer perceptron with only 93.03 % on the MalGenome dataset. However, this experiment revealed that the k-nearest neighbor classifier efficiently detected the latest Android malware with an 84.57 % truepositive rate higher than other classifiers. Communicated by V. Loia. F. A. Narudin · A. Gani Mobile Cloud Computing (MCC), University of Malaya, 50603 Kuala Lumpur, Malaysia A. Feizollah (B) · N. B. Anuar Security Research Group (SECReg), Faculty of Computer Science and Information Technology, University of Malaya, 50603 Kuala Lumpur, Malaysia e-mail: ali.feizollah@siswa.um.edu.my",
"title": ""
}
] |
[
{
"docid": "548ca7ecd778bc64e4a3812acd73dcfb",
"text": "Inference algorithms of latent Dirichlet allocation (LDA), either for small or big data, can be broadly categorized into expectation-maximization (EM), variational Bayes (VB) and collapsed Gibbs sampling (GS). Looking for a unified understanding of these different inference algorithms is currently an important open problem. In this paper, we revisit these three algorithms from the entropy perspective, and show that EM can achieve the best predictive perplexity (a standard performance metric for LDA accuracy) by minimizing directly the cross entropy between the observed word distribution and LDA's predictive distribution. Moreover, EM can change the entropy of LDA's predictive distribution through tuning priors of LDA, such as the Dirichlet hyperparameters and the number of topics, to minimize the cross entropy with the observed word distribution. Finally, we propose the adaptive EM (AEM) algorithm that converges faster and more accurate than the current state-of-the-art SparseLDA [20] and AliasLDA [12] from small to big data and LDA models. The core idea is that the number of active topics, measured by the residuals between E-steps at successive iterations, decreases significantly, leading to the amortized σ(1) time complexity in terms of the number of topics. The open source code of AEM is available at GitHub.",
"title": ""
},
{
"docid": "56642ffad112346186a5c3f12133e59b",
"text": "The Skills for Inclusive Growth (S4IG) program is an initiative of the Australian Government’s aid program and implemented with the Sri Lankan Ministry of Skills Development and Vocational Training, Tourism Authorities, Provincial and District Level Government, Industry and Community Organisations. The Program will demonstrate how an integrated approach to skills development can support inclusive economic growth opportunities along the tourism value chain in the four districts of Trincomalee, Ampara, Batticaloa (Eastern Province) and Polonnaruwa (North Central Province). In doing this the S4IG supports sustainable job creation and increased incomes and business growth for the marginalised and the disadvantaged, particularly women and people with disabilities.",
"title": ""
},
{
"docid": "b8095fb49846c89a74cc8c0f69891877",
"text": "Attitudes held with strong moral conviction (moral mandates) were predicted to have different interpersonal consequences than strong but nonmoral attitudes. After controlling for indices of attitude strength, the authors explored the unique effect of moral conviction on the degree that people preferred greater social (Studies 1 and 2) and physical (Study 3) distance from attitudinally dissimilar others and the effects of moral conviction on group interaction and decision making in attitudinally homogeneous versus heterogeneous groups (Study 4). Results supported the moral mandate hypothesis: Stronger moral conviction led to (a) greater preferred social and physical distance from attitudinally dissimilar others, (b) intolerance of attitudinally dissimilar others in both intimate (e.g., friend) and distant relationships (e.g., owner of a store one frequents), (c) lower levels of good will and cooperativeness in attitudinally heterogeneous groups, and (d) a greater inability to generate procedural solutions to resolve disagreements.",
"title": ""
},
{
"docid": "6ccad3fd0fea9102d15bd37306f5f562",
"text": "This paper reviews deposition, integration, and device fabrication of ferroelectric PbZrxTi1−xO3 (PZT) films for applications in microelectromechanical systems. As examples, a piezoelectric ultrasonic micromotor and pyroelectric infrared detector array are presented. A summary of the published data on the piezoelectric properties of PZT thin films is given. The figures of merit for various applications are discussed. Some considerations and results on operation, reliability, and depolarization of PZT thin films are presented.",
"title": ""
},
{
"docid": "6886849300b597fdb179162744b40ee2",
"text": "This paper argues that the dominant study of the form and structure of games – their poetics – should be complemented by the analysis of their aesthetics (as understood by modern cultural theory): how gamers use their games, what aspects they enjoy and what kinds of pleasures they experience by playing them. The paper outlines a possible aesthetic theory of games based on different aspects of pleasure: the psychoanalytical, the social and the physical form of pleasure.",
"title": ""
},
{
"docid": "236b57a34968f27927cb6f967bdcb644",
"text": "The potential of microalgae as a source of renewable energy has received considerable interest, but if microalgal biofuel production is to be economically viable and sustainable, further optimization of mass culture conditions are needed. Wastewaters derived from municipal, agricultural and industrial activities potentially provide cost-effective and sustainable means of algal growth for biofuels. In addition, there is also potential for combining wastewater treatment by algae, such as nutrient removal, with biofuel production. Here we will review the current research on this topic and discuss the potential benefits and limitations of using wastewaters as resources for cost-effective microalgal biofuel production.",
"title": ""
},
{
"docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21",
"text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.",
"title": ""
},
{
"docid": "0072941488ef0e22b06d402d14cbe1be",
"text": "This chapter is about computational modelling of the process of musical composition, based on a cognitive model of human behaviour. The idea is to try to study not only the requirements for a computer system which is capable of musical composition, but also to relate it to human behaviour during the same process, so that it may, perhaps, work in the same way as a human composer, but also so that it may, more likely, help us understand how human composers work. Pearce et al. (2002) give a fuller discussion of the motivations behind this endeavour.",
"title": ""
},
{
"docid": "6a3afa9644477304d2d32d99c99e07c8",
"text": "This paper presents a comprehensive survey of five most widely used in-vehicle networks from three perspectives: system cost, data transmission capacity, and fault-tolerance capability. The paper reviews the pros and cons of each network, and identifies possible approaches to improve the quality of service (QoS). In addition, two classifications of automotive gateways have been presented along with a brief discussion about constructing a comprehensive in-vehicle communication system with different networks and automotive gateways. Furthermore, security threats to in-vehicle networks are briefly discussed, along with the corresponding protective methods. The survey concludes with highlighting the trends in future development of in-vehicle network technology and a proposal of a topology of the next generation in-vehicle network.",
"title": ""
},
{
"docid": "cee0d7bac437a3a98fa7aba31969341b",
"text": "Throughout history, the educational process used different educational technologies which did not significantly alter the manner of learning in the classroom. By implementing e-learning technology to the educational process, new and completely different innovative learning scenarios are made possible, including more active student involvement outside the traditional classroom. The quality of the realization of the educational objective in any learning environment depends primarily on the teacher who creates the educational process, mentors and acts as a moderator in the communication within the educational process, but also relies on the student who acquires the educational content. The traditional classroom learning and e-learning environment enable different manners of adopting educational content, and this paper reveals their key characteristics with the purpose of better use of e-learning technology in the educational process.",
"title": ""
},
{
"docid": "1d2a02e49c439b47c750fb24a8d25d18",
"text": "Neural sequence models are widely used to model time-series data. Equally ubiquitous is the usage of beam search (BS) as an approximate inference algorithm to decode output sequences from these models. BS explores the search space in a greedy left-right fashion retaining only the top B candidates. This tends to result in sequences that differ only slightly from each other. Producing lists of nearly identical sequences is not only computationally wasteful but also typically fails to capture the inherent ambiguity of complex AI tasks. To overcome this problem, we propose Diverse Beam Search (DBS), an alternative to BS that decodes a list of diverse outputs by optimizing a diversity-augmented objective. We observe that our method not only improved diversity but also finds better top 1 solutions by controlling for the exploration and exploitation of the search space. Moreover, these gains are achieved with minimal computational or memory overhead compared to beam search. To demonstrate the broad applicability of our method, we present results on image captioning, machine translation, conversation and visual question generation using both standard quantitative metrics and qualitative human studies. We find that our method consistently outperforms BS and previously proposed techniques for diverse decoding from neural sequence models.",
"title": ""
},
{
"docid": "51f5ba274068c0c03e5126bda056ba98",
"text": "Electricity is conceivably the most multipurpose energy carrier in modern global economy, and therefore primarily linked to human and economic development. Energy sector reform is critical to sustainable energy development and includes reviewing and reforming subsidies, establishing credible regulatory frameworks, developing policy environments through regulatory interventions, and creating marketbased approaches. Energy security has recently become an important policy driver and privatization of the electricity sector has secured energy supply and provided cheaper energy services in some countries in the short term, but has led to contrary effects elsewhere due to increasing competition, resulting in deferred investments in plant and infrastructure due to longer-term uncertainties. On the other hand global dependence on fossil fuels has led to the release of over 1100 GtCO2 into the atmosphere since the mid-19th century. Currently, energy-related GHG emissions, mainly from fossil fuel combustion for heat supply, electricity generation and transport, account for around 70% of total emissions including carbon dioxide, methane and some traces of nitrous oxide. This multitude of aspects play a role in societal debate in comparing electricity generating and supply options, such as cost, GHG emissions, radiological and toxicological exposure, occupational health and safety, employment, domestic energy security, and social impressions. Energy systems engineering provides a methodological scientific framework to arrive at realistic integrated solutions to complex energy problems, by adopting a holistic, systems-based approach, especially at decision making and planning stage. Modeling and optimization found widespread applications in the study of physical and chemical systems, production planning and scheduling systems, location and transportation problems, resource allocation in financial systems, and engineering design. This article reviews the literature on power and supply sector developments and analyzes the role of modeling and optimization in this sector as well as the future prospective of optimization modeling as a tool for sustainable energy systems. © 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "be4b6d68005337457e66fcdf21a04733",
"text": "A real-time algorithm to detect eye blinks in a video sequence from a standard camera is proposed. Recent landmark detectors, trained on in-the-wild datasets exhibit excellent robustness against face resolution, varying illumination and facial expressions. We show that the landmarks are detected precisely enough to reliably estimate the level of the eye openness. The proposed algorithm therefore estimates the facial landmark positions, extracts a single scalar quantity – eye aspect ratio (EAR) – characterizing the eye openness in each frame. Finally, blinks are detected either by an SVM classifier detecting eye blinks as a pattern of EAR values in a short temporal window or by hidden Markov model that estimates the eye states followed by a simple state machine recognizing the blinks according to the eye closure lengths. The proposed algorithm has comparable results with the state-of-the-art methods on three standard datasets.",
"title": ""
},
{
"docid": "bdfb0ec2182434dad32049fa04f8c795",
"text": "This paper introduces a vision-based gesture mouse system, which is roughly independent from the lighting conditions, because it only uses the depth data for hand sign recognition. A Kinect sensor was used to develop the system, but other depth sensing cameras are adequate as well, if their resolutions are similar or better than the resolution of Kinect sensor. Our aim was to find a comfortable, user-friendly solution, which can be used for a long time without getting tired. The implementation of the system was developed in C++, and two types of test were performed too. We investigated how fast the user can position with the cursor and click on objects and we also examined which controls of the graphical user interfaces (GUI) are easy to use and which ones are difficult to use with our gesture mouse. Our system is precise enough to use efficiently most of the elements of traditional GUI such as buttons, icons, scrollbars, etc. The accuracy achieved by advanced users is only slightly below as if they used the traditional mouse.",
"title": ""
},
{
"docid": "0d8075b26c8e8554ec8eec5f41a73c23",
"text": "As robots are going to spread in human society, the study of their appearance becomes a critical matter when assessing robots performance and appropriateness for an application and for the employment in different countries, with different background cultures and religions. Robot appearance categories are commonly divided in anthropomorphic, zoomorphic and functional. In this paper, we offer a theoretical contribution by introducing a new category, called `theomorphic robots', in which robots carry the shape and the identity of a supernatural creature or object within a religion. Discussing the theory of dehumanisation and the different categories of supernatural among different religions, we hypothesise the possible advantages of the theomorphic design for different applications.",
"title": ""
},
{
"docid": "c10a83c838f59adeb50608d5b96c0fbc",
"text": "Robots are typically equipped with multiple complementary sensors such as cameras and laser range finders. Camera generally provides dense 2D information while range sensors give sparse and accurate depth information in the form of a set of 3D points. In order to represent the different data sources in a common coordinate system, extrinsic calibration is needed. This paper presents a pipeline for extrinsic calibration a zed setero camera with Velodyne LiDAR puck using a novel self-made 3D marker whose edges can be robustly detected in the image and 3d point cloud. Our approach first estimate the large sensor displacement using just a single frame. then we optimize the coarse results by finding the best align of edges in order to obtain a more accurate calibration. Finally, the ratio of the 3D points correctly projected onto proper image segments is used to evaluate the accuracy of calibration.",
"title": ""
},
{
"docid": "e060548f90eb06f359b2d8cfcf713c29",
"text": "Objective\nTo conduct a systematic review of deep learning models for electronic health record (EHR) data, and illustrate various deep learning architectures for analyzing different data sources and their target applications. We also highlight ongoing research and identify open challenges in building deep learning models of EHRs.\n\n\nDesign/method\nWe searched PubMed and Google Scholar for papers on deep learning studies using EHR data published between January 1, 2010, and January 31, 2018. We summarize them according to these axes: types of analytics tasks, types of deep learning model architectures, special challenges arising from health data and tasks and their potential solutions, as well as evaluation strategies.\n\n\nResults\nWe surveyed and analyzed multiple aspects of the 98 articles we found and identified the following analytics tasks: disease detection/classification, sequential prediction of clinical events, concept embedding, data augmentation, and EHR data privacy. We then studied how deep architectures were applied to these tasks. We also discussed some special challenges arising from modeling EHR data and reviewed a few popular approaches. Finally, we summarized how performance evaluations were conducted for each task.\n\n\nDiscussion\nDespite the early success in using deep learning for health analytics applications, there still exist a number of issues to be addressed. We discuss them in detail including data and label availability, the interpretability and transparency of the model, and ease of deployment.",
"title": ""
},
{
"docid": "91452b6df15a16548e14677231757959",
"text": "Simulation provides a flexible approach to analyzing business processes. Through simulation experiments various “what if” questions can be answered and redesign alternatives can be compared with respect to key performance indicators. This chapter introduces simulation as an analysis tool for business process management. After describing the characteristics of business simulation models, the phases of a simulation project, the generation of random variables, and the analysis of simulation results, we discuss 15 risks, i.e., potential pitfalls jeopardizing the correctness and value of business process simulation. For example, the behavior of resources is often modeled in a rather naı̈ve manner resulting in unreliable simulation models. Whereas traditional simulation approaches rely on hand-made models, we advocate the use of process mining techniques for creating more reliable simulation models based on real event data. Moreover, simulation can be turned into a powerful tool for operational decision making by using real-time process data.",
"title": ""
},
{
"docid": "c1b79f29ce23b2d0ba97928831302e18",
"text": "Quality assessment of biometric fingerprint images is necessary to ensure high biometric performance in biometric recognition systems. We relate the quality of a fingerprint sample to the biometric performance to ensure an objective and performance oriented benchmark. The proposed quality metric is based on Gabor filter responses and is evaluated against eight contemporary quality estimation methods on four datasets using sample utility derived from the separation of genuine and imposter distributions as benchmark. The proposed metric shows performance and consistency approaching that of the composite NFIQ quality assessment algorithm and is thus a candidate for inclusion in a feature vector introducing the NFIQ 2.0 metric.",
"title": ""
},
{
"docid": "b3d494b7771480712b9db1c830240171",
"text": "Many trait-specific countermeasures to face spoofing attacks have been developed for security of face authentication. However, there is no superior face anti-spoofing technique to deal with every kind of spoofing attack in varying scenarios. In order to improve the generalization ability of face anti-spoofing approaches, an extendable multi-cues integration framework for face anti-spoofing using a hierarchical neural network is proposed, which can fuse image quality cues and motion cues for liveness detection. Shearlet is utilized to develop an image quality-based liveness feature. Dense optical flow is utilized to extract motion-based liveness features. A bottleneck feature fusion strategy can integrate different liveness features effectively. The proposed approach was evaluated on three public face antispoofing databases. A half total error rate (HTER) of 0% and an equal error rate (EER) of 0% were achieved on both REPLAY-ATTACK database and 3D-MAD database. An EER of 5.83% was achieved on CASIA-FASD",
"title": ""
}
] |
scidocsrr
|
5afc63a8ba8fef5f1901e0e2bc32b817
|
Multi-label classification search space in the MEKA software
|
[
{
"docid": "f0e35100617a7e34a04e43d6bee8db9d",
"text": "This paper presents a pruned sets method (PS) for multi-label classification. It is centred on the concept of treating sets of labels as single labels. This allows the classification process to inherently take into account correlations between labels. By pruning these sets, PS focuses only on the most important correlations, which reduces complexity and improves accuracy. By combining pruned sets in an ensemble scheme (EPS), new label sets can be formed to adapt to irregular or complex data. The results from experimental evaluation on a variety of multi-label datasets show that [E]PS can achieve better performance and train much faster than other multi-label methods.",
"title": ""
},
{
"docid": "cc1ae8daa1c1c4ee2b3b4a65ef48b6f5",
"text": "The use of entropy as a distance measure has several benefits. Amongst other things it provides a consistent approach to handling of symbolic attributes, real valued attributes and missing values. The approach of taking all possible transformation paths is discussed. We describe K*, an instance-based learner which uses such a measure, and results are presented which compare favourably with several machine learning algorithms.",
"title": ""
}
] |
[
{
"docid": "99698fc712b777dfb3d1eb782626586f",
"text": "Looking into the future, when the billion transitor ASICs will become reality, this p per presents Network on a chip (NOC) concept and its associated methodology as solu the design productivity problem. NOC is a network of computational, storage and I/O resou interconnected by a network of switches. Resources communcate with each other usi dressed data packets routed to their destination by the switch fabric. Arguments are pre to justify that in the billion transistor era, the area and performance penalty would be minim A concrete topology for the NOC, a honeycomb structure, is proposed and discussed. A odology to support NOC is presented. This methodology outlines steps from requirements to implementation. As an illustration of the concepts, a plausible mapping of an entire ba tion on hypothetical NOC is discussed.",
"title": ""
},
{
"docid": "f6cb9bfd79fbee8bff0a2f6ad0bca705",
"text": "Neuroendocrine neoplasms are detected very rarely in pregnant women. The following is a case report of carcinoid tumor of the appendix diagnosed in 28 year-old woman at 25th week of gestation. The woman delivered spontaneously a healthy baby at the 38th week of gestation. She did not require adjuvant therapy with somatostatin analogues. The patient remained in remission. There are not established standards of care due to the very rare incidence of carcinoid tumors in pregnancy. A review of the literature related to management and prognosis in such cases was done.",
"title": ""
},
{
"docid": "0f0ba88f817b467e04fa279a782ddf73",
"text": "Prior work suggests that receiving feedback that one's response was correct or incorrect (right/wrong feedback) does not help learners, as compared to not receiving any feedback at all (Pashler, Cepeda, Wixted, & Rohrer, 2005). In three experiments we examined the generality of this conclusion. Right/wrong feedback did not aid error correction, regardless of whether participants learned facts embedded in prose (Experiment 1) or translations of foreign vocabulary (Experiment 2). While right/wrong feedback did not improve the overall retention of correct answers (Experiments 1 and 2), it facilitated retention of low-confidence correct answers (Experiment 3). Reviewing the original materials was very useful to learners, but this benefit was similar after receiving either right/wrong feedback or no feedback (Experiments 1 and 2). Overall, right/wrong feedback conveys some information to the learner, but is not nearly as useful as being told the correct answer or having the chance to review the to-be-learned materials.",
"title": ""
},
{
"docid": "d2b45d76e93f07ededbab03deee82431",
"text": "A cordless battery charger will greatly improve the user friendliness of electric vehicles (EVs), accelerating the replacement of traditional internal combustion engine (ICE) vehicles with EVs and improving energy sustainability as a result. Resonant circuits are used for both the power transmitter and receiver of a cordless charger to compensate their coils and improve power transfer efficiency. However, conventional compensation circuit topology is not suitable for application to an EV, which involves very large power, a wide gap between the transmitter and receiver coils, and large horizontal misalignment. This paper proposes a novel compensation circuit topology that has a carefully designed series capacitor added to the parallel resonant circuit of the receiver. The proposed circuit has been implemented and tested on an EV. The simulation and experimental results are presented to show that the circuit can improve the power factor and power transfer efficiency, and as a result, allow a larger gap between the transmitter and receiver coils.",
"title": ""
},
{
"docid": "a1908f624faeff4c5c6a64009ece67ce",
"text": "PURPOSE\nTo describe change in mental health after treatment with antidepressants and trauma-focused cognitive behavioral therapy.\n\n\nMETHODS\nPatients receiving treatment at the Psychiatric Trauma Clinic for Refugees in Copenhagen completed self-ratings of level of functioning, quality of life, and symptoms of PTSD, depression and anxiety before and after treatment. Changes in mental state and predictors of change were evaluated in a sample that all received well-described and comparable treatment.\n\n\nRESULTS\n85 patients with PTSD or depression were included in the analysis. Significant improvement and effect size were observed on all rating scales (p-value <0.01 and Cohen's d 45-0.68). Correlation analysis showed no association between severity of symptoms at baseline and the observed change.\n\n\nCONCLUSION\nDespite methodological limitations, the finding of a significant improvement on all rating scales is important considering that previous follow-up studies of comparable patient populations have not found significant change in the patients'condition after treatment.",
"title": ""
},
{
"docid": "65f7180b6f2c2a22b0529b3bd811f9bf",
"text": "To succeed in the highly competitive e-commerce environment, it is vital to understand the impact of website quality in enhancing customer conversion and retention. Although numerous contingent website attributes have been identified in the extant website quality studies, there is no unified framework to classify these attributes and no comparison done between customer conversion and retention according to the different website quality attributes and their varying impact. This study adopts the model of Information Systems (IS) success by DeLone and McLean to provide a parsimonious and unified view of website quality, and compares the impact of website quality on intention of initial purchase with that on intention of continued purchase. With the proposed framework, we seek to understand how a company can increase customer conversion and/or retention. Our findings demonstrate the strength of our framework in explaining the impact of website quality on intention to purchase on the Web, and that website quality constructs exert different impact on intention of initial purchase and intention of continued purchase. The results suggest that an online company should focus on system quality to increase customer conversion, and on service quality for customer retention.",
"title": ""
},
{
"docid": "a36032c72d485d89e7eb5de784962a65",
"text": "OBJECTIVE\nThe past two decades have seen dramatic progress in our ability to model brain signals recorded by electroencephalography, functional near-infrared spectroscopy, etc., and to derive real-time estimates of user cognitive state, response, or intent for a variety of purposes: to restore communication by the severely disabled, to effect brain-actuated control and, more recently, to augment human-computer interaction. Continuing these advances, largely achieved through increases in computational power and methods, requires software tools to streamline the creation, testing, evaluation and deployment of new data analysis methods.\n\n\nAPPROACH\nHere we present BCILAB, an open-source MATLAB-based toolbox built to address the need for the development and testing of brain-computer interface (BCI) methods by providing an organized collection of over 100 pre-implemented methods and method variants, an easily extensible framework for the rapid prototyping of new methods, and a highly automated framework for systematic testing and evaluation of new implementations.\n\n\nMAIN RESULTS\nTo validate and illustrate the use of the framework, we present two sample analyses of publicly available data sets from recent BCI competitions and from a rapid serial visual presentation task. We demonstrate the straightforward use of BCILAB to obtain results compatible with the current BCI literature.\n\n\nSIGNIFICANCE\nThe aim of the BCILAB toolbox is to provide the BCI community a powerful toolkit for methods research and evaluation, thereby helping to accelerate the pace of innovation in the field, while complementing the existing spectrum of tools for real-time BCI experimentation, deployment and use.",
"title": ""
},
{
"docid": "aaf69cb42fc9d17cf0ae3b80a55f12d6",
"text": "Bringing Blockchain technology and business process management together, we follow the Design Science Research approach and design, implement, and evaluate a Blockchain prototype for crossorganizational workflow management together with a German bank. For the use case of a documentary letter of credit we describe the status quo of the process, identify areas of improvement, implement a Blockchain solution, and compare both workflows. The prototype illustrates that the process, as of today paper-based and with high manual effort, can be significantly improved. Our research reveals that a tamper-proof process history for improved auditability, automation of manual process steps and the decentralized nature of the system can be major advantages of a Blockchain solution for crossorganizational workflow management. Further, our research provides insights how Blockchain technology can be used for business process management in general.",
"title": ""
},
{
"docid": "d631d6526ec0be85cadfb3f4d3d73c73",
"text": "Nonnegative Matrix Factorization (NMF), a relatively novel paradigm for dimensionality reduction, has been in the ascendant since its inception. It incorporates the nonnegativity constraint and thus obtains the parts-based representation as well as enhancing the interpretability of the issue correspondingly. This survey paper mainly focuses on the theoretical research into NMF over the last 5 years, where the principles, basic models, properties, and algorithms of NMF along with its various modifications, extensions, and generalizations are summarized systematically. The existing NMF algorithms are divided into four categories: Basic NMF (BNMF), Constrained NMF (CNMF), Structured NMF (SNMF), and Generalized NMF (GNMF), upon which the design principles, characteristics, problems, relationships, and evolution of these algorithms are presented and analyzed comprehensively. Some related work not on NMF that NMF should learn from or has connections with is involved too. Moreover, some open issues remained to be solved are discussed. Several relevant application areas of NMF are also briefly described. This survey aims to construct an integrated, state-of-the-art framework for NMF concept, from which the follow-up research may benefit.",
"title": ""
},
{
"docid": "eb6636299df817817aa49f1f8dad04f5",
"text": "This paper introduces a new generative deep learning network for human motion synthesis and control. Our key idea is to combine recurrent neural networks (RNNs) and adversarial training for human motion modeling. We first describe an efficient method for training a RNNs model from prerecorded motion data. We implement recurrent neural networks with long short-term memory (LSTM) cells because they are capable of handling nonlinear dynamics and long term temporal dependencies present in human motions. Next, we train a refiner network using an adversarial loss, similar to Generative Adversarial Networks (GANs), such that the refined motion sequences are indistinguishable from real motion capture data using a discriminative network. We embed contact information into the generative deep learning model to further improve the performance of our generative model. The resulting model is appealing to motion synthesis and control because it is compact, contact-aware, and can generate an infinite number of naturally looking motions with infinite lengths. Our experiments show that motions generated by our deep learning model are always highly realistic and comparable to high-quality motion capture data. We demonstrate the power and effectiveness of our models by exploring a variety of applications, ranging from random motion synthesis, online/offline motion control, and motion filtering. We show the superiority of our generative model by comparison against baseline models.",
"title": ""
},
{
"docid": "220c70da9dff130a91eca6695fab5659",
"text": "Article history: Received 6 July 2016 Revised 2 September 2016 Accepted 17 September 2016 Available online 17 September 2016",
"title": ""
},
{
"docid": "775fe381aa59d3491ff50f593be5fafa",
"text": "This chapter elaborates on augmented reality marketing (ARM) as a digital marketing campaign and a strategic trend in tourism and hospitality. The computer assisted augmenting of perception by means of additional interactive information levels in real time is known as augmented reality. Augmented reality marketing is a constructed worldview on a device with blend of reality and added or augmented themes interacting with five sense organs and experiences. The systems and approaches of marketing are integrating with technological applications in almost all sectors of economies and in all phases of a business’s value delivery network. Trends in service sector marketing provide opportunities in generating technology led tourism marketing campaigns. Also, the adoption, relevance and significance of technology in tourism and hospitality value delivery network can hardly be ignored. Many factors are propelling the functionalities of diverse actors in tourism. This paper explores the use of technology at various phases of tourism and hospitality marketing, along with the role of technology in enhancing consumer experience and value addition. It further supports the view that technology is aiding in faster diffusion of tourism products, relates destinations or attractions and thus benefiting the entire society. The augmented reality in marketing can create effective and enjoyable interactive experience by engaging the customer through a rich and rewarding experience of virtually plus reality. Such a tool has real potential in marketing in tourism and hospitality sector. Thus, this study discusses the ARM as a promising trend in tourism and hospitality and how this will meet future needs of tourism and hospitality products or offerings. The Augmented Reality Marketing: A Merger of Marketing and Technology in Tourism",
"title": ""
},
{
"docid": "38438e6a0bd03ad5f076daa1f248d001",
"text": "In recent years, research on reading-compr question and answering has drawn intense attention in Language Processing. However, it is still a key issue to the high-level semantic vector representation of quest paragraph. Drawing inspiration from DrQA [1], wh question and answering system proposed by Facebook, tl proposes an attention-based question and answering 11 adds the binary representation of the paragraph, the par; attention to the question, and the question's attentioi paragraph. Meanwhile, a self-attention calculation m proposed to enhance the question semantic vector reption. Besides, it uses a multi-layer bidirectional Lon: Term Memory(BiLSTM) networks to calculate the h semantic vector representations of paragraphs and q Finally, bilinear functions are used to calculate the pr of the answer's position in the paragraph. The expe results on the Stanford Question Answering Dataset(SQl development set show that the F1 score is 80.1% and tl 71.4%, which demonstrates that the performance of the is better than that of the model of DrQA, since they inc 2% and 1.3% respectively.",
"title": ""
},
{
"docid": "26bb0b5412e4dbe038bb4c42962184d2",
"text": "Large organizations often face the critical challenge of sharing information and maintaining connections between disparate subunits. Tools for automated analysis of document collections, such as topic models, can provide an important means for communication. The value of topic modeling is in its ability to discover interpretable, coherent themes from unstructured document sets, yet it is not unusual to find semantic mismatches that substantially reduce user confidence. In this paper, we first present an expert-driven topic annotation study, undertaken in order to obtain an annotated set of baseline topics and their distinguishing characteristics. We then present a metric for detecting poor-quality topics that does not rely on human feedback or external reference corpora. Next we introduce a new topic model that incorporates salient properties of this metric. We show significant gains in topic quality on a substantial document collection from the National Institutes of Health, measured using both automated evaluation metrics and expert evaluations.",
"title": ""
},
{
"docid": "dc783054dac29af7d08cee0a13259a8d",
"text": "This paper develops a novel flexible capacitive tactile sensor array for prosthetic hand gripping force measurement. The sensor array has 8 × 8 (= 64) sensing units, each sensing unit has a four-layered structure: two thick PET layers with embedded copper electrodes generates a capacitor, a PDMS film with line-structure used as an insulation layer, and a top PDMS bump layer to concentrate external force. The structural design, working principle, and fabrication process of this sensor array are presented. The fabricated tactile sensor array features high flexibility has a spatial resolution of 2 mm. This is followed by the characterization of the sensing unit for normal force measurement and found that the sensing unit has two sensitivities: 4.82 0/00/mN for small contact force and 0.23 0/00/mN for large gripping force measurements. Finally, the tactile sensor array is integrated into a prosthetic hand for gripping force measurement. Results showed that the developed flexible capacitive tactile sensor array could be utilized for tactile sensing and real-time contact force visualization for prosthetic hand gripping applications.",
"title": ""
},
{
"docid": "445a49977b5d36f9da462e07faf79548",
"text": "In this paper, we consider the use of deep neural networks in the context of Multiple-Input-Multiple-Output (MIMO) detection. We give a brief introduction to deep learning and propose a modern neural network architecture suitable for this detection task. First, we consider the case in which the MIMO channel is constant, and we learn a detector for a specific system. Next, we consider the harder case in which the parameters are known yet changing and a single detector must be learned for all multiple varying channels. We demonstrate the performance of our deep MIMO detector using numerical simulations in comparison to competing methods including approximate message passing and semidefinite relaxation. The results show that deep networks can achieve state of the art accuracy with significantly lower complexity while providing robustness against ill conditioned channels and mis-specified noise variance.",
"title": ""
},
{
"docid": "4b96679173c825db7bc334449b6c4b83",
"text": "This article provides the first survey of computational models of emotion in reinforcement learning (RL) agents. The survey focuses on agent/robot emotions, and mostly ignores human user emotions. Emotions are recognized as functional in decision-making by influencing motivation and action selection. Therefore, computational emotion models are usually grounded in the agent’s decision making architecture, of which RL is an important subclass. Studying emotions in RL-based agents is useful for three research fields. For machine learning (ML) researchers, emotion models may improve learning efficiency. For the interactive ML and human–robot interaction community, emotions can communicate state and enhance user investment. Lastly, it allows affective modelling researchers to investigate their emotion theories in a successful AI agent class. This survey provides background on emotion theory and RL. It systematically addresses (1) from what underlying dimensions (e.g. homeostasis, appraisal) emotions can be derived and how these can be modelled in RL-agents, (2) what types of emotions have been derived from these dimensions, and (3) how these emotions may either influence the learning efficiency of the agent or be useful as social signals. We also systematically compare evaluation criteria, and draw connections to important RL sub-domains like (intrinsic) motivation and model-based RL. In short, this survey provides both a practical overview for engineers wanting to implement emotions in their RL agents, and identifies challenges and directions for future emotion-RL research.",
"title": ""
},
{
"docid": "6a7f59aafa08363b293412ce2f9aecf8",
"text": "The Xilinx Partial Reconfiguration Early Access Software Tools for ISE 9.2i has been an instrumental package for performing a wide variety of research on Xilinx FPGAs, and is now superseded with the corresponding non-free add-on for ISE 12.3. The original package was developed and offered by Xilinx as a downloadable add-on package to the Xilinx ISE 9.2 tools. The 9.2i toolkit provided a methodology for creating rectangular partial reconfiguration modules that could be swapped in and out of a static baseline design with one or more PR slots. This paper presents a new PR toolkit called Open PR that, for starters, provides similar functionality as the Xilinx PR Toolkit, yet is extendable to explore other modes of partial reconfiguration. The distinguishing feature of this toolkit is that it is being released as open source, and is intended to extend to the needs of individual researchers.",
"title": ""
},
{
"docid": "4594f0649665596ea708dccaf0557de3",
"text": "Detection of spam Twitter social networks is one of the significant research areas to discover unauthorized user accounts. A number of research works have been carried out to solve these issues but most of the existing techniques had not focused on various features and doesn't group similar user trending topics which become their major limitation. Trending topics collects the current Internet trends and topics of argument of each and every user. In order to overcome the problem of feature extraction,this work initially extracts many features such as user profile features, user activity features, location based features and text and content features. Then the extracted text features use Jenson-Shannon Divergence (JSD) measure to characterize each labeled tweet using natural language models. Different features are extracted from collected trending topics data in twitter. After features are extracted, clusters are formed to group similar trending topics of tweet user profile. Fuzzy K-means (FKM) algorithm primarily cluster the similar user profiles with same trending topics of tweet and centers are determined to similar user profiles with same trending topics of tweet from fuzzy membership function. Moreover, Extreme learning machine (ELM) algorithm is applied to analyze the growing characteristics of spam with similar topics in twitter from clustering result and acquire necessary knowledge in the detection of spam. The results are evaluated with F-measure, True Positive Rate (TPR), False Positive Rate (FPR) and Classification Accuracy with improved detection results.",
"title": ""
},
{
"docid": "ce18f78a9285a68016e7d793122d3079",
"text": "Civic technology, or civic tech, encompasses a rich body of work, inside and outside HCI, around how we shape technology for, and in turn how technology shapes, how we govern, organize, serve, and identify matters of concern for communities. This study builds on previous work by investigating how civic leaders in a large US city conceptualize civic tech, in particular, how they approach the intersection of data, design and civics. We encountered a range of overlapping voices, from providers, to connectors, to volunteers of civic services and resources. Through this account, we identified different conceptions and expectation of data, design and civics, as well as several shared issues around pressing problems and strategic aspirations. Reflecting on this set of issues produced guiding questions, in particular about the current and possible roles for design, to advance civic tech.",
"title": ""
}
] |
scidocsrr
|
256879b8b827de6f34139c093985bfd6
|
An Open Approach to Autonomous Vehicles
|
[
{
"docid": "74afc31d233f76e28b58f019dfc28df4",
"text": "We present a motion planner for autonomous highway driving that adapts the state lattice framework pioneered for planetary rover navigation to the structured environment of public roadways. The main contribution of this paper is a search space representation that allows the search algorithm to systematically and efficiently explore both spatial and temporal dimensions in real time. This allows the low-level trajectory planner to assume greater responsibility in planning to follow a leading vehicle, perform lane changes, and merge between other vehicles. We show that our algorithm can readily be accelerated on a GPU, and demonstrate it on an autonomous passenger vehicle.",
"title": ""
}
] |
[
{
"docid": "1f35cad3dc73e82bec9ee82963cc9a94",
"text": "Term-Relevance Prediction from Brain Signals (TRPB) is proposed to automatically detect relevance of text information directly from brain signals. An experiment with forty participants was conducted to record neural activity of participants while providing relevance judgments to text stimuli for a given topic. High-precision scientific equipment was used to quantify neural activity across 32 electroencephalography (EEG) channels. A classifier based on a multi-view EEG feature representation showed improvement up to 17% in relevance prediction based on brain signals alone. Relevance was also associated with brain activity with significant changes in certain brain areas. Consequently, TRPB is based on changes identified in specific brain areas and does not require user-specific training or calibration. Hence, relevance predictions can be conducted for unseen content and unseen participants. As an application of TRPB we demonstrate a high-precision variant of the classifier that constructs sets of relevant terms for a given unknown topic of interest. Our research shows that detecting relevance from brain signals is possible and allows the acquisition of relevance judgments without a need to observe any other user interaction. This suggests that TRPB could be used in combination or as an alternative for conventional implicit feedback signals, such as dwell time or click-through activity.",
"title": ""
},
{
"docid": "9e9237c5b886a108a88f824a745550eb",
"text": "Neuroblastoma is a solid tumour that arises from the developing sympathetic nervous system. Over the past decade, our understanding of this disease has advanced tremendously. The future challenge is to apply the knowledge gained to developing risk-based therapies and, ultimately, improving outcome. In this Review we discuss the key discoveries in the developmental biology, molecular genetics and immunology of neuroblastoma, as well as new translational tools for bringing these promising scientific advances into the clinic.",
"title": ""
},
{
"docid": "384f7f309e996d4cd289228a3f368d93",
"text": "With the advent of the ubiquitous era, many studies have been devoted to various situation-aware services in the semantic web environment. One of the most challenging studies involves implementing a situation-aware personalized music recommendation service which considers the user’s situation and preferences. Situation-aware music recommendation requires multidisciplinary efforts including low-level feature extraction and analysis, music mood classification and human emotion prediction. In this paper, we propose a new scheme for a situation-aware/user-adaptive music recommendation service in the semantic web environment. To do this, we first discuss utilizing knowledge for analyzing and retrieving music contents semantically, and a user adaptive music recommendation scheme based on semantic web technologies that facilitates the development of domain knowledge and a rule set. Based on this discussion, we describe our Context-based Music Recommendation (COMUS) ontology for modeling the user’s musical preferences and contexts, and supporting reasoning about the user’s desired emotions and preferences. Basically, COMUS defines an upper music ontology that captures concepts on the general properties of music such as titles, artists and genres. In addition, it provides functionality for adding domain-specific ontologies, such as music features, moods and situations, in a hierarchical manner, for extensibility. Using this context ontology, we believe that logical reasoning rules can be inferred based on high-level (implicit) knowledge such as situations from low-level (explicit) knowledge. As an innovation, our ontology can express detailed and complicated relations among music clips, moods and situations, which enables users to find appropriate music. We present some of the experiments we performed as a case-study for music recommendation.",
"title": ""
},
{
"docid": "87133250a9e04fd42f5da5ecacd39d70",
"text": "Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher.",
"title": ""
},
{
"docid": "470e354d364e54fecb39828847e0dc68",
"text": "Online solvers for partially observable Markov decision processes have been applied to problems with large discrete state spaces, but continuous state, action, and observation spaces remain a challenge. This paper begins by investigating double progressive widening (DPW) as a solution to this challenge. However, we prove that this modification alone is not sufficient because the belief representations in the search tree collapse to a single particle causing the algorithm to converge to a policy that is suboptimal regardless of the computation time. The main contribution of the paper is to propose a new algorithm, POMCPOW, that incorporates DPW and weighted particle filtering to overcome this deficiency and attack continuous problems. Simulation results show that these modifications allow the algorithm to be successful where previous approaches fail.",
"title": ""
},
{
"docid": "003e0146613ed3a781ea1e866128f2d9",
"text": "Virtual characters are an important part of many 3D graphical simulations. In entertainment or training applications, virtual characters might be one of the main mechanisms for creating and developing content and scenarios. In such applications the user may need to interact with a number of different characters that need to invoke specific responses in the user, so that the user interprets the scenario in the way that the designer intended. Whilst representations of virtual characters have come a long way in recent years, interactive virtual characters tend to be a bit “wooden” with respect to their perceived behaviour. In this STAR we give an overview of work on expressive virtual characters. In particular, we assume that a virtual character representation is already available, and we describe a variety of models and methods that are used to give the characters more “depth” so that they are less wooden and more plausible. We cover models of individual characters’ emotion and personality, models of interpersonal behaviour and methods for generating expression.",
"title": ""
},
{
"docid": "79c0490d7c19c855812beb8e71e52c54",
"text": "Software engineering project management (SEPM) has been the focus of much recent attention because of the enormous penalties incurred during software development and maintenance resulting from poor management. To date there has been no comprehensive study performed to determine the most significant problems of SEPM, their relative importance, or the research directions necessary to solve them. We conducted a major survey of individuals from all areas of the computer field to determine the general consensus on SEPM problems. Twenty hypothesized problems were submitted to several hundred individuals for their opinions. The 294 respondents validated most of these propositions. None of the propositions was rejected by the respondents as unimportant. A number of research directions were indicated by the respondents which, if followed, the respondents believed would lead to solutions for these problems.",
"title": ""
},
{
"docid": "4c671a45f593aa3f556df93a377333ea",
"text": "Motivation\nWhile drug combination therapies are a well-established concept in cancer treatment, identifying novel synergistic combinations is challenging due to the size of combinatorial space. However, computational approaches have emerged as a time- and cost-efficient way to prioritize combinations to test, based on recently available large-scale combination screening data. Recently, Deep Learning has had an impact in many research areas by achieving new state-of-the-art model performance. However, Deep Learning has not yet been applied to drug synergy prediction, which is the approach we present here, termed DeepSynergy. DeepSynergy uses chemical and genomic information as input information, a normalization strategy to account for input data heterogeneity, and conical layers to model drug synergies.\n\n\nResults\nDeepSynergy was compared to other machine learning methods such as Gradient Boosting Machines, Random Forests, Support Vector Machines and Elastic Nets on the largest publicly available synergy dataset with respect to mean squared error. DeepSynergy significantly outperformed the other methods with an improvement of 7.2% over the second best method at the prediction of novel drug combinations within the space of explored drugs and cell lines. At this task, the mean Pearson correlation coefficient between the measured and the predicted values of DeepSynergy was 0.73. Applying DeepSynergy for classification of these novel drug combinations resulted in a high predictive performance of an AUC of 0.90. Furthermore, we found that all compared methods exhibit low predictive performance when extrapolating to unexplored drugs or cell lines, which we suggest is due to limitations in the size and diversity of the dataset. We envision that DeepSynergy could be a valuable tool for selecting novel synergistic drug combinations.\n\n\nAvailability and implementation\nDeepSynergy is available via www.bioinf.jku.at/software/DeepSynergy.\n\n\nContact\nklambauer@bioinf.jku.at.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.",
"title": ""
},
{
"docid": "d2c0fe25a2089eb8b56e552212cd2390",
"text": "We consider the problem of reasoning with linear temporal logic on truncated paths. A truncated path is a path that is finite, but not necessarily maximal. Truncated paths arise naturally in several areas, among which are incomplete verification methods (such as simulation or bounded model checking) and hardware resets. We present a formalism for reasoning about truncated paths, and analyze its characteristics.",
"title": ""
},
{
"docid": "05941fa5fe1d7728d9bce44f524ff17f",
"text": "legend N2D N1D 2LPEG N2D vs. 2LPEG N1D vs. 2LPEG EFFICACY Primary analysis set, n1⁄4 275 Primary analysis set, n1⁄4 275 Primary analysis set, n1⁄4 272 Primary endpoint: Patients with successful overall bowel cleansing efficacy (HCS) [n] 253 (92.0%) 245 (89.1%) 238 (87.5%) -4.00%* [0.055] -6.91%* [0.328] Supportive secondary endpoint: Patients with successful overall bowel cleansing efficacy (BBPS) [n] 249 (90.5%) 243 (88.4%) 232 (85.3%) n.a. n.a. Primary endpoint: Excellent plus Good cleansing rate in colon ascendens (primary analysis set) [n] 87 (31.6%) 93 (33.8%) 41 (15.1%) 8.11%* [50.001] 10.32%* [50.001] Key secondary endpoint: Adenoma detection rate, colon ascendens 11.6% 11.6% 8.1% -4.80%; 12.00%** [0.106] -4.80%; 12.00%** [0.106] Key secondary endpoint: Adenoma detection rate, overall colon 26.6% 27.6% 26.8% -8.47%; 8.02%** [0.569] -7.65%; 9.11%** [0.455] Key secondary endpoint: Polyp detection rate, colon ascendens 23.3% 18.6% 16.2% -1.41%; 15.47%** [0.024] -6.12%; 10.82%** [0.268] Key secondary endpoint: Polyp detection rate, overall colon 44.0% 45.1% 44.5% -8.85%; 8.00%** [0.579] –7.78%; 9.09%** [0.478] Compliance rates (min 75% of both doses taken) [n] 235 (85.5%) 233 (84.7%) 245 (90.1%) n.a. n.a. SAFETY Safety set, n1⁄4 262 Safety set, n1⁄4 269 Safety set, n1⁄4 263 All treatment-emergent adverse events [n] 77 89 53 n.a. n.a. Patients with any related treatment-emergent adverse event [n] 30 (11.5%) 40 (14.9%) 20 (7.6%) n.a. n.a. *1⁄4 97.5% 1-sided CI; **1⁄4 95% 2-sided CI; n.a.1⁄4 not applicable. United European Gastroenterology Journal 4(5S) A219",
"title": ""
},
{
"docid": "5a58ab9fe86a4d0693faacfc238fb35c",
"text": "Mobile Cloud Computing (MCC) bridges the gap between limited capabilities of mobile devices and the increasing complexity of mobile applications, by offloading the computational workloads from local devices to the cloud. Current research supports workload offloading through appropriate application partitioning and remote method execution, but generally ignores the impact of wireless network characteristics on such offloading. Wireless data transmissions incurred by remote method execution consume a large amount of additional energy during transmission intervals when the network interface stays in the high-power state, and deferring these transmissions increases the response delay of mobile applications. In this paper, we adaptively balance the tradeoff between energy efficiency and responsiveness of mobile applications by developing application-aware wireless transmission scheduling algorithms. We take both causality and run-time dynamics of application method executions into account when deferring wireless transmissions, so as to minimize the wireless energy cost and satisfy the application delay constraint with respect to the practical system contexts. Systematic evaluations show that our scheme significantly improves the energy efficiency of workload offloading over realistic smartphone applications.",
"title": ""
},
{
"docid": "a99c8d5b74e2470b30706b57fd96868d",
"text": "Implant restorations have become a primary treatment option for the replacement of congenitally missing lateral incisors. The central incisor and canine often erupt in less than optimal positions adjacent to the edentulous lateral incisor space, and therefore preprosthetic orthodontic treatment is frequently required. Derotation of the central incisor and canine, space closure and correction of root proximities may be required to create appropriate space in which to place the implant and achieve an esthetic restoration. This paper discusses aspects of preprosthetic orthodontic diagnosis and treatment that need to be considered with implant restorations.",
"title": ""
},
{
"docid": "948e65673f679fe37027f4dc496397f8",
"text": "Online courses are growing at a tremendous rate, and although we have discovered a great deal about teaching and learning in the online environment, there is much left to learn. One variable that needs to be explored further is procrastination in online coursework. In this mixed methods study, quantitative methods were utilized to evaluate the influence of online graduate students’ attributions for academic outcomes to ability, effort, context, and luck on their tendency to procrastinate. Additionally, qualitative methods were utilized to explore students’ attributional beliefs about their tendency to procrastinate in their online coursework. Collectively, results indicated that ability, effort, context, and luck influenced procrastination in this sample of graduate students. A discussion of these findings, implications for instructors, and recommendations for future research ensues. Online course offerings and degree programs have recently increased at a rapid rate and have gained in popularity among students (Allen & Seaman, 2010, 2011). Garrett (2007) reported that half of prospective students surveyed about postsecondary programs expressed a preference for online and hybrid programs, typically because of the flexibility and convenience (Daymont, Blau, & Campbell, 2011). Advances in learning management systems such as Blackboard have facilitated the dramatic increase in asynchronous programs. Although the research literature concerning online learning has blossomed over the past decade, much is left to learn about important variables that impact student learning and achievement. The purpose of this mixed methods study was to better understand the relationship between online graduate students’ attributional beliefs and their tendency to procrastinate. The approach to this objective was twofold. First, quantitative methods were utilized to evaluate the influence of students’ attributions for academic outcomes to ability, effort, context, and luck on their tendency to procrastinate. Second, qualitative methods were utilized to explore students’ attributional beliefs about their tendency to procrastinate in their online coursework. Journal of Interactive Online Learning Rakes, Dunn, and Rakes",
"title": ""
},
{
"docid": "07457116fbecf8e5182459961b8a87d0",
"text": "Modeling temporal sequences plays a fundamental role in various modern applications and has drawn more and more attentions in the machine learning community. Among those efforts on improving the capability to represent temporal data, the Long Short-Term Memory (LSTM) has achieved great success in many areas. Although the LSTM can capture long-range dependency in the time domain, it does not explicitly model the pattern occurrences in the frequency domain that plays an important role in tracking and predicting data points over various time cycles. We propose the State-Frequency Memory (SFM), a novel recurrent architecture that allows to separate dynamic patterns across different frequency components and their impacts on modeling the temporal contexts of input sequences. By jointly decomposing memorized dynamics into statefrequency components, the SFM is able to offer a fine-grained analysis of temporal sequences by capturing the dependency of uncovered patterns in both time and frequency domains. Evaluations on several temporal modeling tasks demonstrate the SFM can yield competitive performances, in particular as compared with the state-of-the-art LSTM models.",
"title": ""
},
{
"docid": "1b147a49dad77a020b66894cf4e72874",
"text": "ivergent thinking; creativity in women; hemispheric specialization opposing right brain to left as the source of intuition, metaphor, and imagery; the contribution of altered states of consciousness to creative thinking; an organismic interpretation of the relationship of creativity to personality and intelligence; new methods of analysis of biographical material and a new emphasis on psychohistory; the relationship of thought disorder to originality; the inheritance of intellectual and personal traits important to creativity; the enhancement of creativity by training; these have been the main themes emerging in research on creativity since the last major reviews of the field (Stein 1968; Dellas & Gaier 1970; Freeman, Butcher & Christie 1971; Gilchrist 1972). Much indeed has happened in the field of creativity research since 1950, when J. P. Guilford in his parting address as president of the American Psychological Association pointed out that up to that time only 186 out of 121,000 entries in Psychological Abstracts dealt with creative imagination. By 1956, when the first national research conference on creativity was organized by C. W. Taylor at the University of Utah (under the sponsorship of the National Science Foundation), this number had doubled. By 1962, when Scientific Creativity (compiled by C. W. Taylor and F. Barron) went to press with a summary of the",
"title": ""
},
{
"docid": "e30cec311a39bad3a2eb77d79a04351c",
"text": "High increment rate of human population brings about the necessity of efficient utilization of world resources. One way of achieving this is providing the plants with the optimum amount of water at the right time in agricultural applications. In this paper, a cloud-based drip irrigation system, which determines the amount of irrigation water and performs the irrigation process automatically, is presented. Basically, water level in a Class A pan is continuously measured via a water level sensor and duration of irrigation is calculated using total amount of level decrement in a given time interval. The irrigation process is initialized by powering solenoid valves through a microcontroller board. To measure the environmental quantities such as temperature, humidity and pressure, an extra sensor is included in the system. A GSM/GPRS module enables internet connection of the system and all the sensor data as well as system status data are recorded in a cloud server. Furthermore, an accompanying Android application is developed to monitor the instantaneous status of the system. The system is tested on a strawberry field in a greenhouse. Currently, it is active for about four months and first observations imply that the system is capable of successfully perform the irrigation task.",
"title": ""
},
{
"docid": "fc9babe40365e5dc943fccf088f7a44f",
"text": "The network performance of virtual machines plays a critical role in Network Functions Virtualization (NFV), and several technologies have been developed to address hardware-level virtualization shortcomings. Recent advances in operating system level virtualization and deployment platforms such as Docker have made containers an ideal candidate for high performance application encapsulation and deployment. However, Docker and other solutions typically use lower-performing networking mechanisms. In this paper, we explore the feasibility of using technologies designed to accelerate virtual machine networking with containers, in addition to quantifying the network performance of container-based VNFs compared to the state-of-the-art virtual machine solutions. Our results show that containerized applications can provide lower latency and delay variation, and can take advantage of high performance networking technologies previously only used for hardware virtualization.",
"title": ""
},
{
"docid": "e9dc75f34b398b4e0d028f4dbbb707d1",
"text": "INTRODUCTION\nUniversity students are potentially important targets for the promotion of healthy lifestyles as this may reduce the risks of lifestyle-related disorders later in life. This cross-sectional study examined differences in eating behaviours, dietary intake, weight status, and body composition between male and female university students.\n\n\nMETHODOLOGY\nA total of 584 students (59.4% females and 40.6% males) aged 20.6 +/- 1.4 years from four Malaysian universities in the Klang Valley participated in this study. Participants completed the Eating Behaviours Questionnaire and two-day 24-hour dietary recall. Body weight, height, waist circumference and percentage of body fat were measured.\n\n\nRESULTS\nAbout 14.3% of males and 22.4% of females were underweight, while 14.0% of males and 12.3% of females were overweight and obese. A majority of the participants (73.8% males and 74.6% females) skipped at least one meal daily in the past seven days. Breakfast was the most frequently skipped meal. Both males and females frequently snacked during morning tea time. Fruits and biscuits were the most frequently consumed snack items. More than half of the participants did not meet the Malaysian Recommended Nutrient Intake (RNI) for energy, vitamin C, thiamine, riboflavin, niacin, iron (females only), and calcium. Significantly more males than females achieved the RNI levels for energy, protein and iron intakes.\n\n\nCONCLUSION\nThis study highlights the presence of unhealthy eating behaviours, inadequate nutrient intake, and a high prevalence of underweight among university students. Energy and nutrient intakes differed between the sexes. Therefore, promoting healthy eating among young adults is crucial to achieve a healthy nutritional status.",
"title": ""
},
{
"docid": "6aa1b0481559211d89a07e59c4cff6bf",
"text": "Dempster-Shafer theory of evidence is widely applied to uncertainty modelling and knowledge reasoning because of its advantages in dealing with uncertain information. But some conditions or requirements, such as exclusiveness hypothesis and completeness constraint, limit the development and application of that theory to a large extend. To overcome the shortcomings and enhance its capability of representing the uncertainty, a novel model, called D numbers, has been proposed recently. However, many key issues, for example how to implement the combination of D numbers, remain unsolved. In the paper, we have explored the combination of D Numbers from a perspective of conflict redistribution, and propose two combination rules being suitable for different situations for the fusion of two D numbers. The proposed combination rules can reduce to the classical Dempster's rule in Dempster-Shafer theory under certain conditions. Numerical examples and discussion about the proposed rules are also given in the paper.",
"title": ""
},
{
"docid": "a53fd98780baa0830813543d5e246a63",
"text": "This paper covers a sales forecasting problem on e-commerce sites. To predict product sales, we need to understand customers’ browsing behavior and identify whether it is for purchase purpose or not. For this goal, we propose a new customer model, B2P, of aggregating predictive features extracted from customers’ browsing history. We perform experiments on a real world e-commerce site and show that sales predictions by our model are consistently more accurate than those by existing state-of-the-art baselines.",
"title": ""
}
] |
scidocsrr
|
8c79503535be35d2633d85c0a0da95f1
|
Blockchains and Bitcoin: Regulatory responses to cryptocurrencies
|
[
{
"docid": "45b1cb6c9393128c9a9dcf9dbeb50778",
"text": "Bitcoin, a distributed, cryptographic, digital currency, gained a lot of media attention for being an anonymous e-cash system. But as all transactions in the network are stored publicly in the blockchain, allowing anyone to inspect and analyze them, the system does not provide real anonymity but pseudonymity. There have already been studies showing the possibility to deanonymize bitcoin users based on the transaction graph and publicly available data. Furthermore, users could be tracked by bitcoin exchanges or shops, where they have to provide personal information that can then be linked to their bitcoin addresses. Special bitcoin mixing services claim to obfuscate the origin of transactions and thereby increase the anonymity of its users. In this paper we evaluate three of these services – Bitcoin Fog, BitLaundry, and the Send Shared functionality of Blockchain.info – by analyzing the transaction graph. While Bitcoin Fog and Blockchain.info successfully mix our transaction, we are able to find a direct relation between the input and output transactions in the graph of BitLaundry.",
"title": ""
}
] |
[
{
"docid": "8cd73397c9a79646ac1b2acac44dd8a7",
"text": "Liquid micro-jet array impingement cooling of a power conversion module with 12 power switching devices (six insulated gate bipolar transistors and six diodes) is investigated. The 1200-V/150-A module converts dc input power to variable frequency, variable voltage three-phase ac output to drive a 50HP three-phase induction motor. The silicon devices are attached to a packaging layer [direct bonded copper (DBC)], which in turn is soldered to a metal base plate. DI water micro-jet array impinges on the base plate of the module targeted at the footprint area of the devices. Although the high heat flux cooling capability of liquid impingement is a well-established finding, the impact of its practical implementation in power systems has never been addressed. This paper presents the first one-to-one comparison of liquid micro-jet array impingement cooling (JAIC) with the traditional methods, such as air-cooling over finned heat sink or liquid flow in multi-pass cold plate. Results show that compared to the conventional cooling methods, JAIC can significantly enhance the module output power. If the output power is maintained constant, the device temperature can be reduced drastically by JAIC. Furthermore, jet impingement provides uniform cooling for multiple devices placed over a large area, thereby reducing non-uniformity of temperature among the devices. The reduction in device temperature, both its absolute value and the non-uniformity, implies multi-fold increase in module reliability. The results thus illustrate the importance of efficient thermal management technique for compact and reliable power conversion application",
"title": ""
},
{
"docid": "44bd9d0b66cb8d4f2c4590b4cb724765",
"text": "AIM\nThis paper is a description of inductive and deductive content analysis.\n\n\nBACKGROUND\nContent analysis is a method that may be used with either qualitative or quantitative data and in an inductive or deductive way. Qualitative content analysis is commonly used in nursing studies but little has been published on the analysis process and many research books generally only provide a short description of this method.\n\n\nDISCUSSION\nWhen using content analysis, the aim was to build a model to describe the phenomenon in a conceptual form. Both inductive and deductive analysis processes are represented as three main phases: preparation, organizing and reporting. The preparation phase is similar in both approaches. The concepts are derived from the data in inductive content analysis. Deductive content analysis is used when the structure of analysis is operationalized on the basis of previous knowledge.\n\n\nCONCLUSION\nInductive content analysis is used in cases where there are no previous studies dealing with the phenomenon or when it is fragmented. A deductive approach is useful if the general aim was to test a previous theory in a different situation or to compare categories at different time periods.",
"title": ""
},
{
"docid": "af2a1083436450b9147eb7b51be5c761",
"text": "Over the past century, various value models have been proposed. To determine which value model best predicts prosocial behavior, mental health, and pro-environmental behavior, we subjected seven value models to a hierarchical regression analysis. A sample of University students (N = 271) completed the Portrait Value Questionnaire (Schwartz et al., 2012), the Basic Value Survey (Gouveia et al., 2008), and the Social Value Orientation scale (Van Lange et al., 1997). Additionally, they completed the Values Survey Module (Hofstede and Minkov, 2013), Inglehart's (1977) materialism-postmaterialism items, the Study of Values, fourth edition (Allport et al., 1960; Kopelman et al., 2003), and the Rokeach (1973) Value Survey. However, because the reliability of the latter measures was low, only the PVQ-RR, the BVS, and the SVO where entered into our analysis. Our results provide empirical evidence that the PVQ-RR is the strongest predictor of all three outcome variables, explaining variance above and beyond the other two instruments in almost all cases. The BVS significantly predicted prosocial and pro-environmental behavior, while the SVO only explained variance in pro-environmental behavior.",
"title": ""
},
{
"docid": "c1bfef951e9775f6ffc949c5110e1bd1",
"text": "In the interest of more systematically documenting the early signs of autism, and of testing specific hypotheses regarding their underlying neurodevelopmental substrates, we have initiated a longitudinal study of high-risk infants, all of whom have an older sibling diagnosed with an autistic spectrum disorder. Our sample currently includes 150 infant siblings, including 65 who have been followed to age 24 months, who are the focus of this paper. We have also followed a comparison group of low-risk infants. Our measures include a novel observational scale (the first, to our knowledge, that is designed to assess autism-specific behavior in infants), a computerized visual orienting task, and standardized measures of temperament, cognitive and language development. Our preliminary results indicate that by 12 months of age, siblings who are later diagnosed with autism may be distinguished from other siblings and low-risk controls on the basis of: (1) several specific behavioral markers, including atypicalities in eye contact, visual tracking, disengagement of visual attention, orienting to name, imitation, social smiling, reactivity, social interest and affect, and sensory-oriented behaviors; (2) prolonged latency to disengage visual attention; (3) a characteristic pattern of early temperament, with marked passivity and decreased activity level at 6 months, followed by extreme distress reactions, a tendency to fixate on particular objects in the environment, and decreased expression of positive affect by 12 months; and (4) delayed expressive and receptive language. We discuss these findings in the context of various neural networks thought to underlie neurodevelopmental abnormalities in autism, including poor visual orienting. Over time, as we are able to prospectively study larger numbers and to examine interrelationships among both early-developing behaviors and biological indices of interest, we hope this work will advance current understanding of the neurodevelopmental origins of autism.",
"title": ""
},
{
"docid": "f28d48c838af52caca200e69ebe4cc73",
"text": "This paper shows a new class-<formula formulatype=\"inline\"><tex Notation=\"TeX\">$E$</tex></formula> amplifier topology with the objective to increase the nominal class-<formula formulatype=\"inline\"><tex Notation=\"TeX\">$E$</tex></formula> output power for a given voltage and current stress on the power transistor. To obtain that result, a parallel LC resonator is added to the load network, tuned to the second harmonic of the switching frequency. A class-<formula formulatype=\"inline\"><tex Notation=\"TeX\">$E$</tex></formula> power amplifier is obtained whose transistor-voltage waveform peak value is 81% of the peak value of the voltage of a nominal class- <formula formulatype=\"inline\"><tex Notation=\"TeX\">$E$</tex></formula> amplifier using the same dc supply voltage. In this amplifier, the peak voltage across the transistor is 3.0 times the dc supply voltage, instead of the 3.6 times associated with nominal class-<formula formulatype=\"inline\"><tex Notation=\"TeX\">$E$</tex></formula> amplifiers. A normalized design is presented, and the behavior of the circuit is analyzed with simulation showing that the ratio of output power versus transistor peak voltage times peak current is 20.4% better than the nominal class <formula formulatype=\"inline\"><tex Notation=\"TeX\">$E$</tex></formula>. The proposed converter and normalized design approach are verified by simulations and measurements done on an experimental prototype.",
"title": ""
},
{
"docid": "d698ce3df2f1216b7b78237dcecb0df1",
"text": "A high-efficiency CMOS rectifier circuit for UHF RFIDs was developed. The rectifier has a cross-coupled bridge configuration and is driven by a differential RF input. A differential-drive active gate bias mechanism simultaneously enables both low ON-resistance and small reverse leakage of diode-connected MOS transistors, resulting in large power conversion efficiency (PCE), especially under small RF input power conditions. A test circuit of the proposed differential-drive rectifier was fabricated with 0.18 mu m CMOS technology, and the measured performance was compared with those of other types of rectifiers. Dependence of the PCE on the input RF signal frequency, output loading conditions and transistor sizing was also evaluated. At the single-stage configuration, 67.5% of PCE was achieved under conditions of 953 MHz, - 12.5 dBm RF input and 10 KOmega output load. This is twice as large as that of the state-of-the-art rectifier circuit. The peak PCE increases with a decrease in operation frequency and with an increase in output load resistance. In addition, experimental results show the existence of an optimum transistor size in accordance with the output loading conditions. The multi-stage configuration for larger output DC voltage is also presented.",
"title": ""
},
{
"docid": "491a2805f928d081261b5a140c9aa952",
"text": "The proliferation of IoT devices that can be more easily compromised than desktop computers has led to an increase in IoT-based botnet attacks. To mitigate this threat, there is a need for new methods that detect attacks launched from compromised IoT devices and that differentiate between hours- and milliseconds-long IoT-based attacks. In this article, we propose a novel network-based anomaly detection method for the IoT called N-BaIoT that extracts behavior snapshots of the network and uses deep autoencoders to detect anomalous network traffic from compromised IoT devices. To evaluate our method, we infected nine commercial IoT devices in our lab with two widely known IoT-based botnets, Mirai and BASHLITE. The evaluation results demonstrated our proposed methods ability to accurately and instantly detect the attacks as they were being launched from the compromised IoT devices that were part of a botnet.",
"title": ""
},
{
"docid": "3bf37b20679ca6abd022571e3356e95d",
"text": "OBJECTIVE\nOur goal is to create an ontology that will allow data integration and reasoning with subject data to classify subjects, and based on this classification, to infer new knowledge on Autism Spectrum Disorder (ASD) and related neurodevelopmental disorders (NDD). We take a first step toward this goal by extending an existing autism ontology to allow automatic inference of ASD phenotypes and Diagnostic & Statistical Manual of Mental Disorders (DSM) criteria based on subjects' Autism Diagnostic Interview-Revised (ADI-R) assessment data.\n\n\nMATERIALS AND METHODS\nKnowledge regarding diagnostic instruments, ASD phenotypes and risk factors was added to augment an existing autism ontology via Ontology Web Language class definitions and semantic web rules. We developed a custom Protégé plugin for enumerating combinatorial OWL axioms to support the many-to-many relations of ADI-R items to diagnostic categories in the DSM. We utilized a reasoner to infer whether 2642 subjects, whose data was obtained from the Simons Foundation Autism Research Initiative, meet DSM-IV-TR (DSM-IV) and DSM-5 diagnostic criteria based on their ADI-R data.\n\n\nRESULTS\nWe extended the ontology by adding 443 classes and 632 rules that represent phenotypes, along with their synonyms, environmental risk factors, and frequency of comorbidities. Applying the rules on the data set showed that the method produced accurate results: the true positive and true negative rates for inferring autistic disorder diagnosis according to DSM-IV criteria were 1 and 0.065, respectively; the true positive rate for inferring ASD based on DSM-5 criteria was 0.94.\n\n\nDISCUSSION\nThe ontology allows automatic inference of subjects' disease phenotypes and diagnosis with high accuracy.\n\n\nCONCLUSION\nThe ontology may benefit future studies by serving as a knowledge base for ASD. In addition, by adding knowledge of related NDDs, commonalities and differences in manifestations and risk factors could be automatically inferred, contributing to the understanding of ASD pathophysiology.",
"title": ""
},
{
"docid": "768240033185f6464d2274181370843a",
"text": "Most of today's commercial companies heavily rely on social media and community management tools to interact with their clients and analyze their online behaviour. Nonetheless, these tools still lack evolved data mining and visualization features to tailor the analysis in order to support useful marketing decisions. We present an original methodology that aims at formalizing the marketing need of the company and develop a tool that can support it. The methodology is derived from the Cross-Industry Standard Process for Data Mining (CRISP-DM) and includes additional steps dedicated to the design and development of visualizations of mined data. We followed the methodology in two use cases with Swiss companies. First, we developed a prototype that aims at understanding the needs of tourists based on Flickr and Instagram data. In that use case, we extend the existing literature by enriching hashtags analysis methods with a semantic network based on Linked Data. Second, we analyzed internal customer data of an online discount retailer to help them define guerilla marketing measures. We report on the challenges of integrating Facebook data in the process. Informal feedback from domain experts confirms the strong potential of such advanced analytic features based on social data to inform marketing decisions.",
"title": ""
},
{
"docid": "6a03d3b4159fe35e8772d5e3e8d656c1",
"text": "In this paper, we propose a novel 3D feature point detection algorithm using Multiresolution Surface Variation (MSV). The proposed algorithm is used to extract 3D features from a cluttered, unstructured environment for use in realtime Simultaneous Localisation and Mapping (SLAM) algorithms running on a mobile robot. The salient feature of the proposed method is that, it can not only handle dense, uniform 3D point clouds (such as those obtained from Kinect or rotating 2D Lidar), but also (perhaps more importantly) handle sparse, non-uniform 3D point clouds (obtained from sensors such as 3D Lidar) and produce robust, repeatable key points that are specifically suitable for SLAM. The efficacy of the proposed method is evaluated using a dataset collected from a mobile robot with a 3D Velodyne Lidar (VLP-16) mounted on top.",
"title": ""
},
{
"docid": "a389222b13819ccd164a6a2f80e2e912",
"text": "Graphene, in its ideal form, is a two-dimensional (2D) material consisting of a single layer of carbon atoms arranged in a hexagonal lattice. The richness in morphological, physical, mechanical, and optical properties of ideal graphene has stimulated enormous scientific and industrial interest, since its first exfoliation in 2004. In turn, the production of graphene in a reliable, controllable, and scalable manner has become significantly important to bring us closer to practical applications of graphene. To this end, chemical vapor deposition (CVD) offers tantalizing opportunities for the synthesis of large-area, uniform, and high-quality graphene films. However, quite different from the ideal 2D structure of graphene, in reality, the currently available CVD-grown graphene films are still suffering from intrinsic defective grain boundaries, surface contaminations, and wrinkles, together with low growth rate and the requirement of inevitable transfer. Clearly, a gap still exits between the reality of CVD-derived graphene, especially in industrial production, and ideal graphene with outstanding properties. This Review will emphasize the recent advances and strategies in CVD production of graphene for settling these issues to bridge the giant gap. We begin with brief background information about the synthesis of nanoscale carbon allotropes, followed by the discussion of fundamental growth mechanism and kinetics of CVD growth of graphene. We then discuss the strategies for perfecting the quality of CVD-derived graphene with regard to domain size, cleanness, flatness, growth rate, scalability, and direct growth of graphene on functional substrate. Finally, a perspective on future development in the research relevant to scalable growth of high-quality graphene is presented.",
"title": ""
},
{
"docid": "435fcf5dab986fd87db6fc24fef3cc1a",
"text": "Web applications make life more convenient through on the activities. Many web applications have several kind of user input (e.g. personal information, a user's comment of commercial goods, etc.) for the activities. However, there are various vulnerabilities in input functions of web applications. It is possible to try malicious actions using free accessibility of the web applications. The attacks by exploitation of these input vulnerabilities enable to be performed by injecting malicious web code; it enables one to perform various illegal actions, such as SQL Injection Attacks (SQLIAs) and Cross Site Scripting (XSS). These actions come down to theft, replacing personal information, or phishing. Many solutions have devised for the malicious web code, such as AMNESIA [1] and SQL Check [2], etc. The methods use parser for the code, and limited to fixed and very small patterns, and are difficult to adapt to variations. Machine learning method can give leverage to cover far broader range of malicious web code and is easy to adapt to variations and changes. Therefore, we suggests adaptable classification of malicious web code by machine learning approach such as Support Vector Machine (SVM)[3], Naïve-Bayes[4], and k-Nearest Neighbor Algorithm[5] for detecting the exploitation user inputs.",
"title": ""
},
{
"docid": "6de91d6b71ff97c5564dd3e3a42092a0",
"text": "Characteristics of physical movements are indicative of infants' neuro-motor development and brain dysfunction. For instance, infant seizure, a clinical signal of brain dysfunction, could be identified and predicted by monitoring its physical movements. With the advance of wearable sensor technology, including the miniaturization of sensors, and the increasing broad application of micro- and nanotechnology, and smart fabrics in wearable sensor systems, it is now possible to collect, store, and process multimodal signal data of infant movements in a more efficient, more comfortable, and non-intrusive way. This review aims to depict the state-of-the-art of wearable sensor systems for infant movement monitoring. We also discuss its clinical significance and the aspect of system design.",
"title": ""
},
{
"docid": "2185097978553d5030252ffa9240fb3c",
"text": "The concept of celebrity culture remains remarkably undertheorized in the literature, and it is precisely this gap that this article aims to begin filling in. Starting with media culture definitions, celebrity culture is conceptualized as collections of sense-making practices whose main resources of meaning are celebrity. Consequently, celebrity cultures are necessarily plural. This approach enables us to focus on the spatial differentiation between (sub)national celebrity cultures, for which the Flemish case is taken as a central example. We gain a better understanding of this differentiation by adopting a translocal frame on culture and by focusing on the construction of celebrity cultures through the ‘us and them’ binary and communities. Finally, it is also suggested that what is termed cultural working memory improves our understanding of the remembering and forgetting of actual celebrities, as opposed to more historical figures captured by concepts such as cultural memory.",
"title": ""
},
{
"docid": "c26caff761092bc5b6af9f1c66986715",
"text": "The mechanisms used by DNN accelerators to leverage datareuse and perform data staging are known as dataflow, and they directly impact the performance and energy efficiency of DNN accelerator designs. Co-optimizing the accelerator microarchitecture and its internal dataflow is crucial for accelerator designers, but there is a severe lack of tools and methodologies to help them explore the co-optimization design space. In this work, we first introduce a set of datacentric directives to concisely specify DNN dataflows in a compiler-friendly form. Next, we present an analytical model, MAESTRO, that estimates various cost-benefit tradeoffs of a dataflow including execution time and energy efficiency for a DNN model and hardware configuration. Finally, we demonstrate the use of MAESTRO to drive a hardware design space exploration (DSE) engine. The DSE engine searched 480M designs and identified 2.5M valid designs at an average rate of 0.17M designs per second, and also identified throughputand energy-optimized designs among this set.",
"title": ""
},
{
"docid": "938f49e103d0153c82819becf96f126c",
"text": "Humans interpret texts with respect to some background information, or world knowledge, and we would like to develop automatic reading comprehension systems that can do the same. In this paper, we introduce a task and several models to drive progress towards this goal. In particular, we propose the task of rare entity prediction: given a web document with several entities removed, models are tasked with predicting the correct missing entities conditioned on the document context and the lexical resources. This task is challenging due to the diversity of language styles and the extremely large number of rare entities. We propose two recurrent neural network architectures which make use of external knowledge in the form of entity descriptions. Our experiments show that our hierarchical LSTM model performs significantly better at the rare entity prediction task than those that do not make use of external resources.",
"title": ""
},
{
"docid": "7e800094f52080194d94bdedf1d92b9c",
"text": "IMPORTANCE\nHealth care-associated infections (HAIs) account for a large proportion of the harms caused by health care and are associated with high costs. Better evaluation of the costs of these infections could help providers and payers to justify investing in prevention.\n\n\nOBJECTIVE\nTo estimate costs associated with the most significant and targetable HAIs.\n\n\nDATA SOURCES\nFor estimation of attributable costs, we conducted a systematic review of the literature using PubMed for the years 1986 through April 2013. For HAI incidence estimates, we used the National Healthcare Safety Network of the Centers for Disease Control and Prevention (CDC).\n\n\nSTUDY SELECTION\nStudies performed outside the United States were excluded. Inclusion criteria included a robust method of comparison using a matched control group or an appropriate regression strategy, generalizable populations typical of inpatient wards and critical care units, methodologic consistency with CDC definitions, and soundness of handling economic outcomes.\n\n\nDATA EXTRACTION AND SYNTHESIS\nThree review cycles were completed, with the final iteration carried out from July 2011 to April 2013. Selected publications underwent a secondary review by the research team.\n\n\nMAIN OUTCOMES AND MEASURES\nCosts, inflated to 2012 US dollars.\n\n\nRESULTS\nUsing Monte Carlo simulation, we generated point estimates and 95% CIs for attributable costs and length of hospital stay. On a per-case basis, central line-associated bloodstream infections were found to be the most costly HAIs at $45,814 (95% CI, $30,919-$65,245), followed by ventilator-associated pneumonia at $40,144 (95% CI, $36,286-$44,220), surgical site infections at $20,785 (95% CI, $18,902-$22,667), Clostridium difficile infection at $11,285 (95% CI, $9118-$13,574), and catheter-associated urinary tract infections at $896 (95% CI, $603-$1189). The total annual costs for the 5 major infections were $9.8 billion (95% CI, $8.3-$11.5 billion), with surgical site infections contributing the most to overall costs (33.7% of the total), followed by ventilator-associated pneumonia (31.6%), central line-associated bloodstream infections (18.9%), C difficile infections (15.4%), and catheter-associated urinary tract infections (<1%).\n\n\nCONCLUSIONS AND RELEVANCE\nWhile quality improvement initiatives have decreased HAI incidence and costs, much more remains to be done. As hospitals realize savings from prevention of these complications under payment reforms, they may be more likely to invest in such strategies.",
"title": ""
},
{
"docid": "44618874fe7725890fbfe9fecde65853",
"text": "Software development teams in large scale offshore enterprise development programmes are often under intense pressure to deliver high quality software within challenging time contraints. Project failures can attract adverse publicity and damage corporate reputations. Agile methods have been advocated to reduce project risks, improving both productivity and product quality. This article uses practitioner descriptions of agile method tailoring to explore large scale offshore enterprise development programmes with a focus on product owner role tailoring, where the product owner identifies and prioritises customer requirements. In globalised projects, the product owner must reconcile competing business interests, whilst generating and then prioritising large numbers of requirements for numerous development teams. The study comprises eight international companies, based in London, Bangalore and Delhi. Interviews with 46 practitioners were conducted between February 2010 and May 2012. Grounded theory was used to identify that product owners form into teams. The main contribution of this research is to describe the nine product owner team functions identified: groom, prioritiser, release master, technical architect, governor, communicator, traveller, intermediary and risk assessor. These product owner functions arbitrate between conflicting customer requirements, approve release schedules, disseminate architectural design decisions, provide technical governance and propogate information across teams. The functions identified in this research are mapped to a scrum of scrums process, and a taxonomy of the functions shows how focusing on either decision-making or information dissemination in each helps to tailor agile methods to large scale offshore enterprise development programmes.",
"title": ""
}
] |
scidocsrr
|
88f9fd4b3594c0762fca596f731f8676
|
I-vector based speaker gender recognition
|
[
{
"docid": "5f50c2872e381da8ef170b5d4864ec99",
"text": "Gender is an important demographic attribute of people. This paper provides a survey of human gender recognition in computer vision. A review of approaches exploiting information from face and whole body (either from a still image or gait sequence) is presented. We highlight the challenges faced and survey the representative methods of these approaches. Based on the results, good performance have been achieved for datasets captured under controlled environments, but there is still much work that can be done to improve the robustness of gender recognition under real-life environments.",
"title": ""
}
] |
[
{
"docid": "2330910facc12129765143e0a4a90f43",
"text": "Sensor networks for healthcare IoT (Internet of Things) have advanced rapidly in recent years, which has made it possible to integrate real-time health data by connecting bodies and sensors. Body sensors require accurate time synchronization in order to collaboratively monitor health conditions and medication usage. Self-recovery and high accuracy are crucial for time synchronization protocols in sensor networks for healthcare IoT. Because body sensors are generally deployed with unstable energy sources, nodes can fail because of inadequate power supply. This influences the efficiency and robustness of time synchronization protocols. Tree-based protocols require stable root nodes as time references. The time synchronization process cannot be completed if a root node fails. To address this problem, we present a Self-Recoverable Time Synchronization (SRTS) scheme for healthcare IoT sensor networks. A recovery timer is set up for candidate nodes, which are dynamically elected. The candidate node whose timer expires first takes charge of selecting a new root node. Meanwhile, SRTS combines the two-points least-squares method and the MAC layer timestamp to significantly improve the accuracy of PBS. Furthermore, SRP and RRP models are used in SRTS. Thus, our approach provides higher accuracy than PBS, while consuming a similar amount of energy. We use NS2 network tools to evaluate our approach. The simulation results show that SRTS exhibits better self-recovery than time synchronization protocols STETS and GPA under different network scales. Moreover, accuracy and clock drift compensation are better than those of PBS and TPSN.",
"title": ""
},
{
"docid": "ec3c61d2e01888ab6675332f56c5621e",
"text": "This paper explores the existing energy harvesting technologies, their stage of maturity and their feasibility for powering sensor nodes. It contains a study of the energy requirements of the sensor nodes that are a part of the commercial domain. Further, it investigates methods and concepts for harvesting the energy from electric and magnetic fields present near utility assets through laboratory experimentation. The flux concentrator based approach that scavenges the magnetic field was considered to be the most promising solution providing nearly 250mW of power sufficient to power a sensor node.",
"title": ""
},
{
"docid": "81fb2598afd330a6d075ce2d0deeb026",
"text": "This paper gives an overview of the development of a novel biped walking machine. The robot is designed as an experimental system for studying biped locomotion based on torque controlled joints. As an underlying drive technology, the torque controlled joint units of the DLR-KUKA-Lightweight-Robot are employed. The relevant design choices for using this technology in a biped robot with integrated joint torque sensors are highlighted and some first experimental results using a conventional ZMP based control scheme are discussed.",
"title": ""
},
{
"docid": "2cb1c713b8e75e7f2e38be90c1b5a9e6",
"text": "Frequent action video game players often outperform non-gamers on measures of perception and cognition, and some studies find that video game practice enhances those abilities. The possibility that video game training transfers broadly to other aspects of cognition is exciting because training on one task rarely improves performance on others. At first glance, the cumulative evidence suggests a strong relationship between gaming experience and other cognitive abilities, but methodological shortcomings call that conclusion into question. We discuss these pitfalls, identify how existing studies succeed or fail in overcoming them, and provide guidelines for more definitive tests of the effects of gaming on cognition.",
"title": ""
},
{
"docid": "7dde8fad4448a27a38b6dd5f6d41617f",
"text": "We address the problem of making general video game playing agents play in a human-like manner. To this end, we introduce several modifications of the UCT formula used in Monte Carlo Tree Search that biases action selection towards repeating the current action, making pauses, and limiting rapid switching between actions. Playtraces of human players are used to model their propensity for repeated actions; this model is used for biasing the UCT formula. Experiments show that our modified MCTS agent, called BoT, plays quantitatively similar to human players as measured by the distribution of repeated actions. A survey of human observers reveals that the agent exhibits human-like playing style in some games but not others.",
"title": ""
},
{
"docid": "b01c62a4593254df75c1e390487982fa",
"text": "This paper addresses the question \"why and how is it that we say the same thing differently to different people, or even to the same person in different circumstances?\" We vary the content and form of our text in order to convey more information than is contained in the literal meanings of our words. This information expresses the speaker's interpersonal goals toward the hearer and, in general, his or her perception of the pragmatic aspects of the conversation. This paper discusses two insights that arise when one studies this question: the existence of a level of organization that mediates between communicative goals and generator decisions, and the interleaved planningrealization regime and associated monitoring required for generation. To illustrate these ideas, a computer program is described which contains plans and strategies to produce stylistically appropriate texts from a single representation under various settings that model pragmatic circumstances.",
"title": ""
},
{
"docid": "52da42b320e23e069519c228f1bdd8b5",
"text": "Over the last few years, C-RAN is proposed as a transformative architecture for 5G cellular networks that brings the flexibility and agility of cloud computing to wireless communications. At the same time, content caching in wireless networks has become an essential solution to lower the content- access latency and backhaul traffic loading, leading to user QoE improvement and network cost reduction. In this article, a novel cooperative hierarchical caching (CHC) framework in C-RAN is introduced where contents are jointly cached at the BBU and at the RRHs. Unlike in traditional approaches, the cache at the BBU, cloud cache, presents a new layer in the cache hierarchy, bridging the latency/capacity gap between the traditional edge-based and core-based caching schemes. Trace-driven simulations reveal that CHC yields up to 51 percent improvement in cache hit ratio, 11 percent decrease in average content access latency, and 18 percent reduction in backhaul traffic load compared to the edge-only caching scheme with the same total cache capacity. Before closing the article, we discuss the key challenges and promising opportunities for deploying content caching in C-RAN in order to make it an enabler technology in 5G ultra-dense systems.",
"title": ""
},
{
"docid": "5883597258387e83c4c5b9c1e896c818",
"text": "Techniques making use of Deep Neural Networks (DNN) have recently been seen to bring large improvements in textindependent speaker recognition. In this paper, we verify that the DNN based methods result in excellent performances in the context of text-dependent speaker verification as well. We build our system on the previously introduced HMM based ivector approach, where phone models are used to obtain frame level alignment in order to collect sufficient statistics for ivector extraction. For comparison, we experiment with an alternative alignment obtained directly from the output of DNN trained for phone classification. We also experiment with DNN based bottleneck features and their combinations with standard cepstral features. Although the i-vector approach is generally considered not suitable for text-dependent speaker verification, we show that our HMM based approach combined with bottleneck features provides truly state-of-the-art performance on RSR2015 data.",
"title": ""
},
{
"docid": "74ef26e332b12329d8d83f80169de5c0",
"text": "It has been claimed that the discovery of association rules is well-suited for applications of market basket analysis to reveal regularities in the purchase behaviour of customers. Moreover, recent work indicates that the discovery of interesting rules can in fact only be addressed within a microeconomic framework. This study integrates the discovery of frequent itemsets with a (microeconomic) model for product selection (PROFSET). The model enables the integration of both quantitative and qualitative (domain knowledge) criteria. Sales transaction data from a fullyautomated convenience store is used to demonstrate the effectiveness of the model against a heuristic for product selection based on product-specific profitability. We show that with the use of frequent itemsets we are able to identify the cross-sales potential of product items and use this information for better product selection. Furthermore, we demonstrate that the impact of product assortment decisions on overall assortment profitability can easily be evaluated by means of sensitivity analysis.",
"title": ""
},
{
"docid": "f329009bbee172c495a441a0ab911e28",
"text": "This paper provides an application of game theoretic techniques to the analysis of a class of multiparty cryptographic protocols for secret bit exchange.",
"title": ""
},
{
"docid": "bd3776d1dc36d6a91ea73d3c12ca326c",
"text": "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0% and 82.1% without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at https: //github.com/tensorflow/models/tree/master/research/deeplab.",
"title": ""
},
{
"docid": "87e61b20768a9f8397031798295874f8",
"text": "Arterial pressure is a cyclic phenomenon characterized by a pressure wave oscillating around the mean blood pressure, from diastolic to systolic blood pressure, defining the pulse pressure. Aortic input impedance is a measure of the opposition of the circulation to an oscillatory flow input (stroke volume generated by heart work). Aortic input impedance integrates factors opposing LV ejection, such as peripheral resistance, viscoelastic properties and dimensions of the large central arteries, and the intensity and timing of the pressure wave reflections, associated with the opposition to LV ejection influenced by inertial forces. The two most frequently used methods of arterial stiffness are measurement of PWV and central (aortic or common carotid artery) pulse wave analysis, recorded directly at the carotid artery or indirectly in the ascending aorta from radial artery pressure curve. The arterial system is heterogenous and characterized by the existence of a stiffness gradient with progressive stiffness increase (PWV) from ascending aorta and large elastic proximal arteries to the peripheral muscular conduit arteries. Analysis of aortic or carotid pressure waveform and amplitude concerns the effect of reflected waves on pressure shape and amplitude, estimated in absolute terms, augmented pressure in millimetre of mercury, or, in relative terms, 'augmentation index' (Aix in percentage of pulse pressure). Finally, if the aortic PWV has the highest predictive value for prognosis, the aortic or central artery pressure waveform should be recorded and analysed in parallel with the measure of PWV to allow a deeper analysis of arterial haemodynamics.",
"title": ""
},
{
"docid": "061ba21fc14986b420cc7cbe7ab6516a",
"text": "Shilajit is a natural substance found mainly in the Himalayas, formed for centuries by the gradual decomposition of certain plants by the action of microorganisms. It is a potent and very safe dietary supplement, restoring the energetic balance and potentially able to prevent several diseases. Recent investigations point to an interesting medical application toward the control of cognitive disorders associated with aging, and cognitive stimulation. Thus, fulvic acid, the main active principle, blocks tau self-aggregation, opening an avenue toward the study of Alzheimer's therapy. In essence, this is a nutraceutical product of demonstrated benefits for human health. Considering the expected impact of shilajit usage in the medical field, especially in the neurological sciences, more investigations at the basic biological level as well as clinical trials are necessary, in order to understand how organic molecules of shilajit and particularly fulvic acid, one of the active principles, and oligoelements act at both the molecular and cellular levels and in the whole organism.",
"title": ""
},
{
"docid": "2c6a8f3a60a1d2854d5abf711c2a98cb",
"text": "A SIMD processor that contains a 16-way partitioned data-path is designed for efficient multimedia data processing. In order to automatically align data needed for SIMD processing, the architecture adopts a vector memory unit that consists of 17-bank memory blocks. The vector memory unit also has address generation and rearrangement units for eliminating bank conflicts. The MicroBlaze FPGA based RISC processor is used for program control and scalar data processing. The architecture has been implemented on a Xilinx FPGA, and the implementation performance for several multimedia kernels is obtained",
"title": ""
},
{
"docid": "35625af7c8f2b12c8425c2398e025ef8",
"text": "Child stunting in India exceeds that in poorer regions like sub-Saharan Africa. Data on over 168,000 children show that, relative to Africa, India's height disadvantage increases sharply with birth order. We posit that India’s steep birth order gradient is due to favoritism toward eldest sons, which affects parents' fertility decisions and resource allocation across children. We show that, within India, the gradient is steeper for high-son-preference regions and religions. The gradient also varies with sibling gender as predicted. A back-of-the-envelope calculation suggests that India's steeper birth order gradient can explain over one-half of the India-Africa gap in average child height.",
"title": ""
},
{
"docid": "6807545797869605f90721ee5777b5a0",
"text": "This paper examines location-based services (LBS) from a broad perspective involving deWnitions, characteristics, and application prospects. We present an overview of LBS modeling regarding users, locations, contexts and data. The LBS modeling endeavors are cross-examined with a research agenda of geographic information science. Some core research themes are brieXy speculated. © 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2711d38ab9d5bcc8cd4a123630344fbf",
"text": "Using CMOS-MEMS micromachining techniques we have constructed a prototype earphone that is audible from 1 to 15 kHz. The fabrication of the acoustic membrane consists of only two steps in addition to the prior post-CMOS micromachining steps developed at CMU. The ability to build a membrane directly on a standard CMOS chip, integrating mechanical structures with signal processing electronics will enable a variety of applications including economical earphones, microphones, hearing aids, high-fidelity earphones, cellular phones and noise cancellation. The large compliance of the CMOS-MEMS membrane also promises application as a sensitive microphone and pressure sensor.",
"title": ""
},
{
"docid": "1de3a70567e68eebfebe2bc797f58e08",
"text": "This article provides a comprehensive description of FastSLAM, a new family of algorithms for the simultaneous localization and mapping problem, which specifically address hard data association problems. The algorithm uses a particle filter for sampling robot paths, and extended Kalman filters for representing maps acquired by the vehicle. This article presents two variants of this algorithm, the original algorithm along with a more recent variant that provides improved performance in certain operating regimes. In addition to a mathematical derivation of the new algorithm, we present a proof of convergence and experimental results on its performance on real-world data.",
"title": ""
}
] |
scidocsrr
|
589b470a56d2a6faf7075e0c250f161c
|
Robust Text Classification under Confounding Shift
|
[
{
"docid": "fdc4efad14d79f1855dddddb6a30ace6",
"text": "We analyzed 700 million words, phrases, and topic instances collected from the Facebook messages of 75,000 volunteers, who also took standard personality tests, and found striking variations in language with personality, gender, and age. In our open-vocabulary technique, the data itself drives a comprehensive exploration of language that distinguishes people, finding connections that are not captured with traditional closed-vocabulary word-category analyses. Our analyses shed new light on psychosocial processes yielding results that are face valid (e.g., subjects living in high elevations talk about the mountains), tie in with other research (e.g., neurotic people disproportionately use the phrase 'sick of' and the word 'depressed'), suggest new hypotheses (e.g., an active life implies emotional stability), and give detailed insights (males use the possessive 'my' when mentioning their 'wife' or 'girlfriend' more often than females use 'my' with 'husband' or 'boyfriend'). To date, this represents the largest study, by an order of magnitude, of language and personality.",
"title": ""
},
{
"docid": "dadd12e17ce1772f48eaae29453bc610",
"text": "Publications Learning Word Vectors for Sentiment Analysis. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. The 49 th Annual Meeting of the Association for Computational Linguistics (ACL 2011). Spectral Chinese Restaurant Processes: Nonparametric Clustering Based on Similarities. Richard Socher, Andrew Maas, and Christopher D. Manning. The 15 th International Conference on Artificial Intelligence and Statistics (AISTATS 2010). A Probabilistic Model for Semantic Word Vectors. Andrew L. Maas and Andrew Y. Ng. NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning. One-Shot Learning with Bayesian Networks. Andrew L. Maas and Charles Kemp. Proceedings of the 31 st",
"title": ""
}
] |
[
{
"docid": "7be1f8be2c74c438b1ed1761e157d3a3",
"text": "The feeding behavior and digestive physiology of the sea cucumber, Apostichopus japonicus are not well understood. A better understanding may provide useful information for the development of the aquaculture of this species. In this article the tentacle locomotion, feeding rhythms, ingestion rate (IR), feces production rate (FPR) and digestive enzyme activities were studied in three size groups (small, medium and large) of sea cucumber under a 12h light/12h dark cycle. Frame-by-frame video analysis revealed that all size groups had similar feeding strategies using a grasping motion to pick up sediment particles. The tentacle insertion rates of the large size group were significantly faster than those of the small and medium-sized groups (P<0.05). Feeding activities investigated by charge coupled device cameras with infrared systems indicated that all size groups of sea cucumber were nocturnal and their feeding peaks occurred at 02:00-04:00. The medium and large-sized groups also had a second feeding peak during the day. Both IR and FPR in all groups were significantly higher at night than those during the daytime (P<0.05). Additionally, the peak activities of digestive enzymes were 2-4h earlier than the peak of feeding. Taken together, these results demonstrated that the light/dark cycle was a powerful environment factor that influenced biological rhythms of A. japonicus, which had the ability to optimize the digestive processes for a forthcoming ingestion.",
"title": ""
},
{
"docid": "bf5b7f9fe7012164af1f94e05c7092e6",
"text": "With increased popularity and wide adoption of smartphones and mobile devices, recent years have seen a new burgeoning economy model centered around mobile apps. However, app repackaging, among many other threats, brings tremendous risk to the ecosystem, including app developers, app market operators, and end users. To mitigate such threat, we propose and develop a watermarking mechanism for Android apps. First, towards automatic watermark embedding and extraction, we introduce the novel concept of manifest app, which is a companion of a target Android app under protection. We then design and develop a tool named AppInk, which takes the source code of an app as input to automatically generate a new app with a transparently-embedded watermark and the associated manifest app. The manifest app can be later used to reliably recognize embedded watermark with zero user intervention. To demonstrate the effectiveness of AppInk in preventing app repackaging, we analyze its robustness in defending against distortive, subtractive, and additive attacks, and then evaluate its resistance against two open source repackaging tools. Our results show that AppInk is easy to use, effective in defending against current known repackaging threats on Android platform, and introduces small performance overhead.",
"title": ""
},
{
"docid": "19e2790010bfa1081fb9503ba5f9d808",
"text": "Existing electricity market segmentation analysis techniques only make use of limited consumption statistics (usually averages and variances). In this paper we use power demand distributions (PDDs) obtained from fine-grain smart meter data to perform market segmentation based on distributional clustering. We apply this approach to mining 8 months of readings from about 1000 US Google employees.",
"title": ""
},
{
"docid": "880aa3de3b839739927cbd82b7abcf8a",
"text": "Can parents burn out? The aim of this research was to examine the construct validity of the concept of parental burnout and to provide researchers which an instrument to measure it. We conducted two successive questionnaire-based online studies, the first with a community-sample of 379 parents using principal component analyses and the second with a community- sample of 1,723 parents using both principal component analyses and confirmatory factor analyses. We investigated whether the tridimensional structure of the burnout syndrome (i.e., exhaustion, inefficacy, and depersonalization) held in the parental context. We then examined the specificity of parental burnout vis-à-vis professional burnout assessed with the Maslach Burnout Inventory, parental stress assessed with the Parental Stress Questionnaire and depression assessed with the Beck Depression Inventory. The results support the validity of a tri-dimensional burnout syndrome including exhaustion, inefficacy and emotional distancing with, respectively, 53.96 and 55.76% variance explained in study 1 and study 2, and reliability ranging from 0.89 to 0.94. The final version of the Parental Burnout Inventory (PBI) consists of 22 items and displays strong psychometric properties (CFI = 0.95, RMSEA = 0.06). Low to moderate correlations between parental burnout and professional burnout, parental stress and depression suggests that parental burnout is not just burnout, stress or depression. The prevalence of parental burnout confirms that some parents are so exhausted that the term \"burnout\" is appropriate. The proportion of burnout parents lies somewhere between 2 and 12%. The results are discussed in light of their implications at the micro-, meso- and macro-levels.",
"title": ""
},
{
"docid": "7e884438ee8459a441cbe1500f1bac88",
"text": "We consider the problem of autonomously flying Miniature Aerial Vehicles (MAVs) in indoor environments such as home and office buildings. The primary long range sensor in these MAVs is a miniature camera. While previous approaches first try to build a 3D model in order to do planning and control, our method neither attempts to build nor requires a 3D model. Instead, our method first classifies the type of indoor environment the MAV is in, and then uses vision algorithms based on perspective cues to estimate the desired direction to fly. We test our method on two MAV platforms: a co-axial miniature helicopter and a toy quadrotor. Our experiments show that our vision algorithms are quite reliable, and they enable our MAVs to fly in a variety of corridors and staircases.",
"title": ""
},
{
"docid": "9d27c176201193a7c72820cba7d2ea23",
"text": "In this article we consider quantile regression in reproducing kernel Hilbert spaces, which we call kernel quantile regression (KQR). We make three contributions: (1) we propose an efficient algorithm that computes the entire solution path of the KQR, with essentially the same computational cost as fitting one KQR model; (2) we derive a simple formula for the effective dimension of the KQR model, which allows convenient selection of the regularization parameter; and (3) we develop an asymptotic theory for the KQR model.",
"title": ""
},
{
"docid": "ed28faf2ff89ac4da642593e1b7eef9c",
"text": "Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA), which has a physically large aperture, and a practical uniform cylindrical array (UCA), which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.",
"title": ""
},
{
"docid": "1b6adeb66afcdd69950c9dfd7cb2e54a",
"text": "The vision of the Semantic Web was coined by Tim Berners-Lee almost two decades ago. The idea describes an extension of the existing Web in which “information is given well-defined meaning, better enabling computers and people to work in cooperation” [Berners-Lee et al., 2001]. Semantic annotations in HTML pages are one realization of this vision which was adopted by large numbers of web sites in the last years. Semantic annotations are integrated into the code of HTML pages using one of the three markup languages Microformats, RDFa, or Microdata. Major consumers of semantic annotations are the search engine companies Bing, Google, Yahoo!, and Yandex. They use semantic annotations from crawled web pages to enrich the presentation of search results and to complement their knowledge bases. However, outside the large search engine companies, little is known about the deployment of semantic annotations: How many web sites deploy semantic annotations? What are the topics covered by semantic annotations? How detailed are the annotations? Do web sites use semantic annotations correctly? Are semantic annotations useful for others than the search engine companies? And how can semantic annotations be gathered from the Web in that case? The thesis answers these questions by profiling the web-wide deployment of semantic annotations. The topic is approached in three consecutive steps: In the first step, two approaches for extracting semantic annotations from the Web are discussed. The thesis evaluates first the technique of focused crawling for harvesting semantic annotations. Afterward, a framework to extract semantic annotations from existing web crawl corpora is described. The two extraction approaches are then compared for the purpose of analyzing the deployment of semantic annotations in the Web. In the second step, the thesis analyzes the overall and markup language-specific adoption of semantic annotations. This empirical investigation is based on the largest web corpus that is available to the public. Further, the topics covered by deployed semantic annotations and their evolution over time are analyzed. Subsequent studies examine common errors within semantic annotations. In addition, the thesis analyzes the data overlap of the entities that are described by semantic annotations from the same and across different web sites. The third step narrows the focus of the analysis towards use case-specific issues. Based on the requirements of a marketplace, a news aggregator, and a travel portal the thesis empirically examines the utility of semantic annotations for these use cases. Additional experiments analyze the capability of product-related semantic annotations to be integrated into an existing product categorization schema. Especially, the potential of exploiting the diverse category information given by the web sites providing semantic annotations is evaluated.",
"title": ""
},
{
"docid": "4a69a0c5c225d9fbb40373aebaeb99be",
"text": "The hyperlink structure of Wikipedia constitutes a key resource for many Natural Language Processing tasks and applications, as it provides several million semantic annotations of entities in context. Yet only a small fraction of mentions across the entire Wikipedia corpus is linked. In this paper we present the automatic construction and evaluation of a Semantically Enriched Wikipedia (SEW) in which the overall number of linked mentions has been more than tripled solely by exploiting the structure of Wikipedia itself and the wide-coverage sense inventory of BabelNet. As a result we obtain a sense-annotated corpus with more than 200 million annotations of over 4 million different concepts and named entities. We then show that our corpus leads to competitive results on multiple tasks, such as Entity Linking and Word Similarity.",
"title": ""
},
{
"docid": "410d4b0eb8c60517506b0d451cf288ba",
"text": "Prepositional phrases (PPs) express crucial information that knowledge base construction methods need to extract. However, PPs are a major source of syntactic ambiguity and still pose problems in parsing. We present a method for resolving ambiguities arising from PPs, making extensive use of semantic knowledge from various resources. As training data, we use both labeled and unlabeled data, utilizing an expectation maximization algorithm for parameter estimation. Experiments show that our method yields improvements over existing methods including a state of the art dependency parser.",
"title": ""
},
{
"docid": "133ee6deb1c5608ff0ede74d17a54c4a",
"text": "In 3GPP Rel-13, a narrowband wide-area cellular system, named Narrowband Internet of Things (NB-IoT), has been introduced to provide low-cost, low-power connectivity for the Internet of Things. This system, based on Long Term Evolution (LTE) technology, can be deployed in three operation modes - in-band within LTE carrier, in the guard-band of LTE carrier, and stand-alone. Guard-band deployment allows the operator to support NB-IoT without requiring new spectrum and with minimal impact to LTE. This paper provides an overview and presents a study of NB-IoT deployment in LTE guard band. An analysis of coexistence of NB-IoT with LTE is presented and demonstrates that while there is mutual interference between NB-IoT and LTE on the uplink, the impact on either system is small. A performance analysis of NB-IoT is also presented. The link budget analysis demonstrates how the coverage target can be achieved for the uplink and downlink data channels. Illustrative uplink and downlink link-level simulation results are provided. The analysis in the paper shows that guard-band operation of NB-IoT is feasible and can provide support for massive number of low-throughput IoT devices.",
"title": ""
},
{
"docid": "a0429b8c7f7ae11eab315b28384e312b",
"text": "Almost all cellular mobile communications including first generation analog systems, second generation digital systems, third generation WCDMA, and fourth generation OFDMA systems use Ultra High Frequency (UHF) band of radio spectrum with frequencies in the range of 300MHz-3GHz. This band of spectrum is becoming increasingly crowded due to spectacular growth in mobile data and other related services. The portion of the RF spectrum above 3GHz has largely been uxexploited for commercial mobile applications. In this paper, we reason why wireless community should start looking at 3–300GHz spectrum for mobile broadband applications. We discuss propagation and device technology challenges associated with this band as well as its unique advantages such as spectrum availability and small component sizes for mobile applications.",
"title": ""
},
{
"docid": "b151343a4c1e365ede70a71880065aab",
"text": "Cardiovascular disease (CVD) and depression are common. Patients with CVD have more depression than the general population. Persons with depression are more likely to eventually develop CVD and also have a higher mortality rate than the general population. Patients with CVD, who are also depressed, have a worse outcome than those patients who are not depressed. There is a graded relationship: the more severe the depression, the higher the subsequent risk of mortality and other cardiovascular events. It is possible that depression is only a marker for more severe CVD which so far cannot be detected using our currently available investigations. However, given the increased prevalence of depression in patients with CVD, a causal relationship with either CVD causing more depression or depression causing more CVD and a worse prognosis for CVD is probable. There are many possible pathogenetic mechanisms that have been described, which are plausible and that might well be important. However, whether or not there is a causal relationship, depression is the main driver of quality of life and requires prevention, detection, and management in its own right. Depression after an acute cardiac event is commonly an adjustment disorder than can improve spontaneously with comprehensive cardiac management. Additional management strategies for depressed cardiac patients include cardiac rehabilitation and exercise programmes, general support, cognitive behavioural therapy, antidepressant medication, combined approaches, and probably disease management programmes.",
"title": ""
},
{
"docid": "a1c9553dbe9d4f9f9b5d81feb9ece9d5",
"text": "Knowledge tracing is a sequence prediction problem where the goal is to predict the outcomes of students over questions as they are interacting with a learning platform. By tracking the evolution of the knowledge of some student, one can optimize instruction. Existing methods are either based on temporal latent variable models, or factor analysis with temporal features. We here show that factorization machines (FMs), a model for regression or classification, encompasses several existing models in the educational literature as special cases, notably additive factor model, performance factor model, and multidimensional item response theory. We show, using several real datasets of tens of thousands of users and items, that FMs can estimate student knowledge accurately and fast even when student data is sparsely observed, and handle side information such as multiple knowledge components and number of attempts at item or skill level. Our approach allows to fit student models of higher dimension than existing models, and provides a testbed to try new combinations of features in order to improve existing models. Modeling student learning is key to be able to detect students that need further attention, or recommend automatically relevant learning resources. Initially, models were developed for students sitting for standardized tests, where students could read every problem statement, and missing answers could be treated as incorrect. However, in online platforms such as MOOCs, students attempt some exercises, but do not even look at other ones. Also, they may learn between different attempts. How to measure knowledge when students have attempted different questions? We want to predict the performance of a set I of students, say users, over a set J of questions, say items (we will interchangeably refer to questions as items, problems, or tasks). Each student can attempt a question multiple times, and may learn between successive attempts. We assume we observe ordered triplets (i, j, o) ∈ I × J × {0, 1} which encode the fact that student i attempted question j and got it either correct (o = 1) or incorrect (o = 0). Triplets are sorted chronologically. Then, given a new pair (i′, j′), we need to predict whether student i′ will get question j′ correct or incorrect. We can also assume extra knowledge about users, or items. So far, various models have been designed for student Copyright c © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. modeling, either based on prediction of sequences (Piech et al. 2015), or factor analysis (Thai-Nghe et al. 2011; Lavoué et al. 2018). Most of existing techniques model students or questions with unidimensional parameters. In this paper, we generalize these models to higher dimensions and manage to train efficiently student models of dimension up to 20. Our family of models is particularly convenient when observations from students are sparse, e.g. when some students attempted few questions, or some questions were answered by few students, which is most of the data usually encountered in online platforms such as MOOCs. When fitting student models, it is better to rely on all the information available at hand. In order to get information about questions, one can identify the knowledge components (KCs) involved in each question. This side information is usually encoded under the form of a q-matrix, that maps items to knowledge components: qjk is 1 if item j involves KC k, 0 otherwise. 
In this paper, we will also write KC(j) for the set of skills involved in question j, i.e. KC(j) = {k|qjk = 1}. In order to model different attempts, one can keep track of how many times a student has attempted a question, or how many times a student has had the opportunity to acquire a skill, while interacting with the learning material. Our experiments show, in particular, that: • It is better to estimate a bias for each item (not only skill), which popular educational data mining (EDM) models do not. • Most existing models in EDM cannot handle side information such as multiple skills for one item, but the proposed approach does. • Side information improves performance more than increasing the latent dimension. To the best of our knowledge, this is the most generic framework that incorporates side information into a student model. For the sake of reproducibility, our implementation is available on GitHub (https://github.com/jilljenn/ktm). The interested reader can check our code and reuse it in order to try new combinations and devise new models. In Section 2, we show related work. In Section 3, we present a family of models, knowledge tracing machines, and recover famous models of the EDM literature as special cases. Then, in Section 4 we conduct experiments and show our results in Section 5. We conclude with further work in Section 6.",
"title": ""
},
{
"docid": "e5dc07c94c7519f730d03aa6c53ca98e",
"text": "Brown adipose tissue (BAT) is specialized to dissipate chemical energy in the form of heat as a defense against cold and excessive feeding. Interest in the field of BAT biology has exploded in the past few years because of the therapeutic potential of BAT to counteract obesity and obesity-related diseases, including insulin resistance. Much progress has been made, particularly in the areas of BAT physiology in adult humans, developmental lineages of brown adipose cell fate, and hormonal control of BAT thermogenesis. As we enter into a new era of brown fat biology, the next challenge will be to develop strategies for activating BAT thermogenesis in adult humans to increase whole-body energy expenditure. This article reviews the recent major advances in this field and discusses emerging questions.",
"title": ""
},
{
"docid": "e77fd6ac551debbcbb4bc6bc940135fb",
"text": "Tremendous compute throughput is becoming available in personal desktop and laptop systems through the use of graphics processing units (GPUs). However, exploiting this resource requires re-architecting an application to fit a data parallel programming model. The complex graph traversal routines in the inference process for large vocabulary continuous speech recognition (LVCSR) have been considered by many as unsuitable for extensive parallelization. We explore and demonstrate a fully data parallel implementation of a speech inference engine on NVIDIA’s GTX280 GPU. Our implementation consists of two phases compute-intensive observation probability computation phase and communication-intensive graph traversal phase. We take advantage of dynamic elimination of redundant computation in the compute-intensive phase while maintaining close-to-peak execution efficiency. We also demonstrate the importance of exploring application-level trade-offs in the communication-intensive graph traversal phase to adapt the algorithm to data parallel execution on GPUs. On 3.1 hours of speech data set, we achieve more than 11× speedup compared to a highly optimized sequential implementation on Intel Core i7 without sacrificing accuracy.",
"title": ""
},
{
"docid": "0a1925251cac8d15da9bbc90627c28dc",
"text": "The Madden–Julian oscillation (MJO) is the dominant mode of tropical atmospheric intraseasonal variability and a primary source of predictability for global sub-seasonal prediction. Understanding the origin and perpetuation of the MJO has eluded scientists for decades. The present paper starts with a brief review of progresses in theoretical studies of the MJO and a discussion of the essential MJO characteristics that a theory should explain. A general theoretical model framework is then described in an attempt to integrate the major existing theoretical models: the frictionally coupled Kelvin–Rossby wave, the moisture mode, the frictionally coupled dynamic moisture mode, the MJO skeleton, and the gravity wave interference, which are shown to be special cases of the general MJO model. The last part of the present paper focuses on a special form of trio-interaction theory in terms of the general model with a simplified Betts–Miller (B-M) cumulus parameterization scheme. This trio-interaction theory extends the Matsuno–Gill theory by incorporating a trio-interaction among convection, moisture, and wave-boundary layer (BL) dynamics. The model is shown to produce robust large-scale characteristics of the observed MJO, including the coupled Kelvin–Rossby wave structure, slow eastward propagation (~5 m/s) over warm pool, the planetary (zonal) scale circulation, the BL low-pressure and moisture convergence preceding major convection, and amplification/decay over warm/cold sea surface temperature (SST) regions. The BL moisture convergence feedback plays a central role in coupling equatorial Kelvin and Rossby waves with convective heating, selecting a preferred eastward propagation, and generating instability. The moisture feedback can enhance Rossby wave component, thereby substantially slowing down eastward propagation. With the trio-interaction theory, a number of fundamental issues of MJO dynamics are addressed: why the MJO possesses a mixed Kelvin–Rossby wave structure and how the Kelvin and Rossby waves, which propagate in opposite directions, could couple together with convection and select eastward propagation; what makes the MJO move eastward slowly in the eastern hemisphere, resulting in the 30–60-day periodicity; why MJO amplifies over the warm pool ocean and decays rapidly across the dateline. Limitation and ramifications of the model results to general circulation modeling of MJO are discussed.",
"title": ""
},
{
"docid": "e96f455aa2c82d358eb94c72d93c8b03",
"text": "OBJECTIVE\nTo evaluate the effects of mirror therapy on upper-extremity motor recovery, spasticity, and hand-related functioning of inpatients with subacute stroke.\n\n\nDESIGN\nRandomized, controlled, assessor-blinded, 4-week trial, with follow-up at 6 months.\n\n\nSETTING\nRehabilitation education and research hospital.\n\n\nPARTICIPANTS\nA total of 40 inpatients with stroke (mean age, 63.2y), all within 12 months poststroke.\n\n\nINTERVENTIONS\nThirty minutes of mirror therapy program a day consisting of wrist and finger flexion and extension movements or sham therapy in addition to conventional stroke rehabilitation program, 5 days a week, 2 to 5 hours a day, for 4 weeks.\n\n\nMAIN OUTCOME MEASURES\nThe Brunnstrom stages of motor recovery, spasticity assessed by the Modified Ashworth Scale (MAS), and hand-related functioning (self-care items of the FIM instrument).\n\n\nRESULTS\nThe scores of the Brunnstrom stages for the hand and upper extremity and the FIM self-care score improved more in the mirror group than in the control group after 4 weeks of treatment (by 0.83, 0.89, and 4.10, respectively; all P<.01) and at the 6-month follow-up (by 0.16, 0.43, and 2.34, respectively; all P<.05). No significant differences were found between the groups for the MAS.\n\n\nCONCLUSIONS\nIn our group of subacute stroke patients, hand functioning improved more after mirror therapy in addition to a conventional rehabilitation program compared with a control treatment immediately after 4 weeks of treatment and at the 6-month follow-up, whereas mirror therapy did not affect spasticity.",
"title": ""
},
{
"docid": "d9605c1cde4c40d69c2faaea15eb466c",
"text": "A magnetically tunable ferrite-loaded substrate integrated waveguide (SIW) cavity resonator is presented and demonstrated. X-band cavity resonator is operated in the dominant mode and the ferrite slabs are loaded onto the side walls of the cavity where the value of magnetic field is highest. Measured results for single and double ferrite-loaded SIW cavity resonators are presented. Frequency tuning range of more than 6% and 10% for single and double ferrite slabs are obtained. Unloaded Q -factor of more than 200 is achieved.",
"title": ""
},
{
"docid": "8ed5032f5bf2e26c177577a28bdb7d3a",
"text": "Wireless Sensor Network (WSN) is an important research area nowadays. Wireless Sensor Network is deployed in hostile environment consisting of hundreds to thousands of nodes. They can be deployed for various mission-critical applications, such as health care, military monitoring as well as civilian applications. There are various security issues in these networks. One of such issue is outlier detection. In outlier detection, data obtained by some of the nodes whose behavior is different from the data of other nodes are spotted in the group of data. But identification of such nodes is a little difficult. In this paper, machine learning based methods for outlier detection are discussed among which the Bayesian Network looks advantageous over other methods. Bayesian classification algorithm can be used for calculating the conditional dependency of the available nodes in WSN. This method can also calculate the missing data value.",
"title": ""
}
] |
scidocsrr
|
3ba06122277cd6e0e5f1bc4b9da1dc3f
|
A Critical Review of Centrality Measures in Social Networks
|
[
{
"docid": "7c1691fd1140b3975b61f8e2ce3dcd9b",
"text": "In this paper, we consider the evolution of structure within large online social networks. We present a series of measurements of two such networks, together comprising in excess of five million people and ten million friendship links, annotated with metadata capturing the time of every event in the life of the network. Our measurements expose a surprising segmentation of these networks into three regions: singletons who do not participate in the network; isolated communities which overwhelmingly display star structure; and a giant component anchored by a well-connected core region which persists even in the absence of stars.We present a simple model of network growth which captures these aspects of component structure. The model follows our experimental results, characterizing users as either passive members of the network; inviters who encourage offline friends and acquaintances to migrate online; and linkers who fully participate in the social evolution of the network.",
"title": ""
},
{
"docid": "a8e8bbe19ed505b3e1042783e5e363d6",
"text": "We study the topology of e-mail networks with e-mail addresses as nodes and e-mails as links using data from server log files. The resulting network exhibits a scale-free link distribution and pronounced small-world behavior, as observed in other social networks. These observations imply that the spreading of e-mail viruses is greatly facilitated in real e-mail networks compared to random architectures.",
"title": ""
},
{
"docid": "a3fed913a54bebf6c86e5955f40aed45",
"text": "Centralitymeasures, or at least popular interpretations of thesemeasures,make implicit assumptions about the manner in which traffic flows through a network. For example, some measures count only geodesic paths, apparently assuming that whatever flows through the network only moves along the shortest possible paths. This paper lays out a typology of network flows based on two dimensions of variation, namely the kinds of trajectories that traffic may follow (geodesics, paths, trails, or walks) and the method of spread (broadcast, serial replication, or transfer). Measures of centrality are then matched to the kinds of flows that they are appropriate for. Simulations are used to examine the relationship between type of flow and the differential importance of nodes with respect to key measurements such as speed of reception of traffic and frequency of receiving traffic. It is shown that the off-the-shelf formulas for centrality measures are fully applicable only for the specific flow processes they are designed for, and that when they are applied to other flow processes they get the “wrong” answer. It is noted that the most commonly used centrality measures are not appropriate for most of the flows we are routinely interested in. A key claim made in this paper is that centrality measures can be regarded as generating expected values for certain kinds of node outcomes (such as speed and frequency of reception) given implicit models of how traffic flows, and that this provides a new and useful way of thinking about centrality. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "07461e63cd097bca5ce098c0f3b5db3b",
"text": "Path prediction is a fundamental task for estimating how pedestrians or vehicles are going to move in a scene. Because path prediction as a task of computer vision uses video as input, various information used for prediction, such as the environment surrounding the target and the internal state of the target, need to be estimated from the video in addition to predicting paths. Many prediction approaches that include understanding the environment and the internal state have been proposed. In this survey, we systematically summarize methods of path prediction that take video as input and and extract features from the video. Moreover, we introduce datasets used to evaluate path prediction methods quantitatively.",
"title": ""
},
{
"docid": "435803f0f30a60d2083d7a903e98823a",
"text": "A dual-frequency radar, which estimates the range of a target based on the phase difference between two closely spaced frequencies, has been shown to be a cost-effective approach to accomplish both range-to-motion estimation and tracking. This approach, however, suffers from two drawbacks: it cannot deal with multiple moving targets, and it has poor performance in noisy environments. In this letter, we propose the use of time-frequency signal representations to overcome these drawbacks. The phase, and subsequently the range information, is obtained based on the moving target instantaneous Doppler frequency law, which is provided through time-frequency signal representations. The case of multiple moving targets is handled by separating the different Doppler signatures prior to phase estimation.",
"title": ""
},
{
"docid": "6c937adbdfe7f86a83948f1a28d67649",
"text": "BACKGROUND\nViral warts are a common skin condition, which can range in severity from a minor nuisance that resolve spontaneously to a troublesome, chronic condition. Many different topical treatments are available.\n\n\nOBJECTIVES\nTo evaluate the efficacy of local treatments for cutaneous non-genital warts in healthy, immunocompetent adults and children.\n\n\nSEARCH METHODS\nWe updated our searches of the following databases to May 2011: the Cochrane Skin Group Specialised Register, CENTRAL in The Cochrane Library, MEDLINE (from 2005), EMBASE (from 2010), AMED (from 1985), LILACS (from 1982), and CINAHL (from 1981). We searched reference lists of articles and online trials registries for ongoing trials.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs) of topical treatments for cutaneous non-genital warts.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo authors independently selected trials and extracted data; a third author resolved any disagreements.\n\n\nMAIN RESULTS\nWe included 85 trials involving a total of 8815 randomised participants (26 new studies were included in this update). There was a wide range of different treatments and a variety of trial designs. Many of the studies were judged to be at high risk of bias in one or more areas of trial design.Trials of salicylic acid (SA) versus placebo showed that the former significantly increased the chance of clearance of warts at all sites (RR (risk ratio) 1.56, 95% CI (confidence interval) 1.20 to 2.03). Subgroup analysis for different sites, hands (RR 2.67, 95% CI 1.43 to 5.01) and feet (RR 1.29, 95% CI 1.07 to 1.55), suggested it might be more effective for hands than feet.A meta-analysis of cryotherapy versus placebo for warts at all sites favoured neither intervention nor control (RR 1.45, 95% CI 0.65 to 3.23). Subgroup analysis for different sites, hands (RR 2.63, 95% CI 0.43 to 15.94) and feet (RR 0.90, 95% CI 0.26 to 3.07), again suggested better outcomes for hands than feet. One trial showed cryotherapy to be better than both placebo and SA, but only for hand warts.There was no significant difference in cure rates between cryotherapy at 2-, 3-, and 4-weekly intervals.Aggressive cryotherapy appeared more effective than gentle cryotherapy (RR 1.90, 95% CI 1.15 to 3.15), but with increased adverse effects.Meta-analysis did not demonstrate a significant difference in effectiveness between cryotherapy and SA at all sites (RR 1.23, 95% CI 0.88 to 1.71) or in subgroup analyses for hands and feet.Two trials with 328 participants showed that SA and cryotherapy combined appeared more effective than SA alone (RR 1.24, 95% CI 1.07 to 1.43).The benefit of intralesional bleomycin remains uncertain as the evidence was inconsistent. 
The most informative trial with 31 participants showed no significant difference in cure rate between bleomycin and saline injections (RR 1.28, 95% CI 0.92 to 1.78).Dinitrochlorobenzene was more than twice as effective as placebo in 2 trials with 80 participants (RR 2.12, 95% CI 1.38 to 3.26).Two trials of clear duct tape with 193 participants demonstrated no advantage over placebo (RR 1.43, 95% CI 0.51 to 4.05).We could not combine data from trials of the following treatments: intralesional 5-fluorouracil, topical zinc, silver nitrate (which demonstrated possible beneficial effects), topical 5-fluorouracil, pulsed dye laser, photodynamic therapy, 80% phenol, 5% imiquimod cream, intralesional antigen, and topical alpha-lactalbumin-oleic acid (which showed no advantage over placebo).We did not identify any RCTs that evaluated surgery (curettage, excision), formaldehyde, podophyllotoxin, cantharidin, diphencyprone, or squaric acid dibutylester.\n\n\nAUTHORS' CONCLUSIONS\nData from two new trials comparing SA and cryotherapy have allowed a better appraisal of their effectiveness. The evidence remains more consistent for SA, but only shows a modest therapeutic effect. Overall, trials comparing cryotherapy with placebo showed no significant difference in effectiveness, but the same was also true for trials comparing cryotherapy with SA. Only one trial showed cryotherapy to be better than both SA and placebo, and this was only for hand warts. Adverse effects, such as pain, blistering, and scarring, were not consistently reported but are probably more common with cryotherapy.None of the other reviewed treatments appeared safer or more effective than SA and cryotherapy. Two trials of clear duct tape demonstrated no advantage over placebo. Dinitrochlorobenzene (and possibly other similar contact sensitisers) may be useful for the treatment of refractory warts.",
"title": ""
},
{
"docid": "253072dcfdf4c417819ce8eee6af886f",
"text": "The majority of theoretical work in machine learning is done under the assumption of exchangeability: essentially, it is assumed that the examples are generated from the same probability distribution independently. This paper is concerned with the problem of testing the exchangeability assumption in the on-line mode: examples are observed one by one and the goal is to monitor on-line the strength of evidence against the hypothesis of exchangeability. We introduce the notion of exchangeability martingales, which are on-line procedures for detecting deviations from exchangeability; in essence, they are betting schemes that never risk bankruptcy and are fair under the hypothesis of exchangeability. Some specific exchangeability martingales are constructed using Transductive Confidence Machine. We report experimental results showing their performance on the USPS benchmark data set of hand-written digits (known to be somewhat heterogeneous); one of them multiplies the initial capital by more than 10; this means that the hypothesis of exchangeability is rejected at the significance level 10−18.",
"title": ""
},
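As a rough illustration of the betting-scheme view described above, the sketch below tracks a conformal power martingale on an exchangeable and a non-exchangeable sequence. The nonconformity measure (distance to the running mean), the betting parameter, and the synthetic data are assumptions for illustration; the paper itself builds its martingales from a Transductive Confidence Machine.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_martingale(x, epsilon=0.92, rng=rng):
    """Track the (log) betting capital of a power martingale on the sequence x."""
    log_capital, history = 0.0, []
    for n in range(1, len(x) + 1):
        seen = x[:n]
        # Nonconformity scores recomputed against the current bag:
        # distance to the mean of everything seen so far (assumed simple choice).
        scores = np.abs(seen - seen.mean())
        a_new = scores[-1]
        greater = np.sum(scores > a_new)
        equal = np.sum(scores == a_new)
        p = (greater + rng.uniform() * equal) / n   # smoothed conformal p-value
        # Bet with the power betting function epsilon * p^(epsilon - 1).
        log_capital += np.log(epsilon) + (epsilon - 1) * np.log(max(p, 1e-12))
        history.append(log_capital)
    return np.array(history)

# Exchangeable data: the capital should not grow systematically.
iid = rng.normal(size=500)
# Non-exchangeable data: a mean shift is introduced halfway through.
drift = np.concatenate([rng.normal(size=250), rng.normal(loc=3.0, size=250)])

print("final log10 capital (iid):  ", power_martingale(iid)[-1] / np.log(10))
print("final log10 capital (drift):", power_martingale(drift)[-1] / np.log(10))
```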
{
"docid": "7b043c7823ce7920848c06b3d339ba6c",
"text": "Compression artifacts arise in images whenever a lossy compression algorithm is applied. These artifacts eliminate details present in the original image, or add noise and small structures; because of these effects they make images less pleasant for the human eye, and may also lead to decreased performance of computer vision algorithms such as object detectors. To eliminate such artifacts, when decompressing an image, it is required to recover the original image from a disturbed version. To this end, we present a feed-forward fully convolutional residual network model trained using a generative adversarial framework. To provide a baseline, we show that our model can be also trained optimizing the Structural Similarity (SSIM), which is a better loss with respect to the simpler Mean Squared Error (MSE). Our GAN is able to produce images with more photorealistic details than MSE or SSIM based networks. Moreover we show that our approach can be used as a pre-processing step for object detection in case images are degraded by compression to a point that state-of-the art detectors fail. In this task, our GAN method obtains better performance than MSE or SSIM trained networks.",
"title": ""
},
{
"docid": "3159879f34a093d38e82dba61b92d74e",
"text": "The performance of many hard combinatorial problem solvers depends strongly on their parameter settings, and since manual parameter tuning is both tedious and suboptimal the AI community has recently developed several algorithm configuration (AC) methods to automatically address this problem. While all existing AC methods start the configuration process of an algorithm A from scratch for each new type of benchmark instances, here we propose to exploit information about A’s performance on previous benchmarks in order to warmstart its configuration on new types of benchmarks. We introduce two complementary ways in which we can exploit this information to warmstart AC methods based on a predictive model. Experiments for optimizing a flexible modern SAT solver on twelve different instance sets show that our methods often yield substantial speedups over existing AC methods (up to 165-fold) and can also find substantially better configurations given the same compute budget.",
"title": ""
},
{
"docid": "3992975b4f218b7025e28e4ba52d0c14",
"text": "We present ConfErr, a tool for testing and quantifying the resilience of software systems to human-induced configuration errors. ConfErr uses human error models rooted in psychology and linguistics to generate realistic configuration mistakes; it then injects these mistakes and measures their effects, producing a resilience profile of the system under test. The resilience profile, capturing succinctly how sensitive the target software is to different classes of configuration errors, can be used for improving the software or to compare systems to each other. ConfErr is highly portable, because all mutations are performed on abstract representations of the configuration files. Using ConfErr, we found several serious flaws in the MySQL and Postgres databases, Apache web server, and BIND and djbdns name servers; we were also able to directly compare the resilience of functionally-equivalent systems, such as MySQL and Postgres.",
"title": ""
},
{
"docid": "3c30209d29779153b4cb33d13d101cf8",
"text": "Acceptance-based interventions such as mindfulness-based stress reduction program and acceptance and commitment therapy are alternative therapies for cognitive behavioral therapy for treating chronic pain patients. To assess the effects of acceptance-based interventions on patients with chronic pain, we conducted a systematic review and meta-analysis of controlled and noncontrolled studies reporting effects on mental and physical health of pain patients. All studies were rated for quality. Primary outcome measures were pain intensity and depression. Secondary outcomes were anxiety, physical wellbeing, and quality of life. Twenty-two studies (9 randomized controlled studies, 5 clinical controlled studies [without randomization] and 8 noncontrolled studies) were included, totaling 1235 patients with chronic pain. An effect size on pain of 0.37 was found for the controlled studies. The effect on depression was 0.32. The quality of the studies was not found to moderate the effects of acceptance-based interventions. The results suggest that at present mindfulness-based stress reduction program and acceptance and commitment therapy are not superior to cognitive behavioral therapy but can be good alternatives. More high-quality studies are needed. It is recommended to focus on therapies that integrate mindfulness and behavioral therapy. Acceptance-based therapies have small to medium effects on physical and mental health in chronic pain patients. These effects are comparable to those of cognitive behavioral therapy.",
"title": ""
},
{
"docid": "7dfb75bba9d6a261a7138199c834ee36",
"text": "Approximately 50% of patients with Fisher's syndrome show involvement of the pupillomotor fibers and present with mydriasis and light-near dissociation. However, it is uncertain whether this phenomenon is induced by an aberrant reinnervation mechanism as in tonic pupil, or is based on other mechanisms such as those associated with tectal pupils. We evaluated the clinical course and the pupillary responses in four of 27 patients with Fisher's syndrome who presented with bilateral mydriasis. The pupils of both eyes of the four patients were involved at the early stage of Fisher's syndrome. The pupils in patients 1 and 2 showed mydriasis with apparent light-near dissociation lasting for a significant period and had denervation supersensitivity to cholinergic agents. On the other hand, the pupils of patients 3 and 4 were dilated and fixed to both light and near stimuli. Our observations indicate that the denervated iris sphincter muscles, which are supersensitive to the cholinergic transmitter, may play an important role in the expression of light-near dissociation in Fisher's syndrome. Jpn J Ophthalmol 2007;51:224–227 © Japanese Ophthalmological Society 2007",
"title": ""
},
{
"docid": "9e42fd0754365eb534b1887ba1002608",
"text": "Despite the success of existing works on single-turn conversation generation, taking the coherence in consideration, human conversing is actually a context-sensitive process. Inspired by the existing studies, this paper proposed the static and dynamic attention based approaches for context-sensitive generation of open-domain conversational responses. Experimental results on two public datasets show that the proposed static attention based approach outperforms all the baselines on automatic and human evaluation.",
"title": ""
},
{
"docid": "cec9f586803ffc8dc5868f6950967a1f",
"text": "This report aims to summarize the field of technological forecasting (TF), its techniques and applications by considering the following questions: • What are the purposes of TF? • Which techniques are used for TF? • What are the strengths and weaknesses of these techniques / how do we evaluate their quality? • Do we need different TF techniques for different purposes/technologies? We also present a brief analysis of how TF is used in practice. We analyze how corporate decisions, such as investing millions of dollars to a new technology like solar energy, are being made and we explore if funding allocation decisions are based on “objective, repeatable, and quantifiable” decision parameters. Throughout the analysis, we compare the bibliometric and semantic-enabled approach of the MIT/MIST Collaborative research project “Technological Forecasting using Data Mining and Semantics” (TFDMS) with the existing studies / practices of TF and where TFDMS fits in and how it will contribute to the general TF field.",
"title": ""
},
{
"docid": "3c2633b440b89ce92f469787dbb24a3b",
"text": "The aim of this paper is to report the impact of the addition of cellulose nanocrystals on the barrier properties and on the migration behaviour of poly(lactic acid), PLA, based nano-biocomposites prepared by the solvent casting method. Their microstructure, crystallinity, barrier and overall migration properties were investigated. Pristine (CNC) and surfactant-modified cellulose nanocrystals (s-CNC) were used, and the effect of the cellulose modification and content in the nano-biocomposites was investigated. The presence of surfactant on the nanocrystal surface favours the dispersion of CNC in the PLA matrix. Electron microscopy analysis shows the good dispersion of s-CNC in the nanoscale with well-defined single crystals indicating that the surfactant allowed a better interaction between the cellulose structures and the PLA matrix. Reductions of 34% in water permeability were obtained for the cast films containing 1 wt.% of s-CNC while good oxygen barrier properties were detected for nano-biocomposites with both 1 wt.% and 5 wt.% of modified and un-modified cellulose nanocrystals, underlining the improvement provided by cellulose on the PLA films. Moreover, the migration level of the studied nano-biocomposites was below the overall migration limits required by the current normative for food packaging materials in both non-polar and polar simulants.",
"title": ""
},
{
"docid": "3357b7d13cd41f49a03860c62f634dc1",
"text": "Changes in the elasticity of the vaginal walls, connective support tissues, and muscles are thought to be significant factors in the development of pelvic organ prolapse, a highly prevalent condition affecting at least 50% of women in the United States during their lifetimes. It creates two predominant concerns specific to the biomechanical properties of pelvic support tissues: how does tissue elasticity affect the development of pelvic organ prolapse and how can functional elasticity be maintained through reconstructive surgery. We designed a prototype of vaginal tactile imager (VTI) for visualization and assessment of elastic properties of pelvic floor tissues. In this paper, we analyze applicability of tactile imaging for evaluation of reconstructive surgery results and characterization of normal and pelvic organ prolapse conditions. A pilot clinical study with 13 patients demonstrated that VTI allows imaging of vaginal walls with increased rigidity due to implanted mesh grafts following reconstructive pelvic surgery and VTI has the potential for prolapse characterization and detection.",
"title": ""
},
{
"docid": "7b7924ccd60d01468f6651b9226cbed0",
"text": "Leucine-rich repeat kinase 2 (LRRK2) mutations have been implicated in autosomal dominant parkinsonism, consistent with typical levodopa-responsive Parkinson's disease. The gene maps to chromosome 12q12 and encodes a large, multifunctional protein. To identify novel LRRK2 mutations, we have sequenced 100 affected probands with family history of parkinsonism. Semiquantitative analysis was also performed in all probands to identify LRRK2 genomic multiplication or deletion. In these kindreds, referred from movement disorder clinics in many parts of Europe, Asia, and North America, parkinsonism segregates as an autosomal dominant trait. All 51 exons of the LRRK2 gene were analyzed and the frequency of all novel sequence variants was assessed within controls. The segregation of mutations with disease has been examined in larger, multiplex families. Our study identified 26 coding variants, including 15 nonsynonymous amino acid substitutions of which three affect the same codon (R1441C, R1441G, and R1441H). Seven of these coding changes seem to be pathogenic, as they segregate with disease and were not identified within controls. No multiplications or deletions were identified.",
"title": ""
},
{
"docid": "e15e5896c21018de65653b3c96640ef5",
"text": "This paper presents a single-ended Class-E power amplifier for wireless power transfer systems. The power amplifier is designed with a low-cost power MOSFET and high-Q inductor. It adopts a second harmonic filter in the output matching network. The proposed Class-E power amplifier has low second harmonic level by the second harmonic filter. Also, we designed an input driver with a single supply voltage for driving the Class-E power amplifier. The implemented Class-E power amplifier delivers an output power of 40.8 dBm and a high-efficiency of 90.3% for the 6.78 MHz input signal. Index Terms — Class-E power amplifier, High efficiency amplifier, wireless power transfer, harmonic filter",
"title": ""
},
{
"docid": "29495e389441ff61d5efad10ad38e995",
"text": "The natural world is infinitely diverse, yet this diversity arises from a relatively small set of coherent properties and rules, such as the laws of physics or chemistry. We conjecture that biological intelligent systems are able to survive within their diverse environments by discovering the regularities that arise from these rules primarily through unsupervised experiences, and representing this knowledge as abstract concepts. Such representations possess useful properties of compositionality and hierarchical organisation, which allow intelligent agents to recombine a finite set of conceptual building blocks into an exponentially large set of useful new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such concepts in the visual domain. We first use the previously published β-VAE (Higgins et al., 2017a) architecture to learn a disentangled representation of the latent structure of the visual world, before training SCAN to extract abstract concepts grounded in such disentangled visual primitives through fast symbol association. Our approach requires very few pairings between symbols and images and makes no assumptions about the choice of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of compositional visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to invent and learn novel visual concepts through recombination of the few learnt concepts.",
"title": ""
},
{
"docid": "02d441c58e7757a4c2f428064dfa756c",
"text": "The prevalence of disordered eating and eating disorders vary from 0-19% in male athletes and 6-45% in female athletes. The objective of this paper is to present an overview of eating disorders in adolescent and adult athletes including: (1) prevalence data; (2) suggested sport- and gender-specific risk factors and (3) importance of early detection, management and prevention of eating disorders. Additionally, this paper presents suggestions for future research which includes: (1) the need for knowledge regarding possible gender-specific risk factors and sport- and gender-specific prevention programmes for eating disorders in sports; (2) suggestions for long-term follow-up for female and male athletes with eating disorders and (3) exploration of a possible male athlete triad.",
"title": ""
},
{
"docid": "df232e20d1a6aced5b40a11935ab8810",
"text": "Part of speech (POS) tagging is the process of assigning the part of speech tag or other lexical class marker to each and every word in a sentence. In many Natural Language Processing applications such as word sense disambiguation, information retrieval, information processing, parsing, question answering, and machine translation, POS tagging is considered as the one of the basic necessary tool. Identifying the ambiguities in language lexical items is the challenging objective in the process of developing an efficient and accurate POS Tagger. Literature survey shows that, for Indian languages, POS taggers were developed only in Hindi, Bengali, Panjabi and Dravidian languages. Some POS taggers were also developed generic to the Hindi, Bengali and Telugu languages. All proposed POS taggers were based on different Tagset, developed by different organization and individuals. This paper addresses the various developments in POS-taggers and POS-tagset for Indian language, which is very essential computational linguistic tool needed for many natural language processing (NLP) applications.",
"title": ""
},
{
"docid": "9e68f50309814e976abff3f5a5926a57",
"text": "This paper presents a compact and broadband micro strip patch antenna array (BMPAA) with uniform linear array configuration of 4x1 elemnt for 3G applications. The 4×1 BMPAA was designed and developed by integrating new patch sha pe Hybrid E-H shape, L-probe feeding, inverted patch structure with air filled dielectric microstr ip patch antenna (MPA) element. The array was constructed using two dielectric layer arrangement with a thick air-filled substrate sandwiched betwee n a top-loaded dielectric substrate ( RT 5880) with inverting radiating patch and a ground plane . The Lprobe fed inverted hybrid E-H (LIEH) shaped BMPAA a chieves an impedance bandwidth of 17.32% referenced to the center frequency at 2.02 GHz (at VSWR ≤ 1.5), maximum achievable gain of 11.9±1dBi, and 23 dB crosspolarization level.",
"title": ""
},
{
"docid": "4fd8eb1c592960a0334959fcd74f00d8",
"text": "Automatic grammatical error detection for Chinese has been a big challenge for NLP researchers. Due to the formal and strict grammar rules in Chinese, it is hard for foreign students to master Chinese. A computer-assisted learning tool which can automatically detect and correct Chinese grammatical errors is necessary for those foreign students. Some of the previous works have sought to identify Chinese grammatical errors using templateand learning-based methods. In contrast, this study introduced convolutional neural network (CNN) and long-short term memory (LSTM) for the shared task of Chinese Grammatical Error Diagnosis (CGED). Different from traditional word-based embedding, single word embedding was used as input of CNN and LSTM. The proposed single word embedding can capture both semantic and syntactic information to detect those four type grammatical error. In experimental evaluation, the recall and f1-score of our submitted results Run1 of the TOCFL testing data ranked the fourth place in all submissions in detection-level.",
"title": ""
}
] |
scidocsrr
|
51376e08fa8c3c04cad028ec29afc895
|
Comparative Deep Learning of Hybrid Representations for Image Recommendations
|
[
{
"docid": "d57f996ed29e1a91f6d0b04d5a83ea38",
"text": "Hashing is a popular approximate nearest neighbor search approach for large-scale image retrieval. Supervised hashing, which incorporates similarity/dissimilarity information on entity pairs to improve the quality of hashing function learning, has recently received increasing attention. However, in the existing supervised hashing methods for images, an input image is usually encoded by a vector of hand-crafted visual features. Such hand-crafted feature vectors do not necessarily preserve the accurate semantic similarities of images pairs, which may often degrade the performance of hashing function learning. In this paper, we propose a supervised hashing method for image retrieval, in which we automatically learn a good image representation tailored to hashing as well as a set of hash functions. The proposed method has two stages. In the first stage, given the pairwise similarity matrix S over training images, we propose a scalable coordinate descent method to decompose S into a product of HH where H is a matrix with each of its rows being the approximate hash code associated to a training image. In the second stage, we propose to simultaneously learn a good feature representation for the input images as well as a set of hash functions, via a deep convolutional network tailored to the learned hash codes in H and optionally the discrete class labels of the images. Extensive empirical evaluations on three benchmark datasets with different kinds of images show that the proposed method has superior performance gains over several state-of-the-art supervised and unsupervised hashing methods.",
"title": ""
},
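The first stage described above factorises the pairwise similarity matrix into approximate hash codes. The toy sketch below fits S ≈ (1/q)·H·Hᵀ by plain gradient descent on synthetic labels; the paper's scalable coordinate descent and the deep second stage are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic labels for 60 "images" and the induced similarity matrix
# (S_ij = +1 for same class, -1 otherwise), as assumed in this sketch.
labels = rng.integers(0, 3, size=60)
S = np.where(labels[:, None] == labels[None, :], 1.0, -1.0)

q = 12                                              # number of hash bits
H = rng.normal(scale=0.1, size=(len(labels), q))    # relaxed (real-valued) codes

lr = 0.01
for step in range(500):
    R = S - (H @ H.T) / q                 # residual of the approximation
    grad = -(4.0 / q) * (R @ H)           # gradient of ||S - HH^T/q||_F^2 (R is symmetric)
    H -= lr * grad

codes = np.sign(H)                        # binarise the relaxed codes
err = np.linalg.norm(S - (codes @ codes.T) / q) / np.linalg.norm(S)
print(f"relative reconstruction error with {q}-bit codes: {err:.3f}")
```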
{
"docid": "51d950dfb9f71b9c8948198c147b9884",
"text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.",
"title": ""
}
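A compact sketch of the random-walk idea in the passage above: each walk starts at the target user, stops when a rating for the target item is found, and otherwise hops to a trusted neighbour; averaging over many walks yields a prediction and a crude confidence. The toy ratings, trust graph, and stopping rule are assumptions, not the exact model of the paper.

```python
import random

# Toy data (assumed for illustration): user -> {item: rating} and user -> trusted users.
ratings = {
    "alice": {"i1": 5, "i3": 2},
    "bob":   {"i2": 4},
    "carol": {"i2": 5, "i3": 3},
    "dave":  {"i1": 1, "i2": 2},
}
trust = {
    "alice": ["bob", "carol"],
    "bob":   ["carol"],
    "carol": ["dave"],
    "dave":  [],
}

def random_walk_predict(user, item, walks=2000, max_depth=6, stop_prob=0.3, seed=0):
    """Average the ratings found by many short random walks over the trust graph."""
    rng = random.Random(seed)
    found = []
    for _ in range(walks):
        current = user
        for _ in range(max_depth):
            if item in ratings.get(current, {}):
                found.append(ratings[current][item])    # direct rating found
                break
            # With some probability stop the walk; otherwise hop to a trusted neighbour.
            if rng.random() < stop_prob or not trust.get(current):
                break
            current = rng.choice(trust[current])
    confidence = len(found) / walks
    prediction = sum(found) / len(found) if found else None
    return prediction, confidence

print(random_walk_predict("alice", "i2"))
```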
] |
[
{
"docid": "aacde6474d9cbc27b7b668cab229f170",
"text": "Deep reinforcement learning algorithms have been shown to learn complex tasks using highly general policy classes. However, sparse reward problems remain a significant challenge. Exploration methods based on novelty detection have been particularly successful in such settings but typically require generative or predictive models of the observations, which can be difficult to train when the observations are very high-dimensional and complex, as in the case of raw images. We propose a novelty detection algorithm for exploration that is based entirely on discriminatively trained exemplar models, where classifiers are trained to discriminate each visited state against all others. Intuitively, novel states are easier to distinguish against other states seen during training. We show that this kind of discriminative modeling corresponds to implicit density estimation, and that it can be combined with countbased exploration to produce competitive results on a range of popular benchmark tasks, including state-of-the-art results on challenging egocentric observations in the vizDoom benchmark.",
"title": ""
},
{
"docid": "5822594b8e4f8d61f65f3136f66fadad",
"text": "Document Analysis and Recognition (DAR) aims at the automatic extraction of information presented on paper and initially addressed to human comprehension. The desired output of DAR systems is usually in a suitable symbolic representation that can subsequently be processed by computers. Over the centuries, paper documents have been the principal instrument to make permanent the progress of the humankind. Nowadays, most information is still recorded, stored, and distributed in paper format. The widespread use of computers for document editing, with the introduction of PCs and wordprocessors in the late 1980’s, had the effect of increasing, instead of reducing, the amount of information held on paper. Even if current technological trends seem to move towards a paperless world, some studies demonstrated that the use of paper as a media for information exchange is still increasing [1]. Moreover, there are still application domains where the paper persists to be the preferred media [2]. The most widely known applications of DAR are related to the processing of office documents (such as invoices, bank documents, business letters, and checks) and to the automatic mail sorting. With the current availability of inexpensive high-resolution scanning devices, combined with powerful computers, state-of-the-art OCR packages can solve simple recognition tasks for most users. Recent research directions are widening the use of the DAR techniques, significant examples are the processing of ancient/historical documents in digital libraries, the information extraction from “digital born” documents, such as PDF and HTML, and the analysis of natural images (acquired with mobile phones and digital cameras) containing textual information. The development of a DAR system requires the integration of several competences in computer science, among the others: image processing, pattern recognition, natural language processing, artificial intelligence, and database systems. DAR applications are particularly suitable for the incorporation of",
"title": ""
},
{
"docid": "07ce1301392e18c1426fd90507dc763f",
"text": "The fluorescent lamp lifetime is very dependent of the start-up lamp conditions. The lamp filament current and temperature during warm-up and at steady-state operation are important to extend the life of a hot-cathode fluorescent lamp, and the preheating circuit is responsible for attending to the start-up lamp requirements. The usual solution for the preheating circuit used in self-oscillating electronic ballasts is simple and presents a low cost. However, the performance to extend the lamp lifetime is not the most effective. This paper presents an effective preheating circuit for self-oscillating electronic ballasts as an alternative to the usual solution.",
"title": ""
},
{
"docid": "9218a87b0fba92874e5f7917c925843a",
"text": "For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than 1% of our agent’s interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any which have been previously learned from human feedback.",
"title": ""
},
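The passage above learns a reward function from human preferences between pairs of trajectory segments. The sketch below fits a linear reward model to synthetic pairwise preferences with a Bradley-Terry/logistic link, which is one common way to formalise such comparisons; the features, data, and optimiser are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each trajectory segment is summarised by a feature vector (assumed linear reward).
dim, n_pairs = 5, 400
true_w = rng.normal(size=dim)

seg_a = rng.normal(size=(n_pairs, dim))
seg_b = rng.normal(size=(n_pairs, dim))
# Synthetic "human" preferences: segment A preferred when its true reward is higher.
prefer_a = (seg_a @ true_w > seg_b @ true_w).astype(float)

w = np.zeros(dim)
lr = 0.05
for _ in range(2000):
    # P(A preferred) under the Bradley-Terry model with the estimated reward weights w.
    logits = (seg_a - seg_b) @ w
    p_a = 1.0 / (1.0 + np.exp(-logits))
    grad = (seg_a - seg_b).T @ (p_a - prefer_a) / n_pairs   # gradient of the logistic loss
    w -= lr * grad

cos = w @ true_w / (np.linalg.norm(w) * np.linalg.norm(true_w))
print(f"cosine similarity between learned and true reward weights: {cos:.3f}")
```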
{
"docid": "4b19c99cc675803e9416a6cc6fc32b44",
"text": "Changepoints are abrupt variations in the generative parameters of a data sequence. Online detection of changepoints is useful in modelling and prediction of time series in application areas such as finance, biometrics, and robotics. While frequentist methods have yielded online filtering and prediction techniques, most Bayesian papers have focused on the retrospective segmentation problem. Here we examine the case where the model parameters before and after the changepoint are independent and we derive an online algorithm for exact inference of the most recent changepoint. We compute the probability distribution of the length of the current “run,” or time since the last changepoint, using a simple message-passing algorithm. Our implementation is highly modular so that the algorithm may be applied to a variety of types of data. We illustrate this modularity by demonstrating the algorithm on three different real-world data sets.",
"title": ""
},
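The run-length recursion described above can be written down compactly for Gaussian data with known variance and a constant hazard rate. The sketch below is one such minimal implementation; the hazard value, the conjugate prior, and the synthetic signal are illustrative assumptions.

```python
import numpy as np

def bocpd_gaussian(x, hazard=1/100, mu0=0.0, kappa0=1.0, sigma2=1.0):
    """Online run-length posterior for Gaussian data with known variance sigma2."""
    T = len(x)
    R = np.zeros((T + 1, T + 1))            # R[t, r] = P(run length r | x_1..x_t)
    R[0, 0] = 1.0
    mu, kappa = np.array([mu0]), np.array([kappa0])
    map_run = np.zeros(T, dtype=int)
    for t, xt in enumerate(x, start=1):
        # Predictive probability of xt under each current run-length hypothesis.
        pred_var = sigma2 * (1.0 + 1.0 / kappa)
        pred = np.exp(-0.5 * (xt - mu) ** 2 / pred_var) / np.sqrt(2 * np.pi * pred_var)
        growth = R[t - 1, :t] * pred * (1 - hazard)      # run continues
        cp = np.sum(R[t - 1, :t] * pred * hazard)        # changepoint: run resets to 0
        R[t, 1:t + 1] = growth
        R[t, 0] = cp
        R[t] /= R[t].sum()
        # Conjugate update of the mean's sufficient statistics for every hypothesis.
        mu = np.concatenate([[mu0], (kappa * mu + xt) / (kappa + 1)])
        kappa = np.concatenate([[kappa0], kappa + 1])
        map_run[t - 1] = np.argmax(R[t, :t + 1])
    return map_run

rng = np.random.default_rng(3)
signal = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 200)])
runs = bocpd_gaussian(signal)
# The MAP run length should drop back towards zero shortly after t = 200.
print("MAP run length around the true changepoint:", runs[195:215])
```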
{
"docid": "ae86bbc8d1c489b0ed1a75a5d76ed6e2",
"text": "Face recognition is an increasingly popular method for user authentication. However, face recognition is susceptible to playback attacks. Therefore, a reliable way to detect malicious attacks is crucial to the robustness of the system. We propose and validate a novel physics-based method to detect images recaptured from printed material using only a single image. Micro-textures present in printed paper manifest themselves in the specular component of the image. Features extracted from this component allows a linear SVM classifier to achieve 2.2% False Acceptance Rate and 13% False Rejection Rate (6.7% Equal Error Rate). We also show that the classifier can be generalizable to contrast enhanced recaptured images and LCD screen recaptured images without re-training, demonstrating the robustness of our approach.1",
"title": ""
},
{
"docid": "1f003b16c5343f0abdee26bcde53b86e",
"text": "Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a “rain component” and a “nonrain component” by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.",
"title": ""
},
{
"docid": "525182fb2d7c2d6b4e99317bc4e43fff",
"text": "This paper proposes a dual-rotor, toroidal-winding, axial-flux vernier permanent magnet (VPM) machine. By the combination of toroidal windings with the rotor-stator-rotor topology, the end winding length of the machine is significantly reduced when compared with the regular VPM machine. Based on the airgap permeance function, the back-EMF and torque expressions are derived, through which the nature of this machine is revealed. The influence of pole ratio (ratio of rotor pole pair number to stator pole pair number) and main geometric parameters such as slot opening, magnet thickness etc., on torque performance is then analytically investigated. Both the quasi-3-dimensional (quasi-3D) finite element analysis (FEA) and 3D FEA are applied to verify the theoretical analysis. With the current density of 4.2 A/mm2, the torque density of the proposed machine can reach 32.6 kNm/m3. A prototype has been designed and is in manufacturing process. Experimental validation will be presented in the future.",
"title": ""
},
{
"docid": "8bfef248f1fd496b09b0a236717a6baa",
"text": "The paper investigates whether there is a causal link between poverty or low education and participation in terrorist activities. After presenting a discussion of theoretical issues, we review evidence on the determinants of hate crimes, which are closely related to terrorism. This literature finds that the occurrence of hate crimes is largely independent of economic conditions. Next we analyze data on support for terrorism from public opinion polls conducted in the West Bank and Gaza Strip. These polls indicate that support for terrorism does not decrease among those with higher education and higher living standards. The core contribution of the paper is a statistical analysis of the determinants of participation in Hezbollah terrorist activities in Lebanon in the late 1980s and early 1990s. The evidence that we have assembled suggests that having a living standard above the poverty line or a secondary school or higher education is positively associated with participation in terrorism. Although our results are tentative and exploratory, they suggest that neither poverty nor education have a direct, causal impact on terrorism. The conclusion speculates on why economic conditions and education are largely unrelated to participation in, and support for, terrorism. *This paper was prepared for the World Bank’s Annual Bank Conference on Development Economics, April 2002. We thank Claude Berrebi for excellent research assistance, Eli Hurvitz, Ayoub Mustafa, Adib Nehmeh and Zeina el Khalil for providing data, and Joshua Angrist, Guido Imbens and Elie Tamer for helpful discussions.",
"title": ""
},
{
"docid": "9280eb309f7a6274eb9d75d898768f56",
"text": "In this paper, we consider the problem of event classification with multi-variate time series data consisting of heterogeneous (continuous and categorical) variables. The complex temporal dependencies between the variables combined with sparsity of the data makes the event classification problem particularly challenging. Most state-of-art approaches address this either by designing hand-engineered features or breaking up the problem over homogeneous variates. In this work, we propose and compare three representation learning algorithms over symbolized sequences which enables classification of heterogeneous time-series data using a deep architecture. The proposed representations are trained jointly along with the rest of the network architecture in an end-to-end fashion that makes the learned features discriminative for the given task. Experiments on three real-world datasets demonstrate the effectiveness of the proposed approaches.",
"title": ""
},
{
"docid": "f29ed3c9f3de56bd3e8ec7a24860043b",
"text": "Antennas implanted in a human body are largely applicable to hyperthermia and biotelemetry. To make practical use of antennas inside a human body, resonance characteristics of the implanted antennas and their radiation signature outside the body must be evaluated through numerical analysis and measurement setup. Most importantly, the antenna must be designed with an in-depth consideration given to its surrounding environment. In this paper, the spherical dyadic Green's function (DGF) expansions and finite-difference time-domain (FDTD) code are applied to analyze the electromagnetic characteristics of dipole antennas and low-profile patch antennas implanted in the human head and body. All studies to characterize and design the implanted antennas are performed at the biomedical frequency band of 402-405 MHz. By comparing the results from two numerical methodologies, the accuracy of the spherical DGF application for a dipole antenna at the center of the head is evaluated. We also consider how much impact a shoulder has on the performance of the dipole inside the head using FDTD. For the ease of the design of implanted low-profile antennas, simplified planar geometries based on a real human body are proposed. Two types of low-profile antennas, i.e., a spiral microstrip antenna and a planar inverted-F antenna, with superstrate dielectric layers are initially designed for medical devices implanted in the chest of the human body using FDTD simulations. The radiation performances of the designed low-profile antennas are estimated in terms of radiation patterns, radiation efficiency, and specific absorption rate. Maximum available power calculated to characterize the performance of a communication link between the designed antennas and an exterior antenna show how sensitive receivers are required to build a reliable telemetry link.",
"title": ""
},
{
"docid": "4083af4f4c546056123e8b4f0489e5cf",
"text": "In this paper, a multi-agent optimization algorithm (MAOA) is proposed for solving the resourceconstrained project scheduling problem (RCPSP). In the MAOA, multiple agents work in a grouped environment where each agent represents a feasible solution. The evolution of agents is achieved by using four main elements in the MAOA, including social behavior, autonomous behavior, self-learning, and environment adjustment. The social behavior includes the global one and the local one for performing exploration. Through the global social behavior, the leader agent in every group is guided by the global best leader. Through the local social behavior, each agent is guided by its own leader agent. Through the autonomous behavior, each agent exploits its own neighborhood. Through the self-learning, the best agent performs an intensified search to further exploit the promising region. Meanwhile, some agents perform migration among groups to adjust the environment dynamically for information sharing. The implementation of the MAOA for solving the RCPSP is presented in detail, and the effect of key parameters of the MAOA is investigated based on the Taguchi method of design of experiment. Numerical testing results are provided by using three sets of benchmarking instances. The comparisons to the existing algorithms demonstrate the effectiveness of the proposed MAOA for solving the RCPSP. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a8d9293972cda5f1961e46130a435a1a",
"text": "The vehicle parking system is designed to prevent usual problems associated with the parking. This system is designed to solve the problem of locating empty parking slot, congestion and indiscriminate parking. The system has been designed using VHDL and successfully implemented on CPLD (family-MAX V,Device-5M1270ZT144C5, Board-Krypton v1.2). A complete design and layout was drawn out and suitably implemented. The different modules were separately tested and then combined together as a working model of intelligent vehicle parking system. The results discussed in this paper shows that very less hardware is utilized (21%) on CPLD board, thus proving the system to be cost effective.",
"title": ""
},
{
"docid": "41df9902a1b88da0943ae8641541acc0",
"text": "The computational and robotic synthesis of language evolution is emerging as a new exciting field of research. The objective is to come up with precise operational models of how communities of agents, equipped with a cognitive apparatus, a sensori-motor system, and a body, can arrive at shared grounded communication systems. Such systems may have similar characteristics to animal communication or human language. Apart from its technological interest in building novel applications in the domain of human-robot or robot-robot interaction, this research is of interest to the many disciplines concerned with the origins and evolution of language and communication.",
"title": ""
},
{
"docid": "e4e60c0ea93a2297636c265c00277bb1",
"text": "Event studies, which look at stock market reactions to assess corporate business events, represent a relatively new research approach in the information systems field. In this paper we present a systematic review of thirty event studies related to information technology. After a brief discussion of each of the papers included in our review, we call attention to several limitations of the published studies and propose possible future research avenues.",
"title": ""
},
{
"docid": "08ebd914f39a284fb3ba6810bd1b0802",
"text": "The recent influx in generation, storage and availability of textual data presents researchers with the challenge of developing suitable methods for their analysis. Latent Semantic Analysis (LSA), a member of a family of methodological approaches that offers an opportunity to address this gap by describing the semantic content in textual data as a set of vectors, was pioneered by researchers in psychology, information retrieval, and bibliometrics. LSA involves a matrix operation called singular value decomposition, an extension of principal component analysis. LSA generates latent semantic dimensions that are either interpreted, if the researcher’s primary interest lies with the understanding of the thematic structure in the textual data, or used for purposes of clustering, categorisation and predictive modelling, if the interest lies with the conversion of raw text into numerical data, as a precursor to subsequent analysis. This paper reviews five methodological issues that need to be addressed by the researcher who will embark on LSA. We examine the dilemmas, present the choices, and discuss the considerations under which good methodological decisions are made. We illustrate these issues with the help of four small studies, involving the analysis of abstracts for papers published in the European Journal of Information Systems.",
"title": ""
},
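Since the passage above walks through the LSA pipeline (weighted term-document matrix followed by a truncated SVD), a minimal sketch may help; the tiny corpus, TF-IDF weighting, and the choice of two latent dimensions are assumptions made purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "user acceptance of information systems",
    "technology acceptance model and user intention",
    "singular value decomposition for text analysis",
    "latent semantic analysis of document collections",
]

# Term-document matrix with TF-IDF weighting (one common pre-LSA weighting scheme).
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)

# Truncated SVD = LSA; keep 2 latent semantic dimensions for this tiny corpus.
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = lsa.fit_transform(X)

print("document similarities in LSA space:")
print(cosine_similarity(doc_vectors).round(2))
print("share of variance captured:", lsa.explained_variance_ratio_.sum().round(2))
```

The resulting document vectors can be interpreted directly or passed on to clustering, categorisation, or predictive models, which mirrors the two uses of the latent dimensions discussed in the passage.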
{
"docid": "ffba00cedb97777174a418fbcfc2c687",
"text": "Quantum computing is moving rapidly to the point of deployment of technology. Functional quantum devices will require the ability to correct error in order to be scalable and effective. A leading choice of error correction, in particular for modular or distributed architectures, is the surface code with logical two-qubit operations realised via “lattice surgery”. These operations consist of “merges” and “splits” acting non-unitarily on the logical states and are not easily captured by standard circuit notation. This raises the question of how best to reason about lattice surgery in order efficiently to use quantum states and operations in architectures with complex resource management issues. In this paper we demonstrate that the operations of the ZX calculus, a form of quantum diagrammatic reasoning designed using category theory, match exactly the operations of lattice surgery. Red and green “spider” nodes match rough and smooth merges and splits, and follow the axioms of a dagger special associative Frobenius algebra. Some lattice surgery operations can require non-trivial correction operations, which are captured natively in the use of the ZX calculus in the form of ensembles of diagrams. We give a first taste of the power of the calculus as a language for surgery by considering two operations (magic state use and producing a CNOT ) and show how ZX diagram re-write rules give lattice surgery procedures for these operations that are novel, efficient, and highly configurable.",
"title": ""
},
{
"docid": "a91d7ff0fe514011bfb8e5c90f3e8781",
"text": "In sequential decision problems in an unknown environment, the decision maker often faces a dilemma over whether to explore to discover more about the environment, or to exploit current knowledge. We address the exploration-exploitation dilemma in a general setting encompassing both standard and contextualised bandit problems. The contextual bandit problem has recently resurfaced in attempts to maximise click-through rates in web based applications, a task with significant commercial interest. In this article we consider an approach of Thompson (1933) which makes use of samples from the posterior distributions for the instantaneous value of each action. We extend the approach by introducing a new algorithm, Optimistic Bayesian Sampling (OBS), in which the probability of playing an action increases with the uncertainty in the estimate of the action value. This results in better directed exploratory behaviour. We prove that, under unrestrictive assumptions, both approaches result in optimal behaviour with respect to the average reward criterion of Yang and Zhu (2002). We implement OBS and measure its performance in simulated Bernoulli bandit and linear regression domains, and also when tested with the task of personalised news article recommendation on a Yahoo! Front Page Today Module data set. We find that OBS performs competitively when compared to recently proposed benchmark algorithms and outperforms Thompson’s method throughout.",
"title": ""
},
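The passage above contrasts Thompson sampling with an optimistic variant that inflates uncertain estimates. The Bernoulli-bandit sketch below shows plain posterior sampling next to a simplified optimistic rule (never scoring an arm below its posterior mean); the arm probabilities and the optimism rule are assumptions standing in for the exact OBS algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
true_p = np.array([0.30, 0.45, 0.55])     # assumed Bernoulli arm probabilities
horizon = 5000

def run(optimistic):
    alpha = np.ones(len(true_p))          # Beta(1, 1) priors on each arm
    beta = np.ones(len(true_p))
    reward_total = 0.0
    for _ in range(horizon):
        samples = rng.beta(alpha, beta)   # one posterior sample per arm
        if optimistic:
            # Optimistic variant: never score an arm below its posterior mean.
            samples = np.maximum(samples, alpha / (alpha + beta))
        arm = int(np.argmax(samples))
        r = float(rng.random() < true_p[arm])
        alpha[arm] += r
        beta[arm] += 1.0 - r
        reward_total += r
    return reward_total

print("Thompson sampling reward:   ", run(optimistic=False))
print("optimistic sampling reward: ", run(optimistic=True))
```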
{
"docid": "46f623cea7c1f643403773fc5ed2508d",
"text": "The use of machine learning tools has become widespread in medical diagnosis. The main reason for this is the effective results obtained from classification and diagnosis systems developed to help medical professionals in the diagnosis phase of diseases. The primary objective of this study is to improve the accuracy of classification in medical diagnosis problems. To this end, studies were carried out on 3 different datasets. These datasets are heart disease, Parkinson’s disease (PD) and BUPA liver disorders. Key feature of these datasets is that they have a linearly non-separable distribution. A new method entitled k-medoids clustering-based attribute weighting (kmAW) has been proposed as a data preprocessing method. The support vector machine (SVM) was preferred in the classification phase. In the performance evaluation stage, classification accuracy, specificity, sensitivity analysis, f-measure, kappa statistics value and ROC analysis were used. Experimental results showed that the developed hybrid system entitled kmAW + SVM gave better results compared to other methods described in the literature. Consequently, this hybrid intelligent system can be used as a useful medical decision support tool.",
"title": ""
},
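The pipeline above weights attributes using k-medoids clustering before an SVM. The rough sketch below mimics that shape of pipeline with an assumed, simplified weighting rule (the ratio of global attribute means to medoid means, with medoids approximated from k-means centres); it is not the kmAW formula from the paper and is included only to show where such a pre-processing step sits relative to the classifier.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_moons(n_samples=600, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Clustering-based attribute weighting (assumed, simplified rule).
k = 2
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_tr)
# Approximate medoids as the training points closest to each k-means centre.
medoids = np.array([X_tr[np.argmin(np.linalg.norm(X_tr - c, axis=1))]
                    for c in km.cluster_centers_])
# One weight per attribute: absolute ratio of the global mean to the mean medoid value.
weights = np.abs(X_tr.mean(axis=0) / (medoids.mean(axis=0) + 1e-12))

def weight(X):
    return X * weights                     # rescale every attribute by its weight

# SVM classification with and without the weighting step.
svm_plain = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
svm_weighted = SVC(kernel="rbf", C=1.0, gamma="scale").fit(weight(X_tr), y_tr)

print("plain SVM accuracy:    ", accuracy_score(y_te, svm_plain.predict(X_te)))
print("weighted SVM accuracy: ", accuracy_score(y_te, svm_weighted.predict(weight(X_te))))
```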
{
"docid": "b876e62db8a45ab17d3a9d217e223eb7",
"text": "A study was conducted to evaluate user performance andsatisfaction in completion of a set of text creation tasks usingthree commercially available continuous speech recognition systems.The study also compared user performance on similar tasks usingkeyboard input. One part of the study (Initial Use) involved 24users who enrolled, received training and carried out practicetasks, and then completed a set of transcription and compositiontasks in a single session. In a parallel effort (Extended Use),four researchers used speech recognition to carry out real worktasks over 10 sessions with each of the three speech recognitionsoftware products. This paper presents results from the Initial Usephase of the study along with some preliminary results from theExtended Use phase. We present details of the kinds of usabilityand system design problems likely in current systems and severalcommon patterns of error correction that we found.",
"title": ""
}
] |
scidocsrr
|
3221601d288a0aa6dec9e57b545af649
|
Selective Supervision: Guiding Supervised Learning with Decision-Theoretic Active Learning
|
[
{
"docid": "174a35b2c608a7cbef4ca8183fc19d0e",
"text": "This paper shows how a text classifier’s need for labeled training documents can be reduced by taking advantage of a large pool of unlabeled documents. We modify the Query-by-Committee (QBC) method of active learning to use the unlabeled pool for explicitly estimating document density when selecting examples for labeling. Then active learning is combined with ExpectationMaximization in order to “fill in” the class labels of those documents that remain unlabeled. Experimental results show that the improvements to active learning require less than two-thirds as many labeled training examples as previous QBC approaches, and that the combination of EM and active learning requires only slightly more than half as many labeled training examples to achieve the same accuracy as either the improved active learning or EM alone.",
"title": ""
}
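The passage above combines committee-based uncertainty with an explicit density estimate over the unlabeled pool. The sketch below keeps only the core idea, density-weighted uncertainty sampling, with a single model standing in for the committee and no EM step; the dataset, density estimate, and query budget are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=5)

labeled = list(rng.choice(len(X), size=10, replace=False))   # small seed set
pool = [i for i in range(len(X)) if i not in labeled]
density = cosine_similarity(X).mean(axis=1)   # crude estimate of how "typical" each example is

for _ in range(20):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    # Uncertainty = entropy of the predicted class distribution.
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    # Density-weighted selection: prefer uncertain examples in dense regions.
    scores = entropy * density[pool]
    pick = pool[int(np.argmax(scores))]
    labeled.append(pick)                  # query the "oracle" (here: the known label)
    pool.remove(pick)

clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
print("pool accuracy after 20 density-weighted queries:",
      round(clf.score(X[pool], y[pool]), 3))
```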
] |
[
{
"docid": "11384f03036b5eff08b37513b4bda154",
"text": "In this paper, we consider the problem of formally verifying the safety of an autonomous robot equipped with a Neural Network (NN) controller that processes LiDAR images to produce control actions. Given a workspace that is characterized by a set of polytopic obstacles, our objective is to compute the set of safe initial conditions such that a robot trajectory starting from these initial conditions is guaranteed to avoid the obstacles. Our approach is to construct a finite state abstraction of the system and use standard reachability analysis over the finite state abstraction to compute the set of the safe initial states. The first technical problem in computing the finite state abstraction is to mathematically model the imaging function that maps the robot position to the LiDAR image. To that end, we introduce the notion of imaging-adapted sets as partitions of the workspace in which the imaging function is guaranteed to be affine. Based on this notion, and resting on efficient algorithms in the literature of computational geometry, we develop a polynomialtime algorithm to partition theworkspace into imaging-adapted sets along with computing the corresponding affine imaging functions. Given this workspace partitioning, a discrete-time linear dynamics of the robot, and a pre-trained NN controller with Rectified Linear Unit (ReLU) nonlinearity, the second technical challenge is to analyze the behavior of the neural network. To that end, and thanks to the ReLU functions being piecewise linear functions, we utilize a Satisfiability Modulo Convex (SMC) encoding to enumerate all the possible segments of different ReLUs. SMC solvers then use a Boolean satisfiability solver and a convex programming solver and decompose the problem into smaller subproblems. At each iteration, the Boolean satisfiability solver searches for a candidate assignment for the different ReLU segments while completely abstracting the robot dynamics. Convex programming is then used to check the feasibility of the proposed ReLU phases against the dynamic and imagining constraints, or generate succinct explanations for their infeasibility to reduce the search space. To accelerate this process, we develop a pre-processing algorithm that could rapidly prune the space feasible ReLU segments. Finally, we demonstrate the efficiency of the proposed algorithms using numerical simulations with increasing complexity of the neural network controller.",
"title": ""
},
{
"docid": "fd8574edb4fc609ade520fff36fac8cd",
"text": "A large share of websites today allow users to contribute and manage user-generated content. This content is often in textual form and involves names, terms, and keywords that can be ambiguous and difficult to interpret for other users. Semantic annotation can be used to tackle such issues, but this technique has been adopted by only a few websites. This may be attributed to a lack of a standard web input component that allows users to simply and efficiently annotate text. In this paper, we introduce an autocomplete-enabled annotation box that supports users in associating their text with DBpedia resources as they type. This web component can replace existing input fields and does not require particular user skills. Furthermore, it can be used by semantic web developers as a user interface for advanced semantic search and data processing back-ends. Finally, we validate the approach with a preliminary user study.",
"title": ""
},
{
"docid": "3fa1abd26925407bbf34716060a1a589",
"text": "Generating knowledge from data is an increasingly important activity. This process of data exploration consists of multiple tasks: data ingestion, visualization, statistical analysis, and storytelling. Though these tasks are complementary, analysts often execute them in separate tools. Moreover, these tools have steep learning curves due to their reliance on manual query specification. Here, we describe the design and implementation of DIVE, a web-based system that integrates state-of-the-art data exploration features into a single tool. DIVE contributes a mixed-initiative interaction scheme that combines recommendation with point-and-click manual specification, and a consistent visual language that unifies different stages of the data exploration workflow. In a controlled user study with 67 professional data scientists, we find that DIVE users were significantly more successful and faster than Excel users at completing predefined data visualization and analysis tasks.",
"title": ""
},
{
"docid": "23676a52e1ed03d7b5c751a9986a7206",
"text": "Considering the increasingly complex media landscape and diversity of use, it is important to establish a common ground for identifying and describing the variety of ways in which people use new media technologies. Characterising the nature of media-user behaviour and distinctive user types is challenging and the literature offers little guidance in this regard. Hence, the present research aims to classify diverse user behaviours into meaningful categories of user types, according to the frequency of use, variety of use and content preferences. To reach a common framework, a review of the relevant research was conducted. An overview and meta-analysis of the literature (22 studies) regarding user typology was established and analysed with reference to (1) method, (2) theory, (3) media platform, (4) context and year, and (5) user types. Based on this examination, a unified Media-User Typology (MUT) is suggested. This initial MUT goes beyond the current research literature, by unifying all the existing and various user type models. A common MUT model can help the Human–Computer Interaction community to better understand both the typical users and the diversification of media-usage patterns more qualitatively. Developers of media systems can match the users’ preferences more precisely based on an MUT, in addition to identifying the target groups in the developing process. Finally, an MUT will allow a more nuanced approach when investigating the association between media usage and social implications such as the digital divide. 2010 Elsevier Ltd. All rights reserved. 1 Difficulties in understanding media-usage behaviour have also arisen because of",
"title": ""
},
{
"docid": "d2cbeb1f764b5a574043524bb4a0e1a9",
"text": "The latest 6th generation Carrier Stored Trench Gate Bipolar Transistor (CSTBT™) provides state of the art optimization of conduction and switching losses in IGBT modules. Use of low values of resistance in series with the IGBT gate produces low turn-on losses but increases stress on the recovery of the free-wheel diode resulting in higher dv/dt and increased EMI. The latest modules also incorporate new, improved recovery free-wheel diode chips which improve this situation but detailed evaluation of the trade-off between turn-on loss and dv/dt performance is required. This paper describes the evaluation, test results, and a comparative analysis of dv/dt versus turn-on loss as a function of gate drive conditions for the 6th generation IGBT compared to the standard 5th generation module.",
"title": ""
},
{
"docid": "04e269feb0402a54317bd09f72e77144",
"text": "Fourier ptychography microscopy (FPM) is a lately developed technique, which achieves wide field, high resolution, and phase imaging, simultaneously. FPM stitches together the captured low-resolution images corresponding to angular varying illuminations in Fourier domain utilizing the concept of synthetic aperture and phase retrieval algorithms, which can surpass the space-bandwidth product limit of the objective lens and reconstruct a high-resolution complex image. In general FPM system, the LED source is important for the reconstructed quality and it is sensitive to the positions of each LED element. We find that the random positional deviations of each LED element can bring errors in reconstructed results, which is relative to a feedback parameter. To improve the reconstruction rate and correct random deviations, we combine an initial phase guess and a feedback parameter based on differential phase contrast and extended ptychographical iterative engine to propose an optimized iteration process for FPM. The simulated and experimental results indicate that the proposed method shows the reliability and validity towards the random deviations yet accelerates the convergence. More importantly, it is verified that this method can accelerate the convergence, reduce the requirement of LED array accuracy, and improve the quality of the reconstructed results.",
"title": ""
},
{
"docid": "4322f123ff6a1bd059c41b0037bac09b",
"text": "Nowadays, as a beauty-enhancing product, clothing plays an important role in human's social life. In fact, the key to a proper outfit usually lies in the harmonious clothing matching. Nevertheless, not everyone is good at clothing matching. Fortunately, with the proliferation of fashion-oriented online communities, fashion experts can publicly share their fashion tips by showcasing their outfit compositions, where each fashion item (e.g., a top or bottom) usually has an image and context metadata (e.g., title and category). Such rich fashion data offer us a new opportunity to investigate the code in clothing matching. However, challenges co-exist with opportunities. The first challenge lies in the complicated factors, such as color, material and shape, that affect the compatibility of fashion items. Second, as each fashion item involves multiple modalities (i.e., image and text), how to cope with the heterogeneous multi-modal data also poses a great challenge. Third, our pilot study shows that the composition relation between fashion items is rather sparse, which makes traditional matrix factorization methods not applicable. Towards this end, in this work, we propose a content-based neural scheme to model the compatibility between fashion items based on the Bayesian personalized ranking (BPR) framework. The scheme is able to jointly model the coherent relation between modalities of items and their implicit matching preference. Experiments verify the effectiveness of our scheme, and we deliver deep insights that can benefit future research.",
"title": ""
},
{
"docid": "dc5bb80426556e3dd9090a705d3e17b4",
"text": "OBJECTIVES\nThe aim of this study was to locate the scientific literature dealing with addiction to the Internet, video games, and cell phones and to characterize the pattern of publications in these areas.\n\n\nMETHODS\nOne hundred seventy-nine valid articles were retrieved from PubMed and PsycINFO between 1996 and 2005 related to pathological Internet, cell phone, or video game use.\n\n\nRESULTS\nThe years with the highest numbers of articles published were 2004 (n = 42) and 2005 (n = 40). The most productive countries, in terms of number of articles published, were the United States (n = 52), China (n = 23), the United Kingdom (n = 17), Taiwan (n = 13), and South Korea (n = 9). The most commonly used language was English (65.4%), followed by Chinese (12.8%) and Spanish (4.5%). Articles were published in 96 different journals, of which 22 published 2 or more articles. The journal that published the most articles was Cyberpsychology & Behavior (n = 41). Addiction to the Internet was the most intensely studied (85.3%), followed by addiction to video games (13.6%) and cell phones (2.1%).\n\n\nCONCLUSIONS\nThe number of publications in this area is growing, but it is difficult to conduct precise searches due to a lack of clear terminology. To facilitate retrieval, bibliographic databases should include descriptor terms referring specifically to Internet, video games, and cell phone addiction as well as to more general addictions involving communications and information technologies and other behavioral addictions.",
"title": ""
},
{
"docid": "ede8a7a2ba75200dce83e17609ec4b5b",
"text": "We present a complimentary objective for training recurrent neural networks (RNN) with gating units that helps with regularization and interpretability of the trained model. Attention-based RNN models have shown success in many difficult sequence to sequence classification problems with long and short term dependencies, however these models are prone to overfitting. In this paper, we describe how to regularize these models through an L1 penalty on the activation of the gating units, and show that this technique reduces overfitting on a variety of tasks while also providing to us a human-interpretable visualization of the inputs used by the network. These tasks include sentiment analysis, paraphrase recognition, and question answering.",
"title": ""
},
{
"docid": "58612d7c22f6bd0bf1151b7ca5da0f7c",
"text": "In this paper we present a novel method for clustering words in micro-blogs, based on the similarity of the related temporal series. Our technique, named SAX*, uses the Symbolic Aggregate ApproXimation algorithm to discretize the temporal series of terms into a small set of levels, leading to a string for each. We then define a subset of “interesting” strings, i.e. those representing patterns of collective attention. Sliding temporal windows are used to detect co-occurring clusters of tokens with the same or similar string. To assess the performance of the method we first tune the model parameters on a 2-month 1 % Twitter stream, during which a number of world-wide events of differing type and duration (sports, politics, disasters, health, and celebrities) occurred. Then, we evaluate the quality of all discovered events in a 1-year stream, “googling” with the most frequent cluster n-grams and manually assessing how many clusters correspond to published news in the same temporal slot. Finally, we perform a complexity evaluation and we compare SAX* with three alternative methods for event discovery. Our evaluation shows that SAX* is at least one order of magnitude less complex than other temporal and non-temporal approaches to micro-blog clustering.",
"title": ""
},
{
"docid": "ce86579be146c2f4b19224c0857eff1e",
"text": "A floating raft structure sensor using piezoelectric polyvinylidene fluoride (PVDF) film as sensitive material for grain loss detecting of combine harvester was presented in this paper. Double-layer vibration isolator was proposed to eliminate the vibration influence of combine harvester. Signal processing circuit which composed of a charge amplifier, band-pass filter, envelope detector, absolute value amplifier and square wave generator was constructed to detect the grain impact signal. According to the impact duration of grain on PVDF film, critical frequencies of band-pass filter were determined and the impact responses of filter under different impact durations were numerical simulated. Then, grain detecting experiments were carried out by assembling the sensor on the rear of vibrating cleaning sieve, the results showed that the grain impact can be identified effectively from vibrating noise and the sensor can output a standard square voltage signal while a grain impact is detected.",
"title": ""
},
{
"docid": "2ad906a257b468909ac807aeab5f69e6",
"text": "In this paper, we are implementing facial monitoring system by embedding face detection and face tracking algorithm found in MATLAB with the GPIO pins of Raspberry pi B by using RasPi command such that the array of LEDS follows the facial movement by detecting the face using Haar classifier, tracking its position in the range assigned using the eigenfeatures of the face, which are detected by eigenvectors of MATLAB and by face tracking, which is been carried by geometrical transformation so that motion and gesture of the face can be followed. By doing so we are opening up new way of facial tracking on a live streaming by the help of Viola Jones algorithm and an IR camera.",
"title": ""
},
{
"docid": "8f4ce2d2ec650a3923d27c3188f30f38",
"text": "Synthetic aperture radar (SAR) interferometry is a modern efficient technique that allows reconstructing the height profile of the observed scene. However, apart for the presence of critical nonlinear inversion steps, particularly crucial in abrupt topography scenarios, it does not allow one to separate different scattering mechanisms in the elevation (height) direction within the ground pixel. Overlay of scattering at different elevations in the same azimuth-range resolution cell can be due either to the penetration of the radiation below the surface or to perspective ambiguities caused by the side-looking geometry. Multibaseline three-dimensional (3-D) SAR focusing allows overcoming such a limitation and has thus raised great interest in the recent research. First results with real data have been only obtained in the laboratory and with airborne systems, or with limited time-span and spatial-coverage spaceborne data. This work presents a novel approach for the tomographic processing of European Remote Sensing satellite (ERS) real data for extended scenes and long time span. Besides facing problems common to the airborne case, such as the nonuniformly spaced passes, this processing requires tackling additional difficulties specific to the spaceborne case, in particular a space-varying phase calibration of the data due to atmospheric variations and possible scene deformations occurring for years-long temporal spans. First results are presented that confirm the capability of ERS multipass tomography to resolve multiple targets within the same azimuth-range cell and to map the 3-D scattering properties of the illuminated scene.",
"title": ""
},
{
"docid": "d9710b9a214d95c572bdc34e1fe439c4",
"text": "This paper presents a new method, capable of automatically generating attacks on binary programs from software crashes. We analyze software crashes with a symbolic failure model by performing concolic executions following the failure directed paths, using a whole system environment model and concrete address mapped symbolic memory in S2 E. We propose a new selective symbolic input method and lazy evaluation on pseudo symbolic variables to handle symbolic pointers and speed up the process. This is an end-to-end approach able to create exploits from crash inputs or existing exploits for various applications, including most of the existing benchmark programs, and several large scale applications, such as a word processor (Microsoft office word), a media player (mpalyer), an archiver (unrar), or a pdf reader (foxit). We can deal with vulnerability types including stack and heap overflows, format string, and the use of uninitialized variables. Notably, these applications have become software fuzz testing targets, but still require a manual process with security knowledge to produce mitigation-hardened exploits. Using this method to generate exploits is an automated process for software failures without source code. The proposed method is simpler, more general, faster, and can be scaled to larger programs than existing systems. We produce the exploits within one minute for most of the benchmark programs, including mplayer. We also transform existing exploits of Microsoft office word into new exploits within four minutes. The best speedup is 7,211 times faster than the initial attempt. For heap overflow vulnerability, we can automatically exploit the unlink() macro of glibc, which formerly requires sophisticated hacking efforts.",
"title": ""
},
{
"docid": "cae2b62afbecedc995612ed3a710e9d9",
"text": "Computational Grids, emerging as an infrastructure for next generation computing, enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. As the resources in the Grid are heterogeneous and geographically distributed with varying availability and a variety of usage and cost policies for diverse users at different times and, priorities as well as goals that vary with time. The management of resources and application scheduling in such a large and distributed environment is a complex task. This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling. It proposes an architectural framework that supports resource trading and quality of services based scheduling. It enables the regulation of supply and demand for resources and provides an incentive for resource owners for participating in the Grid and motives the users to trade-off between the deadline, budget, and the required level of quality of service. The thesis demonstrates the capability of economicbased systems for peer-to-peer distributed computing by developing users’ quality-of-service requirements driven scheduling strategies and algorithms. It demonstrates their effectiveness by performing scheduling experiments on the World-Wide Grid for solving parameter sweep applications.",
"title": ""
},
{
"docid": "365fea34e5b3bbba808c23070b2cc533",
"text": "Clustering techniques have more importance in data mining especially when the data size is very large. It is widely used in the fields including pattern recognition system, machine learning algorithms, analysis of images, information retrieval and bio-informatics. Different clustering algorithms are available such as Expectation Maximization (EM), Cobweb, FarthestFirst, OPTICS, SimpleKMeans etc. SimpleKMeans clustering is a simple clustering algorithm. It partitions n data tuples into k groups such that each entity in the cluster has nearest mean. This paper is about the implementation of the clustering techniques using WEKA interface. This paper includes a detailed analysis of various clustering techniques with the different standard online data sets. Analysis is based on the multiple dimensions which include time to build the model, number of attributes, number of iterations, number of clusters and error rate.",
"title": ""
},
{
"docid": "dd5f9767c434c567e4c5948473b36958",
"text": "The rapid emergence of head-mounted devices such as the Microsoft Holo-lens enables a wide variety of continuous vision applications. Such applications often adopt deep-learning algorithms such as CNN and RNN to extract rich contextual information from the first-person-view video streams. Despite the high accuracy, use of deep learning algorithms in mobile devices raises critical challenges, i.e., high processing latency and power consumption. In this paper, we propose DeepMon, a mobile deep learning inference system to run a variety of deep learning inferences purely on a mobile device in a fast and energy-efficient manner. For this, we designed a suite of optimization techniques to efficiently offload convolutional layers to mobile GPUs and accelerate the processing; note that the convolutional layers are the common performance bottleneck of many deep learning models. Our experimental results show that DeepMon can classify an image over the VGG-VeryDeep-16 deep learning model in 644ms on Samsung Galaxy S7, taking an important step towards continuous vision without imposing any privacy concerns nor networking cost.",
"title": ""
},
{
"docid": "63f2caff9f598cf493d6c8a044000aa3",
"text": "There are both public health and food industry initiatives aimed at increasing breakfast consumption among children, particularly the consumption of ready-to-eat cereals. The purpose of this study was to determine whether there were identifiable differences in nutritional quality between cereals that are primarily marketed to children and cereals that are not marketed to children. Of the 161 cereals identified between January and February 2006, 46% were classified as being marketed to children (eg, packaging contained a licensed character or contained an activity directed at children). Multivariate analyses of variance were used to compare children's cereals and nonchildren's cereals with respect to their nutritional content, focusing on nutrients required to be reported on the Nutrition Facts panel (including energy). Compared to nonchildren's cereals, children's cereals were denser in energy, sugar, and sodium, but were less dense in fiber and protein. The proportion of children's and nonchildren's cereals that did and did not meet national nutritional guidelines for foods served in schools were compared using chi2analysis. The majority of children's cereals (66%) failed to meet national nutrition standards, particularly with respect to sugar content. t tests were used to compare the nutritional quality of children's cereals with nutrient-content claims and health claims to those without such claims. Although the specific claims were generally justified by the nutritional content of the product, there were few differences with respect to the overall nutrition profile. Overall, there were important differences in nutritional quality between children's cereals and nonchildren's cereals. Dietary advice for children to increase consumption of ready-to-eat breakfast cereals should identify and recommend those cereals with the best nutrient profiles.",
"title": ""
},
{
"docid": "440e45de4d13e89e3f268efa58f8a51a",
"text": "This letter describes the concept, design, and measurement of a low-profile integrated microstrip antenna for dual-band applications. The antenna operates at both the GPS L1 frequency of 1.575 GHz with circular polarization and 5.88 GHz with a vertical linear polarization for dedicated short-range communication (DSRC) application. The antenna is low profile and meets stringent requirements on pattern/polarization performance in both bands. The design procedure is discussed, and full measured data are presented.",
"title": ""
},
{
"docid": "af3a87d82c1f11a8a111ed4276020161",
"text": "In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.",
"title": ""
}
] |
scidocsrr
|
fa32e35e324ade2228b10d82ed31f3d2
|
A Novel Miniaturized Vivaldi Antenna Using Tapered Slot Edge With Resonant Cavity Structure for Ultrawideband Applications
|
[
{
"docid": "c0600c577850c8286f816396ead9649f",
"text": "A parameter study of dual-polarized tapered slot antenna (TSA) arrays shows the key features that affect the wide-band and widescan performance of these arrays. The overall performance can be optimized by judiciously choosing a combination of parameters. In particular, it is found that smaller circular slot cavities terminating the bilateral slotline improve the performance near the low end of the operating band, especially when scanning in the -plane. The opening rate of the tapered slotline mainly determines the mid-band performance and it is possible to choose an opening rate to obtain balanced overall performance in the mid-band. Longer tapered slotline is shown to increase the bandwidth, especially in the lower end of the operating band. Finally, it is shown that the -plane anomalies are affected by the array element spacing. A design example demonstrates that the results from the parameter study can be used to design a dual-polarized TSA array with about 4.5 : 1 bandwidth for a scan volume of not less than = 45 from broadside in all planes.",
"title": ""
},
{
"docid": "6acc820f32c74ff30730faca2eff9f8f",
"text": "The conventional Vivaldi antenna is known for its ultrawideband characteristic, but low directivity. In order to improve the directivity, a double-slot structure is proposed to design a new Vivaldi antenna. The two slots are excited in uniform amplitude and phase by using a T-junction power divider. The double-slot structure can generate plane-like waves in the E-plane of the antenna. As a result, directivity of the double-slot Vivaldi antenna is significantly improved by comparison to a conventional Vivaldi antenna of the same size. The measured results show that impedance bandwidth of the double-slot Vivaldi antenna is from 2.5 to 15 GHz. Gain and directivity of the proposed antenna is considerably improved at the frequencies above 6 GHz. Furthermore, the main beam splitting at high frequencies of the conventional Vivaldi antenna on thick dielectric substrates is eliminated by the double-slot structure.",
"title": ""
}
] |
[
{
"docid": "8c0a307935858ab631b57e65d1b5f45b",
"text": "Firsthand observations trace the current state, future potential, and obstacles ahead for Japanese academia.",
"title": ""
},
{
"docid": "4eca3018852fd3107cb76d1d95f76a0a",
"text": "Within the past decade, empirical evidence has emerged supporting the use of Acceptance and Commitment Therapy (ACT) targeting shame and self-stigma. Little is known about the role of self-compassion in ACT, but evidence from other approaches indicates that self-compassion is a promising means of reducing shame and self-criticism. The ACT processes of defusion, acceptance, present moment, values, committed action, and self-as-context are to some degree inherently self-compassionate. However, it is not yet known whether the self-compassion inherent in the ACT approach explains ACT’s effectiveness in reducing shame and stigma, and/or whether focused self-compassion work may improve ACT outcomes for highly self-critical, shame-prone people. We discuss how ACT for shame and stigma may be enhanced by existing approaches specifically targeting self-compassion.",
"title": ""
},
{
"docid": "fc0327de912ec8ef6ca33467d34bcd9e",
"text": "In this paper, a progressive fingerprint image compression (for storage or transmission) using edge detection scheme is adopted. The image is decomposed into two components. The first component is the primary component, which contains the edges, the other component is the secondary component, which contains the textures and the features. In this paper, a general grasp for the image is reconstructed in the first stage at a bit rate of 0.0223 bpp for Sample (1) and 0.0245 bpp for Sample (2) image. The quality of the reconstructed images is competitive to the 0.75 bpp target bit set by FBI standard. Also, the compression ratio and the image quality of this algorithm is competitive to other existing methods given in the literature [6]-[9]. The compression ratio for our algorithm is about 45:1 (0.180 bpp).",
"title": ""
},
{
"docid": "1389323613225897330d250e9349867b",
"text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .",
"title": ""
},
{
"docid": "1da3afbc46ee3983bcd1f58470950551",
"text": "Alternating minimization is a widely used and empirically successful heuristic for matrix completion and related low-rank optimization problems. Theoretical guarantees for alternating minimization have been hard to come by and are still poorly understood. This is in part because the heuristic is iterative and non-convex in nature. We give a new algorithm based on alternating minimization that provably recovers an unknown low-rank matrix from a random subsample of its entries under a standard incoherence assumption. Our results reduce the sample size requirements of the alternating minimization approach by at least a quartic factor in the rank and the condition number of the unknown matrix. These improvements apply even if the matrix is only close to low-rank in the Frobenius norm. Our algorithm runs in nearly linear time in the dimension of the matrix and, in a broad range of parameters, gives the strongest sample bounds among all subquadratic time algorithms that we are aware of. Underlying our work is a new robust convergence analysis of the well-known Power Method for computing the dominant singular vectors of a matrix. This viewpoint leads to a conceptually simple understanding of alternating minimization. In addition, we contribute a new technique for controlling the coherence of intermediate solutions arising in iterative algorithms based on a smoothed analysis of the QR factorization. These techniques may be of interest beyond their application here.",
"title": ""
},
{
"docid": "7dcc565c03660fbc1da90164a5cba448",
"text": "Do continuous word embeddings encode any useful information for constituency parsing? We isolate three ways in which word embeddings might augment a stateof-the-art statistical parser: by connecting out-of-vocabulary words to known ones, by encouraging common behavior among related in-vocabulary words, and by directly providing features for the lexicon. We test each of these hypotheses with a targeted change to a state-of-the-art baseline. Despite small gains on extremely small supervised training sets, we find that extra information from embeddings appears to make little or no difference to a parser with adequate training data. Our results support an overall hypothesis that word embeddings import syntactic information that is ultimately redundant with distinctions learned from treebanks in other ways.",
"title": ""
},
{
"docid": "84e47d33a895afd0fab28784c112d8f4",
"text": "Hybrid analog/digital precoding is a promising technique to reduce the hardware cost of radio-frequency components compared with the conventional full-digital precoding approach in millimeter-wave multiple-input multiple output systems. However, the large antenna dimensions of the hybrid precoder design makes it difficult to acquire an optimal full-digital precoder. Moreover, it also requires matrix inversion, which leads to high complexity in the hybrid precoder design. In this paper, we propose a low-complexity optimal full-digital precoder acquisition algorithm, named beamspace singular value decomposition (SVD) that saves power for the base station and user equipment. We exploit reduced-dimension beamspace channel state information (CSI) given by compressive sensing (CS) based channel estimators. Then, we propose a CS-assisted beamspace hybrid precoding (CS-BHP) algorithm that leverages CS-based CSI. Simulation results show that the proposed beamspace-SVD reduces complexity by 99.4% compared with an optimal full-digital precoder acquisition using full-dimension SVD. Furthermore, the proposed CS-BHP reduces the complexity of the state-of-the-art approach by 99.6% and has less than 5% performance loss compared with an optimal full-digital precoder.",
"title": ""
},
{
"docid": "f06a5ef2345be71f2dbd1a21c154f111",
"text": "This article reviews literature on the prevalence of mental health problems among women with a history of intimate partner violence. The weighted mean prevalence of mental health problems among battered women was 47.6% in 18 studies of depression, 17.9% in 13 studies of suicidality, 63.8% in 11 studies of posttraumatic stress disorder (PTSD), 18.5% in 10 studies of alcohol abuse, and 8.9% in four studies of drug abuse. These were typically inconsistent across studies. Weighted mean odds ratios representing associations of these problems with violence ranged from 3.55 to 5.62, and were typically consistent across studies. Variability was accounted for by differences in sampling frames. Dose-response relationships of violence to depression and PTSD were observed. Although research has not addressed many criteria for causal inferences, the existing research is consistent with the hypothesis that intimate partner violence increases risk for mental health problems. The appropriate way to conceptualize these problems deserves careful attention.",
"title": ""
},
{
"docid": "2b42cf158d38153463514ed7bc00e25f",
"text": "The Disney Corporation made their first princess film in 1937 and has continued producing these movies. Over the years, Disney has received criticism for their gender interpretations and lack of racial diversity. This study will examine princess films from the 1990’s and 2000’s and decide whether race or time has an effect on the gender role portrayal of each character. By using a content analysis, this study identified the changes with each princess. The findings do suggest the princess characters exhibited more egalitarian behaviors over time. 1 The Disney Princess franchise began in 1937 with Snow White and the Seven Dwarfs and continues with the most recent film was Tangled (Rapunzel) in 2011. In past years, Disney film makers were criticized by the public audience for lack of ethnic diversity. In 1995, Disney introduced Pocahontas and three years later Mulan emerged creating racial diversity to the collection. Eleven years later, Disney released The Princess and the Frog (2009). The ongoing question is whether diverse princesses maintain the same qualities as their European counterparts. Walt Disney’s legacy lives on, but viewers are still curious about the all white princess collection which did not gain racial counterparts until 58 years later. It is important to recognize the role the Disney Corporation plays in today’s society. The company has several princesses’ films with matching merchandise. Parents purchase the items for their children and through film and merchandise, children are receiving messages such as how a woman ought to act, think or dress. Gender construction in Disney princess films remains important because of the messages it sends to children. We need to know whether gender roles presented in the films downplay the intellect of a woman in a modern society or whether Disney princesses are constricted to the female gender roles such as submissiveness and nurturing. In addition, we need to consider whether the messages are different for diverse princesses. The purpose of the study is to investigate the changes in gender construction in Disney princess characters related to the race of the character. This research also examines how gender construction of Disney princess characters changed from the 1900’s to 2000’s. A comparative content analysis will analyze gender role differences between women of color and white princesses. In particular, the study will ask whether race does matter in the gender roles revealed among each female character. By using social construction perspectives, Disney princesses of color were more masculine, but the most recent films became more egalitarian. 2 LITERATURE REVIEW Women in Disney film Davis (2006) examined women in Disney animated films by creating three categories: The Classic Years, The Middle Era, and The Eisner Era. The Classic Years, 19371967 were described as the beginning of Disney. During this period, women were rarely featured alone in films, but held central roles in the mid-1930s (Davis 2006:84). Three princess films were released and the characters carried out traditional feminine roles such as domestic work and passivity. Davis (2006) argued the princesses during The Classic Era were the least active and dynamic. The Middle Era, 1967-1988, led to a downward spiral for the company after the deaths of Walt and Roy Disney. The company faced increased amounts of debt and only eight Disney films were produced. The representation of women remained largely static (Davis 2006:137). 
The Eisner Era, 1989-2005, represented a revitalization of Disney with the release of 12 films with leading female roles. Based on the eras, Davis argued there was a shift after Walt Disney’s death which allowed more women in leading roles and released them from traditional gender roles. Independence was a new theme in this era allowing women to be selfsufficient unlike women in The Classic Era who relied on male heroines. Gender Role Portrayal in films England, Descartes, and Meek (2011) examined the Disney princess films and challenged the ideal of traditional gender roles among the prince and princess characters. The study consisted of all nine princess films divided into three categories based on their debut: early, middle and most current. The researchers tested three hypotheses: 1) gender roles among males and female characters would differ, 2) males would rescue or attempt to rescue the princess, and 3) characters would display more egalitarian behaviors over time (England, et al. 2011:557-58). The researchers coded traits as masculine and feminine. They concluded that princesses 3 displayed a mixture of masculine and feminine characteristics. These behaviors implied women are androgynous beings. For example, princesses portrayed bravery almost twice as much as princes (England, et al. 2011). The findings also showed males rescued women more and that women were rarely shown as rescuers. Overall, the data indicated Disney princess films had changed over time as women exhibited more masculine behaviors in more recent films. Choueiti, Granados, Pieper, and Smith (2010) conducted a content analysis regarding gender roles in top grossing Grated films. The researchers considered the following questions: 1) What is the male to female ratio? 2) Is gender related to the presentation of the character demographics such as role, type, or age? and 3) Is gender related to the presentation of character’s likeability, and the equal distribution of male and females from 1990-2005(Choueiti et al. 2010:776-77). The researchers concluded that there were more male characters suggesting the films were patriarchal. However, there was no correlation with demographics of the character and males being viewed as more likeable. Lastly, female representation has slightly decreased from 214 characters or 30.1% in 1990-94 to 281 characters or 29.4% in 2000-2004 (Choueiti et al. 2010:783). From examining gender role portrayals, females have become androgynous while maintaining minimal roles in animated film.",
"title": ""
},
{
"docid": "8c02cff9b8d615e8f7f8b8b3d8e2e88f",
"text": "In recent years a considerable amount of work in graphics and geometric optimization used tools based on the Laplace-Beltrami operator on a surface. The applications of the Laplacian include mesh editing, surface smoothing, and shape interpolations among others. However, it has been shown [13, 24, 26] that the popular cotangent approximation schemes do not provide convergent point-wise (or even L2) estimates, while many applications rely on point-wise estimation. Existence of such schemes has been an open question [13].\n In this paper we propose the first algorithm for approximating the Laplace operator of a surface from a mesh with point-wise convergence guarantees applicable to arbitrary meshed surfaces. We show that for a sufficiently fine mesh over an arbitrary surface, our mesh Laplacian is close to the Laplace-Beltrami operator on the surface at every point of the surface.\n Moreover, the proposed algorithm is simple and easily implementable. Experimental evidence shows that our algorithm exhibits convergence empirically and compares favorably with cotangentbased methods in providing accurate approximation of the Laplace operator for various meshes.",
"title": ""
},
{
"docid": "a2e2117e3d2a01f2f28835350ba1d732",
"text": "Previously, several natural integral transforms of Minkowski question mark function F (x) were introduced by the author. Each of them is uniquely characterized by certain regularity conditions and the functional equation, thus encoding intrinsic information about F (x). One of them the dyadic period function G(z) was defined via certain transcendental integral. In this paper we introduce a family of “distributions” Fp(x) for R p ≥ 1, such that F1(x) is the question mark function and F2(x) is a discrete distribution with support on x = 1. Thus, all the aforementioned integral transforms are calculated for such p. As a consequence, the generating function of moments of F p(x) satisfies the three term functional equation. This has an independent interest, though our main concern is the information it provides about F (x). This approach yields certain explicit series for G(z). This also solves the problem in expressing the moments of F (x) in closed form.",
"title": ""
},
{
"docid": "9c995d980b0b38c7a6cfb2ac56c27b58",
"text": "To solve the problems of heterogeneous data types and large amount of calculation in making decision for big data, an optimized distributed OLAP system for big data is proposed in this paper. The system provides data acquisition for different data sources, and supports two types of OLAP engines, Impala and Kylin. First of all, the architecture of the system is proposed, consisting of four modules, data acquisition, data storage, OLAP analysis and data visualization, and the specific implementation of each module is descripted in great detail. Then the optimization of the system is put forward, which is automatic metadata configuration and the cache for OLAP query. Finally, the performance test of the system is conduct to demonstrate that the efficiency of the system is significantly better than the traditional solution.",
"title": ""
},
{
"docid": "35ceea0ec94578591c7e13464cde9622",
"text": "Autoimmune uveitis is a complex group of sight-threatening diseases that arise without a known infectious trigger. The disorder is often associated with immunological responses to retinal proteins. Experimental models of autoimmune uveitis targeting retinal proteins have led to a better understanding of the basic immunological mechanisms involved in the pathogenesis of uveitis and have provided a template for the development of novel therapies. The disease in humans is believed to be T cell-dependent, as clinical uveitis is ameliorated by T cell-targeting therapies. The roles of T helper 1 (Th1) and Th17 cells have been major topics of interest in the past decade. Studies in uveitis patients and experiments in animal models have revealed that Th1 and Th17 cells can both be pathogenic effectors, although, paradoxically, some cytokines produced by these subsets can also be protective, depending on when and where they are produced. The major proinflammatory as well as regulatory cytokines in uveitis, the therapeutic approaches, and benefits of targeting these cytokines will be discussed in this review.",
"title": ""
},
{
"docid": "527e70797ec7931687d17d26f1f64428",
"text": "We experimentally demonstrate the focusing of visible light with ultra-thin, planar metasurfaces made of concentrically perforated, 30-nm-thick gold films. The perforated nano-voids—Babinet-inverted (complementary) nano-antennas—create discrete phase shifts and form a desired wavefront of cross-polarized, scattered light. The signal-to-noise ratio in our complementary nano-antenna design is at least one order of magnitude higher than in previous metallic nano-antenna designs. We first study our proof-of-concept ‘metalens’ with extremely strong focusing ability: focusing at a distance of only 2.5 mm is achieved experimentally with a 4-mm-diameter lens for light at a wavelength of 676 nm. We then extend our work with one of these ‘metalenses’ and achieve a wavelength-controllable focal length. Optical characterization of the lens confirms that switching the incident wavelength from 676 to 476 nm changes the focal length from 7 to 10 mm, which opens up new opportunities for tuning and spatially separating light at different wavelengths within small, micrometer-scale areas. All the proposed designs can be embedded on-chip or at the end of an optical fiber. The designs also all work for two orthogonal, linear polarizations of incident light. Light: Science & Applications (2013) 2, e72; doi:10.1038/lsa.2013.28; published online 26 April 2013",
"title": ""
},
{
"docid": "0e1cf85f3760a250408d1ad70f49f9ab",
"text": "We construct a multilingual common semantic space based on distributional semantics, where words from multiple languages are projected into a shared space via which all available resources and knowledge can be shared across multiple languages. Beyond word alignment, we introduce multiple cluster-level alignments and enforce the word clusters to be consistently distributed across multiple languages. We exploit three signals for clustering: (1) neighbor words in the monolingual word embedding space; (2) character-level information; and (3) linguistic properties (e.g., apposition, locative suffix) derived from linguistic structure knowledge bases available for thousands of languages. We introduce a new cluster-consistent correlational neural network to construct the common semantic space by aligning words as well as clusters. Intrinsic evaluation on monolingual and multilingual QVEC tasks shows our approach achieves significantly higher correlation with linguistic features which are extracted from manually crafted lexical resources than state-of-the-art multi-lingual embedding learning methods do. Using low-resource language name tagging as a case study for extrinsic evaluation, our approach achieves up to 14.6% absolute F-score gain over the state of the art on cross-lingual direct transfer. Our approach is also shown to be robust even when the size of bilingual dictionary is small.1",
"title": ""
},
{
"docid": "a2c93e5497ab4e0317b9e86db6d31dbb",
"text": "Digital photographs are often used in treatment monitoring for home care of less advanced pressure ulcers. We investigated assessment agreement when stage III and IV pressure ulcers in individuals with spinal cord injury were evaluated in person and with the use of digital photographs. Two wound-care nurses assessed 31 wounds among 15 participants. One nurse assessed all wounds in person, while the other used digital photographs. Twenty-four wound description categories were applied in the nurses' assessments. Kappa statistics were calculated to investigate agreement beyond chance (p < or = 0.05). For 10 randomly selected \"double-rated wounds,\" both nurses applied both assessment methods. Fewer categories were evaluated for the double-rated wounds, because some categories were chosen infrequently and agreement could not be measured. Interrater agreement with the two methods was observed for 12 of the 24 categories (50.0%). However, of the 12 categories with agreement beyond chance, agreement was only \"slight\" (kappa = 0-0.20) or \"fair\" (kappa = 0.21-0.40) for 6 categories. The highest agreement was found for the presence of undermining (kappa = 0.853, p < 0.001). Interrater agreement was similar to intramethod agreement (41.2% of the categories demonstrated agreement beyond chance) for the nurses' in-person assessment of the double-rated wounds. The moderate agreement observed may be attributed to variation in subjective perception of qualitative wound characteristics.",
"title": ""
},
{
"docid": "56b2e01034ea18f5705b8e6ea18a327e",
"text": "Facial expression is central to human experience. Its efficient and valid measurement is a challenge that automated facial image analysis seeks to address. Most publically available databases are limited to 2D static images or video of posed facial behavior. Because posed and un-posed (aka “spontaneous”) facial expressions differ along several dimensions including complexity and timing, well-annotated video of un-posed facial behavior is needed. Moreover, because the face is a three-dimensional deformable object, 2D video may be insufficient, and therefore 3D video archives are needed. We present a newly developed 3D video database of spontaneous facial expressions in a diverse group of young adults. Well-validated emotion inductions were used to elicit expressions of emotion and paralinguistic communication. Frame-level ground-truth for facial actions was obtained using the Facial Action Coding System. Facial features were tracked in both 2D and 3D domains using both person-specific and generic approaches. The work promotes the exploration of 3D spatiotemporal features in subtle facial expression, better understanding of the relation between pose and motion dynamics in facial action units, and deeper understanding of naturally occurring facial action.",
"title": ""
},
{
"docid": "207bac82c5484dcf89d8fe37adb97207",
"text": "In a wireless network system the security is a main concern for a user. It is basically suffering from mainly two security attacks i) Virus Attack ii) Intruders. Intruder does not only mean it want to hack the private information over the network, it also includes using a node bandwidth and increasing the Delay of Service for other host over the network. This paper is basically based on such type of attack. This paper reviews the comparison of different Intrusion Detection System. On the behalf of the reviewed work we proposed a new Network Intrusion System that will mainly detects the most prominent attack of Wireless Network i.e. DoS Attack. The proposed system is an intelligent system that will detect the intrusion dynamically on the bases of Misuse Detection which has very less false negative. The system not only detects the intruders by the IP address, it detects the system with its contents also.",
"title": ""
},
{
"docid": "61b7c35516b8a3f2a387526ef2541434",
"text": "Understanding and quantifying dependence is at the core of all modelling efforts in financial econometrics. The linear correlation coefficient, which is the far most used measure to test dependence in the financial community and also elsewhere, is only a measure of linear dependence. This means that it is a meaningful measure of dependence if asset returns are well represented by an elliptical distribution. Outside the world of elliptical distributions, however, using the linear correlation coefficient as a measure of dependence may lead to misleading conclusions. Hence, alternative methods for capturing co-dependency should be considered. One class of alternatives are copula-based dependence measures. In this survey we consider two parametric families of copulas; the copulas of normal mixture distributions and Archimedean copulas.",
"title": ""
},
{
"docid": "8380a623e744a44f2ab7a077c620db37",
"text": "We present a novel video representation for human action recognition by considering temporal sequences of visual words. Based on state-of-the-art dense trajectories, we introduce temporal bundles of dominant, that is most frequent, visual words. These are employed to construct a complementary action representation of ordered dominant visual word sequences, that additionally incorporates fine grained temporal information. We exploit the introduced temporal information by applying local sub-sequence alignment that quantifies the similarity between sequences. This facilitates the fusion of our representation with the bag-of-visual-words (BoVW) representation. Our approach incorporates sequential temporal structure and results in a low-dimensional representation compared to the BoVW, while still yielding a descent result when combined with it. Experiments on the KTH, Hollywood2 and the challenging HMDB51 datasets show that the proposed framework is complementary to the BoVW representation, which discards temporal order.",
"title": ""
}
] |
scidocsrr
|
b358f55af6805f72f0f7cb5d1a7e0981
|
Interacting Conceptual Spaces I : Grammatical Composition of Concepts
|
[
{
"docid": "812687a5291d786ecda102adda03700c",
"text": "The overall goal is to show that conceptual spaces are more promising than other ways of modelling the semantics of natural language. In particular, I will show how they can be used to model actions and events. I will also outline how conceptual spaces provide a cognitive grounding for word classes, including nouns, adjectives, prepositions and verbs.",
"title": ""
},
{
"docid": "b76f197701fd662955e38c0fe7d2f63c",
"text": "Commonsense reasoning patterns such as interpolation and a fortiori inference have proven useful for dealing with gaps in structured knowledge bases. An important di culty in applying these reasoning patterns in practice is that they rely on fine-grained knowledge of how di↵erent concepts and entities are semantically related. In this paper, we show how the required semantic relations can be learned from a large collection of text documents. To this end, we first induce a conceptual space from the text documents, using multi-dimensional scaling. We then rely on the key insight that the required semantic relations correspond to qualitative spatial relations in this conceptual space. Among others, in an entirely unsupervised way, we identify salient directions in the conceptual space which correspond to interpretable relative properties such as ‘more fruity than’ (in a space of wines), resulting in a symbolic and interpretable representation of the conceptual space. To evaluate the quality of our semantic relations, we show how they can be exploited by a number of commonsense reasoning based classifiers. We experimentally show that these classifiers can outperform standard approaches, while being able to provide intuitive explanations of classification decisions. A number of crowdsourcing experiments provide further insights into the nature of the extracted semantic relations.",
"title": ""
}
] |
[
{
"docid": "9c61ac11d2804323ba44ed91d05a0e46",
"text": "Nostalgia fulfills pivotal functions for individuals, but lacks an empirically derived and comprehensive definition. We examined lay conceptions of nostalgia using a prototype approach. In Study 1, participants generated open-ended features of nostalgia, which were coded into categories. In Study 2, participants rated the centrality of these categories, which were subsequently classified as central (e.g., memories, relationships, happiness) or peripheral (e.g., daydreaming, regret, loneliness). Central (as compared with peripheral) features were more often recalled and falsely recognized (Study 3), were classified more quickly (Study 4), were judged to reflect more nostalgia in a vignette (Study 5), better characterized participants' own nostalgic (vs. ordinary) experiences (Study 6), and prompted higher levels of actual nostalgia and its intrapersonal benefits when used to trigger a personal memory, regardless of age (Study 7). These findings highlight that lay people view nostalgia as a self-relevant and social blended emotional and cognitive state, featuring a mixture of happiness and loss. The findings also aid understanding of nostalgia's functions and identify new methods for future research.",
"title": ""
},
{
"docid": "9c30ef5826b413bab262b7a0884eb119",
"text": "In this survey paper, we review recent uses of convolution neural networks (CNNs) to solve inverse problems in imaging. It has recently become feasible to train deep CNNs on large databases of images, and they have shown outstanding performance on object classification and segmentation tasks. Motivated by these successes, researchers have begun to apply CNNs to the resolution of inverse problems such as denoising, deconvolution, super-resolution, and medical image reconstruction, and they have started to report improvements over state-of-the-art methods, including sparsity-based techniques such as compressed sensing. Here, we review the recent experimental work in these areas, with a focus on the critical design decisions: Where does the training data come from? What is the architecture of the CNN? and How is the learning problem formulated and solved? We also bring together a few key theoretical papers that offer perspective on why CNNs are appropriate for inverse problems and point to some next steps in the field.",
"title": ""
},
{
"docid": "f78534a09317be5097963d068c6af2cd",
"text": "Example-based single image super-resolution (SISR) methods use external training datasets and have recently attracted a lot of interest. Self-example based SISR methods exploit redundant non-local self-similar patterns in natural images and because of that are more able to adapt to the image at hand to generate high quality super-resolved images. In this paper, we propose to combine the advantages of example-based SISR and self-example based SISR. A novel hierarchical random forests based super-resolution (SRHRF) method is proposed to learn statistical priors from external training images. Each layer of random forests reduce the estimation error due to variance by aggregating prediction models from multiple decision trees. The hierarchical structure further boosts the performance by pushing the estimation error due to bias towards zero. In order to further adaptively improve the super-resolved image, a self-example random forests (SERF) is learned from an image pyramid pair constructed from the down-sampled SRHRF generated result. Extensive numerical results show that the SRHRF method enhanced using SERF (SRHRF+) achieves the state-of-the-art performance on natural images and yields substantially superior performance for image with rich self-similar patterns.",
"title": ""
},
{
"docid": "851c99b4e77d4e4bc819f2fb17841061",
"text": "Java virtual machine (JVM) is a core technology, whose reliability is critical. Testing JVM implementations requires painstaking effort in designing test classfiles (*.class) along with their test oracles. An alternative is to employ binary fuzzing to differentially test JVMs by blindly mutating seeding classfiles and then executing the resulting mutants on different JVM binaries for revealing inconsistent behaviors. However, this blind approach is not cost effective in practice because most of the mutants are invalid and redundant. This paper tackles this challenge by introducing classfuzz, a coverage-directed fuzzing approach that focuses on representative classfiles for differential testing of JVMs’ startup processes. Our core insight is to (1) mutate seeding classfiles using a set of predefined mutation operators (mutators) and employ Markov Chain Monte Carlo (MCMC) sampling to guide mutator selection, and (2) execute the mutants on a reference JVM implementation and use coverage uniqueness as a discipline for accepting representative ones. The accepted classfiles are used as inputs to differentially test different JVM implementations and find defects. We have implemented classfuzz and conducted an extensive evaluation of it against existing fuzz testing algorithms. Our evaluation results show that classfuzz can enhance the ratio of discrepancy-triggering classfiles from 1.7% to 11.9%. We have also reported 62 JVM discrepancies, along with the test classfiles, to JVM developers. Many of our reported issues have already been confirmed as JVM defects, and some even match recent clarifications and changes to the Java SE 8 edition of the JVM specification.",
"title": ""
},
{
"docid": "65d938eee5da61f27510b334312afe41",
"text": "This paper reviews the actual and potential use of social media in emergency, disaster and crisis situations. This is a field that has generated intense interest. It is characterised by a burgeoning but small and very recent literature. In the emergencies field, social media (blogs, messaging, sites such as Facebook, wikis and so on) are used in seven different ways: listening to public debate, monitoring situations, extending emergency response and management, crowd-sourcing and collaborative development, creating social cohesion, furthering causes (including charitable donation) and enhancing research. Appreciation of the positive side of social media is balanced by their potential for negative developments, such as disseminating rumours, undermining authority and promoting terrorist acts. This leads to an examination of the ethics of social media usage in crisis situations. Despite some clearly identifiable risks, for example regarding the violation of privacy, it appears that public consensus on ethics will tend to override unscrupulous attempts to subvert the media. Moreover, social media are a robust means of exposing corruption and malpractice. In synthesis, the widespread adoption and use of social media by members of the public throughout the world heralds a new age in which it is imperative that emergency managers adapt their working practices to the challenge and potential of this development. At the same time, they must heed the ethical warnings and ensure that social media are not abused or misused when crises and emergencies occur.",
"title": ""
},
{
"docid": "0e30a01870bbbf32482b5ac346607afc",
"text": "Hypothyroidism is the pathological condition in which the level of thyroid hormones declines to the deficiency state. This communication address the therapies employed for the management of hypothyroidism as per the Ayurvedic and modern therapeutic perspectives on the basis scientific papers collected from accepted scientific basis like Google, Google Scholar, PubMed, Science Direct, using various keywords. The Ayurveda describe hypothyroidism as the state of imbalance of Tridoshas and suggest the treatment via use of herbal plant extracts, life style modifications like practicing yoga and various dietary supplements. The modern medicine practice define hypothyroidism as the disease state originated due to formation of antibodies against thyroid gland and hormonal imbalance and incorporate the use of hormone replacement i.e. Levothyroxine, antioxidants. Various plants like Crataeva nurvula and dietary supplements like Capsaicin, Forskolin, Echinacea, Ginseng and Bladderwrack can serve as a potential area of research as thyrotropic agents.",
"title": ""
},
{
"docid": "714b5db0d1f146c5dde6e4c01de59be9",
"text": "Coilgun electromagnetic launchers have capability for low and high speed applications. Through the development of four guns having projectiles ranging from 10 g to 5 kg and speeds up to 1 km/s, Sandia National Laboratories has succeeded in coilgun design and operations, validating the computational codes and basis for gun system control. Coilguns developed at Sandia consist of many coils stacked end-to-end forming a barrel, with each coil energized in sequence to create a traveling magnetic wave that accelerates a projectile. Active tracking of the projectile location during launch provides precise feedback to control when the coils arc triggered to create this wave. However, optimum performance depends also on selection of coil parameters. This paper discusses issues related to coilgun design and control such as tradeoffs in geometry and circuit parameters to achieve the necessary current risetime to establish the energy in the coils. The impact of switch jitter on gun performance is also assessed for high-speed applications.",
"title": ""
},
{
"docid": "dd080a0ad38076c2693d6bcef574b053",
"text": "We present an approach to detect network configuration errors, which combines the benefits of two prior approaches. Like prior techniques that analyze configuration files, our approach can find errors proactively, before the configuration is applied, and answer “what if” questions. Like prior techniques that analyze data-plane snapshots, our approach can check a broad range of forwarding properties and produce actual packets that violate checked properties. We accomplish this combination by faithfully deriving and then analyzing the data plane that would emerge from the configuration. Our derivation of the data plane is fully declarative, employing a set of logical relations that represent the control plane, the data plane, and their relationship. Operators can query these relations to understand identified errors and their provenance. We use our approach to analyze two large university networks with qualitatively different routing designs and find many misconfigurations in each. Operators have confirmed the majority of these as errors and have fixed their configurations accordingly.",
"title": ""
},
{
"docid": "6d11d47e6549ac4d9f369772e78884d8",
"text": "A novel analytical model of inductively coupled wireless power transfer is presented. For the first time, the effects of coil misalignment and geometry are addressed in a single mathematical expression. In the applications envisaged, such as radio frequency identification (RFID) and biomedical implants, the receiving coil is normally significantly smaller than the transmitting coil. Formulas are derived for the magnetic field at the receiving coil when it is laterally and angularly misaligned from the transmitting coil. Incorporating this magnetic field solution with an equivalent circuit for the inductive link allows us to introduce a power transfer formula that combines coil characteristics and misalignment factors. The coil geometries considered are spiral and short solenoid structures which are currently popular in the RFID and biomedical domains. The novel analytical power transfer efficiency expressions introduced in this study allow the optimization of coil geometry for maximum power transfer and misalignment tolerance. The experimental results show close correlation with the theoretical predictions. This analytic technique can be widely applied to inductive wireless power transfer links without the limitations imposed by numerical methods.",
"title": ""
},
{
"docid": "9006ecc6ff087d6bdaf90bdb73860133",
"text": "Next-generation datacenters (DCs) built on virtualization technologies are pivotal to the effective implementation of the cloud computing paradigm. To deliver the necessary services and quality of service, cloud DCs face major reliability and robustness challenges.",
"title": ""
},
{
"docid": "a3b680c8c9eb00b6cc66ec24aeadaa66",
"text": "With the application of Internet of Things and services to manufacturing, the fourth stage of industrialization, referred to as Industrie 4.0, is believed to be approaching. For Industrie 4.0 to come true, it is essential to implement the horizontal integration of inter-corporation value network, the end-to-end integration of engineering value chain, and the vertical integration of factory inside. In this paper, we focus on the vertical integration to implement flexible and reconfigurable smart factory. We first propose a brief framework that incorporates industrial wireless networks, cloud, and fixed or mobile terminals with smart artifacts such as machines, products, and conveyors.Then,we elaborate the operationalmechanism from the perspective of control engineering, that is, the smart artifacts form a self-organized systemwhich is assistedwith the feedback and coordination blocks that are implemented on the cloud and based on the big data analytics. In addition, we outline the main technical features and beneficial outcomes and present a detailed design scheme. We conclude that the smart factory of Industrie 4.0 is achievable by extensively applying the existing enabling technologies while actively coping with the technical challenges.",
"title": ""
},
{
"docid": "f7366a6a67eb032a1080e000b687929f",
"text": "Internet of things (IoT) is intensely gaining reputation due to its necessity and efficiency in the computer realm. The support of wireless connectivity as well as the emergence of gadgets alleviates its usage essentially in governing systems in various fields. Though these systems are ubiquitous, pervasive and seamless, an issue concerning consumers’ privacy remains debatable. This is most evident in the health sector, as there is an immaculate rise in terms of awareness amongst patients where data privacy is concerned. In this paper, we propose a framework modelling the privacy requirements for IoT-based health applications. We have reviewed several privacy frameworks to derive at the essential principles required to develop privacy-aware IoT health applications. The proposed framework presents important privacy requirements to be addressed in the development of novel IoT health applications.",
"title": ""
},
{
"docid": "3f7ea99c1b03aaf5037f5bfd4956aa04",
"text": "Due to the numerous data breaches, often resulting in the disclosure of a substantial amount of user passwords, the classic authentication scheme where just a password is required to log in, has become inadequate. As a result, many popular web services now employ risk-based authentication systems where various bits of information are requested in order to determine the authenticity of the authentication request. In this risk assessment process, values consisting of geo-location, IP address and browser-fingerprint information, are typically used to detect anomalies in comparison with the user’s regular behavior. In this paper, we focus on risk-based authentication mechanisms in the setting of mobile devices, which are known to fall short of providing reliable device-related information that can be used in the risk analysis process. More specifically, we present a web-based and low-effort system that leverages accelerometer data generated by a mobile device for the purpose of device re-identification. Furthermore, we evaluate the performance of these techniques and assess the viability of embedding such a system as part of existing risk-based authentication processes.",
"title": ""
},
{
"docid": "0bb53802df49097659ec2e9962ef4ede",
"text": "In her 2006 book \"My Stroke of Insight\" Dr. Jill Bolte Taylor relates her experience of suffering from a left hemispheric stroke caused by a congenital arteriovenous malformation which led to a loss of inner speech. Her phenomenological account strongly suggests that this impairment produced a global self-awareness deficit as well as more specific dysfunctions related to corporeal awareness, sense of individuality, retrieval of autobiographical memories, and self-conscious emotions. These are examined in details and corroborated by numerous excerpts from Taylor's book.",
"title": ""
},
{
"docid": "20f98a15433514dc5aa76110f68a71ba",
"text": "We describe a case of secondary syphilis of the tongue in which the main clinical presentation of the disease was similar to oral hairy leukoplakia. In a man who was HIV seronegative, the first symptom was a dryness of the throat followed by a feeling of foreign body in the tongue. Lesions were painful without cutaneous manifestations of secondary syphilis. IgM-fluorescent treponemal antibody test and typical serologic parameters promptly led to the diagnosis of secondary syphilis. We initiated an appropriate antibiotic therapy using benzathine penicillin, which induced healing of the tongue lesions. The differential diagnosis of this lesion may include oral squamous carcinoma, leukoplakia, candidosis, lichen planus, and, especially, hairy oral leukoplakia. This case report emphasizes the importance of considering secondary syphilis in the differential diagnosis of hairy oral leukoplakia. Depending on the clinical picture, the possibility of syphilis should not be overlooked in the differential diagnosis of many diseases of the oral mucosa.",
"title": ""
},
{
"docid": "94ea45d238c4c1a4409bd2c36d6c2693",
"text": "OBJECTIVE\nTo evaluate the effectiveness of inspiratory/expiratory muscle training (IEMT) and neuromuscular electrical stimulation (NMES) to improve dysphagia in stroke.\n\n\nDESIGN\nProspective, single-blind, randomized-controlled trial.\n\n\nSETTING\nTertiary public hospital.\n\n\nSUBJECTS\nSixty-two patients with dysphagia were randomly assigned to standard swallow therapy (SST) (Group I, controls, n=21), SST+ IEMT (Group II, n=21) or SST+ sham IEMT+ NMES (Group III, n=20).\n\n\nINTERVENTIONS\nAll patients followed a 3-week standard multidisciplinary rehabilitation program of SST and speech therapy. The SST+IEMT group's muscle training consisted of 5 sets/10 repetitions, twice-daily, 5 days/week. Group III's sham IEMT required no effort; NMES consisted of 40-minute sessions, 5 days/week, at 80Hz.\n\n\nMAIN OUTCOMES\nDysphagia severity, assessed by Penetration-Aspiration Scale, and respiratory muscle strength (maximal inspiratory and expiratory pressures) at the end of intervention and 3-month follow-up.\n\n\nRESULTS\nMaximal respiratory pressures were most improved in Group II: treatment effect was 12.9 (95% confidence interval 4.5-21.2) and 19.3 (95% confidence interval 8.5-30.3) for maximal inspiratory and expiratory pressures, respectively. Swallowing security signs were improved in Groups II and III at the end of intervention. No differences in Penetration-Aspiration Scale or respiratory complications were detected between the 3 groups at 3-month follow-up.\n\n\nCONCLUSION\nAdding IEMT to SST was an effective, feasible, and safe approach that improved respiratory muscle strength. Both IEMT and NMES were associated with improvement in pharyngeal swallowing security signs at the end of the intervention, but the effect did not persist at 3-month follow-up and no differences in respiratory complications were detected between treatment groups and controls.",
"title": ""
},
{
"docid": "e7100965a09aa55a5fe17959443e9004",
"text": "Prior studies (Gergely et al., 1995; Woodward, 1998) have found that infants focus on the goals of an action over other details. The current studies tested whether infants would distinguish between a behavior that seemed to be goal-directed and one that seemed not to be. Infants in one condition saw an actor grasp one of two toys that sat side by side on a stage. Infants in the other condition saw the actor drop her hand onto one of the toys in a manner that looked unintentional. Once infants had been habituated to these events, they were shown test events in which either the path of motion or the object that was touched had changed. Nine-month-olds differentiated between these two actions. When they saw the actor grasp the toy, they looked longer on trials with a change in goal object than on trials with a change in path. When they saw the actor drop her hand onto the toy, they looked equally at the two test events. These findings did not result from infants being more interested in grasping as compared to inert hands. In a second study, 5-month-old infants showed patterns similar to those seen in 9-month-olds. These findings have implications for theories of the development of the concept of intention. They argue against the claim that infants are innately predisposed to interpret any motion of an animate agent as intentional.",
"title": ""
},
{
"docid": "54465eccc901a8258b5b6633c4c36958",
"text": "Melatonin (5-methoxy-N-acetyltryptamine), dubbed the hormone of darkness, is released following a circadian rhythm with high levels at night. It provides circadian and seasonal timing cues through activation of G protein-coupled receptors (GPCRs) in target tissues (1). The discovery of selective melatonin receptor ligands and the creation of mice with targeted disruption of melatonin receptor genes are valuable tools to investigate the localization and functional roles of the receptors in native systems. Here we describe the pharmacological characteristics of melatonin receptor ligands and their various efficacies (agonist, antagonist, or inverse agonist), which can vary depending on tissue and cellular milieu. We also review melatonin-mediated responses through activation of melatonin receptors (MT1, MT2, and MT3) highlighting their involvement in modulation of CNS, hypothalamic-hypophyseal-gonadal axis, cardiovascular, and immune functions. For example, activation of the MT1 melatonin receptor inhibits neuronal firing rate in the suprachiasmatic nucleus (SCN) and prolactin secretion from the pars tuberalis and induces vasoconstriction. Activation of the MT2 melatonin receptor phase shifts circadian rhythms generated within the SCN, inhibits dopamine release in the retina, induces vasodilation, enhances splenocyte proliferation and inhibits leukocyte rolling in the microvasculature. Activation of the MT3 melatonin receptor reduces intraocular pressure and inhibits leukotriene B4-induced leukocyte adhesion. We conclude that an accurate characterization of melatonin receptors mediating specific functions in native tissues can only be made using receptor specific ligands, with the understanding that receptor ligands may change efficacy in both native tissues and heterologous expression systems.",
"title": ""
},
{
"docid": "716b33237c0fb4e1b803bcdce462960a",
"text": "The order of prenominal adjectival modifiers in English is governed by complex and difficult to describe constraints which straddle the boundary between competence and performance. This paper describes and compares a number of statistical and machine learning techniques for ordering sequences of adjectives in the context of a natural language generation system.",
"title": ""
},
{
"docid": "a41dfbce4138a8422bc7ddfac830e557",
"text": "This paper is the second part in a series that provides a comprehensive survey of the problems and techniques of tracking maneuvering targets in the absence of the so-called measurement-origin uncertainty. It surveys motion models of ballistic targets used for target tracking. Models for all three phases (i.e., boost, coast, and reentry) of motion are covered.",
"title": ""
}
] |
scidocsrr
|
577c6eaf0e46cb818cb2958e454e70b7
|
Scaling blockchain for the energy sector
|
[
{
"docid": "16b08c95aaa4f7db98b00b50cb387014",
"text": "Blockchain-based solutions are one of the major areas of research for institutions, particularly in the financial and the government sectors. There is little disagreement that backbone technologies currently used in these sectors are outdated and need an overhaul to conform to the needs of the times. Distributed or decentralized ledgers in the form of blockchains are one of themost discussed potential solutions to the stated problem. We provide a description of permissioned blockchain systems that could be used in creating secure ledgers or timestamped registries. We contend that the blockchain protocol and data should be accessible to end users to provide a higher level of decentralization and transparency and argue that proof ofwork could be effectively used in permissioned blockchains as a means of providing and diversifying security.",
"title": ""
}
] |
[
{
"docid": "cdc3a11e556cb73f5629135cbb5f0527",
"text": "Reinforcement learning methods are often considered as a potential solution to enable a robot to adapt to changes in real time to an unpredictable environment. However, with continuous action, only a few existing algorithms are practical for real-time learning. In such a setting, most effective methods have used a parameterized policy structure, often with a separate parameterized value function. The goal of this paper is to assess such actor-critic methods to form a fully specified practical algorithm. Our specific contributions include 1) developing the extension of existing incremental policy-gradient algorithms to use eligibility traces, 2) an empirical comparison of the resulting algorithms using continuous actions, 3) the evaluation of a gradient-scaling technique that can significantly improve performance. Finally, we apply our actor-critic algorithm to learn on a robotic platform with a fast sensorimotor cycle (10ms). Overall, these results constitute an important step towards practical real-time learning control with continuous action.",
"title": ""
},
{
"docid": "bf14fb39f07e01bd6dc01b3583a726b6",
"text": "To provide a general context for library implementations of open source software (OSS), the purpose of this paper is to assess and evaluate the awareness and adoption of OSS by the LIS professionals working in various engineering colleges of Odisha. The study is based on survey method and questionnaire technique was used for collection data from the respondents. The study finds that although the LIS professionals of engineering colleges of Odisha have knowledge on OSS, their uses in libraries are in budding stage. Suggests that for the widespread use of OSS in engineering college libraries of Odisha, a cooperative and participatory organisational system, positive attitude of authorities and LIS professionals, proper training provision for LIS professionals need to be developed.",
"title": ""
},
{
"docid": "33985b2a3a5ef35539cd72532505374b",
"text": "A 36‐year‐old female patient presented with gradually enlarging and painful bleeding of her lower lip within 20 years. The patient did not define mechanical irritation, smoking, atopic state, chronic sun exposure, or photosensitivity. She was on oral antidiabetic treatment for Type 1 diabetes mellitus for 5 years. She did not have xerophthalmia, xerostomia, or arthritis. Organomegaly, lymphadenopathy, or palpable mass or any glandular involvement such as submandibular, sublingual, lacrimal, and parotid glands were not detected. Dermatological examination revealed fine desquamation, lacy white streaks, dilated and erythematous multiple milimetric ductal openings, and mild serous discharge by palpation [Figure 1]. There was no vermilion or adjacent skin involvement. A wedge resection biopsy of the lower lip showed epidermal keratinization, granular layer, apoptotic cells lined in the basal layer, and lichenoid inflammation. Chronic lymphocytic inflammation of the minor salivary glands, periductal dense lymphocytic inflammation, and mild ductal ectasia were detected in the dermis [Figure 2]. The inflammatory infiltrate in the tissue did not contain any plasma cells staining with CD138. Periodic acid Schiff/alcian blue (PAS/AB) staining did not show any dermal mucin or thick basal membrane. Fibrosis and obliterative phlebitis within the tissue were not present.",
"title": ""
},
{
"docid": "fdd91fe14a1fb81acbbc4b7bc34aed8d",
"text": "PURPOSE\nTo investigate the reliability, minimal detectable change (MDC90) and concurrent validity of a gravity-based bubble inclinometer (inclinometer) and iPhone® application for measuring standing lumbar lordosis.\n\n\nMETHODS\nTwo investigators used both an inclinometer and an iPhone® with an inclinometer application to measure lumbar lordosis of 30 asymptomatic participants.\n\n\nRESULTS\nICC models 3,k and 2,k were used for the intrarater and interrater analysis, respectively. Good interrater and intrarater reliability was present for the inclinometer with Intraclass Correlation Coefficients (ICC) of 0.90 and 0.85, respectively and the iPhone® application with ICC values of 0.96 and 0.81. The minimal detectable change (MDC90) indicates that a change greater than or equal to 7° and 6° is needed to exceed the threshold of error using the iPhone® and inclinometer, respectively. The concurrent validity between the two instruments was good with a Pearson product-moment coefficient of correlation (r) of 0.86 for both raters. Ninety-five percent limits of agreement identified differences ranging from 9° greater in regards to the iPhone® to 8° less regarding the inclinometer.\n\n\nCONCLUSION\nBoth the inclinometer and iPhone® application possess good interrater reliability, intrarater reliability and concurrent validity for measuring standing lumbar lordosis. This investigation provides preliminary evidence to suggest that smart phone applications may offer clinical utility comparable to inclinometry for quantifying standing lumbar lordosis. Clinicians should recognize potential individual differences when using these devices interchangeably.",
"title": ""
},
{
"docid": "593aae604e5ecd7b6d096ed033a303f8",
"text": "We describe the first mobile app for identifying plant species using automatic visual recognition. The system – called Leafsnap – identifies tree species from photographs of their leaves. Key to this system are computer vision components for discarding non-leaf images, segmenting the leaf from an untextured background, extracting features representing the curvature of the leaf’s contour over multiple scales, and identifying the species from a dataset of the 184 trees in the Northeastern United States. Our system obtains state-of-the-art performance on the real-world images from the new Leafsnap Dataset – the largest of its kind. Throughout the paper, we document many of the practical steps needed to produce a computer vision system such as ours, which currently has nearly a million users.",
"title": ""
},
{
"docid": "f753712eed9e5c210810d2afd1366eb8",
"text": "To improve FPGA performance for arithmetic circuits that are dominated by multi-input addition operations, an FPGA logic block is proposed that can be configured as a 6:2 or 7:2 compressor. Compressors have been used successfully in the past to realize parallel multipliers in VLSI technology; however, the peculiar structure of FPGA logic blocks, coupled with the high cost of the routing network relative to ASIC technology, renders compressors ineffective when mapped onto the general logic of an FPGA. On the other hand, current FPGA logic cells have already been enhanced with carry chains to improve arithmetic functionality, for example, to realize fast ternary carry-propagate addition. The contribution of this article is a new FPGA logic cell that is specialized to help realize efficient compressor trees on FPGAs. The new FPGA logic cell has two variants that can respectively be configured as a 6:2 or a 7:2 compressor using additional carry chains that, coupled with lookup tables, provide the necessary functionality. Experiments show that the use of these modified logic cells significantly reduces the delay of compressor trees synthesized on FPGAs compared to state-of-the-art synthesis techniques, with a moderate increase in area and power consumption.",
"title": ""
},
{
"docid": "799ab91ba69e8b81fb1c10ffd747eb27",
"text": "The concept of stochastic configuration networks (SCNs) offers a solid framework for fast implementation of feedforward neural networks through randomized learning. Unlike conventional randomized approaches, SCNs provide an avenue to select appropriate scope of random parameters to ensure the universal approximation property. In this paper, a deep version of stochastic configuration networks, namely deep stacked stochastic configuration network (DSSCN), is proposed for modeling non-stationary data streams. As an extension of evolving stochastic configuration networks (eSCNs), this work contributes a way to grow and shrink the structure of deep stochastic configuration networks autonomously from data streams. The performance of DSSCN is evaluated by six benchmark datasets. Simulation results, compared with prominent data stream algorithms, show that the proposed method is capable of achieving comparable accuracy and evolving compact and parsimonious deep stacked network architecture.",
"title": ""
},
{
"docid": "c37bfee87d4fd0a011fb6a132c3e779b",
"text": "Increasingly, methods to identify community structure in networks have been proposed which allow groups to overlap. These methods have taken a variety of forms, resulting in a lack of consensus as to what characteristics overlapping communities should have. Furthermore, overlapping community detection algorithms have been justified using intuitive arguments, rather than quantitative observations. This lack of consensus and empirical justification has limited the adoption of methods which identify overlapping communities. In this text, we distil from previous literature a minimal set of axioms which overlapping communities should satisfy. Additionally, we modify a previously published algorithm, Iterative Scan, to ensure that these properties are met. By analyzing the community structure of a large blog network, we present both structural and attribute based verification that overlapping communities naturally and frequently occur.",
"title": ""
},
{
"docid": "54db77ff3aa390760e6fc54c7e1c6dec",
"text": "A small signal analysis of DC-DC converters with Average Current Mode Control (ACMC) in Continuous Conduction Mode (CCM) is performed in a unified manner. Design-oriented equations are derived for all three types of DC-DC converters. These equations are such that they are expressed explicitly with converter circuit and operating parameters, making them easy to be directly applicable to practical design purposes. Experimental results of an ACMC Boost converter are given to validate the model prediction with good correlation.",
"title": ""
},
{
"docid": "eb271acef996a9ba0f84a50b5055953b",
"text": "Makeup is widely used to improve facial attractiveness and is well accepted by the public. However, different makeup styles will result in significant facial appearance changes. It remains a challenging problem to match makeup and non-makeup face images. This paper proposes a learning from generation approach for makeup-invariant face verification by introducing a bi-level adversarial network (BLAN). To alleviate the negative effects from makeup, we first generate non-makeup images from makeup ones, and then use the synthesized nonmakeup images for further verification. Two adversarial networks in BLAN are integrated in an end-to-end deep network, with the one on pixel level for reconstructing appealing facial images and the other on feature level for preserving identity information. These two networks jointly reduce the sensing gap between makeup and non-makeup images. Moreover, we make the generator well constrained by incorporating multiple perceptual losses. Experimental results on three benchmark makeup face datasets demonstrate that our method achieves state-of-the-art verification accuracy across makeup status and can produce photo-realistic non-makeup",
"title": ""
},
{
"docid": "0e766418af18260be49c41050f571595",
"text": "In this article we present a survey on threats and vulnerability attacks on Bluetooth security mechanism. Bluetooth is the personal area network (PAN). It is the kind of wireless Ad hoc network. Low cost, low power, low complexity and robustness are the basic features of Bluetooth. It works on Radio frequency. Bluetooth Technology has many benefits like replacement of cable, easy file sharing, wireless synchronization and internet connectivity. As Bluetooth Technology becomes widespread, vulnerabilities in its security protocols are increasing which can be potentially dangerous to the privacy of a user’s personal information. Security in Bluetooth communication has been an active area of research for last few years. The article presents various security threats and vulnerability attacks on Bluetooth technology. Keywords— Bluetooth security; security protocol; vulnerability; security threats; bluejacking; eavesdropping; malicious attackers.",
"title": ""
},
{
"docid": "b0950aaea13e1eaf13a17d64feddf9b0",
"text": "In this paper, we describe the development of CiteSpace as an integrated environment for identifying and tracking thematic trends in scientific literature. The goal is to simplify the process of finding not only highly cited clusters of scientific articles, but also pivotal points and trails that are likely to characterize fundamental transitions of a knowledge domain as a whole. The trails of an advancing research field are captured through a sequence of snapshots of its intellectual structure over time in the form of Pathfinder networks. These networks are subsequently merged with a localized pruning algorithm. Pivotal points in the merged network are algorithmically identified and visualized using the betweenness centrality metric. An example of finding clinical evidence associated with reducing risks of heart diseases is included to illustrate how CiteSpace could be used. The contribution of the work is its integration of various change detection algorithms and interactive visualization capabilities to simply users' tasks.",
"title": ""
},
{
"docid": "b633fbaab6e314535312709557ef1139",
"text": "The purification of recombinant proteins by affinity chromatography is one of the most efficient strategies due to the high recovery yields and purity achieved. However, this is dependent on the availability of specific affinity adsorbents for each particular target protein. The diversity of proteins to be purified augments the complexity and number of specific affinity adsorbents needed, and therefore generic platforms for the purification of recombinant proteins are appealing strategies. This justifies why genetically encoded affinity tags became so popular for recombinant protein purification, as these systems only require specific ligands for the capture of the fusion protein through a pre-defined affinity tag tail. There is a wide range of available affinity pairs \"tag-ligand\" combining biological or structural affinity ligands with the respective binding tags. This review gives a general overview of the well-established \"tag-ligand\" systems available for fusion protein purification and also explores current unconventional strategies under development.",
"title": ""
},
{
"docid": "f4ea679d2c09107b1313a4795c749ca2",
"text": "Math word problems form a natural abstraction to a range of quantitative reasoning problems, such as understanding financial news, sports results, and casualties of war. Solving such problems requires the understanding of several mathematical concepts such as dimensional analysis, subset relationships, etc. In this paper, we develop declarative rules which govern the translation of natural language description of these concepts to math expressions. We then present a framework for incorporating such declarative knowledge into word problem solving. Our method learns to map arithmetic word problem text to math expressions, by learning to select the relevant declarative knowledge for each operation of the solution expression. This provides a way to handle multiple concepts in the same problem while, at the same time, supporting interpretability of the answer expression. Our method models the mapping to declarative knowledge as a latent variable, thus removing the need for expensive annotations. Experimental evaluation suggests that our domain knowledge based solver outperforms all other systems, and that it generalizes better in the realistic case where the training data it is exposed to is biased in a different way than the test data.",
"title": ""
},
{
"docid": "29ca6b4cafc7298f430e06a01a0b2602",
"text": "Various control techniques, especially LQG optimal control, have been applied to the design of active and semi-active vehicle suspensions over the past several decades. However passive suspensions remain dominant in the automotive marketplace because they are simple, reliable, and inexpensive. The force generated by a passive suspension at a given wheel can depend only on the relative displacement and velocity at that wheel, and the suspension parameters for the left and right wheels are usually required to be equal. Therefore, a passive vehicle suspension can be viewed as a decentralized feedback controller with constraints to guarantee suspension symmetry. In this paper, we cast the optimization of passive vehicle suspensions as structure-constrained LQG=H2 optimal control problems. Correlated road random excitations are taken as the disturbance inputs; ride comfort, road handling, suspension travel, and vehicle-body attitude are included in the cost outputs. We derive a set of necessary conditions for optimality and then develop a gradient-based method to efficiently solve the structure-constrained H2 optimization problem. An eight-DOF four-wheel-vehicle model is studied as an example to illustrate application of the procedure, which is useful for design of both passive suspensions and active suspensions with controller-structure constraints.",
"title": ""
},
{
"docid": "34c4c9420b45ed9e7dbc7214c88086c4",
"text": "Lexical fluency tests are frequently used in clinical practice to assess language and executive function. As part of the Spanish multicenter normative studies (NEURONORMA project), we provide age- and education-adjusted norms for three semantic fluency tasks (animals, fruit and vegetables, and kitchen tools), three formal lexical tasks (words beginning with P, M, and R), and three excluded letter fluency tasks (excluded A, E, and S). The sample consists of 346 participants who are cognitively normal, community dwelling, and ranging in age from 50 to 94 years. Tables are provided to convert raw scores to age-adjusted scaled scores. These were further converted into education-adjusted scaled scores by applying regression-based adjustments. The current norms should provide clinically useful data for evaluating elderly Spanish people. These data may also be of considerable use for comparisons with other international normative studies. Finally, these norms should help improve the interpretation of verbal fluency tasks and allow for greater diagnostic accuracy.",
"title": ""
},
{
"docid": "5a912359338b6a6c011e0d0a498b3e8d",
"text": "Learning Granger causality for general point processes is a very challenging task. In this paper, we propose an effective method, learning Granger causality, for a special but significant type of point processes — Hawkes process. According to the relationship between Hawkes process’s impact function and its Granger causality graph, our model represents impact functions using a series of basis functions and recovers the Granger causality graph via group sparsity of the impact functions’ coefficients. We propose an effective learning algorithm combining a maximum likelihood estimator (MLE) with a sparsegroup-lasso (SGL) regularizer. Additionally, the flexibility of our model allows to incorporate the clustering structure event types into learning framework. We analyze our learning algorithm and propose an adaptive procedure to select basis functions. Experiments on both synthetic and real-world data show that our method can learn the Granger causality graph and the triggering patterns of the Hawkes processes simultaneously.",
"title": ""
},
{
"docid": "784720919b860d9f0606d65036ef8297",
"text": "Conventional word embedding models do not leverage information from document metadata, and they do not model uncertainty. We address these concerns with a model that incorporates document covariates to estimate conditional word embedding distributions. Our model allows for (a) hypothesis tests about the meanings of terms, (b) assessments as to whether a word is near or far from another conditioned on different covariate values, and (c) assessments as to whether estimated differences are statistically significant.",
"title": ""
},
{
"docid": "4bd103c73779e6dd5fbf7f7b1395ba5f",
"text": "Open Data’ has become very important in a wide range of fields. However for linguistics, much data is still published in proprietary, closed formats and is not made available on the web. We propose the use of linked data principles to enable language resources to be published and interlinked openly on the web, and we describe the application of this paradigm to the modeling of two resources, WordNet and the MASC corpus. Here, WordNet and the MASC corpus serve as representative examples for two major classes of linguistic resources, lexicalsemantic resources and annotated corpora, respectively. Furthermore, we argue that modeling and publishing language resources as linked data offers crucial advantages as compared to existing formalisms. In particular, it is explained how this can enhance the interoperability and the integration of linguistic resources. Further benefits of this approach include unambiguous identifiability of elements of linguistic description, the creation of dynamic, but unambiguous links between different resources, the possibility to query across distributed resources, and the availability of a mature technological infrastructure. Finally, recent community activities are described. C. Chiarcos ( ) Information Sciences Institute, University of Southern California, Marina del Rey, CA, USA e-mail: chiarcos@isi.edu J. McCrae P. Cimiano Semantic Computing Group, Cognitive Interaction Technology Center of Excellence (CITEC), University of Bielefeld, Bielefeld, Germany e-mail: jmccrae@cit-ec.uni-bielefeld.de; cimiano@cit-ec.uni-bielefeld.de C. Fellbaum Computer Science Department, Princeton University, Princeton, NJ, USA e-mail: fellbaum@princeton.edu A. Oltramari et al. (eds.), New Trends of Research in Ontologies and Lexical Resources, Theory and Applications of Natural Language Processing, DOI 10.1007/978-3-642-31782-8 2, © Springer-Verlag Berlin Heidelberg 2013 7 8 C. Chiarcos et al. 2.1 Motivation and Overview Language is arguably one of the most complex forms of human behaviour, and accordingly, its investigation involves a broad width of formalisms and resources used to analyze, to process and to generate natural language. An important challenge is to store, to connect and to exploit the wealth of language data assembled in half a century of computational linguistics research. The key issue is the interoperability of language resources, a problem that is at best partially solved [25]. Closely related to this is the challenge of information integration, i.e., how information from different sources can be retrieved and combined in an efficient way. As a principal solution, Tim Berners-Lee – the founder of the World Wide Web – proposed the so called linked data principles to publish open data on the Web. These principles represent rules of best practice that should be followed when publishing data on the Web [4]: 1. Use URIs as (unique) names for things. 2. Use HTTP URIs so that people can look up those names. 3. When someone looks up a URI, provide useful information, using Web standards such as RDF, and SPARQL. 4. Include links to other URIs, so that they can discover more things. We argue that applying the linked data principles to lexical and other linguistic resources has a number of advantages and represents an effective approach to publishing language resources as open data. 
The first principle means that we assign a unique identifier (URI) to every element of a resource, i.e., each entry in a lexicon, each document in a corpus, every token in a corpus as well as to each data category that we use for annotation purposes. The benefit is that this makes the above mentioned resources uniquely and globally identifiable in an unambiguous fashion. The second principle entails that any agent wishing to obtain information about the resource can contact the corresponding web server and retrieve this information using a well-established protocol (HTTP) that also supports different ‘views’ on the same resource. That is, computer agents might request a machine readable format, while web browsers might request a human-readable and browseable view of this information as HTML. The third principle requires the use of standardized, and thus, inter-operable data models for representing (RDF, [29]) and querying linked data (SPARQL, [35]). The fourth principle fosters the creation of a network of language resources where equivalent senses are linked across different lexicalsemantic resources, annotations are linked to their corresponding data categories in data category repositories, etc. In the definition of linked data, the Resource Description Framework (RDF) receives special attention. RDF was originally designed as a language to provide metadata about resources that are available both offline (e.g., books in a library) and online (e.g., eBooks in a store). RDF provides a data model that is based on labelled directed (multi-)graphs, which can be serialized in different formats, where 2 Towards Open Data for Linguistics: Linguistic Linked Data 9 Table 2.1 Selected relations from existing RDF vocabularies and possible fields of application Domain Example Reference Meta data creator Dublin core meta data categories General relations between resources sameAs Web ontology language (OWL) Concept hierarchies subClassOf RDF schema Relations between vocabularies broader Simple knowledge organization scheme Linguistic annotation lemma NLP interchange format the nodes identified by URIs are referred to as ‘resources’.1 On this basis, RDF represents information in terms of triples – a property (relation, in graph-theoretical terms a labelled edge) that connects a subject (a resource, in graph-theoretical terms a labelled node) with its object (another resource, or a literal, e.g., a string). Every RDF resource and every property is uniquely identified by a URI. They are thus globally unambiguous in the web of data. This allows resources hosted at different locations to refer to each other, and thereby to create a network of data collections. A number of RDF-based vocabularies are already available, and many of them can be directly applied to linguistic resources. A few examples are given in Table 2.1. In this way, the RDF specification provides only elementary data structures, whereas the actual vocabularies and domain-specific semantics need to be defined independently. For reasons of interoperability, existing vocabularies should be re-used whenever possible, but if a novel type of resource requires a new set of properties, RDF also provides the means to introduce new relations, etc. RDF has been applied for various purposes beyond its original field of application. In particular, it evolved into a generic format for data exchange on the Web. 
It was readily adapted by disciplines as diverse as biomedicine and bibliography, and eventually it became one of the building stones of the Semantic Web. Due to its application across discipline boundaries, RDF is maintained by a large and active community of users and developers, and it comes with a rich infrastructure of APIs, tools, databases, and query languages. Further, RDF vocabularies do not only define the labels that should be used to represent RDF data, but they also can introduce additional constraints to formalize specialized RDF sub-languages. For example, the Web Ontology Language (OWL) defines the data types necessary for the representation of ontologies as an extension of RDF, i.e., classes (concepts), instances (individuals) and properties (relations). In the remainder of this chapter, we explore the benefits of linked data, considering in particular the following advantages: Representation and modelling Lexical-semantic resources can be described as labelled directed graphs (feature structures, [27]), as can annotated corpora [3]. 1The term ‘resource’ is ambiguous here. As understood in this chapter, resources are structured collections of data which can be represented, for example, in RDF. Hence, we prefer the terms ‘node’ or ‘concept’ whenever RDF resources are meant. 10 C. Chiarcos et al. RDF is based on labelled directed graphs and thus particularly well-suited for modelling both types of language resources. Structural interoperability Using a common data model eases the integration of different resources. In particular, merging multiple RDF documents yields another valid RDF document, while this is not necessarily the case for other formats. Federation In contrast to traditional methods, where it may be difficult to query across even multiple parts of the same resource, linked data allows for federated querying across multiple, distributed databases maintained by different data providers. Ecosystem Linked data is supported by a community of developers in other fields beyond linguistics, and the ability to build on a broad range of existing tools and systems is clearly an advantage. Expressivity Semantic Web languages (OWL in particular) support the definition of axioms that allow to constrain the usage of the vocabulary, thus introducing formal data types and the possibility of checking a lexicon or an annotated corpus for consistency. Conceptual interoperability The linked data principles have the potential to make the interoperability problem less severe in that globally unique identifiers for concepts or categories can be used to define the vocabulary that we use and these URIs can be used by many parties who have the same interpretation of the concept. Furthermore, linking by OWL axioms allows us to define the exact relation between two different concepts beyond simple equivalence statements. Dynamic import URIs can be used to refer to external resources such that one can thus import other linguistic resources “dynamically”. By using URIs to point to external content, the URIs can be resolved when needed in order to integrate the most recent version of the dynamically imported resources. We elaborate further on these aspects in this chapter. It",
"title": ""
},
{
"docid": "ee4416a05b955cdbd83b1819f0152665",
"text": "relative densities of pharmaceutical solids play an important role in determining their performance (e.g., flow and compaction properties) in both tablet and capsule dosage forms. In this article, the authors report the densities of a wide variety of solid pharmaceutical formulations and intermediates. The variance of density with chemical structure, processing history, and dosage-form type is significant. This study shows that density can be used as an equipment-independent scaling parameter for several common drug-product manufacturing operations. any physical responses of powders, granules, and compacts such as powder flow and tensile strength are determined largely by their absolute and relative densities (1–8). Although measuring these properties is a simple task, a review of the literature reveals that a combined source of density data that formulation scientists can refer to does not exist. The purpose of this article is to provide such a reference source and to give insight about how these critical properties can be measured for common pharmaceutical solids and how they can be used for monitoring common drugproduct manufacturing operations.",
"title": ""
}
] |
scidocsrr
|
c6ed66423b14988ee7cd832a7d89a42d
|
ON LI-ION BATTERY AGEING ESTIMATION TECHNIQUES FOR GREEN ENERGY VEHICLES
|
[
{
"docid": "ede12c734b2fb65b427b3d47e1f3c3d8",
"text": "Battery management systems in hybrid-electric-vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state-of-charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose methods, based on extended Kalman filtering (EKF), that are able to accomplish these goals for a lithium ion polymer battery pack. We expect that they will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. This third paper concludes the series by presenting five additional applications where either an EKF or results from EKF may be used in typical BMS algorithms: initializing state estimates after the vehicle has been idle for some time; estimating state-of-charge with dynamic error bounds on the estimate; estimating pack available dis/charge power; tracking changing pack parameters (including power fade and capacity fade) as the pack ages, and therefore providing a quantitative estimate of state-of-health; and determining which cells must be equalized. Results from pack tests are presented. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2968dc7cceaa404b9940e7786a0c48b6",
"text": "This paper presents an empirical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices. *",
"title": ""
},
{
"docid": "f5f1300baf7ed92626c912b98b6308c9",
"text": "The constant increase in global energy demand, together with the awareness of the finite supply of fossil fuels, has brought about an imperious need to take advantage of renewable energy sources. At the same time, concern over CO(2) emissions and future rises in the cost of gasoline has boosted technological efforts to make hybrid and electric vehicles available to the general public. Energy storage is a vital issue to be addressed within this scenario, and batteries are certainly a key player. In this tutorial review, the most recent and significant scientific advances in the field of rechargeable batteries, whose performance is dependent on their underlying chemistry, are covered. In view of its utmost current significance and future prospects, special emphasis is given to progress in lithium-based technologies.",
"title": ""
}
] |
[
{
"docid": "3a81f0fc24dd90f6c35c47e60db3daa4",
"text": "Advances in information and Web technologies have open numerous opportunities for online retailing. The pervasiveness of the Internet coupled with the keenness in competition among online retailers has led to virtual experiential marketing (VEM). This study examines the relationship of five VEM elements on customer browse and purchase intentions and loyalty, and the moderating effects of shopping orientation and Internet experience on these relationships. A survey was conducted of customers who frequently visited two online game stores to play two popular games in Taiwan. The results suggest that of the five VEM elements, three have positive effects on browse intention, and two on purchase intentions. Both browse and purchase intentions have positive effects on customer loyalty. Economic orientation was found to moderate that relationships between the VEM elements and browse and purchase intentions. However, convenience orientation moderated only the relationships between the VEM elements and browse intention.",
"title": ""
},
{
"docid": "9159ffb919402640381775f76b701ac8",
"text": "With the vigorous development of the World Wide Web, many large-scale knowledge bases (KBs) have been generated. To improve the coverage of KBs, an important task is to integrate the heterogeneous KBs. Several automatic alignment methods have been proposed which achieve considerable success. However, due to the inconsistency and uncertainty of large-scale KBs, automatic techniques for KBs alignment achieve low quality (especially recall). Thanks to the open crowdsourcing platforms, we can harness the crowd to improve the alignment quality. To achieve this goal, in this paper we propose a novel hybrid human-machine framework for large-scale KB integration. We rst partition the entities of different KBs into many smaller blocks based on their relations. We then construct a partial order on these partitions and develop an inference model which crowdsources a set of tasks to the crowd and infers the answers of other tasks based on the crowdsourced tasks. Next we formulate the question selection problem, which, given a monetary budget B, selects B crowdsourced tasks to maximize the number of inferred tasks. We prove that this problem is NP-hard and propose greedy algorithms to address this problem with an approximation ratio of 1--1/e. Our experiments on real-world datasets indicate that our method improves the quality and outperforms state-of-the-art approaches.",
"title": ""
},
{
"docid": "8e896b9006ecc82fcfa4f6905a3dc5ae",
"text": "In this paper, we present a generalized Wishart classifier derived from a non-Gaussian model for polarimetric synthetic aperture radar (PolSAR) data. Our starting point is to demonstrate that the scale mixture of Gaussian (SMoG) distribution model is suitable for modeling PolSAR data. We show that the distribution of the sample covariance matrix for the SMoG model is given as a generalization of the Wishart distribution and present this expression in integral form. We then derive the closed-form solution for one particular SMoG distribution, which is known as the multivariate K-distribution. Based on this new distribution for the sample covariance matrix, termed as the K -Wishart distribution, we propose a Bayesian classification scheme, which can be used in both supervised and unsupervised modes. To demonstrate the effect of including non-Gaussianity, we present a detailed comparison with the standard Wishart classifier using airborne EMISAR data.",
"title": ""
},
{
"docid": "862f795008ce9f622b5418430adcdeda",
"text": "BACKGROUND\nFeedback is an essential element of the educational process for clinical trainees. Performance-based feedback enables good habits to be reinforced and faulty ones to be corrected. Despite its importance, most trainees feel that they do not receive adequate feedback and if they do, the process is not effective.\n\n\nAIMS AND METHODS\nThe authors reviewed the literature on feedback and present the following 12 tips for clinical teachers to provide effective feedback to undergraduate and graduate medical trainees. In most of the tips, the focus is the individual teacher in clinical settings, although some of the suggestions are best adopted at the institutional level.\n\n\nRESULTS\nClinical educators will find the tips practical and easy to implement in their day-to-day interactions with learners. The techniques can be applied in settings whether the time for feedback is 5 minutes or 30 minutes.\n\n\nCONCLUSIONS\nClinical teachers can improve their skills for giving feedback to learners by using the straightforward and practical tools described in the subsequent sections. Institutions should emphasise the importance of feedback to their clinical educators, provide staff development and implement a mechanism by which the quantity and quality of feedback is monitored.",
"title": ""
},
{
"docid": "2ee9ed8260e63721b8525724b0d65d5e",
"text": "Deep neural network classifiers are vulnerable to small input perturbations carefully generated by the adversaries. Injecting adversarial inputs during training, known as adversarial training, can improve robustness against one-step attacks, but not for unknown iterative attacks. To address this challenge, we propose to utilize embedding space for both classification and low-level (pixel-level) similarity learning to ignore unknown pixel level perturbation. During training, we inject adversarial images without replacing their corresponding clean images and penalize the distance between the two embeddings (clean and adversarial). This additional regularization encourages two similar images (clean and perturbed versions) to produce the same outputs, not necessarily the true labels, enhancing classifier’s robustness against pixel level perturbation. Next, we show iteratively generated adversarial images easily transfer between networks trained with the same strategy. Inspired by this observation, we also propose cascade adversarial training, which transfers the knowledge of the end results of adversarial training. We train a network from scratch by injecting iteratively generated adversarial images crafted from already defended networks in addition to one-step adversarial images from the network being trained. Experimental results show that cascade adversarial training together with our proposed low-level similarity learning efficiently enhance the robustness against iterative attacks, but at the expense of decreased robustness against one-step attacks. We show that combining those two techniques can also improve robustness under the worst case black box attack scenario.",
"title": ""
},
{
"docid": "f9b99ad1fcf9963cca29e7ddfca20428",
"text": "Nested Named Entities (nested NEs), one containing another, are commonly seen in biomedical text, e.g., accounting for 16.7% of all named entities in GENIA corpus. While many works have been done in recognizing non-nested NEs, nested NEs have been largely neglected. In this work, we treat the task as a binary classification problem and solve it using Support Vector Machines. For each token in nested NEs, we use two schemes to set its class label: labeling as the outmost entity or the inner entity. Our preliminary results show that while the outmost labeling tends to work better in recognizing the outmost entities, the inner labeling recognizes the inner NEs better. This result should be useful for recognition of nested NEs.",
"title": ""
},
{
"docid": "a014644ccccb2a06d820ee975cfdfa88",
"text": "Analyzing customer feedback is the best way to channelize the data into new marketing strategies that benefit entrepreneurs as well as customers. Therefore an automated system which can analyze the customer behavior is in great demand. Users may write feedbacks in any language, and hence mining appropriate information often becomes intractable. Especially in a traditional feature-based supervised model, it is difficult to build a generic system as one has to understand the concerned language for finding the relevant features. In order to overcome this, we propose deep Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based approaches that do not require handcrafting of features. We evaluate these techniques for analyzing customer feedback sentences on four languages, namely English, French, Japanese and Spanish. Our empirical analysis shows that our models perform well in all the four languages on the setups of IJCNLP Shared Task on Customer Feedback Analysis. Our model achieved the second rank in French, with an accuracy of 71.75% and third ranks for all the other languages.",
"title": ""
},
{
"docid": "61282d5ef37e5821a5a856f0bbe26cc2",
"text": "Second language teachers are great consumers of grammar. They are mainly interested in pedagogical grammar, but they are generally unaware of the work of theoretical linguists, such as Chomsky and Halliday. Whereas Chomsky himself has never suggested in any way that his work might be of benefit to L2 teaching, Halliday and his many disciples, have. It seems odd that language teachers should choose to ignore the great gurus of grammar. Even if their work is deemed too technical and theoretical for classroom application, it may still shed light on pedagogical grammar and provide a rationale for the way one goes about teaching grammar. In order to make informed decisions about what grammar to teach and how best to teach it, one should take stock of the various schools of grammar that seem to speak in very different voices. In the article, the writer outlines the kinds of grammar that come out of five of these schools, and assesses their usefulness to the L2 teacher.",
"title": ""
},
{
"docid": "b17e909f1301880e93797ed75d26ce57",
"text": "We propose a simple, yet effective, Word Sense Disambiguation method that uses a combination of a lexical knowledge-base and embeddings. Similar to the classic Lesk algorithm, it exploits the idea that overlap between the context of a word and the definition of its senses provides information on its meaning. Instead of counting the number of words that overlap, we use embeddings to compute the similarity between the gloss of a sense and the context. Evaluation on both Dutch and English datasets shows that our method outperforms other Lesk methods and improves upon a state-of-theart knowledge-based system. Additional experiments confirm the effect of the use of glosses and indicate that our approach works well in different domains.",
"title": ""
},
{
"docid": "a8a131f37e3cf2c06fcfe8692bf6d16d",
"text": "Near Field Communication(NFC) technology is one of the most promising technologies in the field of mobile application services recently. The integration of NFC technology and smart mobile device (e.g., smart phones, tablet PC and etc.) stimulates the daily increasing popularity of NFC-based mobile applications which having proliferated in the mobile society. However, this proliferation of NFC-based mobile services in a mobile environment can cause another security threat in the field of mobile application services. Recently, mobile phishing and smishing are one of the most serious security issues in the mobile application services. And, the NFC tag-based mobile services (i.e. NFC tag based services) also have the same problem because an NFC tag have security vulnerabilities. Actually, NFC-enabled device can communicate with NFC tag using specified data format, be called NFC Data Exchange Format(NDEF). The NDEF message is composed one or more NDEF records such as text, URI, Smart post(text and URL) and so on. Therefore, if an attacker overwrite the NDEF message in a tag or replace a NFC tag with hacked tag, they might deliver a mobile malware to an NFC-enabled device. In this paper, a secure and lightweight authentication protocols for NFC tag based services is proposed which effectively achieves security with preventing spoofing, DoS, data modification and phishing attack. And, this authentication protocols are also requires less memory storage and computational power for low-cost NFC tags.",
"title": ""
},
{
"docid": "c59652c2166aefb00469517cd270dea2",
"text": "Intrusion detection systems have traditionally been based on the characterization of an attack and the tracking of the activity on the system to see if it matches that characterization. Recently, new intrusion detection systems based on data mining are making their appearance in the field. This paper describes the design and experiences with the ADAM (Audit Data Analysis and Mining) system, which we use as a testbed to study how useful data mining techniques can be in intrusion detection.",
"title": ""
},
{
"docid": "e243677212e628d84d5e207fe451ce43",
"text": "Based on analysis of the structure and control requirements of ice-storage air conditioning system, a distributed control system design was introduced. The hardware environment was mainly based on Programmable Logic Controller ¿PLC¿, and a touching screen was also applied as the local platforms of SCADA ( Supervisory Control and Data Acquisition) ; The software were CX-Programmer 7.1 and EV5000 configuration soft ware respectively. Tests results show that the PLC based control system is not only capable of running stably and reliably, but also has higher control accuracy. The touching screen can communicate precisely with PLC, and monitor and control the statuses of ice-storage air conditioning system promptly via MPI(Multi-Point Interface) protocol.",
"title": ""
},
{
"docid": "004f2be5924afc4d6de21681cf9ab4c8",
"text": "Training deep recurrent neural network (RNN) architectures is complicated due to the increased network complexity. This disrupts the learning of higher order abstracts using deep RNN. In case of feed-forward networks training deep structures is simple and faster while learning long-term temporal information is not possible. In this paper we propose a residual memory neural network (RMN) architecture to model short-time dependencies using deep feed-forward layers having residual and time delayed connections. The residual connection paves way to construct deeper networks by enabling unhindered flow of gradients and the time delay units capture temporal information with shared weights. The number of layers in RMN signifies both the hierarchical processing depth and temporal depth. The computational complexity in training RMN is significantly less when compared to deep recurrent networks. RMN is further extended as bi-directional RMN (BRMN) to capture both past and future information. Experimental analysis is done on AMI corpus to substantiate the capability of RMN in learning long-term information and hierarchical information. Recognition performance of RMN trained with 300 hours of Switchboard corpus is compared with various state-of-the-art LVCSR systems. The results indicate that RMN and BRMN gains 6 % and 3.8 % relative improvement over LSTM and BLSTM networks.",
"title": ""
},
{
"docid": "25ce68e2b2d9e9d8ff741e4e9ad1e378",
"text": "Advances in electronic banking technology have created novel ways of handling daily banking affairs, especially via the online banking channel. The acceptance of online banking services has been rapid in many parts of the world, and in the leading ebanking countries the number of e-banking contracts has exceeded 50 percent. Investigates online banking acceptance in the light of the traditional technology acceptance model (TAM), which is leveraged into the online environment. On the basis of a focus group interview with banking professionals, TAM literature and e-banking studies, we develop a model indicating onlinebanking acceptance among private banking customers in Finland. The model was tested with a survey sample (n 1⁄4 268). The findings of the study indicate that perceived usefulness and information on online banking on the Web site were the main factors influencing online-banking acceptance.",
"title": ""
},
{
"docid": "d12e99d6dc078d24a171f921ac0ef4d3",
"text": "An omni-directional rolling spherical robot equipped with a high-rate flywheel (BYQ-V) is presented, the gyroscopic effects of high-rate flywheel can further enhance the dynamic stability of the spherical robot. This robot is designed for territory or lunar exploration in the future. The mechanical structure and control system of the robot are given particularly. Using the constrained Lagrangian method, the simplified dynamic model of the robot is derived under some assumptions, Moreover, a Linear Quadratic Regulator (LQR) controller and Percentage Derivative (PD) controller are designed to implement the pose and velocity control of the robot respectively, Finally, the dynamic model and the controllers are validated through simulation study and prototype experiment.",
"title": ""
},
{
"docid": "6ae9da259125e0173f41fa3506641ca4",
"text": "We study the Maximum Weighted Matching problem in a partial information setting where the agents’ utilities for being matched to other agents are hidden and the mechanism only has access to ordinal preference information. Our model is motivated by the fact that in many settings, agents cannot express the numerical values of their utility for different outcomes, but are still able to rank the outcomes in their order of preference. Specifically, we study problems where the ground truth exists in the form of a weighted graph, and look to design algorithms that approximate the true optimum matching using only the preference orderings for each agent (induced by the hidden weights) as input. If no restrictions are placed on the weights, then one cannot hope to do better than the simple greedy algorithm, which yields a half optimal matching. Perhaps surprisingly, we show that by imposing a little structure on the weights, we can improve upon the trivial algorithm significantly: we design a 1.6-approximation algorithm for instances where the hidden weights obey the metric inequality. Our algorithm is obtained using a simple but powerful framework that allows us to combine greedy and random techniques in unconventional ways. These results are the first non-trivial ordinal approximation algorithms for such problems, and indicate that we can design robust matchings even when we are agnostic to the precise agent utilities.",
"title": ""
},
{
"docid": "b3998d818b12e9dc376afea3094ae23f",
"text": "1. Andrew Borthwick and Ralph Grishman. 1999. A maximum entropy approach to named entity recognition. Ph. D. Thesis, Dept. of Computer Science, New York University. 2. Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6645–6649. 3. Xuezhe Ma and Eduard Hovy. 2016. End-to-end se-quence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). The Ohio State University",
"title": ""
},
{
"docid": "76ecc824566f51b5dd18e888fe7a4e50",
"text": "In this paper, we present a novel deep learning framework that derives discriminative local descriptors for 3D surface shapes. In contrast to previous convolutional neural networks (CNNs) that rely on rendering multi-view images or extracting intrinsic shape properties, we parameterize the multi-scale localized neighborhoods of a keypoint into regular 2D grids, which are termed as ‘geometry images’. The benefits of such geometry images include retaining sufficient geometric information, as well as allowing the usage of standard CNNs. Specifically, we leverage a triplet network to perform deep metric learning, which takes a set of triplets as input, and a newly designed triplet loss function is minimized to distinguish between similar and dissimilar pairs of keypoints. At the testing stage, given a geometry image of a point of interest, our network outputs a discriminative local descriptor for it. Experimental results for non-rigid shape matching on several benchmarks demonstrate the superior performance of our learned descriptors over traditional descriptors and the state-of-the-art learning-based alternatives.",
"title": ""
},
{
"docid": "f260c00b06987378f9e502f0a9b1696e",
"text": "With reconfigurable devices fast becoming complete systems in their own right, interest in their security properties has increased. While research on \" FPGA security \" has been active since the early 2000s, few have treated the field as a whole, or framed its challenges in the context of the unique FPGA usage model and application space. This dissertation sets out to examine the role of FPGAs within a security system and how solutions to security challenges can be provided. I offer the following contributions. I motivate authenticating configurations as an additional capability to FPGA configuration logic, and then describe a flexible security protocol for remote recon-figuration of FPGA-based systems over insecure networks. Non-volatile memory devices are used for persistent storage when required, and complement the lack of features in some FPGAs with tamper proofing in order to maintain specified security properties. A unique advantage of the protocol is that it can be implemented on some existing FPGAs (i.e., it does not require FPGA vendors to add function-ality to their devices). Also proposed is a solution to the \" IP distribution problem \" where designs from multiple sources are integrated into a single bitstream, yet must maintain their confidentiality. I discuss the difficulty of reproducing and comparing FPGA implementation results reported in the academic literature. Concentrating on cryptographic implementations , problems are demonstrated through designing three architecture-optimized variants of the AES block cipher and analyzing the results to show that single figures of merit, namely \" throughput \" or \" throughput per slice \" , are often meaningless without the context of an application. To set a precedent for reproducibility in our field, the HDL source code, simulation testbenches and compilation instructions are made publicly available for scrutiny and reuse. Finally, I examine payment systems as ubiquitous embedded devices, and evaluate their security vulnerabilities as they interact in a multi-chip environment. Using FPGAs as an adversarial tool, a man-in-the-middle attack against these devices is demonstrated. An FPGA-based defense is also demonstrated: the first secure wired \" distance bounding \" protocol implementation. This is then put in the context of securing reconfigurable systems. Acknowledgments I dedicate this dissertation to my parents, Mika and Gideon, for their unconditional love and support throughout my life, and to my kind siblings Hadar and Oz, and their families. They have all seen less of me than they deserved in the past twelve years as I was …",
"title": ""
},
{
"docid": "0d750d31bcd0a998bd944910e707830c",
"text": "In this paper we focus on estimating the post-click engagement on native ads by predicting the dwell time on the corresponding ad landing pages. To infer relationships between features of the ads and dwell time we resort to the application of survival analysis techniques, which allow us to estimate the distribution of the length of time that the user will spend on the ad. This information is then integrated into the ad ranking function with the goal of promoting the rank of ads that are likely to be clicked and consumed by users (dwell time greater than a given threshold). The online evaluation over live tra c shows that considering post-click engagement has a consistent positive e↵ect on both CTR, decreases the number of bounces and increases the average dwell time, hence leading to a better user post-click experience.",
"title": ""
}
] |
scidocsrr
|
696859b5c5d856c8a01a9d0bc41d2470
|
Neural Architecture Construction using EnvelopeNets
|
[
{
"docid": "808a6c959eb79deb6ac5278805f5b855",
"text": "Recently there has been a lot of work on pruning filters from deep convolutional neural networks (CNNs) with the intention of reducing computations. The key idea is to rank the filters based on a certain criterion (say, l1-norm, average percentage of zeros, etc) and retain only the top ranked filters. Once the low scoring filters are pruned away the remainder of the network is fine tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned. Specifically, we show counter-intuitive results wherein by randomly pruning 25-50% filters from deep CNNs we are able to obtain the same performance as obtained by using state of the art pruning methods. We empirically validate our claims by doing an exhaustive evaluation with VGG-16 and ResNet-50. Further, we also evaluate a real world scenario where a CNN trained on all 1000 ImageNet classes needs to be tested on only a small set of classes at test time (say, only animals). We create a new benchmark dataset from ImageNet to evaluate such class specific pruning and show that even here a random pruning strategy gives close to state of the art performance. Lastly, unlike existing approaches which mainly focus on the task of image classification, in this work we also report results on object detection. We show that using a simple random pruning strategy we can achieve significant speed up in object detection (74% improvement in fps) while retaining the same accuracy as that of the original Faster RCNN model.",
"title": ""
},
{
"docid": "c10dd691e79d211ab02f2239198af45c",
"text": "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.84, which is only 0.1 percent worse and 1.2x faster than the current state-of-the-art model. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-ofthe-art.",
"title": ""
}
] |
[
{
"docid": "68a3f9fb186289f343b34716b2e087f6",
"text": "User interface (UI) is one of the most important components of a mobile app and strongly influences users' perception of the app. However, UI design tasks are typically manual and time-consuming. This paper proposes a novel approach to (semi)-automate those tasks. Our key idea is to develop and deploy advanced deep learning models based on recurrent neural networks (RNN) and generative adversarial networks (GAN) to learn UI design patterns from millions of currently available mobile apps. Once trained, those models can be used to search for UI design samples given user-provided descriptions written in natural language and generate professional-looking UI designs from simpler, less elegant design drafts.",
"title": ""
},
{
"docid": "4236e1b86150a9557b518b789418f048",
"text": "Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns to each 30 s of the signal of a sleep stage, based on the visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decisions trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields the state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance measured with balanced accuracy is to use 6 EEG with 2 EOG (left and right) and 3 EMG chin channels. Also exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels are available. As sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver the state-of-the-art classification performance with a small computational cost.",
"title": ""
},
{
"docid": "f738f79a9d516389e1ad0c7343d525c4",
"text": "The I-V curves for Schottky diodes with two different contact areas and geometries fabricated through 1.2 μm CMOS process are presented. These curves are described applying the analysis and practical layout design. It takes into account the resistance, capacitance and reverse breakdown voltage in the semiconductor structure and the dependence of these parameters to improve its operation. The described diodes are used for a charge pump circuit implementation.",
"title": ""
},
{
"docid": "97ad45410c0b613d08f1b0202777d124",
"text": "Much of the emerging literature on social media in the workplace is characterized by an “ideology of openness” that assumes social media use will increase knowledge sharing in organizations, and that open communication is effective and desirable. We argue that affordances of social media may in fact promote both overt and covert behavior, creating dialectical tensions for distributed workers that must be communicatively managed. Drawing on a case study of the engineering division of a distributed high tech start-up, we find our participants navigate tensions in visibility-invisibility, engagement-disengagement, and sharing-control and strategically manage these tensions to preserve both openness and ambiguity. These findings highlight ways in which organizational members limit as well as share knowledge through social media, and the productive role of tensions in enabling them to attend to multiple goals.",
"title": ""
},
{
"docid": "5c598998ffcf3d6008e8e5eed94fc396",
"text": "Music information retrieval (MIR) is an emerging research area that receives growing attention from both the research community and music industry. It addresses the problem of querying and retrieving certain types of music from large music data set. Classification is a fundamental problem in MIR. Many tasks in MIR can be naturally cast in a classification setting, such as genre classification, mood classification, artist recognition, instrument recognition, etc. Music annotation, a new research area in MIR that has attracted much attention in recent years, is also a classification problem in the general sense. Due to the importance of music classification in MIR research, rapid development of new methods, and lack of review papers on recent progress of the field, we provide a comprehensive review on audio-based classification in this paper and systematically summarize the state-of-the-art techniques for music classification. Specifically, we have stressed the difference in the features and the types of classifiers used for different classification tasks. This survey emphasizes on recent development of the techniques and discusses several open issues for future research.",
"title": ""
},
{
"docid": "8e65630f39f96c281e206bdacf7a1748",
"text": "Precise measurement of the local position of moveable targets in three dimensions is still considered to be a challenge. With the presented local position measurement technology, a novel system, consisting of small and lightweight measurement transponders and a number of fixed base stations, is introduced. The system is operating in the 5.8-GHz industrial-scientific-medical band and can handle up to 1000 measurements per second with accuracies down to a few centimeters. Mathematical evaluation is based on a mechanical equivalent circuit. Measurement results obtained with prototype boards demonstrate the feasibility of the proposed technology in a practical application at a race track.",
"title": ""
},
{
"docid": "10aff8f514d474b8b6d7ccf220c96768",
"text": "In this article, we develop a real-time mobile phone-based gaze tracking and eye-blink detection system on Android platform. Our eye-blink detection scheme is developed based on the time difference between two open eye states. We develop our system by finding the greatest circle—pupil of an eye. So we combine the both Haar classifier and Normalized Summation of Square of Difference template-matching method. We define the eyeball area that is extracted from the eye region as the region of interest (ROI). The ROI helps to differentiate between the open state and closed state of the eyes. The output waveform of the scheme is analogous to binary trend, which alludes the blink detection distinctly. We categorize short, medium, and long blink, depending on the degree of closure and blink duration. Our analysis is operated on medium blink under 15 frames/s. This combined solution for gaze tracking and eye-blink detection system has high detection accuracy and low time consumption. We obtain 98% accuracy at 0° angles for blink detection from both eyes. The system is also extensively experimented with various environments and setups, including variations in illuminations, subjects, gender, angles, processing speed, RAM capacity, and distance. We found that the system performs satisfactorily under varied conditions in real time for both single eye and two eyes detection. These concepts can be exploited in different applications, e.g., to detect drowsiness of a driver, or to operate the computer cursor to develop an eye-operated mouse for disabled people.",
"title": ""
},
{
"docid": "19acb49d484c0a5d949e2f7813253759",
"text": "In this paper we present a PDR (Pedestrian Dead Reckoning) system with a phone location awareness algorithm. PDR is a device which provides position information of the pedestrian. In general, the step length is estimated using a linear combination of the walking frequency and the acceleration variance for the mobile phone. It means that the step length estimation accuracy is affected by coefficients of the walking frequency and the acceleration variance which are called step length estimation parameters. Developed PDR is assumed that it is embedded in the mobile phone. Thus, parameters can be different from each phone location such as hand with swing motion, hand without any motion and pants pocket. It means that different parameters can degrade the accuracy of the step length estimation. Step length estimation result can be improved when appropriate parameters which are determined by phone location awareness algorithm are used. In this paper, the phone location awareness algorithm for PDR is proposed.",
"title": ""
},
{
"docid": "f60426bdd66154a7d2cb6415abd8f233",
"text": "In the rapidly expanding field of parallel processing, job schedulers are the “operating systems” of modern big data architectures and supercomputing systems. Job schedulers allocate computing resources and control the execution of processes on those resources. Historically, job schedulers were the domain of supercomputers, and job schedulers were designed to run massive, long-running computations over days and weeks. More recently, big data workloads have created a need for a new class of computations consisting of many short computations taking seconds or minutes that process enormous quantities of data. For both supercomputers and big data systems, the efficiency of the job scheduler represents a fundamental limit on the efficiency of the system. Detailed measurement and modeling of the performance of schedulers are critical for maximizing the performance of a large-scale computing system. This paper presents a detailed feature analysis of 15 supercomputing and big data schedulers. For big data workloads, the scheduler latency is the most important performance characteristic of the scheduler. A theoretical model of the latency of these schedulers is developed and used to design experiments targeted at measuring scheduler latency. Detailed benchmarking of four of the most popular schedulers (Slurm, Son of Grid Engine, Mesos, and Hadoop YARN) are conducted. The theoretical model is compared with data and demonstrates that scheduler performance can be characterized by two key parameters: the marginal latency of the scheduler ts and a nonlinear exponent αs. For all four schedulers, the utilization of the computing system decreases to <10% for computations lasting only a few seconds. Multi-level schedulers (such as LLMapReduce) that transparently aggregate short computations can improve utilization for these short computations to >90% for all four of the schedulers that were tested.",
"title": ""
},
{
"docid": "7b5bc6fada4fba92ce81d5955b813f4c",
"text": "Intrusion Detection Systems (IDSs) attempt to identify unauthorized use, misuse, and abuse of computer systems. In response to the growth in the use and development of IDSs, we have developed a methodology for testing IDSs. The methodology consists of techniques from the eld of software testing which we have adapted for the speci c purpose of testing IDSs. In this paper, we identify a set of general IDS performance objectives which is the basis for the methodology. We present the details of the methodology, including strategies for test-case selection and speci c testing procedures. We include quantitative results from testing experiments on the Network Security Monitor (NSM), an IDS developed at UC Davis. We present an overview of the software platform that we have used to create user-simulation scripts for testing experiments. The platform consists of the UNIX tool expect and enhancements that we have developed, including mechanisms for concurrent scripts and a record-and-replay feature. We also provide background information on intrusions and IDSs to motivate our work.",
"title": ""
},
{
"docid": "4a761bed54487cb9c34fc0ff27883944",
"text": "We show that unsupervised training of latent capsule layers using only the reconstruction loss, without masking to select the correct output class, causes a loss of equivariances and other desirable capsule qualities. This implies that supervised capsules networks can’t be very deep. Unsupervised sparsening of latent capsule layer activity both restores these qualities and appears to generalize better than supervised masking, while potentially enabling deeper capsules networks. We train a sparse, unsupervised capsules network of similar geometry to (Sabour et al., 2017) on MNIST (LeCun et al., 1998) and then test classification accuracy on affNIST1 using an SVM layer. Accuracy is improved from benchmark 79% to 90%.",
"title": ""
},
{
"docid": "9fb5db3cdcffb968b54c7d23d8a690a2",
"text": "BACKGROUND\nPhysical activity is associated with many physical and mental health benefits, however many children do not meet the national physical activity guidelines. While schools provide an ideal setting to promote children's physical activity, adding physical activity to the school day can be difficult given time constraints often imposed by competing key learning areas. Classroom-based physical activity may provide an opportunity to increase school-based physical activity while concurrently improving academic-related outcomes. The primary aim of this systematic review and meta-analysis was to evaluate the impact of classroom-based physical activity interventions on academic-related outcomes. A secondary aim was to evaluate the impact of these lessons on physical activity levels over the study duration.\n\n\nMETHODS\nA systematic search of electronic databases (PubMed, ERIC, SPORTDiscus, PsycINFO) was performed in January 2016 and updated in January 2017. Studies that investigated the association between classroom-based physical activity interventions and academic-related outcomes in primary (elementary) school-aged children were included. Meta-analyses were conducted in Review Manager, with effect sizes calculated separately for each outcome assessed.\n\n\nRESULTS\nThirty-nine articles met the inclusion criteria for the review, and 16 provided sufficient data and appropriate design for inclusion in the meta-analyses. Studies investigated a range of academic-related outcomes including classroom behaviour (e.g. on-task behaviour), cognitive functions (e.g. executive function), and academic achievement (e.g. standardised test scores). Results of the meta-analyses showed classroom-based physical activity had a positive effect on improving on-task and reducing off-task classroom behaviour (standardised mean difference = 0.60 (95% CI: 0.20,1.00)), and led to improvements in academic achievement when a progress monitoring tool was used (standardised mean difference = 1.03 (95% CI: 0.22,1.84)). However, no effect was found for cognitive functions (standardised mean difference = 0.33 (95% CI: -0.11,0.77)) or physical activity (standardised mean difference = 0.40 (95% CI: -1.15,0.95)).\n\n\nCONCLUSIONS\nResults suggest classroom-based physical activity may have a positive impact on academic-related outcomes. However, it is not possible to draw definitive conclusions due to the level of heterogeneity in intervention components and academic-related outcomes assessed. Future studies should consider the intervention period when selecting academic-related outcome measures, and use an objective measure of physical activity to determine intervention fidelity and effects on overall physical activity levels.",
"title": ""
},
{
"docid": "6a2e6492695beab2c0a6d479bffd65e1",
"text": "Electroencephalogram (EEG) signal based emotion recognition, as a challenging pattern recognition task, has attracted more and more attention in recent years and widely used in medical, Affective Computing and other fields. Traditional approaches often lack of the high-level features and the generalization ability is poor, which are difficult to apply to the practical application. In this paper, we proposed a novel model for multi-subject emotion classification. The basic idea is to extract the high-level features through the deep learning model and transform traditional subject-independent recognition tasks into multi-subject recognition tasks. Experiments are carried out on the DEAP dataset, and our results demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "cc08118c532cbe4665f8a3ac8b7d5fd7",
"text": "We evaluated the use of gamification to facilitate a student-centered learning environment within an undergraduate Year 2 Personal and Professional Development (PPD) course. In addition to face-to-face classroom practices, an information technology-based gamified system with a range of online learning activities was presented to students as support material. The implementation of the gamified course lasted two academic terms. The subsequent evaluation from a cohort of 136 students indicated that student performance was significantly higher among those who participated in the gamified system than in those who engaged with the nongamified, traditional delivery, while behavioral engagement in online learning activities was positively related to course performance, after controlling for gender, attendance, and Year 1 PPD performance. Two interesting phenomena appeared when we examined the influence of student background: female students participated significantly more in online learning activities than male students, and students with jobs engaged significantly more in online learning activities than students without jobs. The gamified course design advocated in this work may have significant implications for educators who wish to develop engaging technology-mediated learning environments that enhance students’ learning, or for a broader base of professionals who wish to engage a population of potential users, such as managers engaging employees or marketers engaging customers.",
"title": ""
},
{
"docid": "9ae0daf70e2389f2924f5568d74a9df5",
"text": "The paper describes the CAp 2017 challenge. The challenge concerns the problem of Named Entity Recognition (NER) for tweets written in French. We first present the data preparation steps we followed for constructing the dataset released in the framework of the challenge. We begin by demonstrating why NER for tweets is a challenging problem especially when the number of entities increases. We detail the annotation process and the necessary decisions we made. We provide statistics on the inter-annotator agreement, and we conclude the data description part with examples and statistics for the data. We, then, describe the participation in the challenge, where 8 teams participated, with a focus on the methods employed by the challenge participants and the scores achieved in terms of F1 measure. Importantly, the constructed dataset comprising ∼6,000 tweets annotated for 13 types of entities, which to the best of our knowledge is the first such dataset in French, is publicly available at http://cap2017.imag.fr/competition.html .",
"title": ""
},
{
"docid": "cffbb69ca7df3b0762d246ac358d5e5b",
"text": "This paper presents a 22 nm CMOS technology analog front-end (AFE) for biomedical applications. The circuit is designed for low power and small size implementations, especially for battery-powered implantable devices, and is capable of reading out biomedical signals in the range of 0.01 Hz to 300 Hz in frequency, while rejecting power-line frequency of 50/60Hz. It employs Operational Transconductance Amplifiers (OTAs) in an OTA-C structure to realize a notch filter. The OTA designed has a very low transconductance, which is programmable from 1.069 nA/V to 2.114 nA/V. The notch at power-line frequency (50/60 Hz) achieves an attenuation of 20 dB. The power consumption of the entire AFE was found to be 11.34 nW at ±0.95V supply.",
"title": ""
},
{
"docid": "a3ac978e59bdedc18c45d460dd8fc154",
"text": "Searching for information in distributed ledgers is currently not an easy task, as information relating to an entity may be scattered throughout the ledger with no index. As distributed ledger technologies become more established, they will increasingly be used to represent real world transactions involving many parties and the search requirements will grow. An index providing the ability to search using domain specific terms across multiple ledgers will greatly enhance to power, usability and scope of these systems. We have implemented a semantic index to the Ethereum blockchain platform, to expose distributed ledger data as Linked Data. As well as indexing blockand transactionlevel data according to the BLONDiE ontology, we have mapped smart contracts to the Minimal Service Model ontology, to take the first steps towards connecting smart contracts with Semantic Web Services.",
"title": ""
},
{
"docid": "01f29e15732a48949b41a193073bcbe3",
"text": "Annotation is the process of adding semantic metadata to resources so that data becomes more meaningful. Creating additional metadata by document annotation is considered one of the main techniques that make machines understand and deal with data on the web. Our paper presents a semantic framework that annotates RSS News Feeds. Semantic Annotation for News Feeds Frameworks (SANF) aims to enrich web content by using semantic metadata easily which facilitates searching and adding of web content.",
"title": ""
},
{
"docid": "168f2c2b4e8bc52debf81eb800860cae",
"text": "Optimal reconfigurable hardware implementations may require the use of arbitrary floating-point formats that do not necessarily conform to IEEE specified sizes. We present a variable precision floating-point library (VFloat) that supports general floating-point formats including IEEE standard formats. Most previously published floating-point formats for use with reconfigurable hardware are subsets of our format. Custom datapaths with optimal bitwidths for each operation can be built using the variable precision hardware modules in the VFloat library, enabling a higher level of parallelism. The VFloat library includes three types of hardware modules for format control, arithmetic operations, and conversions between fixed-point and floating-point formats. The format conversions allow for hybrid fixed- and floating-point operations in a single design. This gives the designer control over a large number of design possibilities including format as well as number range within the same application. In this article, we give an overview of the components in the VFloat library and demonstrate their use in an implementation of the K-means clustering algorithm applied to multispectral satellite images.",
"title": ""
},
{
"docid": "319285416d58c9b2da618bb6f0c8021c",
"text": "Facial expression analysis is one of the popular fields of research in human computer interaction (HCI). It has several applications in next generation user interfaces, human emotion analysis, behavior and cognitive modeling. In this paper, a facial expression classification algorithm is proposed which uses Haar classifier for face detection purpose, Local Binary Patterns(LBP) histogram of different block sizes of a face image as feature vectors and classifies various facial expressions using Principal Component Analysis (PCA). The algorithm is implemented in real time for expression classification since the computational complexity of the algorithm is small. A customizable approach is proposed for facial expression analysis, since the various expressions and intensity of expressions vary from person to person. The system uses grayscale frontal face images of a person to classify six basic emotions namely happiness, sadness, disgust, fear, surprise and anger.",
"title": ""
}
] |
scidocsrr
|
a717bf31931d0500366da4435f4a77c1
|
Color constancy using 3D scene geometry
|
[
{
"docid": "dd1b20766f2b8099b914c780fb8cc03c",
"text": "Many computer vision algorithms limit their performance by ignoring the underlying 3D geometric structure in the image. We show that we can estimate the coarse geometric properties of a scene by learning appearance-based models of geometric classes, even in cluttered natural scenes. Geometric classes describe the 3D orientation of an image region with respect to the camera. We provide a multiple-hypothesis framework for robustly estimating scene structure from a single image and obtaining confidences for each geometric label. These confidences can then be used to improve the performance of many other applications. We provide a thorough quantitative evaluation of our algorithm on a set of outdoor images and demonstrate its usefulness in two applications: object detection and automatic single-view reconstruction.",
"title": ""
},
{
"docid": "7b1e2439e3be5110f8634394f266da7c",
"text": "ÐIn the absence of cues for absolute depth measurements as binocular disparity, motion, or defocus, the absolute distance between the observer and a scene cannot be measured. The interpretation of shading, edges, and junctions may provide a 3D model of the scene but it will not provide information about the actual ªscaleº of the space. One possible source of information for absolute depth estimation is the image size of known objects. However, object recognition, under unconstrained conditions, remains difficult and unreliable for current computational approaches. Here, we propose a source of information for absolute depth estimation based on the whole scene structure that does not rely on specific objects. We demonstrate that, by recognizing the properties of the structures present in the image, we can infer the scale of the scene and, therefore, its absolute mean depth. We illustrate the interest in computing the mean depth of the scene with application to scene recognition and object detection.",
"title": ""
}
] |
[
{
"docid": "822d60e376da2701649754f5e56eb691",
"text": "Auto-encoding is an important task which is typically realized by deep neural networks (DNNs) such as convolutional neural networks (CNN). In this paper, we propose EncoderForest (abbrv. eForest), the first tree ensemble based auto-encoder. We present a procedure for enabling forests to do backward reconstruction by utilizing the MaximalCompatible Rule (MCR) defined by the decision paths of the trees, and demonstrate its usage in both supervised and unsupervised setting. Experiments show that, compared with DNN based auto-encoders, eForest is able to obtain lower reconstruction error with fast training speed, while the model itself is reusable and damage-tolerable.",
"title": ""
},
{
"docid": "b7956722389df722029b005d0f7566a2",
"text": "Social media platforms such as Twitter are becoming increasingly mainstream which provides valuable user-generated information by publishing and sharing contents. Identifying interesting and useful contents from large text-streams is a crucial issue in social media because many users struggle with information overload. Retweeting as a forwarding function plays an important role in information propagation where the retweet counts simply reflect a tweet's popularity. However, the main reason for retweets may be limited to personal interests and satisfactions. In this paper, we use a topic identification as a proxy to understand a large number of tweets and to score the interestingness of an individual tweet based on its latent topics. Our assumption is that fascinating topics generate contents that may be of potential interest to a wide audience. We propose a novel topic model called Trend Sensitive-Latent Dirichlet Allocation (TS-LDA) that can efficiently extract latent topics from contents by modeling temporal trends on Twitter over time. The experimental results on real world data from Twitter demonstrate that our proposed method outperforms several other baseline methods. With the rise of the Internet, blogs, and mobile devices, social media has also evolved into an information provider by publishing and sharing user-generated contents. By analyzing textual data which represents the thoughts and communication between users, it is possible to understand the public needs and concerns about what constitutes valuable information from an academic, marketing , and policy-making perspective. Twitter (http://twitter.com) is one of the social media platforms that enables its users to generate and consume useful information about issues and trends from text streams in real-time. Twitter and its 500 million registered users produce over 340 million tweets, which are text-based messages of up to 140 characters, per day 1. Also, users subscribe to other users in order to view their followers' relationships and timelines which show tweets in reverse chronological order. Although tweets may contain valuable information, many do not and are not interesting to users. A large number of tweets can overwhelm users when they check their Twitter timeline. Thus, finding and recommending tweets that are of potential interest to users from a large volume of tweets that is accumulated in real-time is a crucial but challenging task. A simple but effective way to solve these problems is to use the number of retweets. A retweet is a function that allows a user to re-post another user's tweet and other information such …",
"title": ""
},
{
"docid": "de48faf1dc4d276460b8369c9d8f36a8",
"text": "Momentum is primarily driven by firms’ performance 12 to seven months prior to portfolio formation, not by a tendency of rising and falling stocks to keep rising and falling. Strategies based on recent past performance generate positive returns but are less profitable than those based on intermediate horizon past performance, especially among the largest, most liquid stocks. These facts are not particular to the momentum observed in the cross section of US equities. Similar results hold for momentum strategies trading international equity indices, commodities, and currencies.",
"title": ""
},
{
"docid": "5694ebf4c1f1e0bf65dd7401d35726ed",
"text": "Data collection is not a big issue anymore with available honeypot software and setups. However malware collections gathered from these honeypot systems often suffer from massive sample counts, data analysis systems like sandboxes cannot cope with. Sophisticated self-modifying malware is able to generate new polymorphic instances of itself with different message digest sums for each infection attempt, thus resulting in many different samples stored for the same specimen. Scaling analysis systems that are fed by databases that rely on sample uniqueness based on message digests is only feasible to a certain extent. In this paper we introduce a non cryptographic, fast to calculate hash function for binaries in the Portable Executable format that transforms structural information about a sample into a hash value. Grouping binaries by hash values calculated with the new function allows for detection of multiple instances of the same polymorphic specimen as well as samples that are broken e.g. due to transfer errors. Practical evaluation on different malware sets shows that the new function allows for a significant reduction of sample counts.",
"title": ""
},
{
"docid": "38fab4cc5cffea363eecbc8b2f2c6088",
"text": "Domain adaptation algorithms are useful when the distributions of the training and the test data are different. In this paper, we focus on the problem of instrumental variation and time-varying drift in the field of sensors and measurement, which can be viewed as discrete and continuous distributional change in the feature space. We propose maximum independence domain adaptation (MIDA) and semi-supervised MIDA to address this problem. Domain features are first defined to describe the background information of a sample, such as the device label and acquisition time. Then, MIDA learns a subspace which has maximum independence with the domain features, so as to reduce the interdomain discrepancy in distributions. A feature augmentation strategy is also designed to project samples according to their backgrounds so as to improve the adaptation. The proposed algorithms are flexible and fast. Their effectiveness is verified by experiments on synthetic datasets and four real-world ones on sensors, measurement, and computer vision. They can greatly enhance the practicability of sensor systems, as well as extend the application scope of existing domain adaptation algorithms by uniformly handling different kinds of distributional change.",
"title": ""
},
{
"docid": "dbf3a58ffe71e6ef61d6c69e85a3c743",
"text": "A conventional automatic speech recognizer does not perform well in the presence of noise, while human listeners are able to segregate and recognize speech in noisy conditions. We study a novel feature based on an auditory periphery model for robust speech recognition. Specifically, gammatone frequency cepstral coefficients are derived by applying a cepstral analysis on gammatone filterbank responses. Our evaluations show that the proposed feature performs considerably better than conventional acoustic features. We further demonstrate that integrating the proposed feature with a computational auditory scene analysis system yields promising recognition performance.",
"title": ""
},
{
"docid": "978c1712bf6b469059218697ea552524",
"text": "Project-based cross-sector partnerships to address social issues (CSSPs) occur in four “arenas”: business-nonprofit, business-government, government-nonprofit, and trisector. Research on CSSPs is multidisciplinary, and different conceptual “platforms” are used: resource dependence, social issues, and societal sector platforms. This article consolidates recent literature on CSSPs to improve the potential for cross-disciplinary fertilization and especially to highlight developments in various disciplines for organizational researchers. A number of possible directions for future research on the theory, process, practice, method, and critique of CSSPs are highlighted. The societal sector platform is identified as a particularly promising framework for future research.",
"title": ""
},
{
"docid": "3a86f1f91cfaa398a03a56abb34f497c",
"text": "We present a practical approach to generate stochastic anisotropic samples with Poisson-disk characteristic over a two-dimensional domain. In contrast to isotropic samples, we understand anisotropic samples as nonoverlapping ellipses whose size and density match a given anisotropic metric. Anisotropic noise samples are useful for many visualization and graphics applications. The spot samples can be used as input for texture generation, for example, line integral convolution (LIC), but can also be used directly for visualization. The definition of the spot samples using a metric tensor makes them especially suitable for the visualization of tensor fields that can be translated into a metric. Our work combines ideas from sampling theory and mesh generation to approximate generalized blue noise properties. To generate these samples with the desired properties, we first construct a set of nonoverlapping ellipses whose distribution closely matches the underlying metric. This set of samples is used as input for a generalized anisotropic Lloyd relaxation to distribute noise samples more evenly. Instead of computing the Voronoi tessellation explicitly, we introduce a discrete approach that combines the Voronoi cell and centroid computation in one step. Our method supports automatic packing of the elliptical samples, resulting in textures similar to those generated by anisotropic reaction-diffusion methods. We use Fourier analysis tools for quality measurement of uniformly distributed samples. The resulting samples have nice sampling properties, for example, they satisfy a blue noise property where low frequencies in the power spectrum are reduced to a minimum..",
"title": ""
},
{
"docid": "76e15dd4090301ec855fdc3e22ff238f",
"text": "Robert Godwin-Jones Virginia Commonwealth University It wasn’t that long ago that the most exciting thing you could so with your new mobile phone was to download a ringtone. Today, new iPhone or Android phone users face the quandary of which of the hundreds of thousands of apps (applications) they should choose. It seems that everyone from federal government agencies to your local bakery has an app available. This phenomenon, not surprisingly has led to tremendous interest among educators. Mobile learning (often “m-learning”) is in itself not new, but new devices with enhanced capabilities have dramatically increased the interest level, including among language educators. The Apple iPad and other new tablet computers are adding to the mobile app frenzy. In this column we will explore the state of language learning apps, the devices they run on, and how they are developed.",
"title": ""
},
{
"docid": "97a68243cafa6f4118d0d1067f8e54f7",
"text": "To make sound economic decisions, the brain needs to compute several different value-related signals. These include goal values that measure the predicted reward that results from the outcome generated by each of the actions under consideration, decision values that measure the net value of taking the different actions, and prediction errors that measure deviations from individuals' previous reward expectations. We used functional magnetic resonance imaging and a novel decision-making paradigm to dissociate the neural basis of these three computations. Our results show that they are supported by different neural substrates: goal values are correlated with activity in the medial orbitofrontal cortex, decision values are correlated with activity in the central orbitofrontal cortex, and prediction errors are correlated with activity in the ventral striatum.",
"title": ""
},
{
"docid": "b77b4786128a214b9d91caec1232d513",
"text": "FANET are wireless ad hoc networks on unmanned aerial vehicles, and are characterized by high nodes mobility, dynamically changing topology and movement in 3D-space. FANET routing is an extremely complicated problem. The article describes the bee algorithm and the routing process based on the mentioned algorithm in ad hoc networks. The classification of FANET routing methods is given. The overview of the routing protocols based on the bee colony algorithms is provided. Owing to the experimental analysis, bio-inspired algorithms based on the bee colony were proved to show good results, having better efficiency than traditional FANET routing algorithms in most cases.",
"title": ""
},
{
"docid": "a9b769e33467cdcc86ab47b5183e5a5b",
"text": "The focus of this study is to examine the motivations of online community members to share information and rumors. We investigated an online community of interest, the members of which voluntarily associate and communicate with people with similar interests. Community members, posters and lurkers alike, were surveyed on the influence of extrinsic and intrinsic motivations, as well as normative influences, on their willingness to share information and rumors with others. The results indicated that posters and lurkers are differently motivated by intrinsic factors to share, and that extrinsic rewards like improved reputation and status-building within the community are motivating factors for rumor mongering. The results are discussed and future directions for this area of research are offered.",
"title": ""
},
{
"docid": "718cf9a405a81b9a43279a1d02f5e516",
"text": "In cross-cultural psychology, one of the major sources of the development and display of human behavior is the contact between cultural populations. Such intercultural contact results in both cultural and psychological changes. At the cultural level, collective activities and social institutions become altered, and at the psychological level, there are changes in an individual's daily behavioral repertoire and sometimes in experienced stress. The two most common research findings at the individual level are that there are large variations in how people acculturate and in how well they adapt to this process. Variations in ways of acculturating have become known by the terms integration, assimilation, separation, and marginalization. Two variations in adaptation have been identified, involving psychological well-being and sociocultural competence. One important finding is that there are relationships between how individuals acculturate and how well they adapt: Often those who integrate (defined as being engaged in both their heritage culture and in the larger society) are better adapted than those who acculturate by orienting themselves to one or the other culture (by way of assimilation or separation) or to neither culture (marginalization). Implications of these findings for policy and program development and for future research are presented.",
"title": ""
},
{
"docid": "8bcf5693c512df2429b49521239f2d87",
"text": "Reliable segmentation of cell nuclei from three dimensional (3D) microscopic images is an important task in many biological studies. We present a novel, fully automated method for the segmentation of cell nuclei from 3D microscopic images. It was designed specifically to segment nuclei in images where the nuclei are closely juxtaposed or touching each other. The segmentation approach has three stages: 1) a gradient diffusion procedure, 2) gradient flow tracking and grouping, and 3) local adaptive thresholding. Both qualitative and quantitative results on synthesized and original 3D images are provided to demonstrate the performance and generality of the proposed method. Both the over-segmentation and under-segmentation percentages of the proposed method are around 5%. The volume overlap, compared to expert manual segmentation, is consistently over 90%. The proposed algorithm is able to segment closely juxtaposed or touching cell nuclei obtained from 3D microscopy imaging with reasonable accuracy.",
"title": ""
},
{
"docid": "a238ba310374a78d9c0e09bee5aaf123",
"text": "Automatically constructed knowledge bases (KB’s) are a powerful asset for search, analytics, recommendations and data integration, with intensive use at big industrial stakeholders. Examples are the knowledge graphs for search engines (e.g., Google, Bing, Baidu) and social networks (e.g., Facebook), as well as domain-specific KB’s (e.g., Bloomberg, Walmart). These achievements are rooted in academic research and community projects. The largest general-purpose KB’s with publicly accessible contents are BabelNet, DBpedia, Wikidata, and Yago. They contain millions of entities, organized in hundreds to hundred thousands of semantic classes, and billions of relational facts on entities. These and other knowledge and data resources are interlinked at the entity level, forming the Web of Linked Open Data.",
"title": ""
},
{
"docid": "be9b4827de5d58197e0611fdd69ee953",
"text": "Recent research on the mechanism underlying the interaction of bacterial pathogens with their host has shifted the focus to secreted microbial proteins affecting the physiology and innate immune response of the target cell. These proteins either traverse the plasma membrane via specific entry pathways involving host cell receptors or are directly injected via bacterial secretion systems into the host cell, where they frequently target mitochondria. The import routes of bacterial proteins are mostly unknown, whereas the effect of mitochondrial targeting by these proteins has been investigated in detail. For a number of them, classical leader sequences recognized by the mitochondrial protein import machinery have been identified. Bacterial outer membrane beta-barrel proteins can also be recognized and imported by mitochondrial transporters. Besides an obvious importance in pathogenicity, understanding import of bacterial proteins into mitochondria has a highly relevant evolutionary aspect, considering the endosymbiotic, proteobacterial origin of mitochondria. The review covers the current knowledge on the mitochondrial targeting and import of bacterial pathogenicity factors.",
"title": ""
},
{
"docid": "3a45f11cadd76c5430fd6cbf87a804fa",
"text": "Newborn swapping and abduction is a global problem and traditional approaches such as ID bracelets and footprinting do not provide the required level of security. This paper introduces the concept of using face recognition for identifying newborns and presents an automatic face recognition algorithm. The proposed multiresolution algorithm extracts Speeded up robust features and local binary patterns from different levels of Gaussian pyramid. The feature descriptors obtained at each Gaussian level are combined using weighted sum rule. On a newborn face database of 34 babies, the proposed algorithm yields rank-1 identification accuracy of 86.9%.",
"title": ""
},
{
"docid": "745562de56499ff0030f35afa8d84b7f",
"text": "This paper will show how the accuracy and security of SCADA systems can be improved by using anomaly detection to identify bad values caused by attacks and faults. The performance of invariant induction and ngram anomaly-detectors will be compared and this paper will also outline plans for taking this work further by integrating the output from several anomalydetecting techniques using Bayesian networks. Although the methods outlined in this paper are illustrated using the data from an electricity network, this research springs from a more general attempt to improve the security and dependability of SCADA systems using anomaly detection.",
"title": ""
},
{
"docid": "9b9a2a9695f90a6a9a0d800192dd76f6",
"text": "Due to high competition in today's business and the need for satisfactory communication with customers, companies understand the inevitable necessity to focus not only on preventing customer churn but also on predicting their needs and providing the best services for them. The purpose of this article is to predict future services needed by wireless users, with data mining techniques. For this purpose, the database of customers of an ISP in Shiraz, which logs the customer usage of wireless internet connections, is utilized. Since internet service has three main factors to define (Time, Speed, Traffics) we predict each separately. First, future service demand is predicted by implementing a simple Recency, Frequency, Monetary (RFM) as a basic model. Other factors such as duration from first use, slope of customer's usage curve, percentage of activation, Bytes In, Bytes Out and the number of retries to establish a connection and also customer lifetime value are considered and added to RFM model. Then each one of R, F, M criteria is alternately omitted and the result is evaluated. Assessment is done through analysis node which determines the accuracy of evaluated data among partitioned data. The result shows that CART and C5.0 are the best algorithms to predict future services in this case. As for the features, depending upon output of each features, duration and transfer Bytes are the most important after RFM. An ISP may use the model discussed in this article to meet customers' demands and ensure their loyalty and satisfaction.",
"title": ""
}
] |
scidocsrr
|
cf5dc85683075b7c3b74a593fb9501d2
|
Variational Dropout and the Local Reparameterization Trick
|
[
{
"docid": "b70746423a35b7a55df1beb5be4fc411",
"text": "We propose a technique for increasing the efficiency of gradient-based inference and learning in Bayesian networks with multiple layers of continuous latent variables. We show that, in many cases, it is possible to express such models in an auxiliary form, where continuous latent variables are conditionally deterministic given their parents and a set of independent auxiliary variables. Variables of models in this auxiliary form have much larger Markov blankets, leading to significant speedups in gradient-based inference, e.g. rapid mixing Hybrid Monte Carlo and efficient gradient-based optimization. The relative efficiency is confirmed in experiments.",
"title": ""
},
{
"docid": "91ad02ab816f7897f86916e9c9106ef4",
"text": "Dropout is one of the key techniques to prevent the learning from overfitting. It is explained that dropout works as a kind of modified L2 regularization. Here, we shed light on the dropout from Bayesian standpoint. Bayesian interpretation enables us to optimize the dropout rate, which is beneficial for learning of weight parameters and prediction after learning. The experiment result also encourages the optimization of the dropout.",
"title": ""
},
{
"docid": "0c9a76222f885b95f965211e555e16cd",
"text": "In this paper we address the following question: “Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?”. An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while for slow mixing rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such an efficient optimizer during burn-in.",
"title": ""
}
] |
[
{
"docid": "632fd895e8920cd9b25b79c9d4bd4ef4",
"text": "In minimally invasive surgery, instruments are inserted from the exterior of the patient’s body into the surgical field inside the body through the minimum incision, resulting in limited visibility, accessibility, and dexterity. To address this problem, surgical instruments with articulated joints and multiple degrees of freedom have been developed. The articulations in currently available surgical instruments use mainly wire or link mechanisms. These mechanisms are generally robust and reliable, but the miniaturization of the mechanical parts required often results in problems with size, weight, durability, mechanical play, sterilization, and assembly costs. We thus introduced a compliant mechanism to a laparoscopic surgical instrument with multiple degrees of freedom at the tip. To show the feasibility of the concept, we developed a prototype with two degrees of freedom articulated surgical instruments that can perform the grasping and bending movements. The developed prototype is roughly the same size of the conventional laparoscopic instrument, within the diameter of 4 mm. The elastic parts were fabricated by Ni-Ti alloy and SK-85M, rigid parts ware fabricated by stainless steel, covered by 3D- printed ABS resin. The prototype was designed using iterative finite element method analysis, and has a minimal number of mechanical parts. The prototype showed hysteresis in grasping movement presumably due to the friction; however, the prototype showed promising mechanical characteristics and was fully functional in two degrees of freedom. In addition, the prototype was capable to exert over 15 N grasping that is sufficient for the general laparoscopic procedure. The evaluation tests thus positively showed the concept of the proposed mechanism. The prototype showed promising characteristics in the given mechanical evaluation experiments. Use of a compliant mechanism such as in our prototype may contribute to the advancement of surgical instruments in terms of simplicity, size, weight, dexterity, and affordability.",
"title": ""
},
{
"docid": "d1afaada6bf5927d9676cee61d3a1d49",
"text": "t-Closeness is a privacy model recently defined for data anonymization. A data set is said to satisfy t-closeness if, for each group of records sharing a combination of key attributes, the distance between the distribution of a confidential attribute in the group and the distribution of the attribute in the entire data set is no more than a threshold t. Here, we define a privacy measure in terms of information theory, similar to t-closeness. Then, we use the tools of that theory to show that our privacy measure can be achieved by the postrandomization method (PRAM) for masking in the discrete case, and by a form of noise addition in the general case.",
"title": ""
},
{
"docid": "3d0a6b490a80e79690157a9ed690fdcc",
"text": "In this paper we introduce a novel Depth-Aware Video Saliency approach to predict human focus of attention when viewing videos that contain a depth map (RGBD) on a 2D screen. Saliency estimation in this scenario is highly important since in the near future 3D video content will be easily acquired yet hard to display. Despite considerable progress in 3D display technologies, most are still expensive and require special glasses for viewing, so RGBD content is primarily viewed on 2D screens, removing the depth channel from the final viewing experience. We train a generative convolutional neural network that predicts the 2D viewing saliency map for a given frame using the RGBD pixel values and previous fixation estimates in the video. To evaluate the performance of our approach, we present a new comprehensive database of 2D viewing eye-fixation ground-truth for RGBD videos. Our experiments indicate that it is beneficial to integrate depth into video saliency estimates for content that is viewed on a 2D display. We demonstrate that our approach outperforms state-of-the-art methods for video saliency, achieving 15% relative improvement.",
"title": ""
},
{
"docid": "2abfa229fa2d315d9c1550549a9deb42",
"text": "Twenty-five adolescents reported their daily activities and the quality of their experiences for a total of 753 times during a normal week, in response to random beeps transmitted by an electronic paging device. In this sample adolescents were found to spend most of their time either in conversation with peers or in watching television. Negative affects were prevalent in most activities involving socialization into adult roles. Television viewing appears to be an affectless state associated with deviant behavior and antisocial personality traits. The research suggests the importance of a systemic approach which studies persons' activities and experiences in an ecological context. The experiential sampling method described in this paper provides a tool for collecting such systemic data.",
"title": ""
},
{
"docid": "751fffb80b29e2463117461fde03e54c",
"text": "Many applications using wireless sensor networks (WSNs) aim at providing friendly and intelligent services based on the recognition of human's activities. Although the research result on wearable computing has been fruitful, our experience indicates that a user-free sensor deployment is more natural and acceptable to users. In our system, activities were recognized through matching the movement patterns of the objects, to which tri-axial accelerometers had been attached. Several representative features, including accelerations and their fusion, were calculated and three classifiers were tested on these features. Compared with decision tree (DT) C4.5 and multiple-layer perception (MLP), support vector machine (SVM) performs relatively well across different tests. Additionally, feature selection are discussed for better system performance for WSNs",
"title": ""
},
{
"docid": "a13ff1e2192c9a7e4bcfdf5e1ac39538",
"text": "Before graduating from X as Waymo, Google's self-driving car project had been using custom lidars for several years. In their latest revision, the lidars are designed to meet the challenging requirements we discovered in autonomously driving 2 million highly-telemetered miles on public roads. Our goal is to approach price points required for advanced driver assistance systems (ADAS) while meeting the performance needed for safe self-driving. This talk will review some history of the project and describe a few use-cases for lidars on Waymo cars. Out of that will emerge key differences between lidars for self-driving and traditional applications (e.g. mapping) which may provide opportunities for semiconductor lasers.",
"title": ""
},
{
"docid": "90dd589be3f8f78877367486e0f66e11",
"text": "Patch-level descriptors underlie several important computer vision tasks, such as stereo-matching or content-based image retrieval. We introduce a deep convolutional architecture that yields patch-level descriptors, as an alternative to the popular SIFT descriptor for image retrieval. The proposed family of descriptors, called Patch-CKN, adapt the recently introduced Convolutional Kernel Network (CKN), an unsupervised framework to learn convolutional architectures. We present a comparison framework to benchmark current deep convolutional approaches along with Patch-CKN for both patch and image retrieval, including our novel \"RomePatches\" dataset. Patch-CKN descriptors yield competitive results compared to supervised CNN alternatives on patch and image retrieval.",
"title": ""
},
{
"docid": "a325d0761491f814d3f5743e44868c74",
"text": "This paper reviews the literature on child neglect with respect to child outcomes, prevention and intervention, and implications for policy. First, the prevalence of the problem is discussed and then potential negative outcomes for neglected children, including behavior problems, low self-esteem, poor school performance, and maladjustment/psychopathology, are discussed. Risk factors and current child neglect interventions are then reviewed. Popular family support programs, such as family preservation, have mixed success rates for preventing child neglect. The successes and shortcomings of other programs are also examined with a focus on implications for future research and policy. Overall, the research supports a multidisciplinary approach to assessment, intervention, and research on child neglect. Furthermore, the need for a combined effort among parents, community members, professionals, and policymakers to increase awareness and prevention endeavors is discussed. Targeted attempts to educate all involved parties should focus on early intervention during specific encounters with atrisk families via medical settings, school settings, and parent education programs.",
"title": ""
},
{
"docid": "d02d4382a4dda41af4e1b8393d24e377",
"text": "Many implantable systems have been designed for long-term, pulsatile delivery of insulin, but the lifetime of these devices is limited by the need for battery replacement and consequent replacement surgery. Here we propose a batteryless, fully implantable insulin pump that can be actuated by a magnetic field. The pump is prepared by simple-assembly of magnets and constituent units and comprises a drug reservoir and actuator equipped with a plunger and barrel, each assembled with a magnet. The plunger moves to noninvasively infuse insulin only when a magnetic field is applied on the exterior surface of the body. Here we show that the dose is easily controlled by varying the number of magnet applications. Also, pump implantation in diabetic rats results in profiles of insulin concentration and decreased blood glucose levels similar to those observed in rats treated with conventional subcutaneous insulin injections.",
"title": ""
},
{
"docid": "2bd5ca4cbb8ef7eea1f7b2762918d18b",
"text": "Deep convolutional neural networks continue to advance the state-of-the-art in many domains as they grow bigger and more complex. It has been observed that many of the parameters of a large network are redundant, allowing for the possibility of learning a smaller network that mimics the outputs of the large network through a process called Knowledge Distillation. We show, however, that standard Knowledge Distillation is not effective for learning small models for the task of pedestrian detection. To improve this process, we introduce a higher-dimensional hint layer to increase information flow. We also estimate the uncertainty in the outputs of the large network and propose a loss function to incorporate this uncertainty. Finally, we attempt to boost the complexity of the small network without increasing its size by using as input hand-designed features that have been demonstrated to be effective for pedestrian detection. For only a 2.8% increase in miss rate, we have succeeded in training a student network that is 8 times faster and 21 times smaller than the teacher network.",
"title": ""
},
{
"docid": "1d386e2c1be2ad7bfa80d0fe0b80bd9f",
"text": "The SMAS was described more than 25 years ago, yet its full potential in face-lift surgery has become appreciated only more recently. A reappraisal of the various aspects of SMAS surgery is now appropriate. These include aspects of its release from the deep fascia, the several considerations underlying the vectors of flap redistribution, and the rationale underlying the methods of flap fixation. These are unique, compared with the traditional considerations in subcutaneous face lifts and en bloc subperiosteal lifts. (Plast. Reconstr. Surg. 107: 1545, 2001.)",
"title": ""
},
{
"docid": "403310053251e81cdad10addedb64c87",
"text": "Many types of data are best analyzed by fitting a curve using nonlinear regression, and computer programs that perform these calculations are readily available. Like every scientific technique, however, a nonlinear regression program can produce misleading results when used inappropriately. This article reviews the use of nonlinear regression in a practical and nonmathematical manner to answer the following questions: Why is nonlinear regression superior to linear regression of transformed data? How does nonlinear regression differ from polynomial regression and cubic spline? How do nonlinear regression programs work? What choices must an investigator make before performing nonlinear regression? What do the final results mean? How can two sets of data or two fits to one set of data be compared? What problems can cause the results to be wrong? This review is designed to demystify nonlinear regression so that both its power and its limitations will be appreciated.",
"title": ""
},
{
"docid": "2da2ae2bd558f233ea50ee06bbad26bd",
"text": "Distributed denial-of-service (DDoS) attacks are a major security threat, the prevention of which is very hard, like when it comes to highly distributed daemon-based attacks. The early discovery of these attacks, although difficult, is necessary to protect network resources as well as the end users. In this paper, we address the problem of DDoS attacks and present the foundation and algorithms of our IDS. The base of our system is composed of intrusion detection systems (IDSs) which use the KDD Cup dataset to detect intrusion. The IDS scans all the files being transmitted from the routers for malicious content and known virus signatures. The evaluation of our system, using the KDD testing dataset, shows a better ratio of detecting attacks and a low false positives ratio. It also supports easy modifiability, scalability and usability.",
"title": ""
},
{
"docid": "15fa73633d6ec7539afc91bb1f45098f",
"text": "Continued advances in mobile networks and positioning technologies have created a strong market push for location-based applications. Examples include location-aware emergency response, location-based advertisement, and location-based entertainment. An important challenge in the wide deployment of location-based services (LBSs) is the privacy-aware management of location information, providing safeguards for location privacy of mobile clients against vulnerabilities for abuse. This paper describes a scalable architecture for protecting the location privacy from various privacy threats resulting from uncontrolled usage of LBSs. This architecture includes the development of a personalized location anonymization model and a suite of location perturbation algorithms. A unique characteristic of our location privacy architecture is the use of a flexible privacy personalization framework to support location k-anonymity for a wide range of mobile clients with context-sensitive privacy requirements. This framework enables each mobile client to specify the minimum level of anonymity that it desires and the maximum temporal and spatial tolerances that it is willing to accept when requesting k-anonymity-preserving LBSs. We devise an efficient message perturbation engine to implement the proposed location privacy framework. The prototype that we develop is designed to be run by the anonymity server on a trusted platform and performs location anonymization on LBS request messages of mobile clients such as identity removal and spatio-temporal cloaking of the location information. We study the effectiveness of our location cloaking algorithms under various conditions by using realistic location data that is synthetically generated from real road maps and traffic volume data. Our experiments show that the personalized location k-anonymity model, together with our location perturbation engine, can achieve high resilience to location privacy threats without introducing any significant performance penalty.",
"title": ""
},
{
"docid": "2739acca1a61ca8b2738b1312ab857ab",
"text": "The Telecare Medical Information System (TMIS) provides a set of different medical services to the patient and medical practitioner. The patients and medical practitioners can easily connect to the services remotely from their own premises. There are several studies carried out to enhance and authenticate smartcard-based remote user authentication protocols for TMIS system. In this article, we propose a set of enhanced and authentic Three Factor (3FA) remote user authentication protocols utilizing a smartphone capability over a dynamic Cloud Computing (CC) environment. A user can access the TMIS services presented in the form of CC services using his smart device e.g. smartphone. Our framework transforms a smartphone to act as a unique and only identity required to access the TMIS system remotely. Methods, Protocols and Authentication techniques are proposed followed by security analysis and a performance analysis with the two recent authentication protocols proposed for the healthcare TMIS system.",
"title": ""
},
{
"docid": "077b5d7c04f321083278f0aa016f3e34",
"text": "Obtaining the right set of data for evaluating the fulfillment of different quality standards in the extract-transform-load (ETL) process design is rather challenging. First, the real data might be out of reach due to different privacy constraints, while providing a synthetic set of data is known as a labor-intensive task that needs to take various combinations of process parameters into account. Additionally, having a single dataset usually does not represent the evolution of data throughout the complete process lifespan, hence missing the plethora of possible test cases. To facilitate such demanding task, in this paper we propose an automatic data generator (i.e., Bijoux). Starting from a given ETL process model, Bijoux extracts the semantics of data transformations, analyzes the constraints they imply over data, and automatically generates testing datasets. At the same time, it considers different dataset and transformation characteristics (e.g., size, distribution, selectivity, etc.) in order to cover a variety of test scenarios. We report our experimental findings showing the effectiveness and scalability of our approach.",
"title": ""
},
{
"docid": "f2c058a53fa4aea6febc12e2ce87750b",
"text": "This research aims to develop a multiple-choice Web-based quiz-game-like formative assessment system, named GAM-WATA. The unique design of ‘Ask-Hint Strategy’ turns the Web-based formative assessment into an online quiz game. ‘Ask-Hint Strategy’ is composed of ‘Prune Strategy’ and ‘Call-in Strategy’. ‘Prune Strategy’ removes one incorrect option and turns the original 4-option item into a 3-option one. ‘Call-in Strategy’ provides the rate at which other test takers choose each option when answering a question. This research also compares the effectiveness of three different types of formative assessment in an e-Learning environment: paper-and-pencil test (PPT), normal Web-based test (NWBT) and GAM-WATA. In total, 165 fifth grade elementary students (from six classes) in central Taiwan participated in this research. The six classes of students were then divided into three groups and each group was randomly assigned one type of formative assessment. Overall results indicate that different types of formative assessment have significant impacts on e-Learning effectiveness and that the e-Learning effectiveness of the students in the GAM-WATA group appears to be better. Students in the GAM-WATA group more actively participate in Web-based formative assessment to do self-assessment than students in the N-WBT group. The effectiveness of formative assessment will not be significantly improved only by replacing the paper-and-pencil test with Web-based test. The strategies included in GAMWATA are recommended to be taken into consideration when researchers design Web-based formative assessment systems in the future. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "22026affeb5cb0489e67c71c9b9d6d6c",
"text": "In this letter, a two-pole tunable combline filter that allows for operational agility of the center frequency, passband bandwidth, and locations of upper and lower transmission zeros is proposed. To increase the tunable range of constant absolute-bandwidth for all frequency tuning states, a novel T-type bandwidth-control network was devised between resonators to flexibly tune the coupling coefficient without limitations imposed by the available varactor capacitance. By use of tunable source-load coupling, two tunable transmission zeros were produced on both sides of the passband to significantly improve filter selectivity and to dynamically afford interference suppression. A second-order 1.7-2.7 GHz filter with 1 dB constant bandwidth tuning from 50 to 110 MHz and transmission zero locations tuning from 300 to 700 MHz with respect to the passband was developed and fabricated. Good agreement was obtained between the simulated and experimental results.",
"title": ""
}
] |
scidocsrr
|
85d2310570521a9afa72d42946a00c1c
|
Cost-value requirements prioritization in Requirements Engineering Student
|
[
{
"docid": "42045752f292585bf20ad960f2b30469",
"text": "eveloping software systems that meet stakeholders' needs and expectations is the ultimate goal of any software provider seeking a competitive edge. To achieve this, you must effectively and accurately manage your stakeholders' system requirements: the features, functions , and attributes they need in their software system. 1 Once you agree on these requirements, you can use them as a focal point for the development process and produce a software system that meets the expectations of both customers and users. However, in real-world software development, there are usually more requirements than you can implement given stakeholders' time and resource constraints. Thus, project managers face a dilemma: How do you select a subset of the customers' requirements and still produce a system that meets their needs? Deciding which requirements really matter is a difficult task and one increasingly demanded because of time and budget constraints. The authors developed a cost–value approach for prioritizing requirements and applied it to two commercial projects.",
"title": ""
},
{
"docid": "1406e39d95505da3d7ab2b5c74c2e068",
"text": "Context: During requirements engineering, prioritization is performed to grade or rank requirements in their order of importance and subsequent implementation releases. It is a major step taken in making crucial decisions so as to increase the economic value of a system. Objective: The purpose of this study is to identify and analyze existing prioritization techniques in the context of the formulated research questions. Method: Search terms with relevant keywords were used to identify primary studies that relate requirements prioritization classified under journal articles, conference papers, workshops, symposiums, book chapters and IEEE bulletins. Results: 73 Primary studies were selected from the search processes. Out of these studies; 13 were journal articles, 35 were conference papers and 8 were workshop papers. Furthermore, contributions from symposiums as well as IEEE bulletins were 2 each while the total number of book chapters amounted to 13. Conclusion: Prioritization has been significantly discussed in the requirements engineering domain. However , it was generally discovered that, existing prioritization techniques suffer from a number of limitations which includes: lack of scalability, methods of dealing with rank updates during requirements evolution, coordination among stakeholders and requirements dependency issues. Also, the applicability of existing techniques in complex and real setting has not been reported yet.",
"title": ""
}
] |
[
{
"docid": "ca7afb87dae38ee0cf079f91dbd91d43",
"text": "Diet is associated with the development of CHD. The incidence of CHD is lower in southern European countries than in northern European countries and it has been proposed that this difference may be a result of diet. The traditional Mediterranean diet emphasises a high intake of fruits, vegetables, bread, other forms of cereals, potatoes, beans, nuts and seeds. It includes olive oil as a major fat source and dairy products, fish and poultry are consumed in low to moderate amounts. Many observational studies have shown that the Mediterranean diet is associated with reduced risk of CHD, and this result has been confirmed by meta-analysis, while a single randomised controlled trial, the Lyon Diet Heart study, has shown a reduction in CHD risk in subjects following the Mediterranean diet in the secondary prevention setting. However, it is uncertain whether the benefits of the Mediterranean diet are transferable to other non-Mediterranean populations and whether the effects of the Mediterranean diet will still be feasible in light of the changes in pharmacological therapy seen in patients with CHD since the Lyon Diet Heart study was conducted. Further randomised controlled trials are required and if the risk-reducing effect is confirmed then the best methods to effectively deliver this public health message worldwide need to be considered.",
"title": ""
},
{
"docid": "90bb9a4740e9fa028932b68a34717b43",
"text": "Recently, the increase of interconnectivity has led to a rising amount of IoT enabled devices in botnets. Such botnets are currently used for large scale DDoS attacks. To keep track with these malicious activities, Honeypots have proven to be a vital tool. We developed and set up a distributed and highly-scalable WAN Honeypot with an attached backend infrastructure for sophisticated processing of the gathered data. For the processed data to be understandable we designed a graphical frontend that displays all relevant information that has been obtained from the data. We group attacks originating in a short period of time in one source as sessions. This enriches the data and enables a more in-depth analysis. We produced common statistics like usernames, passwords, username/password combinations, password lengths, originating country and more. From the information gathered, we were able to identify common dictionaries used for brute-force login attacks and other more sophisticated statistics like login attempts per session and attack efficiency.",
"title": ""
},
{
"docid": "b470aef5ad73ead4f0fff8012fab89cb",
"text": "In this paper, an approach based on a genetic algorithm is presented in order to optimize the connection topology of an offshore wind farm network. The main objective is to introduce a technique of coding a network topology to a binary string. The first advantage is that the optimal connections of both middle- and high-voltage alternating-current grids are considered, i.e., the radial clustering of wind turbines, the number and locations of the offshore electrical substations, and the number of high-voltage cables. The second improvement consists of removing infeasible network configurations as designs with crossing cables and thereby reduces the search space of solutions.",
"title": ""
},
{
"docid": "2f8eb33eed4aabce1d31f8b7dfe8e7de",
"text": "A pre-trained convolutional deep neural network (CNN) is a feed-forward computation perspective, which is widely used for the embedded systems, requires high power-and-area efficiency. This paper realizes a binarized CNN which treats only binary 2-values (+1/-1) for the inputs and the weights. In this case, the multiplier is replaced into an XNOR circuit instead of a dedicated DSP block. For hardware implementation, using binarized inputs and weights is more suitable. However, the binarized CNN requires the batch normalization techniques to retain the classification accuracy. In that case, the additional multiplication and addition require extra hardware, also, the memory access for its parameters reduces system performance. In this paper, we propose the batch normalization free CNN which is mathematically equivalent to the CNN using batch normalization. The proposed CNN treats the binarized inputs and weights with the integer bias. We implemented the VGG-16 benchmark CNN on the NetFPGA-SUME FPGA board, which has the Xilinx Inc. Virtex7 FPGA and three off-chip QDR II+ Synchronous SRAMs. Compared with the conventional FPGA realizations, although the classification error rate is 6.5% decayed, the performance is 2.82 times faster, the power efficiency is 1.76 times lower, and the area efficiency is 11.03 times smaller. Thus, our method is suitable for the embedded computer system.",
"title": ""
},
{
"docid": "dd1e7bb3ba33c5ea711c0d066db53fa9",
"text": "This paper presents the development and test of a flexible control strategy for an 11-kW wind turbine with a back-to-back power converter capable of working in both stand-alone and grid-connection mode. The stand-alone control is featured with a complex output voltage controller capable of handling nonlinear load and excess or deficit of generated power. Grid-connection mode with current control is also enabled for the case of isolated local grid involving other dispersed power generators such as other wind turbines or diesel generators. A novel automatic mode switch method based on a phase-locked loop controller is developed in order to detect the grid failure or recovery and switch the operation mode accordingly. A flexible digital signal processor (DSP) system that allows user-friendly code development and online tuning is used to implement and test the different control strategies. The back-to-back power conversion configuration is chosen where the generator converter uses a built-in standard flux vector control to control the speed of the turbine shaft while the grid-side converter uses a standard pulse-width modulation active rectifier control strategy implemented in a DSP controller. The design of the longitudinal conversion loss filter and of the involved PI-controllers are described in detail. Test results show the proposed methods works properly.",
"title": ""
},
{
"docid": "be7f7d9c6a28b7d15ec381570752de95",
"text": "Neural network are most popular in the research community due to its generalization abilities. Additionally, it has been successfully implemented in biometrics, features selection, object tracking, document image preprocessing and classification. This paper specifically, clusters, summarize, interpret and evaluate neural networks in document Image preprocessing. The importance of the learning algorithms in neural networks training and testing for preprocessing is also highlighted. Finally, a critical analysis on the reviewed approaches and the future research guidelines in the field are suggested.",
"title": ""
},
{
"docid": "f5a5f29c2e45a4065832a2bec23a2219",
"text": "Teleoperation remains an important aspect for robotic systems especially when deployed in unstructured environments. While a range of research strives for robots that are completely autonomous, many robotic applications still require some level of human-in-the-loop control. In any situation where teleoperation is required an effective User Interface (UI) remains a key component within the systems design. Current advancements in Virtual Reality (VR) software and hardware such as the Oculus Rift, HTC Vive and Google Cardboard combined with greater transparency to robotic systems afforded by middleware such as the Robot Operating System (ROS) provides an opportunity to rapidly improve traditional teleoperation interfaces. This paper uses a System of System (SoS) approach to present the concept of a Virtual Reality Dynamic User Interface (VRDUI) for the teleoperation of heterogeneous robots. Different geometric virtual workspaces are discussed and a cylindrical workspace aligned with interactive displays is presented as a virtual control room. A presentation mode within the proposed VRDUI is also detailed, this shows how point cloud information obtained from the Microsoft Kinect can be incorporated within the proposed virtual workspace. This point cloud data is successfully processed into an OctoMap utilizing the octree data structure to create a voxelized representation of the 3D scanned environment. The resulting OctoMap is then displayed to an operator as a 3D point cloud using the Oculus Rift Head Mounted Display (HMD).",
"title": ""
},
{
"docid": "c998270736000da12e509103af2c70ec",
"text": "Flash memory grew from a simple concept in the early 1980s to a technology that generated close to $23 billion in worldwide revenue in 2007, and this represents one of the many success stories in the semiconductor industry. This success was made possible by the continuous innovation of the industry along many different fronts. In this paper, the history, the basic science, and the successes of flash memories are briefly presented. Flash memories have followed the Moore’s Law scaling trend for which finer line widths, achieved by improved lithographic resolution, enable more memory bits to be produced for the same silicon area, reducing cost per bit. When looking toward the future, significant challenges exist to the continued scaling of flash memories. In this paper, I discuss possible areas that need development in order to overcome some of the size-scaling challenges. Innovations are expected to continue in the industry, and flash memories will continue to follow the historical trend in cost reduction of semiconductor memories through the rest of this decade.",
"title": ""
},
{
"docid": "d1ba5b5b2f5929b6c2c909e396301caa",
"text": "Failure of an aerospace component can arise through the long term exposure to fatigue damaging events such as large numbers of low amplitude random events and/or relatively fewer high amplitude events. Mission profiling and test synthesis is a process for deriving a simple laboratory test that has at least the same damage potential as the real environment but in a fraction of the real time. In this paper we introduce the technical concepts and present a case study showing how new technology has dramatically reduced the time it takes to prepare and reduce the original test data.",
"title": ""
},
{
"docid": "d9e4a4303a7949b51510cf95098e4248",
"text": "Recent increased regulatory scrutiny concerning subvisible particulates (SbVPs) in parenteral formulations of biologics has led to the publication of numerous articles about the sources, characteristics, implications, and approaches to monitoring and detecting SbVPs. Despite varying opinions on the level of associated risks and method of regulation, nearly all industry scientists and regulators agree on the need for monitoring and reporting visible and subvisible particles. As prefillable drug delivery systems have become a prominent packaging option, silicone oil, a common primary packaging lubricant, may play a role in the appearance of particles. The goal of this article is to complement the current SbVP knowledge base with new insights into the evolution of silicone-oil-related particulates and their interactions with components in prefillable systems. We propose a \"toolbox\" for improved silicone-oil-related particulate detection and enumeration, and discuss the benefits and limitations of approaches for lowering and controlling silicone oil release in parenterals. Finally, we present surface cross-linking of silicone as the recommended solution for achieving significant SbVP reduction without negatively affecting functional performance.",
"title": ""
},
{
"docid": "76c75c11ade707808a2d877674300685",
"text": "Modern aircraft increasingly rely on electric power, resulting in high safety criticality and complexity in their electric powergenerationanddistribution systems.Motivatedby the resulting rapid increase in the costs andduration of the design cycles for such systems, the use of formal specification and automated correct-by-construction control protocols synthesis for primarydistribution in vehicular electric power networks is investigated.Adesignworkflow is discussed that aims to transition from the traditional “design and verify” approach to a “specify and synthesize” approach. An overview is given of a subset of the recent advances in the synthesis of reactive control protocols. These techniques are applied in the context of reconfiguration of the networks in reaction to the changes in their operating environment. These automatically synthesized control protocols are also validated on high-fidelity simulationmodels and on an academic-scale hardware testbed.",
"title": ""
},
{
"docid": "e6ba843b871f6783fb486ab598fd1027",
"text": "To prevent the further loss of species from landscapes used for productive enterprises such as agriculture, forestry, and grazing, it is necessary to determine the composition, quantity, and configuration of landscape elements required to meet the needs of the species present. I present a multi-species approach for defining the attributes required to meet the needs of the biota in a landscape and the management regimes that should be applied. The approach builds on the concept of umbrella species, whose requirements are believed to encapsulate the needs of other species. It identifies a suite of “focal species,” each of which is used to define different spatial and compositional attributes that must be present in a landscape and their appropriate management regimes. All species considered at risk are grouped according to the processes that threaten their persistence. These threats may include habitat loss, habitat fragmentation, weed invasion, and fire. Within each group, the species most sensitive to the threat is used to define the minimum acceptable level at which that threat can occur. For example, the area requirements of the species most limited by the availability of particular habitats will define the minimum suitable area of those habitat types; the requirements of the most dispersal-limited species will define the attributes of connecting vegetation; species reliant on critical resources will define essential compositional attributes; and species whose populations are limited by processes such as fire, predation, or weed invasion will define the levels at which these processes must be managed. For each relevant landscape parameter, the species with the most demanding requirements for that parameter is used to define its minimum acceptable value. Because the most demanding species are selected, a landscape designed and managed to meet their needs will encompass the requirements of all other species. Especies Focales: Una Sombrilla Multiespecífica para Conservar la Naturaleza Resumen: Para evitar mayores pérdidas de especies en paisajes utilizados para actividades productivas como la agricultura, la ganadería y el pastoreo, es necesario determinar la composición, cantidad y configuración de elementos del paisaje que se requieren para satisfacer las necesidades de las especies presentes. Propongo un enfoque multiespecífico para definir los atributos requeridos para satisfacer las necesidades de la biota en un paisaje y los regímenes de manejo que deben ser aplicados. El enfoque se basa en el concepto de las especies sombrilla, de las que se piensa que sus requerimientos engloban a las necesidades de otras especies. El concepto identifica una serie de “especies focales”, cada una de las cuales se utiliza para definir distintos atributos espaciales y de composición que deben estar presentes en un paisaje, así como sus requerimientos adecuados de manejo. Todas las especies consideradas en riesgo se agrupan de acuerdo con los procesos que amenazan su persistencia. Estas amenazas pueden incluir pérdida de hábitat, fragmentación de hábitat, invasión de hierbas y fuego. Dentro de cada grupo, se utiliza a la especie más sensible a la amenaza para definir el nivel mínimo aceptable en que la amenaza ocurre. 
Por ejemplo, los requerimientos espaciales de especies limitadas por la disponibilidad de hábitats particulares definirán el área mínima adecuada de esos tipos de hábitat; los requerimientos de la especie más limitada en su dispersión definirán los atributos de la vegetación conectante, las especies dependientes de recursos críticos definirán los atributos de composición esenciales; y especies cuyas poblaciones están limitadas por procesos como el fuego, la depredación o invasión de hierbas definirán los niveles en que deberán manejarse estos procesos. Para cada parámetro relevante del Paper submitted September 19, 1996; revised manuscript accepted February 24, 1997. 850 Focal Species for Nature Conservation Lambeck Conservation Biology Volume 11, No. 4, August 1997 Introduction Throughout the world, changing patterns of land use have resulted in the loss of natural habitat and the increasing fragmentation of that which remains. Not only have these changes altered habitat composition and configuration, but they have modified the rates and intensities of many ecological processes essential for ecosystems to retain their integrity. As a consequence, many landscapes that are being used for productive purposes such as agriculture, grazing, and forestry, are suffering species declines and losses (Saunders 1989; Saunders et al. 1991; Hobbs et al. 1993). Attempts to prevent further loss of biological diversity from such landscapes requires a capacity to define the spatial, compositional, and functional attributes that must be present if the needs of the plants and animals are to be met. There has been considerable debate in the ecological literature about whether the requirements of single species should serve as the basis for defining conservation requirements or whether the analysis of landscape pattern and process should underpin conservation planning (Franklin 1993; Hansen et al. 1993; Orians 1993; Franklin 1994; Hobbs 1994; Tracy & Brussard 1994). Speciesbased approaches have taken the form of either singlespecies studies, often targeted at rare or vulnerable species, or the study of groups of species considered to represent components of biodiversity (Soulé & Wilcox 1980; Simberloff 1988; Wilson & Peter 1988; Pimm & Gilpin 1989; Brussard 1991; Kohm 1991). Species-based approaches have been criticized on the grounds that they do not provide whole-landscape solutions to conservation problems, that they cannot be conducted at a rate sufficient to deal with the urgency of the threats, and that they consume a disproportionate amount of conservation funding (Franklin 1993; Hobbs 1994; Walker 1995). Consequently, critics of single-species studies are calling for approaches that consider higher levels of organization such as ecosystems and landscapes (Noss 1983; Noss & Harris 1986; Noss 1987; Gosselink et al . 1990; Dyer & Holland 1991; Salwasser 1991; Franklin 1993; Hobbs 1994). These alternative approaches place a greater emphasis on the relationship between landscape pattern and processes and community measures such as species diversity or species richness (Janzen 1983; Newmark 1985; Saunders et al . 1991; Anglestam 1992; Hobbs 1993, 1994). Although approaches that consider pattern and processes at a landscape scale help to identify the elements that need to be present in a landscape, they are unable to define the appropriate quantity and distribution of those elements. Such approaches have tended, by and large, to be descriptive. 
They can identify relationships between landscape patterns and measures such as species richness, but they are unable to define the composition, configuration, and quantity of landscape features required for a landscape to retain its biota. Ultimately, questions such as what type of pattern is required in a landscape, or at what rate a given process should proceed, cannot be answered without reference to the needs of the species in that landscape. Therefore, we cannot ignore the requirements of species if we wish to define the characteristics of a landscape that will ensure their retention. The challenge then is to find an efficient means of meeting the needs of all species without studying each one individually. In order to overcome this dilemma, proponents of single-species studies have developed the concept of umbrella species (Murphy & Wilcox 1986; Noss 1990; Cutler 1991; Ryti 1992; Hanley 1993; Launer & Murphy 1994; Williams & Gaston 1994). These are species whose requirements for persistence are believed to encapsulate those of an array of additional species. The attractiveness of umbrella species to land managers is obvious. If it is indeed possible to manage a whole community or ecosystem by focusing on the needs of one or a few species, then the seemingly intractable problem of considering the needs of all species is resolved. Species as diverse as Spotted Owls (Franklin 1994), desert tortoises (Tracy & Brussard 1994), black-tailed deer (Hanley 1993) and butterflies (Launer & Murphy 1994) have been proposed to serve an umbrella function for the ecosystems in which they occur. But given that the majority of species within an ecosystem have widely differing habitat requirements, it seems unlikely that any single species could serve as an umbrella for all others. As Franklin (1994) points out, landscapes designed and managed around the needs of single species may fail to capture other critical elements of the ecosystems in which they occur. It would therefore appear that if the concept of umbrella species is to be useful, it will be necessary to search for multi-species approaches that identify a set of species whose spatial, compositional, and functional requirements encompass those of all other species in the region. I present a method for selecting, from the total pool of species in a landscape, a subset of “focal species” whose paisaje, se utiliza a la especies con los mayores requerimientos para ese parámetro para definir su valor aceptable mínimo. Debido a que se seleccionan las especies más demandantes, un paisaje diseñado y manejado para satisfacer sus necesidades abarcará los requerimientos de todas las demás especies.",
"title": ""
},
{
"docid": "7f30588d74d08c2ca15f69b7dd814b52",
"text": "Access to a heterogeneous distributed collection of databases can be simplified by providing users with a logically integrated interface or global view. There are two aspects to database integration. Firstly, the local schemas may model objects and relationships differently and, secondly, the databases may contain mutually inconsistent data. This paper identifies several kinds of structural and data inconsistencies that might exist. It describes a versatile view definition facility for the functional data model and illustrates the use of this facility for resolving inconsistencies. In particular, the concept of generalization is extended to this model, and its importance to database integration is emphasized. The query modification algorithm for the relational model is extended to the semantically richer functional data model with generalization.",
"title": ""
},
{
"docid": "091cd37683d7e1b8ceef19b4042f4ac3",
"text": "Closed or nearly closed regions are an important form of perceptual structure arising both in natural imagery and in many forms of human-created imagery including sketches, line art, graphics, and formal drawings. This paper presents an effective algorithm especially suited for finding perceptually salient, compact closed region structure in hand-drawn sketches and line art. We start with a graph of curvilinear fragments whose proximal endpoints form junctions. The key problem is to manage the search of possible path continuations through junctions in an effort to find paths satisfying global criteria for closure and figural salience. We identify constraints particular to this domain for ranking path continuations through junctions, based on observations of the ways that junctions arise in line drawings. In particular, we delineate the roles of the principle of good continuation versus maximally turning paths. Best-first bidirectional search checks for the cleanest, most obvious paths first, then reverts to more exhaustive search to find paths cluttered by blind alleys. Results are demonstrated on line drawings from several sources including line art, engineering drawings, sketches on whiteboards, as well as contours from photographic imagery.",
"title": ""
},
{
"docid": "7afe4444a805f1994a40f98e01908509",
"text": "It is well known that CMOS scaling trends are now accompanied by less desirable byproducts such as increased energy dissipation. To combat the aforementioned challenges, solutions are sought at both the device and architectural levels. With this context, this work focuses on embedding a low voltage device, a Tunneling Field Effect Transistor (TFET) within a Cellular Neural Network (CNN) -- a low power analog computing architecture. Our study shows that TFET-based CNN systems, aside from being fully functional, also provide significant power savings when compared to the conventional resistor-based CNN. Our initial studies suggest that power savings are possible by carefully engineering lower voltage, lower current TFET devices without sacrificing performance. Moreover, TFET-based CNN reduces implementation footprints by eliminating the hardware required to realize output transfer functions. Application dynamics are verified through simulations. We conclude the paper with a discussion of desired device characteristics for CNN architectures with enhanced functionality.",
"title": ""
},
{
"docid": "603c82380d4896b324f4511c301972e5",
"text": "Pseudolymphomatous folliculitis (PLF), which clinically mimicks cutaneous lymphoma, is a rare manifestation of cutaneous pseudolymphoma and cutaneous lymphoid hyperplasia. Here, we report on a 45-year-old Japanese woman with PLF. Dermoscopy findings revealed prominent arborizing vessels with small perifollicular and follicular yellowish spots and follicular red dots. A biopsy specimen also revealed dense lymphocytes, especially CD1a+ cells, infiltrated around the hair follicles. Without any additional treatment, the patient's nodule rapidly decreased. The presented case suggests that typical dermoscopy findings could be a possible supportive tool for the diagnosis of PLF.",
"title": ""
},
{
"docid": "2f9f21740603b7a84abd57d7c7c02c11",
"text": "Emerging Non-Volatile Memory (NVM) technologies are explored as potential alternatives to traditional SRAM/DRAM-based memory architecture in future microprocessor design. One of the major disadvantages for NVM is the latency and energy overhead associated with write operations. Mitigation techniques to minimize the write overhead for NVM-based main memory architecture have been studied extensively. However, most prior work focuses on optimization techniques for NVM-based main memory itself, with little attention paid to cache management policies for the Last-Level Cache (LLC).\n In this article, we propose a Writeback-Aware Dynamic CachE (WADE) management technique to help mitigate the write overhead in NVM-based memory.<sup;>1</sup;> The proposal is based on the observation that, when dirty cache blocks are evicted from the LLC and written into NVM-based memory (with PCM as an example), the long latency and high energy associated with write operations to NVM-based memory can cause system performance/power degradation. Thus, reducing the number of writeback requests from the LLC is critical.\n The proposed WADE cache management technique tries to keep highly reused dirty cache blocks in the LLC. The technique predicts blocks that are frequently written back in the LLC. The LLC sets are dynamically partitioned into a frequent writeback list and a nonfrequent writeback list. It keeps a best size of each list in the LLC. Our evaluation shows that the technique can reduce the number of writeback requests by 16.5% for memory-intensive single-threaded benchmarks and 10.8% for multicore workloads. It yields a geometric mean speedup of 5.1% for single-thread applications and 7.6% for multicore workloads. Due to the reduced number of writeback requests to main memory, the technique reduces the energy consumption by 8.1% for single-thread applications and 7.6% for multicore workloads.",
"title": ""
},
{
"docid": "6927647b1e1f6bf9bcf65db50e9f8d6e",
"text": "Six of the ten leading causes of death in the United States can be directly linked to diet. Measuring accurate dietary intake, the process of determining what someone eats is considered to be an open research problem in the nutrition and health fields. We are developing image-based tools in order to automatically obtain accurate estimates of what foods a user consumes. We have developed a novel food record application using the embedded camera in a mobile device. This paper describes the current status of food image analysis and overviews problems that still need to be addressed.",
"title": ""
},
{
"docid": "60ce48da045d521c198837f9172c4ef1",
"text": "Scalar functions defined on manifold triangle meshes is a starting point for many geometry processing algorithms such as mesh parametrization, skeletonization, and segmentation. In this paper, we propose the Auto Diffusion Function (ADF) which is a linear combination of the eigenfunctions of the Laplace-Beltrami operator in a way that has a simple physical interpretation. The ADF of a given 3D object has a number of further desirable properties: Its extrema are generally at the tips of features of a given object, its gradients and level sets follow or encircle features, respectively, it is controlled by a single parameter which can be interpreted as feature scale, and, finally, the ADF is invariant to rigid and isometric deformations. We describe the ADF and its properties in detail and compare it to other choices of scalar functions on manifolds. As an example of an application, we present a pose invariant, hierarchical skeletonization and segmentation algorithm which makes direct use of the ADF.",
"title": ""
}
] |
scidocsrr
|
d48dcd6c79f5957d5f52aba697867dc6
|
Iris Recognition Using Possibilistic Fuzzy Matching on Local Features
|
[
{
"docid": "e8eab2f5481f10201bc82b7a606c1540",
"text": "This survey covers the historical development and current state of the art in image understanding for iris biometrics. Most research publications can be categorized as making their primary contribution to one of the four major modules in iris biometrics: image acquisition, iris segmentation, texture analysis and matching of texture representations. Other important research includes experimental evaluations, image databases, applications and systems, and medical conditions that may affect the iris. We also suggest a short list of recommended readings for someone new to the field to quickly grasp the big picture of iris biometrics.",
"title": ""
}
] |
[
{
"docid": "27c942f988353e89413862c7bd5ee677",
"text": "Serial section Microscopy is an established method for volumetric anatomy reconstruction. Section series imaged with Electron Microscopy are currently vital for the reconstruction of the synaptic connectivity of entire animal brains such as that of Drosophila melanogaster. The process of removing ultrathin layers from a solid block containing the specimen, however, is a fragile procedure and has limited precision with respect to section thickness. We have developed a method to estimate the relative z-position of each individual section as a function of signal change across the section series. First experiments show promising results on both serial section Transmission Electron Microscopy (ssTEM) data and Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) series. We made our solution available as Open Source plugins for the TrakEM2 software and the ImageJ distribution Fiji.",
"title": ""
},
{
"docid": "6d156108949b6c02281b9478415db64f",
"text": "The measurement of textual patent similarities is crucial for important tasks in patent management, be it prior art analysis, infringement analysis, or patent mapping. In this paper the common theory of similarity measurement is applied to the field of patents, using solitary concepts as basic textual elements of patents. After unfolding the term ‘similarity’ in a content and formal oriented level and presenting a basic model of understanding, a segmented approach to the measurement of underlying variables, similarity coefficients, and the criteria-related profiles of their combinations is lined out. This leads to a guided way to the application of textual patent similarities, interesting both for theory and practice.",
"title": ""
},
{
"docid": "4fb5658723d791803c1fe0fdbd7ebdeb",
"text": "WAP-8294A2 (lotilibcin, 1) is a potent antibiotic with superior in vivo efficacy to vancomycin against methicillin-resistant Staphylococcus aureus (MRSA). Despite the great medical importance, its molecular mode of action remains unknown. Here we report the total synthesis of complex macrocyclic peptide 1 comprised of 12 amino acids with a β-hydroxy fatty-acid chain, and its deoxy analogue 2. A full solid-phase synthesis of 1 and 2 enabled their rapid assembly and the first detailed investigation of their functions. Compounds 1 and 2 were equipotent against various strains of Gram-positive bacteria including MRSA. We present evidence that the antimicrobial activities of 1 and 2 are due to lysis of the bacterial membrane, and their membrane-disrupting effects depend on the presence of menaquinone, an essential factor for the bacterial respiratory chain. The established synthetic routes and the menaquinone-targeting mechanisms provide valuable information for designing and developing new antibiotics based on their structures.",
"title": ""
},
{
"docid": "8f025fda5bbf9468dc65c16539d0aa0d",
"text": "Image compression is one of the key image processing techniques in signal processing and communication systems. Compression of images leads to reduction of storage space and reduces transmission bandwidth and hence also the cost. Advances in VLSI technology are rapidly changing the technological needs of common man. One of the major technological domains that are directly related to mankind is image compression. Neural networks can be used for image compression. Neural network architectures have proven to be more reliable, robust, and programmable and offer better performance when compared with classical techniques. In this work the main focus is on development of new architectures for hardware implementation of 3-D neural network based image compression optimizing area, power and speed as specific to ASIC implementation, and comparison with FPGA.",
"title": ""
},
{
"docid": "4031f4141333b9c0b95c175e22885ccc",
"text": "Android OS experiences a blazing popularity since the last few years. This predominant platform has established itself not only in the mobile world but also in the Internet of Things (IoT) devices. This popularity, however, comes at the expense of security, as it has become a tempting target of malicious apps. Hence, there is an increasing need for sophisticated, automatic, and portable malware detection solutions. In this paper, we propose MalDozer, an automatic Android malware detection and family attribution framework that relies on sequences classification using deep learning techniques. Starting from the raw sequence of the app’s API method calls, MalDozer automatically extracts and learns the malicious and the benign patterns from the actual samples to detect Android malware. MalDozer can serve as a ubiquitous malware detection system that is not only deployed on servers, but also on mobile and even IoT devices. We evaluate MalDozer on multiple Android malware datasets ranging from 1K to 33K malware apps, and 38K benign apps. The results show that MalDozer can correctly detect malware and attribute them to their actual families with an F1-Score of 96%− 99% and a false positive rate of 0.06%− 2%, under all tested datasets and settings.",
"title": ""
},
{
"docid": "ead92535c188bebd2285358c83fc0a07",
"text": "BACKGROUND\nIndigenous peoples of Australia, Canada, United States and New Zealand experience disproportionately high rates of suicide. As such, the methodological quality of evaluations of suicide prevention interventions targeting these Indigenous populations should be rigorously examined, in order to determine the extent to which they are effective for reducing rates of Indigenous suicide and suicidal behaviours. This systematic review aims to: 1) identify published evaluations of suicide prevention interventions targeting Indigenous peoples in Australia, Canada, United States and New Zealand; 2) critique their methodological quality; and 3) describe their main characteristics.\n\n\nMETHODS\nA systematic search of 17 electronic databases and 13 websites for the period 1981-2012 (inclusive) was undertaken. The reference lists of reviews of suicide prevention interventions were hand-searched for additional relevant studies not identified by the electronic and web search. The methodological quality of evaluations of suicide prevention interventions was assessed using a standardised assessment tool.\n\n\nRESULTS\nNine evaluations of suicide prevention interventions were identified: five targeting Native Americans; three targeting Aboriginal Australians; and one First Nation Canadians. The main intervention strategies employed included: Community Prevention, Gatekeeper Training, and Education. Only three of the nine evaluations measured changes in rates of suicide or suicidal behaviour, all of which reported significant improvements. The methodological quality of evaluations was variable. Particular problems included weak study designs, reliance on self-report measures, highly variable consent and follow-up rates, and the absence of economic or cost analyses.\n\n\nCONCLUSIONS\nThere is an urgent need for an increase in the number of evaluations of preventive interventions targeting reductions in Indigenous suicide using methodologically rigorous study designs across geographically and culturally diverse Indigenous populations. Combining and tailoring best evidence and culturally-specific individual strategies into one coherent suicide prevention program for delivery to whole Indigenous communities and/or population groups at high risk of suicide offers considerable promise.",
"title": ""
},
{
"docid": "a2faba3e69563acf9e874bf4c4327b5d",
"text": "We analyze a mobile wireless link comprising M transmitter andN receiver antennas operating in a Rayleigh flat-fading environment. The propagation coef fici nts between every pair of transmitter and receiver antennas are statistically independent and un known; they remain constant for a coherence interval ofT symbol periods, after which they change to new independent v alues which they maintain for anotherT symbol periods, and so on. Computing the link capacity, associated with channel codin g over multiple fading intervals, requires an optimization over the joint density of T M complex transmitted signals. We prove that there is no point in making the number of transmitter antennas greater t han the length of the coherence interval: the capacity forM > T is equal to the capacity for M = T . Capacity is achieved when the T M transmitted signal matrix is equal to the product of two stat i ically independent matrices: a T T isotropically distributed unitary matrix times a certain T M random matrix that is diagonal, real, and nonnegative. This result enables us to determine capacity f or many interesting cases. We conclude that, for a fixed number of antennas, as the length of the coherence i nterval increases, the capacity approaches the capacity obtained as if the receiver knew the propagatio n coefficients. Index Terms —Multi-element antenna arrays, wireless communications, space-time modulation",
"title": ""
},
{
"docid": "3f2e76d16149b2591262befc0957e4e2",
"text": "In order to improve the performance of the high-speed brushless direct current motor drives, a novel high-precision sensorless drive has been developed. It is well known that the inevitable voltage pulses, which are generated during the commutation periods, will impact the rotor position detecting accuracy, and further impact the performance of the overall sensorless drive, especially in the higher speed range or under the heavier load conditions. For this reason, the active compensation method based on the virtual third harmonic back electromotive force incorporating the SFF-SOGI-PLL (synchronic-frequency filter incorporating the second-order generalized integrator based phase-locked loop) is proposed to precise detect the commutation points for sensorless drive. An experimental driveline system used for testing the electrical performance of the developed magnetically suspended motor is built. The mathematical analysis and the comparable experimental results have been shown to validate the effectiveness of the proposed sensorless drive algorithm.",
"title": ""
},
{
"docid": "5d44a55eedbebe5c0cbacc0892d8b479",
"text": "We consider the problem of learning a causal graph over a set of variables with interventions. We study the cost-optimal causal graph learning problem: For a given skeleton (undirected version of the causal graph), design the set of interventions with minimum total cost, that can uniquely identify any causal graph with the given skeleton. We show that this problem is solvable in polynomial time. Later, we consider the case when the number of interventions is limited. For this case, we provide polynomial time algorithms when the skeleton is a tree or a clique tree. For a general chordal skeleton, we develop an efficient greedy algorithm, which can be improved when the causal graph skeleton is an interval graph.",
"title": ""
},
{
"docid": "e3e9532e873739e8024ba7d55de335c3",
"text": "We present a method for the sparse greedy approximation of Bayesian Gaussian process regression, featuring a novel heuristic for very fast forward selection. Our method is essentially as fast as an equivalent one which selects the “support” patterns at random, yet it can outperform random selection on hard curve fitting tasks. More importantly, it leads to a sufficiently stable approximation of the log marginal likelihood of the training data, which can be optimised to adjust a large number of hyperparameters automatically. We demonstrate the model selection capabilities of the algorithm in a range of experiments. In line with the development of our method, we present a simple view on sparse approximations for GP models and their underlying assumptions and show relations to other methods.",
"title": ""
},
{
"docid": "629b774e179a446ac2cbaef683daef25",
"text": "Flux-switching permanent magnet (FSPM) motors have a doubly salient structure, the magnets being housed in the stator and the stator winding comprising concentrated coils. They have attracted considerable interest due to their essentially sinusoidal phase back electromotive force (EMF) waveform. However, to date, the inherent nature of this desirable feature has not been investigated in detail. Thus, a typical three-phase FSPM motor with 12 stator teeth and ten rotor poles is considered. It is found that, since there is a significant difference in the magnetic flux paths associated with the coils of each phase, this results in harmonics in the coil back EMF waveforms being cancelled, resulting in essentially sinusoidal phase back EMF waveforms. In addition, the influence of the rotor pole-arc on the phase back EMF waveform is evaluated by finite-element analysis, and an optimal pole-arc for minimum harmonic content in the back EMF is obtained and verified experimentally.",
"title": ""
},
{
"docid": "085ef3104f22263be11f3a2b5f16ff34",
"text": "ARTICLE INFO Tumor is the one of the most common brain diesease and this is the reason for the diagnosis & treatment of the brain tumor has vital importance. MRI is the technique used to produce computerised image of internal body tissues. Cells are growing in uncontrollable manner this results in mass of unwanted tissue which is called as tumor. CT-Scan and MRI image which are diagnostic technique are used to detect brain tumor and classifies in types malignant & benign. This is difficult due to variations hence techniques like image preprocessing, feature extraction are used, there are many methods developed but they have different results. In this paper we are going to discuss the methods for detection of brain tumor and evaluate them.",
"title": ""
},
{
"docid": "cbd6e6c75cae86426c21a38bd523200f",
"text": "Schottky junctions have been realized by evaporating gold spots on top of sexithiophen (6T), which is deposited on TiO 2 or ZnO with e-beam and spray pyrolysis. Using Mott-Schottky analysis of 6T/TiO2 and 6T/ZnO devices acceptor densities of 4.5x10(16) and 3.7x10(16) cm(-3) are obtained, respectively. For 6T/TiO2 deposited with the e-beam evaporation a conductivity of 9x10(-8) S cm(-1) and a charge carrier mobility of 1.2x10(-5) cm2/V s is found. Impedance spectroscopy is used to model the sample response in detail in terms of resistances and capacitances. An equivalent circuit is derived from the impedance measurements. The high-frequency data are analyzed in terms of the space-charge capacitance. In these frequencies shallow acceptor states dominate the heterojunction time constant. The high-frequency RC time constant is 8 micros. Deep acceptor states are represented by a resistance and a CPE connected in series. The equivalent circuit is validated in the potential range (from -1.2 to 0.8 V) for 6T/ZnO obtained with spray pyrolysis.",
"title": ""
},
{
"docid": "2ccbe363a448e796ad7a93d819d12444",
"text": "With the ever-growing performance gap between memory systems and disks, and rapidly improving CPU performance, virtual memory (VM) management becomes increasingly important for overall system performance. However, one of its critical components, the page replacement policy, is still dominated by CLOCK, a replacement policy developed almost 40 years ago. While pure LRU has an unaffordable cost in VM, CLOCK simulates the LRU replacement algorithm with a low cost acceptable in VM management. Over the last three decades, the inability of LRU as well as CLOCK to handle weak locality accesses has become increasingly serious, and an effective fix becomes increasingly desirable. Inspired by our I/O buffer cache replacement algorithm, LIRS [13], we propose an improved CLOCK replacement policy, called CLOCK-Pro. By additionally keeping track of a limited number of replaced pages, CLOCK-Pro works in a similar fashion as CLOCK with a VM-affordable cost. Furthermore, it brings all the much-needed performance advantages from LIRS into CLOCK. Measurements from an implementation of CLOCK-Pro in Linux Kernel 2.4.21 show that the execution times of some commonly used programs can be reduced by up to 47%.",
"title": ""
},
{
"docid": "49ffd8624fc677ce51d0c079ca2e52f3",
"text": "Chatbots have been around since the 1960's, but recently they have risen in popularity especially due to new compatibility with social networks and messenger applications. Chatbots are different from traditional user interfaces, for they unveil themselves to the user one sentence at a time. Because of that, users may struggle to interact with them and to understand what they can do. Hence, it is important to support designers in deciding how to convey chatbots' features to users, as this might determine whether the user continues to chat or not. As a first step in this direction, in this paper our goal is to analyze the communicative strategies that have been used by popular chatbots to convey their features to users. To perform this analysis we use the Semiotic Inspection Method (SIM). As a result we identify and discuss the different strategies used by the analyzed chatbots to present their features to users. We also discuss the challenges and limitations of using SIM on such interfaces.",
"title": ""
},
{
"docid": "71f757b2e42466bef1df379cdb852c7e",
"text": "Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and little latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a \"video see through\" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. In this paper, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curves interaction matrices are given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.",
"title": ""
},
{
"docid": "7d6c441d745adf8a7f6d833da9e46716",
"text": "X-ray computed tomography is a widely used method for nondestructive visualization of the interior of different samples - also of wooden material. Different to usual applications very high resolution is needed to use such CT images in dendrochronology and to evaluate wood species. In dendrochronology big samples (up to 50 cm) are necessary to scan. The needed resolution is - depending on the species - about 20 mum. In wood identification usually very small samples have to be scanned, but wood anatomical characters of less than 1 mum in width have to be visualized. This paper deals with four examples of X-ray CT scanned images to be used for dendrochronology and wood identification.",
"title": ""
},
{
"docid": "2801a7eea00bc4db7d6aacf71071de20",
"text": "Internet of Things (IoT) devices are rapidly becoming ubiquitous while IoT services are becoming pervasive. Their success has not gone unnoticed and the number of threats and attacks against IoT devices and services are on the increase as well. Cyber-attacks are not new to IoT, but as IoT will be deeply interwoven in our lives and societies, it is becoming necessary to step up and take cyber defense seriously. Hence, there is a real need to secure IoT, which has consequently resulted in a need to comprehensively understand the threats and attacks on IoT infrastructure. This paper is an attempt to classify threat types, besides analyze and characterize intruders and attacks facing IoT devices and services.",
"title": ""
},
{
"docid": "10e66f0c9cc3532029de388c2018f8ed",
"text": "1. ABSTRACT WC have developed a series of lifelike computer characters called Virtual Petz. These are autonomous agents with real-time layered 3D animation and sound. Using a mouse the user moves a hand-shaped cursor to directly touch, pet, and pick up the characters, as well as use toys and objects. Virtual Petz grow up over time on the user’s PC computer desktop, and strive to be the user’s friends and companions. They have evolving social relationships with the user and each other. To implement these agents we have invented hybrid techniques that draw from cartoons, improvisational drama, AI and video games. 1.1",
"title": ""
},
{
"docid": "17806963c91f6d6981f1dcebf3880927",
"text": "The ability to assess the reputation of a member in a web community is a need addressed in many different ways according to the many different stages in which the nature of communities has evolved over time. In the case of reputation of goods/services suppliers, the solutions available to prevent the feedback abuse are generally reliable but centralized under the control of few big Internet companies. In this paper we show how a decentralized and distributed feedback management system can be built on top of the Bitcoin blockchain.",
"title": ""
}
] |
scidocsrr
|
28a8e628352cfeda6fd5f00ecc8c5567
|
Variable stiffness fabrics with embedded shape memory materials for wearable applications
|
[
{
"docid": "ca768eb654b323354b7d78969162cb81",
"text": "Hyper-redundant manipulators can be fragile, expensive, and limited in their flexibility due to the distributed and bulky actuators that are typically used to achieve the precision and degrees of freedom (DOFs) required. Here, a manipulator is proposed that is robust, high-force, low-cost, and highly articulated without employing traditional actuators mounted at the manipulator joints. Rather, local tunable stiffness is coupled with off-board spooler motors and tension cables to achieve complex manipulator configurations. Tunable stiffness is achieved by reversible jamming of granular media, which-by applying a vacuum to enclosed grains-causes the grains to transition between solid-like states and liquid-like ones. Experimental studies were conducted to identify grains with high strength-to-weight performance. A prototype of the manipulator is presented with performance analysis, with emphasis on speed, strength, and articulation. This novel design for a manipulator-and use of jamming for robotic applications in general-could greatly benefit applications such as human-safe robotics and systems in which robots need to exhibit high flexibility to conform to their environments.",
"title": ""
},
{
"docid": "9b61ddcc5312a33ac9b22fe185a95e18",
"text": "INTRODUCTION Treatments for gait pathologies associated with neuromuscular disorders (such as dropfoot, spasticity, etc.) may employ a passive mechanical brace [1]. Depending on the gait pathology, the brace may be applied to the hip, knee, ankle, or any combination thereof. While passive mechanical braces provide certain benefits, they may lead to additional medical problems. For example, an ankle-foot orthotic (AFO) is typically used to prevent the toe from dragging on the ground in the case of drop-foot. Rigid versions of the AFO constrain the ankle to a specific position. By limiting the range of motion, the toe can clear the ground, thus allowing gait to progress more naturally. However, the use of the AFO may result in a reduction in power generation at the ankle, as it limits active plantar flexion. Moreover, the AFO may lead to disuse atrophy of the muscles, such as the tibialis anterior muscle, potentially leading to long-term dependence [2]. While previous researchers have examined actuating a rigid orthotic [3], we examine using NiTi shape memory alloy (SMA) wires to embed actuation within a soft material. In this manner, the orthotic can provide variable assistance depending on the gait cycle phase, activity level, and needs of the wearer. Thus, the subject can have individualized control, causing the muscles to be used more appropriately, possibly leading to a reeducation of the motor system and eventual independence from the orthotic system.",
"title": ""
}
] |
[
{
"docid": "984a289e33debae553dffc4f601dc203",
"text": "Nowadays, the prevailing detectors of steganographic communication in digital images mainly consist of three steps, i.e., residual computation, feature extraction, and binary classification. In this paper, we present an alternative approach to steganalysis of digital images based on convolutional neural network (CNN), which is shown to be able to well replicate and optimize these key steps in a unified framework and learn hierarchical representations directly from raw images. The proposed CNN has a quite different structure from the ones used in conventional computer vision tasks. Rather than a random strategy, the weights in the first layer of the proposed CNN are initialized with the basic high-pass filter set used in the calculation of residual maps in a spatial rich model (SRM), which acts as a regularizer to suppress the image content effectively. To better capture the structure of embedding signals, which usually have extremely low SNR (stego signal to image content), a new activation function called a truncated linear unit is adopted in our CNN model. Finally, we further boost the performance of the proposed CNN-based steganalyzer by incorporating the knowledge of selection channel. Three state-of-the-art steganographic algorithms in spatial domain, e.g., WOW, S-UNIWARD, and HILL, are used to evaluate the effectiveness of our model. Compared to SRM and its selection-channel-aware variant maxSRMd2, our model achieves superior performance across all tested algorithms for a wide variety of payloads.",
"title": ""
},
{
"docid": "a14665d8ae0a471a56607bb175e6c8c6",
"text": "Multiple modalities often co-occur when describing natural phenomena. Learning a joint representation of these modalities should yield deeper and more useful representations. Previous generative approaches to multi-modal input either do not learn a joint distribution or require additional computation to handle missing data. Here, we introduce a multimodal variational autoencoder (MVAE) that uses a product-of-experts inference network and a sub-sampled training paradigm to solve the multi-modal inference problem. Notably, our model shares parameters to efficiently learn under any combination of missing modalities. We apply the MVAE on four datasets and match state-of-the-art performance using many fewer parameters. In addition, we show that the MVAE is directly applicable to weaklysupervised learning, and is robust to incomplete supervision. We then consider two case studies, one of learning image transformations—edge detection, colorization, segmentation—as a set of modalities, followed by one of machine translation between two languages. We find appealing results across this range of tasks.",
"title": ""
},
{
"docid": "ecc91205bcc64e049a30ce61d305589d",
"text": "Software architecture has become a centerpiece subject for software engineers, both researchers and practitioners alike. At the heart of every software system is its software architecture, i.e., \"the set of principal design decisions about the system\". Architecture permeates all major facets of a software system, for principal design decisions may potentially be made at any time during a system's lifetime, and potentially by any stakeholder. Such decisions encompass structural concerns, such as the system's high-level building blocks---components, connectors, and configurations; the system's deployment; the system's non-functional properties; and the system's evolution patterns, including runtime adaptation. Software architectures found particularly useful for families of systems---product lines---are often codified into architectural patterns, architectural styles, and reusable, parameterized reference architectures. This tutorial affords the participant an extensive treatment of the field of software architecture, its foundation, principles, and elements, including those mentioned above. Additionally, the tutorial introduces the participants to the state-of-the-art as well as the state-of-the-practice in software architecture, and looks at emerging and likely future trends in this field. The discussion is illustrated with numerous real-world examples. One example given prominent treatment is the architecture of the World Wide Web and its underlying architectural style, REpresentational State Transfer (REST).",
"title": ""
},
{
"docid": "a69747683329667c0d697f3127fa58c1",
"text": "Clustering is the process of grouping a set of objects into classes of similar objects. Although definitions of similarity vary from one clustering model to another, in most of these models the concept of similarity is based on distances, e.g., Euclidean distance or cosine distance. In other words, similar objects are required to have close values on at least a set of dimensions. In this paper, we explore a more general type of similarity. Under the pCluster model we proposed, two objects are similar if they exhibit a coherent pattern on a subset of dimensions. For instance, in DNA microarray analysis, the expression levels of two genes may rise and fall synchronously in response to a set of environmental stimuli. Although the magnitude of their expression levels may not be close, the patterns they exhibit can be very much alike. Discovery of such clusters of genes is essential in revealing significant connections in gene regulatory networks. E-commerce applications, such as collaborative filtering, can also benefit from the new model, which captures not only the closeness of values of certain leading indicators but also the closeness of (purchasing, browsing, etc.) patterns exhibited by the customers. Our paper introduces an effective algorithm to detect such clusters, and we perform tests on several real and synthetic data sets to show its effectiveness.",
"title": ""
},
{
"docid": "82535c102f41dc9d47aa65bd71ca23be",
"text": "We report on an experiment that examined the influence of anthropomorphism and perceived agency on presence, copresence, and social presence in a virtual environment. The experiment varied the level of anthropomorphism of the image of interactants: high anthropomorphism, low anthropomorphism, or no image. Perceived agency was manipulated by telling the participants that the image was either an avatar controlled by a human, or an agent controlled by a computer. The results support the prediction that people respond socially to both human and computer-controlled entities, and that the existence of a virtual image increases tele-presence. Participants interacting with the less-anthropomorphic image reported more copresence and social presence than those interacting with partners represented by either no image at all or by a highly anthropomorphic image of the other, indicating that the more anthropomorphic images set up higher expectations that lead to reduced presence when these expectations were not met.",
"title": ""
},
{
"docid": "bb0ef8084d0693d7ea453cd321b13e0b",
"text": "Distributed computation is increasingly important for deep learning, and many deep learning frameworks provide built-in support for distributed training. This results in a tight coupling between the neural network computation and the underlying distributed execution, which poses a challenge for the implementation of new communication and aggregation strategies. We argue that decoupling the deep learning framework from the distributed execution framework enables the flexible development of new communication and aggregation strategies. Furthermore, we argue that Ray [12] provides a flexible set of distributed computing primitives that, when used in conjunction with modern deep learning libraries, enable the implementation of a wide range of gradient aggregation strategies appropriate for different computing environments. We show how these primitives can be used to address common problems, and demonstrate the performance benefits empirically.",
"title": ""
},
{
"docid": "601e4594392e033532b802798d2ab929",
"text": "0747-5632/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.chb.2010.03.013 * Corresponding author. Tel.: +31 53 489 2322; fax E-mail addresses: a.beldad@utwente.nl, adbeldad@ jong@utwente.nl (M. de Jong), m.f.steehouder@utwen 1 Tel.: +31 53 489 3313; fax: +31 53 489 4259. 2 Tel.: +31 53 489 3315; fax: +31 53 489 4259. Trust is generally assumed to be an important precondition for people’s adoption of electronic services. This paper provides an overview of the available research into the antecedents of trust in both commercial and non-commercial online transactions and services. A literature review was conducted covering empirical studies on people’s trust in and adoption of computer-mediated services. Results are described using a framework of three clusters of antecedents: customer/client-based, website-based, and company/ organization-based antecedents. Results show that there are many possible antecedents of trust in electronic services. The majority of the research has been conducted in the context of e-commerce; only few studies are available in the domains of e-government and e-health. For many antecedents, some empirical support can be found, but the results are far from univocal. The research calls for more, and particularly more systematic, research attention for the antecedents of trust in electronic services. The review presented in this paper offers practitioners an overview of possibly relevant variables that may affect people’s trust in electronic services. It also gives a state-of-the-art overview of the empirical support for the relevance of these variables. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5c4f20fcde1cc7927d359fd2d79c2ba5",
"text": "There are different interpretations of user experience that lead to different scopes of measure. The ISO definition suggests measures of user experience are similar to measures of satisfaction in usability. A survey at Nokia showed that user experience was interpreted in a similar way to usability, but with the addition of anticipation and hedonic responses. CHI 2009 SIG participants identified not just measurement methods, but methods that help understanding of how and why people use products. A distinction can be made between usability methods that have the objective of improving human performance, and user experience methods that have the objective of improving user satisfaction with achieving both pragmatic and hedonic goals. Sometimes the term “user experience” is used to refer to both approaches. DEFINITIONS OF USABILITY AND USER EXPERIENCE There has been a lot of recent debate about the scope of user experience, and how it should be defined [5]. The definition of user experience in ISO FDIS 9241-210 is: A person's perceptions and responses that result from the use and/or anticipated use of a product, system or service. This contrasts with the revised definition of usability in ISO FDIS 9241-210: Extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. Both these definitions suggest that usability or user experience can be measured during or after use of a product, system or service. A person's “perceptions and responses” in the definition of user experience are similar to the concept of satisfaction in usability. From this perspective, measures of user experience can be encompassed within the 3-component model of usability [1], particularly when the experience is task-related. A weakness of both definitions is that they are not explicitly concerned with time. Just as the ISO 9241-11 definition of usability has nothing to say about learnability (where usability changes over time), so the ISO 9241-210 definition of user experience has nothing to say about the way user experience evolves from expectation, through actual interaction, to a total experience that includes reflection on the experience [7]. USER EXPERIENCE NEEDS IN DESIGN AND DEVELOPMENT Ketola and Roto [4] surveyed the needs for information on user experience in Nokia, asking senior staff: Which User Experience information (measurable data gained from our target users directly or indirectly), is useful for your organization? How? 21 needs were identified from 18 respondents who worked in Research, Development, Care, and Quality. Ketola and Roto categorised the responses in terms of the area measured: UX lifecycle, retention, use of functions, breakdowns, customer care, localization, device performance and new technology. In Table 1, the needs have been recategorized by type of measure. It is clear that most of the measures are common to conventional approaches to user centred design, but three measures are specific to user experience: • The impact of expected UX to purchase decisions • Continuous excitement • Why and when the user experiences frustration? USER EXPERIENCE EVALUATION METHODS At the CHI 2009 SIG: “User Experience Evaluation – Do You Know Which Method to Use?” [6] [8], participants were asked to describe user experience evaluation methods that they used. 36 methods were collected (including the example methods presented by the organizers). 
These have been categorised in Table 2 by the type of evaluation context, and the type of data collected. There was very little mention of using measures specific to user experience, particularly from industry participants. It seems that industry's interpretation of user experience evaluation methods is much broader, going beyond conventional evaluation to encompass methods that collect information that helps design for user experience. In that sense user experience evaluation seems to be interpreted as user centred design methods for achieving user experience. The differentiating factor from more traditional usability work is thus a wider end goal: not just achieving effectiveness, efficiency and satisfaction, but optimising the whole user experience from expectation through actual interaction to reflection on the experience. DIFFERENCES BETWEEN USABILITY AND USER EXPERIENCE Although there is no fundamental difference between measures of usability and measures of user experience at a particular point in time, the difference in emphasis between task performance and pleasure leads to different concerns during development. In the context of user centred design, typical usability concerns include: Measurement category Measurement type Measure Area measured Anticipation Pre-purchase Anticipated use The impact of expected UX to purchase decisions UX lifecycle Overall usability First use Effectiveness Success of taking the product into use UX lifecycle Product upgrade Effectiveness Success in transferring content from old device to the new device UX lifecycle Expectations vs. reality Satisfaction Has the device met your expectations? Retention Long term experience Satisfaction Are you satisfied with the product quality (after 3 months of use) Retention Hedonic Engagement Pleasure Continuous excitement Retention UX Obstacles Frustration Why and when the user experiences frustration? Breakdowns Detailed usability Use of device functions How used What functions are used, how often, why, how, when, where? Use of functions Malfunction Technical problems Amount of “reboots” and severe technical problems experienced. Breakdowns Usability problems Usability problems Top 10 usability problems experienced by the customers. Breakdowns Effect of localization Satisfaction with localisation How do users perceive content in their local language? Localization Latencies Satisfaction with device performance Perceived latencies in key tasks. Device performance Performance Satisfaction with device performance Perceived UX on device performance Device performance Perceived complexity Satisfaction with task complexity Actual and perceived complexity of task accomplishments. Device performance User differences Previous devices Previous user experience Which device you had previously? Retention Differences in user groups User differences How different user groups access features? Use of functions Reliability of product planning User differences Comparison of target users vs. actual buyers? Use of functions Support Customer experience in “touchpoints” Satisfaction with support How does customer think & feel about the interaction in the touch points? Customer care Accuracy of support information Consequences of poor support Does inaccurate support information result in product returns? How? 
Customer care Innovation feedback User wish list New user ideas & innovations triggered by new experiences New technologies Impact of use Change in user behaviour How the device affects user behaviour How are usage patterns changing when new technologies are introduced New technologies Table 1. Categorisation of usability measures reported in [4] 1. Designing for and evaluating overall effectiveness and efficiency. 2. Designing for and evaluating user comfort and satisfaction. 3. Designing to make the product easy to use, and evaluating the product in order to identify and fix usability problems. 4. When relevant, the temporal aspect leads to a concern for learnability. In the context of user centred design, typical user experience concerns include: 1. Understanding and designing the user’s experience with a product: the way in which people interact with a product over time: what they do and why. 2. Maximising the achievement of the hedonic goals of stimulation, identification and evocation and associated emotional responses. Sometimes the two sets of issues are contrasted as usability and user experience. But some organisations would include both under the common umbrella of user experience. Evaluation context Lab tests Lab study with mind maps Paper prototyping Field tests Product / Tool Comparison Competitive evaluation of prototypes in the wild Field observation Long term pilot study Longitudinal comparison Contextual Inquiry Observation/Post Interview Activity Experience Sampling Longitudinal Evaluation Ethnography Field observations Longitudinal Studies Evaluation of groups Evaluating collaborative user experiences, Instrumented product TRUE Tracking Realtime User Experience Domain specific Nintendi Wii Children OPOS Outdoor Play Observation Scheme This-or-that Approaches Evaluating UX jointly with usability Evaluation data User opinion/interview Lab study with mind maps Quick and dirty evaluation Audio narrative Retrospective interview Contextual Inquiry Focus groups evaluation Observation \\ Post Interview Activity Experience Sampling Sensual Evaluation Instrument Contextual Laddering Interview ESM User questionnaire Survey Questions Emocards Experience sampling triggered by events, SAM Magnitude Estimation TRUE Tracking Realtime User Experience Questionnaire (e.g. AttrakDiff) Human responses PURE preverbal user reaction evaluation Psycho-physiological measurements Expert evaluation Expert evaluation Heuristic matrix Perspective-Based Inspection Table2. User experience evaluation methods (CHI 2009 SIG) CONCLUSIONS The scope of user experience The concept of user experience both broadens: • The range of human responses that would be measured to include pleasure. • The circumstances in which they would be measured to include anticipated use and reflection on use. Equally importantly the goal to achieve improved user experience over the whole lifecycle of user involvement with the product leads to increased emphasis on use of methods that help understand what can be done to improve this experience through the whole lifecycle of user involvement. However, notably absent from any of the current surveys or initiative",
"title": ""
},
{
"docid": "e5f6d7ed8d2dbf0bc2cde28e9c9e129b",
"text": "Change detection is the process of finding out difference between two images taken at two different times. With the help of remote sensing the . Here we will try to find out the difference of the same image taken at different times. here we use mean ratio and log ratio to find out the difference in the images. Log is use to find background image and fore ground detected by mean ratio. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its preexistences.",
"title": ""
},
{
"docid": "c6fd4ed41ed4f047e8e17d66a8b1957e",
"text": "In this paper, we propose a new simple and learning-free deep learning network named MomentsNet, whose convolution layer, nonlinear processing layer and pooling layer are constructed by Moments kernels, binary hashing and block-wise histogram, respectively. Twelve typical moments (including geometrical moment, Zernike moment, Tchebichef moment, etc.) are used to construct the MomentsNet whose recognition performance for binary image is studied. The results reveal that MomentsNet has better recognition performance than its corresponding moments in almost all cases and ZernikeNet achieves the best recognition performance among MomentsNet constructed by twelve moments. ZernikeNet also shows better recognition performance on a binary image database than that of PCANet, which is a learning-based deep learning network.",
"title": ""
},
{
"docid": "2dbeb1a7d1177d0afcef6b21f45f47d4",
"text": "In this paper, the effects of new methods for risk classification, e.g., genetic tests, on health insurance markets are studied using an insurance model with state contingent utility functions. The analysis focuses on the case of treatment costs higher than the patient's willingness to pay where standard models of asymmetric information are not applicable. In this case, the benefit from signing a fair insurance contract will be positive only if illness probability is low. In contrast to the common perception, additional risk classification under symmetric information can be efficiency enhancing. Under asymmetric information about illness risks, however, there can be complete market failure.",
"title": ""
},
{
"docid": "04079b6ec122ea9828fc514c275d7821",
"text": "The thermoelectric effect enables direct and reversible conversion between thermal and electrical energy, and provides a viable route for power generation from waste heat. The efficiency of thermoelectric materials is dictated by the dimensionless figure of merit, ZT (where Z is the figure of merit and T is absolute temperature), which governs the Carnot efficiency for heat conversion. Enhancements above the generally high threshold value of 2.5 have important implications for commercial deployment, especially for compounds free of Pb and Te. Here we report an unprecedented ZT of 2.6 ± 0.3 at 923 K, realized in SnSe single crystals measured along the b axis of the room-temperature orthorhombic unit cell. This material also shows a high ZT of 2.3 ± 0.3 along the c axis but a significantly reduced ZT of 0.8 ± 0.2 along the a axis. We attribute the remarkably high ZT along the b axis to the intrinsically ultralow lattice thermal conductivity in SnSe. The layered structure of SnSe derives from a distorted rock-salt structure, and features anomalously high Grüneisen parameters, which reflect the anharmonic and anisotropic bonding. We attribute the exceptionally low lattice thermal conductivity (0.23 ± 0.03 W m−1 K−1 at 973 K) in SnSe to the anharmonicity. These findings highlight alternative strategies to nanostructuring for achieving high thermoelectric performance.",
"title": ""
},
{
"docid": "36cb5fa9af45fcd34d6c1114d6cd9be5",
"text": "The quality of requirements is typically considered as an important factor for the quality of the end product. For traditional up-front requirements specifications, a number of standards have been defined on what constitutes good quality : Requirements should be complete, unambiguous, specific, time-bounded, consistent, etc. For agile requirements specifications, no new standards have been defined yet, and it is not clear yet whether traditional quality criteria still apply. To investigate what quality criteria for assessing the correctness of written agile requirements exist, we have conducted a systematic literature review. The review resulted in a list of 16 selected papers on this topic. These selected papers describe 28 different quality criteria for agile requirements specifications. We categorize and analyze these criteria and compare them with those from traditional requirements engineering. We discuss findings from the 16 papers in the form of recommendations for practitioners on quality assessment of agile requirements. At the same time, we indicate the open points in the form of a research agenda for researchers working on this topic .",
"title": ""
},
{
"docid": "ed7c4d2c562a4ad6d9e8d0fc0fc589e3",
"text": "The reported research extends classic findings that after briefly viewing structured, but not random, chess positions, chess masters reproduce these positions much more accurately than less-skilled players. Using a combination of the gaze-contingent window paradigm and the change blindness flicker paradigm, we documented dramatically larger visual spans for experts while processing structured, but not random, chess positions. In addition, in a check-detection task, a minimized 3 x 3 chessboard containing a King and potentially checking pieces was displayed. In this task, experts made fewer fixations per trial than less-skilled players, and had a greater proportion of fixations between individual pieces, rather than on pieces. Our results provide strong evidence for a perceptual encoding advantage for experts attributable to chess experience, rather than to a general perceptual or memory superiority.",
"title": ""
},
{
"docid": "1d6bc809c0870ea88d7c66d330456da3",
"text": "Orodispersible films (ODFs) are intended to disintegrate within seconds when placed onto the tongue. The common way of manufacturing is the solvent casting method. Flexographic printing on drug-free ODFs is introduced as a highly flexible and cost-effective alternative manufacturing method in this study. Rasagiline mesylate and tadalafil were used as model drugs. Printing of rasagiline solutions and tadalafil suspensions was feasible. Up to four printing cycles were performed. The possibility to employ several printing cycles enables a continuous, highly flexible manufacturing process, for example for individualised medicine. The obtained ODFs were characterised regarding their mechanical properties, their disintegration time, API crystallinity and homogeneity. Rasagiline mesylate did not recrystallise after the printing process. Relevant film properties were not affected by printing. Results were comparable to the results of ODFs manufactured with the common solvent casting technique, but the APIs are less stressed through mixing, solvent evaporation and heat. Further, loss of material due to cutting jumbo and daughter rolls can be reduced. Therefore, a versatile new manufacturing technology particularly for processing high-potent low-dose or heat sensitive drugs is introduced in this study.",
"title": ""
},
{
"docid": "d146a363006aa6cc5dde35f740a28aab",
"text": "Website privacy policies are often ignored by Internet users, because these documents tend to be long and difficult to understand. However, the significance of privacy policies greatly exceeds the attention paid to them: these documents are binding legal agreements between website operators and their users, and their opaqueness is a challenge not only to Internet users but also to policy regulators. One proposed alternative to the status quo is to automate or semi-automate the extraction of salient details from privacy policy text, using a combination of crowdsourcing, natural language processing, and machine learning. However, there has been a relative dearth of datasets appropriate for identifying data practices in privacy policies. To remedy this problem, we introduce a corpus of 115 privacy policies (267K words) with manual annotations for 23K fine-grained data practices. We describe the process of using skilled annotators and a purpose-built annotation tool to produce the data. We provide findings based on a census of the annotations and show results toward automating the annotation procedure. Finally, we describe challenges and opportunities for the research community to use this corpus to advance research in both privacy and language technologies.",
"title": ""
},
{
"docid": "97ec7149cbaedc6af3a26030067e2dba",
"text": "Skype is a peer-to-peer VoIP client developed by KaZaa in 2003. Skype claims that it can work almost seamlessly across NATs and firewalls and has better voice quality than the MSN and Yahoo IM applications. It encrypts calls end-to-end, and stores user information in a decentralized fashion. Skype also supports instant messaging and conferencing. This report analyzes key Skype functions such as login, NAT and firewall traversal, call establishment, media transfer, codecs, and conferencing under three different network setups. Analysis is performed by careful study of Skype network traffic.",
"title": ""
},
{
"docid": "78bf0b1d4065fd0e1740589c4e060c70",
"text": "This paper presents an efficient metric for quantifying the visual fidelity of natural images based on near-threshold and suprathreshold properties of human vision. The proposed metric, the visual signal-to-noise ratio (VSNR), operates via a two-stage approach. In the first stage, contrast thresholds for detection of distortions in the presence of natural images are computed via wavelet-based models of visual masking and visual summation in order to determine whether the distortions in the distorted image are visible. If the distortions are below the threshold of detection, the distorted image is deemed to be of perfect visual fidelity (VSNR = infin)and no further analysis is required. If the distortions are suprathreshold, a second stage is applied which operates based on the low-level visual property of perceived contrast, and the mid-level visual property of global precedence. These two properties are modeled as Euclidean distances in distortion-contrast space of a multiscale wavelet decomposition, and VSNR is computed based on a simple linear sum of these distances. The proposed VSNR metric is generally competitive with current metrics of visual fidelity; it is efficient both in terms of its low computational complexity and in terms of its low memory requirements; and it operates based on physical luminances and visual angle (rather than on digital pixel values and pixel-based dimensions) to accommodate different viewing conditions.",
"title": ""
},
{
"docid": "35c904cdbaddec5e7cd634978c0b415d",
"text": "Life-long visual localization is one of the most challenging topics in robotics over the last few years. The difficulty of this task is in the strong appearance changes that a place suffers due to dynamic elements, illumination, weather or seasons. In this paper, we propose a novel method (ABLE-M) to cope with the main problems of carrying out a robust visual topological localization along time. The novelty of our approach resides in the description of sequences of monocular images as binary codes, which are extracted from a global LDB descriptor and efficiently matched using FLANN for fast nearest neighbor search. Besides, an illumination invariant technique is applied. The usage of the proposed binary description and matching method provides a reduction of memory and computational costs, which is necessary for long-term performance. Our proposal is evaluated in different life-long navigation scenarios, where ABLE-M outperforms some of the main state-of-the-art algorithms, such as WI-SURF, BRIEF-Gist, FAB-MAP or SeqSLAM. Tests are presented for four public datasets where a same route is traversed at different times of day or night, along the months or across all four seasons.",
"title": ""
},
{
"docid": "2ae96a524ba3b6c43ea6bfa112f71a30",
"text": "Accurate quantification of gluconeogenic flux following alcohol ingestion in overnight-fasted humans has yet to be reported. [2-13C1]glycerol, [U-13C6]glucose, [1-2H1]galactose, and acetaminophen were infused in normal men before and after the consumption of 48 g alcohol or a placebo to quantify gluconeogenesis, glycogenolysis, hepatic glucose production, and intrahepatic gluconeogenic precursor availability. Gluconeogenesis decreased 45% vs. the placebo (0.56 ± 0.05 to 0.44 ± 0.04 mg ⋅ kg-1 ⋅ min-1vs. 0.44 ± 0.05 to 0.63 ± 0.09 mg ⋅ kg-1 ⋅ min-1, respectively, P < 0.05) in the 5 h after alcohol ingestion, and total gluconeogenic flux was lower after alcohol compared with placebo. Glycogenolysis fell over time after both the alcohol and placebo cocktails, from 1.46-1.47 mg ⋅ kg-1 ⋅ min-1to 1.35 ± 0.17 mg ⋅ kg-1 ⋅ min-1(alcohol) and 1.26 ± 0.20 mg ⋅ kg-1 ⋅ min-1, respectively (placebo, P < 0.05 vs. baseline). Hepatic glucose output decreased 12% after alcohol consumption, from 2.03 ± 0.21 to 1.79 ± 0.21 mg ⋅ kg-1 ⋅ min-1( P < 0.05 vs. baseline), but did not change following the placebo. Estimated intrahepatic gluconeogenic precursor availability decreased 61% following alcohol consumption ( P < 0.05 vs. baseline) but was unchanged after the placebo ( P < 0.05 between treatments). We conclude from these results that gluconeogenesis is inhibited after alcohol consumption in overnight-fasted men, with a somewhat larger decrease in availability of gluconeogenic precursors but a smaller effect on glucose production and no effect on plasma glucose concentrations. Thus inhibition of flux into the gluconeogenic precursor pool is compensated by changes in glycogenolysis, the fate of triose-phosphates, and peripheral tissue utilization of plasma glucose.",
"title": ""
}
] |
scidocsrr
|
f2ca1c3a0de114fbb17e95e6c2fe6833
|
RNN Approaches to Text Normalization: A Challenge
|
[
{
"docid": "0da4b25ce3d4449147f7258d0189165f",
"text": "We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1% without a dictionary or an external language model and 10.3% with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0% on the same set.",
"title": ""
}
] |
[
{
"docid": "1349c5daedd71bdfccaa0ea48b3fd54a",
"text": "OBJECTIVE\nCraniosacral therapy (CST) is an alternative treatment approach, aiming to release restrictions around the spinal cord and brain and subsequently restore body function. A previously conducted systematic review did not obtain valid scientific evidence that CST was beneficial to patients. The aim of this review was to identify and critically evaluate the available literature regarding CST and to determine the clinical benefit of CST in the treatment of patients with a variety of clinical conditions.\n\n\nMETHODS\nComputerised literature searches were performed in Embase/Medline, Medline(®) In-Process, The Cochrane library, CINAHL, and AMED from database start to April 2011. Studies were identified according to pre-defined eligibility criteria. This included studies describing observational or randomised controlled trials (RCTs) in which CST as the only treatment method was used, and studies published in the English language. The methodological quality of the trials was assessed using the Downs and Black checklist.\n\n\nRESULTS\nOnly seven studies met the inclusion criteria, of which three studies were RCTs and four were of observational study design. Positive clinical outcomes were reported for pain reduction and improvement in general well-being of patients. Methodological Downs and Black quality scores ranged from 2 to 22 points out of a theoretical maximum of 27 points, with RCTs showing the highest overall scores.\n\n\nCONCLUSION\nThis review revealed the paucity of CST research in patients with different clinical pathologies. CST assessment is feasible in RCTs and has the potential of providing valuable outcomes to further support clinical decision making. However, due to the current moderate methodological quality of the included studies, further research is needed.",
"title": ""
},
{
"docid": "e8c7f00d775254bd6b8c5393397d05a6",
"text": "PURPOSE\nVirtual reality devices, including virtual reality head-mounted displays, are becoming increasingly accessible to the general public as technological advances lead to reduced costs. However, there are numerous reports that adverse effects such as ocular discomfort and headache are associated with these devices. To investigate these adverse effects, questionnaires that have been specifically designed for other purposes such as investigating motion sickness have often been used. The primary purpose of this study was to develop a standard questionnaire for use in investigating symptoms that result from virtual reality viewing. In addition, symptom duration and whether priming subjects elevates symptom ratings were also investigated.\n\n\nMETHODS\nA list of the most frequently reported symptoms following virtual reality viewing was determined from previously published studies and used as the basis for a pilot questionnaire. The pilot questionnaire, which consisted of 12 nonocular and 11 ocular symptoms, was administered to two groups of eight subjects. One group was primed by having them complete the questionnaire before immersion; the other group completed the questionnaire postviewing only. Postviewing testing was carried out immediately after viewing and then at 2-min intervals for a further 10 min.\n\n\nRESULTS\nPriming subjects did not elevate symptom ratings; therefore, the data were pooled and 16 symptoms were found to increase significantly. The majority of symptoms dissipated rapidly, within 6 min after viewing. Frequency of endorsement data showed that approximately half of the symptoms on the pilot questionnaire could be discarded because <20% of subjects experienced them.\n\n\nCONCLUSIONS\nSymptom questionnaires to investigate virtual reality viewing can be administered before viewing, without biasing the findings, allowing calculation of the amount of change from pre- to postviewing. However, symptoms dissipate rapidly and assessment of symptoms needs to occur in the first 5 min postviewing. Thirteen symptom questions, eight nonocular and five ocular, were determined to be useful for a questionnaire specifically related to virtual reality viewing using a head-mounted display.",
"title": ""
},
{
"docid": "f172ad1f92b81f5d8b19fc4687ce2853",
"text": "Research conclusions in the social sciences are increasingly based on meta-analysis, making questions of the accuracy of meta-analysis critical to the integrity of the base of cumulative knowledge. Both fixed effects (FE) and random effects (RE) meta-analysis models have been used widely in published meta-analyses. This article shows that FE models typically manifest a substantial Type I bias in significance tests for mean effect sizes and for moderator variables (interactions), while RE models do not. Likewise, FE models, but not RE models, yield confidence intervals for mean effect sizes that are narrower than their nominal width, thereby overstating the degree of precision in meta-analysis findings. This article demonstrates analytically that these biases in FE procedures are large enough to create serious distortions in conclusions about cumulative knowledge in the research literature. We therefore recommend that RE methods routinely be employed in meta-analysis in preference to FE methods.",
"title": ""
},
{
"docid": "be60fa48b6cc666272911b28a5061899",
"text": "In this paper, we offer a novel analysis of presuppositions, paving particular attention to the interaction between the knowledge resources that are required to interpret them. The analysis has two main features. First, we capture an analogy between presuppositions, anaphora and scope ambiguity (cf. van der Sandt 1992), by utilizing semantic underspecification (c£ Reyle 1993). Second, resolving this underspccification requires reasoning about how the presupposition is rhetorically connected to the discourse context. This has several consequences. First, since pragmatic information plays a role in computing the rhetorical relation, it also constrains the interpretation of presuppositions. Our account therefore provides a formal framework for analysing problematic data, which require pragmatic reasoning. Second, binding presuppositions to the context via rhetorical links replaces accommodating them, in the sense of adding them to the context (cf. Lewis 1979). The treatment of presupposition is thus generalized and integrated into the discourse update procedure. We formalize this approach in SDKT (Asher 1993; Lascarides & Asher 1993), and demonstrate that it provides a rich framework for interpreting presuppositions, where semantic and pragmatic constraints arc integrated. 1 I N T R O D U C T I O N The interpretation of a presupposition typically depends on the context in which it is made. Consider, for instance, sentences (i) vs. (2), adapted from van der Sandt (1992); the presupposition triggered by Jack's son (that Jack has a son) is implied by (1), but not by (2). (1) If baldness is hereditary, then Jack's son is bald. (2) If Jack has a son, then Jack's son is bald. The challenge for a formal semantic theory of presuppositions is to capture contextual effects such as these in an adequate manner. In particular, such a theory must account for why the presupposition in (1) projects from an embedded context, while the presupposition in (2) does not This is a special case of the Projection Problem; If a compound sentence S is made up of 240 The Semantics and Pragmatics of Presupposition constituent sentences 5, , . . . ,Sn , each with presuppositions P, ,... ,Pn, then what are the presuppositions of 5? Many recent accounts of presupposition that offer solutions to the Projection Problem have exploited the dynamics in dynamic semantics (e.g. Beaver 1996; Geurts 1996; Heim 1982; van der Sandt 1992). In these frameworks, assertional meaning is a relation between an input context (or information state) and an output context Presuppositions impose tests on the input context, which researchers have analysed in two ways: either the context must satisfy the presuppositions of the clause being interpreted (e.g. Beaver 1996; Heim 1982) or the presuppositions are anaphoric (e.g. van der Sandt 1992) and so must be bound to elements in the context But clauses carrying presuppositions can be felicitous even when the context fails these tests (e.g. (1)). A special purpose procedure known as accommodation is used to account for this (cf. Lewis 1979): if the context fails the presupposition test, then the presupposition is accommodated or added to it, provided various constraints are met (e.g. the result must be satisfiable). This combination of test and accommodation determines the projection of a presupposition. For example, in (1), the antecedent produces a context which fails the test imposed by the presupposition in the consequent (cither satisfaction or binding). So it is accommodated. 
Since it can be added to the context outside the scope of the conditional, it can project out from its embedding. In contrast, the antecedent in (2) ensures that the input context passes the presupposition test. So the presupposition is not accommodated, the input context is not changed, and the presupposition is not projected out from the conditional. Despite these successes, this approach has trouble with some simple predictions. Compare the following two dialogues (3abc) and (3abd): (3) a. A: Did you hear about John? b. B: No, what? c. A: He had an accident. A car hit him. d. A: He had an accident. ??The car hit him. The classic approach we just outlined would predict no difference between these two discourses and would find them both acceptable. But (3abd) is unacceptable. As it stands it lacks discourse coherence, while (3abc) does not; the presupposition of the car cannot be accommodated in (3abd). We will argue that the proper treatment of presuppositions in discourse, like a proper treatment of assertions, requires a notion of discourse coherence and must take into account the rhetorical function of both presupposed and asserted information. We will provide a formal account of presuppositions, which integrates constraints from compositional semantics and pragmatics in the required manner. We will start by examining van der Sandt's theory of presupposition satisfaction, since he offers the most detailed proposal concerning accommodation. We will highlight some difficulties, and offer a new proposal which attempts to overcome them. We will adopt van der Sandt's view that presuppositions are anaphoric, but give it some new twists. First, like other anaphoric expressions (e.g. anaphoric pronouns), presuppositions have an underspecified semantic content. Interpreting them in context involves resolving the underspecification. The second distinctive feature is the way we resolve underspecification. We assume a formal model of discourse semantics known as SDRT (e.g. Asher 1993; Lascarides & Asher 1993), where semantic underspecification in a proposition is resolved by reasoning about the way that proposition rhetorically connects to the discourse context. Thus, interpreting presuppositions becomes a part of discourse update in SDRT. This has three important consequences. The first concerns pragmatics. SDRT provides an explicit formal account of how semantic and pragmatic information interact when computing a rhetorical link between a proposition and its discourse context. This interaction will define the interpretation of presuppositions, and thus provide a richer source of constraints on presuppositions than standard accounts. This account of presuppositions will exploit pragmatic information over and above the clausal implicatures of the kind used in Gazdar's (1979) theory of presuppositions. We'll argue in section 2 that going beyond these implicatures is necessary to account for some of the data. The second consequence of interpreting presuppositions in SDRT concerns accommodation. In all previous dynamic theories of presupposition, accommodation amounts to adding, but not relating, the presupposed content to some accessible part of the context. This mechanism is peculiar to presuppositions; it does not feature in accounts of any other phenomena, including other anaphoric phenomena. In contrast, we model presuppositions entirely in terms of the SDRT discourse update procedure.
We replace the notion that presuppositions are added to the discourse context with the notion that they are rhetorically linked to it. Given that the theory of rhetorical structure in SDRT is used to model a wide range of linguistic phenomena when applied to assertions, it would be odd if presupposed information were to be entirely insensitive to rhetorical function. We will show that presupposed information is sensitive to rhetorical function and that the notion of accommodation should be replaced with a more constrained notion of discourse update. The third consequence concerns the compositional treatment of presupposition. Our approach affords what one could call a compositional treatment of presuppositions. The discourse semantics of SDRT is compositional upon discourse structure: the meaning of a discourse is a function of the meaning of its parts and how they are related to each other. In SDRT presuppositions, like assertions, generate underspecified but interpretable logical forms. The procedure for constructing the semantic representation of discourse takes these underspecified logical forms, resolves some of the underspecifications and relates them together by means of discourse relations representing their rhetorical function in the discourse. So presuppositions have a content that contributes to the content of the discourse as a whole. Indeed, presuppositions have no less a compositional treatment than assertions. Our discourse-based approach affords a wider perspective on presuppositions. Present dynamic accounts of presupposition have concentrated on phenomena like the Projection Problem. For us the Projection Problem amounts to an important special case, which applies to single-sentence discourses, of the more general 'discourse' problem: how do presuppositions triggered by elements of a multi-sentence discourse affect its structure and content? We aim to tackle this question here. And we claim that a rich notion of discourse structure, which utilizes rhetorical relations, is needed. While we believe that our discourse-based theory of presupposition is novel, we hasten to add that many authors on presupposition like Beaver (1996) and van der Sandt (1992) would agree with us that the treatment of presupposition must be integrated with a richer notion of discourse structure and discourse update than is available in standard dynamic semantics (e.g. Kamp & Reyle's DRT, Dynamic Predicate Logic or Update Semantics), because they believe that pragmatic information constrains the interpretation of presuppositions. We wish to extend their theories with this requisite notion of discourse structure. 2 VAN DER SANDT'S DYNAMIC ACCOUNT AND ITS PROBLEMS Van der Sandt (1992) views presuppositions as anaphors with semantic content. He develops this view within the framework of DRT (Kamp & Reyle 1993), in order to exploit its constraints on anaphoric antecedents. A presupposition can bind t",
"title": ""
},
{
"docid": "431e3826c8191834d08aae4f3e85e10b",
"text": "This paper presents an ultra low-power high-speed dynamic comparator. The proposed dynamic comparator is designed and simulated in a 65-nm CMOS technology. It dissipates 7 μW, 21.1 μW from a 0.9-V supply while operating at 1 GHz, 3 GHz sampling clock respectively. Proposed circuit can work up to 14 GHz. Ultra low power consumption is achieved by utilizing charge-steering concept and proper sizing. Monte Carlo simulations show that the input referred offset contribution of the internal devices is negligible compared to the effect of the input devices which results in 3.8 mV offset and 3 mV kick-back noise.",
"title": ""
},
{
"docid": "398016f6881bff1b3bf33a654fc30dff",
"text": "This paper explores the relationship between employee satisfaction, customer satisfaction and shareholders value theoretically. A conceptual model discusses about the relationship of these three variables. In first section the link between tow variables employee satisfaction and customer satisfaction is examined. In the second section the link between customer satisfaction and shareholders value is examined while in third section the link between customer satisfaction and shareholder’s value is examine. All these three models represent positive relationship among the related variables. Paper concludes that customer satisfaction and loyalty would continue as long as employees are satisfied and they deliver the required quality of goods and services. Customer satisfaction and employee’s satisfaction will increase shareholders value.",
"title": ""
},
{
"docid": "922dc8a3f4600dfdd34d828782c9f6aa",
"text": "In this work, we focus on the popular keyframe-based approach for video summarization. Keyframes represent important and diverse content of an input video and a summary is generated by temporally expanding the keyframes to key shots which are merged to a continuous dynamic video summary. In our approach, keyframes are selected from scenes that represent semantically similar content. For scene detection, we propose a simple yet effective dynamic extension of a video Bag-of-Words (BoW) method which provides over segmentation (high recall) for keyframe selection. For keyframe selection, we investigate two effective approaches: local region descriptors (visual content) and optical flow descriptors (motion content). We provide several interesting findings. 1) While scenes (visually similar content) can be effectively detected by region descriptors, optical flow (motion changes) provides better keyframes. 2) However, the suitable parameters of the motion descriptor based keyframe selection vary from one video to another and average performances remain low. To avoid more complex processing, we introduce a human-in-the-loop step where user selects keyframes produced by the three best methods. 3) Our human assisted and learning-free method achieves superior accuracy to learning-based methods and for many videos is on par with average human accuracy.",
"title": ""
},
{
"docid": "5b8865649df281c400e574d3a5bc3c3a",
"text": "Annually about 1,500 cases of cervical cancer are found in Indonesia, which made Indonesia as the country with the highest number of cervical cancer cases in the world. Cervical cancer screening and HPV testing are done with a Pap smear test. However, this examination requires a lot of time, costly and highly susceptible bias of the observer during the process of investigation and analysis. To overcome these problems, several studies have modeled the machine learning with a variety of approaches have been made. However, these studies are constrained by the limitation of the data amounts and the imbalanced data that caused by the different ratio of each case. This can lead to errors in the classification of the minority due to the tendency of the classification results that focus on the majority class. This study addressed the handling imbalance data on classification of cases Pap test results using the method of over-sampling. ADASYN-N and ADASYN-KNN algorithms were proposed as a development of ADASYN algorithm to handle datasets with nominal data types. This study included SMOTE-N algorithm to deal with the problem as comparison algorithm. As the results, ADASYN-KNN with the preference “0” gave the highest accuracy, precision, recall, and f-score of 95.38%; 95.583%; 95.383%; and 95.283%. The highest ROC area value was obtained with the ADASYN-KNN with preference “1” of 99.183%.",
"title": ""
},
{
"docid": "b181d6fd999fdcd8c5e5b52518998175",
"text": "Hydrogels are used to create 3D microenvironments with properties that direct cell function. The current study demonstrates the versatility of hyaluronic acid (HA)-based hydrogels with independent control over hydrogel properties such as mechanics, architecture, and the spatial distribution of biological factors. Hydrogels were prepared by reacting furan-modified HA with bis-maleimide-poly(ethylene glycol) in a Diels-Alder click reaction. Biomolecules were photopatterned into the hydrogel by two-photon laser processing, resulting in spatially defined growth factor gradients. The Young's modulus was controlled by either changing the hydrogel concentration or the furan substitution on the HA backbone, thereby decoupling the hydrogel concentration from mechanical properties. Porosity was controlled by cryogelation, and the pore size distribution, by the thaw temperature. The addition of galactose further influenced the porosity, pore size, and Young's modulus of the cryogels. These HA-based hydrogels offer a tunable platform with a diversity of properties for directing cell function, with applications in tissue engineering and regenerative medicine.",
"title": ""
},
{
"docid": "b42f4d645e2a7e24df676a933f414a6c",
"text": "Epilepsy is a common neurological condition which affects the central nervous system that causes people to have a seizure and can be assessed by electroencephalogram (EEG). Electroencephalography (EEG) signals reflect two types of paroxysmal activity: ictal activity and interictal paroxystic events (IPE). The relationship between IPE and ictal activity is an essential and recurrent question in epileptology. The spike detection in EEG is a difficult problem. Many methods have been developed to detect the IPE in the literature. In this paper we propose three methods to detect the spike in real EEG signal: Page Hinkley test, smoothed nonlinear energy operator (SNEO) and fractal dimension. Before using these methods, we filter the signal. The Singular Spectrum Analysis (SSA) filter is used to remove the noise in an EEG signal.",
"title": ""
},
{
"docid": "f497ae2f4e4188f483fe8ffa10d2e0e9",
"text": "Contemporary deep neural networks exhibit impressive results on practical problems. These networks generalize well although their inherent capacity may extend significantly beyond the number of training examples. We analyze this behavior in the context of deep, infinite neural networks. We show that deep infinite layers are naturally aligned with Gaussian processes and kernel methods, and devise stochastic kernels that encode the information of these networks. We show that stability results apply despite the size, offering an explanation for their empir-",
"title": ""
},
{
"docid": "b0e3249bbea278ceee2154aba5ea99d8",
"text": "Much of the current research in learning Bayesian Networks fails to eeectively deal with missing data. Most of the methods assume that the data is complete, or make the data complete using fairly ad-hoc methods; other methods do deal with missing data but learn only the conditional probabilities, assuming that the structure is known. We present a principled approach to learn both the Bayesian network structure as well as the conditional probabilities from incomplete data. The proposed algorithm is an iterative method that uses a combination of Expectation-Maximization (EM) and Imputation techniques. Results are presented on synthetic data sets which show that the performance of the new algorithm is much better than ad-hoc methods for handling missing data.",
"title": ""
},
{
"docid": "ea8685f27096f3e3e589ea8af90e78f5",
"text": "Acoustic data transmission is a technique to embed the data in a sound wave imperceptibly and to detect it at the receiver. This letter proposes a novel acoustic data transmission system designed based on the modulated complex lapped transform (MCLT). In the proposed system, data is embedded in an audio file by modifying the phases of the original MCLT coefficients. The data can be transmitted by playing the embedded audio and extracting it from the received audio. By embedding the data in the MCLT domain, the perceived quality of the resulting audio could be kept almost similar as the original audio. The system can transmit data at several hundreds of bits per second (bps), which is sufficient to deliver some useful short messages.",
"title": ""
},
{
"docid": "913b4f19a98ef3466b13d37ced3b2134",
"text": "In this paper we present DAML-S, a DAML+OIL ontology for describing the properties and capabilities of Web Services. Web Services – Web-accessible programs and devices – are garnering a great deal of interest from industry, and standards are emerging for low-level descriptions of Web Services. DAML-S complements this effort by providing Web Service descriptions at the application layer, describing what a service can do, and not just how it does it. In this paper we describe three aspects of our ontology: the service profile, the process model, and the service grounding. The paper focuses on the grounding, which connects our ontology with low-level XML-based descriptions of Web Services. 1 Services on the Semantic Web The Semantic Web [2] is rapidly becoming a reality through the development of Semantic Web markup languages such as DAML+OIL [9]. These markup languages enable the creation of arbitrary domain ontologies that support the unambiguous description of Web content. Web Services [15] – Web-accessible programs and devices – are among the most important resources on the Web, not only to provide information to a user, but to enable a user to effect change in the world. Web Services are garnering a great deal of interest from industry, and standards are being developed for low-level descriptions of Web Services. Languages such as WSDL (Web Service Description Language) provide a communication level description of the messages and protocols used by a Web Service. To complement this effort, our interest is in developing semantic markup that will sit at the application level above WSDL, and describe what is being sent across the wires and why, not just how it is being sent. We are developing a DAML+OIL ontology for Web Services, called DAML-S [5], with the objective of making Web Services computer-interpretable and hence enabling the following tasks [15]: discovery, i.e. locating Web Services (typically through a registry service) that provide a particular service and that adhere to specified constraints; invocation or activation and execution of an identified service by an agent or other service; interoperation, i.e. breaking down interoperability barriers through semantics, and the automatic insertion of message parameter translations between clients and services [10, 13, 22]; composition of new services through automatic selection, composition and interoperation of existing services [15, 14]; verification of service properties [19]; and execution monitoring, i.e. tracking the execution of complex or composite tasks performed by a service or a set of services, thus identifying failure cases, or providing explanations of different execution traces. To make use of a Web Service, a software agent needs a computer-interpretable description of the service, and the means by which it is accessed. This paper describes a collaborative effort by BBN Technologies, Carnegie Mellon University, Nokia, Stanford University, SRI International, and Yale University, to define the DAML-S Web Services ontology. An earlier version of the DAML-S specification is described in [5]; an updated version of DAML-S is presented at http://www.daml.org/services/daml-s/2001/10/. In this paper we briefly summarize and update this specification, and discuss the important problem of the grounding, i.e. how to translate what is being sent in a message to or from a service into how it is to be sent. In particular, we present the linking of DAML-S to the Web Services Description Language (WSDL). 
DAML-S complements WSDL, by providing an abstract or application level description lacking in WSDL. 2 An Upper Ontology for Services In DAML+OIL, abstract categories of entities, events, etc. are defined in terms of classes and properties. DAML-S defines a set of classes and properties, specific to the description of services, within DAML+OIL. The class Service is at the top of the DAML-S ontology. Service properties at this level are very general. The upper ontology for services is silent as to what the particular subclasses of Service should be, or even the conceptual basis for structuring this taxonomy, but it is expected that the taxonomy will be structured according to functional and domain differences and market needs. For example, one might imagine a broad subclass, B2C-transaction, which would encompass services for purchasing items from retail Web sites, tracking purchase status, establishing and maintaining accounts with the sites, and so on. The ontology of services provides two essential types of knowledge about a service, characterized by the questions: – What does the service require of agents, and provide for them? This is provided by the profile, a class that describes the capabilities and parameters of the service. We say that the class Service presents a ServiceProfile. – How does it work? The answer to this question is given in the model, a class that describes the workflow and possible execution paths of the service. Thus, the class Service is describedBy a ServiceModel The ServiceProfile provides information about a service that can be used by an agent to determine if the service meets its rough needs, and if it satisfies constraints such as security, locality, affordability, quality-requirements, etc. In contrast, the ServiceModel enables an agent to: (1) perform a more in-depth analysis of whether the service meets its needs; (2) compose service descriptions from multiple services to perform a specific task; (3) coordinate the activities of different agents; and (4) monitor the execution of the service. Generally speaking, the ServiceProfile provides the information needed for an agent to discover a service, whereas the ServiceModel provides enough information for an agent to make use of a service. In the following sections we discuss the service profile and the service model in greater detail, and introduce the service grounding, which describes how agents can communicate with and thus invoke the service.",
"title": ""
},
{
"docid": "bf11d9a1ef46b24f5d13dc119e715005",
"text": "This paper explores the relationship between the three beliefs about online shopping ie. perceived usefulness, perceived ease of use and perceived enjoyment and intention to shop online. A sample of 150 respondents was selected using a purposive sampling method whereby the respondents have to be Internet users to be included in the survey. A structured, self-administered questionnaire was used to elicit responses from these respondents. The findings indicate that perceived ease of use (β = 0.70, p<0.01) and perceived enjoyment (β = 0.32, p<0.05) were positively related to intention to shop online whereas perceived usefulness was not significantly related to intention to shop online. Furthermore, perceived ease of use (β = 0.78, p<0.01) was found to be a significant predictor of perceived usefulness. This goes to show that ease of use and enjoyment are the 2 main drivers of intention to shop online. Implications of the findings for developers are discussed further.",
"title": ""
},
{
"docid": "96b0acb9a28c8823e66e1384e8ec5f6f",
"text": "This paper presents a visual inspection system aimed at the automatic detection and classification of bare-PCB manufacturing errors. The interest of this CAE system lies in a twofold approach. On the one hand, we propose a modification of the subtraction method based on reference images that allows higher performance in the process of defect detection. On the other hand, this method is combined with a particle classification algorithm based on two measures of light intensity. As a result of this strategy, a machine vision application has been implemented to assist people in etching, inspection and verification tasks of PCBs.",
"title": ""
},
{
"docid": "ba74ebfc0e164b1e6d08c1ac63e49538",
"text": "This chapter develops a unified framework for the study of how network interactions can function as a mechanism for propagation and amplification of microeconomic shocks. The framework nests various classes of games over networks, models of macroeconomic risk originating from microeconomic shocks, and models of financial interactions. Under the assumption that shocks are small, the authors provide a fairly complete characterization of the structure of equilibrium, clarifying the role of network interactions in translating microeconomic shocks into macroeconomic outcomes. This characterization provides a ranking of different networks in terms of their aggregate performance. It also sheds light on several seemingly contradictory results in the prior literature on the role of network linkages in fostering systemic risk.",
"title": ""
},
{
"docid": "817afe747e4079d11fed37f8fb748de8",
"text": "Vehicle re-identification is a process of recognising a vehicle at different locations. It has attracted increasing amounts of attention due to the rapidly-increasing number of vehicles. Identification of two vehicles of the same model is even more difficult than the identification of identical twin humans. Further-more, there is no vehicle re-identification dataset that considers the interference caused by the presence of other vehicles of the same model. Therefore, to provide a fair comparison and facilitate future research into vehicle re-identification, this paper constructs a new dataset called the vehicle re-identification dataset-1 (1 VRID-1). VRID-1 contains 10,000 images captured in daytime of 1,000 individual vehicles of the ten most common vehicle models. For each vehicle model, there are 100 individual vehicles, and for each of these, there are ten images captured at different locations. The images in VRID-1 were captured by 326 surveillance cameras, and thus there are various vehicles poses and levels of illumination. Yet, it provides images of good enough quality for the evaluation of vehicle re-identification in a practical surveillance environment. In addition, according to the characteristics of vehicle morphology, this paper proposes a deep learning-based method to extract multi-dimensional robust features for vehicle re-identification using convolutional neural networks. Experimental results on the VRID-1 dataset demonstrate that it can deal with interference from vehicles of the same model, and is effective and practical for vehicle re-identification.",
"title": ""
},
{
"docid": "e0924a94e0bf614c9c53259f69ff7909",
"text": "In this paper, a unified approach is presented to transfer learning that addresses several source and target domain labelspace and annotation assumptions with a single model. It is particularly effective in handling a challenging case, where source and target label-spaces are disjoint, and outperforms alternatives in both unsupervised and semi-supervised settings. The key ingredient is a common representation termed Common Factorised Space. It is shared between source and target domains, and trained with an unsupervised factorisation loss and a graph-based loss. With a wide range of experiments, we demonstrate the flexibility, relevance and efficacy of our method, both in the challenging cases with disjoint label spaces, and in the more conventional cases such as unsupervised domain adaptation, where the source and target domains share the same label-sets.",
"title": ""
},
{
"docid": "0d22f929d72e44c6bf2902a753c8d79b",
"text": "Gödel's theorem may be demonstrated using arguments having an information-theoretic flavor. In such an approach it is possible to argue that if a theorem contains more information than a given set of axioms, then it is impossible for the theorem to be derived from the axioms. In contrast with the traditional proof based on the paradox of the liar, this new viewpoint suggests that the incompleteness phenomenon discovered by Gödel is natural and widespread rather than pathological and unusual.",
"title": ""
}
] |
scidocsrr
|
e376eac7fe51dcffbb30b965aee0839a
|
ICDAR 2005 text locating competition results
|
[
{
"docid": "0d9750c234aaa30f2a8851df1652902d",
"text": "This paper describes the robust reading competitions for ICDAR 2003. With the rapid growth in research over the last few years on recognizing text in natural scenes, there is an urgent need to establish some common benchmark datasets and gain a clear understanding of the current state of the art. We use the term ‘robust reading’ to refer to text images that are beyond the capabilities of current commercial OCR packages. We chose to break down the robust reading problem into three subproblems and run competitions for each stage, and also a competition for the best overall system. The subproblems we chose were text locating, character recognition and word recognition. By breaking down the problem in this way, we hoped to gain a better understanding of the state of the art in each of the subproblems. Furthermore, our methodology involved storing detailed results of applying each algorithm to each image in the datasets, allowing researchers to study in depth the strengths and weaknesses of each algorithm. The text-locating contest was the only one to have any entries. We give a brief description of each entry and present the results of this contest, showing cases where the leading entries succeed and fail. We also describe an algorithm for combining the outputs of the individual text locators and show how the combination scheme improves on any of the individual systems.",
"title": ""
}
] |
[
{
"docid": "72a490e38f09001ab8e05d0427542647",
"text": "Systems based on i–vectors represent the current state–of–the–art in text-independent speaker recognition. Unlike joint factor analysis JFA, which models both speaker and intersession subspaces separately, in the i–vector approach all the important variability is modeled in a single low-dimensional subspace. This paper is based on the observation that JFA estimates a more informative speaker subspace than the “total variability” i–vector subspace, because the latter is obtained by considering each training segment as belonging to a different speaker. We propose a speaker modeling approach that extracts a compact representation of a speech segment, similar to the speaker factors of JFA and to i–vectors, referred to as “e–vector.” Estimating the e–vector subspace follows a procedure similar to i–vector training, but produces a more accurate speaker subspace, as confirmed by the results of a set of tests performed on the NIST 2012 and 2010 Speaker Recognition Evaluations. Simply replacing the i–vectors with e–vectors we get approximately 10% average improvement of the C $_{\\text{primary}}$ cost function, using different systems and classifiers. It is worth noting that these performance gains come without any additional memory or computational costs with respect to the standard i–vector systems.",
"title": ""
},
{
"docid": "a0c1f5a7e283e1deaff38edff2d8a3b2",
"text": "BACKGROUND\nEarly detection of abused children could help decrease mortality and morbidity related to this major public health problem. Several authors have proposed tools to screen for child maltreatment. The aim of this systematic review was to examine the evidence on accuracy of tools proposed to identify abused children before their death and assess if any were adapted to screening.\n\n\nMETHODS\nWe searched in PUBMED, PsycINFO, SCOPUS, FRANCIS and PASCAL for studies estimating diagnostic accuracy of tools identifying neglect, or physical, psychological or sexual abuse of children, published in English or French from 1961 to April 2012. We extracted selected information about study design, patient populations, assessment methods, and the accuracy parameters. Study quality was assessed using QUADAS criteria.\n\n\nRESULTS\nA total of 2 280 articles were identified. Thirteen studies were selected, of which seven dealt with physical abuse, four with sexual abuse, one with emotional abuse, and one with any abuse and physical neglect. Study quality was low, even when not considering the lack of gold standard for detection of abused children. In 11 studies, instruments identified abused children only when they had clinical symptoms. Sensitivity of tests varied between 0.26 (95% confidence interval [0.17-0.36]) and 0.97 [0.84-1], and specificity between 0.51 [0.39-0.63] and 1 [0.95-1]. The sensitivity was greater than 90% only for three tests: the absence of scalp swelling to identify children victims of inflicted head injury; a decision tool to identify physically-abused children among those hospitalized in a Pediatric Intensive Care Unit; and a parental interview integrating twelve child symptoms to identify sexually-abused children. When the sensitivity was high, the specificity was always smaller than 90%.\n\n\nCONCLUSIONS\nIn 2012, there is low-quality evidence on the accuracy of instruments for identifying abused children. Identified tools were not adapted to screening because of low sensitivity and late identification of abused children when they have already serious consequences of maltreatment. Development of valid screening instruments is a pre-requisite before considering screening programs.",
"title": ""
},
{
"docid": "766b86047fd403586bd3339d46cf3036",
"text": "A hybrid phase shifted full bridge (PSFB) and LLC half bridge (HB) dc-dc converter for low-voltage and high-current output applications is proposed in this paper. The PSFB shares its lagging leg with the LLC-HB and their outputs are parallel connected. When the output current is small, the energy of LLC circuit in combination with the energy stored in the leakage inductance of PSFB's transformer can help the lagging leg switches to realize ZVS turn on, which can reduce voltage stress and avoid annoying voltage spikes over switches. For the power distribution at rated load, the PSFB converter undergoes most of the power while the LLC-HB converter working as an auxiliary part converts only a small portion of the total power. To improve the conversion efficiency, synchronous rectification technique for the PSFB dc-dc converter is implemented. The design principle is given in view of ZVS for lagging leg switches and low transconductance of LLC converter. The validity of the proposed converter has been verified by experimental results of a 2.5kW prototype.",
"title": ""
},
{
"docid": "c65c4582aecf22e63e88fc89c38f4bc1",
"text": "CONTEXT\nCognitive impairment in late-life depression (LLD) is highly prevalent, disabling, poorly understood, and likely related to long-term outcome.\n\n\nOBJECTIVES\nTo determine the characteristics and determinants of neuropsychological functioning LLD.\n\n\nDESIGN\nCross-sectional study of groups of LLD patients and control subjects.\n\n\nSETTING\nOutpatient, university-based depression research clinic.\n\n\nPARTICIPANTS\nOne hundred patients without dementia 60 years and older who met DSM-IV criteria for current episode of unipolar major depression (nonpsychotic) and 40 nondepressed, age- and education-equated control subjects.\n\n\nMAIN OUTCOME MEASURES\nA comprehensive neuropsychological battery.\n\n\nRESULTS\nRelative to control subjects, LLD patients performed poorer in all cognitive domains. More than half exhibited significant impairment (performance below the 10th percentile of the control group). Information processing speed and visuospatial and executive abilities were the most broadly and frequently impaired. The neuropsychological impairments were mediated almost entirely by slowed information processing (beta =.45-.80). Education (beta =.32) and ventricular atrophy (beta =.28) made additional modest contributions to variance in measures of language ability. Medical and vascular disease burden, apolipoprotein E genotype, and serum anticholinergicity did not contribute to variance in any cognitive domain.\n\n\nCONCLUSIONS\nLate-life depression is characterized by slowed information processing, which affects all realms of cognition. This supports the concept that frontostriatal dysfunction plays a key role in LLD. The putative role of some risk factors was validated (eg, advanced age, low education, depression severity), whereas others were not (eg, medical burden, age at onset of first depressive episode). Further studies of neuropsychological functioning in remitted LLD patients are needed to parse episode-related and persistent factors and to relate them to underlying neural dysfunction.",
"title": ""
},
{
"docid": "70ec2398526863c05b41866593214d0a",
"text": "Matrix factorization (MF) is one of the most popular techniques for product recommendation, but is known to suffer from serious cold-start problems. Item cold-start problems are particularly acute in settings such as Tweet recommendation where new items arrive continuously. In this paper, we present a meta-learning strategy to address item cold-start when new items arrive continuously. We propose two deep neural network architectures that implement our meta-learning strategy. The first architecture learns a linear classifier whose weights are determined by the item history while the second architecture learns a neural network whose biases are instead adjusted. We evaluate our techniques on the real-world problem of Tweet recommendation. On production data at Twitter, we demonstrate that our proposed techniques significantly beat the MF baseline and also outperform production models for Tweet recommendation.",
"title": ""
},
{
"docid": "90c3543eca7a689188725e610e106ce9",
"text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomenon: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. 
For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.",
"title": ""
},
{
"docid": "47386df9012dfb99aafb7bfd11ac5e66",
"text": "Multilevel modeling is a technique that has numerous potential applications for social and personality psychology. To help realize this potential, this article provides an introduction to multilevel modeling with an emphasis on some of its applications in social and personality psychology. This introduction includes a description of multilevel modeling, a rationale for this technique, and a discussion of applications of multilevel modeling in social and personality psychological research. Some of the subtleties of setting up multilevel analyses and interpreting results are presented, and software options are discussed. Once you know that hierarchies exist, you see them everywhere. (Kreft and de Leeuw, 1998, 1) Whether by design or nature, research in personality and social psychology and related disciplines such as organizational behavior increasingly involves what are often referred to as multilevel data. Sometimes, such data sets are referred to as ‘nested’ or ‘hierarchically nested’ because observations (also referred to as units of analysis) at one level of analysis are nested within observations at another level. For example, in a study of classrooms or work groups, individuals are considered to be nested within groups. Similarly, in diary-style studies, observations (e.g., diary entries) are nested within persons. What is particularly important for present purposes is that when you have multilevel data, you need to analyze them using techniques that take into account this nesting. As discussed below (and in numerous places, including Nezlek, 2001), the results of analyses of multilevel data that do not take into account the multilevel nature of the data may (or perhaps will) be inaccurate. This article is intended to acquaint readers with the basics of multilevel modeling. For researchers, it is intended to provide a basis for further study. I think that a lack of understanding of how to think in terms of hierarchies and a lack of understanding of how to analyze such data inhibits researchers from applying the ‘multilevel perspective’ to their work. For those simply interested in understanding what multilevel",
"title": ""
},
{
"docid": "bd4658aa8745aedf3aec0527ec7a2507",
"text": "Recommending new items for suitable users is an important yet challenging problem due to the lack of preference history for the new items. Noncollaborative user modeling techniques that rely on the item features can be used to recommend new items. However, they only use the past preferences of each user to provide recommendations for that user. They do not utilize information from the past preferences of other users, which can potentially be ignoring useful information. More recent factor models transfer knowledge across users using their preference information in order to provide more accurate recommendations. These methods learn a low-rank approximation for the preference matrix, which can lead to loss of information. Moreover, they might not be able to learn useful patterns given very sparse datasets. In this work, we present <scp>UFSM</scp>, a method for top-<i>n</i> recommendation of new items given binary user preferences. <scp>UFSM</scp> learns <b>U</b>ser-specific <b>F</b>eature-based item-<b>S</b>imilarity <b>M</b>odels, and its strength lies in combining two points: (1) exploiting preference information across all users to learn multiple global item similarity functions and (2) learning user-specific weights that determine the contribution of each global similarity function in generating recommendations for each user. <scp>UFSM</scp> can be considered as a sparse high-dimensional factor model where the previous preferences of each user are incorporated within his or her latent representation. This way, <scp>UFSM</scp> combines the merits of item similarity models that capture local relations among items and factor models that learn global preference patterns. A comprehensive set of experiments was conduced to compare <scp>UFSM</scp> against state-of-the-art collaborative factor models and noncollaborative user modeling techniques. Results show that <scp>UFSM</scp> outperforms other techniques in terms of recommendation quality. <scp>UFSM</scp> manages to yield better recommendations even with very sparse datasets. Results also show that <scp>UFSM</scp> can efficiently handle high-dimensional as well as low-dimensional item feature spaces.",
"title": ""
},
{
"docid": "fe1bc993047a95102f4331f57b1f9197",
"text": "Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performances with much less parameters.",
"title": ""
},
{
"docid": "9787ae39c27f9cfad2dbd29779bb5f36",
"text": "Compressive sensing (CS) techniques offer a framework for the detection and allocation of sparse signals with a reduced number of samples. Today, modern radar systems operate with high bandwidths—demanding high sample rates according to the Shannon–Nyquist theorem—and a huge number of single elements for phased array consumption and costs of radar systems. There is only a small number of publications addressing the application of CS to radar, leaving several open questions. This paper addresses some aspects as a further step to CS-radar by presenting generic system architectures and implementation considerations. It is not the aim of this paper to investigate numerically efficient algorithms but to point to promising applications as well as arising problems. Three possible applications are considered: pulse compression, radar imaging, and air space surveillance with array antennas. Some simulation results are presented and enriched by the evaluation of real data acquired by an experimental radar system of Fraunhofer FHR. & 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d5f97de97ac0c424da65ba068ae9c877",
"text": "Neural Machine Translation (NMT) has shown remarkable progress over the past few years, with production systems now being deployed to end-users. As the field is moving rapidly, it has become unclear which elements of NMT architectures have a significant impact on translation quality. In this work, we present a large-scale analysis of the sensitivity of NMT architectures to common hyperparameters. We report empirical results and variance numbers for several hundred experimental runs, corresponding to over 250,000 GPU hours on a WMT English to German translation task. Our experiments provide practical insights into the relative importance of factors such as embedding size, network depth, RNN cell type, residual connections, attention mechanism, and decoding heuristics. As part of this contribution, we also release an open-source NMT framework in TensorFlow to make it easy for others to reproduce our results and perform their own experiments.",
"title": ""
},
{
"docid": "fce925493fc9f7cbbe4c202e5e625605",
"text": "Topic models are a useful and ubiquitous tool for understanding large corpora. However, topic models are not perfect, and for many users in computational social science, digital humanities, and information studies—who are not machine learning experts—existing models and frameworks are often a “take it or leave it” proposition. This paper presents a mechanism for giving users a voice by encoding users’ feedback to topic models as correlations between words into a topic model. This framework, interactive topic modeling (itm), allows untrained users to encode their feedback easily and iteratively into the topic models. Because latency in interactive systems is crucial, we develop more efficient inference algorithms for tree-based topic models. We validate the framework both with simulated and real users.",
"title": ""
},
{
"docid": "2130ebf553c656ca819c6e3be7c83af7",
"text": "A wide difference of opinion exists about the content and composition of emotions. Advertising may influence an audience and their buying decisions about products and services. The objective of this study is to better conceptualize how women emotionally respond to emotional advertisements (EAs). The variant views are integrated into an ACE model, composed of subordinate levels of emotions (E), celebrity endorsements (C), and appeal drivers (A). This empirical study examines women’s emotional response using data from 240 Chinese women respondents. The study participants were invited to develop ACE mix based advertisements and fill out questionnaires. PLS-SEM analysis, a novel approach in ACE advertisement development and its applicability to consumer behavior, was used. The results show that showbiz celebrities expressing the emotion of happiness with music and color make the most effective ACE mix to influence the consumption behavior of women. The results are significantly mediated by attention levels and are widely applicable in the burgeoning advertising industry. The study also calls for further research with different ACE mixes in different contexts and on different audiences. It also opens doors for policy making and an appropriate understanding of women’s consumption behavior in the Chinese context.",
"title": ""
},
{
"docid": "da4a50c5539bb26ae917d294c83eea18",
"text": "An ultra-wide-band (UWB), stripline-fed Vivaldi antenna is characterized both numerically and experimentally. Three-dimensional far-field measurements are conducted and accurate antenna gain and efficiency as well as gain variation versus frequency in the boresight direction are measured. Using two Vivaldi antennas, a free-space communication link is set up. The impulse response of the cascaded antenna system is obtained using full-wave numerical electromagnetic time-domain simulations. These results are compared with frequency-domain measurements using a network analyzer. Full-wave numerical simulation of the free-space channel is performed using a two step process to circumvent the computationally intense simulation problem. Vector transfer function concept is used to obtain the overall system transfer function and the impulse response.",
"title": ""
},
{
"docid": "ed5a17f62e4024727538aba18f39fc78",
"text": "The extent to which people can focus attention in the face of irrelevant distractions has been shown to critically depend on the level and type of information load involved in their current task. The ability to focus attention improves under task conditions of high perceptual load but deteriorates under conditions of high load on cognitive control processes such as working memory. I review recent research on the effects of load on visual awareness and brain activity, including changing effects over the life span, and I outline the consequences for distraction and inattention in daily life and in clinical populations.",
"title": ""
},
{
"docid": "bb1a10dc8ad5bc953b6fbc2c1c3e0b59",
"text": "A Ka-band traveling-wave power divider/combiner, which is based on double ridge-waveguide couplers, is presented. The in-phase output, which is a challenge of the waveguide-based traveling-wave power divider, is achieved by optimizing the equivalent circuit of the proposed structure. The novel ridge-waveguide coupler has advantages of low loss, high power capability, and easy assembly. Finally, the proposed power divider/combiner is simulated, fabricated, and measured. A 15-dB measured return-loss bandwidth at the center frequency of 35 GHz is over 28%, a maximum transmission coefficients amplitude imbalance of ±1 dB is achieved, and the phase deviation is less than ± 12° from 32 to 39 GHz.",
"title": ""
},
{
"docid": "a41c9650da7ca29a51d310cb4a3c814d",
"text": "The analysis of resonant-type antennas based on the fundamental infinite wavelength supported by certain periodic structures is presented. Since the phase shift is zero for a unit-cell that supports an infinite wavelength, the physical size of the antenna can be arbitrary; the antenna's size is independent of the resonance phenomenon. The antenna's operational frequency depends only on its unit-cell and the antenna's physical size depends on the number of unit-cells. In particular, the unit-cell is based on the composite right/left-handed (CRLH) metamaterial transmission line (TL). It is shown that the CRLH TL is a general model for the required unit-cell, which includes a nonessential series capacitance for the generation of an infinite wavelength. The analysis and design of the required unit-cell is discussed based upon field distributions and dispersion diagrams. It is also shown that the supported infinite wavelength can be used to generate a monopolar radiation pattern. Infinite wavelength resonant antennas are realized with different number of unit-cells to demonstrate the infinite wavelength resonance",
"title": ""
},
{
"docid": "f0d906563c13da83cbe57b9186c53524",
"text": "In this paper, we propose a fast search algorithm for a large fuzzy database that stores iris codes or data with a similar binary structure. The fuzzy nature of iris codes and their high dimensionality render many modern search algorithms, mainly relying on sorting and hashing, inadequate. The algorithm that is used in all current public deployments of iris recognition is based on a brute force exhaustive search through a database of iris codes, looking for a match that is close enough. Our new technique, Beacon Guided Search (BGS), tackles this problem by dispersing a multitude of ldquobeaconsrdquo in the search space. Despite random bit errors, iris codes from the same eye are more likely to collide with the same beacons than those from different eyes. By counting the number of collisions, BGS shrinks the search range dramatically with a negligible loss of precision. We evaluate this technique using 632,500 iris codes enrolled in the United Arab Emirates (UAE) border control system, showing a substantial improvement in search speed with a negligible loss of accuracy. In addition, we demonstrate that the empirical results match theoretical predictions.",
"title": ""
},
{
"docid": "88968e939e9586666c83c13d4f640717",
"text": "The economics of two-sided markets or multi-sided platforms has emerged over the past decade as one of the most active areas of research in economics and strategy. The literature has constantly struggled, however, with a lack of agreement on a proper definition: for instance, some existing definitions imply that retail firms such as grocers, supermarkets and department stores are multi-sided platforms (MSPs). We propose a definition which provides a more precise notion of MSPs by requiring that they enable direct interactions between the multiple customer types which are affiliated to them. Several important implications of this new definition are derived. First, cross-group network effects are neither necessary nor sufficient for an organization to be a MSP. Second, our definition emphasizes the difference between MSPs and alternative forms of intermediation such as “re-sellers” which take control over the interactions between the various sides, or input suppliers which have only one customer group affiliated as opposed to multiple. We discuss a number of examples that illustrate the insights that can be derived by applying our definition. Third, we point to the economic considerations that determine where firms choose to position themselves on the continuum between MSPs and resellers, or MSPs and input suppliers. 1 Britta Kelley provided excellent research assistance. We are grateful to Elizabeth Altman, Tom Eisenmann and Marc Rysman for comments on an earlier draft. 2 Harvard University, ahagiu@hbs.edu. 3 National University of Singapore, jwright@nus.edu.sg.",
"title": ""
}
] |
scidocsrr
|
83ba53029606bb5c305173d111e5129d
|
Three Senses of "Argument"
|
[
{
"docid": "d7a348b092064acf2d6a4fd7d6ef8ee2",
"text": "Argumentation theory involves the analysis of naturally occurring argument, and one key tool employed to this end both in the academic community and in teaching critical thinking skills to undergraduates is argument diagramming. By identifying the structure of an argument in terms of its constituents and the relationships between them, it becomes easier to critically evaluate each part of an argument in turn. The task of analysis and diagramming, however, is labor intensive and often idiosyncratic, which can make academic exchange difficult. The Araucaria system provides an interface which supports the diagramming process, and then saves the result using AML, an open standard, designed in XML, for describing argument structure. Araucaria aims to be of use not only in pedagogical situations, but also in support of research activity. As a result, it has been designed from the outset to handle more advanced argumentation theoretic concepts such as schemes, which capture stereotypical patterns of reasoning. The software is also designed to be compatible with a number of applications under development, including dialogic interaction and online corpus provision. Together, these features, combined with its platform independence and ease of use, have the potential to make Araucaria a valuable resource for the academic community.",
"title": ""
}
] |
[
{
"docid": "2e98d7c876aa4875cc2048b687f97cdf",
"text": "In this paper, we present a pH sensing bandage constructed with pH sensing smart threads for chronic wound monitoring. The bandage is integrated with custom CMOS readout electronics for wireless monitoring and data transmission and is capable of continuously monitoring wound pH. Threads exhibit pH sensitivity of 54mV/pH and reach their steady state value within 2 minutes.",
"title": ""
},
{
"docid": "d98186e7dde031b99330be009b600e43",
"text": "This paper contributes a new high quality dataset for person re-identification, named \"Market-1501\". Generally, current datasets: 1) are limited in scale, 2) consist of hand-drawn bboxes, which are unavailable under realistic settings, 3) have only one ground truth and one query image for each identity (close environment). To tackle these problems, the proposed Market-1501 dataset is featured in three aspects. First, it contains over 32,000 annotated bboxes, plus a distractor set of over 500K images, making it the largest person re-id dataset to date. Second, images in Market-1501 dataset are produced using the Deformable Part Model (DPM) as pedestrian detector. Third, our dataset is collected in an open system, where each identity has multiple images under each camera. As a minor contribution, inspired by recent advances in large-scale image search, this paper proposes an unsupervised Bag-of-Words descriptor. We view person re-identification as a special task of image search. In experiment, we show that the proposed descriptor yields competitive accuracy on VIPeR, CUHK03, and Market-1501 datasets, and is scalable on the large-scale 500k dataset.",
"title": ""
},
{
"docid": "4c4dc04c0b53ad22fda80ee30663777e",
"text": "We present the first attempt at using sequence to sequence neural networks to model text simplification (TS). Unlike the previously proposed automated TS systems, our neural text simplification (NTS) systems are able to simultaneously perform lexical simplification and content reduction. An extensive human evaluation of the output has shown that NTS systems achieve almost perfect grammaticality and meaning preservation of output sentences and higher level of simplification than the state-of-the-art automated TS systems.",
"title": ""
},
{
"docid": "1f3985e9c8bbad7279ee7ebfda74a8a8",
"text": "Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of image classifiers? In this paper, we show empirically that adversarial examples mainly lie in the low probability regions of the training distribution, regardless of attack types and targeted models. Using statistical hypothesis testing, we find that modern neural density models are surprisingly good at detecting imperceptible image perturbations. Based on this discovery, we devised PixelDefend, a new approach that purifies a maliciously perturbed image by moving it back towards the distribution seen in the training data. The purified image is then run through an unmodified classifier, making our method agnostic to both the classifier and the attacking method. As a result, PixelDefend can be used to protect already deployed models and be combined with other model-specific defenses. Experiments show that our method greatly improves resilience across a wide variety of state-of-the-art attacking methods, increasing accuracy on the strongest attack from 63% to 84% for Fashion MNIST and from 32% to 70% for CIFAR-10.",
"title": ""
},
{
"docid": "a8ca6ef7b99cca60f5011b91d09e1b06",
"text": "When virtual teams need to establish trust at a distance, it is advantageous for them to use rich media to communicate. We studied the emergence of trust in a social dilemma game in four different communication situations: face-to-face, video, audio, and text chat. All three of the richer conditions were significant improvements over text chat. Video and audio conferencing groups were nearly as good as face-to-face, but both did show some evidence of what we term delayed trust (slower progress toward full cooperation) and fragile trust (vulnerability to opportunistic behavior)",
"title": ""
},
{
"docid": "3458d30f25a6748748a6e793d64a9ea2",
"text": "A Monte Carlo c^timization technique called \"simulated annealing\" is a descent algorithm modified by random ascent moves in order to escape local minima which are not global minima. Tlie levd of randomization is determined by a control parameter T, called temperature, which tends to zero according to a deterministic \"cooling schedule\". We give a simple necessary and suffident conditicm on the cooling sdiedule for the algorithm state to converge in probability to the set of globally minimiim cost states. In the spedal case that the cooling schedule has parameuic form r({) » c/log(l + / ) , the condition for convergence is that c be greater than or equal to the depth, suitably defined, of the deepest local minimum which is not a global minimum state.",
"title": ""
},
{
"docid": "e6107ac6d0450bb1ce4dab713e6dcffa",
"text": "Enterprises collect a large amount of personal data about their customers. Even though enterprises promise privacy to their customers using privacy statements or P3P, there is no methodology to enforce these promises throughout and across multiple enterprises. This article describes the Platform for Enterprise Privacy Practices (E-P3P), which defines technology for privacy-enabled management and exchange of customer data. Its comprehensive privacy-specific access control language expresses restrictions on the access to personal data, possibly shared between multiple enterprises. E-P3P separates the enterprise-specific deployment policy from the privacy policy that covers the complete life cycle of collected data. E-P3P introduces a viable separation of duty between the three “administrators” of a privacy system: The privacy officer designs and deploys privacy policies, the security officer designs access control policies, and the customers can give consent while selecting opt-in and opt-out choices. To appear in2nd Workshop on Privacy Enhancing Technologies , Lecture Notes in Computer Science. Springer Verlag, 2002. Copyright c © Springer",
"title": ""
},
{
"docid": "11c117d839be466c369274f021caba13",
"text": "Android smartphones are becoming increasingly popular. The open nature of Android allows users to install miscellaneous applications, including the malicious ones, from third-party marketplaces without rigorous sanity checks. A large portion of existing malwares perform stealthy operations such as sending short messages, making phone calls and HTTP connections, and installing additional malicious components. In this paper, we propose a novel technique to detect such stealthy behavior. We model stealthy behavior as the program behavior that mismatches with user interface, which denotes the user's expectation of program behavior. We use static program analysis to attribute a top level function that is usually a user interaction function with the behavior it performs. Then we analyze the text extracted from the user interface component associated with the top level function. Semantic mismatch of the two indicates stealthy behavior. To evaluate AsDroid, we download a pool of 182 apps that are potentially problematic by looking at their permissions. Among the 182 apps, AsDroid reports stealthy behaviors in 113 apps, with 28 false positives and 11 false negatives.",
"title": ""
},
{
"docid": "a9f6c0dfd884fb22e039b37e98f22fe0",
"text": "Image semantic segmentation is a fundamental problem and plays an important role in computer vision and artificial intelligence. Recent deep neural networks have improved the accuracy of semantic segmentation significantly. Meanwhile, the number of network parameters and floating point operations have also increased notably. The realworld applications not only have high requirements on the segmentation accuracy, but also demand real-time processing. In this paper, we propose a pyramid pooling encoder-decoder network named PPEDNet for both better accuracy and faster processing speed. Our encoder network is based on VGG16 and discards the fully connected layers due to their huge amounts of parameters. To extract context feature efficiently, we design a pyramid pooling architecture. The decoder is a trainable convolutional network for upsampling the output of the encoder, and finetuning the segmentation details. Our method is evaluated on CamVid dataset, achieving 7.214% mIOU accuracy improvement while reducing 17.9% of the parameters compared with the state-of-the-art algorithm.",
"title": ""
},
{
"docid": "8ce2aacbd523ed831727c9e87bb2774f",
"text": "Phantom perceptions arise almost universally in people who sustain sensory deafferentation, and in multiple sensory domains. The question arises 'why' the brain creates these false percepts in the absence of an external stimulus? The model proposed answers this question by stating that our brain works in a Bayesian way, and that its main function is to reduce environmental uncertainty, based on the free-energy principle, which has been proposed as a universal principle governing adaptive brain function and structure. The Bayesian brain can be conceptualized as a probability machine that constantly makes predictions about the world and then updates them based on what it receives from the senses. The free-energy principle states that the brain must minimize its Shannonian free-energy, i.e. must reduce by the process of perception its uncertainty (its prediction errors) about its environment. As completely predictable stimuli do not reduce uncertainty, they are not worthwhile of conscious processing. Unpredictable things on the other hand are not to be ignored, because it is crucial to experience them to update our understanding of the environment. Deafferentation leads to topographically restricted prediction errors based on temporal or spatial incongruity. This leads to an increase in topographically restricted uncertainty, which should be adaptively addressed by plastic repair mechanisms in the respective sensory cortex or via (para)hippocampal involvement. Neuroanatomically, filling in as a compensation for missing information also activates the anterior cingulate and insula, areas also involved in salience, stress and essential for stimulus detection. Associated with sensory cortex hyperactivity and decreased inhibition or map plasticity this will result in the perception of the false information created by the deafferented sensory areas, as a way to reduce increased topographically restricted uncertainty associated with the deafferentation. In conclusion, the Bayesian updating of knowledge via active sensory exploration of the environment, driven by the Shannonian free-energy principle, provides an explanation for the generation of phantom percepts, as a way to reduce uncertainty, to make sense of the world.",
"title": ""
},
{
"docid": "6d26e03468a9d9c5b9952a5c07743db3",
"text": "Graphs are a powerful tool to model structured objects, but it is nontrivial to measure the similarity between two graphs. In this paper, we construct a two-graph model to represent human actions by recording the spatial and temporal relationships among local features. We also propose a novel family of context-dependent graph kernels (CGKs) to measure similarity between graphs. First, local features are used as the vertices of the two-graph model and the relationships among local features in the intra-frames and inter-frames are characterized by the edges. Then, the proposed CGKs are applied to measure the similarity between actions represented by the two-graph model. Graphs can be decomposed into numbers of primary walk groups with different walk lengths and our CGKs are based on the context-dependent primary walk group matching. Taking advantage of the context information makes the correctly matched primary walk groups dominate in the CGKs and improves the performance of similarity measurement between graphs. Finally, a generalized multiple kernel learning algorithm with a proposed l12-norm regularization is applied to combine these CGKs optimally together and simultaneously train a set of action classifiers. We conduct a series of experiments on several public action datasets. Our approach achieves a comparable performance to the state-of-the-art approaches, which demonstrates the effectiveness of the two-graph model and the CGKs in recognizing human actions.",
"title": ""
},
{
"docid": "5d6bd34fb5fdb44950ec5d98e77219c3",
"text": "This paper describes an experimental setup and results of user tests focusing on the perception of temporal characteristics of vibration of a mobile device. The experiment consisted of six vibration stimuli of different length. We asked the subjects to score the subjective perception level in a five point Lickert scale. The results suggest that the optimal duration of the control signal should be between 50 and 200 ms in this specific case. Longer durations were perceived as being irritating.",
"title": ""
},
{
"docid": "7af729438f32c198d328a1ebc83d2eeb",
"text": "The development of natural language interfaces (NLI's) for databases has been a challenging problem in natural language processing (NLP) since the 1970's. The need for NLI's has become more pronounced due to the widespread access to complex databases now available through the Internet. A challenging problem for empirical NLP is the automated acquisition of NLI's from training examples. We present a method for integrating statistical and relational learning techniques for this task which exploits the strength of both approaches. Experimental results from three different domains suggest that such an approach is more robust than a previous purely logicbased approach. 1 I n t r o d u c t i o n We use the term semantic parsing to refer to the process of mapping a natural language sentence to a structured meaning representation. One interesting application of semantic parsing is building natural language interfaces for online databases. The need for such applications is growing since when information is delivered through the Internet, most users do not know the underlying database access language. An example of such an interface that we have developed is shown in Figure 1. Traditional (rationalist) approaches to constructing database interfaces require an expert to hand-craft an appropriate semantic parser (Woods, 1970; Hendrix et al., 1978). However, such hand-crafted parsers are time consllming to develop and suffer from problems with robustness and incompleteness even for domain specific applications. Nevertheless, very little research in empirical NLP has explored the task of automatically acquiring such interfaces from annotated training examples. The only exceptions of which we are aware axe a statistical approach to mapping airline-information queries into SQL presented in (Miller et al., 1996), a probabilistic decision-tree method for the same task described in (Kuhn and De Mori, 1995), and an approach using relational learning (a.k.a. inductive logic programming, ILP) to learn a logic-based semantic parser described in (Zelle and Mooney, 1996). The existing empirical systems for this task employ either a purely logical or purely statistical approach. The former uses a deterministic parser, which can suffer from some of the same robustness problems as rationalist methods. The latter constructs a probabilistic grammar, which requires supplying a sytactic parse tree as well as a semantic representation for each training sentence, and requires hand-crafting a small set of contextual features on which to condition the parameters of the model. Combining relational and statistical approaches can overcome the need to supply parse-trees and hand-crafted features while retaining the robustness of statistical parsing. The current work is based on the CHILL logic-based parser-acquisition framework (Zelle and Mooney, 1996), retaining access to the complete parse state for making decisions, but building a probabilistic relational model that allows for statistical parsing2 O v e r v i e w o f t h e A p p r o a c h This section reviews our overall approach using an interface developed for a U.S. Geography database (Geoquery) as a sample application (ZeUe and Mooney, 1996) which is available on the Web (see hl:tp://gvg, c s . u t e z a s , edu/users/n~./geo .html). 2.1 S e m a n t i c R e p r e s e n t a t i o n First-order logic is used as a semantic representation language. CHILL has also been applied to a restaurant database in which the logical form resembles SQL, and is translated",
"title": ""
},
{
"docid": "483b6f00bbd0bcefc945400912cdc428",
"text": "We intend to show that the optimal filter size of backwards convolution (or deconvolution (deconv)) for upsampling is closely related to the upscaling factor s. For conciseness, we consider a single-scale network (SS-Net(ord)) trained in an ordinary domain for upsampling a LR depth map with an upscaling factor s = 4. Figure S1 shows an overview of SS-Net(ord). Specifically, the first and third layers perform convolution, whereas the second layer performs backwards strided convolution. Activation function PReLU is used in SS-Net(ord) except the last layer. We set the network parameters: n1 = 64, n2 = 32, n3 = 1 and f1 = f3 = 5. We evaluate the super-resolving performance of SS-Net(ord) by using different deconv filter sizes f2×f2. Figure S2 shows the convergence curves using f2 ∈ (3, 9, 11). It can be shown that upsampling accuracy increases with f2 until it reaches 2s+1 i.e. f2 = 9. In a compromise between computation efficiency and upsampling performance, we choose deconv filter size to (2s+ 1)× (2s+ 1).",
"title": ""
},
{
"docid": "e17a1429f4ca9de808caaa842ee5a441",
"text": "Large scale visual understanding is challenging, as it requires a model to handle the widely-spread and imbalanced distribution of 〈subject, relation, object〉 triples. In real-world scenarios with large numbers of objects and relations, some are seen very commonly while others are barely seen. We develop a new relationship detection model that embeds objects and relations into two vector spaces where both discriminative capability and semantic affinity are preserved. We learn a visual and a semantic module that map features from the two modalities into a shared space, where matched pairs of features have to discriminate against those unmatched, but also maintain close distances to semantically similar ones. Benefiting from that, our model can achieve superior performance even when the visual entity categories scale up to more than 80, 000, with extremely skewed class distribution. We demonstrate the efficacy of our model on a large and imbalanced benchmark based of Visual Genome that comprises 53, 000+ objects and 29, 000+ relations, a scale at which no previous work has been evaluated at. We show superiority of our model over competitive baselines on the original Visual Genome dataset with 80, 000+ categories. We also show state-of-the-art performance on the VRD dataset and the scene graph dataset which is a subset of Visual Genome with 200 categories.",
"title": ""
},
{
"docid": "1e67d66e3b2a02c8acb8c4734dd7104b",
"text": "We introduce a new dataset of 293,008 high definition (1360 x 1360 pixels) fashion images paired with item descriptions provided by professional stylists. Each item is photographed from a variety of angles. We provide baseline results on 1) high-resolution image generation, and 2) image generation conditioned on the given text descriptions. We invite the community to improve upon these baselines. In this paper we also outline the details of a challenge that we are launching based upon this dataset.",
"title": ""
},
{
"docid": "1ce2a5e4aafed56039597524f59e2bcc",
"text": "Statistical mediation methods provide valuable information about underlying mediating psychological processes, but the ability to infer that the mediator variable causes the outcome variable is more complex than widely known. Researchers have recently emphasized how violating assumptions about confounder bias severely limits causal inference of the mediator to dependent variable relation. Our article describes and addresses these limitations by drawing on new statistical developments in causal mediation analysis. We first review the assumptions underlying causal inference and discuss three ways to examine the effects of confounder bias when assumptions are violated. We then describe four approaches to address the influence of confounding variables and enhance causal inference, including comprehensive structural equation models, instrumental variable methods, principal stratification, and inverse probability weighting. Our goal is to further the adoption of statistical methods to enhance causal inference in mediation studies.",
"title": ""
},
{
"docid": "9cfb82869b67633fd0f8a9394fad0c38",
"text": "A 1999 autopsy study of young adults in the US between the ages of 17 and 34 years of who died from accidents, suicides, and homicides confirmed that coronary artery disease (CAD) is ubiquitous in this age group. The disease process at this stage is too early to cause coronary events but heralds their onset in the decades to follow. These data are similar to those reported in an earlier postmortem analysis of US combat casualties during the Korean conflict, which found early CAD in nearly 80% of soldiers at an average age of 20 years. From these reports, which are 17 and 63 years old, respectively, it is clear that the foundation of CAD is established by the end of high school. Yet, medicine and public health leaders have not taken any steps to forestall or eliminate the early onset of this epidemic. Smoking cessation, a diet with lean meat and low-fat dairy, and exercise are generally advised, but cardiovascular disease (CVD) remains the number one killer of women and men in the US. The question is, why? Unfortunately, such dietary gestures do not treat the primary cause of CVD. The same can be said of commonly prescribed cardiovascular medications such as beta-blockers, angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, anticoagulants, aspirin, and cholesterol lowering drugs and medical interventions such as bare metal stents, drug-eluting stents, and coronary artery bypass surgery. It is increasingly a shameful national embarrassment for the United States to have constructed a billion-dollar cardiac healthcare industry surrounding an illness that does not even exist in more than half of the planet. If you, as a cardiologist or a cardiac surgeon, decided to hang your shingle in Okinawa, the Papua Highlands of New Guinea, rural China, Central Africa, or with the Tarahumara Indians of Northern Mexico, you better plan on a different profession because these countries do not have cardiovascular disease. The common thread is that they all thrive on whole food, plant-based nutrition (WFPBN) with minimal intake of animal products. By way of contrast, in the United States, we ignore CVD inception initiated by progressive endothelial injury, inflammatory oxidative stress, decreased nitric oxide production, foam cell formation, diminished endothelial progenitor cell production and development of plaque that may rupture and cause myocardial infarction or stroke. This series of events is primarily set in motion, and worsened, by the Western diet, which consists of added oils, dairy, meat, fish, fowl, and sugary foods and drinks—all of which injure endothelial function after ingestion, making food a major, if not the major cause of CAD. In overlooking disease causation, we implement therapies that have high morbidity and mortality. The side effects of a plethora of cardiovascular drugs include the risk of diabetes, neuromuscular pain, brain fog, liver injury, chronic cough, fatigue, hemorrhage, and erectile dysfunction. Surgical interventions are fatal for tens of thousands of patients annually. Each year approximately 1.2 million stents are placed with a 1% mortality rate, causing 12,000 deaths, and 500,000 bypass surgeries are performed with a 3% mortality rate, resulting in another 15,000 deaths. In total, 27,000 patients die annually from these two procedures. 
It is as though in ignoring this dairy, oil, and animal-based illness, we are wedded to providing futile attempts at temporary symptomatic relief with drugs and interventional therapy, which employs an unsuccessful mechanical approach to a biological illness with no hope for cure. Patients continue to consume the very foods that are destroying them. This disastrous illness and ineffective treatments need never happen if we follow the lessons of plant-based cultures where CVD is virtually nonexistent.",
"title": ""
},
{
"docid": "eb71ba791776ddfe0c1ddb3dc66f6e06",
"text": "An enterprise resource planning (ERP) is an enterprise-wide application software package that integrates all necessary business functions into a single system with a common database. In order to implement an ERP project successfully in an organization, it is necessary to select a suitable ERP system. This paper presents a new model, which is based on linguistic information processing, for dealing with such a problem. In the study, a similarity degree based algorithm is proposed to aggregate the objective information about ERP systems from some external professional organizations, which may be expressed by different linguistic term sets. The consistency and inconsistency indices are defined by considering the subject information obtained from internal interviews with ERP vendors, and then a linear programming model is established for selecting the most suitable ERP system. Finally, a numerical example is given to demonstrate the application of the",
"title": ""
}
] |
scidocsrr
|
a94d75d9f9ab0d00da601fd4cb4a52d8
|
Love & Loans: The Effect of Beauty and Personal Characteristics in Credit Markets
|
[
{
"docid": "7440cb90073c8d8d58e28447a1774b2c",
"text": "Common maxims about beauty suggest that attractiveness is not important in life. In contrast, both fitness-related evolutionary theory and socialization theory suggest that attractiveness influences development and interaction. In 11 meta-analyses, the authors evaluate these contradictory claims, demonstrating that (a) raters agree about who is and is not attractive, both within and across cultures; (b) attractive children and adults are judged more positively than unattractive children and adults, even by those who know them; (c) attractive children and adults are treated more positively than unattractive children and adults, even by those who know them; and (d) attractive children and adults exhibit more positive behaviors and traits than unattractive children and adults. Results are used to evaluate social and fitness-related evolutionary theories and the veracity of maxims about beauty.",
"title": ""
}
] |
[
{
"docid": "f400ca4fe8fc5c684edf1ae60e026632",
"text": "Driverless vehicles will be common on the road in a short time. They will have many impacts on the global transport market trends. One of the remarkable driverless vehicles impacts will be the laying aside of rail systems, because of several reasons, that is to say traffic congestions will be no more a justification for rail, rail will not be the best answer for disableds, air pollution of cars are more or less equal to air pollution of trains and the last but not least reason is that driverless cars are safer than trains.",
"title": ""
},
{
"docid": "9330c2308883a44b58bb18a7e9de7748",
"text": "In this paper, model predictive control (MPC) strategy is implemented to a GE9001E gas turbine power plant. A linear model is developed for the gas turbine using conventional mathematical models and ARX identification procedure. Also a process control model is identified for system outputs prediction. The controller is designed in order to adjust the exhaust gas temperature and the rotor speed by compressor inlet guide vane (IGV) position and fuel signals. The proposed system is simulated under load demand disturbances. It is shown that MPC controller can maintain the rotor speed and exhaust gas temperature more accurately in comprehension with both SpeedTronicTM control system and conventional PID control. Key-words: Gas turbine, Identification, ARX, Predictive control, Power plant, Modeling, Multivariable control, PID",
"title": ""
},
{
"docid": "c0296c76b81846a9125b399e6efd2238",
"text": "Three Guanella-type transmission line transformers (TLT) are presented: a coiled TLT on a GaAs substrate, a straight ferriteless TLT on a multilayer PCB and a straight hybrid TLT that employs semi-rigid coaxial cables and a ferrite. All three devices have 4:1 impedance transformation ratio, matching 12.5 /spl Omega/ to 50 /spl Omega/. Extremely broadband operation is achieved. A detailed description of the devices and their operational principle is given. General aspects of the design of TLT are discussed.",
"title": ""
},
{
"docid": "02bd3ca492a58e3007c115401419a8ca",
"text": "This paper presents a hybrid predictive model for forecasting intraday stock prices. The proposed model hybridizes the variational mode decomposition (VMD) which is a new multiresolution technique with backpropagation neural network (BPNN). The VMD is used to decompose price series into a sum of variational modes (VM). The extracted VM are used to train BPNN. Besides, particle swarm optimization (PSO) is employed for BPNN initial weights optimization. Experimental results from a set of six stocks show the superiority of the hybrid VMD–PSO–BPNN predictive model over the baseline predictive model eywords: ariational mode decomposition rtificial neural networks article swarm optimization ntraday stock price which is a PSO–BPNN model trained with past prices. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "825888e4befcbf6b492143a13928a34e",
"text": "Sentiment analysis is one of the prominent fields of data mining that deals with the identification and analysis of sentimental contents generally available at social media. Twitter is one of such social medias used by many users about some topics in the form of tweets. These tweets can be analyzed to find the viewpoints and sentiments of the users by using clustering-based methods. However, due to the subjective nature of the Twitter datasets, metaheuristic-based clustering methods outperforms the traditional methods for sentiment analysis. Therefore, this paper proposes a novel metaheuristic method (CSK) which is based on K-means and cuckoo search. The proposed method has been used to find the optimum cluster-heads from the sentimental contents of Twitter dataset. The efficacy of proposed method has been tested on different Twitter datasets and compared with particle swarm optimization, differential evolution, cuckoo search, improved cuckoo search, gauss-based cuckoo search, and two n-grams methods. Experimental results and statistical analysis validate that the proposed method outperforms the existing methods. The proposed method has theoretical implications for the future research to analyze the data generated through social networks/medias. This method has also very generalized practical implications for designing a system that can provide conclusive reviews on any social issues. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "968797472eeedd75ff9b89909bc4f84d",
"text": "In this paper, we investigate the issue of minimizing data center energy usage. In particular, we formulate a problem of virtual machine placement with the objective of minimizing the total power consumption of all the servers. To do this, we examine a CPU power consumption model and then incorporate the model into an mixed integer programming formulation. In order to find optimal or near-optimal solutions fast, we resolve two difficulties: non-linearity of the power model and integer decision variables. We first show how to linearize the problem, and then give a relaxation and iterative rounding algorithm. Computation experiments have shown that the algorithm can solve the problem much faster than the standard integer programming algorithms, and it consistently yields near-optimal solutions. We also provide a heuristic min-cost algorithm, which finds less optimal solutions but works even faster.",
"title": ""
},
{
"docid": "dfa890a87b2e5ac80f61c793c8bca791",
"text": "Reinforcement learning (RL) algorithms have traditionally been thought of as trial and error learning methods that use actual control experience to incrementally improve a control policy. Sutton's DYNA architecture demonstrated that RL algorithms can work as well using simulated experience from an environment model, and that the resulting computation was similar to doing one-step lookahead planning. Inspired by the literature on hierarchical planning, I propose learning a hierarchy of models of the environment that abstract temporal detail as a means of improving the scalability of RL algorithms. I present H-DYNA (Hierarchical DYNA), an extension to Sutton's DYNA architecture that is able to learn such a hierarchy of abstract models. H-DYNA di ers from hierarchical planners in two ways: rst, the abstract models are learned using experience gained while learning to solve other tasks in the same environment, and second, the abstract models can be used to solve stochastic control tasks. Simulations on a set of compositionally-structured navigation tasks show that H-DYNA can learn to solve them faster than conventional RL algorithms. The abstract models also serve as mechanisms for achieving transfer of learning across multiple tasks.",
"title": ""
},
{
"docid": "6650966d57965a626fd6f50afe6cd7a4",
"text": "This paper presents a generalized version of the linear threshold model for simulating multiple cascades on a network while allowing nodes to switch between them. The proposed model is shown to be a rapidly mixing Markov chain and the corresponding steady state distribution is used to estimate highly likely states of the cascades' spread in the network. Results on a variety of real world networks demonstrate the high quality of the estimated solution.",
"title": ""
},
{
"docid": "6981b51813c8e9914f8dc4b965a81fd4",
"text": "Search result diversification has been effectively employed to tackle query ambiguity, particularly in the context of web search. However, ambiguity can manifest differently in different search verticals, with ambiguous queries spanning, e.g., multiple place names, content genres, or time periods. In this paper, we empirically investigate the need for diversity across four different verticals of a commercial search engine, including web, image, news, and product search. As a result, we introduce the problem of aggregated search result diversification as the task of satisfying multiple information needs across multiple search verticals. Moreover, we propose a probabilistic approach to tackle this problem, as a natural extension of state-of-the-art diversification approaches. Finally, we generalise standard diversity metrics, such as ERR-IA and α-nDCG, into a framework for evaluating diversity across multiple search verticals.",
"title": ""
},
{
"docid": "9c799b4d771c724969be7b392697ebee",
"text": "Search engines need to model user satisfaction to improve their services. Since it is not practical to request feedback on searchers' perceptions and search outcomes directly from users, search engines must estimate satisfaction from behavioral signals such as query refinement, result clicks, and dwell times. This analysis of behavior in the aggregate leads to the development of global metrics such as satisfied result clickthrough (typically operationalized as result-page clicks with dwell time exceeding a particular threshold) that are then applied to all searchers' behavior to estimate satisfac-tion levels. However, satisfaction is a personal belief and how users behave when they are satisfied can also differ. In this paper we verify that searcher behavior when satisfied and dissatisfied is indeed different among individual searchers along a number of dimensions. As a result, we introduce and evaluate learned models of satisfaction for individual searchers and searcher cohorts. Through experimentation via logs from a large commercial Web search engine, we show that our proposed models can predict search satisfaction more accurately than a global baseline that applies the same satisfaction model across all users. Our findings have implications for the study and application of user satisfaction in search systems.",
"title": ""
},
{
"docid": "c53021193518ebdd7006609463bafbcc",
"text": "BACKGROUND AND OBJECTIVES\nSleep is important to child development, but there is limited understanding of individual developmental patterns of sleep, their underlying determinants, and how these influence health and well-being. This article explores the presence of various sleep patterns in children and their implications for health-related quality of life.\n\n\nMETHODS\nData were collected from the Longitudinal Study of Australian Children. Participants included 2926 young children followed from age 0 to 1 years to age 6 to 7 years. Data on sleep duration were collected every 2 years, and covariates (eg, child sleep problems, maternal education) were assessed at baseline. Growth mixture modeling was used to identify distinct longitudinal patterns of sleep duration and significant covariates. Linear regression examined whether the distinct sleep patterns were significantly associated with health-related quality of life.\n\n\nRESULTS\nThe results identified 4 distinct sleep duration patterns: typical sleepers (40.6%), initially short sleepers (45.2%), poor sleepers (2.5%), and persistent short sleepers (11.6%). Factors such as child sleep problems, child irritability, maternal employment, household financial hardship, and household size distinguished between the trajectories. The results demonstrated that the trajectories had different implications for health-related quality of life. For instance, persistent short sleepers had poorer physical, emotional, and social health than typical sleepers.\n\n\nCONCLUSIONS\nThe results provide a novel insight into the nature of child sleep and the implications of differing sleep patterns for health-related quality of life. The findings could inform the development of effective interventions to promote healthful sleep patterns in children.",
"title": ""
},
{
"docid": "7d8884a7f6137068f8ede464cf63da5b",
"text": "Object detection and localization is a crucial step for inspection and manipulation tasks in robotic and industrial applications. We present an object detection and localization scheme for 3D objects that combines intensity and depth data. A novel multimodal, scale- and rotation-invariant feature is used to simultaneously describe the object's silhouette and surface appearance. The object's position is determined by matching scene and model features via a Hough-like local voting scheme. The proposed method is quantitatively and qualitatively evaluated on a large number of real sequences, proving that it is generic and highly robust to occlusions and clutter. Comparisons with state of the art methods demonstrate comparable results and higher robustness with respect to occlusions.",
"title": ""
},
{
"docid": "aded7e5301d40faf52942cd61a1b54ba",
"text": "In this paper, a lower limb rehabilitation robot in sitting position is developed for patients with muscle weakness. The robot is a stationary based type which is able to perform various types of therapeutic exercises. For safe operation, the robot's joint is driven by two-stage cable transmission while the balance mechanism is used to reduce actuator size and transmission ratio. Control algorithms for passive, assistive and resistive exercises are designed to match characteristics of each therapeutic exercises and patients with different muscle strength. Preliminary experiments conducted with a healthy subject have demonstrated that the robot and the control algorithms are promising for lower limb rehabilitation task.",
"title": ""
},
{
"docid": "67a958a34084061e3bcd7964790879c4",
"text": "Researchers spent lots of time in searching published articles relevant to their project. Though having similar interest in projects researches perform individual and time overwhelming searches. But researchers are unable to control the results obtained from earlier search process, whereas they can share the results afterwards. We propose a research paper recommender system by enhancing existing search engines with recommendations based on preceding searches performed by others researchers that avert time absorbing searches. Top-k query algorithm retrieves best answers from a potentially large record set so that we find the most accurate records from the given record set that matches the filtering keywords. KeywordsRecommendation System, Personalization, Profile, Top-k query, Steiner Tree",
"title": ""
},
{
"docid": "7bd901463614409eee12d6968e4f4d19",
"text": "This study investigated the inactivation of two antibiotic resistance genes (ARGs)-sul1 and tetG, and the integrase gene of class 1 integrons-intI1 by chlorination, ultraviolet (UV), and ozonation disinfection. Inactivation of sul1, tetG, and intI1 underwent increased doses of three disinfectors, and chlorine disinfection achieved more inactivation of ARGs and intI1 genes (chlorine dose of 160 mg/L with contact time of 120 min for 2.98-3.24 log reductions of ARGs) than UV irradiation (UV dose of 12,477 mJ/cm(2) for 2.48-2.74 log reductions of ARGs) and ozonation disinfection (ozonation dose of 177.6 mg/L for 1.68-2.55 log reductions of ARGs). The 16S rDNA was more efficiently removed than ARGs by ozone disinfection. The relative abundance of selected genes (normalized to 16S rDNA) increased during ozonation and with low doses of UV and chlorine disinfection. Inactivation of sul1 and tetG showed strong positive correlations with the inactivation of intI1 genes (for sul1, R (2) = 0.929 with p < 0.01; for tetG, R (2) = 0.885 with p < 0.01). Compared to other technologies (ultraviolet disinfection, ozonation disinfection, Fenton oxidation, and coagulation), chlorination is an alternative method to remove ARGs from wastewater effluents. At a chlorine dose of 40 mg/L with 60 min contact time, the selected genes inactivation efficiency could reach 1.65-2.28 log, and the cost was estimated at 0.041 yuan/m(3).",
"title": ""
},
{
"docid": "d992300ed0d3e95c14eb115f0f3b09ac",
"text": "The purpose of this paper is to determine those factors that influence the adoption of internet banking services in Tunisia. A theoretical model is provided that conceptualizes and links different factors influencing the adoption of internet banking. A total of 253 respondents in Tunisia were sampled for responding: 95 were internet bank users, 158 were internet bank non users. Factor analyses and regression technique are employed to study the relationship. The results of the model tested clearly that use of internet banking in Tunisia is influenced most strongly by convenience, risk, security and prior internet knowledge. Only information on online banking did not affect intention to use internet banking service in Tunisia. The results also propose that demographic factors impact significantly internet banking behaviour, specifically, occupation and instruction. Finally, this paper suggests that an understanding the factors affecting intention to use internet banking is very important to the practitioners who plan and promote new forms of banking in the current competitive market.",
"title": ""
},
{
"docid": "1e59c6cc3dcc34ec26b912a5162635ed",
"text": "Finding clusters with widely differing sizes, shapes and densities in presence of noise and outliers is a challenging job. The DBSCAN is a versatile clustering algorithm that can find clusters with differing sizes and shapes in databases containing noise and outliers. But it cannot find clusters based on difference in densities. We extend the DBSCAN algorithm so that it can also detect clusters that differ in densities. Local densities within a cluster are reasonably homogeneous. Adjacent regions are separated into different clusters if there is significant change in densities. Thus the algorithm attempts to find density based natural clusters that may not be separated by any sparse region. Computational complexity of the algorithm is O(n log n).",
"title": ""
},
{
"docid": "fd2e7025271565927f43784f0c69c3fb",
"text": "In this paper, we have proposed a fingerprint orientation model based on 2D Fourier expansions (FOMFE) in the phase plane. The FOMFE does not require prior knowledge of singular points (SPs). It is able to describe the overall ridge topology seamlessly, including the SP regions, even for noisy fingerprints. Our statistical experiments on a public database show that the proposed FOMFE can significantly improve the accuracy of fingerprint feature extraction and thus that of fingerprint matching. Moreover, the FOMFE has a low-computational cost and can work very efficiently on large fingerprint databases. The FOMFE provides a comprehensive description for orientation features, which has enabled its beneficial use in feature-related applications such as fingerprint indexing. Unlike most indexing schemes using raw orientation data, we exploit FOMFE model coefficients to generate the feature vector. Our indexing experiments show remarkable results using different fingerprint databases",
"title": ""
},
{
"docid": "3819259ca40ee3c075e80bdf2ded4475",
"text": "BACKGROUND\nThe extant major psychiatric classifications DSM-IV, and ICD-10, are atheoretical and largely descriptive. Although this achieves good reliability, the validity of a medical diagnosis would be greatly enhanced by an understanding of risk factors and clinical manifestations. In an effort to group mental disorders on the basis of aetiology, five clusters have been proposed. This paper considers the validity of the fourth cluster, emotional disorders, within that proposal.\n\n\nMETHOD\nWe reviewed the literature in relation to 11 validating criteria proposed by a Study Group of the DSM-V Task Force, as applied to the cluster of emotional disorders.\n\n\nRESULTS\nAn emotional cluster of disorders identified using the 11 validators is feasible. Negative affectivity is the defining feature of the emotional cluster. Although there are differences between disorders in the remaining validating criteria, there are similarities that support the feasibility of an emotional cluster. Strong intra-cluster co-morbidity may reflect the action of common risk factors and also shared higher-order symptom dimensions in these emotional disorders.\n\n\nCONCLUSION\nEmotional disorders meet many of the salient criteria proposed by the Study Group of the DSM-V Task Force to suggest a classification cluster.",
"title": ""
},
{
"docid": "ed66f39bda7ccd5c76f64543b5e3abd6",
"text": "BACKGROUND\nLoeys-Dietz syndrome is a recently recognized multisystemic disorder caused by mutations in the genes encoding the transforming growth factor-beta receptor. It is characterized by aggressive aneurysm formation and vascular tortuosity. We report the musculoskeletal demographic, clinical, and imaging findings of this syndrome to aid in its diagnosis and treatment.\n\n\nMETHODS\nWe retrospectively analyzed the demographic, clinical, and imaging data of sixty-five patients with Loeys-Dietz syndrome seen at one institution from May 2007 through December 2008.\n\n\nRESULTS\nThe patients had a mean age of twenty-one years, and thirty-six of the sixty-five patients were less than eighteen years old. Previous diagnoses for these patients included Marfan syndrome (sixteen patients) and Ehlers-Danlos syndrome (two patients). Spinal and foot abnormalities were the most clinically important skeletal findings. Eleven patients had talipes equinovarus, and nineteen patients had cervical anomalies and instability. Thirty patients had scoliosis (mean Cobb angle [and standard deviation], 30 degrees +/- 18 degrees ). Two patients had spondylolisthesis, and twenty-two of thirty-three who had computed tomography scans had dural ectasia. Thirty-five patients had pectus excavatum, and eight had pectus carinatum. Combined thumb and wrist signs were present in approximately one-fourth of the patients. Acetabular protrusion was present in approximately one-third of the patients and was usually mild. Fourteen patients had previous orthopaedic procedures, including scoliosis surgery, cervical stabilization, clubfoot correction, and hip arthroplasty. Features of Loeys-Dietz syndrome that are important clues to aid in making this diagnosis include bifid broad uvulas, hypertelorism, substantial joint laxity, and translucent skin.\n\n\nCONCLUSIONS\nPatients with Loeys-Dietz syndrome commonly present to the orthopaedic surgeon with cervical malformations, spinal and foot deformities, and findings in the craniofacial and cutaneous systems.\n\n\nLEVEL OF EVIDENCE\nTherapeutic Level IV. See Instructions to Authors for a complete description of levels of evidence.",
"title": ""
}
] |
scidocsrr
|
77531cd807b13082ff5912268e0218cd
|
Real-time raindrop detection based on cellular neural networks for ADAS
|
[
{
"docid": "1eab78b995fadb69692b254f41a5028e",
"text": "Raindrops on vehicles' windshields can degrade the performance of in-vehicle vision systems. In this paper, we present a novel approach that detects and removes raindrops in the captured image when using a single in-vehicle camera. When driving in light or moderate rainy conditions, raindrops appear as small circlets on the windshield in each image frame. Therefore, by analyzing the color, texture and shape characteristics of raindrops in images, we first identify possible raindrop candidates in the regions of interest (ROI), which are small locally salient droplets in a raindrop saliency map. Then, a learning-based verification algorithm is proposed to reduce the number of false alarms (i.e., clear regions mis-detected as raindrops). Finally, we fill in the regions occupied by the raindrops using image inpainting techniques. Numerical experiments indicate that the proposed method is capable of detecting and reducing raindrops in various rain and road scenarios. We also quantify the improvement offered by the proposed method over the state-of-the-art algorithms aimed at the same problem and the benefits to the in-vehicle vision applications like clear path detection.",
"title": ""
}
] |
[
{
"docid": "bf1bd9bdbe8e4a93e814ea9dc91e6eb3",
"text": "A new robust matching method is proposed. The progressive sample consensus (PROSAC) algorithm exploits the linear ordering defined on the set of correspondences by a similarity function used in establishing tentative correspondences. Unlike RANSAC, which treats all correspondences equally and draws random samples uniformly from the full set, PROSAC samples are drawn from progressively larger sets of top-ranked correspondences. Under the mild assumption that the similarity measure predicts correctness of a match better than random guessing, we show that PROSAC achieves large computational savings. Experiments demonstrate it is often significantly faster (up to more than hundred times) than RANSAC. For the derived size of the sampled set of correspondences as a function of the number of samples already drawn, PROSAC converges towards RANSAC in the worst case. The power of the method is demonstrated on wide-baseline matching problems.",
"title": ""
},
{
"docid": "2200edf1e0be6412c6c0ecfbb487ca2f",
"text": "Algebraic effects are an interesting way to structure effectful programs and offer new modularity properties. We present the Scala library Effekt, which is implemented in terms of a monad for multi-prompt delimited continuations and centered around capability passing. This makes the newly proposed feature of implicit function types a perfect fit for the syntax of our library. Basing the library design on capability passing and a polymorphic embedding of effect handlers furthermore opens up interesting dimensions of extensibility. Preliminary benchmarks comparing Effekt with an established library suggest significant speedups.",
"title": ""
},
{
"docid": "2b3c507c110452aa54c046f9e7f9200d",
"text": "Word embeddings are crucial to many natural language processing tasks. The quality of embeddings relies on large nonnoisy corpora. Arabic dialects lack large corpora and are noisy, being linguistically disparate with no standardized spelling. We make three contributions to address this noise. First, we describe simple but effective adaptations to word embedding tools to maximize the informative content leveraged in each training sentence. Second, we analyze methods for representing disparate dialects in one embedding space, either by mapping individual dialects into a shared space or learning a joint model of all dialects. Finally, we evaluate via dictionary induction, showing that two metrics not typically reported in the task enable us to analyze our contributions’ effects on low and high frequency words. In addition to boosting performance between 2-53%, we specifically improve on noisy, low frequency forms without compromising accuracy on high frequency forms.",
"title": ""
},
{
"docid": "46a8022eea9ed7bcfa1cd8041cab466f",
"text": "In this paper, a bidirectional converter with a uniform controller for Vehicle to grid (V2G) application is designed. The bidirectional converter consists of two stages one is ac-dc converter and second is dc-dc converter. For ac-dc converter bipolar modulation is used. Two separate controller systems are designed for converters which follow active and reactive power commands from grid. Uniform controller provides reactive power support to the grid. The charger operates in two quadrants I and IV. There are three modes of operation viz. charging only operation, charging-capacitive operation and charging-inductive operation. During operation under these three operating modes vehicle's battery is not affected. The whole system is tested using MATLAB/SIMULINK.",
"title": ""
},
{
"docid": "e900bd24f24f5b6c4ec1cab2fac5ce45",
"text": "The recent emergence of lab-on-a-chip (LoC) technology has led to a paradigm shift in many healthcare-related application areas, e.g., point-of-care clinical diagnostics, high-throughput sequencing, and proteomics. A promising category of LoCs is digital microfluidic (DMF)-based biochips, in which nanoliter-volume fluid droplets are manipulated on a 2-D electrode array. A key challenge in designing such chips and mapping lab-bench protocols to a LoC is to carry out the dilution process of biochemical samples efficiently. As an optimization and automation technique, we present a dilution/mixing algorithm that significantly reduces the production of waste droplets. This algorithm takes O(n) time to compute at most n sequential mix/split operations required to achieve any given target concentration with an error in concentration factor less than [1/(2n)]. To implement the algorithm, we design an architectural layout of a DMF-based LoC consisting of two O(n)-size rotary mixers and O(n) storage electrodes. Simulation results show that the proposed technique always yields nonnegative savings in the number of waste droplets and also in the total number of input droplets compared to earlier methods.",
"title": ""
},
{
"docid": "5168f7f952d937460d250c44b43f43c0",
"text": "This letter presents the design of a coplanar waveguide (CPW) circularly polarized antenna for the central frequency 900 MHz, it comes in handy for radio frequency identification (RFID) short-range reading applications within the band of 902-928 MHz where the axial ratio of proposed antenna model is less than 3 dB. The proposed design has an axial-ratio bandwidth of 36 MHz (4%) and impedance bandwidth of 256 MHz (28.5%).",
"title": ""
},
{
"docid": "ad6bb165620dafb7dcadaca91c9de6b0",
"text": "This study was conducted to analyze the short-term effects of violent electronic games, played with or without a virtual reality (VR) device, on the instigation of aggressive behavior. Physiological arousal (heart rate (HR)), priming of aggressive thoughts, and state hostility were also measured to test their possible mediation on the relationship between playing the violent game (VG) and aggression. The participants--148 undergraduate students--were randomly assigned to four treatment conditions: two groups played a violent computer game (Unreal Tournament), and the other two a non-violent game (Motocross Madness), half with a VR device and the remaining participants on the computer screen. In order to assess the game effects the following instruments were used: a BIOPAC System MP100 to measure HR, an Emotional Stroop task to analyze the priming of aggressive and fear thoughts, a self-report State Hostility Scale to measure hostility, and a competitive reaction-time task to assess aggressive behavior. The main results indicated that the violent computer game had effects on state hostility and aggression. Although no significant mediation effect could be detected, regression analyses showed an indirect effect of state hostility between playing a VG and aggression.",
"title": ""
},
{
"docid": "84e781fbd41d209567474143c3251edc",
"text": "We review recent neuroimaging research on the experiences of romantic love and sexual desire, focusing specifically on the question of links and distinctions between the brain regions involved in these experiences. We conclude that although love and desire are associated with distinct patterns of brain activation, certain regions (such as the caudate, putamen, insula, and anterior cingulate cortex) have shown activation during both experiences, raising the possibility that certain types of love and desire may be relatively distinct from one another (on an experiential and neural level) whereas others are more interconnected. We outline several promising directions for future research on this possibility, for example testing for differences between the neurobiological bases of different types of sexual desires (i.e., those directed toward strangers versus romantic partners; those which are more “responsive” versus automatic, and those which are more or less dependent on an emotional context). We also discuss future research directions related to the study of female sexual desire and orientation.",
"title": ""
},
{
"docid": "93bca110f5551d8e62dc09328de83d4f",
"text": "It is well established that emotion plays a key role in human social and economic decision making. The recent literature on emotion regulation (ER), however, highlights that humans typically make efforts to control emotion experiences. This leaves open the possibility that decision effects previously attributed to acute emotion may be a consequence of acute ER strategies such as cognitive reappraisal and expressive suppression. In Study 1, we manipulated ER of laboratory-induced fear and disgust, and found that the cognitive reappraisal of these negative emotions promotes risky decisions (reduces risk aversion) in the Balloon Analogue Risk Task and is associated with increased performance in the prehunch/hunch period of the Iowa Gambling Task. In Study 2, we found that naturally occurring negative emotions also increase risk aversion in Balloon Analogue Risk Task, but the incidental use of cognitive reappraisal of emotions impedes this effect. We offer evidence that the increased effectiveness of cognitive reappraisal in reducing the experience of emotions underlies its beneficial effects on decision making.",
"title": ""
},
{
"docid": "98f246414ecd65785be73b6b95fbd2b4",
"text": "The past few years have seen an enormous progress in the performance of Boolean satisfiability (SAT) solvers. Despite the worst-case exponential run time of all known algorithms, satisfiability solvers are increasingly leaving their mark as a general-purpose tool in areas as diverse as software and hardware verification [29–31, 228], automatic test pattern generation [138, 221], planning [129, 197], scheduling [103], and even challenging problems from algebra [238]. Annual SAT competitions have led to the development of dozens of clever implementations of such solvers [e. and the creation of an extensive suite of real-world instances as well as challenging hand-crafted benchmark problems [cf. 115]. Modern SAT solvers provide a \" black-box \" procedure that can often solve hard structured problems with over a million variables and several million constraints. In essence, SAT solvers provide a generic combinatorial reasoning and search platform. The underlying representational formalism is propositional logic. However, the full potential of SAT solvers only becomes apparent when one considers their use in applications that are not normally viewed as propositional reasoning tasks. For example, consider AI planning, which is a PSPACE-complete problem. By restricting oneself to polynomial size plans, one obtains an NP-complete reasoning problem , easily encoded as a Boolean satisfiability problem, which can be given to a SAT solver [128, 129]. In hardware and software verification, a similar strategy leads one to consider bounded model checking, where one places a bound on the length of possible error traces one is willing to consider [30]. Another example of a recent application of SAT solvers is in computing stable models used in the answer set programming paradigm, a powerful knowledge representation and reasoning approach [81]. In these applications—planning, verification, and answer set programming—the translation into a propositional representation (the \" SAT encoding \") is done automatically",
"title": ""
},
{
"docid": "3a5ef0db1fbbebd7c466a3b657e5e173",
"text": "Fully homomorphic encryption is faced with two problems now. One is candidate fully homomorphic encryption schemes are few. Another is that the efficiency of fully homomorphic encryption is a big question. In this paper, we propose a fully homomorphic encryption scheme based on LWE, which has better key size. Our main contributions are: (1) According to the binary-LWE recently, we choose secret key from binary set and modify the basic encryption scheme proposed in Linder and Peikert in 2010. We propose a fully homomorphic encryption scheme based on the new basic encryption scheme. We analyze the correctness and give the proof of the security of our scheme. The public key, evaluation keys and tensored ciphertext have better size in our scheme. (2) Estimating parameters for fully homomorphic encryption scheme is an important work. We estimate the concert parameters for our scheme. We compare these parameters between our scheme and Bra12 scheme. Our scheme have public key and private key that smaller by a factor of about logq than in Bra12 scheme. Tensored ciphertext in our scheme is smaller by a factor of about log2q than in Bra12 scheme. Key switching matrix in our scheme is smaller by a factor of about log3q than in Bra12 scheme.",
"title": ""
},
{
"docid": "227753713b9e41d5b83d53a6502a2a40",
"text": "Example-based single image super-resolution (SR) has recently shown outcomes with high reconstruction performance. Several methods based on neural networks have successfully introduced techniques into SR problem. In this paper, we propose a three-dimensional (3D) convolutional neural network to generate high-resolution (HR) brain image from its input low-resolution (LR) with the help of patches of other HR brain images. Our work demonstrates the need of fitting data and network parameters for 3D brain MRI.",
"title": ""
},
{
"docid": "9667bb7f2c6a45130b4c0be372605023",
"text": "Statistical analysis of abstract paintings is becoming an increasingly important tool for understanding the creative process of visual artists. We present a multifractal analysis of ‘poured’ paintings from the Abstract Expressionism and Les Automatistes movements. The box-counting dimension (D0) is measured for the analyzed paintings, as is the associated multifractal depth DD1⁄4D0 DN, where DN is the asymptotic dimension. We investigate the role of depth by plotting a ‘phase space’ diagram that examines the relationship between D0 and DN. We show that, although the D0 and DN values vary between individual paintings, the collection of paintings exhibit a similar depth, suggesting a shared visual characteristic for this genre. We discuss the visual implications of this result. & 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "94f364c7b1f4254db525c3c6108a9e4c",
"text": "A planar radar sensor for automotive application is presented. The design comprises a fully integrated transceiver multi-chip module (MCM) and an electronically steerable microstrip patch array. The antenna feed network is based on a modified Rotman-lens. An extended angular coverage together with an adapted resolution allows for the integration of automatic cruise control (ACC), precrash sensing and cut-in detection within a single 77 GHz frontend. For ease of manufacturing the interconnects between antenna and MCM rely on a mixed wire bond and flip-chip approach. The concept is validated by laboratory radar measurements.",
"title": ""
},
{
"docid": "eeea9af1b940a2b12609df165d975b32",
"text": "A G-Band planar stubbed branch-line balun is designed and fabricated in 3μm thick BCB technology. This topology of the balun does not need thru-substrate via hole or thin-film resistor which makes it extremely suitable for realization on single-layer high-resistivity substrates commonly used at millimeter-wave or post-processed BCB layers on top of standard semi-insulating wafers. The design is simulated and validated by measurements. Measurement results on two fabricated back-to-back baluns show better than 10 dB input and output return loss and 3.2 dB insertion loss from 140 to 220 GHz.",
"title": ""
},
{
"docid": "9d55947637b358c4dc30d7ba49885472",
"text": "Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models. CCS Concepts •Information systems→ Retrieval models and ranking;",
"title": ""
},
{
"docid": "7aa3aa25923108f266a7855f4b238a78",
"text": "Collection of multiple annotations from the crowd workers is useful for diverse applications. In this paper, the problem of obtaining the final judgment from such crowd-based annotations has been addressed in an unsupervised way using a biclustering-based approach. Results on multiple datasets show that the proposed approach is competitively better than others, without even using the entire dataset.",
"title": ""
},
{
"docid": "1e8cc72ad8ee3368b092aa5a96e782f9",
"text": "This paper presents a newly developed implementation of remote message passing, remote actor creation and actor migration in SALSA Lite. The new runtime and protocols are implemented using SALSA Lite’s lightweight actors and asynchronous message passing, and provide significant performance improvements over SALSA version 1.1.5. Actors in SALSA Lite can now be local, the default lightweight actor implementation; remote, actors which can be referenced remotely and send remote messages, but cannot migrate; or mobile, actors that can be remotely referenced, send remote messages and migrate to different locations. Remote message passing in SALSA Lite is twice as fast, actor migration is over 17 times as fast, and remote actor creation is two orders of magnitude faster. Two new benchmarks for remote message passing and migration show this implementation has strong scalability in terms of concurrent actor message passing and migration. The costs of using remote and mobile actors are also investigated. For local message passing, remote actors resulted in no overhead, and mobile actors resulted in 30% overhead. Local creation of remote and mobile actors was more expensive with 54% overhead for remote actors and 438% for mobile actors. In distributed scenarios, creating mobile actors remotely was only 6% slower than creating remote actors remotely, and passing messages between mobile actors on different theaters was only 5.55% slower than passing messages between remote actors. These results highlight the benefits of our approach in implementing the distributed runtime over a core set of efficient lightweight actors, as well as provide insights into the costs of implementing remote message passing and actor mobility.",
"title": ""
},
{
"docid": "86aeb2e62f01f64cc73f6d2ff764e1d7",
"text": "This paper aims to make two contributions to the sustainability transitions literature, in particular the Geels and Schot (2007. Res. Policy 36(3), 399) transition pathways typology. First, it reformulates and differentiates the typology through the lens of endogenous enactment, identifying the main patterns for actors, formal institutions, and technologies. Second, it suggests that transitions may shift between pathways, depending on struggles over technology deployment and institutions. Both contributions are demonstrated with a comparative analysis of unfolding low-carbon electricity transitions in Germany and the UK between 1990–2014. The analysis shows that Germany is on a substitution pathway, enacted by new entrants deploying small-scale renewable electricity technologies (RETs), while the UK is on a transformation pathway, enacted by incumbent actors deploying large-scale RETs. Further analysis shows that the German transition has recently shifted from a ‘stretch-and-transform’ substitution pathway to a ‘fit-and-conform’ pathway, because of a fightback from utilities and altered institutions. It also shows that the UK transition moved from moderate to substantial incumbent reorientation, as government policies became stronger. Recent policy changes, however, substantially downscaled UK renewables support, which is likely to shift the transition back to weaker reorientation. © 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).",
"title": ""
}
] |
scidocsrr
|
791963b7aebca08af28760f9721c6b7b
|
Ringing out fault tolerance. A new ring network for superior low-cost dependability
|
[
{
"docid": "cf369f232ba023e675f322f42a20b2c2",
"text": "Ring topology local area networks (LAN’s) using the “buffer insertion” access method have as yet received relatively little attention. In this paper we present details of a LAN of this.-, called SILK-system for integrated local communication (in German, “Kommunikation”). Sections of the paper describe the synchronous transmission technique of the ring channel, the time-multiplexed access of eight ports at each node, the “braided” interconnection for bypassing defective nodes, and the role of interface transformation units and user interfaces, as well as some traffic,characteristics and reliability aspects. SILK’S modularity and open system concept are demonstrated by the already implemented applications such as distributed text editing, local telephone or teletex exchange, and process control in a TV studio.",
"title": ""
}
] |
[
{
"docid": "663068bb3ff4d57e1609b2a337a34d7f",
"text": "Automated optic disk (OD) detection plays an important role in developing a computer aided system for eye diseases. In this paper, we propose an algorithm for the OD detection based on structured learning. A classifier model is trained based on structured learning. Then, we use the model to achieve the edge map of OD. Thresholding is performed on the edge map, thus a binary image of the OD is obtained. Finally, circle Hough transform is carried out to approximate the boundary of OD by a circle. The proposed algorithm has been evaluated on three public datasets and obtained promising results. The results (an area overlap and Dices coefficients of 0.8605 and 0.9181, respectively, an accuracy of 0.9777, and a true positive and false positive fraction of 0.9183 and 0.0102) show that the proposed method is very competitive with the state-of-the-art methods and is a reliable tool for the segmentation of OD.",
"title": ""
},
{
"docid": "03ee8359c7d0115897c89d36531ed971",
"text": "No wonder you activities are, reading will be always needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading acs surgery principles and practice is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for the other peoples with those meaningful benefits.",
"title": ""
},
{
"docid": "1c3af13e29fc8a1cea5ee821d62b86f0",
"text": "Cellular and 802.11 WiFi are compelling options for mobile Internet connectivity. The goal of our work is to understand the performance afforded by each of these technologies in diverse environments and use conditions. In this paper, we compare and contrast cellular and WiFi performance using crowd-sourced data from Speedtest.net. Our study considers spatio-temporal performance (upload/download throughput and latency) using over 3 million user-initiated tests from iOS and Android apps in 15 different metro areas collected over a 15 week period. Our basic performance comparisons show that (i) WiFi provides better absolute download/upload throughput, and a higher degree of consistency in performance; (ii) WiFi networks generally deliver lower absolute latency, but the consistency in latency is often better with cellular access; (iii) throughput and latency vary widely depending on the particular access type e.g., HSPA, EVDO, LTE, WiFi, etc.) and service provider. More broadly, our results show that performance consistency for cellular and WiFi is much lower than has been reported for wired broadband. Temporal analysis shows that average performance for cell and WiFi varies with time of day, with the best performance for large metro areas coming at non-peak hours. Spatial analysis shows that performance is highly variable across metro areas, but that there are subregions that offer consistently better performance for cell or WiFi. Comparisons between metro areas show that larger areas provide higher throughput and lower latency than smaller metro areas, suggesting where ISPs have focused their deployment efforts. Finally, our analysis reveals diverse performance characteristics resulting from the rollout of new cell access technologies and service differences among local providers.",
"title": ""
},
{
"docid": "caae0254ea28dad0abf2f65fcadc7971",
"text": "Deregulation within the financial service industries and the widespread acceptance of new technologies is increasing competition in the finance marketplace. Central to the business strategy of every financial service company is the ability to retain existing customers and reach new prospective customers. Data mining is adopted to play an important role in these efforts. In this paper, we present a data mining approach for analyzing retailing bank customer attrition. We discuss the challenging issues such as highly skewed data, time series data unrolling, leaker field detection etc, and the procedure of a data mining project for the attrition analysis for retailing bank customers. We use lift as a proper measure for attrition analysis and compare the lift of data mining models of decision tree, boosted naïve Bayesian network, selective Bayesian network, neural network and the ensemble of classifiers of the above methods. Some interesting findings are reported. Our research work demonstrates the effectiveness and efficiency of data mining in attrition analysis for retailing bank.",
"title": ""
},
{
"docid": "9683bb5dc70128d3981b10503cf3261a",
"text": "This article describes the historical context, technical challenges, and main implementation techniques used by VMware Workstation to bring virtualization to the x86 architecture in 1999. Although virtual machine monitors (VMMs) had been around for decades, they were traditionally designed as part of monolithic, single-vendor architectures with explicit support for virtualization. In contrast, the x86 architecture lacked virtualization support, and the industry around it had disaggregated into an ecosystem, with different vendors controlling the computers, CPUs, peripherals, operating systems, and applications, none of them asking for virtualization. We chose to build our solution independently of these vendors.\n As a result, VMware Workstation had to deal with new challenges associated with (i) the lack of virtualization support in the x86 architecture, (ii) the daunting complexity of the architecture itself, (iii) the need to support a broad combination of peripherals, and (iv) the need to offer a simple user experience within existing environments. These new challenges led us to a novel combination of well-known virtualization techniques, techniques from other domains, and new techniques.\n VMware Workstation combined a hosted architecture with a VMM. The hosted architecture enabled a simple user experience and offered broad hardware compatibility. Rather than exposing I/O diversity to the virtual machines, VMware Workstation also relied on software emulation of I/O devices. The VMM combined a trap-and-emulate direct execution engine with a system-level dynamic binary translator to efficiently virtualize the x86 architecture and support most commodity operating systems. By relying on x86 hardware segmentation as a protection mechanism, the binary translator could execute translated code at near hardware speeds. The binary translator also relied on partial evaluation and adaptive retranslation to reduce the overall overheads of virtualization.\n Written with the benefit of hindsight, this article shares the key lessons we learned from building the original system and from its later evolution.",
"title": ""
},
{
"docid": "5039733d1fd5361820489549bfd2669f",
"text": "Reporting the economic burden of oral diseases is important to evaluate the societal relevance of preventing and addressing oral diseases. In addition to treatment costs, there are indirect costs to consider, mainly in terms of productivity losses due to absenteeism from work. The purpose of the present study was to estimate the direct and indirect costs of dental diseases worldwide to approximate the global economic impact. Estimation of direct treatment costs was based on a systematic approach. For estimation of indirect costs, an approach suggested by the World Health Organization's Commission on Macroeconomics and Health was employed, which factored in 2010 values of gross domestic product per capita as provided by the International Monetary Fund and oral burden of disease estimates from the 2010 Global Burden of Disease Study. Direct treatment costs due to dental diseases worldwide were estimated at US$298 billion yearly, corresponding to an average of 4.6% of global health expenditure. Indirect costs due to dental diseases worldwide amounted to US$144 billion yearly, corresponding to economic losses within the range of the 10 most frequent global causes of death. Within the limitations of currently available data sources and methodologies, these findings suggest that the global economic impact of dental diseases amounted to US$442 billion in 2010. Improvements in population oral health may imply substantial economic benefits not only in terms of reduced treatment costs but also because of fewer productivity losses in the labor market.",
"title": ""
},
{
"docid": "d19c2d11d871ff1e1afa94f87054fdc5",
"text": "Catastrophic forgetting occurs when a neural network loses the information learned in a previous task after training on subsequent tasks. This problem remains a hurdle for artificial intelligence systems with sequential learning capabilities. In this paper, we propose a task-based hard attention mechanism that preserves previous tasks’ information without affecting the current task’s learning. A hard attention mask is learned concurrently to every task, through stochastic gradient descent, and previous masks are exploited to condition such learning. We show that the proposed mechanism is effective for reducing catastrophic forgetting, cutting current rates by 45 to 80%. We also show that it is robust to different hyperparameter choices, and that it offers a number of monitoring capabilities. The approach features the possibility to control both the stability and compactness of the learned knowledge, which we believe makes it also attractive for online learning or network compression applications.",
"title": ""
},
{
"docid": "d369d3bd03f54e9cb912f53cdaf51631",
"text": "This paper presents a method to detect table regions in document images by identifying the column and row line-separators and their properties. The method employs a run-length approach to identify the horizontal and vertical lines present in the input image. From each group of intersecting horizontal and vertical lines, a set of 26 low-level features are extracted and an SVM classifier is used to test if it belongs to a table or not. The performance of the method is evaluated on a heterogeneous corpus of French, English and Arabic documents that contain various types of table structures and compared with that of the Tesseract OCR system.",
"title": ""
},
{
"docid": "5510f5e1bcf352e3219097143200531f",
"text": "Research aimed at correcting words in text has focused on three progressively more difficult problems:(1) nonword error detection; (2) isolated-word error correction; and (3) context-dependent work correction. In response to the first problem, efficient pattern-matching and n-gram analysis techniques have been developed for detecting strings that do not appear in a given word list. In response to the second problem, a variety of general and application-specific spelling correction techniques have been developed. Some of them were based on detailed studies of spelling error patterns. In response to the third problem, a few experiments using natural-language-processing tools or statistical-language models have been carried out. This article surveys documented findings on spelling error patterns, provides descriptions of various nonword detection and isolated-word error correction techniques, reviews the state of the art of context-dependent word correction techniques, and discusses research issues related to all three areas of automatic error correction in text.",
"title": ""
},
{
"docid": "9a7c915803c84bc2270896bd82b4162d",
"text": "In this paper we present a voice command and mouth gesture based robot command interface which is capable of controlling three degrees of freedom. The gesture set was designed in order to avoid head rotation and translation, and thus relying solely in mouth movements. Mouth segmentation is performed by using the normalized a* component, as in [1]. The gesture detection process is carried out by a Gaussian Mixture Model (GMM) based classifier. After that, a state machine stabilizes the system response by restricting the number of possible movements depending on the initial state. Voice commands are modeled using a Hidden Markov Model (HMM) isolated word recognition scheme. The interface was designed taking into account the specific pose restrictions found in the DaVinci Assisted Surgery command console.",
"title": ""
},
{
"docid": "a6fb181666e5dfcd1579aa18f46e1d57",
"text": "Accelerometry offers a practical and low cost method of objectively monitoring human movements, and has particular applicability to the monitoring of free-living subjects. Accelerometers have been used to monitor a range of different movements, including gait, sit-to-stand transfers, postural sway and falls. They have also been used to measure physical activity levels and to identify and classify movements performed by subjects. This paper reviews the use of accelerometer-based systems in each of these areas. The scope and applicability of such systems in unsupervised monitoring of human movement are considered. The different systems and monitoring techniques can be integrated to provide a more comprehensive system that is suitable for measuring a range of different parameters in an unsupervised monitoring context with free-living subjects. An integrated approach is described in which a single, waist-mounted accelerometry system is used to monitor a range of different parameters of human movement in an unsupervised setting.",
"title": ""
},
{
"docid": "5b1578b4fd38b5c7a24459df6f9aeff3",
"text": "This study aims to investigate the connection between concepts of masculinity and militarism in Egyptian online press. In order to avoid reification of stereotypical, orientalist constructions of Arab men as villains or oppressors, this study does not look at men in the typical sense, either as individuals or as a group, but as gendered subjects, socially constructed through performativity. Furthermore, this study is grounded in material derived from four months of ethnographic field studies in Cairo, exploring the understanding of masculinities by Egyptian media audiences and media professionals. The purpose of this study, as such, is to locate ‘militarised masculinity’ within Egyptian online press; to explore how militarism and notions of masculinity become entangled and what role the media plays in perpetuating this entanglement. Seeing how the military is an institution of state-sanctioned violence, combined with a rigid, normative representation of men and a shunning of ‘deviant masculinities’ in media, it is possible that a celebration of (ideal) masculinity as militaristic is related to issues of violence against women, and persecution of non-heterosexual men. In a time when media personalities are actively working with the police to ‘hunt’ gay men, and publicly expose those seen as deviating from ‘traditional’ or ‘hegemonic’ masculinity, it is today even more important to examine Egyptian media, in regards to minority and gender representation as well as hegemonic discourse.",
"title": ""
},
{
"docid": "3b27f02b96f079e57714ef7c2f688b48",
"text": "Polycystic ovary syndrome (PCOS) affects 5-10% of women in reproductive age and is characterized by oligo/amenorrhea, androgen excess, insulin resistance, and typical polycystic ovarian morphology. It is the most common cause of infertility secondary to ovulatory dysfunction. The underlying etiology is still unknown but is believed to be multifactorial. Insulin-sensitizing compounds such as inositol, a B-complex vitamin, and its stereoisomers (myo-inositol and D-chiro-inositol) have been studied as an effective treatment of PCOS. Administration of inositol in PCOS has been shown to improve not only the metabolic and hormonal parameters but also ovarian function and the response to assisted-reproductive technology (ART). Accumulating evidence suggests that it is also capable of improving folliculogenesis and embryo quality and increasing the mature oocyte yield following ovarian stimulation for ART in women with PCOS. In the current review, we collate the evidence and summarize our current knowledge on ovarian stimulation and ART outcomes following inositol treatment in women with PCOS undergoing in vitro fertilization (IVF) and/or intracytoplasmic sperm injection (ICSI).",
"title": ""
},
{
"docid": "2dc084d063ec1610917e09921e145c24",
"text": "This article describes an assistant interface to design and produce pop-up cards. A pop-up card is a piece of folded paper from which a three-dimensional structure pops up when opened. The authors propose an interface to assist the user in the design and production of a pop-up card. During the design process, the system examines whether the parts protrude from the card or whether the parts collide with one another when the card is closed. The user can concentrate on the design activity because the error occurrence and the error resolution are continuously fed to the user in real time. The authors demonstrate the features of their system by creating two pop-up card examples and perform an informal preliminary user study, showing that automatic protrusion and collision detection are effective in the design process. DOI: 10.4018/jcicg.2010070104 International Journal of Creative Interfaces and Computer Graphics, 1(2), 40-50, July-December 2010 41 Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. start over from the beginning. This process requires a lot of time, energy, and paper. Design and simulation in a computer help both nonprofessionals and professionals to design a pop-up card, eliminate the boring repetition, and save time. Glassner (1998, 2002) proposed methods for designing a pop-up card on a computer. He introduced several simple pop-up mechanisms and described how to use these mechanisms, how to simulate the position of vertices as an intersecting point of three spheres, how to check whether the structure sticks out beyond the cover or if a collision occurs during opening, and how to generate templates. His work is quite useful in designing simple pop-up cards. In this article, we build on Glassner’s pioneering work and introduce several innovative aspects. We add two new mechanisms based on the V-fold: the box and the cube. We present a detailed description of the interface for design, which Glassner did not describe in any detail. In addition, our system provides real-time error detection feedback during editing operations by examining whether parts protrude from the card when closed or whether they collide with one another during opening and closing. Finally, we report on an informal preliminary user study of our system involving four inexperienced users.",
"title": ""
},
{
"docid": "6101f4582b1ad0b0306fe3d513940fab",
"text": "Although a great deal of media attention has been given to the negative effects of playing video games, relatively less attention has been paid to the positive effects of engaging in this activity. Video games in health care provide ample examples of innovative ways to use existing commercial games for health improvement or surgical training. Tailor-made games help patients be more adherent to treatment regimens and train doctors how to manage patients in different clinical situations. In this review, examples in the scientific literature of commercially available and tailor-made games used for education and training with patients and medical students and doctors are summarized. There is a history of using video games with patients from the early days of gaming in the 1980s, and this has evolved into a focus on making tailor-made games for different disease groups, which have been evaluated in scientific trials more recently. Commercial video games have been of interest regarding their impact on surgical skill. More recently, some basic computer games have been developed and evaluated that train doctors in clinical skills. The studies presented in this article represent a body of work outlining positive effects of playing video games in the area of health care.",
"title": ""
},
{
"docid": "b7bf3ae864ce774874041b0e5308323f",
"text": "This paper examines factors that influence prices of most common five cryptocurrencies such Bitcoin, Ethereum, Dash, Litecoin, and Monero over 20102018 using weekly data. The study employs ARDL technique and documents several findings. First, cryptomarket-related factors such as market beta, trading volume, and volatility appear to be significant determinant for all five cryptocurrencies both in shortand long-run. Second, attractiveness of cryptocurrencies also matters in terms of their price determination, but only in long-run. This indicates that formation (recognition) of the attractiveness of cryptocurrencies are subjected to time factor. In other words, it travels slowly within the market. Third, SP500 index seems to have weak positive long-run impact on Bitcoin, Ethereum, and Litcoin, while its sign turns to negative losing significance in short-run, except Bitcoin that generates an estimate of -0.20 at 10% significance level. Lastly, error-correction models for Bitcoin, Etherem, Dash, Litcoin, and Monero show that cointegrated series cannot drift too far apart, and converge to a longrun equilibrium at a speed of 23.68%, 12.76%, 10.20%, 22.91%, and 14.27% respectively.",
"title": ""
},
{
"docid": "cec6e899c23dd65881f84cca81205eb0",
"text": "A fuzzy graph (f-graph) is a pair G : ( σ, μ) where σ is a fuzzy subset of a set S and μ is a fuzzy relation on σ. A fuzzy graph H : ( τ, υ) is called a partial fuzzy subgraph of G : (σ, μ) if τ (u) ≤ σ(u) for every u and υ (u, v) ≤ μ(u, v) for every u and v . In particular we call a partial fuzzy subgraph H : ( τ, υ) a fuzzy subgraph of G : ( σ, μ ) if τ (u) = σ(u) for every u in τ * and υ (u, v) = μ(u, v) for every arc (u, v) in υ*. A connected f-graph G : ( σ, μ) is a fuzzy tree(f-tree) if it has a fuzzy spannin g subgraph F : (σ, υ), which is a tree, where for all arcs (x, y) not i n F there exists a path from x to y in F whose strength is more than μ(x, y). A path P of length n is a sequence of disti nct nodes u0, u1, ..., un such that μ(ui−1, ui) > 0, i = 1, 2, ..., n and the degree of membershi p of a weakest arc is defined as its strength. If u 0 = un and n≥ 3, then P is called a cycle and a cycle P is called a fuzzy cycle(f-cycle) if it cont ains more than one weakest arc . The strength of connectedness between two nodes x and y is efined as the maximum of the strengths of all paths between x and y and is denot ed by CONNG(x, y). An x − y path P is called a strongest x − y path if its strength equal s CONNG(x, y). An f-graph G : ( σ, μ) is connected if for every x,y in σ ,CONNG(x, y) > 0. In this paper, we offer a survey of selected recent results on fuzzy graphs.",
"title": ""
},
{
"docid": "644f61bc267d3dcb915f8c36c1584605",
"text": "This paper discusses the design and development of an experimental tabletop robot called \"Haru\" based on design thinking methodology. Right from the very beginning of the design process, we have brought an interdisciplinary team that includes animators, performers and sketch artists to help create the first iteration of a distinctive anthropomorphic robot design based on a concept that leverages form factor with functionality. Its unassuming physical affordance is intended to keep human expectation grounded while its actual interactive potential stokes human interest. The meticulous combination of both subtle and pronounced mechanical movements together with its stunning visual displays, highlight its affective affordance. As a result, we have developed the first iteration of our tabletop robot rich in affective potential for use in different research fields involving long-term human-robot interaction.",
"title": ""
}
] |
scidocsrr
|
9a839c80fa31841d6ebbc6a106b5da5c
|
Consistent, durable, and safe memory management for byte-addressable non volatile main memory
|
[
{
"docid": "7c05ef9ac0123a99dd5d47c585be391c",
"text": "Memory access bugs, including buffer overflows and uses of freed heap memory, remain a serious problem for programming languages like C and C++. Many memory error detectors exist, but most of them are either slow or detect a limited set of bugs, or both. This paper presents AddressSanitizer, a new memory error detector. Our tool finds out-of-bounds accesses to heap, stack, and global objects, as well as use-after-free bugs. It employs a specialized memory allocator and code instrumentation that is simple enough to be implemented in any compiler, binary translation system, or even in hardware. AddressSanitizer achieves efficiency without sacrificing comprehensiveness. Its average slowdown is just 73% yet it accurately detects bugs at the point of occurrence. It has found over 300 previously unknown bugs in the Chromium browser and many bugs in other software.",
"title": ""
},
{
"docid": "0a0ec569738b90f44b0c20870fe4dc2f",
"text": "Transactional memory provides a concurrency control mechanism that avoids many of the pitfalls of lock-based synchronization. Researchers have proposed several different implementations of transactional memory, broadly classified into software transactional memory (STM) and hardware transactional memory (HTM). Both approaches have their pros and cons: STMs provide rich and flexible transactional semantics on stock processors but incur significant overheads. HTMs, on the other hand, provide high performance but implement restricted semantics or add significant hardware complexity. This paper is the first to propose architectural support for accelerating transactions executed entirely in software. We propose instruction set architecture (ISA) extensions and novel hardware mechanisms that improve STM performance. We adapt a high-performance STM algorithm supporting rich transactional semantics to our ISA extensions (called hardware accelerated software transactional memory or HASTM). HASTM accelerates fully virtualized nested transactions, supports language integration, and provides both object-based and cache-line based conflict detection. We have implemented HASTM in an accurate multi-core IA32 simulator. Our simulation results show that (1) HASTM single-thread performance is comparable to a conventional HTM implementation; (2) HASTM scaling is comparable to a STM implementation; and (3) HASTM is resilient to spurious aborts and can scale better than HTM in a multi-core setting. Thus, HASTM provides the flexibility and rich semantics of STM, while giving the performance of HTM.",
"title": ""
}
] |
[
{
"docid": "4c0eca49feea86dd635e6d5413bb8fe9",
"text": "DOCUMENT RESUME",
"title": ""
},
{
"docid": "dbc253488a9f5d272e75b38dc98ea101",
"text": "A new form of a hybrid design of a microstrip-fed parasitic coupled ring fractal monopole antenna with semiellipse ground plane is proposed for modern mobile devices having a wireless local area network (WLAN) module along with a Worldwide Interoperability for Microwave Access (WiMAX) function. In comparison to the previous monopole structures, the miniaturized antenna dimension is only about 25 × 25 × 1 mm3 , which is 15 times smaller than the previous proposed design. By only increasing the fractal iterations, very good impedance characteristics are obtained. Throughout this letter, the improvement process of the impedance and radiation properties is completely presented and discussed.",
"title": ""
},
{
"docid": "a93361b09b4aaf1385569a9efce7087e",
"text": "Cortical surface mapping has been widely used to compensate for individual variability of cortical shape and topology in anatomical and functional studies. While many surface mapping methods were proposed based on landmarks, curves, spherical or native cortical coordinates, few studies have extensively and quantitatively evaluated surface mapping methods across different methodologies. In this study we compared five cortical surface mapping algorithms, including large deformation diffeomorphic metric mapping (LDDMM) for curves (LDDMM-curve), for surfaces (LDDMM-surface), multi-manifold LDDMM (MM-LDDMM), FreeSurfer, and CARET, using 40 MRI scans and 10 simulated datasets. We computed curve variation errors and surface alignment consistency for assessing the mapping accuracy of local cortical features (e.g., gyral/sulcal curves and sulcal regions) and the curvature correlation for measuring the mapping accuracy in terms of overall cortical shape. In addition, the simulated datasets facilitated the investigation of mapping error distribution over the cortical surface when the MM-LDDMM, FreeSurfer, and CARET mapping algorithms were applied. Our results revealed that the LDDMM-curve, MM-LDDMM, and CARET approaches best aligned the local curve features with their own curves. The MM-LDDMM approach was also found to be the best in aligning the local regions and cortical folding patterns (e.g., curvature) as compared to the other mapping approaches. The simulation experiment showed that the MM-LDDMM mapping yielded less local and global deformation errors than the CARET and FreeSurfer mappings.",
"title": ""
},
{
"docid": "5b31cdfd19e40a2ee5f1094e33366902",
"text": "Much of the early literature on 'cultural competence' focuses on the 'categorical' or 'multicultural' approach, in which providers learn relevant attitudes, values, beliefs, and behaviors of certain cultural groups. In essence, this involves learning key 'dos and don'ts' for each group. Literature and educational materials of this kind focus on broad ethnic, racial, religious, or national groups, such as 'African American', 'Hispanic', or 'Asian'. The problem with this categorical or 'list of traits' approach to clinical cultural competence is that culture is multidimensional and dynamic. Culture comprises multiple variables, affecting all aspects of experience. Cultural processes frequently differ within the same ethnic or social group because of differences in age cohort, gender, political association, class, religion, ethnicity, and even personality. Culture is therefore a very elusive and nebulous concept, like art. The multicultural approach to cultural competence results in stereotypical thinking rather than clinical competence. A newer, cross cultural approach to culturally competent clinical practice focuses on foundational communication skills, awareness of cross-cutting cultural and social issues, and health beliefs that are present in all cultures. We can think of these as universal human beliefs, needs, and traits. This patient centered approach relies on identifying and negotiating different styles of communication, decision-making preferences, roles of family, sexual and gender issues, and issues of mistrust, prejudice, and racism, among other factors. In the current paper, we describe 'cultural' challenges that arise in the care of four patients from disparate cultures, each of whom has advanced colon cancer that is no longer responding to chemotherapy. We then illustrate how to apply principles of patient centered care to these challenges.",
"title": ""
},
{
"docid": "65ddfd636299f556117e53b5deb7c7e5",
"text": "BACKGROUND\nMobile phone use is near ubiquitous in teenagers. Paralleling the rise in mobile phone use is an equally rapid decline in the amount of time teenagers are spending asleep at night. Prior research indicates that there might be a relationship between daytime sleepiness and nocturnal mobile phone use in teenagers in a variety of countries. As such, the aim of this study was to see if there was an association between mobile phone use, especially at night, and sleepiness in a group of U.S. teenagers.\n\n\nMETHODS\nA questionnaire containing an Epworth Sleepiness Scale (ESS) modified for use in teens and questions about qualitative and quantitative use of the mobile phone was completed by students attending Mountain View High School in Mountain View, California (n = 211).\n\n\nRESULTS\nMultivariate regression analysis indicated that ESS score was significantly associated with being female, feeling a need to be accessible by mobile phone all of the time, and a past attempt to reduce mobile phone use. The number of daily texts or phone calls was not directly associated with ESS. Those individuals who felt they needed to be accessible and those who had attempted to reduce mobile phone use were also ones who stayed up later to use the mobile phone and were awakened more often at night by the mobile phone.\n\n\nCONCLUSIONS\nThe relationship between daytime sleepiness and mobile phone use was not directly related to the volume of texting but may be related to the temporal pattern of mobile phone use.",
"title": ""
},
{
"docid": "d87abfd50876da09bce301831f71605f",
"text": "Recent advances in topic models have explored complicated structured distributions to represent topic correlation. For example, the pachinko allocation model (PAM) captures arbitrary, nested, and possibly sparse correlations between topics using a directed acyclic graph (DAG). While PAM provides more flexibility and greater expressive power than previous models like latent Dirichlet allocation (LDA), it is also more difficult to determine the appropriate topic structure for a specific dataset. In this paper, we propose a nonparametric Bayesian prior for PAM based on a variant of the hierarchical Dirichlet process (HDP). Although the HDP can capture topic correlations defined by nested data structure, it does not automatically discover such correlations from unstructured data. By assuming an HDP-based prior for PAM, we are able to learn both the number of topics and how the topics are correlated. We evaluate our model on synthetic and real-world text datasets, and show that nonparametric PAM achieves performance matching the best of PAM without manually tuning the number of topics.",
"title": ""
},
{
"docid": "2bbbd2d1accca21cdb614a0324aa1a0d",
"text": "We propose a novel direct visual-inertial odometry method for stereo cameras. Camera pose, velocity and IMU biases are simultaneously estimated by minimizing a combined photometric and inertial energy functional. This allows us to exploit the complementary nature of vision and inertial data. At the same time, and in contrast to all existing visual-inertial methods, our approach is fully direct: geometry is estimated in the form of semi-dense depth maps instead of manually designed sparse keypoints. Depth information is obtained both from static stereo - relating the fixed-baseline images of the stereo camera - and temporal stereo - relating images from the same camera, taken at different points in time. We show that our method outperforms not only vision-only or loosely coupled approaches, but also can achieve more accurate results than state-of-the-art keypoint-based methods on different datasets, including rapid motion and significant illumination changes. In addition, our method provides high-fidelity semi-dense, metric reconstructions of the environment, and runs in real-time on a CPU.",
"title": ""
},
{
"docid": "ee160fba41e89dd7be1ef54b20d53d41",
"text": "The purpose of this article is to provide a systematic review of Combined Physical Therapy, Intermittent Pneumatic Compression and arm elevation for the treatment of lymphoedema secondary to an axillary dissection for breast cancer. Combined Physical Therapy starts with an intensive phase consisting of skin care, Manual Lymphatic Drainage, exercises and bandaging and continues with a maintenance phase consisting of skin care, exercises, wearing a compression sleeve and Manual Lymphatic Drainage if needed. We have searched the following databases: PubMed/MEDLINE, CINAHL, EMBASE, PEDro and Cochrane. Only (pseudo-) randomised controlled trials and non-randomised experimental trials investigating the effectiveness of Combined Physical Therapy and its different parts, of Intermittent Pneumatic Compression and of arm elevation were included. These physical treatments had to be applied to patients with arm lymphoedema which developed after axillary dissection for breast cancer. Ten randomised controlled trials, one pseudo-randomised controlled trial and four non-randomised experimental trials were found and analysed. Combined Physical Therapy can be considered as an effective treatment modality for lymphoedema. Bandaging the arm is effective, whether its effectiveness is investigated on a heterogeneous group consisting of patients with upper and lower limb lymphoedema from different causes. There is no consensus on the effectiveness of Manual Lymphatic Drainage. The effectiveness of skin care, exercises, wearing a compression sleeve and arm elevation is not investigated by a controlled trial. Intermittent Pneumatic Compression is effective, but once the treatment is interrupted, the lymphoedema volume increases. In conclusion, Combined Physical Therapy is an effective therapy for lymphoedema. However, the effectiveness of its different components remains uncertain. Furthermore, high-quality studies are warranted. The long-term effect of Intermittent Pneumatic Compression and the effect of elevation on lymphoedema are not yet proven.",
"title": ""
},
{
"docid": "06a3ad4649b03e5f6ff40894045c9ce3",
"text": "In this paper, a TSK-type recurrent fuzzy network (TRFN) structure is proposed. The proposal calls for a design of TRFN by either neural network or genetic algorithms depending on the learning environment. Set forth first is a recurrent fuzzy network which develops from a series of recurrent fuzzy if–then rules with TSK-type consequent parts. The recurrent property comes from feeding the internal variables, derived from fuzzy firing strengths, back to both the network input and output layers. In this configuration, each internal variable is responsible for memorizing the temporal history of its corresponding fuzzy rule. The internal variable is also combined with external input variables in each rule’s consequence, which shows an increase in network learning ability. TRFN design under different learning environments is next advanced. For problems where supervised training data is directly available, TRFN with supervised learning (TRFN-S) is proposed, and neural network (NN) learning approach is adopted for TRFN-S design. An online learning algorithm with concurrent structure and parameter learning is proposed. With flexibility of partition in the precondition part, and outcome of TSK-type, TRFN-S has the admirable property of small network size and high learning accuracy. As to the problems where gradient information for NN learning is costly to obtain or unavailable, like reinforcement learning, TRFN with Genetic learning (TRFN-G) is put forward. The precondition parts of TRFN-G are also partitioned in a flexible way, and all free parameters are designed concurrently by genetic algorithm. Owing to the well-designed network structure of TRFN, TRFN-G, like TRFN-S, also is characterized by a high learning accuracy property. To demonstrate the superior properties of TRFN, TRFN-S is applied to dynamic system identification and TRFN-G to dynamic system control. By comparing the results to other types of recurrent networks and design configurations, the efficiency of TRFN is verified.",
"title": ""
},
{
"docid": "439485763ec50c6a1e843f98950e4b7d",
"text": "Currently the large surplus of glycerol formed as a by-product during the production of biodiesel offered an abundant and low cost feedstock. Researchers showed a surge of interest in using glycerol as renewable feedstock to produce functional chemicals. This Minireview focuses on recent developments in the conversion of glycerol into valueadded products, including citric acid, lactic acid, 1,3-dihydroxyacetone (DHA), 1,3-propanediol (1,3-PD), dichloro-2propanol (DCP), acrolein, hydrogen, and ethanol etc. The versatile new applications of glycerol in the everyday life and chemical industry will improve the economic viability of the biodiesel industry.",
"title": ""
},
{
"docid": "6aa1c48fcde6674990a03a1a15b5dc0e",
"text": "A compact multiple-input-multiple-output (MIMO) antenna is presented for ultrawideband (UWB) applications with band-notched function. The proposed antenna is composed of two offset microstrip-fed antenna elements with UWB performance. To achieve high isolation and polarization diversity, the antenna elements are placed perpendicular to each other. A parasitic T-shaped strip between the radiating elements is employed as a decoupling structure to further suppress the mutual coupling. In addition, the notched band at 5.5 GHz is realized by etching a pair of L-shaped slits on the ground. The antenna prototype with a compact size of 38.5 × 38.5 mm2 has been fabricated and measured. Experimental results show that the antenna has an impedance bandwidth of 3.08-11.8 GHz with reflection coefficient less than -10 dB, except the rejection band of 5.03-5.97 GHz. Besides, port isolation, envelope correlation coefficient and radiation characteristics are also investigated. The results indicate that the MIMO antenna is suitable for band-notched UWB applications.",
"title": ""
},
{
"docid": "17c3e9af0d6bc8cd4e0915df0b9b2bf3",
"text": "The focus of the three previous chapters has been on context-free grammars and their use in automatically generating constituent-based representations. Here we present another family of grammar formalisms called dependency grammars that Dependency grammar are quite important in contemporary speech and language processing systems. In these formalisms, phrasal constituents and phrase-structure rules do not play a direct role. Instead, the syntactic structure of a sentence is described solely in terms of the words (or lemmas) in a sentence and an associated set of directed binary grammatical relations that hold among the words. The following diagram illustrates a dependency-style analysis using the standard graphical method favored in the dependency-parsing community. (14.1) I prefer the morning flight through Denver nsubj dobj det nmod nmod case root Relations among the words are illustrated above the sentence with directed, labeled arcs from heads to dependents. We call this a typed dependency structure Typed dependency because the labels are drawn from a fixed inventory of grammatical relations. It also includes a root node that explicitly marks the root of the tree, the head of the entire structure. Figure 14.1 shows the same dependency analysis as a tree alongside its corresponding phrase-structure analysis of the kind given in Chapter 11. Note the absence of nodes corresponding to phrasal constituents or lexical categories in the dependency parse; the internal structure of the dependency parse consists solely of directed relations between lexical items in the sentence. These relationships directly encode important information that is often buried in the more complex phrase-structure parses. For example, the arguments to the verb prefer are directly linked to it in the dependency structure, while their connection to the main verb is more distant in the phrase-structure tree. Similarly, morning and Denver, modifiers of flight, are linked to it directly in the dependency structure. A major advantage of dependency grammars is their ability to deal with languages that are morphologically rich and have a relatively free word order. For Free word order example, word order in Czech can be much more flexible than in English; a grammatical object might occur before or after a location adverbial. A phrase-structure grammar would need a separate rule for each possible place in the parse tree where such an adverbial phrase could occur. A dependency-based approach would just have one link type representing this particular adverbial relation. Thus, a dependency grammar approach abstracts away from word-order information, …",
"title": ""
},
{
"docid": "bc955a52d08f192b06844721fcf635a0",
"text": "Total quality management (TQM) has been widely considered as the strategic, tactical and operational tool in the quality management research field. It is one of the most applied and well accepted approaches for business excellence besides Continuous Quality Improvement (CQI), Six Sigma, Just-in-Time (JIT), and Supply Chain Management (SCM) approaches. There is a great enthusiasm among manufacturing and service industries in adopting and implementing this strategy in order to maintain their sustainable competitive advantage. The aim of this study is to develop and propose the conceptual framework and research model of TQM implementation in relation to company performance particularly in context with the Indian service companies. It examines the relationships between TQM and company’s performance by measuring the quality performance as performance indicator. A comprehensive review of literature on TQM and quality performance was carried out to accomplish the objectives of this study and a research model and hypotheses were generated. Two research questions and 34 hypotheses were proposed to re-validate the TQM practices. The adoption of such a theoretical model on TQM and company’s quality performance would help managers, decision makers, and practitioners of TQM in better understanding of the TQM practices and to focus on the identified practices while implementing TQM in their companies. Further, the scope for future study is to test and validate the theoretical model by collecting the primary data from the Indian service companies and using Structural Equation Modeling (SEM) approach for hypotheses testing.",
"title": ""
},
{
"docid": "ff39f9fdb98981137f93d156150e1b83",
"text": "We describe a method for recovering 3D human body pose from silhouettes. Our model is based on learning a latent space using the Gaussian Process Latent Variable Model (GP-LVM) [1] encapsulating both pose and silhouette features Our method is generative, this allows us to model the ambiguities of a silhouette representation in a principled way. We learn a dynamical model over the latent space which allows us to disambiguate between ambiguous silhouettes by temporal consistency. The model has only two free parameters and has several advantages over both regression approaches and other generative methods. In addition to the application shown in this paper the suggested model is easily extended to multiple observation spaces without constraints on type.",
"title": ""
},
{
"docid": "f28170dcc3c4949c27ee609604c53bc2",
"text": "Debates over Cannabis sativa L. and C. indica Lam. center on their taxonomic circumscription and rank. This perennial puzzle has been compounded by the viral spread of a vernacular nomenclature, “Sativa” and “Indica,” which does not correlate with C. sativa and C. indica. Ambiguities also envelop the epithets of wild-type Cannabis: the spontanea versus ruderalis debate (i.e., vernacular “Ruderalis”), as well as another pair of Cannabis epithets, afghanica and kafirstanica. To trace the rise of vernacular nomenclature, we begin with the protologues (original descriptions, synonymies, type specimens) of C. sativa and C. indica. Biogeographical evidence (obtained from the literature and herbarium specimens) suggests 18th–19th century botanists were biased in their assignment of these taxa to field specimens. This skewed the perception of Cannabis biodiversity and distribution. The development of vernacular “Sativa,” “Indica,” and “Ruderalis” was abetted by twentieth century botanists, who ignored original protologues and harbored their own cultural biases. Predominant taxonomic models by Vavilov, Small, Schultes, de Meijer, and Hillig are compared and critiqued. Small’s model adheres closest to protologue data (with C. indica treated as a subspecies). “Sativa” and “Indica” are subpopulations of C. sativa subsp. indica; “Ruderalis” represents a protean assortment of plants, including C. sativa subsp. sativa and recent hybrids.",
"title": ""
},
{
"docid": "40db41aa0289dbf45bef067f7d3e3748",
"text": "Maximum reach envelopes for the 5th, 50th and 95th percentile reach lengths of males and females in seated and standing work positions were determined. The use of a computerized potentiometric measurement system permitted functional reach measurement in 15 min for each subject. The measurement system captured reach endpoints in a dynamic mode while the subjects were describing their maximum reach envelopes. An unbiased estimate of the true reach distances was made through a systematic computerized data averaging process. The maximum reach envelope for the standing position was significantly (p<0.05) larger than the corresponding measure in the seated position for both the males and females. The average reach length of the female was 13.5% smaller than that for the corresponding male. Potential applications of this research include designs of industrial workstations, equipment, tools and products.",
"title": ""
},
{
"docid": "2438479795a9673c36138212b61c6d88",
"text": "Motivated by the emergence of auction-based marketplaces for display ads such as the Right Media Exchange, we study the design of a bidding agent that implements a display advertising campaign by bidding in such a marketplace. The bidding agent must acquire a given number of impressions with a given target spend, when the highest external bid in the marketplace is drawn from an unknown distribution P. The quantity and spend constraints arise from the fact that display ads are usually sold on a CPM basis. We consider both the full information setting, where the winning price in each auction is announced publicly, and the partially observable setting where only the winner obtains information about the distribution; these differ in the penalty incurred by the agent while attempting to learn the distribution. We provide algorithms for both settings, and prove performance guarantees using bounds on uniform closeness from statistics, and techniques from online learning. We experimentally evaluate these algorithms: both algorithms perform very well with respect to both target quantity and spend; further, our algorithm for the partially observable case performs nearly as well as that for the fully observable setting despite the higher penalty incurred during learning.",
"title": ""
},
{
"docid": "dd9a523f199116cf5c10ac6fd7aeae1e",
"text": "Owing to demand characteristics of spare part, demand forecasting for spare parts is especially difficult. Based on the properties of spare part demand, we develop a hybrid forecasting approach, which can synthetically evaluate autocorrelation of demand time series and the relationship of explanatory variables with demand of spare part. In the described approach, support vector machines (SVMs) are adapted to forecast occurrences of nonzero demand of spare part, and a hybrid mechanism for integrating the SVM forecast results and the relationship of occurrence of nonzero demand with explanatory variables is proposed. Using real data sets of 30 kinds of spare parts from a petrochemical enterprise in China, we show that our method produces more accurate forecasts of distribution of lead-time demands of spare parts than do current methods across almost all the lead times. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "afe2bc204458117fb278ef500b485ea1",
"text": "PURPOSE\nTitanium based implant systems, though considered as the gold standard for rehabilitation of edentulous spaces, have been criticized for many inherent flaws. The onset of hypersensitivity reactions, biocompatibility issues, and an unaesthetic gray hue have raised demands for more aesthetic and tissue compatible material for implant fabrication. Zirconia is emerging as a promising alternative to conventional Titanium based implant systems for oral rehabilitation with superior biological, aesthetics, mechanical and optical properties. This review aims to critically analyze and review the credibility of Zirconia implants as an alternative to Titanium for prosthetic rehabilitation.\n\n\nSTUDY SELECTION\nThe literature search for articles written in the English language in PubMed and Cochrane Library database from 1990 till December 2016. The following search terms were utilized for data search: \"zirconia implants\" NOT \"abutment\", \"zirconia implants\" AND \"titanium implants\" AND \"osseointegration\", \"zirconia implants\" AND compatibility.\n\n\nRESULTS\nThe number of potential relevant articles selected were 47. All the human in vivo clinical, in vitro, animals' studies were included and discussed under the following subheadings: Chemical composition, structure and phases; Physical and mechanical properties; Aesthetic and optical properties; Osseointegration and biocompatibility; Surface modifications; Peri-implant tissue compatibility, inflammation and soft tissue healing, and long-term prognosis.\n\n\nCONCLUSIONS\nZirconia implants are a promising alternative to titanium with a superior soft-tissue response, biocompatibility, and aesthetics with comparable osseointegration. However, further long-term longitudinal and comparative clinical trials are required to validate zirconia as a viable alternative to the titanium implant.",
"title": ""
},
{
"docid": "509fe613e25c9633df2520e4c3a62b74",
"text": "This study, in an attempt to rise above the intricacy of 'being informed on the verge of globalization,' is founded on the premise that Machine Translation (MT) applications searching for an ideal key to find a universal foundation for all natural languages have a restricted say over the translation process at various discourse levels. Our paper favors not judging against the superiority of human translation vs. machine translation or automated translation in non-English speaking settings, but rather referring to the inadequacies and adequacies of MT at certain pragmatic levels, lacking the right sense and dynamic equivalence, but producing syntactically well-formed or meaning-extractable outputs in restricted settings. Reasoning in this way, the present study supports MT before, during, and after translation. It aims at making translators understand that they could cooperate with the software to obtain a synergistic effect. In other words, they could have a say and have an essential part to play in a semi-automated translation process (Rodrigo, 2001). In this respect, semi-automated translation or MT courses should be included in the curricula of translation departments worldwide to keep track of the state of the art as well as make potential translators aware of future trends.",
"title": ""
}
] |
scidocsrr
|
600fb48cb9b47cc45fb6831750456b6f
|
Real Time 3D-Handwritten Character and Gesture Recognition for Smartphone
|
[
{
"docid": "b338f9e213b1837e217e1969edf0aedf",
"text": "In many applications today user interaction is moving away from mouse and pens and is becoming pervasive and much more physical and tangible. New emerging interaction technologies allow developing and experimenting with new interaction methods on the long way to providing intuitive human computer interaction. In this paper, we aim at recognizing gestures to interact with an application and present the design and evaluation of our sensor-based gesture recognition. As input device we employ the Wii-controller (Wiimote) which recently gained much attention world wide. We use the Wiimote's acceleration sensor independent of the gaming console for gesture recognition. The system allows the training of arbitrary gestures by users which can then be recalled for interacting with systems like photo browsing on a home TV. The developed library exploits Wii-sensor data and employs a hidden Markov model for training and recognizing user-chosen gestures. Our evaluation shows that we can already recognize gestures with a small number of training samples. In addition to the gesture recognition we also present our experiences with the Wii-controller and the implementation of the gesture recognition. The system forms the basis for our ongoing work on multimodal intuitive media browsing and are available to other researchers in the field.",
"title": ""
}
] |
[
{
"docid": "fdac9bbe4e92fedfcd237878afdefc90",
"text": "Pervasive and sensor-driven systems are by nature open and extensible, both in terms of input and tasks they are required to perform. Data streams coming from sensors are inherently noisy, imprecise and inaccurate, with di↵ering sampling rates and complex correlations with each other. These characteristics pose a significant challenge for traditional approaches to storing, representing, exchanging, manipulating and programming with sensor data. Semantic Web technologies provide a uniform framework for capturing these properties. O↵ering powerful representation facilities and reasoning techniques, these technologies are rapidly gaining attention towards facing a range of issues such as data and knowledge modelling, querying, reasoning, service discovery, privacy and provenance. This article reviews the application of the Semantic Web to pervasive and sensor-driven systems with a focus on information modelling and reasoning along with streaming data and uncertainty handling. The strengths and weaknesses of current and projected approaches are analysed and a roadmap is derived for using the Semantic Web as a platform, on which open, standard-based, pervasive, adaptive and sensor-driven systems can be deployed.",
"title": ""
},
{
"docid": "548f43f2193cffc6711d8a15c00e8c3d",
"text": "Dither signals provide an effective way to compensate for nonlinearities in control systems. The seminal works by Zames and Shneydor, and more recently, by Mossaheb, present rigorous tools for systematic design of dithered systems. Their results rely, however, on a Lipschitz assumption relating to nonlinearity, and thus, do not cover important applications with discontinuities. This paper presents initial results on how to analyze and design dither in nonsmooth systems. In particular, it is shown that a dithered relay feedback system can be approximated by a smoothed system. Guidelines are given for tuning the amplitude and the period time of the dither signal, in order to stabilize the nonsmooth system.",
"title": ""
},
{
"docid": "9aa53b924f08e6d50fb41b5f9483f83e",
"text": "Despite the recent increase in the development of antivirals and antibiotics, antimicrobial resistance and the lack of broad-spectrum virus-targeting drugs are still important issues and additional alternative approaches to treat infectious diseases are urgently needed. Host-directed therapy (HDT) is an emerging approach in the field of anti-infectives. The strategy behind HDT is to interfere with host cell factors that are required by a pathogen for replication or persistence, to enhance protective immune responses against a pathogen, to reduce exacerbated inflammation and to balance immune reactivity at sites of pathology. Although HDTs encompassing interferons are well established for the treatment of chronic viral hepatitis, novel strategies aimed at the functional cure of persistent viral infections and the development of broad-spectrum antivirals against emerging viruses seem to be crucial. In chronic bacterial infections, such as tuberculosis, HDT strategies aim to enhance the antimicrobial activities of phagocytes and to curtail inflammation through interference with soluble factors (such as eicosanoids and cytokines) or cellular factors (such as co-stimulatory molecules). This Review describes current progress in the development of HDTs for viral and bacterial infections, including sepsis, and the challenges in bringing these new approaches to the clinic.",
"title": ""
},
{
"docid": "cf5e6ce7313d15f33afa668f27a5e9e2",
"text": "Researchers have designed a variety of systems that promote wellness. However, little work has been done to examine how casual mobile games can help adults learn how to live healthfully. To explore this design space, we created OrderUP!, a game in which players learn how to make healthier meal choices. Through our field study, we found that playing OrderUP! helped participants engage in four processes of change identified by a well-established health behavior theory, the Transtheoretical Model: they improved their understanding of how to eat healthfully and engaged in nutrition-related analytical thinking, reevaluated the healthiness of their real life habits, formed helping relationships by discussing nutrition with others and started replacing unhealthy meals with more nutritious foods. Our research shows the promise of using casual mobile games to encourage adults to live healthier lifestyles.",
"title": ""
},
{
"docid": "c048fcee111850376f585dad381a6eec",
"text": "In order to simulate the effects of lightning and switching transients on power lines, coupling and decoupling network (CDN) is necessary for performing surge tests on electrical and electronic equipment. This paper presents a particular analysis of the circuit of CDN and design details according to the requirement of the standard IEC 61000-4-5. The CDN circuits are simulated by electromagnetic transients program (EMTP) to judge the suitability of the selected component values and performance of the designed CDN. Simulation results also show that increasing of decoupling capacitance or inductance can reduce the residual surge voltage on the power line inputs of the CDN. Based on the voltage drop control and residual surge voltage limiting, smaller decoupling inductance and relatively larger decoupling capacitance should be chosen if EUT has a higher rated current. The CDN developed according to the simulation results satisfy the design specifications well.",
"title": ""
},
{
"docid": "698fb992c5ff7ecc8d2e153f6b385522",
"text": "We investigate bag-of-visual-words (BOVW) approaches to land-use classification in high-resolution overhead imagery. We consider a standard non-spatial representation in which the frequencies but not the locations of quantized image features are used to discriminate between classes analogous to how words are used for text document classification without regard to their order of occurrence. We also consider two spatial extensions, the established spatial pyramid match kernel which considers the absolute spatial arrangement of the image features, as well as a novel method which we term the spatial co-occurrence kernel that considers the relative arrangement. These extensions are motivated by the importance of spatial structure in geographic data.\n The methods are evaluated using a large ground truth image dataset of 21 land-use classes. In addition to comparisons with standard approaches, we perform extensive evaluation of different configurations such as the size of the visual dictionaries used to derive the BOVW representations and the scale at which the spatial relationships are considered.\n We show that even though BOVW approaches do not necessarily perform better than the best standard approaches overall, they represent a robust alternative that is more effective for certain land-use classes. We also show that extending the BOVW approach with our proposed spatial co-occurrence kernel consistently improves performance.",
"title": ""
},
{
"docid": "45009303764570cbfa3532a9d98f5393",
"text": "The Wasserstein distance and its variations, e.g., the sliced-Wasserstein (SW) distance, have recently drawn attention from the machine learning community. The SW distance, specifically, was shown to have similar properties to the Wasserstein distance, while being much simpler to compute, and is therefore used in various applications including generative modeling and general supervised/unsupervised learning. In this paper, we first clarify the mathematical connection between the SW distance and the Radon transform. We then utilize the generalized Radon transform to define a new family of distances for probability measures, which we call generalized slicedWasserstein (GSW) distances. We also show that, similar to the SW distance, the GSW distance can be extended to a maximum GSW (max-GSW) distance. We then provide the conditions under which GSW and max-GSW distances are indeed distances. Finally, we compare the numerical performance of the proposed distances on several generative modeling tasks, including SW flows and SW auto-encoders.",
"title": ""
},
{
"docid": "e3a9de1939e90b8a50c8e472027622be",
"text": "The bacterial cellulose (BC) secreted by Gluconacetobacter xylinus was explored as a novel scaffold material due to its unusual biocompatibility, light transmittance and material properties. The specific surface area of the frozendried BC sheet based on BET isotherm was 22.886 m/g, and the porosity was around 90%. It is known by SEM graphs that significant difference in porosity and pore size exists in the two sides of air-dried BC sheets. The width of cellulose ribbons was 10 nm to 100 nm known by AFM image. The examination of the growth of human corneal stromal cells on BC demonstrated that the material supported the growth and proliferation of human corneal stromal cells. The ingrowth of corneal stromal cells into the scaffold was verified by Laser Scanning Confocal Microscope. The results suggest the potentiality for this biomaterial as a scaffold for tissue engineering of artificial cornea. KeywordsBacterial cellulose; Cornea; Tissue engineering; Scaffold; Corneal stromal cells",
"title": ""
},
{
"docid": "1847cce79f842a7d01f1f65721c1f007",
"text": "Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNN, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.",
"title": ""
},
{
"docid": "8e8fcf2953d256e75df69167dab1cc7a",
"text": "We present a compact formula for the derivative of a 3-D rotation matrix with respect to its exponential coordinates. A geometric interpretation of the resulting expression is provided, as well as its agreement with other less-compact but better-known formulas. To the best of our knowledge, this simpler formula does not appear anywhere in the literature. We hope by providing this more compact expression to alleviate the common pressure to reluctantly resort to alternative representations in various computational applications simply as a means to avoid the complexity of differential analysis in exponential coordinates.",
"title": ""
},
{
"docid": "483b57bef1158ae37c43ca9a92c1cda3",
"text": "Recently, advanced driver assistance system (ADAS) has attracted a lot of attention due to the fast growing industry of smart cars, which is believed to be the next human-computer interaction after smart phones. As ADAS is a critical enabling component in a human-in-the-loop cyber-physical system (CPS) involving complicated physical environment, it has stringent requirements on reliability, accuracy as well as latency. Lane and vehicle detections are the basic functions in ADAS, which provide lane departure warning (LDW) and forward collision warning (FCW) to predict the dangers and warn the drivers. While extensive literature exists on this topic, none of them considers the important fact that many vehicles today do not have powerful embedded electronics or cameras. It will be costly to upgrade the vehicle just for ADAS enhancement. To address this issue, we demonstrate a new framework that utilizes microprocessors in mobile devices with embedded cameras for advanced driver assistance. The main challenge that comes with this low cost solution is the dilemma between limited computing power and tight latency requirement, and uncalibrated camera and high accuracy requirement. Accordingly, we propose an efficient, accurate, flexible yet light-weight real-time lane and vehicle detection method and implement it on Android devices. Real road test results suggest that an average latency of 15 fps can be achieved with a high accuracy of 12.58 average pixel offset for each lane in all scenarios and 97+ precision for vehicle detection. To the best of the authors' knowledge, this is the very first implementation of both lane and vehicle detections on mobile devices with un-calibrated embedded camera.",
"title": ""
},
{
"docid": "8848adb878c7219b5d67aced8f9e789c",
"text": "In this short review of fish gill morphology we cover some basic gross anatomy as well as in some more detail the microscopic anatomy of the branchial epithelia from representatives of the major extant groups of fishes (Agnathans, Elasmobranchs, and Teleosts). The agnathan hagfishes have primitive gill pouches, while the lampreys have arch-like gills similar to the higher fishes. In the lampreys and elasmobranchs, the gill filaments are supported by a complete interbranchial septum and water exits via external branchial slits or pores. In contrast, the teleost interbranchial septum is much reduced, leaving the ends of the filaments unattached, and the multiple gill openings are replaced by the single caudal opening of the operculum. The basic functional unit of the gill is the filament, which supports rows of plate-like lamellae. The lamellae are designed for gas exchange with a large surface area and a thin epithelium surrounding a well-vascularized core of pillar cell capillaries. The lamellae are positioned for the blood flow to be counter-current to the water flow over the gills. Despite marked differences in the gross anatomy of the gill among the various groups, the cellular constituents of the epithelium are remarkably similar. The lamellar gas-exchange surface is covered by squamous pavement cells, while large, mitochondria-rich, ionocytes and mucocytes are found in greatest frequency in the filament epithelium. Demands for ionoregulation can often upset this balance. There has been much study of the structure and function of the branchial mitochondria-rich cells. These cells are generally characterized by a high mitochondrial density and an amplification of the basolateral membrane through folding or the presence of an intracellular tubular system. Morphological subtypes of MRCs as well as some methods of MRC detection are discussed.",
"title": ""
},
{
"docid": "58a4b07717c1df99454fd70148cbe15b",
"text": "CONTEXT\nTo improve selective infraspinatus muscle strength and endurance, researchers have recommended selective shoulder external-rotation exercise during rehabilitation or athletic conditioning programs. Although selective strengthening of the infraspinatus muscle is recommended for therapy and training, limited information is available to help clinicians design a selective strengthening program.\n\n\nOBJECTIVE\nTo determine the most effective of 4 shoulder external-rotation exercises for selectively stimulating infraspinatus muscle activity while minimizing the use of the middle trapezius and posterior deltoid muscles.\n\n\nDESIGN\nCross-sectional study.\n\n\nSETTING\nUniversity research laboratory.\n\n\nPATIENTS OR OTHER PARTICIPANTS\nA total of 30 healthy participants (24 men, 6 women; age = 22.6 ± 1.7 years, height = 176.2 ± 4.5 cm, mass = 65.6 ± 7.4 kg) from a university population.\n\n\nINTERVENTION(S)\nThe participants were instructed to perform 4 exercises: (1) prone horizontal abduction with external rotation (PER), (2) side-lying wiper exercise (SWE), (3) side-lying external rotation (SER), and (4) standing external-rotation exercise (STER).\n\n\nMAIN OUTCOME MEASURE(S)\nSurface electromyography signals were recorded from the infraspinatus, middle trapezius, and posterior deltoid muscles. Differences among the exercise positions were tested using a 1-way repeated-measures analysis of variance with Bonferroni adjustment.\n\n\nRESULTS\nThe infraspinatus muscle activity was greater in the SWE (55.98% ± 18.79%) than in the PER (46.14% ± 15.65%), SER (43.38% ± 22.26%), and STER (26.11% ± 15.00%) (F3,87 = 19.97, P < .001). Furthermore, the SWE elicited the least amount of activity in the middle trapezius muscle (F3,87 = 20.15, P < .001). Posterior deltoid muscle activity was similar in the SWE and SER but less than that measured in the PER and STER (F3,87 = 25.10, P < .001).\n\n\nCONCLUSIONS\nThe SWE was superior to the PER, SER, and STER in maximizing infraspinatus activity with the least amount of middle trapezius and posterior deltoid activity. These findings may help clinicians design effective exercise programs.",
"title": ""
},
{
"docid": "f4df305ad32ebdd1006eefdec6ee7ca3",
"text": "In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1-3]. Two motor pathways control facial movement [4-7]: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8-11]. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling.",
"title": ""
},
{
"docid": "34bf7fb014f5b511943526c28407cb4b",
"text": "Mobile devices can be maliciously exploited to violate the privacy of people. In most attack scenarios, the adversary takes the local or remote control of the mobile device, by leveraging a vulnerability of the system, hence sending back the collected information to some remote web service. In this paper, we consider a different adversary, who does not interact actively with the mobile device, but he is able to eavesdrop the network traffic of the device from the network side (e.g., controlling a Wi-Fi access point). The fact that the network traffic is often encrypted makes the attack even more challenging. In this paper, we investigate to what extent such an external attacker can identify the specific actions that a user is performing on her mobile apps. We design a system that achieves this goal using advanced machine learning techniques. We built a complete implementation of this system, and we also run a thorough set of experiments, which show that our attack can achieve accuracy and precision higher than 95%, for most of the considered actions. We compared our solution with the three state-of-the-art algorithms, and confirming that our system outperforms all these direct competitors.",
"title": ""
},
{
"docid": "d9c35381a493fd5f4261b37b9582ef79",
"text": "Back Flashover (BFO) is one of the several phenomenon which decreases power transmission lines reliability. It occurs as a result of direct lightning stroke to the tower structure or guard wires and injects wave currents with high amplitude and very high steepness to the phase conductors, injection of these wave currents produce voltages with high amplitude and very high steepness, which in turn cause phase to ground faults in power transmission lines. In order to treat with this phenomenon, it is necessary to analysis its mechanism and effective parameters on its occurring probability. In this paper, the mechanism of phenomenon has been analyzed and effective parameters on its probability of occurring are evaluated. Then several methods in order to decrease this probability are suggested. Finally, related simulations with insulation coordination of a 765 kV transmission substation also have been done and the results have been presented.",
"title": ""
},
{
"docid": "d6de2969e89e211f6faf8a47854ee43e",
"text": "Digital image forensics has attracted a lot of attention recently for its role in identifying the origin of digital image. Although different forensic approaches have been proposed, one of the most popular approaches is to rely on the imaging sensor pattern noise, where each sensor pattern noise uniquely corresponds to an imaging device and serves as the intrinsic fingerprint. The correlation-based detection is heavily dependent upon the accuracy of the extracted pattern noise. In this work, we discuss the way to extract the pattern noise, in particular, explore the way to make better use of the pattern noise. Unlike current methods that directly compare the whole pattern noise signal with the reference one, we propose to only compare the large components of these two signals. Our detector can better identify the images taken by different cameras. In the meantime, it needs less computational complexity.",
"title": ""
},
{
"docid": "b00151950a51f9991d4ca62a9a333c9e",
"text": "A lot of systems and applications are data-driven, and the correctness of their operation relies heavily on the correctness of their data. While existing data cleaning techniques can be quite effective at purging datasets of errors, they disregard the fact that a lot of errors are systematic, inherent to the process that produces the data, and thus will keep occurring unless the problem is corrected at its source. In contrast to traditional data cleaning, in this paper we focus on data diagnosis: explaining where and how the errors happen in a data generative process.\n We develop a large-scale diagnostic framework called DATA X-RAY. Our contributions are three-fold. First, we transform the diagnosis problem to the problem of finding common properties among erroneous elements, with minimal domain-specific assumptions. Second, we use Bayesian analysis to derive a cost model that implements three intuitive principles of good diagnoses. Third, we design an efficient, highly-parallelizable algorithm for performing data diagnosis on large-scale data. We evaluate our cost model and algorithm using both real-world and synthetic data, and show that our diagnostic framework produces better diagnoses and is orders of magnitude more efficient than existing techniques.",
"title": ""
},
{
"docid": "ab94fb9281a0327cc632a0a607a55a56",
"text": "To facilitate understanding the architecture of a software system, we developed SArF Map technique that visualizes software architecture from feature and layer viewpoints using a city metaphor. SArF Map visualizes implicit software features using our previous study, SArF dependency-based software clustering algorithm. Since features are high-level abstraction units of software, a generated map can be directly used for high-level decision making such as reuse and also for communications between developers and non-developer stakeholders. In SArF Map, each feature is visualized as a city block, and classes in the feature are laid out as buildings reflecting their software layer. Relevance between features is represented as streets. Dependency links are visualized lucidly. Through open source and industrial case studies, we show that the architecture of the target systems can be easily overviewed and that the quality of their packaging designs can be quickly assessed.",
"title": ""
},
{
"docid": "c235af1fbd499c1c3c10ea850d01bffd",
"text": "Cloud computing, as a concept, promises cost savings to end-users by letting them outsource their non-critical business functions to a third party in pay-as-you-go style. However, to enable economic pay-as-you-go services, we need Cloud middleware that maximizes sharing and support near zero costs for unused applications. Multi-tenancy, which let multiple tenants (user) to share a single application instance securely, is a key enabler for building such a middleware. On the other hand, Business processes capture Business logic of organizations in an abstract and reusable manner, and hence play a key role in most organizations. This paper presents the design and architecture of a Multi-tenant Workflow engine while discussing in detail potential use cases of such architecture. Primary contributions of this paper are motivating workflow multi-tenancy, and the design and implementation of multi-tenant workflow engine that enables multiple tenants to run their workflows securely within the same workflow engine instance without modifications to the workflows.",
"title": ""
}
] |
scidocsrr
|
02dbcc394f1039440cbd103186b01992
|
Cryptanalysis and Improvement of Authentication and Key Agreement Protocols for Telecare Medicine Information Systems
|
[
{
"docid": "2739acca1a61ca8b2738b1312ab857ab",
"text": "The Telecare Medical Information System (TMIS) provides a set of different medical services to the patient and medical practitioner. The patients and medical practitioners can easily connect to the services remotely from their own premises. There are several studies carried out to enhance and authenticate smartcard-based remote user authentication protocols for TMIS system. In this article, we propose a set of enhanced and authentic Three Factor (3FA) remote user authentication protocols utilizing a smartphone capability over a dynamic Cloud Computing (CC) environment. A user can access the TMIS services presented in the form of CC services using his smart device e.g. smartphone. Our framework transforms a smartphone to act as a unique and only identity required to access the TMIS system remotely. Methods, Protocols and Authentication techniques are proposed followed by security analysis and a performance analysis with the two recent authentication protocols proposed for the healthcare TMIS system.",
"title": ""
}
] |
[
{
"docid": "1b8afad1b27c5febbd256e00300b3178",
"text": "Psychosocial risks at the workplace is a well-researched subject from a managerial and organisational point of view. However, the relation of psychosocial risks to Information Security has not been formally studied to the extent required by the gravity of the topic. An attempt is made to highlight the nature of psychosocial risks and provide examples of their effects on Information Security. The foundation is thus set for methodologies of assessment and mitigation and suggestions are made on future research directions.",
"title": ""
},
{
"docid": "9bbb8ff8e8d498709ee68c6797b00588",
"text": "Studies often report that bilingual participants possess a smaller vocabulary in the language of testing than monolinguals, especially in research with children. However, each study is based on a small sample so it is difficult to determine whether the vocabulary difference is due to sampling error. We report the results of an analysis of 1,738 children between 3 and 10 years old and demonstrate a consistent difference in receptive vocabulary between the two groups. Two preliminary analyses suggest that this difference does not change with different language pairs and is largely confined to words relevant to a home context rather than a school context.",
"title": ""
},
{
"docid": "7f8a6df043dfb98e1cc1bf82f3ed9d02",
"text": "Copious sequential event data has consistently increased in various high-impact domains such as social media and sharing economy. When events start to take place in a sequential fashion, an important question arises: “what type of event will happen at what time in the near future?” To answer the question, a class of mathematical models called the marked temporal point process is often exploited as it can model the timing and properties of events seamlessly in a joint framework. Recently, various recurrent neural network (RNN) models are proposed to enhance the predictive power of mark temporal point process. However, existing marked temporal point models are fundamentally based on the Maximum Likelihood Estimation (MLE) framework for the training, and inevitably suffer from the problem resulted from the intractable likelihood function. Surprisingly, little attention has been paid to address this issue. In this work, we propose INITIATOR a novel training framework based on noise-contrastive estimation to resolve this problem. Theoretically, we show the exists a strong connection between the proposed INITIATOR and the exact MLE. Experimentally, the efficacy of INITIATOR is demonstrated over the state-of-the-art approaches on several real-world datasets from various areas.",
"title": ""
},
{
"docid": "33db7ac45c020d2a9e56227721b0be70",
"text": "This thesis proposes an extended version of the Combinatory Categorial Grammar (CCG) formalism, with the following features: 1. grammars incorporate inheritance hierarchies of lexical types, defined over a simple, feature-based constraint language 2. CCG lexicons are, or at least can be, functions from forms to these lexical types This formalism, which I refer to as ‘inheritance-driven’ CCG (I-CCG), is conceptualised as a partially model-theoretic system, involving a distinction between category descriptions and their underlying category models, with these two notions being related by logical satisfaction. I argue that the I-CCG formalism retains all the advantages of both the core CCG framework and proposed generalisations involving such things as multiset categories, unary modalities or typed feature structures. In addition, I-CCG: 1. provides non-redundant lexicons for human languages 2. captures a range of well-known implicational word order universals in terms of an acquisition-based preference for shorter grammars This thesis proceeds as follows: Chapter 2 introduces the ‘baseline’ CCG formalism, which incorporates just the essential elements of category notation, without any of the proposed extensions. Chapter 3 reviews parts of the CCG literature dealing with linguistic competence in its most general sense, showing how the formalism predicts a number of language universals in terms of either its restricted generative capacity or the prioritisation of simpler lexicons. Chapter 4 analyses the first motivation for generalising the baseline category notation, demonstrating how certain fairly simple implicational word order universals are not formally predicted by baseline CCG, although they intuitively do involve considerations of grammatical economy. Chapter 5 examines the second motivation underlying many of the customised CCG category notations — to reduce lexical redundancy, thus allowing for the construction of lexicons which assign (each sense of) open class words and morphemes to no more than one lexical category, itself denoted by a non-composite lexical type.",
"title": ""
},
{
"docid": "ec19c40473bb1316b9390b6d7bcaae7f",
"text": "Online crowdfunding platforms like DonorsChoose.org and Kickstarter allow specific projects to get funded by targeted contributions from a large number of people. Critical for the success of crowdfunding communities is recruitment and continued engagement of donors. With donor attrition rates above 70%, a significant challenge for online crowdfunding platforms as well as traditional offline non-profit organizations is the problem of donor retention. We present a large-scale study of millions of donors and donations on DonorsChoose.org, a crowdfunding platform for education projects. Studying an online crowdfunding platform allows for an unprecedented detailed view of how people direct their donations. We explore various factors impacting donor retention which allows us to identify different groups of donors and quantify their propensity to return for subsequent donations. We find that donors are more likely to return if they had a positive interaction with the receiver of the donation. We also show that this includes appropriate and timely recognition of their support as well as detailed communication of their impact. Finally, we discuss how our findings could inform steps to improve donor retention in crowdfunding communities and non-profit organizations.",
"title": ""
},
{
"docid": "fbcab4ec5e941858efe7e72db910de67",
"text": "Previously published guidelines provide comprehensive recommendations for hand hygiene in healthcare facilities. The intent of this document is to highlight practical recommendations in a concise format, update recommendations with the most current scientific evidence, and elucidate topics that warrant clarification or more robust research. Additionally, this document is designed to assist healthcare facilities in implementing hand hygiene adherence improvement programs, including efforts to optimize hand hygiene product use, monitor and report back hand hygiene adherence data, and promote behavior change. This expert guidance document is sponsored by the Society for Healthcare Epidemiology of America (SHEA) and is the product of a collaborative effort led by SHEA, the Infectious Diseases Society of America (IDSA), the American Hospital Association (AHA), the Association for Professionals in Infection Control and Epidemiology (APIC), and The Joint Commission, with major contributions from representatives of a number of organizations and societies with content expertise. The list of endorsing and supporting organizations is presented in the introduction to the 2014 updates.",
"title": ""
},
{
"docid": "5ba3baabc84d02f0039748a4626ace36",
"text": "BACKGROUND\nGreen tea (GT) extract may play a role in body weight regulation. Suggested mechanisms are decreased fat absorption and increased energy expenditure.\n\n\nOBJECTIVE\nWe examined whether GT supplementation for 12 wk has beneficial effects on weight control via a reduction in dietary lipid absorption as well as an increase in resting energy expenditure (REE).\n\n\nMETHODS\nSixty Caucasian men and women [BMI (in kg/m²): 18-25 or >25; age: 18-50 y] were included in a randomized placebo-controlled study in which fecal energy content (FEC), fecal fat content (FFC), resting energy expenditure, respiratory quotient (RQ), body composition, and physical activity were measured twice (baseline vs. week 12). For 12 wk, subjects consumed either GT (>0.56 g/d epigallocatechin gallate + 0.28-0.45 g/d caffeine) or placebo capsules. Before the measurements, subjects recorded energy intake for 4 consecutive days and collected feces for 3 consecutive days.\n\n\nRESULTS\nNo significant differences between groups and no significant changes over time were observed for the measured variables. Overall means ± SDs were 7.2 ± 3.8 g/d, 6.1 ± 1.2 MJ/d, 67.3 ± 14.3 kg, and 29.8 ± 8.6% for FFC, REE, body weight, and body fat percentage, respectively.\n\n\nCONCLUSION\nGT supplementation for 12 wk in 60 men and women did not have a significant effect on FEC, FFC, REE, RQ, and body composition.",
"title": ""
},
{
"docid": "f6249304dbd2b275a70b2b12faeb4712",
"text": "This paper describes a system, built and refined over the past five years, that automatically analyzes student programs assigned in a computer organization course. The system tests a student's program, then e-mails immediate feedback to the student to assist and encourage the student to continue testing, debugging, and optimizing his or her program. The automated feedback system improves the students' learning experience by allowing and encouraging them to improve their program iteratively until it is correct. The system has also made it possible to add challenging parts to each project, such as optimization and testing, and it has enabled students to meet these challenges. Finally, the system has reduced the grading load of University of Michigan's large classes significantly and helped the instructors handle the rapidly increasing enrollments of the 1990s. Initial experience with the feedback system showed that students depended too heavily on the feedback system as a substitute for their own testing. This problem was addressed by requiring students to submit a comprehensive test suite along with their program and by applying automated feedback techniques to help students learn how to write good test suites. Quantitative iterative feedback has proven to be extremely helpful in teaching students specific concepts about computer organization and general concepts on computer programming and testing.",
"title": ""
},
{
"docid": "dcc003378537f9cad071c75bc144304f",
"text": "Social support is critical for psychological and physical well-being, reflecting the centrality of belongingness in our lives. Human interactions often provide people with considerable social support, but can pets also fulfill one's social needs? Although there is correlational evidence that pets may help individuals facing significant life stressors, little is known about the well-being benefits of pets for everyday people. Study 1 found in a community sample that pet owners fared better on several well-being (e.g., greater self-esteem, more exercise) and individual-difference (e.g., greater conscientiousness, less fearful attachment) measures. Study 2 assessed a different community sample and found that owners enjoyed better well-being when their pets fulfilled social needs better, and the support that pets provided complemented rather than competed with human sources. Finally, Study 3 brought pet owners into the laboratory and experimentally demonstrated the ability of pets to stave off negativity caused by social rejection. In summary, pets can serve as important sources of social support, providing many positive psychological and physical benefits for their owners.",
"title": ""
},
{
"docid": "f5e44676e9ce8a06bcdb383852fb117f",
"text": "We explore techniques to significantly improve the compute efficiency and performance of Deep Convolution Networks without impacting their accuracy. To improve the compute efficiency, we focus on achieving high accuracy with extremely low-precision (2-bit) weight networks, and to accelerate the execution time, we aggressively skip operations on zero-values. We achieve the highest reported accuracy of 76.6% Top-1/93% Top-5 on the Imagenet object classification challenge with low-precision network while reducing the compute requirement by ∼3× compared to a full-precision network that achieves similar accuracy. Furthermore, to fully exploit the benefits of our low-precision networks, we build a deep learning accelerator core, DLAC, that can achieve up to 1 TFLOP/mm2 equivalent for single-precision floating-point operations (∼2 TFLOP/mm2 for half-precision), which is ∼5× better than Linear Algebra Core [16] and ∼4× better than previous deep learning accelerator proposal [8].",
"title": ""
},
{
"docid": "d5a9267d523a959fdf30d85f5b78e69d",
"text": "Where data based decision making is taking over businesses and elite sports, elite soccer is lacking behind. In elite soccer, decisions are still often based on emotions and recent results. As results are, however, dependent on many aspects, the reasons for these results are currently unknown by the elite soccer clubs. In our study, a method is proposed to determine the expected winner of a match. Since goals are rare in soccer, goal scoring opportunities are analyzed instead. By analyzing which team created the best goal scoring opportunities, a feeling can be created which team should have won the game. Therefore, it is important that the quality of goal scoring opportunities accurately reflect reality. Therefore, the proposed method ensures that the quality of a goal scoring opportunity is given as the probability of the goal scoring opportunity resulting in a goal. It is shown that these scores accurately match reality. The quality scores of individual goal scoring opportunities are then aggregated to obtain an expected match outcome, which results in an expected winner. In little more than 50% of the cases, our method is able to determine the correct winner of a match. The majority of incorrect classified winners comes from close matches where a draw is predicted. The quality scores of the proposed method can already be used by elite soccer clubs. First of all, these clubs can evaluate periods of time more objectively. Secondly, individual matches can be evaluated to evaluate the importance of major events during a match e.g. substitutions. Finally, the quality metrics can be used to determine the performance of players over time which can be used to adjust training programs or to perform player acquisition. Expected Goals in Soccer: Explaining Match Results using Predictive Analytics iii",
"title": ""
},
{
"docid": "91365154a173be8be29ef14a3a76b08e",
"text": "Fraud is a criminal practice for illegitimate gain of wealth or tampering information. Fraudulent activities are of critical concern because of their severe impact on organizations, communities as well as individuals. Over the last few years, various techniques from different areas such as data mining, machine learning, and statistics have been proposed to deal with fraudulent activities. Unfortunately, the conventional approaches display several limitations, which were addressed largely by advanced solutions proposed in the advent of Big Data. In this paper, we present fraud analysis approaches in the context of Big Data. Then, we study the approaches rigorously and identify their limits by exploiting Big Data analytics.",
"title": ""
},
{
"docid": "7411be59eacb3ecad53204b300e17c24",
"text": "In this study, a finger exoskeleton robot has been designed and presented. The prototype device was designed to be worn on the dorsal side of the hand to assist in the movement and rehabilitation of the fingers. The finger exoskeleton is 3D-printed to be low-cost and has a transmission mechanism consisting of rigid serial links which is actuated by a stepper motor. The actuation of the robotic finger is by a sliding motion and mimics the movement of the human finger. To make it possible for the patient to use the rehabilitation device anywhere and anytime, an ArduinoTM control board and a speech recognition board were used to allow voice control. As the robotic finger follows the patients voice commands the actual motion is analyzed by Tracker image analysis software. The finger exoskeleton is designed to flex and extend the fingers, and has a rotation range of motion (ROM) of 44.2◦.",
"title": ""
},
{
"docid": "aef85d4f84b56e1355c5a0d7e3354e2e",
"text": "Algorithms based on trust regions have been shown to be robust methods for unconstrained optimization problems. All existing methods, either based on the dogleg strategy or Hebden-More iterations, require solution of system of linear equations. In large scale optimization this may be prohibitively expensive. It is shown in this paper that an approximate solution of the trust region problem may be found by the preconditioned conjugate gradient method. This may be regarded as a generalized dogleg technique where we asymptotically take the inexact quasi-Newton step. We also show that we have the same convergence properties as existing methods based on the dogleg strategy using an approximate Hessian.",
"title": ""
},
{
"docid": "752cf1c7cefa870c01053d87ff4f445c",
"text": "Cannabidiol (CBD) represents a new promising drug due to a wide spectrum of pharmacological actions. In order to relate CBD clinical efficacy to its pharmacological mechanisms of action, we performed a bibliographic search on PUBMED about all clinical studies investigating the use of CBD as a treatment of psychiatric symptoms. Findings to date suggest that (a) CBD may exert antipsychotic effects in schizophrenia mainly through facilitation of endocannabinoid signalling and cannabinoid receptor type 1 antagonism; (b) CBD administration may exhibit acute anxiolytic effects in patients with generalised social anxiety disorder through modification of cerebral blood flow in specific brain sites and serotonin 1A receptor agonism; (c) CBD may reduce withdrawal symptoms and cannabis/tobacco dependence through modulation of endocannabinoid, serotoninergic and glutamatergic systems; (d) the preclinical pro-cognitive effects of CBD still lack significant results in psychiatric disorders. In conclusion, current evidences suggest that CBD has the ability to reduce psychotic, anxiety and withdrawal symptoms by means of several hypothesised pharmacological properties. However, further studies should include larger randomised controlled samples and investigate the impact of CBD on biological measures in order to correlate CBD's clinical effects to potential modifications of neurotransmitters signalling and structural and functional cerebral changes.",
"title": ""
},
{
"docid": "585ec3229d7458f5d6bca3c7936eb306",
"text": "Graph processing has gained renewed attention. The increasing large scale and wealth of connected data, such as those accrued by social network applications, demand the design of new techniques and platforms to efficiently derive actionable information from large scale graphs. Hybrid systems that host processing units optimized for both fast sequential processing and bulk processing (e.g., GPUaccelerated systems) have the potential to cope with the heterogeneous structure of real graphs and enable high performance graph processing. Reaching this point, however, poses multiple challenges. The heterogeneity of the processing elements (e.g., GPUs implement a different parallel processing model than CPUs and have much less memory) and the inherent irregularity of graph workloads require careful graph partitioning and load assignment. In particular, the workload generated by a partitioning scheme should match the strength of the processing element the partition is allocated to. This work explores the feasibility and quantifies the performance gains of such low-cost partitioning schemes. We propose to partition the workload between the two types of processing elements based on vertex connectivity. We show that such partitioning schemes offer a simple, yet efficient way to boost the overall performance of the hybrid system. Our evaluation illustrates that processing a 4-billion edges graph on a system with one CPU socket and one GPU, while offloading as little as 25% of the edges to the GPU, achieves 2x performance improvement over state-of-the-art implementations running on a dual-socket symmetric system. Moreover, for the same graph, a hybrid system with dualsocket and dual-GPU is capable of 1.13 Billion breadth-first search traversed edge per second, a performance rate that is competitive with the latest entries in the Graph500 list, yet at a much lower price point.",
"title": ""
},
{
"docid": "335a330d7c02f13c0f50823461f4e86f",
"text": "Migrating computational intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider an MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources-the transmit precoding matrices of the MUs-and the computational resources-the CPU cycles/second assigned by the cloud to each MU-in order to minimize the overall users' energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to compute the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. We then show that the proposed algorithmic framework naturally leads to a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.",
"title": ""
},
{
"docid": "098da928abe37223e0eed0c6bf0f5747",
"text": "With the proliferation of social media, fashion inspired from celebrities, reputed designers as well as fashion influencers has shortned the cycle of fashion design and manufacturing. However, with the explosion of fashion related content and large number of user generated fashion photos, it is an arduous task for fashion designers to wade through social media photos and create a digest of trending fashion. Designers do not just wish to have fashion related photos at one place but seek search functionalities that can let them search photos with natural language queries such as ‘red dress’, ’vintage handbags’, etc in order to spot the trends. This necessitates deep parsing of fashion photos on social media to localize and classify multiple fashion items from a given fashion photo. While object detection competitions such as MSCOCO have thousands of samples for each of the object categories, it is quite difficult to get large labeled datasets for fast fashion items. Moreover, state-of-the-art object detectors [2, 7, 9] do not have any functionality to ingest large amount of unlabeled data available on social media in order to fine tune object detectors with labeled datasets. In this work, we show application of a generic object detector [11], that can be pretrained in an unsupervised manner, on 24 categories from recently released Open Images V4 dataset. We first train the base architecture of the object detector using unsupervisd learning on 60K unlabeled photos from 24 categories gathered from social media, and then subsequently fine tune it on 8.2K labeled photos from Open Images V4 dataset. On 300 × 300 image inputs, we achieve 72.7% mAP on a test dataset of 2.4K photos while performing 11% to 17% better as compared to the state-of-the-art object detectors. We show that this improvement is due to our choice of architecture that lets us do unsupervised learning and that performs significantly better in identifying small objects. 1",
"title": ""
},
{
"docid": "a6ba94c0faf2fd41d8b1bd5a068c6d3d",
"text": "The main mechanisms responsible for performance degradation of millimeter wave (mmWave) and terahertz (THz) on-chip antennas are reviewed. Several techniques to improve the performance of the antennas and several high efficiency antenna types are presented. In order to illustrate the effects of the chip topology on the antenna, simulations and measurements of mmWave and THz on-chip antennas are shown. Finally, different transceiver architectures are explored with emphasis on the challenges faced in a wireless multi-core environment.",
"title": ""
},
{
"docid": "1564a94998151d52785dd0429b4ee77d",
"text": "Location management refers to the problem of updating and searching the current location of mobile nodes in a wireless network. To make it efficient, the sum of update costs of location database must be minimized. Previous work relying on fixed location databases is unable to fully exploit the knowledge of user mobility patterns in the system so as to achieve this minimization. The study presents an intelligent location management approach which has interacts between intelligent information system and knowledge-base technologies, so we can dynamically change the user patterns and reduce the transition between the VLR and HLR. The study provides algorithms are ability to handle location registration and call delivery.",
"title": ""
}
] |
scidocsrr
|
4e161687055ab86526f0f1c1663ea104
|
Fit or unfit: analysis and prediction of 'closed questions' on stack overflow
|
[
{
"docid": "68693c88cb62ce28514344d15e9a6f09",
"text": "New types of document collections are being developed by various web services. The service providers keep track of non-textual features such as click counts. In this paper, we present a framework to use non-textual features to predict the quality of documents. We also show our quality measure can be successfully incorporated into the language modeling-based retrieval model. We test our approach on a collection of question and answer pairs gathered from a community based question answering service where people ask and answer questions. Experimental results using our quality measure show a significant improvement over our baseline.",
"title": ""
}
] |
[
{
"docid": "b0687f53ba624136723f477d38e075d1",
"text": "Presenting historical content and information illustratively and interestingly for the audience is an interesting challenge. Now that mobile devices with good computational and graphical capabilities have become wide-spread, Augmented Reality (AR) has become an attractive solution. Historical events can be presented for the tourist in the very locations where they occurred. One of the easiest types of historical content to present in AR is historical photographs. This paper presents mobile applications to show historical photos for tourists in Augmented Reality. We present several on-site pilot cases and give a summary of technical findings and feedback from test users.",
"title": ""
},
{
"docid": "5d22d1b401c80ba2aa02608a379c0440",
"text": "For decades our common understanding of the organization of economic production has been that individuals order their productive activities in one of two ways: either as employees in firms, following the directions of managers, or as individuals in markets, following price signals. This dichotomy was first identified in the early work of Ronald Coase and was developed most explicitly in the work of institutional economist Oliver Williamson. In this paper I explain why we are beginning to see the emergence of a new, third mode of production, in the digitally networked environment, a mode I call commons-based peer production. In the past three or four years, public attention has focused on a fifteen-yearold social-economic phenomenon in the software development world. This phenomenon, called free software or open source software, involves thousands or even tens of thousands of programmers contributing to large and small scale projects, where the central organizing principle is that the software remains free of most constraints on copying and use common to proprietary materials. No one “owns” the software in the traditional sense of being able to command how it is used or developed, or to control its disposition. The result has been the emergence of a vibrant, innovative and productive collaboration, whose participants are not organized in firms and do not choose their projects in response to price signals. This paper explains that while free software is highly visible, it is in fact only one example of a much broader social-economic phenomenon. I suggest that we are seeing the broad and deep emergence of a new, third mode of production in the ∗ Professor of Law, New York University School of Law. Research for this paper was partly supported by a grant from the Filomen D’Agostino and Max Greenberg Fund at NYU School of Law. I owe thanks to many for comments on this and earlier drafts, including: Bruce Ackerman, Ed Baker, Elazar Barkan, Dan Burk, Jamie Boyle, Niva Elkin Koren, Terry Fisher, Natalie Jeremijenko, Dan Kahan, Doug Lichtman, Tara Lemmy, Mark Nadel, Carol Rose, Bob Ellickson, Peggy Radin, Clay Shirky, Helen Nissenbaum, Jerry Mashaw, Eben Moglen, Larry Lessig, Chuck Sabel, Alan Schwartz, Richard Stallman, and Kenji Yoshino. I owe special thanks to Steve Snyder for his invaluable research assistance on the peer production enterprises described here. I have gotten many question about the “Coase’s Penguin” portion of the title. It turns out that the geek culture that easily recognizes “Coase” doesn’t’ recognize the “penguin,” and vice versa. So, “Coase” refers to Ronald Coase, who originated the transactions costs theory of the firm that provides the methodological template for the positive analysis of peer production that I offer here. The penguin refers to the fact that the Linux kernel development community has adopted the image of a paunchy penguin as its mascot/trademark. One result of this cross-cultural conversation is that I will occasionally explain in some detail concepts that are well known in one community but not in the other. 2 COASE’S PENGUIN V.04.3 AUGUST. 2002 2 digitally networked environment. I call this mode “commons-based peer production,” to distinguish it from the propertyand contract-based modes of firms and markets. Its central characteristic is that groups of individuals successfully collaborate on largescale projects following a diverse cluster of motivational drives and social signals, rather than either market prices or managerial commands. 
I explain why this mode has systematic advantages over markets and managerial hierarchies when the object of production is information or culture, and where the physical capital necessary for that production—computers and communications capabilities—is widely distributed instead of concentrated. In particular, this mode of production is better than firms and markets for two reasons. First, it is better at identifying and assigning human capital to information and cultural production processes. In this regard, peer production has an advantage in what I call “information opportunity cost.” That is, it loses less information about who the best person for a given job might be than either of the other two organizational modes. Second, there are substantial increasing returns, in terms of allocation efficiency, to allowing larger clusters of potential contributors to interact with large clusters of information resources in search of new projects and opportunities for collaboration. Removing property and contract as the organizing principles of collaboration substantially reduces transaction costs involved in allowing these large clusters of potential contributors to review and select which resources to work on, for which projects, and with which collaborators. This results in the potential for substantial allocation gains. The article concludes with an overview of how these models use a variety of technological and social strategies to overcome the collective action problems usually solved in managerial and market-based systems by property, contract, and managerial commands.",
"title": ""
},
{
"docid": "25c59d905fc75d82b9c7ee1e8a17291e",
"text": "The Path Ranking Algorithm (Lao and Cohen, 2010) is a general technique for performing link prediction in a graph. PRA has mainly been used for knowledge base completion (Lao et al., 2011; Gardner et al., 2013; Gardner et al., 2014), though the technique is applicable to any kind of link prediction task. To learn a prediction model for a particular edge type in a graph, PRA finds sequences of edge types (or paths) that frequently connect nodes that are instances of the edge type being predicted. PRA then uses those path types as features in a logistic regression model to infer missing edges in the graph. In this class project, we performed three separate experiments relating to different aspects of PRA: improving the efficiency of the algorithm, exploring the use of multiple languages in a knowledge base completion task, and using PRA-style features in sentencelevel prediction models. The first experiment looks at improving the efficiency and performance of link prediction in graphs by removing unnecessary steps from PRA. We introduce a simple technique that extracts features from the subgraph centered around a pair of nodes in the graph, and show that this method is an order of magnitude faster than PRA while giving significantly better performance. Additionally, this new model is more expressive than PRA, as it can handle arbitrary features extracted from the subgraphs, instead of only the relation sequences connecting the node pair. The new feature types we experimented with did not generally lead to better predictions, though further feature engineering may yield additional performance improvements. The second experiment we did with PRA extends recent work that performs knowledge base completion using a large parsed English corpus in conjunction with random walks over a knowledge base (Gardner et al., 2013; Gardner et al., 2014). This prior work showed significant performance gains when using the corpus along with the knowledge base, and even further gains by using abstract representations of the textual relations extracted from the corpus. In this experiment, we attempt to extend these results to a multilingual setting, with textual relations extracted from 10 different languages. We discuss the challenges that arise when dealing with data in languages for which parsers and entity linkers are not readily available, and show that previous techniques for obtaining abstract relation representations do not work in this setting. The final experiment takes a step towards a longstanding goal in artificial intelligence research: using a large collection of background knowledge to improve natural language understanding. We present a new technique for incorporating information from a knowledge base into sentence-level prediction tasks, and demonstrate its usefulness in one task in particular: relation extraction. We show that adding PRAstyle features generated from Freebase to an off-theshelf relation extraction model significantly improves its performance. This simple and general technique also outperforms prior work that learns knowledge base embeddings to improve prediction performance on the same task. In the remainder of this paper, we first give a brief introduction to the path ranking algorithm. Then we discuss each experiment in turn, with each section introducing the new methods, describing related work, and presenting experimental results.",
"title": ""
},
{
"docid": "00575265d0a6338e3eeb23d234107206",
"text": "We introduce the concept of mode-k generalized eigenvalues and eigenvectors of a tensor and prove some properties of such eigenpairs. In particular, we derive an upper bound for the number of equivalence classes of generalized tensor eigenpairs using mixed volume. Based on this bound and the structures of tensor eigenvalue problems, we propose two homotopy continuation type algorithms to solve tensor eigenproblems. With proper implementation, these methods can find all equivalence classes of isolated generalized eigenpairs and some generalized eigenpairs contained in the positive dimensional components (if there are any). We also introduce an algorithm that combines a heuristic approach and a Newton homotopy method to extract real generalized eigenpairs from the found complex generalized eigenpairs. A MATLAB software package TenEig has been developed to implement these methods. Numerical results are presented to illustrate the effectiveness and efficiency of TenEig for computing complex or real generalized eigenpairs.",
"title": ""
},
{
"docid": "874d3e28318cd6e2427b54a06d2ac966",
"text": "The article presents some selected problems related to modeling and simulation of hydraulic servo-systems (valve + cylinder), using MATLAB-Simulink package. For this purpose, is taken into account the basic mathematical model of certain selected elements and phenomena that occur in a hydraulic servo system. Models are represented as block diagrams adapted to the software package requirements. Afterward, the simulation results are compared with laboratory measurements. Laboratory measurements have been performed in Laboratory for hydraulics and pneumatics at Faculty of Engineering, University of Rijeka.",
"title": ""
},
{
"docid": "7a5fb7d551d412fd8bdbc3183dafc234",
"text": "Presentations have been an effective means of delivering information to groups for ages. Over the past few decades, technological advancements have revolutionized the way humans deliver presentations. Despite that, the quality of presentations can be varied and affected by a variety of reasons. Conventional presentation evaluation usually requires painstaking manual analysis by experts. Although the expert feedback can definitely assist users in improving their presentation skills, manual evaluation suffers from high cost and is often not accessible to most people. In this work, we propose a novel multi-sensor self-quantification framework for presentations. Utilizing conventional ambient sensors (i.e., static cameras, Kinect sensor) and the emerging wearable egocentric sensors (i.e., Google Glass), we first analyze the efficacy of each type of sensor with various nonverbal assessment rubrics, which is followed by our proposed multi-sensor presentation analytics framework. The proposed framework is evaluated on a new presentation dataset, namely NUS Multi-Sensor Presentation (NUSMSP) dataset, which consists of 51 presentations covering a diverse set of topics. The dataset was recorded with ambient static cameras, Kinect sensor, and Google Glass. In addition to multi-sensor analytics, we have conducted a user study with the speakers to verify the effectiveness of our system generated analytics, which has received positive and promising feedback.",
"title": ""
},
{
"docid": "9cb567317559ada8baec5b6a611e68d0",
"text": "Fungal bioactive polysaccharides deriving mainly from the Basidiomycetes family (and some from the Ascomycetes) and medicinal mushrooms have been well known and widely used in far Asia as part of traditional diet and medicine, and in the last decades have been the core of intense research for the understanding and the utilization of their medicinal properties in naturally produced pharmaceuticals. In fact, some of these biopolymers (mainly β-glucans or heteropolysaccharides) have already made their way to the market as antitumor, immunostimulating or prophylactic drugs. The fact that many of these biopolymers are produced by edible mushrooms makes them also very good candidates for the formulation of novel functional foods and nutraceuticals without any serious safety concerns, in order to make use of their immunomodulating, anticancer, antimicrobial, hypocholesterolemic, hypoglycemic and health-promoting properties. This article summarizes the most important properties and applications of bioactive fungal polysaccharides and discusses the latest developments on the utilization of these biopolymers in human nutrition.",
"title": ""
},
{
"docid": "a784d35f9d7ea612ab4374c6b4060bb2",
"text": "The intelligent vehicle is a complicated nonlinear system, and the design of a path tracking controller is one of the key technologies in intelligent vehicle research. This paper mainly designs a lateral control dynamic model of the intelligent vehicle, which is used for lateral tracking control. Firstly, the vehicle dynamics model (i.e., transfer function) is established according to the vehicle parameters. Secondly, according to the vehicle steering control system and the CARMA (Controlled Auto-Regression and Moving-Average) model, a second-order control system model is built. Using forgetting factor recursive least square estimation (FFRLS), the system parameters are identified. Finally, a neural network PID (Proportion Integral Derivative) controller is established for lateral path tracking control based on the vehicle model and the steering system model. Experimental simulation results show that the proposed model and algorithm have the high real-time and robustness in path tracing control. This provides a certain theoretical basis for intelligent vehicle autonomous navigation tracking control, and lays the foundation for the vertical and lateral coupling control.",
"title": ""
},
{
"docid": "874973c7a28652d5d9859088b965e76c",
"text": "Recommender systems are commonly defined as applications that e-commerce sites exploit to suggest products and provide consumers with information to facilitate their decision-making processes.1 They implicitly assume that we can map user needs and constraints, through appropriate recommendation algorithms, and convert them into product selections using knowledge compiled into the intelligent recommender. Knowledge is extracted from either domain experts (contentor knowledge-based approaches) or extensive logs of previous purchases (collaborative-based approaches). Furthermore, the interaction process, which turns needs into products, is presented to the user with a rationale that depends on the underlying recommendation technology and algorithms. For example, if the system funnels the behavior of other users in the recommendation, it explicitly shows reviews of the selected products or quotes from a similar user. Recommender systems are now a popular research area2 and are increasingly used by e-commerce sites.1 For travel and tourism,3 the two most successful recommender system technologies (see Figure 1) are Triplehop’s TripMatcher (used by www. ski-europe.com, among others) and VacationCoach’s expert advice platform, MePrint (used by travelocity.com). Both of these recommender systems try to mimic the interactivity observed in traditional counselling sessions with travel agents when users search for advice on a possible holiday destination. From a technical viewpoint, they primarily use a content-based approach, in which the user expresses needs, benefits, and constraints using the offered language (attributes). The system then matches the user preferences with items in a catalog of destinations (described with the same language). VacationCoach exploits user profiling by explicitly asking the user to classify himself or herself in one profile (for example, as a “culture creature,” “beach bum,” or “trail trekker”), which induces implicit needs that the user doesn’t provide. The user can even input precise profile information by completing the appropriate form. TripleHop’s matching engine uses a more sophisticated approach to reduce user input. It guesses importance of attributes that the user does not explicitly mention. It then combines statistics on past user queries with a prediction computed as a weighted average of importance assigned by similar users.4",
"title": ""
},
{
"docid": "0ee6398d098b9a087f339d32f4381566",
"text": "Human iris contains rich textural information which serves as the key information for biometric identifications. It is very unique and one of the most accurate biometric modalities. However, spoofing techniques can be used to obfuscate or impersonate identities and increase the risk of false acceptance or false rejection. This paper revisits iris recognition with spoofing attacks and analyzes their effect on the recognition performance. Specifically, print attack with contact lens variations is used as the spoofing mechanism. It is observed that print attack and contact lens, individually and in conjunction, can significantly change the inter-personal and intra-personal distributions and thereby increase the possibility to deceive the iris recognition systems. The paper also presents the IIITD iris spoofing database, which contains over 4800 iris images pertaining to over 100 individuals with variations due to contact lens, sensor, and print attack. Finally, the paper also shows that cost effective descriptor approaches may help in counter-measuring spooking attacks.",
"title": ""
},
{
"docid": "a69220d5cf0145eb6e2e8b13252e6eea",
"text": "Database benchmarks are an important tool for database researchers and practitioners that ease the process of making informed comparisons between different database hardware, software and configurations. Large scale web services such as social networks are a major and growing database application area, but currently there are few benchmarks that accurately model web service workloads.\n In this paper we present a new synthetic benchmark called LinkBench. LinkBench is based on traces from production databases that store \"social graph\" data at Facebook, a major social network. We characterize the data and query workload in many dimensions, and use the insights gained to construct a realistic synthetic benchmark. LinkBench provides a realistic and challenging test for persistent storage of social and web service data, filling a gap in the available tools for researchers, developers and administrators.",
"title": ""
},
{
"docid": "07ea7628199e3e432b145530d91d75a9",
"text": "BACKGROUND\nEffective menstrual hygiene has direct and indirect effect on achieving millennium development goals two (universal education), three (gender equality and women empowerment) and, five (improving maternal health). However, in Ethiopiait is an issue which is insufficiently acknowledged in the reproductive health sector. The objective of this study therefore, is to assess the age of menarche and knowledge of adolescents about menstrual hygiene management in Amhara province.\n\n\nMETHOD\nSchool based cross sectional study was conducted from November 2012 to June 2013. Multistage stage sampling technique was used. The school was first clustered in to grades & sections and thenparticipants were selected by lottery method. A pretested &structured questionnaire was used. Data were entered, cleaned and analyzed using SPSS version 16.0. Finally, multivariate analysis was used to assess independent effect of predictors.\n\n\nFINDINGS\nIn this study, 492 students were included, making a response rate of 100%. Mean age at menarche was 14.1±1.4 years. The main sources of information about menstrual hygiene management were teachers for 212 (43.1%). Four hundred forty six (90.7%) respondents had high level knowledge about menstrual hygiene management. Most of the respondents 457 (92.9%) and 475 (96.5%) had access for water and toilet facility respectively. Place of residence (AOR = 1.8, 95%CI: [1.42-1.52]) and educational status of their mothers' (AOR = 95%CI: [1.15-13.95]) were independent predictors of knowledge about menstrual hygiene management.\n\n\nCONCLUSION\nKnowledge of respondents about menstrual hygiene management was very high. School teachers were the primary source of information. Place of residence and their mother's educational status were independent predictors of menstrual hygiene management. Thus, the government of Ethiopia in collaboration with its stalk holders should develop and disseminatereproductive health programmes on menstrual hygiene management targeting both parents and their adolescents. Moreover, parents should be made aware about the need to support their children with appropriate sanitary materials.",
"title": ""
},
{
"docid": "9aa1e7c351129fa4a6adb3a8899e518f",
"text": "Thousands of unique non-coding RNA (ncRNA) sequences exist within cells. Work from the past decade has altered our perception of ncRNAs from 'junk' transcriptional products to functional regulatory molecules that mediate cellular processes including chromatin remodelling, transcription, post-transcriptional modifications and signal transduction. The networks in which ncRNAs engage can influence numerous molecular targets to drive specific cell biological responses and fates. Consequently, ncRNAs act as key regulators of physiological programmes in developmental and disease contexts. Particularly relevant in cancer, ncRNAs have been identified as oncogenic drivers and tumour suppressors in every major cancer type. Thus, a deeper understanding of the complex networks of interactions that ncRNAs coordinate would provide a unique opportunity to design better therapeutic interventions.",
"title": ""
},
{
"docid": "5b3b78504afc7e4959edd0108d3e06ee",
"text": "In this paper, we propose a novel approach, 3D-RecGAN++, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN++ only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid with a high resolution of 2563 by recovering the occluded/missing regions. The key idea is to combine the generative capabilities of autoencoders and the conditional Generative Adversarial Networks (GAN) framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets and real-world Kinect datasets show that the proposed 3D-RecGAN++ significantly outperforms the state of the art in single view 3D object reconstruction, and is able to reconstruct unseen types of objects.",
"title": ""
},
{
"docid": "6c29473469f392079fa8406419190116",
"text": "The five-factor model of personality is a hierarchical organization of personality traits in terms of five basic dimensions: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness to Experience. Research using both natural language adjectives and theoretically based personality questionnaires supports the comprehensiveness of the model and its applicability across observers and cultures. This article summarizes the history of the model and its supporting evidence; discusses conceptions of the nature of the factors; and outlines an agenda for theorizing about the origins and operation of the factors. We argue that the model should prove useful both for individual assessment and for the elucidation of a number of topics of interest to personality psychologists.",
"title": ""
},
{
"docid": "ae5bf888ce9a61981be60b9db6fc2d9c",
"text": "Inverting the hash values by performing brute force computation is one of the latest security threats on password based authentication technique. New technologies are being developed for brute force computation and these increase the success rate of inversion attack. Honeyword base authentication protocol can successfully mitigate this threat by making password cracking detectable. However, the existing schemes have several limitations like Multiple System Vulnerability, Weak DoS Resistivity, Storage Overhead, etc. In this paper we have proposed a new honeyword generation approach, identified as Paired Distance Protocol (PDP) which overcomes almost all the drawbacks of previously proposed honeyword generation approaches. The comprehensive analysis shows that PDP not only attains a high detection rate of 97.23% but also reduces the storage cost to a great extent.",
"title": ""
},
{
"docid": "640ba15172b56373b3a6bdfe9f5f6cd4",
"text": "This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. To effectively scale these algorithms beyond a trivial number of agents, we combine them with a multi-agent variant of curriculum learning. The algorithms are benchmarked on a suite of cooperative control tasks, including tasks with discrete and continuous actions, as well as tasks with dozens of cooperating agents. We report the performance of the algorithms using different neural architectures, training procedures, and reward structures. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods and that curriculum learning is vital to scaling reinforcement learning algorithms in complex multiagent domains.",
"title": ""
},
{
"docid": "bdf9ab9c9efcc476d36d66219d13f721",
"text": "In this primer, we give a review of the inverse problem for EEG source localization. This is intended for the researchers new in the field to get insight in the state-of-the-art techniques used to find approximate solutions of the brain sources giving rise to a scalp potential recording. Furthermore, a review of the performance results of the different techniques is provided to compare these different inverse solutions. The authors also include the results of a Monte-Carlo analysis which they performed to compare four non parametric algorithms and hence contribute to what is presently recorded in the literature. An extensive list of references to the work of other researchers is also provided. This paper starts off with a mathematical description of the inverse problem and proceeds to discuss the two main categories of methods which were developed to solve the EEG inverse problem, mainly the non parametric and parametric methods. The main difference between the two is to whether a fixed number of dipoles is assumed a priori or not. Various techniques falling within these categories are described including minimum norm estimates and their generalizations, LORETA, sLORETA, VARETA, S-MAP, ST-MAP, Backus-Gilbert, LAURA, Shrinking LORETA FOCUSS (SLF), SSLOFO and ALF for non parametric methods and beamforming techniques, BESA, subspace techniques such as MUSIC and methods derived from it, FINES, simulated annealing and computational intelligence algorithms for parametric methods. From a review of the performance of these techniques as documented in the literature, one could conclude that in most cases the LORETA solution gives satisfactory results. In situations involving clusters of dipoles, higher resolution algorithms such as MUSIC or FINES are however preferred. Imposing reliable biophysical and psychological constraints, as done by LAURA has given superior results. The Monte-Carlo analysis performed, comparing WMN, LORETA, sLORETA and SLF, for different noise levels and different simulated source depths has shown that for single source localization, regularized sLORETA gives the best solution in terms of both localization error and ghost sources. Furthermore the computationally intensive solution given by SLF was not found to give any additional benefits under such simulated conditions.",
"title": ""
},
{
"docid": "85a076e58f4d117a37dfe6b3d68f5933",
"text": "We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.",
"title": ""
}
] |
scidocsrr
|
b683e2f14de86d4a8e47550dfb90e6bd
|
Face recognition under illumination variations based on eight local directional patterns
|
[
{
"docid": "a53f26ef068d11ea21b9ba8609db6ddf",
"text": "This paper presents a novel approach based on enhanced local directional patterns (ELDP) to face recognition, which adopts local edge gradient information to represent face images. Specially, each pixel of every facial image sub-block gains eight edge response values by convolving the local 3 3 neighborhood with eight Kirsch masks, respectively. ELDP just utilizes the directions of the most encoded into a double-digit octal number to produce the ELDP codes. The ELDP dominant patterns (ELDP) are generated by statistical analysis according to the occurrence rates of the ELDP codes in a mass of facial images. Finally, the face descriptor is represented by using the global concatenated histogram based on ELDP or ELDP extracted from the face image which is divided into several sub-regions. The performances of several single face descriptors not integrated schemes are evaluated in face recognition under different challenges via several experiments. The experimental results demonstrate that the proposed method is more robust to non-monotonic illumination changes and slight noise without any filter. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "6eb8e1a391398788d9b4be294b8a70d1",
"text": "To improve software quality, researchers and practitioners have proposed static analysis tools for various purposes (e.g., detecting bugs, anomalies, and vulnerabilities). Although many such tools are powerful, they typically need complete programs where all the code names (e.g., class names, method names) are resolved. In many scenarios, researchers have to analyze partial programs in bug fixes (the revised source files can be viewed as a partial program), tutorials, and code search results. As a partial program is a subset of a complete program, many code names in partial programs are unknown. As a result, despite their syntactical correctness, existing complete-code tools cannot analyze partial programs, and existing partial-code tools are limited in both their number and analysis capability. Instead of proposing another tool for analyzing partial programs, we propose a general approach, called GRAPA, that boosts existing tools for complete programs to analyze partial programs. Our major insight is that after unknown code names are resolved, tools for complete programs can analyze partial programs with minor modifications. In particular, GRAPA locates Java archive files to resolve unknown code names, and resolves the remaining unknown code names from resolved code names. To illustrate GRAPA, we implement a tool that leverages the state-of-the-art tool, WALA, to analyze Java partial programs. We thus implemented the first tool that is able to build system dependency graphs for partial programs, complementing existing tools. We conduct an evaluation on 8,198 partial-code commits from four popular open source projects. Our results show that GRAPA fully resolved unknown code names for 98.5% bug fixes, with an accuracy of 96.1% in total. Furthermore, our results show the significance of GRAPA's internal techniques, which provides insights on how to integrate with more complete-code tools to analyze partial programs.",
"title": ""
},
{
"docid": "fbaf790dd8a59516bc4d1734021400fd",
"text": "With the spread of social networks and their unfortunate use for hate speech, automatic detection of the latter has become a pressing problem. In this paper, we reproduce seven state-of-the-art hate speech detection models from prior work, and show that they perform well only when tested on the same type of data they were trained on. Based on these results, we argue that for successful hate speech detection, model architecture is less important than the type of data and labeling criteria. We further show that all proposed detection techniques are brittle against adversaries who can (automatically) insert typos, change word boundaries or add innocuous words to the original hate speech. A combination of these methods is also effective against Google Perspective - a cutting-edge solution from industry. Our experiments demonstrate that adversarial training does not completely mitigate the attacks, and using character-level features makes the models systematically more attack-resistant than using word-level features.",
"title": ""
},
{
"docid": "d2f5f5b42d732a5d27310e4f2d76116a",
"text": "This paper reports on a cluster analysis of pervasive games through a bottom-up approach based upon 120 game examples. The basis for the clustering algorithm relies on the identification of pervasive gameplay design patterns for each game from a set of 75 possible patterns. The resulting hierarchy presents a view of the design space of pervasive games, and details of clusters and novel gameplay features are described. The paper concludes with a view over how the clusters relate to existing genres and models of pervasive games.",
"title": ""
},
{
"docid": "a1748382dad10ac819c33ddad88ddc06",
"text": "The emergence of peer-to-peer networking and the increase of home PC storage capacity are necessitating efficient scaleable methods for video clustering, recommending and browsing. Based on film theories and psychological models, color-mood is an important factor affecting user emotional preferences. We propose a compact set of features for color-mood analysis and subgenre discrimination. We introduce two color representations for scenes and full films in order to extract the essential moods from the films: a global measure for the color palette and a discriminative measure for the transitions of the moods in the movie. We captured the dominant color ratio and the pace of the movie. Despite the simplicity and efficiency of the features, the classification accuracy was surprisingly good, about 80%, possibly thanks to the prevalence of the color-mood association in feature films",
"title": ""
},
{
"docid": "35a0044724854f6fabeb777f80c8acd8",
"text": "Liposuction is one of the most commonly performed aesthetic procedures. It is performed worldwide as an outpatient procedure. However, the complications are underestimated and underreported by caregivers. We present a case of delayed diagnosis of bilothorax secondary to liver and gallbladder injury after tumescent liposuction. A 26-year-old female patient was transferred to our emergency department from an aesthetic clinic with worsening dyspnea, tachypnea and fatigue. She had undergone extensive liposuction of the thighs, buttocks, back and abdomen 5 days prior to presentation. A chest X-ray showed significant right-sided pleural effusion. Thoracentesis was performed and drained bilious fluid. CT scan of the abdomen revealed pleural, liver and gall bladder injury. An exploratory laparoscopy confirmed the findings, the collections were drained; cholecystectomy and intraoperative cholangiogram were performed. The patient did very well postoperatively and was discharged home in 2 days. Even though liposuction is considered a simple office-based procedure, its complications can be fatal. The lack of strict laws that exclusively place this procedure in the hands of medical professionals allow these procedures to still be done by less experienced hands and in outpatient-based settings. Our case serves to highlight yet another unique but potentially fatal complication of liposuction. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.",
"title": ""
},
{
"docid": "e0a34904165a600f102a55df14bb82b9",
"text": "We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.",
"title": ""
},
{
"docid": "345f54e3a6d00ecb734de529ed559933",
"text": "Size and cost of a switched mode power supply can be reduced by increasing the switching frequency. The maximum switching frequency and the maximum input voltage range, respectively, is limited by the minimum propagated on-time pulse, which is mainly determined by the level shifter speed. At switching frequencies above 10 MHz, a voltage conversion with an input voltage range up to 50 V and output voltages below 5 V requires an on-time of a pulse width modulated signal of less than 5 ns. This cannot be achieved with conventional level shifters. This paper presents a level shifter circuit, which controls an NMOS power FET on a high-voltage domain up to 50 V. The level shifter was implemented as part of a DCDC converter in a 180 nm BiCMOS technology. Experimental results confirm a propagation delay of 5 ns and on-time pulses of less than 3 ns. An overlapping clamping structure with low parasitic capacitances in combination with a high-speed comparator makes the level shifter also very robust against large coupling currents during high-side transitions as fast as 20 V/ns, verified by measurements. Due to the high dv/dt, capacitive coupling currents can be two orders of magnitude larger than the actual signal current. Depending on the conversion ratio, the presented level shifter enables an increase of the switching frequency for multi-MHz converters towards 100 MHz. It supports high input voltages up to 50 V and it can be applied also to other high-speed applications.",
"title": ""
},
{
"docid": "40a88d168ad559c1f68051e710c49d6b",
"text": "Modern robotic systems tend to get more complex sensors at their disposal, resulting in complex algorithms to process their data. For example, camera images are being used map their environment and plan their route. On the other hand, the robotic systems are becoming mobile more often and need to be as energy-efficient as possible; quadcopters are an example of this. These two trends interfere with each other: Data-intensive, complex algorithms require a lot of processing power, which is in general not energy-friendly nor mobile-friendly. In this paper, we describe how to move the complex algorithms to a computing platform that is not part of the mobile part of the setup, i.e. to offload the processing part to a base station. We use the ROS framework for this, as ROS provides a lot of existing computation solutions. On the mobile part of the system, our hard real-time execution framework, called LUNA, is used, to make it possible to run the loop controllers on it. The design of a `bridge node' is explained, which is used to connect the LUNA framework to ROS. The main issue to tackle is to subscribe to an arbitrary ROS topic at run-time, instead of defining the ROS topics at compile-time. Furthermore, it is shown that this principle is working and the requirements of network bandwidth are discussed.",
"title": ""
},
{
"docid": "d10d60684e6915ba7deb959f4fe842ae",
"text": "Supervised learning methods have long been used to allow musical interface designers to generate new mappings by example. We propose a method for harnessing machine learning algorithms within a radically interactive paradigm, in which the designer may repeatedly generate examples, train a learner, evaluate outcomes, and modify parameters in real-time within a single software environment. We describe our meta-instrument, the Wekinator, which allows a user to engage in on-the-fly learning using arbitrary control modalities and sound synthesis environments. We provide details regarding the system implementation and discuss our experiences using the Wekinator for experimentation and performance.",
"title": ""
},
{
"docid": "2698fd3ba60a1f96cb7ba8e636b8f380",
"text": "A break in the integrity of the plasma membrane immediately compromises this structure’s essential role as a barrier, and this can kill the affected cell. Yet animal cell plasma membranes, unprotected by a cell wall, are highly vulnerable to mechanically induced disruption. Moreover, many tissue environments generate and receive “physiological” levels of mechanical force that impose shear, tensile, and compressive stresses on constituent cells. We examine here the tissue conditions that lead to membrane disruptions and the mechanisms that cells use for preventing disruption-induced death and for rapidly restoring membrane integrity. Mechanical stress also induces an adaptive response by cells, which must sense and respond to this stimulus. A skeletal muscle, for example, experiences during its lifetime a highly variable degree of mechanical load. Its individual myofibers must adapt to this changing mechanical environment, hypertrophying in response to increased load and atrophying in response to decreased load. Such changes in tissue architecture are important because they improve mechanical functioning, make economical use of valuable resources, repair or replace injured components, and/or prevent future injury. We here review briefly what is known about the role of plasma membrane disruption in transducing cell responses to mechanical load.",
"title": ""
},
{
"docid": "19cf79ff6f793b660b76fed231d5c99b",
"text": "Triple negative breast cancers express receptors for gonadotropin-releasing hormone (GnRH) in more than 50 % of the cases, which can be targeted with peptidic analogs of GnRH, such as triptorelin. The current study investigates cytotoxic activity of triptorelin as a monotherapy and in treatment combinations with chemotherapeutic agents and inhibitors of the PI3K and the ERK pathways in in vitro models of triple negative breast cancers (TNBC). GnRH receptor expression of TNBC cell lines MDA-MB-231 and HCC1806 was investigated. Cells were treated with triptorelin, chemotherapeutic agents (cisplatin, docetaxel, AEZS-112), PI3K/AKT inhibitors (perifosine, AEZS-129), an ERK inhibitor (AEZS-134), and dual PI3K/ERK inhibitor AEZS-136 applied as single agent therapies and in combinations. MDA-MB-231 and HCC1806 TNBC cells both expressed receptors for GnRH on messenger (m)RNA and protein level and were found sensitive to triptorelin with a respective median effective concentration (EC50) of 31.21 ± 0.21 and 58.50 ± 19.50. Synergistic effects occurred when triptorelin was combined with cisplatin. In HCC1806 cells, synergy occurred when triptorelin was applied with PI3K/AKT inhibitors perifosine and AEZS-129. In MDA-MB-231 cells, synergy was observed after co-treatment with triptorelin and ERK inhibitor AEZS-134 and dual PI3K/ERK inhibitor AEZS-136. GnRH receptors on TNBC cells can be used for targeted therapy of these cancers with GnRH agonist triptorelin. Treatment combinations based on triptorelin and PI3K and ERK inhibitors and chemotherapeutic agent cisplatin have synergistic effects in in vitro models of TNBC. If confirmed in vivo, clinical trials based on triptorelin and cisplatin could be quickly carried out, as triptorelin is FDA approved for other indications and known to be well tolerated.",
"title": ""
},
{
"docid": "67509b64aaf1ead0bcba557d8cfe84bc",
"text": "Base on innovation resistance theory, this research builds the model of factors affecting consumers' resistance in using online travel in Thailand. Through the questionnaires and the SEM methods, empirical analysis results show that functional barriers are even greater sources of resistance to online travel website than psychological barriers. Online experience and independent travel experience have significantly influenced on consumer innovation resistance. Social influence plays an important role in this research.",
"title": ""
},
{
"docid": "16cbc21b3092a5ba0c978f0cf38710ab",
"text": "A major challenge to the problem of community question answering is the lexical and semantic gap between the sentence representations. Some solutions to minimize this gap includes the introduction of extra parameters to deep models or augmenting the external handcrafted features. In this paper, we propose a novel attentive recurrent tensor network for solving the lexical and semantic gap in community question answering. We introduce token-level and phrase-level attention strategy that maps input sequences to the output using trainable parameters. Further, we use the tensor parameters to introduce a 3-way interaction between question, answer and external features in vector space. We introduce simplified tensor matrices with L2 regularization that results in smooth optimization during training. The proposed model achieves state-of-the-art performance on the task of answer sentence selection (TrecQA and WikiQA datasets) while outperforming the current state-of-the-art on the tasks of best answer selection (Yahoo! L4) and answer triggering task (WikiQA).",
"title": ""
},
{
"docid": "b57229646d21f8fac2e06b2a6b724782",
"text": "This paper proposes a unified and consistent set of flexible tools to approximate important geometric attributes, including normal vectors and curvatures on arbitrary triangle meshes. We present a consistent derivation of these first and second order differential properties using averaging Voronoi cells and the mixed Finite-Element/Finite-Volume method, and compare them to existing formulations. Building upon previous work in discrete geometry, these operators are closely related to the continuous case, guaranteeing an appropriate extension from the continuous to the discrete setting: they respect most intrinsic properties of the continuous differential operators. We show that these estimates are optimal in accuracy under mild smoothness conditions, and demonstrate their numerical quality. We also present applications of these operators, such as mesh smoothing, enhancement, and quality checking, and show results of denoising in higher dimensions, such as for tensor images.",
"title": ""
},
{
"docid": "62d63357923c5a7b1ea21b8448e3cba3",
"text": "This paper presents a monocular and purely vision based pedestrian trajectory tracking and prediction framework with integrated map-based hazard inference. In Advanced Driver Assistance systems research, a lot of effort has been put into pedestrian detection over the last decade, and several pedestrian detection systems are indeed showing impressive results. Considerably less effort has been put into processing the detections further. We present a tracking system for pedestrians, which based on detection bounding boxes tracks pedestrians and is able to predict their positions in the near future. The tracking system is combined with a module which, based on the car's GPS position acquires a map and uses the road information in the map to know where the car can drive. Then the system warns the driver about pedestrians at risk, by combining the information about hazardous areas for pedestrians with a probabilistic position prediction for all observed pedestrians.",
"title": ""
},
{
"docid": "b7bf40c61ff4c73a8bbd5096902ae534",
"text": "—In therapeutic and functional applications transcutaneous electrical stimulation (TES) is still the most frequently applied technique for muscle and nerve activation despite the huge efforts made to improve implantable technologies. Stimulation electrodes play the important role in interfacing the tissue with the stimulation unit. Between the electrode and the excitable tissue there are a number of obstacles in form of tissue resistivities and permittivities that can only be circumvented by magnetic fields but not by electric fields and currents. However, the generation of magnetic fields needed for the activation of excitable tissues in the human body requires large and bulky equipment. TES devices on the other hand can be built cheap, small and light weight. The weak part in TES is the electrode that cannot be brought close enough to the excitable tissue and has to fulfill a number of requirements to be able to act as efficient as possible. The present review article summarizes the most important factors that influence efficient TES, presents and discusses currently used electrode materials, designs and configurations, and points out findings that have been obtained through modeling, simulation and testing.",
"title": ""
},
{
"docid": "dc5f111bfe7fa27ae7e9a4a5ba897b51",
"text": "We propose AffordanceNet, a new deep learning approach to simultaneously detect multiple objects and their affordances from RGB images. Our AffordanceNet has two branches: an object detection branch to localize and classify the object, and an affordance detection branch to assign each pixel in the object to its most probable affordance label. The proposed framework employs three key components for effectively handling the multiclass problem in the affordance mask: a sequence of deconvolutional layers, a robust resizing strategy, and a multi-task loss function. The experimental results on the public datasets show that our AffordanceNet outperforms recent state-of-the-art methods by a fair margin, while its end-to-end architecture allows the inference at the speed of 150ms per image. This makes our AffordanceNet well suitable for real-time robotic applications. Furthermore, we demonstrate the effectiveness of AffordanceNet in different testing environments and in real robotic applications. The source code is available at https://github.com/nqanh/affordance-net.",
"title": ""
},
{
"docid": "2d97c2f0ca6c30d7d22d10ed189239e0",
"text": "Human face pose estimation aims at estimating the gazing direction or head postures with 2D images. It gives some very important information such as communicative gestures, saliency detection and so on, which attracts plenty of attention recently. However, it is challenging because of complex background, various orientations and face appearance visibility. Therefore, a descriptive representation of face images and mapping it to poses are critical. In this paper, we make use of multi-modal data and propose a novel face pose estimation method that uses a novel deep learning framework named Multi-task Manifold Deep Learning (MDL). It is based on feature extraction with improved deep neural networks and multi-modal mapping relationship with multi-task learning. In the proposed deep learning based framework, Manifold Regularized Convolutional Layers (MRCL) improve traditional convolutional layers by learning the relationship among outputs of neurons. Besides, in the proposed mapping relationship learning method, different modals of face representations are naturally combined to learn the mapping function from face images to poses. In this way, the computed mapping model with multiple tasks is improved. Experimental results on three challenging benchmark datasets DPOSE, HPID and BKHPD demonstrate the outstanding performance of MDL.",
"title": ""
},
{
"docid": "cfdc217170410e60fb9323cc39d51aff",
"text": "Malware, i.e., malicious software, represents one of the main cyber security threats today. Over the last decade malware has been evolving in terms of the complexity of malicious software and the diversity of attack vectors. As a result modern malware is characterized by sophisticated obfuscation techniques, which hinder the classical static analysis approach. Furthermore, the increased amount of malware that emerges every day, renders a manual approach inefficient. This study tackles the problem of analyzing, detecting and classifying the vast amount of malware in a scalable, efficient and accurate manner. We propose a novel approach for detecting malware and classifying it to either known or novel, i.e., previously unseen malware family. The approach relies on Random Forests classifier for performing both malware detection and family classification. Furthermore, the proposed approach employs novel feature representations for malware classification, that significantly reduces the feature space, while achieving encouraging predictive performance. The approach was evaluated using behavioral traces of over 270,000 malware samples and 837 samples of benign software. The behavioral traces were obtained using a modified version of Cuckoo sandbox, that was able to harvest behavioral traces of the analyzed samples in a time-efficient manner. The proposed system achieves high malware detection rate and promising predictive performance in the family classification, opening the possibility of coping with the use of obfuscation and the growing number of malware.",
"title": ""
}
] |
scidocsrr
|
ccee78fff564c52782d6a4a9911d3da1
|
Peek-a-Boo: I see your smart home activities, even encrypted!
|
[
{
"docid": "ac805199736ec965572c636a26f41b1a",
"text": "We present several novel techniques to track (unassociated) mobile devices by abusing features of the Wi-Fi standard. This shows that using random MAC addresses, on its own, does not guarantee privacy. First, we show that information elements in probe requests can be used to fingerprint devices. We then combine these fingerprints with incremental sequence numbers, to create a tracking algorithm that does not rely on unique identifiers such as MAC addresses. Based on real-world datasets, we demonstrate that our algorithm can correctly track as much as 50% of devices for at least 20 minutes. We also show that commodity Wi-Fi devices use predictable scrambler seeds. These can be used to improve the performance of our tracking algorithm. Finally, we present two attacks that reveal the real MAC address of a device, even if MAC address randomization is used. In the first one, we create fake hotspots to induce clients to connect using their real MAC address. The second technique relies on the new 802.11u standard, commonly referred to as Hotspot 2.0, where we show that Linux and Windows send Access Network Query Protocol (ANQP) requests using their real MAC address.",
"title": ""
},
{
"docid": "2d2465aff21421330f82468858a74cf4",
"text": "There has been a tremendous increase in popularity and adoption of wearable fitness trackers. These fitness trackers predominantly use Bluetooth Low Energy (BLE) for communicating and syncing the data with user's smartphone. This paper presents a measurement-driven study of possible privacy leakage from BLE communication between the fitness tracker and the smartphone. Using real BLE traffic traces collected in the wild and in controlled experiments, we show that majority of the fitness trackers use unchanged BLE address while advertising, making it feasible to track them. The BLE traffic of the fitness trackers is found to be correlated with the intensity of user's activity, making it possible for an eavesdropper to determine user's current activity (walking, sitting, idle or running) through BLE traffic analysis. Furthermore, we also demonstrate that the BLE traffic can represent user's gait which is known to be distinct from user to user. This makes it possible to identify a person (from a small group of users) based on the BLE traffic of her fitness tracker. As BLE-based wearable fitness trackers become widely adopted, our aim is to identify important privacy implications of their usage and discuss prevention strategies.",
"title": ""
}
] |
[
{
"docid": "b6508d1f2b73b90a0cfe6399f6b44421",
"text": "An alternative to land spreading of manure effluents is to mass-culture algae on the N and P present in the manure and convert manure N and P into algal biomass. The objective of this study was to determine how the fatty acid (FA) content and composition of algae respond to changes in the type of manure, manure loading rate, and to whether the algae was grown with supplemental carbon dioxide. Algal biomass was harvested weekly from indoor laboratory-scale algal turf scrubber (ATS) units using different loading rates of raw and anaerobically digested dairy manure effluents and raw swine manure effluent. Manure loading rates corresponded to N loading rates of 0.2 to 1.3 g TN m−2 day−1 for raw swine manure effluent and 0.3 to 2.3 g TN m−2 day−1 for dairy manure effluents. In addition, algal biomass was harvested from outdoor pilot-scale ATS units using different loading rates of raw and anaerobically digested dairy manure effluents. Both indoor and outdoor units were dominated by Rhizoclonium sp. FA content values of the algal biomass ranged from 0.6 to 1.5% of dry weight and showed no consistent relationship to loading rate, type of manure, or to whether supplemental carbon dioxide was added to the systems. FA composition was remarkably consistent among samples and >90% of the FA content consisted of 14:0, 16:0, 16:1ω7, 16:1ω9, 18:0, 18:1ω9, 18:2 ω6, and 18:3ω3.",
"title": ""
},
{
"docid": "709427c308d9f670f75278d64c98ae8f",
"text": "An increase in world population along with a significant aging portion is forcing rapid rises in healthcare costs. The healthcare system is going through a transformation in which continuous monitoring of inhabitants is possible even without hospitalization. The advancement of sensing technologies, embedded systems, wireless communication technologies, nano technologies, and miniaturization makes it possible to develop smart systems to monitor activities of human beings continuously. Wearable sensors detect abnormal and/or unforeseen situations by monitoring physiological parameters along with other symptoms. Therefore, necessary help can be provided in times of dire need. This paper reviews the latest reported systems on activity monitoring of humans based on wearable sensors and issues to be addressed to tackle the challenges.",
"title": ""
},
{
"docid": "e7c97ff0a949f70b79fb7d6dea057126",
"text": "Most conventional document categorization methods require a large number of documents with labeled categories for training. These methods are hard to be applied in scenarios, such as scientific publications, where training data is expensive to obtain and categories could change over years and across domains. In this work, we propose UNEC, an unsupervised representation learning model that directly categories documents without the need of labeled training data. Specifically, we develop a novel cascade embedding approach. We first embed concepts, i.e., significant phrases mined from scientific publications, into continuous vectors, which capture concept semantics. Based on the concept similarity graph built from the concept embedding, we further embed concepts into a hidden category space, where the category information of concepts becomes explicit. Finally we categorize documents by jointly considering the category attribution of their concepts. Our experimental results show that UNEC significantly outperforms several strong baselines on a number of real scientific corpora, under both automatic and manual evaluation.",
"title": ""
},
{
"docid": "4efb2001e93d82ea5ed6b4674bc2edc4",
"text": "OBJECTIVE\nTo investigate the outcomes of tympanoplasty in elderly (≥60 years) compared with young patients (18-59 years).\n\n\nMATERIALS AND METHODS\nPatients who had been performed type I tympanoplasty between 2009 and 2012 were retrospectively analyzed. Preoperative and postoperative audiological results and the graft success of 42 older patients were compared with those of 72 younger ones.\n\n\nRESULTS\nThe mean preoperative air conduction levels were statistically significantly higher in older group (57.4±16.8dB) than younger group (37.3±10.3dB) (p<0.001). Preoperative bone conduction levels were statistically significantly higher in older group (28.5±13.4dB) than in the younger group (12.4±4.8dB) (p<0.001). The mean preoperative and postoperative air-bone gaps were statistically significantly larger in the older group (28.5±11.0dB, 16.4±9.0dB) than in the younger group (24.9±7.7dB, 12.4±8.0dB respectively) (p<0.001). The intragroup comparisons of preoperative and postoperative mean air-bone gaps revealed statistically significant improvements in both groups (p<0.001 for both). Graft success rates and the mean hearing gains were not statistically significantly different between the groups (p=0.225, p=0.786 respectively).\n\n\nCONCLUSION\nAlthough preoperative and postoperative air and bone conduction levels were worse in the older group, graft take rates and postoperative hearing gain did not differ between the groups. If the physical status is stable tympanoplasty procedure can be recommended for elderly patients.",
"title": ""
},
{
"docid": "11bb75b89cffe28bd280a09c3ae1436a",
"text": "In this paper, we introduce a novel technique, called F-APACS, for mining fuzzy association rules. Existing algorithms involve discretizing the domains of quantitative attributes into intervals so as to discover quantitative association rules. These intervals may not be concise and meaningful enough for human experts to easily obtain nontrivial knowledge from those rules discovered. Instead of using intervals, F-APACS employs linguistic terms to represent the revealed regularities and exceptions. The linguistic representation is especially useful when those rules discovered are presented to human experts for examination. The definition of linguistic terms is based on fuzzy set theory and hence we call the rules having these terms fuzzy association rules. The use of fuzzy techniques makes F-APACS resilient to noises such as inaccuracies in physical measurements of real-life entities and missing values in the databases. Furthermore, F-APACS employs adjusted difference analysis which has the advantage that it does not require any user-supplied thresholds which are often hard to determine. The fact that F-APACS is able to mine fuzzy association rules which utilize linguistic representation and that it uses an objective yet meaningful confidence measure to determine the interestingness of a rule makes it very effective at the discovery of rules from a real-life transactional database of a PBX system provided by a telecommunication corporation.",
"title": ""
},
{
"docid": "5959fac6fabf495a0af372ffb3add86f",
"text": "A versatile, platform independent and easy to use Java suite for large-scale gene expression analysis was developed. Genesis integrates various tools for microarray data analysis such as filters, normalization and visualization tools, distance measures as well as common clustering algorithms including hierarchical clustering, self-organizing maps, k-means, principal component analysis, and support vector machines. The results of the clustering are transparent across all implemented methods and enable the analysis of the outcome of different algorithms and parameters. Additionally, mapping of gene expression data onto chromosomal sequences was implemented to enhance promoter analysis and investigation of transcriptional control mechanisms.",
"title": ""
},
{
"docid": "4c7d66d767c9747fdd167f1be793d344",
"text": "In this paper, we introduce a new approach to location estimation where, instead of locating a single client, we simultaneously locate a set of wireless clients. We present a Bayesian hierarchical model for indoor location estimation in wireless networks. We demonstrate that our model achieves accuracy that is similar to other published models and algorithms. By harnessing prior knowledge, our model eliminates the requirement for training data as compared with existing approaches, thereby introducing the notion of a fully adaptive zero profiling approach to location estimation.",
"title": ""
},
{
"docid": "c406d734f32cc4b88648c037d9d10e46",
"text": "In this paper, we review the state-of-the-art technologies for driver inattention monitoring, which can be classified into the following two main categories: 1) distraction and 2) fatigue. Driver inattention is a major factor in most traffic accidents. Research and development has actively been carried out for decades, with the goal of precisely determining the drivers' state of mind. In this paper, we summarize these approaches by dividing them into the following five different types of measures: 1) subjective report measures; 2) driver biological measures; 3) driver physical measures; 4) driving performance measures; and 5) hybrid measures. Among these approaches, subjective report measures and driver biological measures are not suitable under real driving conditions but could serve as some rough ground-truth indicators. The hybrid measures are believed to give more reliable solutions compared with single driver physical measures or driving performance measures, because the hybrid measures minimize the number of false alarms and maintain a high recognition rate, which promote the acceptance of the system. We also discuss some nonlinear modeling techniques commonly used in the literature.",
"title": ""
},
{
"docid": "8943c84763a9049f518f635aecc6507f",
"text": "Over the years, autonomous systems have entered almost all the facets of human life. Gradually, higher levels of autonomy are being incorporated into cyber-physical systems (CPS) and Internet-of-things (IoT) devices. However, safety and security has always been a lurking fear behind adoption of autonomous systems such as self-driving vehicles. To address these issues, we develop a framework for quantifying trust in autonomous system. This framework consist of an estimation method, which considers effect of adversarial attacks on sensor measurements. Our estimation algorithm uses a set-membership method during identification of safe states of the system. An important feature of this algorithm is that it can distinguish between adversarial noise and other disturbances. We also verify the autonomous system by first modeling it as networks of priced timed automata (NPTA) with stochastic semantics and then using statistical probabilistic model checking to verify it against probabilistic specifications. The verification process ensures that the autonomous system behave in accordance to safety specifications within a probabilistic threshold. For quantifying trust on the system, we use confidence results provided by the model checking tool. We have demonstrated our approach by using a case study of adaptive cruise control system under sensor spoofing attacks.",
"title": ""
},
{
"docid": "e91310da7635df27b5c4056388cc6e52",
"text": "This paper presents a new metric for automated registration of multi-modal sensor data. The metric is based on the alignment of the orientation of gradients formed from the two candidate sensors. Data registration is performed by estimating the sensors’ extrinsic parameters that minimises the misalignment of the gradients. The metric can operate in a large range of applications working on both 2D and 3D sensor outputs and is suitable for both (i) single scan data registration and (ii) multi-sensor platform calibration using multiple scans. Unlike traditional calibration methods, it does not require markers or other registration aids to be placed in the scene. The effectiveness of the new method is demonstrated with experimental results on a variety of camera-lidar and camera-camera calibration problems. The novel metric is validated through comparisons with state of the art methods. Our approach is shown to give high quality registrations under all tested conditions. C © 2014 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "0e803e853422328aeef59e426410df48",
"text": "We present WatchWriter, a finger operated keyboard that supports both touch and gesture typing with statistical decoding on a smartwatch. Just like on modern smartphones, users type one letter per tap or one word per gesture stroke on WatchWriter but in a much smaller spatial scale. WatchWriter demonstrates that human motor control adaptability, coupled with modern statistical decoding and error correction technologies developed for smartphones, can enable a surprisingly effective typing performance despite the small watch size. In a user performance experiment entirely run on a smartwatch, 36 participants reached a speed of 22-24 WPM with near zero error rate.",
"title": ""
},
{
"docid": "1bdfcf7f162bfc8c8c51a153fd4ea437",
"text": "In this paper, modified image segmentation techniques were applied on MRI scan images in order to detect brain tumors. Also in this paper, a modified Probabilistic Neural Network (PNN) model that is based on learning vector quantization (LVQ) with image and data analysis and manipulation techniques is proposed to carry out an automated brain tumor classification using MRI-scans. The assessment of the modified PNN classifier performance is measured in terms of the training performance, classification accuracies and computational time. The simulation results showed that the modified PNN gives rapid and accurate classification compared with the image processing and published conventional PNN techniques. Simulation results also showed that the proposed system out performs the corresponding PNN system presented in [30], and successfully handle the process of brain tumor classification in MRI image with 100% accuracy when the spread value is equal to 1. These results also claim that the proposed LVQ-based PNN system decreases the processing time to approximately 79% compared with the conventional PNN which makes it very promising in the field of in-vivo brain tumor detection and identification. Keywords— Probabilistic Neural Network, Edge detection, image segmentation, brain tumor detection and identification",
"title": ""
},
{
"docid": "9b94a383b2a6e778513a925cc88802ad",
"text": "Pedestrian behavior modeling and analysis is important for crowd scene understanding and has various applications in video surveillance. Stationary crowd groups are a key factor influencing pedestrian walking patterns but was largely ignored in literature. In this paper, a novel model is proposed for pedestrian behavior modeling by including stationary crowd groups as a key component. Through inference on the interactions between stationary crowd groups and pedestrians, our model can be used to investigate pedestrian behaviors. The effectiveness of the proposed model is demonstrated through multiple applications, including walking path prediction, destination prediction, personality classification, and abnormal event detection. To evaluate our model, a large pedestrian walking route dataset1 is built. The walking routes of 12, 684 pedestrians from a one-hour crowd surveillance video are manually annotated. It will be released to the public and benefit future research on pedestrian behavior analysis and crowd scene understanding.",
"title": ""
},
{
"docid": "b76462ec4dc505e3e7d4e2126a461668",
"text": "This paper describes an effective and efficient image classification framework nominated distributed deep representation learning model (DDRL). The aim is to strike the balance between the computational intensive deep learning approaches (tuned parameters) which are intended for distributed computing, and the approaches that focused on the designed parameters but often limited by sequential computing and cannot scale up. In the evaluation of our approach, it is shown that DDRL is able to achieve state-of-art classification accuracy efficiently on both medium and large datasets. The result implies that our approach is more efficient than the conventional deep learning approaches, and can be applied to big data that is too complex for parameter designing focused approaches. More specifically, DDRL contains two main components, i.e., feature extraction and selection. A hierarchical distributed deep representation learning algorithm is designed to extract image statistics and a nonlinear mapping algorithm is used to map the inherent statistics into abstract features. Both algorithms are carefully designed to avoid millions of parameters tuning. This leads to a more compact solution for image classification of big data. We note that the proposed approach is designed to be friendly with parallel computing. It is generic and easy to be deployed to different distributed computing resources. In the experiments, the large∗Corresponding author. Tel.:+86 13981763623; Fax: +86-28-61831655. Email address: ledong@uestc.edu.cn (Le Dong) Preprint submitted to Pattern Recognition July 5, 2016 scale image datasets are classified with a DDRM implementation on Hadoop MapReduce, which shows high scalability and resilience.",
"title": ""
},
{
"docid": "12e882a9c8e594de143547ec0fffd9f9",
"text": "We introduce the task of book structure labeling: segmenting and assigning a fixed category (such as TABLE OF CONTENTS, PREFACE, INDEX) to the document structure of printed books. We manually annotate the page-level structural categories for a large dataset totaling 294,816 pages in 1,055 books evenly sampled from 1750– 1922, and present empirical results comparing the performance of several classes of models. The best-performing model, a bidirectional LSTM with rich features, achieves an overall accuracy of 95.8 and a class-balanced macro F-score of 71.4.",
"title": ""
},
{
"docid": "dbf3650aadb4c18500ec3676d23dba99",
"text": "Current search engines do not, in general, perform well with longer, more verbose queries. One of the main issues in processing these queries is identifying the key concepts that will have the most impact on effectiveness. In this paper, we develop and evaluate a technique that uses query-dependent, corpus-dependent, and corpus-independent features for automatic extraction of key concepts from verbose queries. We show that our method achieves higher accuracy in the identification of key concepts than standard weighting methods such as inverse document frequency. Finally, we propose a probabilistic model for integrating the weighted key concepts identified by our method into a query, and demonstrate that this integration significantly improves retrieval effectiveness for a large set of natural language description queries derived from TREC topics on several newswire and web collections.",
"title": ""
},
{
"docid": "419116a3660f1c1f7127de31f311bd1e",
"text": "Unlike dimensionality reduction (DR) tools for single-view data, e.g., principal component analysis (PCA), canonical correlation analysis (CCA) and generalized CCA (GCCA) are able to integrate information from multiple feature spaces of data. This is critical in multi-modal data fusion and analytics, where samples from a single view may not be enough for meaningful DR. In this work, we focus on a popular formulation of GCCA, namely, MAX-VAR GCCA. The classic MAX-VAR problem is optimally solvable via eigen-decomposition, but this solution has serious scalability issues. In addition, how to impose regularizers on the sought canonical components was unclear - while structure-promoting regularizers are often desired in practice. We propose an algorithm that can easily handle datasets whose sample and feature dimensions are both large by exploiting data sparsity. The algorithm is also flexible in incorporating regularizers on the canonical components. Convergence properties of the proposed algorithm are carefully analyzed. Numerical experiments are presented to showcase its effectiveness.",
"title": ""
},
{
"docid": "7d9cb4c28ab2b4c27e061bcd18577be5",
"text": "Plantar fasciitis, a self-limiting condition, is a common cause of heel pain in adults. It affects more than 1 million persons per year, and two-thirds of patients with plantar fasciitis will seek care from their family physician. Plantar fasciitis affects sedentary and athletic populations. Obesity, excessive foot pronation, excessive running, and prolonged standing are risk factors for developing plantar fasciitis. Diagnosis is primarily based on history and physical examination. Patients may present with heel pain with their first steps in the morning or after prolonged sitting, and sharp pain with palpation of the medial plantar calcaneal region. Discomfort in the proximal plantar fascia can be elicited by passive ankle/first toe dorsiflexion. Diagnostic imaging is rarely needed for the initial diagnosis of plantar fasciitis. Use of ultrasonography and magnetic resonance imaging is reserved for recalcitrant cases or to rule out other heel pathology; findings of increased plantar fascia thickness and abnormal tissue signal the diagnosis of plantar fasciitis. Conservative treatments help with the disabling pain. Initially, patient-directed treatments consisting of rest, activity modification, ice massage, oral analgesics, and stretching techniques can be tried for several weeks. If heel pain persists, then physician-prescribed treatments such as physical therapy modalities, foot orthotics, night splinting, and corticosteroid injections should be considered. Ninety percent of patients will improve with these conservative techniques. Patients with chronic recalcitrant plantar fasciitis lasting six months or longer can consider extracorporeal shock wave therapy or plantar fasciotomy.",
"title": ""
},
{
"docid": "c22d64723df5233bfa5e41b8eb10e1d5",
"text": "State-of-the-art millimeter wave (MMW) multiple-input, multiple-output (MIMO) frequency-modulated continuous-wave (FMCW) radars allow high precision direction of arrival (DOA) estimation with an optimized antenna aperture size [1]. Typically, these systems operate using a single polarization. Fully polarimetric radars on the other hand are used to obtain the polarimetric scattering matrix (S-matrix) and extract polari-metric scattering information that otherwise remains concealed [2]. Combining both approaches by assembly of a dual-polarized waveguide antenna and a 77 GHz MIMO FMCW radar system results in the fully polarimetric MIMO radar system presented in this paper. By applying a MIMO-adapted version of the isolated antenna calibration technique (IACT) from [3], the radar system is calibrated and laboratory measurements of different canonical objects such as spheres, plates, dihedrals and trihedrals are performed. A statistical evaluation of these measurement results demonstrates the usability of the approach and shows that basic polarimetric scattering phenomena are reliably identified.",
"title": ""
},
{
"docid": "6ec223d9246430f190bb71f2dc825772",
"text": "Emerging large-scale monitoring applications rely on continuous tracking of complex data-analysis queries over collections of massive, physically-distributed data streams. Thus, in addition to the spaceand time-efficiency requirements of conventional stream processing (at each remote monitor site), effective solutions also need to guarantee communication efficiency (over the underlying communication network). The complexity of the monitored query adds to the difficulty of the problem — this is especially true for nonlinear queries (e.g., joins), where no obvious solutions exist for distributing the monitor condition across sites. The recently proposed geometric method offers a generic methodology for splitting an arbitrary (non-linear) global threshold-monitoring task into a collection of local site constraints; still, the approach relies on maintaining the complete stream(s) at each site, thus raising serious efficiency concerns for massive data streams. In this paper, we propose novel algorithms for efficiently tracking a broad class of complex aggregate queries in such distributed-streams settings. Our tracking schemes rely on a novel combination of the geometric method with compact sketch summaries of local data streams, and maintain approximate answers with provable error guarantees, while optimizing space and processing costs at each remote site and communication cost across the network. One of our key technical insights for the effective use of the geometric method lies in exploiting a much lower-dimensional space for monitoring the sketch-based estimation query. Due to the complex, highly nonlinear nature of these estimates, efficiently monitoring the local geometric constraints poses challenging algorithmic issues for which we propose novel solutions. Experimental results on real-life data streams verify the effectiveness of our approach.",
"title": ""
}
] |
scidocsrr
|
49b541dccb64a145cc40ba737de3937a
|
Decoding sequential finger movements from preparatory activity in higher-order motor regions: a functional magnetic resonance imaging multi-voxel pattern analysis.
|
[
{
"docid": "7eaf23745e25a7beb5183457599bcdaf",
"text": "Perceptual experience consists of an enormous number of possible states. Previous fMRI studies have predicted a perceptual state by classifying brain activity into prespecified categories. Constraint-free visual image reconstruction is more challenging, as it is impractical to specify brain activity for all possible images. In this study, we reconstructed visual images by combining local image bases of multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binary-contrast, 10 x 10-patch images (2(100) possible states) were accurately reconstructed without any image prior on a single trial or volume basis by measuring brain activity only for several hundred random images. Reconstruction was also used to identify the presented image among millions of candidates. The results suggest that our approach provides an effective means to read out complex perceptual states from brain activity while discovering information representation in multivoxel patterns.",
"title": ""
}
] |
[
{
"docid": "26cd0260e2a460ac5aa96466ff92f748",
"text": "Deep Convolutional Neural Networks (CNNs) have demonstrated excellent performance in image classification, but still show room for improvement in object-detection tasks with many categories, in particular for cluttered scenes and occlusion. Modern detection algorithms like Regions with CNNs (Girshick et al., 2014) rely on Selective Search (Uijlings et al., 2013) to propose regions which with high probability represent objects, where in turn CNNs are deployed for classification. Selective Search represents a family of sophisticated algorithms that are engineered with multiple segmentation, appearance and saliency cues, typically coming with a significant runtime overhead. Furthermore, (Hosang et al., 2014) have shown that most methods suffer from low reproducibility due to unstable superpixels, even for slight image perturbations. Although CNNs are subsequently used for classification in top-performing object-detection pipelines, current proposal methods are agnostic to how these models parse objects and their rich learned representations. As a result they may propose regions which may not resemble high-level objects or totally miss some of them. To overcome these drawbacks we propose a boosting approach which directly takes advantage of hierarchical CNN features for detecting regions of interest fast. We demonstrate its performance on ImageNet 2013 detection benchmark and compare it with state-of-the-art methods. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.",
"title": ""
},
{
"docid": "f9eed4f99d70c51dc626a61724540d3c",
"text": "A soft-start circuit with soft-recovery function for DC-DC converters is presented in this paper. The soft-start strategy is based on a linearly ramped-up reference and an error amplifier with minimum selector implemented with a three-limb differential pair skillfully. The soft-recovery strategy is based on a compact clamp circuit. The ramp voltage would be clamped once the feedback voltage is detected lower than a threshold, which could control the output to be recovered slowly and linearly. A monolithic DC-DC buck converter with proposed circuit has been fabricated with a 0.5μm CMOS process for validation. The measurement result shows that the ramp-based soft-start and soft-recovery circuit have good performance and agree well with the theoretical analysis.",
"title": ""
},
{
"docid": "04d94b476a40466117af236870f22035",
"text": "With advances in deep learning, neural network variants are becoming the dominant architecture for many NLP tasks. In this project, we apply several deep learning approaches to question answering, with a focus on the bAbI dataset.",
"title": ""
},
{
"docid": "ea544ffc7eeee772388541d0d01812a7",
"text": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy. In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group — an international effort to develop new standardized response criteria for clinical trials in brain tumours — has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented.",
"title": ""
},
{
"docid": "884b880ac8f8c406baec25d616643ac0",
"text": "Repeated retrieval practice is a powerful learning tool for promoting long-term retention, but students use this tool ineffectively when regulating their learning. The current experiments evaluated the efficacy of a minimal intervention aimed at improving students' self-regulated use of repeated retrieval practice. Across 2 experiments, students made decisions about when to study, engage in retrieval practice, or stop learning a set of foreign language word pairs. Some students received direct instruction about how to use repeated retrieval practice. These instructions emphasized the mnemonic benefits of retrieval practice over a less effective strategy (restudying) and told students how to use repeated retrieval practice to maximize their performance-specifically, that they should recall a translation correctly 3 times during learning. This minimal intervention promoted more effective self-regulated use of retrieval practice and better retention of the translations compared to a control group that received no instruction. Students who experienced this intervention also showed potential for long-term changes in self-regulated learning: They spontaneously used repeated retrieval practice 1 week later to learn new materials. These results provide a promising first step for developing guidelines for teaching students how to regulate their learning more effectively using repeated retrieval practice. (PsycINFO Database Record",
"title": ""
},
{
"docid": "c10829be320a9be6ecbc9ca751e8b56e",
"text": "This article analyzes two decades of research regarding the mass media's role in shaping, perpetuating, and reducing the stigma of mental illness. It concentrates on three broad areas common in media inquiry: production, representation, and audiences. The analysis reveals that descriptions of mental illness and the mentally ill are distorted due to inaccuracies, exaggerations, or misinformation. The ill are presented not only as peculiar and different, but also as dangerous. Thus, the media perpetuate misconceptions and stigma. Especially prominent is the absence of agreed-upon definitions of \"mental illness,\" as well as the lack of research on the inter-relationships in audience studies between portrayals in the media and social perceptions. The analysis concludes with suggestions for further research on mass media's inter-relationships with mental illness.",
"title": ""
},
{
"docid": "14cc3608216dd17e7bcbc3e6acba66db",
"text": "Fluorescamine is a new reagent for the detection of primary amines in the picomole range. Its reaction with amines is almost instantaneous at room temperature in aqueous media. The products are highly fluorescent, whereas the reagent and its degradation products are nonfluorescent. Applications are discussed.",
"title": ""
},
{
"docid": "a3802513f0f72104331106228799020f",
"text": "BACKGROUND\nQualitative systematic reviews are increasing in popularity in evidence based health care. Difficulties have been reported in conducting literature searches of qualitative research using the PICO search tool. An alternative search tool, entitled SPIDER, was recently developed for more effective searching of qualitative research, but remained untested beyond its development team.\n\n\nMETHODS\nIn this article we tested the 'SPIDER' search tool in a systematic narrative review of qualitative literature investigating the health care experiences of people with Multiple Sclerosis. Identical search terms were combined into the PICO or SPIDER search tool and compared across Ovid MEDLINE, Ovid EMBASE and EBSCO CINAHL Plus databases. In addition, we added to this method by comparing initial SPIDER and PICO tools to a modified version of PICO with added qualitative search terms (PICOS).\n\n\nRESULTS\nResults showed a greater number of hits from the PICO searches, in comparison to the SPIDER searches, with greater sensitivity. SPIDER searches showed greatest specificity for every database. The modified PICO demonstrated equal or higher sensitivity than SPIDER searches, and equal or lower specificity than SPIDER searches. The modified PICO demonstrated lower sensitivity and greater specificity than PICO searches.\n\n\nCONCLUSIONS\nThe recommendations for practice are therefore to use the PICO tool for a fully comprehensive search but the PICOS tool where time and resources are limited. Based on these limited findings the SPIDER tool would not be recommended due to the risk of not identifying relevant papers, but has potential due to its greater specificity.",
"title": ""
},
{
"docid": "7374e16190e680669f76fc7972dc3975",
"text": "Open-plan office layout is commonly assumed to facilitate communication and interaction between co-workers, promoting workplace satisfaction and team-work effectiveness. On the other hand, open-plan layouts are widely acknowledged to be more disruptive due to uncontrollable noise and loss of privacy. Based on the occupant survey database from Center for the Built Environment (CBE), empirical analyses indicated that occupants assessed Indoor Environmental Quality (IEQ) issues in different ways depending on the spatial configuration (classified by the degree of enclosure) of their workspace. Enclosed private offices clearly outperformed open-plan layouts in most aspects of IEQ, particularly in acoustics, privacy and the proxemics issues. Benefits of enhanced ‘ease of interaction’ were smaller than the penalties of increased noise level and decreased privacy resulting from open-plan office configuration.",
"title": ""
},
{
"docid": "568bc5272373a4e3fd38304f2c381e0f",
"text": "With the growing complexity of web applications, identifying web interfaces that can be used for testing such applications has become increasingly challenging. Many techniques that work effectively when applied to simple web applications are insufficient when used on modern, dynamic web applications, and may ultimately result in inadequate testing of the applications' functionality. To address this issue, we present a technique for automatically discovering web application interfaces based on a novel static analysis algorithm. We also report the results of an empirical evaluation in which we compare our technique against a traditional approach. The results of the comparison show that our technique can (1) discover a higher number of interfaces and (2) help generate test inputs that achieve higher coverage.",
"title": ""
},
{
"docid": "a3cea7fc6c034c7f06595e8e1150e3c8",
"text": "Tweets are Donald Trump's quickest and most frequently employed way to send shockwaves. Tweets allow the user to respond quickly to the Kairos of developing situations—an advantage to the medium, but perhaps also a disadvantage, as Trump's 3am and 4am tweets tend to show. In this paper, we apply the three classical modes of rhetoric—forensic/judicial, deliberative, and epideictic/ceremonial rhetoric—to see how the modes manifest in Donald Trump's tweets as a presidential candidate, as President-Elect, and as President. Does the use of these three modes shift as Trump's rhetorical situation and especially subject position shift? Besides looking for quantitative changes in Trump's favored modes over time, our qualitative analysis includes representative examples and interesting examples of Trump's use of each mode (and combinations of them) during each time period.",
"title": ""
},
{
"docid": "5a7f0a75bacc6dcf6d80246b0cdae01c",
"text": "This paper concerns the fully automatic direct in vivo measurement of active and passive dynamic skeletal muscle states using ultrasound imaging. Despite the long standing medical need (myopathies, neuropathies, pain, injury, ageing), currently technology (electromyography, dynamometry, shear wave imaging) provides no general, non-invasive method for online estimation of skeletal intramuscular states. Ultrasound provides a technology in which static and dynamic muscle states can be observed non-invasively, yet current computational image understanding approaches are inadequate. We propose a new approach in which deep learning methods are used for understanding the content of ultrasound images of muscle in terms of its measured state. Ultrasound data synchronized with electromyography of the calf muscles, with measures of joint torque/angle were recorded from 19 healthy participants (6 female, ages: 30 ± 7.7). A segmentation algorithm previously developed by our group was applied to extract a region of interest of the medial gastrocnemius. Then a deep convolutional neural network was trained to predict the measured states (joint angle/torque, electromyography) directly from the segmented images. Results revealed for the first time that active and passive muscle states can be measured directly from standard b-mode ultrasound images, accurately predicting for a held out test participant changes in the joint angle, electromyography, and torque with as little error as 0.022°, 0.0001V, 0.256Nm (root mean square error) respectively.",
"title": ""
},
{
"docid": "347e7b80b2b0b5cd5f0736d62fa022ae",
"text": "This article presents the results of an interview study on how people perceive and play social network games on Facebook. During recent years, social games have become the biggest genre of games if measured by the number of registered users. These games are designed to cater for large audiences in their design principles and values, a free-to-play revenue model and social network integration that make them easily approachable and playable with friends. Although these games have made the headlines and have been seen to revolutionize the game industry, we still lack an understanding of how people perceive and play them. For this article, we interviewed 18 Finnish Facebook users from a larger questionnaire respondent pool of 134 people. This study focuses on a user-centric approach, highlighting the emergent experiences and the meaning-making of social games players. Our findings reveal that social games are usually regarded as single player games with a social twist, and as suffering partly from their design characteristics, while still providing a wide spectrum of playful experiences for different needs. The free-to-play revenue model provides an easy access to social games, but people disagreed with paying for additional content for several reasons.",
"title": ""
},
{
"docid": "7a0ed38af9775a77761d6c089db48188",
"text": "We introduce polyglot language models, recurrent neural network models trained to predict symbol sequences in many different languages using shared representations of symbols and conditioning on typological information about the language to be predicted. We apply these to the problem of modeling phone sequences—a domain in which universal symbol inventories and cross-linguistically shared feature representations are a natural fit. Intrinsic evaluation on held-out perplexity, qualitative analysis of the learned representations, and extrinsic evaluation in two downstream applications that make use of phonetic features show (i) that polyglot models better generalize to held-out data than comparable monolingual models and (ii) that polyglot phonetic feature representations are of higher quality than those learned monolingually.",
"title": ""
},
{
"docid": "385789e37297644dc79ce9e39ee0f7cd",
"text": "A key issue in Low Voltage (LV) distribution systems is to identify strategies for the optimal management and control in the presence of Distributed Energy Resources (DERs). To reduce the number of variables to be monitored and controlled, virtual levels of aggregation, called Virtual Microgrids (VMs), are introduced and identified by using new models of the distribution system. To this aim, this paper, revisiting and improving the approach outlined in a conference paper, presents a sensitivity-based model of an LV distribution system, supplied by an Medium/Low Voltage (MV/LV) substation and composed by several feeders, which is suitable for the optimal management and control of the grid and for VM definition. The main features of the proposed method are: it evaluates the sensitivity coefficients in a closed form; it provides an overview of the sensitivity of the network to the variations of each DER connected to the grid; and it presents a limited computational burden. A comparison of the proposed method with both the exact load flow solutions and a perturb-and-observe method is discussed in a case study. Finally, the method is used to evaluate the impact of the DERs on the nodal voltages of the network.",
"title": ""
},
{
"docid": "f018db7f20245180d74e4eb07b99e8d3",
"text": "Particle filters can become quite inefficient when being applied to a high-dimensional state space since a prohibitively large number of samples may be required to approximate the underlying density functions with desired accuracy. In this paper, by proposing an adaptive Rao-Blackwellized particle filter for tracking in surveillance, we show how to exploit the analytical relationship among state variables to improve the efficiency and accuracy of a regular particle filter. Essentially, the distributions of the linear variables are updated analytically using a Kalman filter which is associated with each particle in a particle filtering framework. Experiments and detailed performance analysis using both simulated data and real video sequences reveal that the proposed method results in more accurate tracking than a regular particle filter",
"title": ""
},
{
"docid": "95b0ad5e4898cb1610f2a48c3828eb92",
"text": "Talent management is found to be important for modern organizations because of the advent of the Modern economy, new generations entering the human resource and the need for businesses to become more strategic and competitive, which implies new ways of managing resource and human capital. In this research, the relationship between Talent management, employee Retention and organizational trust is investigated. The aim of the article is to examine the effect of Talent management on employee Retention through organizational trust among staffs of Isfahan University in Iran. The research method is a descriptive survey. The statistical population consists of staffs of Isfahan University in Iran. The sample included 280 employees, which were selected randomly. Data have been collected by a researcher-developed questionnaire and sampling has been done through census and analyzed using SPSS and AMOS software. The validity of the instrument was achieved through content validity and the reliability through Cronbach Alpha. The results of hypothesis testing indicate that there is a significant relationship between Talent management, employee Retention and organizational trust. The study is significant in that it draws attention to the effects of talent management on organizational trust and employees Retention in organization.",
"title": ""
},
{
"docid": "37e552e4352cd5f8c76dcefd856e0fc8",
"text": "Following the increasing popularity of mobile ecosystems, cybercriminals have increasingly targeted them, designing and distributing malicious apps that steal information or cause harm to the device’s owner. Aiming to counter them, detection techniques based on either static or dynamic analysis that model Android malware, have been proposed. While the pros and cons of these analysis techniques are known, they are usually compared in the context of their limitations e.g., static analysis is not able to capture runtime behaviors, full code coverage is usually not achieved during dynamic analysis, etc. Whereas, in this paper, we analyze the performance of static and dynamic analysis methods in the detection of Android malware and attempt to compare them in terms of their detection performance, using the same modeling approach. To this end, we build on MAMADROID, a state-of-the-art detection system that relies on static analysis to create a behavioral model from the sequences of abstracted API calls. Then, aiming to apply the same technique in a dynamic analysis setting, we modify CHIMP, a platform recently proposed to crowdsource human inputs for app testing, in order to extract API calls’ sequences from the traces produced while executing the app on a CHIMP virtual device. We call this system AUNTIEDROID and instantiate it by using both automated (Monkey) and user-generated inputs. We find that combining both static and dynamic analysis yields the best performance, with F -measure reaching 0.92. We also show that static analysis is at least as effective as dynamic analysis, depending on how apps are stimulated during execution, and, finally, investigate the reasons for inconsistent misclassifications across methods.",
"title": ""
},
{
"docid": "7e6a3a04c24a0fc24012619d60ebb87b",
"text": "The recent trend toward democratization in countries throughout the globe has challenged scholars to pursue two potentially contradictory goals: to develop a differentiated conceptualization of democracy that captures the diverse experiences of these countries; and to extend the analysis to this broad range of cases without ‘stretching’ the concept. This paper argues that this dual challenge has led to a proliferation of conceptual innovations, including hundreds of subtypes of democracy—i.e., democracy ‘with adjectives.’ The paper explores the strengths and weaknesses of three important strategies of innovation that have emerged: ‘precising’ the definition of democracy; shifting the overarching concept with which democracy is associated; and generating various forms of subtypes. Given the complex structure of meaning produced by these strategies for refining the concept of democracy, we conclude by offering an old piece of advice with renewed urgency: It is imperative that scholars situate themselves in relation to this structure of meaning by clearly defining and explicating the conception of democracy they are employing.",
"title": ""
},
{
"docid": "091ce4faf552f5ab452d6b4d1aad284b",
"text": "An indoor climate is closely related to human health, well-being, and comfort. Thus, indoor climate monitoring and management are prevalent in many places, from public offices to residential houses. Our previous research has shown that an active plant wall system can effectively reduce the concentrations of particulate matter and volatile organic compounds and stabilize the carbon dioxide concentration in an indoor environment. However, regular plant care is restricted by geography and can be costly in terms of time and money, which poses a significant challenge to the widespread deployment of plant walls. In this paper, we propose a remote monitoring and control system that is specific to the plant walls. The system utilizes the Internet of Things technology and the Azure public cloud platform to automate the management procedure, improve the scalability, enhance user experiences of plant walls, and contribute to a green indoor climate.",
"title": ""
}
] |
scidocsrr
|
68a51e53c4e2baf2b1204c8f09a11ae1
|
Predictive phase locked loop for sensorless control of PMSG based variable-speed wind turbines
|
[
{
"docid": "fec4f80f907d65d4b73480b9c224d98a",
"text": "This paper presents a novel finite position set-phase locked loop (FPS-PLL) for sensorless control of surface-mounted permanent-magnet synchronous generators (PMSGs) in variable-speed wind turbines. The proposed FPS-PLL is based on the finite control set-model predictive control concept, where a finite number of rotor positions are used to estimate the back electromotive force of the PMSG. Then, the estimated rotor position, which minimizes a certain cost function, is selected to be the optimal rotor position. This eliminates the need of a fixed-gain proportional-integral controller, which is commonly utilized in the conventional PLL. The performance of the proposed FPS-PLL has been experimentally investigated and compared with that of the conventional one using a 14.5 kW PMSG with a field-oriented control scheme utilized as the generator control strategy. Furthermore, the robustness of the proposed FPS-PLL is investigated against PMSG parameters variations.",
"title": ""
}
] |
[
{
"docid": "781890e1325126fe262a0587b26f9b6b",
"text": "We evaluate the character-level translation method for neural semantic parsing on a large corpus of sentences annotated with Abstract Meaning Representations (AMRs). Using a sequence-tosequence model, and some trivial preprocessing and postprocessing of AMRs, we obtain a baseline accuracy of 53.1 (F-score on AMR-triples). We examine five different approaches to improve this baseline result: (i) reordering AMR branches to match the word order of the input sentence increases performance to 58.3; (ii) adding part-of-speech tags (automatically produced) to the input shows improvement as well (57.2); (iii) So does the introduction of super characters (conflating frequent sequences of characters to a single character), reaching 57.4; (iv) optimizing the training process by using pre-training and averaging a set of models increases performance to 58.7; (v) adding silver-standard training data obtained by an off-the-shelf parser yields the biggest improvement, resulting in an F-score of 64.0. Combining all five techniques leads to an F-score of 71.0 on holdout data, which is state-of-the-art in AMR parsing. This is remarkable because of the relative simplicity of the approach.",
"title": ""
},
{
"docid": "51ac5dde554fd8363fcf95e6d3caf439",
"text": "Swarm intelligence is a relatively novel field. It addresses the study of the collective behaviors of systems made by many components that coordinate using decentralized controls and self-organization. A large part of the research in swarm intelligence has focused on the reverse engineering and the adaptation of collective behaviors observed in natural systems with the aim of designing effective algorithms for distributed optimization. These algorithms, like their natural systems of inspiration, show the desirable properties of being adaptive, scalable, and robust. These are key properties in the context of network routing, and in particular of routing in wireless sensor networks. Therefore, in the last decade, a number of routing protocols for wireless sensor networks have been developed according to the principles of swarm intelligence, and, in particular, taking inspiration from the foraging behaviors of ant and bee colonies. In this paper, we provide an extensive survey of these protocols. We discuss the general principles of swarm intelligence and of its application to routing. We also introduce a novel taxonomy for routing protocols in wireless sensor networks and use it to classify the surveyed protocols. We conclude the paper with a critical analysis of the status of the field, pointing out a number of fundamental issues related to the (mis) use of scientific methodology and evaluation procedures, and we identify some future research directions. 2010 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "5d21df36697616719bcc3e0ee22a08bd",
"text": "In spite of the significant recent progress, the incorporation of haptics into virtual environments is still in its infancy due to limitations in the hardware, the cost of development, as well as the level of reality they provide. Nonetheless, we believe that the field will one day be one of the groundbreaking media of the future. It has its current holdups but the promise of the future is worth the wait. The technology is becoming cheaper and applications are becoming more forthcoming and apparent. If we can survive this infancy, it will promise to be an amazing revolution in the way we interact with computers and the virtual world. The researchers organize the rapidly increasing multidisciplinary research of haptics into four subareas: human haptics, machine haptics, computer haptics, and multimedia haptics",
"title": ""
},
{
"docid": "2a3a1c67e118784aff9191e259ed32fd",
"text": "1474-0346/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.aei.2010.06.006 * Corresponding author. Tel.: +1 4048949881. E-mail address: brilakis@gatech.edu (I. Brilakis). Only very few constructed facilities today have a complete record of as-built information. Despite the growing use of Building Information Modelling and the improvement in as-built records, several more years will be required before guidelines that require as-built data modelling will be implemented for the majority of constructed facilities, and this will still not address the stock of existing buildings. A technical solution for scanning buildings and compiling Building Information Models is needed. However, this is a multidisciplinary problem, requiring expertise in scanning, computer vision and videogrammetry, machine learning, and parametric object modelling. This paper outlines the technical approach proposed by a consortium of researchers that has gathered to tackle the ambitious goal of automating as-built modelling as far as possible. The top level framework of the proposed solution is presented, and each process, input and output is explained, along with the steps needed to validate them. Preliminary experiments on the earlier stages (i.e. processes) of the framework proposed are conducted and results are shown; the work toward implementation of the remainder is ongoing. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "150c458df57685b78b0cc02953c98ff7",
"text": "The CRISPR-associated protein Cas9 is an RNA-guided endonuclease that cleaves double-stranded DNA bearing sequences complementary to a 20-nucleotide segment in the guide RNA. Cas9 has emerged as a versatile molecular tool for genome editing and gene expression control. RNA-guided DNA recognition and cleavage strictly require the presence of a protospacer adjacent motif (PAM) in the target DNA. Here we report a crystal structure of Streptococcus pyogenes Cas9 in complex with a single-molecule guide RNA and a target DNA containing a canonical 5′-NGG-3′ PAM. The structure reveals that the PAM motif resides in a base-paired DNA duplex. The non-complementary strand GG dinucleotide is read out via major-groove interactions with conserved arginine residues from the carboxy-terminal domain of Cas9. Interactions with the minor groove of the PAM duplex and the phosphodiester group at the +1 position in the target DNA strand contribute to local strand separation immediately upstream of the PAM. These observations suggest a mechanism for PAM-dependent target DNA melting and RNA–DNA hybrid formation. Furthermore, this study establishes a framework for the rational engineering of Cas9 enzymes with novel PAM specificities.",
"title": ""
},
{
"docid": "28016e339bab5c1f5daa6bf26c3a06dd",
"text": "In this paper, we propose a straightforward solution to the problems of compositional parallel programming by using skeletons as the uniform mechanism for structured composition. In our approach parallel programs are constructed by composing procedures in a conventional base language using a set of high-level, pre-defined, functional, parallel computational forms known as skeletons. The ability to compose skeletons provides us with the essential tools for building further and more complex application-oriented skeletons specifying important aspects of parallel computation. Compared with the process network based composition approach, such as PCN, the skeleton approach abstracts away the fine details of connecting communication ports to the higher level mechanism of making data distributions conform, thus avoiding the complexity of using lower level ports as the means of interaction. Thus, the framework provides a natural integration of the compositional programming approach with the data parallel programming paradigm.",
"title": ""
},
{
"docid": "cf2c8ab1b22ae1a33e9235a35f942e7e",
"text": "Adversarial attacks against neural networks are a problem of considerable importance, for which effective defenses are not yet readily available. We make progress toward this problem by showing that non-negative weight constraints can be used to improve resistance in specific scenarios. In particular, we show that they can provide an effective defense for binary classification problems with asymmetric cost, such as malware or spam detection. We also show the potential for non-negativity to be helpful to non-binary problems by applying it to image",
"title": ""
},
{
"docid": "05b1be7a90432eff4b62675826b77e09",
"text": "People invest time, attention, and emotion while engaging in various activities in the real-world, for either purposes of awareness or participation. Social media platforms such as Twitter offer tremendous opportunities for people to become engaged in such real-world events through information sharing and communicating about these events. However, little is understood about the factors that affect people’s Twitter engagement in such real-world events. In this paper, we address this question by first operationalizing a person’s Twitter engagement in real-world events such as posting, retweeting, or replying to tweets about such events. Next, we construct statistical models that examine multiple predictive factors associated with four different perspectives of users’ Twitter engagement, and quantify their potential influence on predicting the (i) presence; and (ii) degree – of the user’s engagement with 643 real-world events. We also consider the effect of these factors with respect to a finer granularization of the different categories of events. We find that the measures of people’s prior Twitter activities, topical interests, geolocation, and social network structures are all variously correlated to their engagement with real-world events.",
"title": ""
},
{
"docid": "ff7b8957aeedc0805f972bf5bd6923f0",
"text": "This study was designed to test the Fundamental Difference Hypothesis (Bley-Vroman, 1988), which states that, whereas children are known to learn language almost completely through (implicit) domain-specific mechanisms, adults have largely lost the ability to learn a language without reflecting on its structure and have to use alternative mechanisms, drawing especially on their problem-solving capacities, to learn a second language. The hypothesis implies that only adults with a high level of verbal analytical ability will reach near-native competence in their second language, but that this ability will not be a significant predictor of success for childhood second language acquisition. A study with 57 adult Hungarian-speaking immigrants confirmed the hypothesis in the sense that very few adult immigrants scored within the range of child arrivals on a grammaticality judgment test, and that the few who did had high levels of verbal analytical ability; this ability was not a significant predictor for childhood arrivals. This study replicates the findings of Johnson and Newport (1989) and provides an explanation for the apparent exceptions in their study. These findings lead to a reconceptualization of the Critical Period Hypothesis: If the scope of this hypothesis is lim-",
"title": ""
},
{
"docid": "c676d1a252a26d7a803d5f81c5787f69",
"text": "Ravi.P Head, Department of computer science, Govt .Arts College for Women, Ramanathapuram E-Mail: peeravig@yahoo.co.in Tamilselvi.S Department of computer science, Govt .Arts College for Women, Ramanathapuram E-Mail: aiswrayapriya12@yahoo.co.in -------------------------------------------------------------------ABSTRACT--------------------------------------------------------Images are an important form of data and are used in almost every application. Images occupy large amount of memory space. Image compression is most essential requirement for efficient utilization of storage space and transmission bandwidth. Image compression technique involves reducing the size of the image without degrading the quality of the image. A restriction on these methods is the high computational cost of image compression. Ant colony optimization is applied for image compression. An analogy with the real ants' behavior was presented as a new paradigm called Ant Colony Optimization (ACO). ACO is Probabilistic technique for Searching for optimal path in the graph based on behavior of ants seeking a path between their colony and source of food. The main features of ACO are the fast search of good solutions, parallel work and use of heuristic information, among others. Ant colony optimization (ACO) is a technique which can be used for various applications. This paper provides an insight optimization techniques used for image compression like Ant Colony Optimization (ACO) algorithm.",
"title": ""
},
{
"docid": "5b9d8b0786691f68659bcce2e6803cdb",
"text": "We introduce SentEval, a toolkit for evaluating the quality of universal sentence representations. SentEval encompasses a variety of tasks, including binary and multi-class classification, natural language inference and sentence similarity. The set of tasks was selected based on what appears to be the community consensus regarding the appropriate evaluations for universal sentence representations. The toolkit comes with scripts to download and preprocess datasets, and an easy interface to evaluate sentence encoders. The aim is to provide a fairer, less cumbersome and more centralized way for evaluating sentence representations.",
"title": ""
},
{
"docid": "fec5f91a1fd7fee96833fdde8f8f903e",
"text": "With the advent of rapid development of wearable technology and mobile computing, huge amount of personal health-related data is being generated and accumulated on continuous basis at every moment. These personal datasets contain valuable information and they belong to and asset of the individual users, hence should be owned and controlled by themselves. Currently most of such datasets are stored and controlled by different service providers and this centralised data storage brings challenges of data security and hinders the data sharing. These personal health data are valuable resources for healthcare research and commercial projects. In this research work, we propose a conceptual design for sharing personal continuous-dynamic health data using blockchain technology supplemented by cloud storage to share the health-related information in a secure and transparent manner. Besides, we also introduce a data quality inspection module based on machine learning techniques to have control over data quality. The primary goal of the proposed system is to enable users to own, control and share their personal health data securely, in a General Data Protection Regulation (GDPR) compliant way to get benefit from their personal datasets. It also provides an efficient way for researchers and commercial data consumers to collect high quality personal health data for research and commercial purposes.",
"title": ""
},
{
"docid": "e9ba9af6b349c5e79b21dac2d5f8e845",
"text": "Context: Software defect prediction is important for identification of defect-prone parts of a software. Defect prediction models can be developed using software metrics in combination with defect data for predicting defective classes. Various studies have been conducted to find the relationship between software metrics and defect proneness, but there are few studies that statistically determine the effectiveness of the results. Objective: The main objectives of the study are (i) comparison of the machine-learning techniques using data sets obtained from popular open source software (ii) use of appropriate performance measures for measuring the performance of defect prediction models (iii) use of statistical tests for effective comparison of machine-learning techniques and (iv) validation of models over different releases of data sets. Method: In this study we use object-oriented metrics for predicting defective classes using 18 machinelearning techniques. The proposed framework has been applied to seven application packages of well known, widely used Android operating system viz. Contact, MMS, Bluetooth, Email, Calendar, Gallery2 and Telephony. The results are validated using 10-fold and inter-release validation methods. The reliability and significance of the results are evaluated using statistical test and post-hoc analysis. Results: The results show that the area under the curve measure for Naïve Bayes, LogitBoost and Multilayer Perceptron is above 0.7 in most of the cases. The results also depict that the difference between the ML techniques is statistically significant. However, it is also proved that the Support Vector Machines based techniques such as Support Vector Machines and voted perceptron do not possess the predictive capability for predicting defects. Conclusion: The results confirm the predictive capability of various ML techniques for developing defect prediction models. The results also confirm the superiority of one ML technique over the other ML techniques. Thus, the software engineers can use the results obtained from this study in the early phases of the software development for identifying defect-prone classes of given software. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e918ae2b1312292836eb661497909a83",
"text": "We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results.",
"title": ""
},
{
"docid": "3c8cc4192ee6ddd126e53c8ab242f396",
"text": "There are several approaches for automated functional web testing and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. In this paper, we present an empirical cost/benefit analysis of two different categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE); and, (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these testing approaches both when developing the initial test suites from scratch and when the test suites are maintained, upon the release of a new software version. Results indicate that, on the one hand, the development of the test suites is more expensive in terms of time required (between 32% and 112%) when the programmable web testing approach is adopted, but on the other hand, test suite maintenance is less expensive when this approach is used (with a saving between 16% and 51%). We found that, in the majority of the cases, after a small number of releases (from one to three), the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing and the cost saving gets amplified over the successive releases.",
"title": ""
},
{
"docid": "91e2dadb338fbe97b009efe9e8f60446",
"text": "An efficient smoke detection algorithm on color video sequences obtained from a stationary camera is proposed. Our algorithm considers dynamic and static features of smoke and is composed of basic steps: preprocessing; slowly moving areas and pixels segmentation in a current input frame based on adaptive background subtraction; merge slowly moving areas with pixels into blobs; classification of the blobs obtained before. We use adaptive background subtraction at a stage of moving detection. Moving blobs classification is based on optical flow calculation, Weber contrast analysis and takes into account primary direction of smoke propagation. Real video surveillance sequences were used for smoke detection with utilization our algorithm. A set of experimental results is presented in the paper.",
"title": ""
},
{
"docid": "b418734faef12396bbcef4df356c6fb6",
"text": "Active learning techniques were employed for classification of dialogue acts over two dialogue corpora, the English humanhuman Switchboard corpus and the Spanish human-machine Dihana corpus. It is shown clearly that active learning improves on a baseline obtained through a passive learning approach to tagging the same data sets. An error reduction of 7% was obtained on Switchboard, while a factor 5 reduction in the amount of labeled data needed for classification was achieved on Dihana. The passive Support Vector Machine learner used as baseline in itself significantly improves the state of the art in dialogue act classification on both corpora. On Switchboard it gives a 31% error reduction compared to the previously best reported result.",
"title": ""
},
{
"docid": "cbb080223b3279b1c07fa47e304f8d39",
"text": "■ HISTORICAL PERSPECTIVE The conception of 3D printing, also referred to as additive manufacturing (AM), rapid prototyping (RP), or solid-freeform technology (SFF), was developed by Charles Hull. With a B.S. in engineering physics from the University of Colorado, Hull started work on fabricating plastic devices from photopolymers in the early 1980s at Ultra Violet Products in California. The lengthy fabrication process (1−2 months) coupled with the high probability of design imperfections, thereby, requiring several iterations to perfect, provided Hull with the motivation to improve current methods in prototype development. In 1986, Hull obtained the patent for stereolithography and would go on to acquire countless more patents on the technology, including, but not limited to, those cited in this article. In 1986, he established 3D Systems and developed the .STL file format, which would “complete the electronic ‘handshake’ from computer aided design (CAD) software and transmit files for the printing of 3D objects.” Hull and 3D Systems continued to develop the first 3D printer termed the “Stereolithography Apparatus” as well as the first commercial 3D printer available to the general public, the SLA-250. With Hull’s work, in addition to the development and subsequent patenting of fused deposition modeling (FDM) by Scott Crump at Stratasys in 1990, 3D printing was poised to revolutionize manufacturing and research. MIT professors Michael Cima and Emanuel Sachs patented the first apparatus termed “3D printer” in 1993 to print plastic, metal, and ceramic parts. Many other companies have developed 3D printers for commercial applications, such as DTM Corporation and Z Corporation (which merged with 3D Systems), and Solidscape and Objet Geometries (which merged with Stratasys). Others include Helisys, Organovo, a company that prints objects from living human tissue, and Ultimaker. Open source options such as RepRap, a desktop 3D printer capable of replicating the majority of its own parts, have been available since 2008. 3D printing technology has found industrial applications in the automotive and aerospace industries for printing prototypes of car and airplane parts, in the architectural world for printing structural models, and in the consumer goods industry for prototype development for companies like Trek and Black and Decker. The applications of 3D printing in private and government defense have been rapidly recognized. For example, applications in gun prototyping and manufacturing processes for the military have already been established. Medical applications of 3D printing date back to the early 2000s, with the production of dental implants and prosthetics. Applications in the food industry, as well as in fashion, have also emerged. With regard to research settings, 3D printing has been limited to biomedical applications and engineering, although it shows tremendous potential in the chemical sciences. This feature aims to present and compare the basic printing methods available and discuss some of the current work in chemistry as well as in other research and teaching efforts that utilize 3D printing technology. Feature",
"title": ""
},
{
"docid": "1f4985ca0e188bfbf9145875cd7acfc5",
"text": "Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on ‘mind-less morality’ we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the ‘Method of Abstraction’ for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The ‘Method of Abstraction’ is explained in terms of an ‘interface’ or set of features or observables at a given ‘LoA’. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the ‘transition rules’ by which state is changed) at a given LoA. Morality may be thought of as a ‘threshold’ defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it. That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary ‘cost’ of this facility is the extension of the class of agents and moral agents to embrace AAs.",
"title": ""
}
] |
scidocsrr
|
8c5a94475ec71dc04ca3fda395b0f1e7
|
Blockchain platforms: A compendium
|
[
{
"docid": "6c09932a4747c7e2d15b06720b1c48d9",
"text": "A distributed ledger made up of mutually distrusting nodes would allow for a single global database that records the state of deals and obligations between institutions and people. This would eliminate much of the manual, time consuming effort currently required to keep disparate ledgers synchronised with each other. It would also allow for greater levels of code sharing than presently used in the financial industry, reducing the cost of financial services for everyone. We present Corda, a platform which is designed to achieve these goals. This paper provides a high level introduction intended for the general reader. A forthcoming technical white paper elaborates on the design and fundamental architectural decisions.",
"title": ""
},
{
"docid": "3f512d295ae6f9483b87f9dafcc20b61",
"text": "Byzantine Fault Tolerant state machine replication (BFT) protocols are replication protocols that tolerate arbitrary faults of a fraction of the replicas. Although significant efforts have been recently made, existing BFT protocols do not provide acceptable performance when faults occur. As we show in this paper, this comes from the fact that all existing BFT protocols targeting high throughput use a special replica, called the primary, which indicates to other replicas the order in which requests should be processed. This primary can be smartly malicious and degrade the performance of the system without being detected by correct replicas. In this paper, we propose a new approach, called RBFT for Redundant-BFT: we execute multiple instances of the same BFT protocol, each with a primary replica executing on a different machine. All the instances order the requests, but only the requests ordered by one of the instances, called the master instance, are actually executed. The performance of the different instances is closely monitored, in order to check that the master instance provides adequate performance. If that is not the case, the primary replica of the master instance is considered malicious and replaced. We implemented RBFT and compared its performance to that of other existing robust protocols. Our evaluation shows that RBFT achieves similar performance as the most robust protocols when there is no failure and that, under faults, its maximum performance degradation is about 3%, whereas it is at least equal to 78% for existing protocols.",
"title": ""
}
] |
[
{
"docid": "5077d46e909db94b510da0621fcf3a9e",
"text": "This paper presents a high SNR self-capacitance sensing 3D hover sensor that does not use panel offset cancelation blocks. Not only reducing noise components, but increasing the signal components together, this paper achieved a high SNR performance while consuming very low power and die-area. Thanks to the proposed separated structure between driving and sensing circuits of the self-capacitance sensing scheme (SCSS), the signal components are increased without using high-voltage MOS sensing amplifiers which consume big die-area and power and badly degrade SNR. In addition, since a huge panel offset problem in SCSS is solved exploiting the panel's natural characteristics, other costly resources are not required. Furthermore, display noise and parasitic capacitance mismatch errors are compressed. We demonstrate a 39dB SNR at a 1cm hover point under 240Hz scan rate condition with noise experiments, while consuming 183uW/electrode and 0.73mm2/sensor, which are the power per electrode and the die-area per sensor, respectively.",
"title": ""
},
{
"docid": "681f7022e45bd6c712d88613761cdd59",
"text": "We present a VAE architecture for encoding and generating high dimensional sequential data, such as video or audio. Our deep generative model learns a latent representation of the data which is split into a static and dynamic part, allowing us to approximately disentangle latent time-dependent features (dynamics) from features which are preserved over time (content). This architecture gives us partial control over generating content and dynamics by conditioning on either one of these sets of features. In our experiments on artificially generated cartoon video clips and voice recordings, we show that we can convert the content of a given sequence into another one by such content swapping. For audio, this allows us to convert a male speaker into a female speaker and vice versa, while for video we can separately manipulate shapes and dynamics. Furthermore, we give empirical evidence for the hypothesis that stochastic RNNs as latent state models are more efficient at compressing and generating long sequences than deterministic ones, which may be relevant for applications in video compression.",
"title": ""
},
{
"docid": "da6a74341c8b12658aea2a267b7a0389",
"text": "An experiment demonstrated that false incriminating evidence can lead people to accept guilt for a crime they did not commit. Subjects in a fastor slow-paced reaction time task were accused of damaging a computer by pressing the wrong key. All were truly innocent and initially denied the charge. A confederate then said she saw the subject hit the key or did not see the subject hit the key. Compared with subjects in the slowpacelno-witness group, those in the fast-pace/witness group were more likely to sign a confession, internalize guilt for the event, and confabulate details in memory consistent with that belief Both legal and conceptual implications are discussed. In criminal law, confession evidence is a potent weapon for the prosecution and a recurring source of controversy. Whether a suspect's self-incriminating statement was voluntary or coerced and whether a suspect was of sound mind are just two of the issues that trial judges and juries consider on a routine basis. To guard citizens against violations of due process and to minimize the risk that the innocent would confess to crimes they did not commit, the courts have erected guidelines for the admissibility of confession evidence. Although there is no simple litmus test, confessions are typically excluded from triai if elicited by physical violence, a threat of harm or punishment, or a promise of immunity or leniency, or without the suspect being notified of his or her Miranda rights. To understand the psychology of criminal confessions, three questions need to be addressed: First, how do police interrogators elicit self-incriminating statements (i.e., what means of social influence do they use)? Second, what effects do these methods have (i.e., do innocent suspects ever confess to crimes they did not commit)? Third, when a coerced confession is retracted and later presented at trial, do juries sufficiently discount the evidence in accordance with the law? General reviews of relevant case law and research are available elsewhere (Gudjonsson, 1992; Wrightsman & Kassin, 1993). The present research addresses the first two questions. Informed by developments in case law, the police use various methods of interrogation—including the presentation of false evidence (e.g., fake polygraph, fingerprints, or other forensic test results; staged eyewitness identifications), appeals to God and religion, feigned friendship, and the use of prison informants. A number of manuals are available to advise detectives on how to extract confessions from reluctant crime suspects (Aubry & Caputo, 1965; O'Hara & O'Hara, 1981). The most popular manual is Inbau, Reid, and Buckley's (1986) Criminal Interrogation and Confessions, originally published in 1%2, and now in its third edition. Address correspondence to Saul Kassin, Department of Psychology, Williams College, WllUamstown, MA 01267. After advising interrogators to set aside a bare, soundproof room absent of social support and distraction, Inbau et al, (1986) describe in detail a nine-step procedure consisting of various specific ploys. In general, two types of approaches can be distinguished. One is minimization, a technique in which the detective lulls Che suspect into a false sense of security by providing face-saving excuses, citing mitigating circumstances, blaming the victim, and underplaying the charges. 
The second approach is one of maximization, in which the interrogator uses scare tactics by exaggerating or falsifying the characterization of evidence, the seriousness of the offense, and the magnitude of the charges. In a recent study (Kassin & McNall, 1991), subjects read interrogation transcripts in which these ploys were used and estimated the severity of the sentence likely to be received. The results indicated that minimization communicated an implicit offer of leniency, comparable to that estimated in an explicit-promise condition, whereas maximization implied a threat of harsh punishment, comparable to that found in an explicit-threat condition. Yet although American courts routinely exclude confessions elicited by explicit threats and promises, they admit those produced by contingencies that are pragmatically implied. Although police often use coercive methods of interrogation, research suggests that juries are prone to convict defendants who confess in these situations. In the case of Arizona v. Fulminante (1991), the U.S. Supreme Court ruled that under certain conditions, an improperly admitted coerced confession may be considered upon appeal to have been nonprejudicial, or \"harmless error.\" Yet mock-jury research shows that people find it hard to believe that anyone would confess to a crime that he or she did not commit (Kassin & Wrightsman, 1980, 1981; Sukel & Kassin, 1994). Still, it happens. One cannot estimate the prevalence of the problem, which has never been systematically examined, but there are numerous documented instances on record (Bedau & Radelet, 1987; Borchard, 1932; Rattner, 1988). Indeed, one can distinguish three types of false confession (Kassin & Wrightsman, 1985): voluntary (in which a subject confesses in the absence of extemal pressure), coercedcompliant (in which a suspect confesses only to escape an aversive interrogation, secure a promised benefit, or avoid a threatened harm), and coerced-internalized (in which a suspect actually comes to believe that he or she is guilty of the crime). This last type of false confession seems most unlikely, but a number of recent cases have come to light in which the police had seized a suspect who was vulnerable (by virtue of his or her youth, intelligence, personality, stress, or mental state) and used false evidence to convince the beleaguered suspect that he or she was guilty. In one case that received a great deal of attention, for example, Paul Ingram was charged with rape and a host of Satanic cult crimes that included the slaughter of newbom babies. During 6 months of interrogation, he was hypnoVOL. 7, NO. 3, MAY 1996 Copyright © 1996 American Psychological Society 125 PSYCHOLOGICAL SCIENCE",
"title": ""
},
{
"docid": "3744510fa3cec75c1ccb5abbdb9d71ed",
"text": "49 Abstract— Typically, computer viruses and other malware are detected by searching for a string of bits found in the virus or malware. Such a string can be viewed as a \" fingerprint \" of the virus identified as the signature of the virus. The technique of detecting viruses using signatures is known as signature based detection. Today, virus writers often camouflage their viruses by using code obfuscation techniques in an effort to defeat signature-based detection schemes. So-called metamorphic viruses transform their code as they propagate, thus evading detection by static signature-based virus scanners, while keeping their functionality but differing in internal structure. Many dynamic analysis based detection have been proposed to detect metamorphic viruses but dynamic analysis technique have limitations like difficult to learn normal behavior, high run time overhead and high false positive rate compare to static detection technique. A similarity measure method has been successfully applied in the field of document classification problem. We want to apply similarity measures methods on static feature, API calls of executable to classify it as malware or benign. In this paper we present limitations of signature based detection for detecting metamorphic viruses. We focus on statically analyzing an executable to extract API calls and count the frequency this API calls to generate the feature set. These feature set is used to classify unknown executable as malware or benign by applying various similarity function. I. INTRODUCTION In today's age, where a majority of the transactions involving sensitive information access happen on computers and over the internet, it is absolutely imperative to treat information security as a concern of paramount importance. Computer viruses and other malware have been in existence from the very early days of the personal computer and continue to pose a threat to home and enterprise users alike. A computer virus by definition is \" A program that recursively and explicitly copies a possibly evolved version of itself \" [1]. A virus copies itself to a host file or system area. Once it gets control, it multiplies itself to form newer generations. A virus may carry out damaging activities on the host machine such as corrupting or erasing files, overwriting the whole hard disk, or crashing the computer. These viruses remain harmless but",
"title": ""
},
{
"docid": "aff44289b241cdeef627bba97b68a505",
"text": "Personalization is a ubiquitous phenomenon in our daily online experience. While such technology is critical for helping us combat the overload of information we face, in many cases, we may not even realize that our results are being tailored to our personal tastes and preferences. Worse yet, when such a system makes a mistake, we have little recourse to correct it.\n In this work, we propose a framework for addressing this problem by developing a new user-interpretable feature set upon which to base personalized recommendations. These features, which we call badges, represent fundamental traits of users (e.g., \"vegetarian\" or \"Apple fanboy\") inferred by modeling the interplay between a user's behavior and self-reported identity. Specifically, we consider the microblogging site Twitter, where users provide short descriptions of themselves in their profiles, as well as perform actions such as tweeting and retweeting. Our approach is based on the insight that we can define badges using high precision, low recall rules (e.g., \"Twitter profile contains the phrase 'Apple fanboy'\"), and with enough data, generalize to other users by observing shared behavior. We develop a fully Bayesian, generative model that describes this interaction, while allowing us to avoid the pitfalls associated with having positive-only data.\n Experiments on real Twitter data demonstrate the effectiveness of our model at capturing rich and interpretable user traits that can be used to provide transparency for personalization.",
"title": ""
},
{
"docid": "aef55420ff44872ee35ecfd4cd6528e0",
"text": "Data quality and especially the assessment of data quality have been intensively discussed in research and practice alike. To support an economically oriented management of data quality and decision making under uncertainty, it is essential to assess the data quality level by means of well-founded metrics. However, if not adequately defined, these metrics can lead to wrong decisions and economic losses. Therefore, based on a decision-oriented framework, we present a set of five requirements for data quality metrics. These requirements are relevant for a metric that aims to support an economically oriented management of data quality and decision making under uncertainty. We further demonstrate the applicability and efficacy of these requirements by evaluating five data quality metrics for different data quality dimensions. Moreover, we discuss practical implications when applying the presented requirements.",
"title": ""
},
{
"docid": "7ea3d3002506e0ea6f91f4bdab09c2d5",
"text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.",
"title": ""
},
{
"docid": "2f761de3f94d86a2c73aac3dce413dca",
"text": "The class imbalance problem has been recognized in many practical domains and a hot topic of machine learning in recent years. In such a problem, almost all the examples are labeled as one class, while far fewer examples are labeled as the other class, usually the more important class. In this case, standard machine learning algorithms tend to be overwhelmed by the majority class and ignore the minority class since traditional classifiers seeking an accurate performance over a full range of instances. This paper reviewed academic activities special for the class imbalance problem firstly. Then investigated various remedies in four different levels according to learning phases. Following surveying evaluation metrics and some other related factors, this paper showed some future directions at last.",
"title": ""
},
{
"docid": "b76e466d4b446760bf3fd5d70e2edc1b",
"text": "Cloud computing has emerged as a long-dreamt vision of the utility computing paradigm that provides reliable and resilient infrastructure for users to remotely store data and use on-demand applications and services. Currently, many individuals and organizations mitigate the burden of local data storage and reduce the maintenance cost by outsourcing data to the cloud. However, the outsourced data is not always trustworthy due to the loss of physical control and possession over the data. As a result, many scholars have concentrated on relieving the security threats of the outsourced data by designing the Remote Data Auditing (RDA) technique as a new concept to enable public auditability for the stored data in the cloud. The RDA is a useful technique to check the reliability and integrity of data outsourced to a single or distributed servers. This is because all of the RDA techniques for single cloud servers are unable to support data recovery; such techniques are complemented with redundant storage mechanisms. The article also reviews techniques of remote data auditing more comprehensively in the domain of the distributed clouds in conjunction with the presentation of classifying ongoing developments within this specified area. The thematic taxonomy of the distributed storage auditing is presented based on significant parameters, such as scheme nature, security pattern, objective functions, auditing mode, update mode, cryptography model, and dynamic data structure. The more recent remote auditing approaches, which have not gained considerable attention in distributed cloud environments, are also critically analyzed and further categorized into three different classes, namely, replication based, erasure coding based, and network coding based, to present a taxonomy. This survey also aims to investigate similarities and differences of such a framework on the basis of the thematic taxonomy to diagnose significant and explore major outstanding issues.",
"title": ""
},
{
"docid": "b1cad8dde7d9ceb1bb973fb323652d05",
"text": "Sites for online classified ads selling sex are widely used by human traffickers to support their pernicious business. The sheer quantity of ads makes manual exploration and analysis unscalable. In addition, discerning whether an ad is advertising a trafficked victim or an independent sex worker is a very difficult task. Very little concrete ground truth (i.e., ads definitively known to be posted by a trafficker) exists in this space. In this work, we develop tools and techniques that can be used separately and in conjunction to group sex ads by their true owner (and not the claimed author in the ad). Specifically, we develop a machine learning classifier that uses stylometry to distinguish between ads posted by the same vs. different authors with 90% TPR and 1% FPR. We also design a linking technique that takes advantage of leakages from the Bitcoin mempool, blockchain and sex ad site, to link a subset of sex ads to Bitcoin public wallets and transactions. Finally, we demonstrate via a 4-week proof of concept using Backpage as the sex ad site, how an analyst can use these automated approaches to potentially find human traffickers.",
"title": ""
},
{
"docid": "16a0329d2b7a6995a48bdef0e845658a",
"text": "Digital market has never been so unstable due to more and more demanding users and new disruptive competitors. CEOs from most of industries investigate digitalization opportunities. Through a Systematic Literature Review, we found that digital transformation is more than just a technological shift. According to this study, these transformations have had an impact on the business models, the operational processes and the end-users experience. Considering the richness of this topic, we had proposed a research agenda of digital transformation in a managerial perspective.",
"title": ""
},
{
"docid": "37f55e03f4d1ff3b9311e537dc7122b5",
"text": "Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.",
"title": ""
},
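The sparse-regression idea described in the passage above (identifying the few library terms that best explain measured derivatives) can be illustrated with a short sketch. This is not the authors' code; it is a minimal illustration assuming noisy samples of a 2D linear system, a hand-built polynomial candidate library, and a simple sequential-thresholded least-squares loop. All data and the threshold value are made up for the example.

```python
import numpy as np

# Minimal sketch of sparse identification of nonlinear dynamics (illustrative only).
# Assumes we already have state samples X and their time derivatives dX.

def library(X):
    """Candidate terms: 1, x, y, x^2, xy, y^2 for a 2D state."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def sparse_regression(Theta, dX, threshold=0.05, iters=10):
    """Sequential thresholded least squares: fit, zero small coefficients, refit."""
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dX.shape[1]):      # refit each state dimension on the
            big = ~small[:, k]            # surviving (non-zeroed) library terms
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)[0]
    return Xi

# Toy data: damped oscillator dx/dt = -0.1x + 2y, dy/dt = -2x - 0.1y, plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
dX = X @ np.array([[-0.1, -2.0], [2.0, -0.1]]) + 0.01 * rng.normal(size=(500, 2))

Xi = sparse_regression(library(X), dX)
print(np.round(Xi, 2))  # only the x and y columns should survive thresholding
```

The printed coefficient matrix should recover the two linear terms per equation while the constant and quadratic candidates are thresholded to zero, which is the parsimony-versus-accuracy trade-off the abstract refers to.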
{
"docid": "c8ca57db545f2d1f70f3640651bb3e79",
"text": "sprightly style and is interesting from cover to cover. The comments, critiques, and summaries that accompany the chapters are very helpful in crystalizing the ideas and answering questions that may arise, particularly to the self-learner. The transparency in the presentation of the material in the book equips the reader to proceed quickly to a wealth of problems included at the end of each chapter. These problems ranging from elementary to research-level are very valuable in that a solid working knowledge of the invariant imbedding techniques is acquired as well as good insight in attacking problems in various applied areas. Furthermore, a useful selection of references is given at the end of each chapter. This book may not appeal to those mathematicians who are interested primarily in the sophistication of mathematical theory, because the authors have deliberately avoided all pseudo-sophistication in attaining transparency of exposition. Precisely for the same reason the majority of the intended readers who are applications-oriented and are eager to use the techniques quickly in their own fields will welcome and appreciate the efforts put into writing this book. From a purely mathematical point of view, some of the invariant imbedding results may be considered to be generalizations of the classical theory of first-order partial differential equations, and a part of the analysis of invariant imbedding is still at a somewhat heuristic stage despite successes in many computational applications. However, those who are concerned with mathematical rigor will find opportunities to explore the foundations of the invariant imbedding method. In conclusion, let me quote the following: \"What is the best method to obtain the solution to a problem'? The answer is, any way that works.\" (Richard P. Feyman, Engineering and Science, March 1965, Vol. XXVIII, no. 6, p. 9.) In this well-written book, Bellman and Wing have indeed accomplished the task of introducing the simplicity of the invariant imbedding method to tackle various problems of interest to engineers, physicists, applied mathematicians, and numerical analysts.",
"title": ""
},
{
"docid": "9fdba452394ba0a8ed3b75f222de9590",
"text": "We present a theory of compositionality in stochastic optimal control, showing how task-optimal controllers can be constructed from certain primitives. The primitives are themselves feedback controllers pursuing their own agendas. They are mixed in proportion to how much progress they are making towards their agendas and how compatible their agendas are with the present task. The resulting composite control law is provably optimal when the problem belongs to a certain class. This class is rather general and yet has a number of unique properties – one of which is that the Bellman equation can be made linear even for non-linear or discrete dynamics. This gives rise to the compositionality developed here. In the special case of linear dynamics and Gaussian noise our framework yields analytical solutions (i.e. non-linear mixtures of LQG controllers) without requiring the final cost to be quadratic. More generally, a natural set of control primitives can be constructed by applying SVD to Green’s function of the Bellman equation. We illustrate the theory in the context of human arm movements. The ideas of optimality and compositionality are both very prominent in the field of motor control, yet they have been difficult to reconcile. Our work makes this possible.",
"title": ""
},
{
"docid": "8bf63451cf6b83f3da4d4378de7bfd7f",
"text": "This paper presents a high-efficiency and smoothtransition buck-boost (BB) converter to extend the battery life of portable devices. Owing to the usage of four switches, the BB control topology needs to minimize the switching and conduction losses at the same time. Therefore, over a wide input voltage range, the proposed BB converter consumes minimum switching loss like the basic operation of buck or boost converter. Besides, the conduction loss is reduced by means of the reduction of the inductor current level. Especially, the proposed BB converter offers good line/load regulation and thus provides a smooth and stable output voltage when the battery voltage decreases. Simulation results show that the output voltage drops is very small during the whole battery life time and the output transition is very smooth during the mode transition by the proposed BB control scheme.",
"title": ""
},
{
"docid": "f78ba23b912c6875587c9f00d45676b4",
"text": "OBJECTIVES\nThe aim of this study was to assess the impact of the advanced technology of the new ExAblate 2100 system (Insightec Ltd, Haifa, Israel) for magnetic resonance imaging (MRI)-guided focused ultrasound surgery on treatment outcomes in patients with symptomatic uterine fibroids, as measured by the nonperfused volume ratio.\n\n\nMATERIALS AND METHODS\nThis is a retrospective analysis of 115 women (mean age, 42 years; range, 27-54 years) with symptomatic fibroids who consecutively underwent MRI-guided focused ultrasound treatment in a single center with the new generation ExAblate 2100 system from November 2010 to June 2011. Mean ± SD total volume and number of treated fibroids (per patient) were 89 ± 94 cm and 2.2 ± 1.7, respectively. Patient baseline characteristics were analyzed regarding their impact on the resulting nonperfused volume ratio.\n\n\nRESULTS\nMagnetic resonance imaging-guided focused ultrasound treatment was technically successful in 115 of 123 patients (93.5%). In 8 patients, treatment was not possible because of bowel loops in the beam pathway that could not be mitigated (n = 6), patient movement (n = 1), and system malfunction (n = 1). Mean nonperfused volume ratio was 88% ± 15% (range, 38%-100%). Mean applied energy level was 5400 ± 1200 J, and mean number of sonications was 74 ± 27. No major complications occurred. Two cases of first-degree skin burn resolved within 1 week after the intervention. Of the baseline characteristics analyzed, only the planned treatment volume had a statistically significant impact on nonperfused volume ratio.\n\n\nCONCLUSIONS\nWith technological advancement, the outcome of MRI-guided focused ultrasound treatment in terms of the nonperfused volume ratio can be enhanced with a high safety profile, markedly exceeding results reported in previous clinical trials.",
"title": ""
},
{
"docid": "98f8994f1ad9315f168878ff40c29afc",
"text": "OBJECTIVE\nSuicide remains a major global public health issue for young people. The reach and accessibility of online and social media-based interventions herald a unique opportunity for suicide prevention. To date, the large body of research into suicide prevention has been undertaken atheoretically. This paper provides a rationale and theoretical framework (based on the interpersonal theory of suicide), and draws on our experiences of developing and testing online and social media-based interventions.\n\n\nMETHOD\nThe implementation of three distinct online and social media-based intervention studies, undertaken with young people at risk of suicide, are discussed. We highlight the ways that these interventions can serve to bolster social connectedness in young people, and outline key aspects of intervention implementation and moderation.\n\n\nRESULTS\nInsights regarding the implementation of these studies include careful protocol development mindful of risk and ethical issues, establishment of suitably qualified teams to oversee development and delivery of the intervention, and utilisation of key aspects of human support (i.e., moderation) to encourage longer-term intervention engagement.\n\n\nCONCLUSIONS\nOnline and social media-based interventions provide an opportunity to enhance feelings of connectedness in young people, a key component of the interpersonal theory of suicide. Our experience has shown that such interventions can be feasibly and safely conducted with young people at risk of suicide. Further studies, with controlled designs, are required to demonstrate intervention efficacy.",
"title": ""
},
{
"docid": "451dfbeb491d2766d61c46c5a9cc7809",
"text": "Despite the significant role that skin color plays in material well-being and social perceptions, scholars know little if anything about whether skin color and afrocentric features influence political cognition and behavior and specifically, if intraracial variation in addition to categorical difference affects the choices of voters. Do more phenotypically black minorities suffer an electoral penalty as they do in most aspects of life? This study investigates the impact of color and phenotypically black facial features on candidate evaluation, using a nationally representative survey experiment of over 2000 whites. Subjects were randomly assigned to campaign literature of two opposing candidates, in which the race, skin color and features, and issue stance of candidates was varied. I find that afrocentric phenotype is an important, albeit hidden, form of bias in racial attitudes and that the importance of race on candidate evaluation depends largely on skin color and afrocentric features. However, like other racial cues, color and black phenotype don’t influence voters’ evaluations uniformly but vary in magnitude and direction across the gender and partisan makeup of the electorate in theoretically explicable ways. Ultimately, I argue, scholars of race politics, implicit racial bias, and minority candidates are missing an important aspect of racial bias.",
"title": ""
},
{
"docid": "6576e5e58ea3298889a4ae27c86a49c9",
"text": "Jordan Baker instinctively avoided clever shrewd men . . . because she felt safer on a plane where any divergence from a code would be thought impossible. She was incurably dishonest. She wasn’t able to endure being at a disadvantage, and given this unwillingness I suppose she had begun dealing in subterfuges when she was very young in order to keep that cool insolent smile turned to the world and yet satisfy the demands of her hard jaunty body. --F. Scott Fitzgerald, The Great Gatsby (63)",
"title": ""
},
{
"docid": "4c290421dc42c3a5a56c7a4b373063e5",
"text": "In this paper, we provide a graph theoretical framework that allows us to formally define formations of multiple vehicles and the issues arising in uniqueness of graph realizations and its connection to stability of formations. The notion of graph rigidity is crucial in identifying the shape variables of a formation and an appropriate potential function associated with the formation. This allows formulation of meaningful optimization or nonlinear control problems for formation stabilization/tacking, in addition to formal representation of split, rejoin, and reconfiguration maneuvers for multi-vehicle formations. We introduce an algebra that consists of performing some basic operations on graphs which allow creation of larger rigidby-construction graphs by combining smaller rigid subgraphs. This is particularly useful in performing and representing rejoin/split maneuvers of multiple formations in a distributed fashion.",
"title": ""
}
] |
scidocsrr
|
6f5d72d948720686ba2dbbb1626da2ef
|
Fast Best-Effort Search on Graphs with Multiple Attributes
|
[
{
"docid": "97368057a975a4642f086c31a0d58b38",
"text": "In this paper, we present a topic level expertise search framework for heterogeneous networks. Different from the traditional Web search engines that perform retrieval and ranking at document level (or at object level), we investigate the problem of expertise search at topic level over heterogeneous networks. In particular, we study this problem in an academic search and mining system, which extracts and integrates the academic data from the distributed Web. We present a unified topic model to simultaneously model topical aspects of different objects in the academic network. Based on the learned topic models, we investigate the expertise search problem from three dimensions: ranking, citation tracing analysis, and topical graph search. Specifically, we propose a topic level random walk method for ranking the different objects. In citation tracing analysis, we aim to uncover how a piece of work influences its follow-up work. Finally, we have developed a topical graph search function, based on the topic modeling and citation tracing analysis. Experimental results show that various expertise search and mining tasks can indeed benefit from the proposed topic level analysis approach.",
"title": ""
}
] |
[
{
"docid": "da91dc8ab78a585b81fba42bed1a6af3",
"text": "Integrating magnetic parasitics in the design of LCC resonant converters provides a solution to reduce parts count and increase power density. This paper provides an efficient design procedure for small planar transformers which integrate transformer leakage inductance and magnetizing inductance into the resonant tank by employing an accurate parasitic prediction model. Finite element simulations were used to create the models using Design of Experiment (DoE) methodology. A planar transformer prototype was designed and tested within a 2.5W LLC resonant converter and results under different operating modes are included to illustrate the resonant behaviour and to validate the presented design procedure.",
"title": ""
},
{
"docid": "8020c4f3df7bca37b7ebfcd14ae5299d",
"text": "We present a two-part case study to explore how technology toys can promote computational thinking for young children. First, we conducted a formal study using littleBits, a commercially available technology toy, to explore its potential as a learning tool for computational thinking in three different educational settings. Our findings revealed differences in learning indicators across settings. We applied these insights during a teaching project in Cape Town, South Africa, where we partnered with an educational NGO, ORT SA CAPE, to offer enriching learning opportunities for both privileged and impoverished children. We describe our methods, observations, and lessons learned using littleBits to teach computational thinking to children in early elementary school, and discuss how our lab study informed practical work in the developing world.",
"title": ""
},
{
"docid": "d48ea163dd0cd5d80ba95beecee5102d",
"text": "Foodborne pathogens (FBP) represent an important threat to the consumers' health as they are able to cause different foodborne diseases. In order to eliminate the potential risk of those pathogens, lactic acid bacteria (LAB) have received a great attention in the food biotechnology sector since they play an essential function to prevent bacterial growth and reduce the biogenic amines (BAs) formation. The foodborne illnesses (diarrhea, vomiting, and abdominal pain, etc.) caused by those microbial pathogens is due to various reasons, one of them is related to the decarboxylation of available amino acids that lead to BAs production. The formation of BAs by pathogens in foods can cause the deterioration of their nutritional and sensory qualities. BAs formation can also have toxicological impacts and lead to different types of intoxications. The growth of FBP and their BAs production should be monitored and prevented to avoid such problems. LAB is capable of improving food safety by preventing foods spoilage and extending their shelf-life. LAB are utilized by the food industries to produce fermented products with their antibacterial effects as bio-preservative agents to extent their storage period and preserve their nutritive and gustative characteristics. Besides their contribution to the flavor for fermented foods, LAB secretes various antimicrobial substances including organic acids, hydrogen peroxide, and bacteriocins. Consequently, in this paper, the impact of LAB on the growth of FBP and their BAs formation in food has been reviewed extensively.",
"title": ""
},
{
"docid": "63914ebf92c3c4d84df96f9b965bea5b",
"text": "In this paper we study different types of Recurrent Neural Networks (RNN) for sequence labeling tasks. We propose two new variants of RNNs integrating improvements for sequence labeling, and we compare them to the more traditional Elman and Jordan RNNs. We compare all models, either traditional or new, on four distinct tasks of sequence labeling: two on Spoken Language Understanding (ATIS and MEDIA); and two of POS tagging for the French Treebank (FTB) and the Penn Treebank (PTB) corpora. The results show that our new variants of RNNs are always more effective than the others.",
"title": ""
},
{
"docid": "a04302721f62c1af3b9be630524f03ab",
"text": "Hyperspectral image processing has been a very dynamic area in remote sensing and other applications in recent years. Hyperspectral images provide ample spectral information to identify and distinguish spectrally similar materials for more accurate and detailed information extraction. Wide range of advanced classification techniques are available based on spectral information and spatial information. To improve classification accuracy it is essential to identify and reduce uncertainties in image processing chain. This paper presents the current practices, problems and prospects of hyperspectral image classification. In addition, some important issues affecting classification performance are discussed.",
"title": ""
},
{
"docid": "971227f276624394bf87678186d99e2d",
"text": "Some of the most challenging issues in data outsourcing scenario are the enforcement of authorization policies and the support of policy updates. Ciphertext-policy attribute-based encryption is a promising cryptographic solution to these issues for enforcing access control policies defined by a data owner on outsourced data. However, the problem of applying the attribute-based encryption in an outsourced architecture introduces several challenges with regard to the attribute and user revocation. In this paper, we propose an access control mechanism using ciphertext-policy attribute-based encryption to enforce access control policies with efficient attribute and user revocation capability. The fine-grained access control can be achieved by dual encryption mechanism which takes advantage of the attribute-based encryption and selective group key distribution in each attribute group. We demonstrate how to apply the proposed mechanism to securely manage the outsourced data. The analysis results indicate that the proposed scheme is efficient and secure in the data outsourcing systems.",
"title": ""
},
{
"docid": "a8d78c6fd0f2f5792d5eaab3ddd577dc",
"text": "This paper describes the new Java memory model, which has been revised as part of Java 5.0. The model specifies the legal behaviors for a multithreaded program; it defines the semantics of multithreaded Java programs and partially determines legal implementations of Java virtual machines and compilers.The new Java model provides a simple interface for correctly synchronized programs -- it guarantees sequential consistency to data-race-free programs. Its novel contribution is requiring that the behavior of incorrectly synchronized programs be bounded by a well defined notion of causality. The causality requirement is strong enough to respect the safety and security properties of Java and weak enough to allow standard compiler and hardware optimizations. To our knowledge, other models are either too weak because they do not provide for sufficient safety/security, or are too strong because they rely on a strong notion of data and control dependences that precludes some standard compiler transformations.Although the majority of what is currently done in compilers is legal, the new model introduces significant differences, and clearly defines the boundaries of legal transformations. For example, the commonly accepted definition for control dependence is incorrect for Java, and transformations based on it may be invalid.In addition to providing the official memory model for Java, we believe the model described here could prove to be a useful basis for other programming languages that currently lack well-defined models, such as C++ and C#.",
"title": ""
},
{
"docid": "4a4a1ca40a819e07fb92444a44e87d6b",
"text": "We announce the public availability of the RWTH Aachen University speech recognition toolkit. The toolkit includes state of the art speech recognition technology for acoustic model training and decoding. Speaker adaptation, speaker adaptive training, unsupervised training, a finite state automata library, and an efficient tree search decoder are notable components. Comprehensive documentation, example setups for training and recognition, and a tutorial are provided to support newcomers.",
"title": ""
},
{
"docid": "67bc52adf7c42c7a0ef6178ce4990e57",
"text": "Recognizing oneself as the owner of a body and the agent of actions requires specific mechanisms which have been elucidated only recently. One of these mechanisms is the monitoring of signals arising from bodily movements, i.e. the central signals which contribute to the generation of the movements and the sensory signals which arise from their execution. The congruence between these two sets of signals is a strong index for determining the experiences of ownership and agency, which are the main constituents of the experience of being an independent self. This mechanism, however, does not account from the frequent cases where an intention is generated but the corresponding action is not executed. In this paper, it is postulated that such covert actions are internally simulated by activating specific cortical networks or representations of the intended actions. This process of action simulation is also extended to the observation and the recognition of actions performed or intended by other agents. The problem of disentangling representations that pertain to self-intended actions from those that pertain to actions executed or intended by others, is a critical one for attributing actions to their respective agents. Failure to recognize one's own actions and misattribution of actions may result from pathological conditions which alter the readability of these representations.",
"title": ""
},
{
"docid": "4feb1e6348bb9d5cc4b68957392c35fb",
"text": "Multibiometric recognition systems, which aggregate information from multiple biometric sources, are gaining popularity because they are able to overcome limitaions such as non-universality, noisy sensor data a nd susceptibility. Multibiometric systems promise significant improvements as higher accuracy and increased resistance to spoofing over the single biometric systems. This paper proposes a method which integrates fingerprint, palmprint and face and performs the fusion at score level. Three biometric traits are collected and stored into database at t he time of Enrollment. In the Authentication stage query images will be co mpared against the stored templates and match score is generated.AOV based minutiae algorithm is proposed for fingerprint matching. To compare Face images PCA analysis is used. Palmprint matching score can be generated usi ng PCA analysis. This matching score will be passed to the fusion stage. Fusion stage includes normalization of the s cores. Weights can be assigned according to the imp ortance of the biometric traits. These weighted and normalized sco re will be combined to generate a total score. This total score will be passed to the decision stage. In the decision stage total score will be compared with certain threshol d value. That will realize person’s authenticity whether a person is g enuine or imposter.",
"title": ""
},
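As an illustration of the score-level fusion described in the passage above (not the paper's actual implementation), the sketch below assumes each modality has already produced a raw match score; scores are min-max normalized and combined with modality weights before a threshold decision. The raw scores, score ranges, weights, and threshold are all made-up values.

```python
import numpy as np

# Illustrative score-level fusion for a multibiometric system (assumed values).

def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] using that matcher's score range."""
    return (score - lo) / (hi - lo)

def fuse(scores, ranges, weights):
    """Weighted sum of normalized per-modality scores."""
    normalized = [min_max_normalize(s, lo, hi) for s, (lo, hi) in zip(scores, ranges)]
    return float(np.dot(weights, normalized))

# Hypothetical raw scores for fingerprint, palmprint and face matchers.
raw_scores = [42.0, 0.73, 125.0]
score_ranges = [(0.0, 60.0), (0.0, 1.0), (0.0, 200.0)]   # assumed matcher ranges
weights = [0.4, 0.3, 0.3]                                 # assumed importance weights

total = fuse(raw_scores, score_ranges, weights)
decision = "genuine" if total >= 0.6 else "imposter"      # assumed threshold
print(total, decision)
```

In practice the weights would be tuned to reflect how reliable each modality is, and the threshold would be chosen from genuine/imposter score distributions; here both are placeholders.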
{
"docid": "3f88c453eab8b2fbfffbf98fee34d086",
"text": "Face recognition become one of the most important and fastest growing area during the last several years and become the most successful application of image analysis and broadly used in security system. It has been a challenging, interesting, and fast growing area in real time applications. The propose method is tested using a benchmark ORL database that contains 400 images of 40 persons. Pre-Processing technique are applied on the ORL database to increase the recognition rate. The best recognition rate is 97.5% when tested using 9 training images and 1 testing image. Increasing image database brightness is efficient and will increase the recognition rate. Resizing images using 0.3 scale is also efficient and will increase the recognition rate. PCA is used for feature extraction and dimension reduction. Euclidean distance is used for matching process.",
"title": ""
},
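A compact sketch of the eigenface-style pipeline described in the passage above, PCA for feature extraction and dimension reduction followed by Euclidean nearest-neighbour matching, is given below. It is not the paper's code; the image size, number of components, and random placeholder data standing in for ORL-style images are assumptions made for illustration.

```python
import numpy as np

# Illustrative PCA (eigenfaces) + Euclidean-distance matching pipeline.

def fit_pca(train, n_components):
    """train: (n_samples, n_pixels) flattened face images."""
    mean = train.mean(axis=0)
    centered = train - mean
    # SVD of the centered data gives the principal directions in Vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    return mean, components

def project(faces, mean, components):
    return (faces - mean) @ components.T

def predict(probe, gallery_feats, gallery_labels):
    """Nearest neighbour in eigenface space using Euclidean distance."""
    dists = np.linalg.norm(gallery_feats - probe, axis=1)
    return gallery_labels[int(np.argmin(dists))]

# Placeholder data standing in for ORL-style images (40 subjects, 92x112 pixels).
rng = np.random.default_rng(1)
train = rng.random((360, 92 * 112))
labels = np.repeat(np.arange(40), 9)

mean, comps = fit_pca(train, n_components=50)
feats = project(train, mean, comps)
probe = project(train[:1], mean, comps)[0]
print(predict(probe, feats, labels))   # the toy probe matches its own label (0)
```

With real face images, the 9-training/1-test split mentioned in the abstract would be implemented by fitting PCA on the nine gallery images per subject and projecting the held-out probe before the nearest-neighbour step.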
{
"docid": "0c3387ec7ed161d931bc08151e722d10",
"text": "New updated! The latest book from a very famous author finally comes out. Book of the tower of hanoi myths and maths, as an amazing reference becomes what you need to get. What's for is this book? Are you still thinking for what the book is? Well, this is what you probably will get. You should have made proper choices for your better life. Book, as a source that may involve the facts, opinion, literature, religion, and many others are the great friends to join with.",
"title": ""
},
{
"docid": "544cfa381dad24a53a31e368e10d8f75",
"text": "Several previous works have shown that TCP exhibits poor performance in mobile ad hoc networks (MANETs). The ultimate reason for this is that MANETs behave in a significantly different way from traditional wired networks, like the Internet, for which TCP was originally designed. In this paper we propose a novel transport protocol - named TPA - specifically tailored to the characteristics of the MANET environment. It is based on a completely new congestion control mechanism, and designed in such a way to minimize the number of useless transmissions and, hence, power consumption. Furthermore, it is able to manage efficiently route changes and route failures. We evaluated the TPA protocol in a static scenario where TCP exhibits good performance. Simulation results show that, even in such a scenario, TPA significantly outperforms TCP.",
"title": ""
},
{
"docid": "eb7990a677cd3f96a439af6620331400",
"text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.",
"title": ""
},
{
"docid": "4f9dd51d77b6a7008b213042a825c748",
"text": "A crucial capability of real-world intelligent agents is their ability to plan a sequence of actions to achieve their goals in the visual world. In this work, we address the problem of visual semantic planning: the task of predicting a sequence of actions from visual observations that transform a dynamic environment from an initial state to a goal state. Doing so entails knowledge about objects and their affordances, as well as actions and their preconditions and effects. We propose learning these through interacting with a visual and dynamic environment. Our proposed solution involves bootstrapping reinforcement learning with imitation learning. To ensure cross task generalization, we develop a deep predictive model based on successor representations. Our experimental results show near optimal results across a wide range of tasks in the challenging THOR environment.",
"title": ""
},
{
"docid": "59ec5715b15e3811a0d9010709092d03",
"text": "We propose two new models for human action recognition from video sequences using topic models. Video sequences are represented by a novel “bag-of-words” representation, where each frame corresponds to a “word”. Our models differ from previous latent topic models for visual recognition in two major aspects: first of all, the latent topics in our models directly correspond to class labels; secondly, some of the latent variables in previous topic models become observed in our case. Our models have several advantages over other latent topic models used in visual recognition. First of all, the training is much easier due to the decoupling of the model parameters. Secondly, it alleviates the issue of how to choose the appropriate number of latent topics. Thirdly, it achieves much better performance by utilizing the information provided by the class labels in the training set. We present action classification results on five different datasets. Our results are either comparable to, or significantly better than previous published results on these datasets. Index Terms —Human action recognition, video analysis, bag-of-words, probabilistic graphical models, event and activity understanding",
"title": ""
},
{
"docid": "95f1862369f279f20fc1fb10b8b41ea8",
"text": "This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted , or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Intrusion detection in wireless ad-hoc networks / editors, Nabendu Chaki and Rituparna Chaki. pages cm Includes bibliographical references and index. Contents Preface ix a b o u t t h e e d i t o r s xi c o n t r i b u t o r s xiii chaP t e r 1 intro d u c t i o n 1 Nova ru N De b , M a N a l i CH a k r a bor T y, a N D N a beN Du CH a k i chaP t e r 2 a r c h i t e c t u r e a n d o r g a n i z at i o n is s u e s 43 M a N a l i CH a k r a bor T y, Nova ru N De b , De bDu T Ta ba r M a N roy, a N D r i T u pa r N a CH a k i chaP t e r 3 routin g f o r …",
"title": ""
},
{
"docid": "4a6523b16ebe8bfa04421530f6252bd5",
"text": "Representations are fundamental to artificial intelligence. The performance of a learning system depends on how the data is represented. Typically, these representations are hand-engineered using domain knowledge. Recently, the trend is to learn these representations through stochastic gradient descent in multi-layer neural networks, which is called backprop. Learning representations directly from the incoming data stream reduces human labour involved in designing a learning system. More importantly, this allows in scaling up a learning system to difficult tasks. In this paper, we introduce a new incremental learning algorithm called crossprop, that learns incoming weights of hidden units based on the meta-gradient descent approach. This meta-gradient descent approach was previously introduced by Sutton (1992) and Schraudolph (1999) for learning stepsizes. The final update equation introduces an additional memory parameter for each of these weights and generalizes the backprop update equation. From our empirical experiments, we show that crossprop learns and reuses its feature representation while tackling new and unseen tasks whereas backprop relearns a new feature representation.",
"title": ""
},
{
"docid": "880a0dc7a717d9d68761232516b150b5",
"text": "A longstanding vision in distributed systems is to build reliable systems from unreliable components. An enticing formulation of this vision is Byzantine Fault-Tolerant (BFT) state machine replication, in which a group of servers collectively act as a correct server even if some of the servers misbehave or malfunction in arbitrary (“Byzantine”) ways. Despite this promise, practitioners hesitate to deploy BFT systems, at least partly because of the perception that BFT must impose high overheads.\n In this article, we present Zyzzyva, a protocol that uses speculation to reduce the cost of BFT replication. In Zyzzyva, replicas reply to a client's request without first running an expensive three-phase commit protocol to agree on the order to process requests. Instead, they optimistically adopt the order proposed by a primary server, process the request, and reply immediately to the client. If the primary is faulty, replicas can become temporarily inconsistent with one another, but clients detect inconsistencies, help correct replicas converge on a single total ordering of requests, and only rely on responses that are consistent with this total order. This approach allows Zyzzyva to reduce replication overheads to near their theoretical minima and to achieve throughputs of tens of thousands of requests per second, making BFT replication practical for a broad range of demanding services.",
"title": ""
},
{
"docid": "a1d0bf0d28bbe3dd568e7e01bc9d59c3",
"text": "A novel coupling technique for circularly polarized annular-ring patch antenna is developed and discussed. The circular polarization (CP) radiation of the annular-ring patch antenna is achieved by a simple microstrip feed line through the coupling of a fan-shaped patch on the same plane of the antenna. Proper positioning of the coupling fan-shaped patch excites two orthogonal resonant modes with 90 phase difference, and a pure circular polarization is obtained. The dielectric material is a cylindrical block of ceramic with a permittivity of 25 and that reduces the size of the antenna. The prototype has been designed and fabricated and found to have an impedance bandwidth of 2.3% and a 3 dB axial-ratio bandwidth of about 0.6% at the center frequency of 2700 MHz. The characteristics of the proposed antenna have been by simulation software HFSS and experiment. The measured and simulated results are in good agreement.",
"title": ""
}
] |
scidocsrr
|
38390bd00fbb8d04ff911103e9eadbd7
|
Quantifying and optimizing visualization: An evolutionary computing-based approach
|
[
{
"docid": "2f8d586efc46fa6354ced3cdc96e43c7",
"text": "This paper reviews the trajectory of three information visualization innovations: treemaps, cone trees, and hyperbolic trees. These three ideas were first published around the same time in the early 1990s, so we are able to track academic publications, patents, and trade press articles over almost two decades. We describe the early history of each approach, problems with data collection from differing sources, appropriate metrics, and strategies for visualizing these longitudinal data sets. This paper makes two contributions: (1) it offers the information visualization community a history of how certain ideas evolved, influenced others, and were adopted for widespread use and (2) it provides an example of how such scientometric trajectories of innovations can be gathered and visualized. Guidance for designers is offered, but these conjectures may also be useful to researchers, research managers, science policy analysts, and venture capitalists.",
"title": ""
}
] |
[
{
"docid": "467d953d489ca8f7d75c798d6e948a86",
"text": "The ability to detect recent natural selection in the human population would have profound implications for the study of human history and for medicine. Here, we introduce a framework for detecting the genetic imprint of recent positive selection by analysing long-range haplotypes in human populations. We first identify haplotypes at a locus of interest (core haplotypes). We then assess the age of each core haplotype by the decay of its association to alleles at various distances from the locus, as measured by extended haplotype homozygosity (EHH). Core haplotypes that have unusually high EHH and a high population frequency indicate the presence of a mutation that rose to prominence in the human gene pool faster than expected under neutral evolution. We applied this approach to investigate selection at two genes carrying common variants implicated in resistance to malaria: G6PD and CD40 ligand. At both loci, the core haplotypes carrying the proposed protective mutation stand out and show significant evidence of selection. More generally, the method could be used to scan the entire genome for evidence of recent positive selection.",
"title": ""
},
{
"docid": "83a13b090260a464064a3c884a75ad91",
"text": "While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively less success in extending to generate unsupervised sentences or documents embeddings. Recent work has demonstrated that a distance measure between documents called Word Mover’s Distance (WMD) that aligns semantically similar words, yields unprecedented KNN classification accuracy. However, WMD is expensive to compute, and it is hard to extend its use beyond a KNN classifier. In this paper, we propose the Word Mover’s Embedding (WME), a novel approach to building an unsupervised document (sentence) embedding from pre-trained word embeddings. In our experiments on 9 benchmark text classification datasets and 22 textual similarity tasks, the proposed technique consistently matches or outperforms state-of-the-art techniques, with significantly higher accuracy on problems of short length.",
"title": ""
},
{
"docid": "97fe7ea7806a0edf74413b8c8f5abea0",
"text": "The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a lesser number of training cases than did CNNs. \"Deep learning\", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.",
"title": ""
},
{
"docid": "bf14fb39f07e01bd6dc01b3583a726b6",
"text": "To provide a general context for library implementations of open source software (OSS), the purpose of this paper is to assess and evaluate the awareness and adoption of OSS by the LIS professionals working in various engineering colleges of Odisha. The study is based on survey method and questionnaire technique was used for collection data from the respondents. The study finds that although the LIS professionals of engineering colleges of Odisha have knowledge on OSS, their uses in libraries are in budding stage. Suggests that for the widespread use of OSS in engineering college libraries of Odisha, a cooperative and participatory organisational system, positive attitude of authorities and LIS professionals, proper training provision for LIS professionals need to be developed.",
"title": ""
},
{
"docid": "e08914f566fde1dd91a5270d0e12d886",
"text": "Automation in agriculture system is very important these days. This paper proposes an automated system for irrigating the fields. ESP-8266 WIFI module chip is used to connect the system to the internet. Various types of sensors are used to check the content of moisture in the soil, and the water is supplied to the soil through the motor pump. IOT is used to inform the farmers of the supply of water to the soil through an android application. Every time water is given to the soil, the farmer will get to know about that.",
"title": ""
},
{
"docid": "9258d109966511d303a86d4320065069",
"text": "This paper presents the design of a fast comparator, where the dual hysteresis and the digital offset control are integrated in a source degenerated preamplifier stage. The selected architecture of the preamplifier stage, followed by a differential symmetrical OTA allowed easily to be achieved design requirements as 8mV hysteresis, 1σ input random offset of 0.58mV, range of ±35mV additional digital offset control with 5mV step, 30ns propagation delay, while the power dissipation is 90μW and the block area is 0.015mm2 at 0.18μm CMOS technology. The source degenerating resistors in the preamplifier stage can be used to create dual or single, digitally programmed hysteretic comparators.",
"title": ""
},
{
"docid": "e2ea233e4baaf3c76337c779060531cf",
"text": "OBJECTIVES\nAnticoagulant and antiplatelet medications are known to increase the risk and severity of traumatic intracranial hemorrhage (tICH), even with minor head trauma. Most studies on bleeding propensity with head trauma are retrospective, are based on trauma registries, or include heterogeneous mechanisms of injury. The goal of this study was to determine the rate of tICH from only a common low-acuity mechanism of injury, that of a ground-level fall, in patients taking one or more of the following antiplatelet or anticoagulant medications: aspirin, warfarin, prasugrel, ticagrelor, dabigatran, rivaroxaban, apixaban, or enoxaparin.\n\n\nMETHODS\nThis was a prospective cohort study conducted at a Level I tertiary care trauma center of consecutive patients meeting the inclusion criteria of a ground-level fall with head trauma as affirmed by the treating clinician, a computed tomography (CT) head obtained, and taking and one of the above antiplatelet or anticoagulants. Patients were identified prospectively through electronic screening with confirmatory chart review. Emergency department charts were abstracted without subsequent knowledge of the hospital course. Patients transferred with a known abnormal CT head were excluded. Primary outcome was rate of tICH on initial CT head. Rates with 95% confidence intervals (CIs) were compared.\n\n\nRESULTS\nOver 30 months, we enrolled 939 subjects. The mean ± SD age was 78.3 ± 11.9 years and 44.6% were male. There were a total of 33 patients with tICH (3.5%, 95% CI = 2.5%-4.9%). Antiplatelets had a rate of tICH of 4.3% (95% CI = 3.0%-6.2%) compared to anticoagulants with a rate of 1.7% (95% CI = 0.4%-4.5%). Aspirin without other agents had an tICH rate of 4.6% (95% CI = 3.2%-6.6%); of these, 81.5% were taking low-dose 81 mg aspirin. Two patients received a craniotomy (one taking aspirin, one taking warfarin). There were four deaths (three taking aspirin, one taking warfarin). Most (72.7%) subjects with tICH were discharged home or to a rehabilitation facility. There were no tICH in 31 subjects taking a direct oral anticoagulant. CIs were overlapping for the groups.\n\n\nCONCLUSION\nThere is a low incidence of clinically significant tICH with a ground-level fall in head trauma in patients taking an anticoagulant or antiplatelet medication. There was no statistical difference in rate of tICH between antiplatelet and anticoagulants, which is unanticipated and counterintuitive as most literature and teaching suggests a higher rate with anticoagulants. A larger data set is needed to determine if small differences between the groups exist.",
"title": ""
},
{
"docid": "67bcf63602f849c9566e69d1e5cb24f6",
"text": "Previous approaches Using low-precision weights for forward pass and shadow fullprecision weights for backward pass Applying soft thresholding during training and hard thresholding at inference Usually using uniform quantization, not optimal for the empirically normally distributed weights: Injecting noise to weights at training time emulating quantization noise at inference The model becomes (quantization) noise invariant k-quantile quantization – equiprobable bin (same number of samples in each bin): For applying k-quantile quantization “uniformization trick” is used Uniform quantization on FX x , FX is the CDF The quantization noise is modeled as an additive uniform noise applied to FX x “Plug and play” in each model, minimal deviation from standard training scheme Work both for training from scratch and fine-tuning",
"title": ""
},
{
"docid": "ef2cde91b19d5816867e691c806a9a4b",
"text": "It is well known that different types of social ties have essentially different influence on people. However, users in online social networks rarely categorize their contacts into \"family\", \"colleagues\", or \"classmates\". While a bulk of research has focused on inferring particular types of relationships in a specific social network, few publications systematically study the generalization of the problem of inferring social ties over multiple heterogeneous networks. In this work, we develop a framework for classifying the type of social relationships by learning across heterogeneous networks. The framework incorporates social theories into a factor graph model, which effectively improves the accuracy of inferring the type of social relationships in a target network by borrowing knowledge from a different source network. Our empirical study on five different genres of networks validates the effectiveness of the proposed framework. For example, by leveraging information from a coauthor network with labeled advisor-advisee relationships, the proposed framework is able to obtain an F1-score of 90% (8-28% improvements over alternative methods) for inferring manager-subordinate relationships in an enterprise email network.",
"title": ""
},
{
"docid": "c61a6e26941409db9cb4a95c05a82785",
"text": "An important aspect in visualization design is the connection between what a designer does and the decisions the designer makes. Existing design process models, however, do not explicitly link back to models for visualization design decisions. We bridge this gap by introducing the design activity framework, a process model that explicitly connects to the nested model, a well-known visualization design decision model. The framework includes four overlapping activities that characterize the design process, with each activity explicating outcomes related to the nested model. Additionally, we describe and characterize a list of exemplar methods and how they overlap among these activities. The design activity framework is the result of reflective discussions from a collaboration on a visualization redesign project, the details of which we describe to ground the framework in a real-world design process. Lastly, from this redesign project we provide several research outcomes in the domain of cybersecurity, including an extended data abstraction and rich opportunities for future visualization research.",
"title": ""
},
{
"docid": "5c5f75a7dd8f3241346b45592fa60faa",
"text": "Participants searched for discrepant fear-relevant pictures (snakes or spiders) in grid-pattern arrays of fear-irrelevant pictures belonging to the same category (flowers or mushrooms) and vice versa. Fear-relevant pictures were found more quickly than fear-irrelevant ones. Fear-relevant, but not fear-irrelevant, search was unaffected by the location of the target in the display and by the number of distractors, which suggests parallel search for fear-relevant targets and serial search for fear-irrelevant targets. Participants specifically fearful of snakes but not spiders (or vice versa) showed facilitated search for the feared objects but did not differ from controls in search for nonfeared fear-relevant or fear-irrelevant, targets. Thus, evolutionary relevant threatening stimuli were effective in capturing attention, and this effect was further facilitated if the stimulus was emotionally provocative.",
"title": ""
},
{
"docid": "72e158d509f7f95eb0a3dcf699922be2",
"text": "We perform an asset market experiment in order to investigate whether overconfidence induces trading. We investigate three manifestations of overconfidence: calibration-based overconfidence, the better-than-average effect and illusion of control. Novelly, the measure employed for calibration-based overconfidence is task-specific in that it is designed to influence behavior. We find that calibration-based overconfidence does engender additional trade, though the better-than-average also appears to play a role. This is true both at the level of the individual and also at the level of the market. There is little evidence that gender influences trading activity. JEL Classification: G10, G11, G12, G14 The authors gratefully acknowledge the co-editor’s valuable suggestions in improving the paper’s exposition and two anonymous referrers’ valuable comments. In addition, the authors would like to thank the very helpful comments of Lucy Ackert, Ben Amoako-Adu, Bruno Biais, Tim Cason, Narat Charupat, Günter Franke, Simon Gervais, Markus Glaser, Patrik Guggenberger, Michael Haigh, Joachim Inkmann, Marhuenda Joaquin, Alexander Kempf, Brian Kluger, Roman Kraeussl, Bina Lehmann, Tao Lin, Harald Lohre, Greg Lypny, Elizabeth Maynes, Moshe Milevsky, Dean Mountain, Gordon Roberts, Chris Robinson, Stefan Rünzi, Gideon Saar, Dirk Schiereck, Harris Schlesinger, Chuck Schnitzlein, Michael Schröder, Betty Simkins, Brian Smith, Issouf Soumare, Yisong Tian, Chris Veld, Boyce Watkins, Martin Weber and Stephan Wiehler, along with seminar participants from American Finance Association 2005 (Philadelphia), the Economic Science Association 2004 (Amsterdam), the Financial Management Association 2004 (New Orleans), the Financial Management Association European Meeting 2004 (Zurich), European Financial Management Association 2004 (Basle), the Northern Finance Association (St. John’s, Newfoundland), the 2004 Symposium for Experimental Finance at the Aston Centre for Experimental Finance (Aston Business School), the 2005 Federal Reserve Bank of Atlanta Experimental Finance Conference, the University of Köln, the University of Konstanz, McMaster University, the University of Tilburg, Wilfred Laurier University and York University. Valuable technical assistance was provided by Harald Lohre, Amer Mohamed and John O’Brien. Generous financial assistance from ZEW, Institut de Finance Mathématiques de Montréal and SSHRC is gratefully acknowledged. Any views expressed represent those of the authors only and not necessarily those of McKinsey & Company, Inc. C © The Authors 2008. Published by Oxford University Press on behalf of the European Finance Association. All rights reserved. For Permissions, please email: journals.permissions@oxfordjournals.org 556 RICHARD DEAVES, ERIK LÜDERS, AND GUO YING LUO",
"title": ""
},
{
"docid": "0a5a5ec133b4630472ba95c5f6253bed",
"text": "SummaryOngoing political controversies around the world exemplify a long-standing and widespread preoccupation with the acceptability of homosexuality. Nonheterosexual people have seen dramatic surges both in their rights and in positive public opinion in many Western countries. In contrast, in much of Africa, the Middle East, the Caribbean, Oceania, and parts of Asia, homosexual behavior remains illegal and severely punishable, with some countries retaining the death penalty for it. Political controversies about sexual orientation have often overlapped with scientific controversies. That is, participants on both sides of the sociopolitical debates have tended to believe that scientific findings-and scientific truths-about sexual orientation matter a great deal in making political decisions. The most contentious scientific issues have concerned the causes of sexual orientation-that is, why are some people heterosexual, others bisexual, and others homosexual? The actual relevance of these issues to social, political, and ethical decisions is often poorly justified, however.",
"title": ""
},
{
"docid": "cbe70e9372d1588f075d2037164b3077",
"text": "Regularization is one of the crucial ingredients of deep learning, yet the term regularization has various definitions, and regularization methods are often studied separately from each other. In our work we present a systematic, unifying taxonomy to categorize existing methods. We distinguish methods that affect data, network architectures, error terms, regularization terms, and optimization procedures. We do not provide all details about the listed methods; instead, we present an overview of how the methods can be sorted into meaningful categories and sub-categories. This helps revealing links and fundamental similarities between them. Finally, we include practical recommendations both for users and for developers of new regularization methods.",
"title": ""
},
{
"docid": "1d9dc60534c6f5fa0d510b41bd151b33",
"text": "Android multitasking provides rich features to enhance user experience and offers great flexibility for app developers to promote app personalization. However, the security implication of Android multitasking remains under-investigated. With a systematic study of the complex tasks dynamics, we find design flaws of Android multitasking which make all recent versions of Android vulnerable to task hijacking attacks. We demonstrate proof-of-concept examples utilizing the task hijacking attack surface to implement UI spoofing, denialof-service and user monitoring attacks. Attackers may steal login credentials, implement ransomware and spy on user’s activities. We have collected and analyzed over 6.8 million apps from various Android markets. Our analysis shows that the task hijacking risk is prevalent. Since many apps depend on the current multitasking design, defeating task hijacking is not easy. We have notified the Android team about these issues and we discuss possible mitigation techniques in this paper.",
"title": ""
},
{
"docid": "187c696aeb78607327fd817dfa9446ba",
"text": "OBJECTIVE\nThe integration of SNOMED CT into the Unified Medical Language System (UMLS) involved the alignment of two views of synonymy that were different because the two vocabulary systems have different intended purposes and editing principles. The UMLS is organized according to one view of synonymy, but its structure also represents all the individual views of synonymy present in its source vocabularies. Despite progress in knowledge-based automation of development and maintenance of vocabularies, manual curation is still the main method of determining synonymy. The aim of this study was to investigate the quality of human judgment of synonymy.\n\n\nDESIGN\nSixty pairs of potentially controversial SNOMED CT synonyms were reviewed by 11 domain vocabulary experts (six UMLS editors and five noneditors), and scores were assigned according to the degree of synonymy.\n\n\nMEASUREMENTS\nThe synonymy scores of each subject were compared to the gold standard (the overall mean synonymy score of all subjects) to assess accuracy. Agreement between UMLS editors and noneditors was measured by comparing the mean synonymy scores of editors to noneditors.\n\n\nRESULTS\nAverage accuracy was 71% for UMLS editors and 75% for noneditors (difference not statistically significant). Mean scores of editors and noneditors showed significant positive correlation (Spearman's rank correlation coefficient 0.654, two-tailed p < 0.01) with a concurrence rate of 75% and an interrater agreement kappa of 0.43.\n\n\nCONCLUSION\nThe accuracy in the judgment of synonymy was comparable for UMLS editors and nonediting domain experts. There was reasonable agreement between the two groups.",
"title": ""
},
{
"docid": "f749c2ac068000ad489904e56d0fdf70",
"text": "–- Line maze solving algorithm is an algorithm used to solve a maze made of lines to be traced by a mobile robot. But it is designed only for lines with right angle intersection or turn. Meanwhile in real world, there are also curved and zig-zag turn. In this work, this algorithm is tested for curved and zig-zag track by using Arduino Uno. It turns out that line maze solving algorithm still has some deficiencies, even for the maze without curved and zig-zag line. Moreover, for the curved and zig-zag track, algorithm improvements are needed. Therefore, some of existing functions has been modified and replaced, and one new function added. When the improvements have been done, new algorithm is obtained. Then the test is done again on the mobile robot in a line maze with curved and zig-zag track. The result has proven that the new algorithm has successfully solved the maze.",
"title": ""
},
{
"docid": "8150f588c5eb3919d13f976fec58b736",
"text": "We study how to effectively leverage expert feedback to learn sequential decision-making policies. We focus on problems with sparse rewards and long time horizons, which typically pose significant challenges in reinforcement learning. We propose an algorithmic framework, called hierarchical guidance, that leverages the hierarchical structure of the underlying problem to integrate different modes of expert interaction. Our framework can incorporate different combinations of imitation learning (IL) and reinforcement learning (RL) at different levels, leading to dramatic reductions in both expert effort and cost of exploration. Using long-horizon benchmarks, including Montezuma’s Revenge, we demonstrate that our approach can learn significantly faster than hierarchical RL, and be significantly more label-efficient than standard IL. We also theoretically analyze labeling cost for certain instantiations of our framework.",
"title": ""
},
{
"docid": "5da1f0692a71e4dde4e96009b99e0c13",
"text": "The McKibben artificial muscle is a pneumatic actuator whose properties include a very high force to weight ratio. This characteristic makes it very attractive for a wide range of applications such as mobile robots and prosthetic appliances for the disabled. Typical applications often require a significant number of repeated contractions and extensions or cycles of the actuator. This repeated action leads to fatigue and failure of the actuator, yielding a life span that is often shorter than its more common robotic counterparts such as electric motors or pneumatic cylinders. In this paper, we develop a model that predicts the maximum number of life cycles of the actuator based on available uniaxial tensile properties of the actuator’s inner bladder. Experimental results, which validate the model, reveal McKibben actuators fabricated with natural latex rubber bladders have a fatigue limit 24 times greater than actuators fabricated with synthetic silicone rubber at large contraction ratios.",
"title": ""
},
{
"docid": "5bf385c6ae80f8a8f9dd22592c2530b4",
"text": "This paper represents a reliable, compact, fast and low cost smart home automation system, based on Arduino (microcontroller) and Android app. Bluetooth chip has been used with Arduino, thus eliminating the use of personal computers (PCs). Various devices such as lights, DC Servomotors have been incorporated in the designed system to demonstrate the feasibility, reliability and quick operation of the proposed smart home system. The entire designed system has been tested and it is seen capable of running successfully and perform the desired operations, such as switching functionalities, position control of Servomotor, speed control of D.C motor and light intensity control (Via Voltage Regulation).",
"title": ""
}
] |
scidocsrr
|
5695a91d67276f889e83af6dd4bfacbc
|
Bayesian Optimisation for informative continuous path planning
|
[
{
"docid": "5508603a802abb9ab0203412b396b7bc",
"text": "We present an optimal algorithm for informative path planning (IPP), using a branch and bound method inspired by feature selection algorithms. The algorithm uses the monotonicity of the objective function to give an objective function-dependent speedup versus brute force search. We present results which suggest that when maximizing variance reduction in a Gaussian process model, the speedup is significant.",
"title": ""
},
{
"docid": "187127dd1ab5f97b1158a77a25ddce91",
"text": "We introduce stochastic variational inference for Gaussian process models. This enables the application of Gaussian process (GP) models to data sets containing millions of data points. We show how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables which factorize the model in the necessary manner to perform variational inference. Our approach is readily extended to models with non-Gaussian likelihoods and latent variable models based around Gaussian processes. We demonstrate the approach on a simple toy problem and two real world data sets.",
"title": ""
}
] |
[
{
"docid": "c14b9f8bb1fe8914ca4a07742b476824",
"text": "This paper presents an improved TIQ comparator based 3-bit Flash ADC. A modification is suggested for improvement in switching power consumption by eliminating halt stage from 2PASC based TIQ comparator used for ADC. It has been found that switching power consumption is reduced in comparison with other types of FLASH ADC. Switching power dissipation of 27.35pW at 1MHz input signal frequency for 3-bit flash ADC is obtained. The simulation has been carried out at TSMC 180nm Technology in LTspice.",
"title": ""
},
{
"docid": "3f8ed9f5b015f50989ebde22329e6e7c",
"text": "In this paper we present a survey of results concerning algorithms, complexity, and applications of the maximum clique problem. We discuss enumerative and exact algorithms, heuristics, and a variety of other proposed methods. An up to date bibliography on the maximum clique and related problems is also provided.",
"title": ""
},
{
"docid": "4019d3f46ec0ef42145d8d63b62a88d0",
"text": "Learning policies on data synthesized by models can in principle quench the thirst of reinforcement learning algorithms for large amounts of real experience, which is often costly to acquire. However, simulating plausible experience de novo is a hard problem for many complex environments, often resulting in biases for modelbased policy evaluation and search. Instead of de novo synthesis of data, here we assume logged, real experience and model alternative outcomes of this experience under counterfactual actions, i.e. actions that were not actually taken. Based on this, we propose the Counterfactually-Guided Policy Search (CF-GPS) algorithm for learning policies in POMDPs from off-policy experience. It leverages structural causal models for counterfactual evaluation of arbitrary policies on individual off-policy episodes. CF-GPS can improve on vanilla model-based RL algorithms by making use of available logged data to de-bias model predictions. In contrast to off-policy algorithms based on Importance Sampling which re-weight data, CF-GPS leverages a model to explicitly consider alternative outcomes, allowing the algorithm to make better use of experience data. We find empirically that these advantages translate into improved policy evaluation and search results on a non-trivial grid-world task. Finally, we show that CF-GPS generalizes the previously proposed Guided Policy Search and that reparameterization-based algorithms such Stochastic Value Gradient can be interpreted as counterfactual methods.",
"title": ""
},
{
"docid": "6f768934f02c0e559801a7b98d0fbbd7",
"text": "Voice-activated intelligent assistants, such as Siri, Google Now, and Cortana, are prevalent on mobile devices. However, it is challenging to evaluate them due to the varied and evolving number of tasks supported, e.g., voice command, web search, and chat. Since each task may have its own procedure and a unique form of correct answers, it is expensive to evaluate each task individually. This paper is the first attempt to solve this challenge. We develop consistent and automatic approaches that can evaluate different tasks in voice-activated intelligent assistants. We use implicit feedback from users to predict whether users are satisfied with the intelligent assistant as well as its components, i.e., speech recognition and intent classification. Using this approach, we can potentially evaluate and compare different tasks within and across intelligent assistants ac-cording to the predicted user satisfaction rates. Our approach is characterized by an automatic scheme of categorizing user-system interaction into task-independent dialog actions, e.g., the user is commanding, selecting, or confirming an action. We use the action sequence in a session to predict user satisfaction and the quality of speech recognition and intent classification. We also incorporate other features to further improve our approach, including features derived from previous work on web search satisfaction prediction, and those utilizing acoustic characteristics of voice requests. We evaluate our approach using data collected from a user study. Results show our approach can accurately identify satisfactory and unsatisfactory sessions.",
"title": ""
},
{
"docid": "25f625c93eb2b9963f331b012e9d973d",
"text": "This paper proposes a method to evaluate induction machines for electric vehicles (EVs) and hybrid electric vehicles (HEVs). Some performance aspects of induction machines are also compared to permanent magnet synchronous machines (PMSMs). An overview of static efficiency maps is presented, but efficiency maps miss dynamic effects and under-predict induction machine efficiencies. The proposed evaluation method is based on dynamic efficiency under loss minimization and overall energy consumption over standard driving cycles that are provided by the U.S. Environmental Protection Agency. Over each of these cycles, the dynamic efficiency and drive-cycle energy are determined based on experimental motor data in combination with a dynamic HEV simulator. Results show that efficiency in the fast-changing dynamic environment of a vehicle can be higher than inferred from static efficiency maps. Overall machine efficiency is compared for rated flux, and for dynamic loss-minimizing flux control. The energy efficiency given optimum flux is typically five points higher than for rated flux. This result is comparable to published PMSM results. A PMSM is also used for comparisons, and results show that both machines can perform well in HEV and EV applications.",
"title": ""
},
{
"docid": "4e40a94d748530450e7c0d6017cd39c3",
"text": "Segmentation of mandibles in CT scans during virtual surgical planning is crucial for 3D surgical planning in order to obtain a detailed surface representation of the patients bone. Automatic segmentation of mandibles in CT scans is a challenging task due to large variation in their shape and size between individuals. In order to address this challenge we propose a convolutional neural network approach for mandible segmentation in CT scans by considering the continuum of anatomical structures through different planes. The proposed convolutional neural network adopts the architecture of the U-Net and then combines the resulting 2D segmentations from three different planes into a 3D segmentation. We implement such a segmentation approach on 11 neck CT scans and then evaluate the performance. We achieve an average dice coefficient of 0.89 on two testing mandible segmentation. Experimental results show that our proposed approach for mandible segmentation in CT scans exhibits high accuracy.",
"title": ""
},
{
"docid": "5c0d3c8962d1f18a50162bbf3dcd4658",
"text": "The field of power electronics poses challenging control problems that cannot be treated in a complete manner using traditional modelling and controller design approaches. The main difficulty arises from the hybrid nature of these systems due to the presence of semiconductor switches that induce different modes of operation and operate with a high switching frequency. Since the control techniques traditionally employed in industry feature a significant potential for improving the performance and the controller design, the field of power electronics invites the application of advanced hybrid systems methodologies. The computational power available today and the recent theoretical advances in the control of hybrid systems allow one to tackle these problems in a novel way that improves the performance of the system, and is systematic and implementable. In this paper, this is illustrated by two examples, namely the Direct Torque Control of three-phase induction motors and the optimal control of switch-mode dc-dc converters.",
"title": ""
},
{
"docid": "babe85fa78ea1f4ce46eb0cfd77ae2b8",
"text": "x + a1x + · · ·+ an = 0. On s’interesse surtout à la résolution “par radicaux”, c’est-à-dire à la résolution qui n’utilise que des racines m √ a. Il est bien connu depuis le 16 siècle que l’on peut résoudre par radicaux des équations de degré n ≤ 4. Par contre, selon un résultat célèbre d’Abel, l’équation générale de degré n ≥ 5 n’est pas résoluble par radicaux. L’idée principale de la théorie de Galois est d’associer à chaque équation son groupe de symétrie. Cette construction permet de traduire des propriétés de l’équation (telles que la résolubilité par radicaux) aux propriétés du groupe associé. Le cours ne suivra pas le chemin historique. L’ouvrage [Ti 1, 2] est une référence agréable pour l’histoire du sujet.",
"title": ""
},
{
"docid": "cf7c5ae92a0514808232e4e9d006024a",
"text": "We present an interactive, hybrid human-computer method for object classification. The method applies to classes of objects that are recognizable by people with appropriate expertise (e.g., animal species or airplane model), but not (in general) by people without such expertise. It can be seen as a visual version of the 20 questions game, where questions based on simple visual attributes are posed interactively. The goal is to identify the true class while minimizing the number of questions asked, using the visual content of the image. We introduce a general framework for incorporating almost any off-the-shelf multi-class object recognition algorithm into the visual 20 questions game, and provide methodologies to account for imperfect user responses and unreliable computer vision algorithms. We evaluate our methods on Birds-200, a difficult dataset of 200 tightly-related bird species, and on the Animals With Attributes dataset. Our results demonstrate that incorporating user input drives up recognition accuracy to levels that are good enough for practical applications, while at the same time, computer vision reduces the amount of human interaction required.",
"title": ""
},
{
"docid": "4e4dfbacd8888535a024878f25d3302d",
"text": "“Big data” is the term for data sets that are so large or complex that traditional data processing applications are inadequate to deal with them. Big data offers unprecedented opportunities for enterprises to use analytics for achieving new levels of competitive advantage, including optimizing operations, customer intelligence and innovation in products and services. It is one of the most significant technology disruptions for businesses since the meteoric rise of the Internet and the digital economy.3 Proponents of big data expect organizations to derive much value from high velocity,4 massive volumes of data originating from “everywhere,” including the Internet of Things (IoT),5",
"title": ""
},
{
"docid": "40d28bd6b2caedec17a0990b8020c918",
"text": "The fourth generation wireless communication systems have been deployed or are soon to be deployed in many countries. However, with an explosion of wireless mobile devices and services, there are still some challenges that cannot be accommodated even by 4G, such as the spectrum crisis and high energy consumption. Wireless system designers have been facing the continuously increasing demand for high data rates and mobility required by new wireless applications and therefore have started research on fifth generation wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. Future challenges facing these potential technologies are also discussed.",
"title": ""
},
{
"docid": "4c64b652d9135dae74de4f167c61e896",
"text": "An important task in computational statistics and machine learning is to approximate a posterior distribution p(x) with an empirical measure supported on a set of representative points {xi}i=1. This paper focuses on methods where the selection of points is essentially deterministic, with an emphasis on achieving accurate approximation when n is small. To this end, we present Stein Points. The idea is to exploit either a greedy or a conditional gradient method to iteratively minimise a kernel Stein discrepancy between the empirical measure and p(x). Our empirical results demonstrate that Stein Points enable accurate approximation of the posterior at modest computational cost. In addition, theoretical results are provided to establish convergence of the method.",
"title": ""
},
{
"docid": "b084146e68ae9b6400019f69573086c3",
"text": "Soccer is the most popular sport in the world and is performed by men and women, children and adults with different levels of expertise. Soccer performance depends upon a myriad of factors such as technical/biomechanical, tactical, mental and physiological areas. One of the reasons that soccer is so popular worldwide is that players may not need to have an extraordinary capacity within any of these performance areas, but possess a reasonable level within all areas. However, there are trends towards more systematic training and selection influencing the anthropometric profiles of players who compete at the highest level. As with other activities, soccer is not a science, but science may help improve performance. Efforts to improve soccer performance often focus on technique and tactics at the expense of physical fitness. During a 90-minute game, elite-level players run about 10 km at an average intensity close to the anaerobic threshold (80-90% of maximal heart rate). Within this endurance context, numerous explosive bursts of activity are required, including jumping, kicking, tackling, turning, sprinting, changing pace, and sustaining forceful contractions to maintain balance and control of the ball against defensive pressure. The best teams continue to increase their physical capacities, whilst the less well ranked have similar values as reported 30 years ago. Whether this is a result of fewer assessments and training resources, selling the best players, and/or knowledge of how to perform effective exercise training regimens in less well ranked teams, is not known. As there do exist teams from lower divisions with as high aerobic capacity as professional teams, the latter factor probably plays an important role. This article provides an update on the physiology of soccer players and referees, and relevant physiological tests. It also gives examples of effective strength- and endurance-training programmes to improve on-field performance. The cited literature has been accumulated by computer searching of relevant databases and a review of the authors' extensive files. From a total of 9893 papers covering topics discussed in this article, 843 were selected for closer scrutiny, excluding studies where information was redundant, insufficient or the experimental design was inadequate. In this article, 181 were selected and discussed. The information may have important implications for the safety and success of soccer players and hopefully it should be understood and acted upon by coaches and individual soccer players.",
"title": ""
},
{
"docid": "8626803a7fd8a2190f4d6c4b56b04489",
"text": "Quotes, or quotations, are well known phrases or sentences that we use for various purposes such as emphasis, elaboration, and humor. In this paper, we introduce a task of recommending quotes which are suitable for given dialogue context and we present a deep learning recommender system which combines recurrent neural network and convolutional neural network in order to learn semantic representation of each utterance and construct a sequence model for the dialog thread. We collected a large set of twitter dialogues with quote occurrences in order to evaluate proposed recommender system. Experimental results show that our approach outperforms not only the other state-of-the-art algorithms in quote recommendation task, but also other neural network based methods built for similar tasks.",
"title": ""
},
{
"docid": "ddf8bc756d2b2dcfddd107ac972297a3",
"text": "This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs , reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework.",
"title": ""
},
{
"docid": "4e5d46d9bb7b9edbc4fc6a42b6314703",
"text": "Positive body image among adults is related to numerous indicators of well-being. However, no research has explored body appreciation among children. To facilitate our understanding of children’s positive body image, the current study adapts and validates the Body Appreciation Scale-2 (BAS-2; Tylka & WoodBarcalow, 2015a) for use with children. Three hundred and forty-four children (54.4% girls) aged 9–11 completed the adapted Body Appreciation Scale-2 for Children (BAS-2C) alongside measures of body esteem, media influence, body surveillance, mood, and dieting. A sub-sample of 154 participants (62.3% girls) completed the questionnaire 6-weeks later to examine stability (test-retest) reliability. The BAS-2C",
"title": ""
},
{
"docid": "3ce574cede850ade17a9600a54c7adbf",
"text": "Cloud computing is an emerging and fast-growing computing paradigm that has gained great interest from both industry and academia. Consequently, many researchers are actively involved in cloud computing research projects. One major challenge facing cloud computing researchers is the lack of a comprehensive cloud computing experimental tool to use in their studies. This paper introduces CloudExp, a modeling and simulation environment for cloud computing. CloudExp can be used to evaluate a wide spectrum of cloud components such as processing elements, data centers, storage, networking, Service Level Agreement (SLA) constraints, web-based applications, Service Oriented Architecture (SOA), virtualization, management and automation, and Business Process Management (BPM). Moreover, CloudExp introduces the Rain workload generator which emulates real workloads in cloud environments. Also, MapReduce processing model is integrated in CloudExp in order to handle the processing of big data problems. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b4c25df52a0a5f6ab23743d3ca9a3af2",
"text": "Measuring similarity between texts is an important task for several applications. Available approaches to measure document similarity are inadequate for document pairs that have non-comparable lengths, such as a long document and its summary. This is because of the lexical, contextual and the abstraction gaps between a long document of rich details and its concise summary of abstract information. In this paper, we present a document matching approach to bridge this gap, by comparing the texts in a common space of hidden topics. We evaluate the matching algorithm on two matching tasks and find that it consistently and widely outperforms strong baselines. We also highlight the benefits of incorporating domain knowledge to text matching.",
"title": ""
},
{
"docid": "822e6c57ea2bbb53d43e44cf1bda8833",
"text": "The investigators proposed that transgression-related interpersonal motivations result from 3 psychological parameters: forbearance (abstinence from avoidance and revenge motivations, and maintenance of benevolence), trend forgiveness (reductions in avoidance and revenge, and increases in benevolence), and temporary forgiveness (transient reductions in avoidance and revenge, and transient increases in benevolence). In 2 studies, the investigators examined this 3-parameter model. Initial ratings of transgression severity and empathy were directly related to forbearance but not trend forgiveness. Initial responsibility attributions were inversely related to forbearance but directly related to trend forgiveness. When people experienced high empathy and low responsibility attributions, they also tended to experience temporary forgiveness. The distinctiveness of each of these 3 parameters underscores the importance of studying forgiveness temporally.",
"title": ""
}
] |
scidocsrr
|
098a4b0b9f4c07d581950261eebf7939
|
Learning word embeddings via context grouping
|
[
{
"docid": "497088def9f5f03dcb32e33d1b6fcb64",
"text": "In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
},
{
"docid": "21139973d721956c2f30e07ed1ccf404",
"text": "Representing words into vectors in continuous space can form up a potentially powerful basis to generate high-quality textual features for many text mining and natural language processing tasks. Some recent efforts, such as the skip-gram model, have attempted to learn word representations that can capture both syntactic and semantic information among text corpus. However, they still lack the capability of encoding the properties of words and the complex relationships among words very well, since text itself often contains incomplete and ambiguous information. Fortunately, knowledge graphs provide a golden mine for enhancing the quality of learned word representations. In particular, a knowledge graph, usually composed by entities (words, phrases, etc.), relations between entities, and some corresponding meta information, can supply invaluable relational knowledge that encodes the relationship between entities as well as categorical knowledge that encodes the attributes or properties of entities. Hence, in this paper, we introduce a novel framework called RC-NET to leverage both the relational and categorical knowledge to produce word representations of higher quality. Specifically, we build the relational knowledge and the categorical knowledge into two separate regularization functions, and combine both of them with the original objective function of the skip-gram model. By solving this combined optimization problem using back propagation neural networks, we can obtain word representations enhanced by the knowledge graph. Experiments on popular text mining and natural language processing tasks, including analogical reasoning, word similarity, and topic prediction, have all demonstrated that our model can significantly improve the quality of word representations.",
"title": ""
},
{
"docid": "76882dc402b82d9fffb0621bc6016259",
"text": "Representing discrete words in a continuous vector space turns out to be useful for natural language applications related to text understanding. Meanwhile, it poses extensive challenges, one of which is due to the polysemous nature of human language. A common solution (a.k.a word sense induction) is to separate each word into multiple senses and create a representation for each sense respectively. However, this approach is usually computationally expensive and prone to data sparsity, since each sense needs to be managed discriminatively. In this work, we propose a new framework for generating context-aware text representations without diving into the sense space. We model the concept space shared among senses, resulting in a framework that is efficient in both computation and storage. Specifically, the framework we propose is one that: i) projects both words and concepts into the same vector space; ii) obtains unambiguous word representations that not only preserve the uniqueness among words, but also reflect their context-appropriate meanings. We demonstrate the effectiveness of the framework in a number of tasks on text understanding, including word/phrase similarity measurements, paraphrase identification and question-answer relatedness classification.",
"title": ""
}
] |
[
{
"docid": "5dfda76bf2065850492406fdf7cfed81",
"text": "We present a method for object categorization in real-world scenes. Following a common consensus in the field, we do not assume that a figure-ground segmentation is available prior to recognition. However, in contrast to most standard approaches for object class recognition, our approach automatically segments the object as a result of the categorization. This combination of recognition and segmentation into one process is made possible by our use of an Implicit Shape Model, which integrates both capabilities into a common probabilistic framework. This model can be thought of as a non-parametric approach which can easily handle configurations of large numbers of object parts. In addition to the recognition and segmentation result, it also generates a per-pixel confidence measure specifying the area that supports a hypothesis and how much it can be trusted. We use this confidence to derive a natural extension of the approach to handle multiple objects in a scene and resolve ambiguities between overlapping hypotheses with an MDL-based criterion. In addition, we present an extensive evaluation of our method on a standard dataset for car detection and compare its performance to existing methods from the literature. Our results show that the proposed method outperforms previously published methods while needing one order of magnitude less training examples. Finally, we present results for articulated objects, which show that the proposed method can categorize and segment unfamiliar objects in different articulations and with widely varying texture patterns, even under significant partial occlusion.",
"title": ""
},
{
"docid": "d753d442d99ac49569aa93e33a658ad6",
"text": "Emotion is at the core of understanding ourselves and others, and the automatic expression and detection of emotion could enhance our experience with technologies. In this paper, we explore the use of computational linguistic tools to derive emotional features. Using 50 and 200 word samples of naturally-occurring blog texts, we find that some emotions are more discernible than others. In particular automated content analysis shows that authors expressing anger use the most affective language and also negative affect words; authors expressing joy use the most positive emotion words. In addition we explore the use of co-occurrence semantic space techniques to classify texts via their distance from emotional concept exemplar words: This demonstrated some success, particularly for identifying author expression of fear and joy emotions. This extends previous work by using finer-grained emotional categories and alternative linguistic analysis techniques. We relate our finding to human emotion perception and note potential applications.",
"title": ""
},
{
"docid": "e8c63552492e4285d09877b1b02cad79",
"text": "EPIDEMIOLOGY\nVulvar cancer can be classified into two groups according to predisposing factors: the first type correlates with a HPV infection and occurs mostly in younger patients. The second group is not HPV associated and occurs often in elderly women without neoplastic epithelial disorders.\n\n\nHISTOLOGY\nSquamous cell carcinoma (SCC) is the most common malignant tumor of the vulva (95%).\n\n\nCLINICAL FEATURES\nPruritus is the most common and long-lasting reported symptom of vulvar cancer, followed by vulvar bleeding, discharge, dysuria, and pain.\n\n\nTHERAPY\nThe gold standard for even a small invasive carcinoma of the vulva was historically radical vulvectomy with removal of the tumor with a wide margin followed by an en bloc resection of the inguinal and often the pelvic lymph nodes. Currently, a more individualized and less radical treatment is suggested: a radical wide local excision is possible in the case of localized lesions (T1). A sentinel lymph node (SLN) biopsy may be performed to reduce wound complications and lymphedema.\n\n\nPROGNOSIS\nThe survival of patients with vulvar cancer is good when convenient therapy is arranged quickly after initial diagnosis. Inguinal and/or femoral node involvement is the most significant prognostic factor for survival.",
"title": ""
},
{
"docid": "4eafe7f60154fa2bed78530735a08878",
"text": "Although Android's permission system is intended to allow users to make informed decisions about their privacy, it is often ineffective at conveying meaningful, useful information on how a user's privacy might be impacted by using an application. We present an alternate approach to providing users the knowledge needed to make informed decisions about the applications they install. First, we create a knowledge base of mappings between API calls and fine-grained privacy-related behaviors. We then use this knowledge base to produce, through static analysis, high-level behavior profiles of application behavior. We have analyzed almost 80,000 applications to date and have made the resulting behavior profiles available both through an Android application and online. Nearly 1500 users have used this application to date. Based on 2782 pieces of application-specific feedback, we analyze users' opinions about how applications affect their privacy and demonstrate that these profiles have had a substantial impact on their understanding of those applications. We also show the benefit of these profiles in understanding large-scale trends in how applications behave and the implications for user privacy.",
"title": ""
},
{
"docid": "cc8e52fdb69a9c9f3111287905f02bfc",
"text": "We present a new methodology for exploring and analyzing navigation patterns on a web site. The patterns that can be analyzed consist of sequences of URL categories traversed by users. In our approach, we first partition site users into clusters such that users with similar navigation paths through the site are placed into the same cluster. Then, for each cluster, we display these paths for users within that cluster. The clustering approach we employ is model-based (as opposed to distance-based) and partitions users according to the order in which they request web pages. In particular, we cluster users by learning a mixture of first-order Markov models using the Expectation-Maximization algorithm. The runtime of our algorithm scales linearly with the number of clusters and with the size of the data; and our implementation easily handles hundreds of thousands of user sessions in memory. In the paper, we describe the details of our method and a visualization tool based on it called WebCANVAS. We illustrate the use of our approach on user-traffic data from msnbc.com.",
"title": ""
},
{
"docid": "e5bf05ae6700078dda83eca8d2f65cd4",
"text": "We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT systems using a single model. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-theart results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hints at a universal interlingua representation in our models and also show some interesting examples when mixing languages.",
"title": ""
},
{
"docid": "0bb5bbdf7043eed23cafdd54df68c709",
"text": "We present two studies of online ephemerality and anonymity based on the popular discussion board /b/ at 4chan.org: a website with over 7 million users that plays an influential role in Internet culture. Although researchers and practitioners often assume that user identity and data permanence are central tools in the design of online communities, we explore how /b/ succeeds despite being almost entirely anonymous and extremely ephemeral. We begin by describing /b/ and performing a content analysis that suggests the community is dominated by playful exchanges of images and links. Our first study uses a large dataset of more than five million posts to quantify ephemerality in /b/. We find that most threads spend just five seconds on the first page and less than five minutes on the site before expiring. Our second study is an analysis of identity signals on 4chan, finding that over 90% of posts are made by fully anonymous users, with other identity signals adopted and discarded at will. We describe alternative mechanisms that /b/ participants use to establish status and frame their interactions.",
"title": ""
},
{
"docid": "548d87ac6f8a023d9f65af371ad9314c",
"text": "Mindfiilness meditation is an increasingly popular intervention for the treatment of physical illnesses and psychological difficulties. Using intervention strategies with mechanisms familiar to cognitive behavioral therapists, the principles and practice of mindfijlness meditation offer promise for promoting many of the most basic elements of positive psychology. It is proposed that mindfulness meditation promotes positive adjustment by strengthening metacognitive skills and by changing schemas related to emotion, health, and illness. Additionally, the benefits of yoga as a mindfulness practice are explored. Even though much empirical work is needed to determine the parameters of mindfulness meditation's benefits, and the mechanisms by which it may achieve these benefits, theory and data thus far clearly suggest the promise of mindfulness as a link between positive psychology and cognitive behavioral therapies.",
"title": ""
},
{
"docid": "99361418a043f546f5eaed54746d6abc",
"text": "Non-negative Matrix Factorization (NMF) and Probabilistic Latent Semantic Indexing (PLSI) have been successfully applied to document clustering recently. In this paper, we show that PLSI and NMF (with the I-divergence objective function) optimize the same objective function, although PLSI and NMF are different algorithms as verified by experiments. This provides a theoretical basis for a new hybrid method that runs PLSI and NMF alternatively, each jumping out of local minima of the other method successively, thus achieving a better final solution. Extensive experiments on five real-life datasets show relations between NMF and PLSI, and indicate the hybrid method leads to significant improvements over NMFonly or PLSI-only methods. We also show that at first order approximation, NMF is identical to χ-statistic.",
"title": ""
},
{
"docid": "4b8ae0880755d20fc7bb18e05fe2d18d",
"text": "Understanding how to build cognitive systems with commonsense is a difficult problem. Since one goal of qualitative reasoning is to explain human mental models of the continuous world, hopefully qualitative representations and reasoning have a role to play. But how much of a role? Standardized tests used in education provide a potentially useful way to measure both how much qualitative knowledge is used in commonsense science, and to assess progress in qualitative representation and reasoning. This paper analyzes a small corpus of science tests from US classrooms and shows that QR techniques are central in answering 13% of them, and play a role in at least an additional 16%. We found that today’s QR techniques suffice for standard QR questions, but integrating QR with broader knowledge about the world and automatically understanding the questions as expressed in language and pictures provide new research challenges.",
"title": ""
},
{
"docid": "73325aa0f4253294e7f116f7e0706766",
"text": "To protect SDN-enabled networks under large-scale, unexpected link failures, we propose ResilientFlow that deploys distributed modules called Control Channel Maintenance Module (CCMM) for every switch and controllers. The CCMMs makes switches able to maintain their own control channels, which are core and fundamental part of SDN. In this paper, we design, implement, and evaluate the ResilientFlow.",
"title": ""
},
{
"docid": "a0407424fce71b9e4119d1d9fefc5542",
"text": "The design and development of complex engineering products require the efforts and collaboration of hundreds of participants from diverse backgrounds resulting in complex relationships among both people and tasks. Many of the traditional project management tools (PERT, Gantt and CPM methods) do not address problems stemming from this complexity. While these tools allow the modeling of sequential and parallel processes, they fail to address interdependency (feedback and iteration), which is common in complex product development (PD) projects. To address this issue, a matrix-based tool called the Design Structure Matrix (DSM) has evolved. This method differs from traditional project-management tools because it focuses on representing information flows rather than work flows. The DSM method is an information exchange model that allows the representation of complex task (or team) relationships in order to determine a sensible sequence (or grouping) for the tasks (or teams) being modeled. This article will cover how the basic method works and how you can use the DSM to improve the planning, execution, and management of complex PD projects using different algorithms (i.e., partitioning, tearing, banding, clustering, simulation, and eigenvalue analysis). Introduction: matrices and projects Consider a system (or project) that is composed of two elements /sub-systems (or activities/phases): element \"A\" and element \"B\". A graph may be developed to represent this system pictorially. The graph is constructed by allowing a vertex/node on the graph to represent a system element and an edge joining two nodes to represent the relationship between two system elements. The directionality of influence from one element to another is captured by an arrow instead of a simple link. The resultant graph is called a directed graph or simply a digraph. There are three basic building blocks for describing the relationship amongst system elements: parallel (or concurrent), sequential (or dependent) and coupled (or interdependent) (fig. 1) Fig.1 Three Configurations that Characterize a System Relationship Parallel Sequential Coupled Graph Representation A B A",
"title": ""
},
{
"docid": "846aff14ba654f154b37ae03089bb19f",
"text": "This paper presents a procedure to model the drawbar pull and resistive torque of an unknown terrain as a function of normal load and slip using on-board rover instruments. Kapvik , which is a planetary micro-rover prototype with a rocker-bogie mobility system, is simulated in two dimensions. A suite of sensors is used to take relevant measurements; in addition to typical rover measurements, forces above the wheel hubs and rover forward velocity are sensed. An estimator determines the drawbar pull, resistive torque, normal load, and slip of the rover. The collected data are used to create a polynomial fit model that closely resembles the real terrain response.",
"title": ""
},
{
"docid": "70e6e2105a1ca32c47478f52162eeb1a",
"text": "In this paper, we discuss the control of a ball and beam system subject to an input constraint. Model predictive control (MPC) approaches are employed to derive a nonlinear control law satisfying the constraint. The control law is given by solving the optimization problem at each sample time, where the primal-dual interior point algorithm is implemented and used as the optimization solver. An experimental comparison of three control methods, two different MPCs and saturated LQR, has been presented for the control of the ball and beam system.",
"title": ""
},
{
"docid": "4a8c8c09fe94cddbc9cadefa014b1165",
"text": "A solution to trajectory-tracking control problem for a four-wheel-steering vehicle (4WS) is proposed using sliding-mode approach. The advantage of this controller over current control procedure is that it is applicable to a large class of vehicles with single or double steering and to a tracking velocity that is not necessarily constant. The sliding-mode approach make the solutions robust with respect to errors and disturbances, as demonstrated by the simulation results.",
"title": ""
},
{
"docid": "e4c2fcc09b86dc9509a8763e7293cfe9",
"text": "This paperinvestigatesthe useof particle (sub-word) -grams for languagemodelling. One linguistics-basedand two datadriven algorithmsare presentedand evaluatedin termsof perplexity for RussianandEnglish. Interpolatingword trigramand particle6-grammodelsgivesup to a 7.5%perplexity reduction over thebaselinewordtrigrammodelfor Russian.Latticerescor ing experimentsarealsoperformedon1997DARPA Hub4evaluationlatticeswheretheinterpolatedmodelgivesa 0.4%absolute reductionin worderrorrateoverthebaselinewordtrigrammodel.",
"title": ""
},
{
"docid": "275504a9fd95b8259ec18cdfcbd6caa0",
"text": "Context: Females are two to eight times more likely to sustain an ACL injury than males participating in the same sport. The primary mechanism reported for noncontact ACL injury involves landing from a jump, unanticipated change of direction, and/or deceleration activities. Objective: The purpose of this study was to determine if adolescent female athletes perform athletic activities with decreased hip and knee flexion angles, and decreased EMG activity of the gluteus medius relative to their male counterparts. Design: Cohort study from local club basketball teams. Setting: University Laboratory. Participants: Ten healthy adolescent basketball athletes (5 females, 5 males). Interventions: Each participant was instructed to jump over a barrier, land with each foot on a floor-mounted force plate, and cut in a specific direction. Participants made a side cut either to the right or left, or stepped forward into a straight run. Each subject performed fifteen (15) randomized jump, land, and unanticipated cutting maneuvers. Main outcome measures: The peak electromyography (EMG) and ground reaction force (GRF) [normalized with body weight] data were analyzed during the landing for the three cutting directions. Kinematic variables include joint angles for the ankle, knee and hip at landing and push off. Analysis: Independent samples t-tests examined differences between the genders for dependent variables. Results: No differences were noted for the left or right EMG amplitudes or muscle onsets. The joint angle in the left ankle (p = 0.019) during peak knee flexion of the left cut demonstrated the females performed tasks with greater dorsiflexion angles than males. However, during the peak GRF of the center cut in the right ankle (p = 0.012) males had greater dorsiflexion. The male participants sustained greater anterior forces in the left leg during the peak knee flexion angle (p = 0.022) and push off (p = 0.040) during the left cut. The male participants sustained lateral forces and female participants sustained medial forces (p = 0.010) during the center cut. The female participants sustained greater anterior forces in the right leg than the males (p = 0.041) during the peak knee flexion angles, and that females sustained anterior forces, while the male’s sustained posterior forces (p = 0.009) in the right leg during peak GRF. The male participants sustained greater medial forces during the peak knee flexion angles (p = 0.031) compared to the female participants. Clinical relevance: This study may advance our understanding of potential forces and muscle activation strategies about the ankle, knee, and hip during sport specific activities as our findings suggest women might sustain different forces during landing and cutting.",
"title": ""
},
{
"docid": "f10d79d1eb6d3ec994c1ec7ec3769437",
"text": "The security of embedded devices often relies on the secrecy of proprietary cryptographic algorithms. These algorithms and their weaknesses are frequently disclosed through reverse-engineering software, but it is commonly thought to be too expensive to reconstruct designs from a hardware implementation alone. This paper challenges that belief by presenting an approach to reverse-engineering a cipher from a silicon implementation. Using this mostly automated approach, we reveal a cipher from an RFID tag that is not known to have a software or micro-code implementation. We reconstruct the cipher from the widely used Mifare Classic RFID tag by using a combination of image analysis of circuits and protocol analysis. Our analysis reveals that the security of the tag is even below the level that its 48-bit key length suggests due to a number of design flaws. Weak random numbers and a weakness in the authentication protocol allow for pre-computed rainbow tables to be used to find any key in a matter of seconds. Our approach of deducing functionality from circuit images is mostly automated, hence it is also feasible for large chips. The assumption that algorithms can be kept secret should therefore to be avoided for any type of silicon chip. Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi. ([A cipher] must not depend on secrecy, and it must not matter if it falls into enemy hands.) August Kerckhoffs, La Cryptographie Militaire, January 1883 [13]",
"title": ""
},
{
"docid": "d83ecee8e5f59ee8e6a603c65f952c22",
"text": "PredPatt is a pattern-based framework for predicate-argument extraction. While it works across languages and provides a well-formed syntax-semantics interface for NLP tasks, a large-scale and reproducible evaluation has been lacking, which prevents comparisons between PredPatt and other related systems, and inhibits the updates of the patterns in PredPatt. In this work, we improve and evaluate PredPatt by introducing a large set of high-quality annotations converted from PropBank, which can also be used as a benchmark for other predicate-argument extraction systems. We compare PredPatt with other prominent systems and shows that PredPatt achieves the best precision and recall.",
"title": ""
},
{
"docid": "243391e804c06f8a53af906b31d4b99a",
"text": "As key decisions are often made based on information contained in a database, it is important for the database to be as complete and correct as possible. For this reason, many data cleaning tools have been developed to automatically resolve inconsistencies in databases. However, data cleaning tools provide only best-effort results and usually cannot eradicate all errors that may exist in a database. Even more importantly, existing data cleaning tools do not typically address the problem of determining what information is missing from a database.\n To overcome the limitations of existing data cleaning techniques, we present QOCO, a novel query-oriented system for cleaning data with oracles. Under this framework, incorrect (resp. missing) tuples are removed from (added to) the result of a query through edits that are applied to the underlying database, where the edits are derived by interacting with domain experts which we model as oracle crowds. We show that the problem of determining minimal interactions with oracle crowds to derive database edits for removing (adding) incorrect (missing) tuples to the result of a query is NP-hard in general and present heuristic algorithms that interact with oracle crowds. Finally, we implement our algorithms in our prototype system QOCO and show that it is effective and efficient through a comprehensive suite of experiments.",
"title": ""
}
] |
scidocsrr
|
fdb1afea677d77bd283056de4a007243
|
GPflow: A Gaussian Process Library using TensorFlow
|
[
{
"docid": "b26c2a76a1a64aa98ac5c380947dcf4d",
"text": "The GPML toolbox provides a wide range of functionality for G aussian process (GP) inference and prediction. GPs are specified by mean and covariance func tions; we offer a library of simple mean and covariance functions and mechanisms to compose mor e complex ones. Several likelihood functions are supported including Gaussian and heavytailed for regression as well as others suitable for classification. Finally, a range of inference m thods is provided, including exact and variational inference, Expectation Propagation, and Lapl ace’s method dealing with non-Gaussian likelihoods and FITC for dealing with large regression task s.",
"title": ""
},
{
"docid": "20deb56f6d004a8e33d1e1a4f579c1ba",
"text": "Hamiltonian dynamics can be used to produce distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behaviour of simple random-walk proposals. Though originating in physics, Hamiltonian dynamics can be applied to most problems with continuous state spaces by simply introducing fictitious “momentum” variables. A key to its usefulness is that Hamiltonian dynamics preserves volume, and its trajectories can thus be used to define complex mappings without the need to account for a hard-to-compute Jacobian factor — a property that can be exactly maintained even when the dynamics is approximated by discretizing time. In this review, I discuss theoretical and practical aspects of Hamiltonian Monte Carlo, and present some of its variations, including using windows of states for deciding on acceptance or rejection, computing trajectories using fast approximations, tempering during the course of a trajectory to handle isolated modes, and short-cut methods that prevent useless trajectories from taking much computation time.",
"title": ""
},
{
"docid": "187127dd1ab5f97b1158a77a25ddce91",
"text": "We introduce stochastic variational inference for Gaussian process models. This enables the application of Gaussian process (GP) models to data sets containing millions of data points. We show how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables which factorize the model in the necessary manner to perform variational inference. Our approach is readily extended to models with non-Gaussian likelihoods and latent variable models based around Gaussian processes. We demonstrate the approach on a simple toy problem and two real world data sets.",
"title": ""
}
] |
[
{
"docid": "0153774b49121d8735cc3d33df69fc00",
"text": "A common requirement of many empirical software engineering studies is the acquisition and curation of data from software repositories. During the last few years, GitHub has emerged as a popular project hosting, mirroring and collaboration platform. GitHub provides an extensive rest api, which enables researchers to retrieve both the commits to the projects' repositories and events generated through user actions on project resources. GHTorrent aims to create a scalable off line mirror of GitHub's event streams and persistent data, and offer it to the research community as a service. In this paper, we present the project's design and initial implementation and demonstrate how the provided datasets can be queried and processed.",
"title": ""
},
{
"docid": "9b2dd28151751477cc46f6c6d5ec475f",
"text": "Clinical and experimental data indicate that most acupuncture clinical results are mediated by the central nervous system, but the specific effects of acupuncture on the human brain remain unclear. Even less is known about its effects on the cerebellum. This fMRI study demonstrated that manual acupuncture at ST 36 (Stomach 36, Zusanli), a main acupoint on the leg, modulated neural activity at multiple levels of the cerebro-cerebellar and limbic systems. The pattern of hemodynamic response depended on the psychophysical response to needle manipulation. Acupuncture stimulation typically elicited a composite of sensations termed deqi that is related to clinical efficacy according to traditional Chinese medicine. The limbic and paralimbic structures of cortical and subcortical regions in the telencephalon, diencephalon, brainstem and cerebellum demonstrated a concerted attenuation of signal intensity when the subjects experienced deqi. When deqi was mixed with sharp pain, the hemodynamic response was mixed, showing a predominance of signal increases instead. Tactile stimulation as control also elicited a predominance of signal increase in a subset of these regions. The study provides preliminary evidence for an integrated response of the human cerebro-cerebellar and limbic systems to acupuncture stimulation at ST 36 that correlates with the psychophysical response.",
"title": ""
},
{
"docid": "520e184186888ca59feecf0dd3823f2d",
"text": "Wireless telemonitoring of physiological signals is an important topic in eHealth. In order to reduce on-chip energy consumption and extend sensor life, recorded signals are usually compressed before transmission. In this paper, we adopt compressed sensing (CS) as a low-power compression framework, and propose a fast block sparse Bayesian learning (BSBL) algorithm to reconstruct original signals. Experiments on real-world fetal ECG signals and epilepsy EEG signals showed that the proposed algorithm eywords: ow-power data compression ompressed sensing (CS) lock sparse Bayesian learning (BSBL) lectrocardiography (ECG) lectroencephalography (EEG) has good balance between speed and data reconstruction fidelity when compared to state-of-the-art CS algorithms. Further, we implemented the CS-based compression procedure and a low-power compression procedure based on a wavelet transform in field programmable gate array (FPGA), showing that the CS-based compression can largely save energy and other on-chip computing resources. © 2014 Elsevier Ltd. All rights reserved. ield programmable gate array (FPGA)",
"title": ""
},
{
"docid": "cde1b5f21bdc05aa5a86aa819688d63c",
"text": "This paper presents two fuzzy portfolio selection models where the objective is to minimize the downside risk constrained by a given expected return. We assume that the rates of returns on securities are approximated as LR-fuzzy numbers of the same shape, and that the expected return and risk are evaluated by interval-valued means. We establish the relationship between those mean-interval definitions for a given fuzzy portfolio by using suitable ordering relations. Finally, we formulate the portfolio selection problem as a linear program when the returns on the assets are of trapezoidal form. © 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4ebd98d8efa7bbd6b7cda9f39701ec15",
"text": "Solving statistical learning problems often involves nonconvex optimization. Despite the empirical success of nonconvex statistical optimization methods, their global dynamics, especially convergence to the desirable local minima, remain less well understood in theory. In this paper, we propose a new analytic paradigm based on diffusion processes to characterize the global dynamics of nonconvex statistical optimization. As a concrete example, we study stochastic gradient descent (SGD) for the tensor decomposition formulation of independent component analysis. In particular, we cast different phases of SGD into diffusion processes, i.e., solutions to stochastic differential equations. Initialized from an unstable equilibrium, the global dynamics of SGD transit over three consecutive phases: (i) an unstable Ornstein-Uhlenbeck process slowly departing from the initialization, (ii) the solution to an ordinary differential equation, which quickly evolves towards the desirable local minimum, and (iii) a stable Ornstein-Uhlenbeck process oscillating around the desirable local minimum. Our proof techniques are based upon Stroock and Varadhan’s weak convergence of Markov chains to diffusion processes, which are of independent interest.",
"title": ""
},
{
"docid": "a9975365f0bad734b77b67f63bdf7356",
"text": "Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other. In contrast, we propose using continuous vector representations of language. We show that these can be learned efficiently with a character-based neural language model, and used to improve inference about language varieties not seen during training. In experiments with 1303 Bible translations into 990 different languages, we empirically explore the capacity of multilingual language models, and also show that the language vectors capture genetic relationships between languages.",
"title": ""
},
{
"docid": "1bb149552a2506d7305641e7e4300d3a",
"text": "This paper presents the LineScout Technology, a mobile teleoperated robot for power line inspection and maintenance. Optimizing several geometric parameters achieved a compact design that was successfully tested over many line configurations and obstacle sequences. An overview of the technology is presented, including a description of the control strategy, followed by a section focusing on key aspects of the prototype thorough validation. Working on live lines, up to 735 kV and 1,000 A, means that the technology must be robust to electromagnetic interference. The third generation prototype, tested in laboratory and in field conditions, is now ready to undertake inspection pilot projects.",
"title": ""
},
{
"docid": "4eeef9a48f282bc6214c39d4c40303e7",
"text": "Failure management is a particular challenge problem in the automotive domain. Today's cars host a network of 30 to 80 electronic control units (ECUs), distributed over up to five interconnected in-car networks supporting hundreds to thousands of softwaredefined functions. This high degree of distribution of hard- and software components is a key contributor to the difficulty of failure management in vehicle. This paper addresses comprehensive failure management, starting from domain models for logical and deployment models of automotive software. These models capture interaction patterns as a critical part of both logical and deployment architectures, introducing failure detection and mitigation as \"wrapper\" services to \"unmanaged services\", i.e. services without failure management. We show how these models can be embedded into an interaction-centric development process, which captures failure management information across development phases. Finally, we exploit the failure management models to verify that a particular architecture meets its requirements under the stated failure hypothesis.",
"title": ""
},
{
"docid": "8a2d99e9b8156ed42a906ed8f5cc7772",
"text": "A continuous path is one of the most important requirements for solid freeform fabrication (SFF) based on welding. This paper proposes a method for torch path planning applicable to SFF based on welding with an emphasis on minimum human intervention. The suggested approach describes a method based on the subdivision of a two-dimensional (2-D) polygonal section into a set of monotone polygons to generate a continuous path for material deposition. A two-dimensional contour is subdivided into smaller polygons (subpolygons). The path for each individual subpolygon is generated. The final torch path is obtained by connecting individual paths for all the subpolygons and trimming along the points of intersection of the paths for individual subpolygons. The final path is a closed loop; therefore, any point can be selected as the starting point for material deposition. The proposed method can be used to develop the toolpath for CNC milling.",
"title": ""
},
{
"docid": "c0350ac9bd1c38252e04a3fd097ae6ee",
"text": "In contrast to the increasing popularity of REpresentational State Transfer (REST), systematic testing of RESTful Application Programming Interfaces (API) has not attracted much attention so far. This paper describes different aspects of automated testing of RESTful APIs. Later, we focus on functional and security tests, for which we apply a technique called model-based software development. Based on an abstract model of the RESTful API that comprises resources, states and transitions a software generator not only creates the source code of the RESTful API but also creates a large number of test cases that can be immediately used to test the implementation. This paper describes the process of developing a software generator for test cases using state-of-the-art tools and provides an example to show the feasibility of our approach.",
"title": ""
},
{
"docid": "00f7fb960e1cfc1a4382a55d1038135a",
"text": "Cyber-physical systems, used in domains such as avionics or medical devices, perform critical functions where a fault might have catastrophic consequences (mission failure, severe injuries, etc.). Their development is guided by rigorous practice standards that prescribe safety analysis methods in order to verify that failure have been correctly evaluated and/or mitigated. This laborintensive practice typically focuses system safety analysis on system engineering activities.\n As reliance on software for system operation grows, embedded software systems have become a major source of hazard contributors. Studies show that late discovery of errors in embedded software system have resulted in costly rework, making up as much as 50% of the total software system cost. Automation of the safety analysis process is key to extending safety analysis to the software system and to accommodate system evolution.\n In this paper we discuss three elements that are key to safety analysis automation in the context of fault tree analysis (FTA). First, generation of fault trees from annotated architecture models consistently reflects architecture changes in safety analysis results. Second, use of a taxonomy of failure effects ensures coverage of potential hazard contributors is achieved. Third, common cause failures are identified based on architecture information and reflected appropriately in probabilistic fault tree analysis. The approach utilizes the SAE Architecture Analysis & Design Language (AADL) standard and the recently published revised Error Model Annex V2 (EMV2) standard to represent annotated architecture models of systems and embedded software systems.\n The approach takes into account error sources specified with an EMV2 error propagation type taxonomy and occurrence probabilities as well as direct and indirect propagation paths between system components identified in the architecture model to generate a fault graph and apply transformations into a fault tree representation to support common mode analysis, cut set determination and probabilistic analysis.",
"title": ""
},
{
"docid": "c699ce2a06276f722bf91806378b11eb",
"text": "The success of deep neural networks (DNNs) is heavily dependent on the availability of labeled data. However, obtaining labeled data is a big challenge in many real-world problems. In such scenarios, a DNN model can leverage labeled and unlabeled data from a related domain, but it has to deal with the shift in data distributions between the source and the target domains. In this paper, we study the problem of classifying social media posts during a crisis event (e.g., Earthquake). For that, we use labeled and unlabeled data from past similar events (e.g., Flood) and unlabeled data for the current event. We propose a novel model that performs adversarial learning based domain adaptation to deal with distribution drifts and graph based semi-supervised learning to leverage unlabeled data within a single unified deep learning framework. Our experiments with two real-world crisis datasets collected from Twitter demonstrate significant improvements over several baselines.",
"title": ""
},
{
"docid": "8bf0f899173b997e41b936825a92c2aa",
"text": "This paper discusses vehicle traffic congestion which leads to air pollution, driver frustration, and costs billions of dollars annually in fuel consumption. Finding a proper solution to vehicle congestion is a considerable challenge due to the dynamic and unpredictable nature of the network topology of vehicular environments, especially in urban areas. Recent advances in sensing, communication and computing technologies enables us to gather real-time data about traffic condition of the roads and mitigate the traffic congestion via various ways such as Vehicle Traffic Routing Systems (VTRSs), electronic toll collection system (ETCS), and intelligent traffic light signals (TLSs). Regarding this issue, an innovative technology, called Intelligent Guardrails (IGs), is presented in this paper. IGs takes advantages of Internet of Things (IoT) and vehicular networks to provide a solution for vehicle traffic congestion in large cities. IGs senses the roads' traffic condition and uses this information to set the capacity of the roads dynamically.",
"title": ""
},
{
"docid": "1968573cf98307276bf0f10037aa3623",
"text": "In many imaging applications, the continuous phase information of the measured signal is wrapped to a single period of 2π, resulting in phase ambiguity. In this paper we consider the two-dimensional phase unwrapping problem and propose a Maximum a Posteriori (MAP) framework for estimating the true phase values based on the wrapped phase data. In particular, assuming a joint Gaussian prior on the original phase image, we show that the MAP formulation leads to a binary quadratic minimization problem. The latter can be efficiently solved by semidefinite relaxation (SDR). We compare the performances of our proposed method with the existing L1/L2-norm minimization approaches. The numerical results demonstrate that the SDR approach significantly outperforms the existing phase unwrapping methods.",
"title": ""
},
{
"docid": "c224e2fd513b6dafe3862b12a5bec9b9",
"text": "NF-kappaB (nuclear factor-kappaB) is a collective name for inducible dimeric transcription factors composed of members of the Rel family of DNA-binding proteins that recognize a common sequence motif. NF-kappaB is found in essentially all cell types and is involved in activation of an exceptionally large number of genes in response to infections, inflammation, and other stressful situations requiring rapid reprogramming of gene expression. NF-kappaB is normally sequestered in the cytoplasm of nonstimulated cells and consequently must be translocated into the nucleus to function. The subcellular location of NF-kappaB is controlled by a family of inhibitory proteins, IkappaBs, which bind NF-kappaB and mask its nuclear localization signal, thereby preventing nuclear uptake. Exposure of cells to a variety of extracellular stimuli leads to the rapid phosphorylation, ubiquitination, and ultimately proteolytic degradation of IkappaB, which frees NF-kappaB to translocate to the nucleus where it regulates gene transcription. NF-kappaB activation represents a paradigm for controlling the function of a regulatory protein via ubiquitination-dependent proteolysis, as an integral part of a phosphorylationbased signaling cascade. Recently, considerable progress has been made in understanding the details of the signaling pathways that regulate NF-kappaB activity, particularly those responding to the proinflammatory cytokines tumor necrosis factor-alpha and interleukin-1. The multisubunit IkappaB kinase (IKK) responsible for inducible IkappaB phosphorylation is the point of convergence for most NF-kappaB-activating stimuli. IKK contains two catalytic subunits, IKKalpha and IKKbeta, both of which are able to correctly phosphorylate IkappaB. Gene knockout studies have shed light on the very different physiological functions of IKKalpha and IKKbeta. After phosphorylation, the IKK phosphoacceptor sites on IkappaB serve as an essential part of a specific recognition site for E3RS(IkappaB/beta-TrCP), an SCF-type E3 ubiquitin ligase, thereby explaining how IKK controls IkappaB ubiquitination and degradation. A variety of other signaling events, including phosphorylation of NF-kappaB, hyperphosphorylation of IKK, induction of IkappaB synthesis, and the processing of NF-kappaB precursors, provide additional mechanisms that modulate the level and duration of NF-kappaB activity.",
"title": ""
},
{
"docid": "0ef2c10b511454cc4432217062e8f50d",
"text": "Non-volatile memory (NVM) is a new storage technology that combines the performance and byte addressability of DRAM with the persistence of traditional storage devices like flash (SSD). While these properties make NVM highly promising, it is not yet clear how to best integrate NVM into the storage layer of modern database systems. Two system designs have been proposed. The first is to use NVM exclusively, i.e., to store all data and index structures on it. However, because NVM has a higher latency than DRAM, this design can be less efficient than main-memory database systems. For this reason, the second approach uses a page-based DRAM cache in front of NVM. This approach, however, does not utilize the byte addressability of NVM and, as a result, accessing an uncached tuple on NVM requires retrieving an entire page.\n In this work, we evaluate these two approaches and compare them with in-memory databases as well as more traditional buffer managers that use main memory as a cache in front of SSDs. This allows us to determine how much performance gain can be expected from NVM. We also propose a lightweight storage manager that simultaneously supports DRAM, NVM, and flash. Our design utilizes the byte addressability of NVM and uses it as an additional caching layer that improves performance without losing the benefits from the even faster DRAM and the large capacities of SSDs.",
"title": ""
},
{
"docid": "41a722e35b2efa8286250db983e11e45",
"text": "Graphene is a relatively new material with unique properties that holds promise for electronic applications. Since 2004, when the first graphene samples were intentionally fabricated, the worldwide research activities on graphene have literally exploded. Apart from physicists, also device engineers became interested in the new material and soon the prospects of graphene in electronics have been considered. For the most part, the early discussions on the potential of graphene had a prevailing positive mood, mainly based on the high carrier mobilities observed in this material. This has repeatedly led to very optimistic assessments of the potential of graphene transistors and to an underestimation of their problems. In this paper, we discuss the properties of graphene relevant for electronic applications, examine its advantages and problems, and summarize the state of the art of graphene transistors.",
"title": ""
},
{
"docid": "f95e1f2604c03ad6c275125a980a3f1d",
"text": "A compact dual band three way power divider based on the Bagley Polygon has been implemented using composite right/left handed (CRLH) transmission lines consisting of microstrip lines and lumped elements for the GSM frequencies of 860MHz and 1.92GHz. Also, a dual band power divider consisting of conventional quarter wavelength (λ/4) transmission lines with shunt connections of open and short stubs has been implemented for comparison. An advantage of using the Bagley Polygon for three way power divider is that it allows an arbitrary phase selection at the third port by using single or dual band, CRLH or conventional transmission lines. The CRLH based power divider shows an area of 7.95 cm2 and a bandwidth of approximately 8% at both bands while the comparison structure shows an area of 51.98 cm2, a bandwidth of 5.8% at 860MHz, and 2.6% at 1.92GHz. Less than 5.5dB insertion loss has been achieved in both cases. Also, full wave structure simulation has been performed and the simulation results agree well with the measurement results.",
"title": ""
},
{
"docid": "0b4cc0182fba2ca580e44beee5c35f8f",
"text": "A good user experience depends on predictable performance within the data-center network.",
"title": ""
},
{
"docid": "a81e4b95dfaa7887f66066343506d35f",
"text": "The purpose of making a “biobetter” biologic is to improve on the salient characteristics of a known biologic for which there is, minimally, clinical proof of concept or, maximally, marketed product data. There already are several examples in which second-generation or biobetter biologics have been generated by improving the pharmacokinetic properties of an innovative drug, including Neulasta® [a PEGylated, longer-half-life version of Neupogen® (filgrastim)] and Aranesp® [a longer-half-life version of Epogen® (epoetin-α)]. This review describes the use of protein fusion technologies such as Fc fusion proteins, fusion to human serum albumin, fusion to carboxy-terminal peptide, and other polypeptide fusion approaches to make biobetter drugs with more desirable pharmacokinetic profiles.",
"title": ""
}
] |
scidocsrr
|
41e1298b83ed463e7b2018a238b05e4d
|
INFERRING REWARD FUNCTIONS
|
[
{
"docid": "5182d5c7bff7ebc4b2a3491e115bd602",
"text": "Planning problems are among the most important and well-studied problems in artificial intelligence. They are most typically solved by tree search algorithms that simulate ahead into the future, evaluate future states, and back-up those evaluations to the root of a search tree. Among these algorithms, Monte-Carlo tree search (MCTS) is one of the most general, powerful and widely used. A typical implementation of MCTS uses cleverly designed rules, optimised to the particular characteristics of the domain. These rules control where the simulation traverses, what to evaluate in the states that are reached, and how to back-up those evaluations. In this paper we instead learn where, what and how to search. Our architecture, which we call an MCTSnet, incorporates simulation-based search inside a neural network, by expanding, evaluating and backing-up a vector embedding. The parameters of the network are trained end-to-end using gradient-based optimisation. When applied to small searches in the well-known planning problem Sokoban, the learned search algorithm significantly outperformed MCTS baselines.",
"title": ""
}
] |
[
{
"docid": "45e9a2d70f14a918ff8bed0116042368",
"text": "INTRODUCTION\nPatients with mitral regurgitation are increasingly treated by percutaneous implantation of a MitraClip device (Abbott Park, IL, USA). We investigate the feasibility and safety of the transmitral catheter route for catheter ablation of ventricular tachycardia (VT) in these patients.\n\n\nMETHODS\nThe mitral valve with the MitraClip in situ was crossed under transesophageal 3-dimensional echocardiographic and fluoroscopic guidance using a steerable sheath for ablation of the left ventricle.\n\n\nRESULTS\nFive patients (all males, median age 74.0 ± 16.0 years) who had previously a MitraClip implanted were referred for catheter ablation of VT. The left ventricular ejection fraction was 29.0% ± 24.0%. One patient had both an atrial septal defect and a left atrial appendage occluder device in addition to a MitraClip. The duration between MitraClip implantation and ablation was 1019.0 ± 783.0 days. After transseptal puncture, ablation catheter was successfully steered through the mitral valve with the use of fluoroscopy. A complete high-density map of the substrate in sinus rhythm could be obtained in all patients using multipolar mapping catheters. In 1 patient, mapping was carried out using a mini-basket catheter. Procedural endpoints, noninducibility of all VTs, and abolition of all late potentials were achieved in all patients. Procedure time was 255.0 ± 52.5 minute, fluoroscopy time was 23.0 ± 7.3, and the radiation dose was 61.0 ± 37.5 Gycm2 . No mitral insufficiency or worsening of regurgitation was documented after the procedure.\n\n\nCONCLUSIONS\nThis is the first report demonstrating the feasibility and safety of VT ablation in patients with a MitraClip device using the anterograde transmitral catheter route.",
"title": ""
},
{
"docid": "3dcb93232121be1ff8a2d96ecb25bbdd",
"text": "We describe the approach that won the preliminary phase of the German traffic sign recognition benchmark with a better-than-human recognition rate of 98.98%.We obtain an even better recognition rate of 99.15% by further training the nets. Our fast, fully parameterizable GPU implementation of a Convolutional Neural Network does not require careful design of pre-wired feature extractors, which are rather learned in a supervised way. A CNN/MLP committee further boosts recognition performance.",
"title": ""
},
{
"docid": "cdb54f1f475daf78dd584c83980f6227",
"text": "In this work we introduce Deforming Autoencoders, a generative model for images that disentangles shape from appearance in an unsupervised manner. As in the deformable template paradigm, shape is represented as a deformation between a canonical coordinate system (‘template’) and an observed image, while appearance is modeled in deformation-invariant, template coordinates. We introduce novel techniques that allow this approach to be deployed in the setting of autoencoders and show that this method can be used for unsupervised group-wise image alignment. We show experiments with expression morphing in humans, hands, and digits, face manipulation, such as shape and appearance interpolation, as well as unsupervised landmark localization. We also achieve a more powerful form of unsupervised disentangling in template coordinates, that successfully decomposes face images into shading and albedo, allowing us to further manipulate face images.",
"title": ""
},
{
"docid": "ee3b9382afc9455e53dd41d3725eb74a",
"text": "Deep convolutional neural networks have liberated its extraordinary power on various tasks. However, it is still very challenging to deploy stateof-the-art models into real-world applications due to their high computational complexity. How can we design a compact and effective network without massive experiments and expert knowledge? In this paper, we propose a simple and effective framework to learn and prune deep models in an end-to-end manner. In our framework, a new type of parameter – scaling factor is first introduced to scale the outputs of specific structures, such as neurons, groups or residual blocks. Then we add sparsity regularizations on these factors, and solve this optimization problem by a modified stochastic Accelerated Proximal Gradient (APG) method. By forcing some of the factors to zero, we can safely remove the corresponding structures, thus prune the unimportant parts of a CNN. Comparing with other structure selection methods that may need thousands of trials or iterative fine-tuning, our method is trained fully end-to-end in one training pass without bells and whistles. We evaluate our method, Sparse Structure Selection with several state-of-the-art CNNs, and demonstrate very promising results with adaptive depth and width selection. Code is available at: https://github.com/huangzehao/ sparse-structure-selection.",
"title": ""
},
{
"docid": "59655f76a875e189913029102ed8f77c",
"text": "Metaphorical expressions are pervasive in natural language and pose a substantial challenge for computational semantics. The inherent compositionality of metaphor makes it an important test case for compositional distributional semantic models (CDSMs). This paper is the first to investigate whether metaphorical composition warrants a distinct treatment in the CDSM framework. We propose a method to learn metaphors as linear transformations in a vector space and find that, across a variety of semantic domains, explicitly modeling metaphor improves the resulting semantic representations. We then use these representations in a metaphor identification task, achieving a high performance of 0.82 in terms of F-score.",
"title": ""
},
{
"docid": "32acba3e072e0113759278c57ee2aee2",
"text": "Software product lines (SPL) relying on UML technology have been a breakthrough in software reuse in the IT domain. In the industrial automation domain, SPL are not yet established in industrial practice. One reason for this is that conventional function block programming techniques do not adequately support SPL architecture definition and product configuration, while UML tools are not industrially accepted for control software development. In this paper, the use of object oriented (OO) extensions of IEC 61131–3 are used to bridge this gap. The SPL architecture and product specifications are expressed as UML class diagrams, which serve as straightforward specifications for configuring the IEC 61131–3 control application with OO extensions. A product configurator tool has been developed using PLCopen XML technology to support the generation of an executable IEC 61131–3 application according to chosen product options. The approach is demonstrated using a mobile elevating working platform as a case study.",
"title": ""
},
{
"docid": "fb809c5e2a15a49a449a818a1b0d59a5",
"text": "Neural responses are modulated by brain state, which varies with arousal, attention, and behavior. In mice, running and whisking desynchronize the cortex and enhance sensory responses, but the quiescent periods between bouts of exploratory behaviors have not been well studied. We found that these periods of \"quiet wakefulness\" were characterized by state fluctuations on a timescale of 1-2 s. Small fluctuations in pupil diameter tracked these state transitions in multiple cortical areas. During dilation, the intracellular membrane potential was desynchronized, sensory responses were enhanced, and population activity was less correlated. In contrast, constriction was characterized by increased low-frequency oscillations and higher ensemble correlations. Specific subtypes of cortical interneurons were differentially activated during dilation and constriction, consistent with their participation in the observed state changes. Pupillometry has been used to index attention and mental effort in humans, but the intracellular dynamics and differences in population activity underlying this phenomenon were previously unknown.",
"title": ""
},
{
"docid": "5e8e15a57395af3bebb52feb5f208309",
"text": "Recent blockchain-technology related innovations enable the governance of collaborating decentralized autonomous organizations (DAO) to engage in agile business-network collaborations that are based on the novel concept of smart contracting. DAOs utilize service-oriented cloud computing in a loosely coupled collaboration lifecycle with the main steps of setup, enactment, possible rollbacks and finally, an orderly termination. This lifecycle supports the selection of services provided and used by DAOs, smart contract negotiations, and behavior monitoring during enactment with the potential for breach management. Based on a sound understanding of the collaboration lifecycle in a Governance- as-a-Service (GaaS)-platform, a new type of conflict management must safeguard business-semantics induced consistency rules. This conflict management involves breach detection with recovery aspects. To fill the detected gap, we employ a formal design-notation that comprises the definition of structural and behavioral properties for exploring conflict-related exception- and compensation management during a decentralized collaboration. With the formal approach, we generate a highly dependable DAO-GaaS conflict model that does not collapse under left-behind clutter such as orphaned processes and exponentially growing database entries that require an unacceptable periodic GaaS reset.",
"title": ""
},
{
"docid": "bb29a8e942c69cdb6634faa563cddb3a",
"text": "Convolutional neural network (CNN) finds applications in a variety of computer vision applications ranging from object recognition and detection to scene understanding owing to its exceptional accuracy. There exist different algorithms for CNNs computation. In this paper, we explore conventional convolution algorithm with a faster algorithm using Winograd's minimal filtering theory for efficient FPGA implementation. Distinct from the conventional convolution algorithm, Winograd algorithm uses less computing resources but puts more pressure on the memory bandwidth. We first propose a fusion architecture that can fuse multiple layers naturally in CNNs, reusing the intermediate data. Based on this fusion architecture, we explore heterogeneous algorithms to maximize the throughput of a CNN. We design an optimal algorithm to determine the fusion and algorithm strategy for each layer. We also develop an automated toolchain to ease the mapping from Caffe model to FPGA bitstream using Vivado HLS. Experiments using widely used VGG and AlexNet demonstrate that our design achieves up to 1.99X performance speedup compared to the prior fusion-based FPGA accelerator for CNNs.",
"title": ""
},
{
"docid": "ea88174db3648ad2348b7338f0b58871",
"text": "Classification Association Rule Mining (CARM) systems operate by applying an Association Rule Mining (ARM) method to obtain classification rules from a training set of previously-classified data. The rules thus generated will be influenced by the choice of ARM parameters employed by the algorithm (typically support and confidence threshold values). In this paper we examine the effect that this choice has on the predictive accuracy of CARM methods. We show that the accuracy can almost always be improved by a suitable choice of parameters, and describe a hill-climbing method for finding the best parameter settings. We also demonstrate that the proposed hill-climbing method is most effective when coupled with a fast CARM algorithm such as the TFPC algorithm which is also described.",
"title": ""
},
{
"docid": "7fed1248efb156c8b2585147e2791ed7",
"text": "In [1], we proposed a graph-based formulation that links and clusters person hypotheses over time by solving a minimum cost subgraph multicut problem. In this paper, we modify and extend [1] in three ways: 1) We introduce a novel local pairwise feature based on local appearance matching that is robust to partial occlusion and camera motion. 2) We perform extensive experiments to compare different pairwise potentials and to analyze the robustness of the tracking formulation. 3) We consider a plain multicut problem and remove outlying clusters from its solution. This allows us to employ an efficient primal feasible optimization algorithm that is not applicable to the subgraph multicut problem of [1]. Unlike the branch-and-cut algorithm used there, this efficient algorithm used here is applicable to long videos and many detections. Together with the novel feature, it eliminates the need for the intermediate tracklet representation of [1]. We demonstrate the effectiveness of our overall approach on the MOT16 benchmark [2], achieving state-of-art performance.",
"title": ""
},
{
"docid": "9932e16d2202a024223173613e19314c",
"text": "Systems for On-Line Analytical Processing (OLAP) consider ably ease the process of analyzing business data and have become widely used in industry. OLAP syste ms primarily employ multidimensional data models to structure their data. However, current multi dimensional data models fall short in their ability to model the complex data found in some real-world ap plication domains. The paper presents nine requirements to multidimensional data models, each of which is exemplified by a real-world, clinical case study. A survey of the existing models reveals that the requirements not currently met include support for many-to-many relationships between facts and d imensions, built-in support for handling change and time, and support for uncertainty as well as diffe rent levels of granularity in the data. The paper defines an extended multidimensional data model, whic h addresses all nine requirements. Along with the model, we present an associated algebra, and outlin e how to implement the model using relational databases.",
"title": ""
},
{
"docid": "078cdfda16742c6a2cad8867ddaf8419",
"text": "With the development of mobile Internet, various mobile applications have become increasingly popular. Many people are being benefited from the mobile healthcare services. Compared with the traditional healthcare services, patients’ medical behavior trajectories can be recorded by mobile healthcare services meticulously. They monitor the entire healthcare services process and help to improve the quality and standardization of healthcare services. By tracking and analyzing the patients’ medical records, they provide real-time protection for the patients’ healthcare activities. Therefore, medical fraud can be avoided and the loss of public health funds can be reduced. Although mobile healthcare services can provide a large amount of timely data, an effective real-time online algorithm is needed due to the timeliness of detecting the medical insurance fraud claims. However, because of the complex granularity of medical data, existing fraud detection approaches tend to be less effective in terms of monitoring the healthcare services process. In this paper, we propose an approach to deal with these problems. By means of the proposed SSIsomap activity clustering method, SimLOF outlier detection method, and the Dempster–Shafer theory-based evidence aggregation method, our approach is able to detect unusual categories and frequencies of behaviors simultaneously. Our approach is applied to a real-world data set containing more than 40 million medical insurance claim activities from over 40 000 users. Compared with two state-of-the-art approaches, the extensive experimental results show that our approach is significantly more effective and efficient. Our approach agent which provides decision support for the approval sender during the medical insurance claim approval process is undergoing trial in mobile healthcare services.",
"title": ""
},
{
"docid": "8c381b81b193032633e2fa836f0d7e23",
"text": "This study presents a modified flying capacitor three-level buck dc-dc converter with improved dynamic response. First, the limitations in the transient response improvement of the conventional and three-level buck converters are discussed. Then, the three-level buck converter is modified in a way that it would benefit from a faster dynamic during sudden changes in the load. Finally, a controller is proposed that detects load transients and responds appropriately. In order to verify the effectiveness of the modified topology and the proposed transient controller, a simulation model and a hardware prototype are developed. Analytical, simulation, and experimental results show a significant dynamic response improvement.",
"title": ""
},
{
"docid": "6abeb0710b4caf88ecc4feb7e2433ab1",
"text": "The recently introduced proportional-resonant (PR) controllers and filters, and their suitability for current/voltage control of grid-connected converters, are described. Using the PR controllers, the converter reference tracking performance can be enhanced and previously known shortcomings associated with conventional PI controllers can be alleviated. These shortcomings include steady-state errors in single-phase systems and the need for synchronous d–q transformation in three-phase systems. Based on similar control theory, PR filters can also be used for generating the harmonic command reference precisely in an active power filter, especially for single-phase systems, where d–q transformation theory is not directly applicable. Another advantage associated with the PR controllers and filters is the possibility of implementing selective harmonic compensation without requiring excessive computational resources. Given these advantages and the belief that PR control will find wide-ranging applications in grid-interfaced converters, PR control theory is revised in detail with a number of practical cases that have been implemented previously, described clearly to give a comprehensive reference on PR control and filtering.",
"title": ""
},
{
"docid": "58cc14528c7efe23628bbe7411b44ce8",
"text": "Many domains in the field of Inductive Logic Programming (ILP) involve highly unbalanced data. A common way to measure performance in these domains is to use precision and recall instead of simply using accuracy. The goal of our research is to find new approaches within ILP particularly suited for large, highly-skewed domains. We propose Gleaner, a randomized search method that collects good clauses from a broad spectrum of points along the recall dimension in recall-precision curves and employs an “at least L of these K clauses” thresholding method to combine sets of selected clauses. Our research focuses on Multi-Slot Information Extraction (IE), a task that typically involves many more negative examples than positive examples. We formulate this problem into a relational domain, using two large testbeds involving the extraction of important relations from the abstracts of biomedical journal articles. We compare Gleaner to ensembles of standard theories learned by Aleph, finding that Gleaner produces comparable testset results in a fraction of the training time.",
"title": ""
},
{
"docid": "e96dfb8aca4aa06b759b607ae1ffd005",
"text": "This paper describes scalable convex optimization methods for phase retrieval. The main characteristics of these methods are the cheap per-iteration complexity and the low-memory footprint. With a variant of the original PhaseLift formulation, we first illustrate how to leverage the scalable Frank-Wolfe (FW) method (also known as the conditional gradient algorithm), which requires a tuning parameter. We demonstrate that we can estimate the tuning parameter of the FW algorithm directly from the measurements, with rigorous theoretical guarantees. We then illustrate numerically that recent advances in universal primal-dual convex optimization methods offer significant scalability improvements over the FW method, by recovering full HD resolution color images from their quadratic measurements.",
"title": ""
},
{
"docid": "7e58396148d8e8c8ca7d3439c6b5c872",
"text": "The traditional inductor-based buck converter has been the dominant design for step-down switched-mode voltage regulators for decades. Switched-capacitor (SC) DC-DC converters, on the other hand, have traditionally been used in low- power (<;10mW) and low-conversion-ratio (<;4:1) applications where neither regulation nor efficiency is critical. However, a number of SC converter topologies are very effective in their utilization of switches and passive elements, especially in relation to the ever-popular buck converters [1,2,5]. This work encompasses the complete design, fabrication, and test of a CMOS-based switched-capacitor DC-DC converter, addressing the ubiquitous 12 to 1.5V board-mounted point-of-load application. In particular, the circuit developed in this work attains higher efficiency (92% peak, and >;80% over a load range of 5mA to 1A) than surveyed competitive buck converters, while requiring less board area and less costly passive components. The topology and controller enable a wide input voltage (V!N) range of 7.5 to 13.5V with an output voltage (Vοuτ) of 1.5V Control techniques based on feedback and feedforward provide tight regulation (30mVpp) under worst-case load-step (1A) conditions. This work shows that SC converters can outperform buck converters, and thus the scope of SC converter applications can and should be expanded.",
"title": ""
},
{
"docid": "9c262b845fff31abd1cbc2932957030d",
"text": "Dixon's method for computing multivariate resultants by simultaneously eliminating many variables is reviewed. The method is found to be quite restrictive because often the Dixon matrix is singular, and the Dixon resultant vanished identically yielding no information about solutions for many algebraic and geometry problems. We extend Dixon's method for the case when the Dixon matrix is singular, but satisfies a condition. An efficient algorithm is developed based on the proposed extension for extracting conditions for the existence of affine solutions of a finite set of polynomials. Using this algorithm, numerous geometric and algebraic identities are derived for examples which appear intractable with other techniques of triangulation such as the successive resultant method, the Gro¨bner basis method, Macaulay resultants and Characteristic set method. Experimental results suggest that the resultant of a set of polynomials which are symmetric in the variables is relatively easier to compute using the extended Dixon's method.",
"title": ""
},
{
"docid": "3f8e6ebe83ba2d4bf3a1b4ab5044b6e4",
"text": "-This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator. Some comments will be made on methods of alleviating these problems within the \"classic' approach of leaving the operator with responsibility for abnormal conditions, and on the potential for continued use of the human operator for on-line decision-making within human-computer collaboration. Irony: combination of circumstances, the result of which is the direct opposite of what might be expected. Paradox: seemingly absurd though perhaps really well-founded",
"title": ""
}
] |
scidocsrr
|
3bef0144a89957b2d5e4baf82a73758a
|
A Patch Antenna With a Varactor-Loaded Slot for Reconfigurable Dual-Band Operation
|
[
{
"docid": "b29eec00ba053979967a61f595f22dfa",
"text": "A novel method is presented for electrically tuning the frequency of a planar inverted-F antenna (PIFA). A tuning circuit, comprising an RF switch and discrete passive components, has been completely integrated into the antenna element, which is thus free of dc wires. The proposed tuning method has been demonstrated with a dual-band PIFA capable of operating in four frequency bands. The antenna covers the GSM850, GSM900, GSM1800, PCS1900 and UMTS frequency ranges with over 40% total efficiency. The impact of the tuning circuit on the antenna's efficiency and radiation pattern have been experimentally studied through comparison with the performance of a reference antenna not incorporating the tuning circuit. The proposed frequency tuning concept can be extended to more complex PIFA structures as well as other types of antennas to give enhanced electrical performance.",
"title": ""
},
{
"docid": "c05bf2dedcb7837f877c7a3e257f4222",
"text": "In this letter, we propose a tunable patch antenna made of a slotted rectangular patch loaded by a number of posts close to the patch edge. The posts are short circuited to the ground plane via a set of PIN diode switches. Simulations and measurements verify the possibility of tuning the antenna in subbands from 620 to 1150 MHz. Good matching has been achieved over most of the bands. Other performed designs show that more than one octave can be achieved using the proposed structure.",
"title": ""
}
] |
[
{
"docid": "3f06fc0b50a1de5efd7682b4ae9f5a46",
"text": "We present ShadowDraw, a system for guiding the freeform drawing of objects. As the user draws, ShadowDraw dynamically updates a shadow image underlying the user's strokes. The shadows are suggestive of object contours that guide the user as they continue drawing. This paradigm is similar to tracing, with two major differences. First, we do not provide a single image from which the user can trace; rather ShadowDraw automatically blends relevant images from a large database to construct the shadows. Second, the system dynamically adapts to the user's drawings in real-time and produces suggestions accordingly. ShadowDraw works by efficiently matching local edge patches between the query, constructed from the current drawing, and a database of images. A hashing technique enforces both local and global similarity and provides sufficient speed for interactive feedback. Shadows are created by aggregating the edge maps from the best database matches, spatially weighted by their match scores. We test our approach with human subjects and show comparisons between the drawings that were produced with and without the system. The results show that our system produces more realistically proportioned line drawings.",
"title": ""
},
{
"docid": "4933a947f4b0b9a0ca506d50f2010eaf",
"text": "For integers <i>k</i>≥1 and <i>n</i>≥2<i>k</i>+1, the <em>Kneser graph</em> <i>K</i>(<i>n</i>,<i>k</i>) is the graph whose vertices are the <i>k</i>-element subsets of {1,…,<i>n</i>} and whose edges connect pairs of subsets that are disjoint. The Kneser graphs of the form <i>K</i>(2<i>k</i>+1,<i>k</i>) are also known as the <em>odd graphs</em>. We settle an old problem due to Meredith, Lloyd, and Biggs from the 1970s, proving that for every <i>k</i>≥3, the odd graph <i>K</i>(2<i>k</i>+1,<i>k</i>) has a Hamilton cycle. This and a known conditional result due to Johnson imply that all Kneser graphs of the form <i>K</i>(2<i>k</i>+2<sup><i>a</i></sup>,<i>k</i>) with <i>k</i>≥3 and <i>a</i>≥0 have a Hamilton cycle. We also prove that <i>K</i>(2<i>k</i>+1,<i>k</i>) has at least 2<sup>2<sup><i>k</i>−6</sup></sup> distinct Hamilton cycles for <i>k</i>≥6. Our proofs are based on a reduction of the Hamiltonicity problem in the odd graph to the problem of finding a spanning tree in a suitably defined hypergraph on Dyck words.",
"title": ""
},
{
"docid": "6f768934f02c0e559801a7b98d0fbbd7",
"text": "Voice-activated intelligent assistants, such as Siri, Google Now, and Cortana, are prevalent on mobile devices. However, it is challenging to evaluate them due to the varied and evolving number of tasks supported, e.g., voice command, web search, and chat. Since each task may have its own procedure and a unique form of correct answers, it is expensive to evaluate each task individually. This paper is the first attempt to solve this challenge. We develop consistent and automatic approaches that can evaluate different tasks in voice-activated intelligent assistants. We use implicit feedback from users to predict whether users are satisfied with the intelligent assistant as well as its components, i.e., speech recognition and intent classification. Using this approach, we can potentially evaluate and compare different tasks within and across intelligent assistants ac-cording to the predicted user satisfaction rates. Our approach is characterized by an automatic scheme of categorizing user-system interaction into task-independent dialog actions, e.g., the user is commanding, selecting, or confirming an action. We use the action sequence in a session to predict user satisfaction and the quality of speech recognition and intent classification. We also incorporate other features to further improve our approach, including features derived from previous work on web search satisfaction prediction, and those utilizing acoustic characteristics of voice requests. We evaluate our approach using data collected from a user study. Results show our approach can accurately identify satisfactory and unsatisfactory sessions.",
"title": ""
},
{
"docid": "35a6a9b41273d6064d4daf5f39f621af",
"text": "A systematic approach to develop a literature review is attractive because it aims to achieve a repeatable, unbiased and evidence-based outcome. However the existing form of systematic review such as Systematic Literature Review (SLR) and Systematic Mapping Study (SMS) are known to be an effort, time, and intellectual intensive endeavour. To address these issues, this paper proposes a model-based approach to Systematic Review (SR) production. The approach uses a domain-specific language expressed as a meta-model to represent research literature, a meta-model to specify SR constructs in a uniform manner, and an associated development process all of which can benefit from computer-based support. The meta-models and process are validated using real-life case study. We claim that the use of meta-modeling and model synthesis lead to a reduction in time, effort and the current dependence on human expertise.",
"title": ""
},
{
"docid": "3ba3ca22bb8d1e6c5c8416422d6a96ba",
"text": "Convolutional neural networks have recently been used to obtain record-breaking results in many vision benchmarks. In addition, the intermediate layer activations of a trained network when exposed to new data sources have been shown to perform very well as generic image features, even when there are substantial differences between the original training data of the network and the new domain. In this paper, we focus on scene recognition and show that convolutional networks trained on mostly object recognition data can successfully be used for feature extraction in this task as well. We train a total of four networks with different training data and architectures, and show that the proposed method combining multiple scales and multiple features obtains state-of-the-art performance on four standard scene datasets.",
"title": ""
},
{
"docid": "0f853c6ccf6ce4cf025050135662f725",
"text": "This paper describes a technique of applying Genetic Algorithm (GA) to network Intrusion Detection Systems (IDSs). A brief overview of the Intrusion Detection System, genetic algorithm, and related detection techniques is presented. Parameters and evolution process for GA are discussed in detail. Unlike other implementations of the same problem, this implementation considers both temporal and spatial information of network connections in encoding the network connection information into rules in IDS. This is helpful for identification of complex anomalous behaviors. This work is focused on the TCP/IP network protocols.",
"title": ""
},
{
"docid": "c96dbf6084741f8b529e8a1de19cf109",
"text": "Metamorphic testing is an advanced technique to test programs without a true test oracle such as machine learning applications. Because these programs have no general oracle to identify their correctness, traditional testing techniques such as unit testing may not be helpful for developers to detect potential bugs. This paper presents a novel system, Kabu, which can dynamically infer properties of methods' states in programs that describe the characteristics of a method before and after transforming its input. These Metamorphic Properties (MPs) are pivotal to detecting potential bugs in programs without test oracles, but most previous work relies solely on human effort to identify them and only considers MPs between input parameters and output result (return value) of a program or method. This paper also proposes a testing concept, Metamorphic Differential Testing (MDT). By detecting different sets of MPs between different versions for the same method, Kabu reports potential bugs for human review. We have performed a preliminary evaluation of Kabu by comparing the MPs detected by humans with the MPs detected by Kabu. Our preliminary results are promising: Kabu can find more MPs than human developers, and MDT is effective at detecting function changes in methods.",
"title": ""
},
{
"docid": "d4bd495dd8fb2644bda772d46077e5a4",
"text": "BACKGROUND: Visceral adipose tissue is associated with increased risk for cardiovascular disease risk factors and morbidity from cardiovascular diseases. Waist measurement and waist-to-height ratio (WHtR) have been used as proxy measures of visceral adipose tissue, mainly in adults.OBJECTIVE: To validate body mass index (BMI), waist circumference and WHtR as predictors for the presence of cardiovascular disease risk factors in children of Greek-Cypriot origin.SUBJECTS AND METHODS: A total of 1037 boys and 950 girls with mean age 11.4±0.4 y were evaluated. Dependent variables for the study were total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholestrol (HDL-C), low-density lipoprotein cholesterol (LDL-C), and systolic (SBP) and diastolic (DBP) blood pressure.RESULTS: When children were divided into two groups according to the 75th percentile for BMI, waist circumference and WHtR, all dependent variables had higher mean values in the highest percentile groups in WHtR groups and almost all variables in BMI and waist circumference groups. Adjusted odds ratios for predicting pathological values of cardiovascular disease risk factors were slightly higher for the highest WHtR group for predicting lipid and lipoprotein pathological values and for the highest BMI groups in predicting high blood pressure measurement. Using stepwise multiple regression analysis to explain the variance of the dependent variables, waist circumference was the most significant predictor for all variables both for boys and girls, whereas BMI had the lowest predictive value for the detection of cardiovascular disease risk factors.CONCLUSION: Waist circumference and WHtR are better predictors of cardiovascular disease risk factors in children than BMI. Further studies are necessary to determine the cutoff points for these indices for an accurate prediction of risk factors.",
"title": ""
},
{
"docid": "0c850cee404c406421de03cfd950c294",
"text": "Linguistically diverse datasets are critical for training and evaluating robust machine learning systems, but data collection is a costly process that often requires experts. Crowdsourcing the process of paraphrase generation is an effective means of expanding natural language datasets, but there has been limited analysis of the trade-offs that arise when designing tasks. In this paper, we present the first systematic study of the key factors in crowdsourcing paraphrase collection. We consider variations in instructions, incentives, data domains, and workflows. We manually analyzed paraphrases for correctness, grammaticality, and linguistic diversity. Our observations provide new insight into the trade-offs between accuracy and diversity in crowd responses that arise as a result of task design, providing guidance for future paraphrase generation procedures.",
"title": ""
},
{
"docid": "4984f9e1995cd69aac609374778d45c0",
"text": "We discuss the video recommendation system in use at YouTube, the world's most popular online video community. The system recommends personalized sets of videos to users based on their activity on the site. We discuss some of the unique challenges that the system faces and how we address them. In addition, we provide details on the experimentation and evaluation framework used to test and tune new algorithms. We also present some of the findings from these experiments.",
"title": ""
},
{
"docid": "27101c9dcb89149b68d3ad47b516db69",
"text": "A brain-computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or 'locked in' by neurological neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface. We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with the artifacts in the control signals and improve the performance. Fourth, the review studies some mathematic algorithms used in the feature extraction and classification steps which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices.",
"title": ""
},
{
"docid": "298d0770cb97f124b06268f6de5b144f",
"text": "Cerebral blood flow (CBF) is coupled to neuronal activity and is imaged in vivo to map brain activation. CBF is also modified by afferent projection fibres that release vasoactive neurotransmitters in the perivascular region, principally on the astrocyte endfeet that outline cerebral blood vessels. However, the role of astrocytes in the regulation of cerebrovascular tone remains uncertain. Here we determine the impact of intracellular Ca2+ concentrations ([Ca2+]i) in astrocytes on the diameter of small arterioles by using two-photon Ca2+ uncaging to increase [Ca2+]i. Vascular constrictions occurred when Ca2+ waves evoked by uncaging propagated into the astrocyte endfeet and caused large increases in [Ca2+]i. The vasoactive neurotransmitter noradrenaline increased [Ca2+]i in the astrocyte endfeet, the peak of which preceded the onset of arteriole constriction. Depressing increases in astrocyte [Ca2+]i with BAPTA inhibited the vascular constrictions in noradrenaline. We find that constrictions induced in the cerebrovasculature by increased [Ca2+]i in astrocyte endfeet are generated through the phospholipase A2–arachidonic acid pathway and 20-hydroxyeicosatetraenoic acid production. Vasoconstriction by astrocytes is a previously unknown mechanism for the regulation of CBF.",
"title": ""
},
{
"docid": "283449016e04bcfff09fca91da137dca",
"text": "This paper proposes a depth hole filling method for RGBD images obtained from the Microsoft Kinect sensor. First, the proposed method labels depth holes based on 8-connectivity. For each labeled depth hole, the proposed method fills depth hole using the depth distribution of neighboring pixels of the depth hole. Then, we refine the hole filling result with cross-bilateral filtering. In experiments, by simply using the depth distribution of neighboring pixels, the proposed method improves the acquired depth map and reduces false filling caused by incorrect depth-color fusion.",
"title": ""
},
{
"docid": "9b702c679d7bbbba2ac29b3a0c2f6d3b",
"text": "Mobile-edge computing (MEC) has recently emerged as a prominent technology to liberate mobile devices from computationally intensive workloads, by offloading them to the proximate MEC server. To make offloading effective, the radio and computational resources need to be dynamically managed, to cope with the time-varying computation demands and wireless fading channels. In this paper, we develop an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint. Specifically, at each time slot, the optimal CPU-cycle frequencies of the mobile devices are obtained in closed forms, and the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method; while for the MEC server, both the optimal frequencies of the CPU cores and the optimal MEC server scheduling decision are derived in closed forms. Besides, a delay-improved mechanism is proposed to reduce the execution delay. Rigorous performance analysis is conducted for the proposed algorithm and its delay-improved version, indicating that the weighted sum power consumption and execution delay obey an $\\left [{O\\left ({1 / V}\\right), O\\left ({V}\\right) }\\right ]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters.",
"title": ""
},
{
"docid": "c3f6e26eb8cccde1b462e2ab6bb199c3",
"text": "Scale-out distributed storage systems have recently gained high attentions with the emergence of big data and cloud computing technologies. However, these storage systems sometimes suffer from performance degradation, especially when the communication subsystem is not fully optimized. The problem becomes worse as the network bandwidth and its corresponding traffic increase. In this paper, we first conduct an extensive analysis of communication subsystem in Ceph, an object-based scale-out distributed storage system. Ceph uses asynchronous messenger framework for inter-component communication in the storage cluster. Then, we propose three major optimizations to improve the performance of Ceph messenger. These include i) deploying load balancing algorithm among worker threads based on the amount of workloads, ii) assigning multiple worker threads (we call dual worker) per single connection to maximize the overlapping activity among threads, and iii) using multiple connections between storage servers to maximize bandwidth usage, and thus reduce replication overhead. The experimental results show that the optimized Ceph messenger outperforms the original messenger implementation up to 40% in random writes with 4K messages. Moreover, Ceph with optimized communication subsystem shows up to 13% performance improvement as compared to original Ceph.",
"title": ""
},
{
"docid": "b4efebd49c8dd2756a4c2fb86b854798",
"text": "Mobile technologies (including handheld and wearable devices) have the potential to enhance learning activities from basic medical undergraduate education through residency and beyond. In order to use these technologies successfully, medical educators need to be aware of the underpinning socio-theoretical concepts that influence their usage, the pre-clinical and clinical educational environment in which the educational activities occur, and the practical possibilities and limitations of their usage. This Guide builds upon the previous AMEE Guide to e-Learning in medical education by providing medical teachers with conceptual frameworks and practical examples of using mobile technologies in medical education. The goal is to help medical teachers to use these concepts and technologies at all levels of medical education to improve the education of medical and healthcare personnel, and ultimately contribute to improved patient healthcare. This Guide begins by reviewing some of the technological changes that have occurred in recent years, and then examines the theoretical basis (both social and educational) for understanding mobile technology usage. From there, the Guide progresses through a hierarchy of institutional, teacher and learner needs, identifying issues, problems and solutions for the effective use of mobile technology in medical education. This Guide ends with a brief look to the future.",
"title": ""
},
{
"docid": "2ef92113a901df268261be56f5110cfa",
"text": "This paper studies the problem of finding a priori shortest paths to guarantee a given likelihood of arriving on-time in a stochastic network. Such ‘‘reliable” paths help travelers better plan their trips to prepare for the risk of running late in the face of stochastic travel times. Optimal solutions to the problem can be obtained from local-reliable paths, which are a set of non-dominated paths under first-order stochastic dominance. We show that Bellman’s principle of optimality can be applied to construct local-reliable paths. Acyclicity of local-reliable paths is established and used for proving finite convergence of solution procedures. The connection between the a priori path problem and the corresponding adaptive routing problem is also revealed. A label-correcting algorithm is proposed and its complexity is analyzed. A pseudo-polynomial approximation is proposed based on extreme-dominance. An extension that allows travel time distribution functions to vary over time is also discussed. We show that the time-dependent problem is decomposable with respect to arrival times and therefore can be solved as easily as its static counterpart. Numerical results are provided using typical transportation networks. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3fe42f71b484068b843fedbd3c24ec45",
"text": "We design an Enriched Deep Recurrent Visual Attention Model (EDRAM) — an improved attention-based architecture for multiple object recognition. The proposed model is a fully differentiable unit that can be optimized end-to-end by using Stochastic Gradient Descent (SGD). The Spatial Transformer (ST) was employed as visual attention mechanism which allows to learn the geometric transformation of objects within images. With the combination of the Spatial Transformer and the powerful recurrent architecture, the proposed EDRAM can localize and recognize objects simultaneously. EDRAM has been evaluated on two publicly available datasets including MNIST Cluttered (with 70K cluttered digits) and SVHN (with up to 250k real world images of house numbers). Experiments show that it obtains superior performance as compared with the state-of-the-art models.",
"title": ""
},
{
"docid": "a26089c56be9fc140acc47086964ad5a",
"text": "Module integrated converters (MICs) have been under rapid development for single-phase grid-tied photovoltaic applications. The capacitive energy storage implementation for the double-line-frequency power variation represents a differentiating factor among existing designs. This paper introduces a new topology that places the energy storage block in a series-connected path with the line interface block. This design provides independent control over the capacitor voltage, soft-switching for all semiconductor devices, and the full four-quadrant operation with the grid. The proposed approach is analyzed and experimentally demonstrated.",
"title": ""
},
{
"docid": "82e7bdd78261e7339472c7278bff97ca",
"text": "A novel antenna with both horizontal and vertical polarizations is proposed for 1.7-2.1 GHz LTE band small cell base stations. Horizontal polarization is achieved by using the Vivaldi antennas at the main PCB board in azimuth plane, whereas the vertical polarization is obtained using the rectangular monopole with curved corners in proximity of the horizontal elements. A prototype antenna associated with 8-elements (four horizontal and four vertical) is fabricated on the FR4 substrate with the thickness of 0.2 cm and 0.12 cm for Vivaldi and monopole antennas, respectively. Experimental results have validated the design procedure of the antenna with a volume of 14 × 14 × 4.5 cm3 and indicated the realization of the requirements for the small cell base station applications.",
"title": ""
}
] |
scidocsrr
|
86c4822503d824f45b06eb1c971b36ea
|
Dual-Band Circularly Polarized Stacked Annular-Ring Patch Antenna for GPS Application
|
[
{
"docid": "440e45de4d13e89e3f268efa58f8a51a",
"text": "This letter describes the concept, design, and measurement of a low-profile integrated microstrip antenna for dual-band applications. The antenna operates at both the GPS L1 frequency of 1.575 GHz with circular polarization and 5.88 GHz with a vertical linear polarization for dedicated short-range communication (DSRC) application. The antenna is low profile and meets stringent requirements on pattern/polarization performance in both bands. The design procedure is discussed, and full measured data are presented.",
"title": ""
},
{
"docid": "a0c240efadc361ea36b441d34fc10a26",
"text": "We describe a single-feed stacked patch antenna design that is capable of simultaneously receiving both right hand circularly polarized (RHCP) satellite signals within the GPS LI frequency band and left hand circularly polarized (LHCP) satellite signals within the SDARS frequency band. In addition, the design provides improved SDARS vertical linear polarization (VLP) gain for terrestrial repeater signal reception at low elevation angles as compared to a current state of the art SDARS patch antenna.",
"title": ""
},
{
"docid": "9e292d43355dbdbcf6360c88e49ba38b",
"text": "This paper proposes stacked dual-patch CP antenna for GPS and SDMB services. The characteristic of CP at dual-frequency bands is achieved with a circular patch truncated corners with ears at diagonal direction. According to the dimensions of the truncated corners as well as spacing between centers of the two via-holes, the axial ratio of the CP antenna can be controlled. The good return loss results were obtained both at GPS and SDMB bands. The measured gains of the antenna system are 2.3 dBi and 2.4 dBi in GPS and SDMB bands, respectively. The measured axial ratio is slightly shifted frequencies due to diameter variation of via-holes and the spacing between lower patch and upper patch. The proposed low profile, low-cost fabrication, dual circularly polarization, and separated excitation ports make the proposed stacked antenna an applicable solution as a multi-functional antenna for GPS and SDMB operation on vehicle.",
"title": ""
}
] |
[
{
"docid": "06ae56bc104dbcaa6c82c5b3d021d7fe",
"text": "Open Innovation is a phenomenon that has become increasingly important for both practice and theory over the last few years. The reasons are to be found in shorter innovation cycles, industrial research and development’s escalating costs as well as in the dearth of resources. Subsequently, the open source phenomenon has attracted innovation researchers and practitioners. The recent era of open innovation started when practitioners realised that companies that wished to commercialise both their own ideas as well as other firms’ innovation should seek new ways to bring their in-house ideas to market. They need to deploy pathways outside their current businesses and should realise that the locus where knowledge is created does not necessarily always equal the locus of innovation they need not both be found within the company. Experience has furthermore shown that neither the locus of innovation nor exploitation need lie within companies’ own boundaries. However, emulation of the open innovation approach transforms a company’s solid boundaries into a semi-permeable membrane that enables innovation to move more easily between the external environment and the company’s internal innovation process. How far the open innovation approach is implemented in practice and whether there are identifiable patterns were the questions we investigated with our empirical study. Based on our own empirical database of 124 companies, we identified three core open innovation processes: (1) The outside-in process: Enriching a company’s own knowledge base through the integration of suppliers, customers, and external knowledge sourcing can increase a company’s innovativeness. (2) The inside-out process: The external exploitation of ideas in different markets, selling IP and multiplying technology by channelling ideas to the external environment. (3) The coupled process: Linking outside-in and inside-out by working in alliances with complementary companies during which give and take are crucial for success. Consequent thinking along the whole value chain and new business models enable this core process.",
"title": ""
},
{
"docid": "075742c6c4017f03fa72ebae69b4d857",
"text": "This document describes Virtual eXtensible Local Area Network (VXLAN), which is used to address the need for overlay networks within virtualized data centers accommodating multiple tenants. The scheme and the related protocols can be used in networks for cloud service providers and enterprise data centers. This memo documents the deployed VXLAN protocol for the benefit of the Internet community.",
"title": ""
},
{
"docid": "5c74d0cfcbeaebc29cdb58a30436556a",
"text": "Modular decomposition is an effective means to achieve a complex system, but that of current part-component-based does not meet the needs of the positive development of the production. Design Structure Matrix (DSM) can simultaneously reflect the sequence, iteration, and feedback information, and express the parallel, sequential, and coupled relationship between DSM elements. This article, a modular decomposition method, named Design Structure Matrix Clustering modularize method, is proposed, concerned procedures are define, based on sorting calculate and clustering analysis of DSM, according to the rules of rows exchanges and columns exchange with the same serial number. The purpose and effectiveness of DSM clustering modularize method are confirmed through case study of assembly and calibration system for the large equipment.",
"title": ""
},
{
"docid": "2887fb157126497032c31459a8c9ae46",
"text": "The amount of data in electronic and real world is constantly on the rise. Therefore, extracting useful knowledge from the total available data is very important and time consuming task. Data mining has various techniques for extracting valuable information or knowledge from data. These techniques are applicable for all data that are collected inall fields of science. Several research investigations are published about applications of data mining in various fields of sciences such as defense, banking, insurances, education, telecommunications, medicine and etc. This investigation attempts to provide a comprehensive survey about applications of data mining techniques in breast cancer diagnosis, treatment & prognosis till now. Further, the main challenges in these area is presented in this investigation. Since several research studies currently are going on in this issues, therefore, it is necessary to have a complete survey about all researches which are completed up to now, along with the results of those studies and important challenges which are currently exist in this area for helping young researchers and presenting to them the main problems that are still exist in this area.",
"title": ""
},
{
"docid": "8ab05713986a3fcb1ebe6973be40b13c",
"text": "Long-term care nursing staff are subject to considerable occupational stress and report high levels of burnout, yet little is known about how stress and social support are associated with burnout in this population. The present study utilized the job demands-resources model of burnout to examine relations between job demands (occupational and personal stress), job resources (sources and functions of social support), and burnout in a sample of nursing staff at a long-term care facility (N = 250). Hierarchical linear regression analyses revealed that job demands (greater occupational stress) were associated with more emotional exhaustion, more depersonalization, and less personal accomplishment. Job resources (support from supervisors and friends or family members, reassurance of worth, opportunity for nurturing) were associated with less emotional exhaustion and higher levels of personal accomplishment. Interventions to reduce burnout that include a focus on stress and social support outside of work may be particularly beneficial for long-term care staff.",
"title": ""
},
{
"docid": "7ecf315d70e6d438ef90ec76b192b65f",
"text": "Stress is a common condition, a response to a physical threat or psychological distress, that generates a host of chemical and hormonal reactions in the body. In essence, the body prepares to fight or fiee, pumping more blood to the heart and muscles and shutting down all nonessential functions. As a temporary state, this reaction serves the body well to defend itself When the stress reaction is prolonged, however, the normal physical functions that have in response either been exaggerated or shut down become dysfunctional. Many have noted the benefits of exercise in diminishing the stress response, and a host of studies points to these benefits. Yoga, too, has been recommended and studied in relationship to stress, although the studies are less scientifically replicable. Nonetheless, several researchers claim highly beneficial results from Yoga practice in alleviating stress and its effects. The practices recommended range from intense to moderate to relaxed asana sequences, along yNith.pranayama and meditation. In all these approaches to dealing with stress, one common element stands out: The process is as important as the activity undertaken. Because it fosters self-awareness. Yoga is a promising approach for dealing with the stress response. Yoga and the Stress Response Stress has become a common catchword in our society to indicate a host of difficulties, both as cause and effect. The American Academy of Family Physicians has noted that stress-related symptoms prompt two-thirds of the office visits to family physicians.' Exercise and alternative therapies are now commonly prescribed for stress-related complaints and illness. Even a recent issue of Consumer Reports suggests Yoga for stress relief.̂ Many books and articles claim, as does Dr. Susan Lark, that practicing Yoga will \"provide effective relief of anxiety and stress.\"^ But is this an accurate promise? What Is the Stress Response? A review of the current thinking on stress reveals that the process is both biochemical and psychological. A very good summary of research on the stress response is contained in Robert Sapolsky's Why Zebras Don't Get",
"title": ""
},
{
"docid": "3770720cff3a36596df097835f4f10a9",
"text": "As mobile computing technologies have been more powerful and inclusive in people’s daily life, the issue of mobile assisted language learning (MALL) has also been widely explored in CALL research. Many researches on MALL consider the emerging mobile technologies have considerable potentials for the effective language learning. This review study focuses on the investigation of newly emerging mobile technologies and their pedagogical applications for language teachers and learners. Recent research or review on mobile assisted language learning tends to focus on more detailed applications of newly emerging mobile technology, rather than has given a broader point focusing on types of mobile device itself. In this paper, I thus reviewed recent research and conference papers for the last decade, which utilized newly emerging and integrated mobile technology. Its pedagogical benefits and challenges are discussed.",
"title": ""
},
{
"docid": "99c944265ca0d5d9de5bf5855c6ad1f4",
"text": "This study was designed to explore the impact of Yoga and Meditation based lifestyle intervention (YMLI) on cellular aging in apparently healthy individuals. During this 12-week prospective, open-label, single arm exploratory study, 96 apparently healthy individuals were enrolled to receive YMLI. The primary endpoints were assessment of the change in levels of cardinal biomarkers of cellular aging in blood from baseline to week 12, which included DNA damage marker 8-hydroxy-2'-deoxyguanosine (8-OH2dG), oxidative stress markers reactive oxygen species (ROS), and total antioxidant capacity (TAC), and telomere attrition markers telomere length and telomerase activity. The secondary endpoints were assessment of metabotrophic blood biomarkers associated with cellular aging, which included cortisol, β-endorphin, IL-6, BDNF, and sirtuin-1. After 12 weeks of YMLI, there were significant improvements in both the cardinal biomarkers of cellular aging and the metabotrophic biomarkers influencing cellular aging compared to baseline values. The mean levels of 8-OH2dG, ROS, cortisol, and IL-6 were significantly lower and mean levels of TAC, telomerase activity, β-endorphin, BDNF, and sirtuin-1 were significantly increased (all values p < 0.05) post-YMLI. The mean level of telomere length was increased but the finding was not significant (p = 0.069). YMLI significantly reduced the rate of cellular aging in apparently healthy population.",
"title": ""
},
{
"docid": "fd3dd59550806b93a625f6e6750e888f",
"text": "Location-based services have become widely available on mobile devices. Existing methods employ a pull model or user-initiated model, where a user issues a query to a server which replies with location-aware answers. To provide users with instant replies, a push model or server-initiated model is becoming an inevitable computing model in the next-generation location-based services. In the push model, subscribers register spatio-textual subscriptions to capture their interests, and publishers post spatio-textual messages. This calls for a high-performance location-aware publish/subscribe system to deliver publishers' messages to relevant subscribers.In this paper, we address the research challenges that arise in designing a location-aware publish/subscribe system. We propose an rtree based index structure by integrating textual descriptions into rtree nodes. We devise efficient filtering algorithms and develop effective pruning techniques to improve filtering efficiency. Experimental results show that our method achieves high performance. For example, our method can filter 500 tweets in a second for 10 million registered subscriptions on a commodity computer.",
"title": ""
},
{
"docid": "90a3dd2bc75817a49a408e7666660e29",
"text": "RATIONALE\nPulmonary arterial hypertension (PAH) is an orphan disease for which the trend is for management in designated centers with multidisciplinary teams working in a shared-care approach.\n\n\nOBJECTIVE\nTo describe clinical and hemodynamic parameters and to provide estimates for the prevalence of patients diagnosed for PAH according to a standardized definition.\n\n\nMETHODS\nThe registry was initiated in 17 university hospitals following at least five newly diagnosed patients per year. All consecutive adult (> or = 18 yr) patients seen between October 2002 and October 2003 were to be included.\n\n\nMAIN RESULTS\nA total of 674 patients (mean +/- SD age, 50 +/- 15 yr; range, 18-85 yr) were entered in the registry. Idiopathic, familial, anorexigen, connective tissue diseases, congenital heart diseases, portal hypertension, and HIV-associated PAH accounted for 39.2, 3.9, 9.5, 15.3, 11.3, 10.4, and 6.2% of the population, respectively. At diagnosis, 75% of patients were in New York Heart Association functional class III or IV. Six-minute walk test was 329 +/- 109 m. Mean pulmonary artery pressure, cardiac index, and pulmonary vascular resistance index were 55 +/- 15 mm Hg, 2.5 +/- 0.8 L/min/m(2), and 20.5 +/- 10.2 mm Hg/L/min/m(2), respectively. The low estimates of prevalence and incidence of PAH in France were 15.0 cases/million of adult inhabitants and 2.4 cases/million of adult inhabitants/yr. One-year survival was 88% in the incident cohort.\n\n\nCONCLUSIONS\nThis contemporary registry highlights current practice and shows that PAH is detected late in the course of the disease, with a majority of patients displaying severe functional and hemodynamic compromise.",
"title": ""
},
{
"docid": "b4bca1a35fca1cca92b4f2e2f77152e1",
"text": "This paper proposed design and development of a flexible UWB wearable antenna using flexible and elastic polymer substrate. Polydimethylsiloxane (PDMS) was chosen to be used as flexible substrate for the proposed antenna which is a kind of silicone elastic, it has attractive mechanical and electrical properties such as flexibility, softness, water resistance low permittivity and transparency. The proposed antenna consists of a rectangular patch with two steps notches in the upper side of the patch, resulting in a more compact and increase in the bandwidth. In addition, the proposed antenna has an elliptical slot for an enhancement of the bandwidth and gain. The bottom side edges of the patch have been truncated to provide an additional surface current path. The proposed UWB wearable antenna functions from 2.5 GHz to 12.4 GHz frequency range and it was successfully designed and the simulated result showed that the return loss was maintained less than -10 dB and VSWR kept less than 2 over the entire desired frequency range (2.5 GHz - 12.4 GHz). The gain of the proposed antenna varies with frequency and the maximum gain recorded is 4.56 dB at 6.5 GHz. Simultaneously, The radiation patterns of the proposed antenna are also presented. The performance of the antenna under bending condition is comparable with the normal condition's performance.",
"title": ""
},
{
"docid": "0218c583a8658a960085ddf813f38dbf",
"text": "The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.",
"title": ""
},
{
"docid": "1ab13d8abe63d25ba5da7f1e19e641fe",
"text": "Recording of patient-reported outcomes (PROs) enables direct measurement of the experiences of patients with cancer. In the past decade, the use of PROs has become a prominent topic in health-care innovation; this trend highlights the role of the patient experience as a key measure of health-care quality. Historically, PROs were used solely in the context of research studies, but a growing body of literature supports the feasibility of electronic collection of PROs, yielding reliable data that are sometimes of better quality than clinician-reported data. The incorporation of electronic PRO (ePRO) assessments into standard health-care settings seems to improve the quality of care delivered to patients with cancer. Such efforts, however, have not been widely adopted, owing to the difficulties of integrating PRO-data collection into clinical workflows and electronic medical-record systems. The collection of ePRO data is expected to enhance the quality of care received by patients with cancer; however, for this approach to become routine practice, uniquely trained people, and appropriate policies and analytical solutions need to be implemented. In this Review, we discuss considerations regarding measurements of PROs, implementation challenges, as well as evidence of outcome improvements associated with the use of PROs, focusing on the centrality of PROs as part of 'big-data' initiatives in learning health-care systems.",
"title": ""
},
{
"docid": "f32ed82c3ab67c711f50394eea2b9106",
"text": "Concept-to-text generation refers to the task of automatically producing textual output from non-linguistic input. We present a joint model that captures content selection (“what to say”) and surface realization (“how to say”) in an unsupervised domain-independent fashion. Rather than breaking up the generation process into a sequence of local decisions, we define a probabilistic context-free grammar that globally describes the inherent structure of the input (a corpus of database records and text describing some of them). We recast generation as the task of finding the best derivation tree for a set of database records and describe an algorithm for decoding in this framework that allows to intersect the grammar with additional information capturing fluency and syntactic well-formedness constraints. Experimental evaluation on several domains achieves results competitive with state-of-the-art systems that use domain specific constraints, explicit feature engineering or labeled data.",
"title": ""
},
{
"docid": "559893f48207bc694259712d4a607bad",
"text": "The purpose of this conceptual paper is to discuss four main different tools which are: mobile marketing, E-mail marketing, web marketing and marketing through social networking sites, which use to distribute e-marketing promotion and understanding their different influence on consumers` perception. This study also highlighted the E-marketing, marketing through internet, mobile marketing, web marketing and role of social networks and their component in term of perceptual differences and features which are important to them according to the literatures. The review of the research contains some aspect of mobile marketing, terms like adaption, role of trust, and customers’ satisfaction. Moreover some attributes of marketing through E-mail like Permission issue in Email in aim of using for marketing activity and key success factor base on previous literatures.",
"title": ""
},
{
"docid": "def31cb876d7f786d0759f273333f602",
"text": "Economic globalisation and internationalisation of operations are essential factors in integration of suppliers, partners and customers within and across national borders, the objective being to achieve integrated supply chains. In this effort, implementation of information technologies and systems such as enterprise resource planning (ERP) facilitate the desired level of integration. There are cases of successful and unsuccessful implementations. The principal reason for failure is often associated with poor management of the implementation process. This paper examines key dimensions of implementation of ERP system within a large manufacturing organisation and identifies core issues to confront in successful implementation of enterprise information system. A brief overview of the application of ERP system is also presented and in particular, ERP software package known as SAP R/3, which was the ERP software package selected by a c Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.",
"title": ""
},
{
"docid": "401f93b2405bd54882fe876365195425",
"text": "Previous approaches to training syntaxbased sentiment classification models required phrase-level annotated corpora, which are not readily available in many languages other than English. Thus, we propose the use of tree-structured Long Short-Term Memory with an attention mechanism that pays attention to each subtree of the parse tree. Experimental results indicate that our model achieves the stateof-the-art performance in a Japanese sentiment classification task.",
"title": ""
},
{
"docid": "139ecd9ff223facaec69ad6532f650db",
"text": "Student retention in open and distance learning (ODL) is comparatively poor to traditional education and, in some contexts, embarrassingly low. Literature on the subject of student retention in ODL indicates that even when interventions are designed and undertaken to improve student retention, they tend to fall short. Moreover, this area has not been well researched. The main aim of our research, therefore, is to better understand and measure students’ attitudes and perceptions towards the effectiveness of mobile learning. Our hope is to determine how this technology can be optimally used to improve student retention at Bachelor of Science programmes at Indira Gandhi National Open University (IGNOU) in India. For our research, we used a survey. Results of this survey clearly indicate that offering mobile learning could be one method improving retention of BSc students, by enhancing their teaching/ learning and improving the efficacy of IGNOU’s existing student support system. The biggest advantage of this technology is that it can be used anywhere, anytime. Moreover, as mobile phone usage in India explodes, it offers IGNOU easy access to a larger number of learners. This study is intended to help inform those who are seeking to adopt mobile learning systems with the aim of improving communication and enriching students’ learning experiences in their ODL institutions.",
"title": ""
},
{
"docid": "d9b8c9c1427fc68f9e40e24ae517c7e8",
"text": "Although studies have shown that Instagram use and young adults' mental health are cross-sectionally associated, longitudinal evidence is lacking. In addition, no study thus far examined this association, or the reverse, among adolescents. To address these gaps, we set up a longitudinal panel study among 12- to 19-year-old Flemish adolescents to investigate the reciprocal relationships between different types of Instagram use and depressed mood. Self-report data from 671 adolescent Instagram users (61% girls; MAge = 14.96; SD = 1.29) were used to examine our research question and test our hypotheses. Structural equation modeling showed that Instagram browsing at Time 1 was related to increases in adolescents' depressed mood at Time 2. In addition, adolescents' depressed mood at Time 1 was related to increases in Instagram posting at Time 2. These relationships were similar among boys and girls. Potential explanations for the study findings and suggestions for future research are discussed.",
"title": ""
},
{
"docid": "70cad4982e42d44eec890faf6ddc5c75",
"text": "Both translation arrest and proteasome stress associated with accumulation of ubiquitin-conjugated protein aggregates were considered as a cause of delayed neuronal death after transient global brain ischemia; however, exact mechanisms as well as possible relationships are not fully understood. The aim of this study was to compare the effect of chemical ischemia and proteasome stress on cellular stress responses and viability of neuroblastoma SH-SY5Y and glioblastoma T98G cells. Chemical ischemia was induced by transient treatment of the cells with sodium azide in combination with 2-deoxyglucose. Proteasome stress was induced by treatment of the cells with bortezomib. Treatment of SH-SY5Y cells with sodium azide/2-deoxyglucose for 15 min was associated with cell death observed 24 h after treatment, while glioblastoma T98G cells were resistant to the same treatment. Treatment of both SH-SY5Y and T98G cells with bortezomib was associated with cell death, accumulation of ubiquitin-conjugated proteins, and increased expression of Hsp70. These typical cellular responses to proteasome stress, observed also after transient global brain ischemia, were not observed after chemical ischemia. Finally, chemical ischemia, but not proteasome stress, was in SH-SY5Y cells associated with increased phosphorylation of eIF2α, another typical cellular response triggered after transient global brain ischemia. Our results showed that short chemical ischemia of SH-SY5Y cells is not sufficient to induce both proteasome stress associated with accumulation of ubiquitin-conjugated proteins and stress response at the level of heat shock proteins despite induction of cell death and eIF2α phosphorylation.",
"title": ""
}
] |
scidocsrr
|
ccbab0cd09ef77b5dbf4df4ff102e02a
|
ACTION SPACE DEFINED BY NATURAL LANGUAGE
|
[
{
"docid": "955ae6e1dffbe580217b812f943b4339",
"text": "Successful applications of reinforcement learning in realworld problems often require dealing with partially observable states. It is in general very challenging to construct and infer hidden states as they often depend on the agent’s entire interaction history and may require substantial domain knowledge. In this work, we investigate a deep-learning approach to learning the representation of states in partially observable tasks, with minimal prior knowledge of the domain. In particular, we study reinforcement learning with deep neural networks, including RNN and LSTM, which are equipped with the desired property of being able to capture long-term dependency on history, and thus providing an effective way of learning the representation of hidden states. We further develop a hybrid approach that combines the strength of both supervised learning (for representing hidden states) and reinforcement learning (for optimizing control) with joint training. Extensive experiments based on a KDD Cup 1998 direct mailing campaign problem demonstrate the effectiveness and advantages of the proposed approach, which performs the best across the board.",
"title": ""
}
] |
[
{
"docid": "060167f774d43cd41476de531ded40ad",
"text": "In this study, we proposed a research model to investigate the factors influencing users’ continuance intention to use Twitter. Building on the uses and gratification framework, we have proposed four types of gratifications for Twitter usage, including content gratification, technology gratification, process gratification, and social gratification. We conducted an online survey and collected 124 responses. The data was analyzed using Partial Least Squares. Our results showed that content gratifications and new technology gratification are the two key types of gratifications affecting the continuance intention to use Twitter. We conclude with a discussion of theoretical and practical implications. We believe that this study will provide important insights for future research on Twitter.",
"title": ""
},
{
"docid": "e18f8e9bcf60ca3b76abe65c908556b7",
"text": "Knowledge is a critical resource that can help organizations to sustain strategic advantage in competitive environments. Organizations in Asia and elsewhere are turning to knowledge management (KM) initiatives and technologies to leverage their knowledge resources. As a key component of KM initiatives, electronic knowledge repositories (EKR) are deployed by organizations to store codified knowledge for future reuse. Although EKR have been in use for some time, there is a lack of understanding about factors that motivate employees to seek knowledge from EKR. This study formulates and empirically tests a theoretical model relating potential antecedents to EKR usage for knowledge seeking. The model was operationalized and the survey instrument subjected to a conceptual validation process. The survey was administered to 160 knowledge professionals in public sector organizations in Singapore who had accessed the contents of EKR in the course of their work. Results reveal that perceived output quality directly affects EKR usage for knowledge seeking. Further, resource availability impacts EKR usage for knowledge seeking particularly when task tacitness is low and incentives affect EKR usage particularly when task interdependence is high. Implications of these results for further research and improving EKR implementation",
"title": ""
},
{
"docid": "a85b3be3060e64961b1d80b792d8cc63",
"text": "Replisomes are the protein assemblies that replicate DNA. They function as molecular motors to catalyze template-mediated polymerization of nucleotides, unwinding of DNA, the synthesis of RNA primers, and the assembly of proteins on DNA. The replisome of bacteriophage T7 contains a minimum of proteins, thus facilitating its study. This review describes the molecular motors and coordination of their activities, with emphasis on the T7 replisome. Nucleotide selection, movement of the polymerase, binding of the processivity factor, unwinding of DNA, and RNA primer synthesis all require conformational changes and protein contacts. Lagging-strand synthesis is mediated via a replication loop whose formation and resolution is dictated by switches to yield Okazaki fragments of discrete size. Both strands are synthesized at identical rates, controlled by a molecular brake that halts leading-strand synthesis during primer synthesis. The helicase serves as a reservoir for polymerases that can initiate DNA synthesis at the replication fork. We comment on the differences in other systems where applicable.",
"title": ""
},
{
"docid": "c4df2361d80e8619e2d3d8b052ae2abc",
"text": "Learning from Demonstration (LfD) explores techniques for learning a task policy from examples provided by a human teacher. e field of LfD has grown into an extensive body of literature over the past 30 years, with a wide variety of approaches for encoding human demonstrations and modeling skills and tasks. Additionally, we have only recently seen a focus on gathering data from non-expert human teachers (i.e., domain experts but not robotics experts). In this book, we provide an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn from naive human teachers. We begin, in the introduction, with a unification of the various terminology seen in the literature as well as an outline of the design choices one has in designing an LfD system. Chapter 2 that gives a brief survey of psychology literature that provides insights from human social learning that are relevant to designing robotic social learners. Chapter 3 walks through an LfD interaction, surveying the design choices one makes and state of the art approaches in prior work. First, is the choice of input, how the human teacher interacts with the robot to provide demonstrations. Next, is the choice of modeling technique. Currently, there is a dichotomy in the field between approaches that model low-level motor skills and those that model high-level tasks composed of primitive actions. We devote a chapter to each of these. Chapter 7 on interactive and active learning approaches that allow the robot to refine an existing task model. And finally, Chapter 8 provides on best practices for evaluation of LfD systems, with a focus on how to approach experiments with human subjects for this domain.",
"title": ""
},
{
"docid": "f2b7720c705b9d3ecba4ceacfb2c1c81",
"text": "Traditionally, word segmentation (WS) adopts the single-granularity formalism, where a sentence corresponds to a single word sequence. However, Sproat et al. (1996) show that the inter-nativespeaker consistency ratio over Chinese word boundaries is only 76%, indicating single-grained WS (SWS) imposes unnecessary challenges on both manual annotation and statistical modeling. Moreover, WS results of different granularities can be complementary and beneficial for high-level applications. This work proposes and addresses multi-grained WS (MWS). First, we build a large-scale pseudo MWS dataset for model training and tuning by leveraging the annotation heterogeneity of three SWS datasets. Then we manually annotate 1,500 test sentences with true MWS annotations. Finally, we propose three benchmark approaches by casting MWS as constituent parsing and sequence labeling. Experiments and analysis lead to many interesting findings.",
"title": ""
},
{
"docid": "a69d74fec7f10a66d46dee2c73439a11",
"text": "Service Function Chaining (SFC) serves the traffic of a specific service along an ordered set of Service Functions (SFs). SFC uses Software Defined Networking (SDN) and Network Function Virtualization (NFV) technologies to reach the deployment and removal of SFC in an appropriate time with minimal costs. However, during the life time of deployed SFCs, the SFs are exposed to the risk of overloading, which results in an end-to-end packets drop and high delay at the application level. This paper presents the concept of a Scalable SFC Orchestrator capable of deploying SF Chains following the ETSI NFV architectural model, as well as orchestrating the runtime phase by rerouting the traffic to a different path in case of overload of certain SF instances. Moreover, a selection algorithm is presented to build the updated paths after scaling. The Scalable Orchestration feature is implemented and added as a main feature to an SFC Orchestrator. Furthermore, it is integrated into a current NFV architecture and evaluated in an NFV environment making use of the Fraunhofer FOKUS Open Baton toolkit in an OpenStack and Open Daylight based environment.",
"title": ""
},
{
"docid": "a99ff426d58252b106082e01d85a2f4f",
"text": "OBJECTIVE\nIn the aftermath of the double suicide of two teenage girls in 2007, the media linked the themes of 'emo' music and the girls' mental state. But it is not just emo music that has been the subject of scrutiny by the media. Rap music, country, and heavy metal have also been blamed for antisocial behaviours including violence, theft, promiscuity and drug use. It remains an important research and clinical question as to whether music contributes to the acting out of behaviours described in the music lyrics or whether the preferred music represents the already existing behavioural tendencies in the subject. This paper surveys and discusses the relevant literature on music preference and adolescent music listening behaviours, and their links with adolescent mental health.\n\n\nCONCLUSION\nStudies have found a relationship between various genres of music and antisocial behaviours, vulnerability to suicide, and drug use. However, studies reject that music is a causal factor and suggest that music preference is more indicative of emotional vulnerability. A limited number of studies have found correlations between music preference and mental health status. More research is needed to determine whether music preferences of those with diagnosed mental health issues differ substantially from the general adolescent population.",
"title": ""
},
{
"docid": "818b2a97c4648f04feadbb3bd7da90cc",
"text": "Reducing the number of features whilst maintaining an acceptable classification accuracy is a fundamental step in the process of constructing cancer predictive models. In this work, we introduce a novel hybrid (MI-LDA) feature selection approach for the diagnosis of ovarian cancer. This hybrid approach is embedded within a global optimization framework and offers a promising improvement on feature selection and classification accuracy processes. Global Mutual Information (MI) based feature selection optimizes the search process of finding best feature subsets in order to select the highly correlated predictors for ovarian cancer diagnosis. The maximal discriminative cancer predictors are then passed to a Linear Discriminant Analysis (LDA) classifier, and a Genetic Algorithm (GA) is applied to optimise the search process with respect to the estimated error rate of the LDA classifier (MI-LDA). Experiments were performed using an ovarian cancer dataset obtained from the FDA-NCI Clinical Proteomics Program Databank. The performance of the hybrid feature selection approach was evaluated using the Support Vector Machine (SVM) classifier and the LDA classifier. A comparison of the results revealed that the proposed (MI-LDA)-LDA model outperformed the (MI-LDA)-SVM model on selecting the maximal discriminative feature subset and achieved the highest predictive accuracy. The proposed system can therefore be used as an efficient tool for finding predictors and patterns in serum (blood)-derived proteomic data for the detection of ovarian cancer.",
"title": ""
},
{
"docid": "13079b0e030d024e313502a7ce1eb692",
"text": "In this paper, an integrated three-port bidirectional dc-dc converter for a dc distribution system is presented. One port of the low-voltage side of the proposed converter is chosen as a current source port which fits for photovoltaic (PV) panels with wide voltage variation. In addition, the interleaved structure of the current source port can provide the desired small current ripple to benefit the PV panel to achieve the maximum power point tracking (MPPT). Another port of the low-voltage side is chosen as a voltage source port interfaced with battery that has small voltage variation; therefore, the PV panel and energy storage element can be integrated by using one converter topology. The voltage port on the high-voltage side will be connected to the dc distribution bus. A high-frequency transformer of the proposed converter not only provides galvanic isolation between energy sources and high voltage dc bus, but also helps to remove the leakage current resulted from PV panels. The MPPT and power flow regulations are realized by duty cycle control and phase-shift angle control, respectively. Different from the single-phase dual-half-bridge converter, the power flow between the low-voltage side and high-voltage side is only related to the phase-shift angle in a large operation area. The system operation modes under different conditions are analyzed and the zero-voltage switching can be guaranteed in the PV application even when the dc-link voltage varies. Finally, system simulation and experimental results on a 3-kW hardware prototype are presented to verify the proposed technology.",
"title": ""
},
{
"docid": "46cc515d0d41e0027cc975f37d9e1f7b",
"text": "A distributed data-stream architecture finds application in sensor networks for monitoring environment and activities. In such a network, large numbers of sensors deliver continuous data to a central server. The rate at which the data is sampled at each sensor affects the communication resource and the computational load at the central server. In this paper, we propose a novel adaptive sampling technique where the sampling rate at each sensor adapts to the streaming-data characteristics. Our approach employs a Kalman-Filter (KF)-based estimation technique wherein the sensor can use the KF estimation error to adaptively adjust its sampling rate within a given range, autonomously. When the desired sampling rate violates the range, a new sampling rate is requested from the server. The server allocates new sampling rates under the constraint of available resources such that KF estimation error over all the active streaming sensors is minimized. Through empirical studies, we demonstrate the flexibility and effectiveness of our model.",
"title": ""
},
{
"docid": "30363c549000b7da574ef2b850caea04",
"text": "Recent research works have highlight the invariance or the symmetry that exists in the weight space of a typical neural network and the negative effect of the symmetry on the training neural network due to the Euclidean gradient being not scaling-invariant. Although the problem of the symmetry can be solved by either defining a suitable Riemannian gradient, which is a scale-invariant or placing appropriate constraints on the weights, it will introduce very high-computation cost. In this paper, we first discuss various invariances or symmetries in the weight space, and then we propose to solve the problem via the scaling-invariance of the neural network itself, instead of the scaling-invariant updates methods. The motivation behind our method is that the optimized parameter point in the weight space may be moved from the ill-conditioning region to the flat region via a series of node-wise rescaling without changing the function represented by the neural network. Second, we proposed the scaling-based weight normalization. The proposed method can compatible with the commonly used optimization algorithms and collaborates well with batch normalization. Although our algorithm is very simple, it can accelerate the convergence speed. The additional computation cost introduced by our method is lower. Lastly, the experiments show that our proposed method can improve the performance of various networks architectures over large-scale datasets consistently. Our method outperforms the state-of-the-art methods on CIFAR-100: we obtain test errors as 17.18%.",
"title": ""
},
{
"docid": "785ee9de92bdcef648b5e43dd32e25f5",
"text": "A voltage reference using a depletion-mode device is designed in a 0.13µm CMOS process and achieves ultra-low power consumption and sub-1V operation without sacrificing temperature and supply voltage insensitivity. Measurements show a temperature coefficient of 19.4ppm/° (3.4 µV/°), line sensitivity of 0.033%/V, power supply rejection ratio of−67dB, and power consumption of 2.2pW. It requires only two devices and functions down to V<inf>dd</inf>=0.5V with an area of 1350µm<sup>2</sup>. A variant for higher Vout is also demonstrated.",
"title": ""
},
{
"docid": "053f700e2f0addc497f05c3734234a5e",
"text": "A S RECENTLY AS SIX YEARS AGO, COMPUTER viruses were considered an urban myth by many. At the time, only a handful of PC viruses had been written and infection was relatively uncommon. Today the situation is very different. As of November 1996, virus writers have programmed more than 10,000 DOS-based computer viruses. In addition to the sheer increase in the number of viruses, the virus writers have also become more clever. Their newer creations are significantly more complex and difficult to detect and remove. These “improvements” can be at least partially attributed to the efforts of antivirus producers. As antivirus products improve and detect the “latest and greatest” viruses, the virus authors invent new and more devious ways to hide their progeny. This coevolution has led to the creation of the most complex class of virus to date: the polymorphic computer virus. The polymorphic virus avoids detection by mutating itself each time it infects a new program; each mutated infection is capable of performing the same tasks as its parent, yet it may look entirely different. These cunning viruses simply cannot be detected costeffectively using traditional antivirus scanning algorithms. Fortunately, the antivirus producers have responded, as they have in the past, with an equally creative solution to the polymorphic virus threat. Many antivirus programs are now starting to employ a technique known as generic decryption to detect even the most complex polymorphic viruses quickly and cost effectively. A computer virus is a self-replicating computer pro-",
"title": ""
},
{
"docid": "461ec14463eb20962ef168de781ac2a2",
"text": "Local descriptors based on the image noise residual have proven extremely effective for a number of forensic applications, like forgery detection and localization. Nonetheless, motivated by promising results in computer vision, the focus of the research community is now shifting on deep learning. In this paper we show that a class of residual-based descriptors can be actually regarded as a simple constrained convolutional neural network (CNN). Then, by relaxing the constraints, and fine-tuning the net on a relatively small training set, we obtain a significant performance improvement with respect to the conventional detector.",
"title": ""
},
{
"docid": "2e87c4fbb42424f3beb07e685c856487",
"text": "Conventional wisdom ties the origin and early evolution of the genus Homo to environmental changes that occurred near the end of the Pliocene. The basic idea is that changing habitats led to new diets emphasizing savanna resources, such as herd mammals or underground storage organs. Fossil teeth provide the most direct evidence available for evaluating this theory. In this paper, we present a comprehensive study of dental microwear in Plio-Pleistocene Homo from Africa. We examined all available cheek teeth from Ethiopia, Kenya, Tanzania, Malawi, and South Africa and found 18 that preserved antemortem microwear. Microwear features were measured and compared for these specimens and a baseline series of five extant primate species (Cebus apella, Gorilla gorilla, Lophocebus albigena, Pan troglodytes, and Papio ursinus) and two protohistoric human foraging groups (Aleut and Arikara) with documented differences in diet and subsistence strategies. Results confirmed that dental microwear reflects diet, such that hard-object specialists tend to have more large microwear pits, whereas tough food eaters usually have more striations and smaller microwear features. Early Homo specimens clustered with baseline groups that do not prefer fracture resistant foods. Still, Homo erectus and individuals from Swartkrans Member 1 had more small pits than Homo habilis and specimens from Sterkfontein Member 5C. These results suggest that none of the early Homo groups specialized on very hard or tough foods, but that H. erectus and Swartkrans Member 1 individuals ate, at least occasionally, more brittle or tough items than other fossil hominins studied.",
"title": ""
},
{
"docid": "1ac3f0ba0502e01f0e5bae95329cd33f",
"text": "Two studies examined hypotheses drawn from a proposed modification of the social-cognitive model of achievement motivation that centered on the 2 x 2 achievement goal framework. Implicit theories of ability were shown to be direct predictors of performance attainment and intrinsic motivation, and the goals of the 2 x 2 framework were shown to account for these direct relations. Perceived competence was shown to be a direct predictor of achievement goals, not a moderator of relations implicit theory or achievement goal effects. The results highlight the utility of attending to the approach-avoidance distinction in conceptual models of achievement motivation and are fully in line with the hierarchical model of achievement motivation.",
"title": ""
},
{
"docid": "1db26155269f5b9a66979fabc3e8006c",
"text": "An output-capacitorless low-dropout regulator (OCL-LDO), which is based on flipped-voltage-follower (FVF) using damping-factor-control (DFC) frequency compensation for SOC application, is presented in this paper. The proposed LDO with 1.2V supply, 100mA load current was designed by SMIC 0.13um standard CMOS process. Simulation results have shown that the LDO can be stable for a load capacitance ranging from 0 to 80pF. The line regulation and load regulation are 3.3mV/V and 62uV/mA, respectively, and the quiescent current consumption is only 27uA. Under maximum load current changes, the overshoots and undershoots are less than 90mV and 140mV, and the recovery time is about 2.5us.",
"title": ""
},
{
"docid": "df609125f353505fed31eee302ac1742",
"text": "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26].",
"title": ""
},
{
"docid": "1262b017b1d7f4ce516e247662d8ad27",
"text": "Detecting deception is an ancient problem that continues to find relevance today in many areas, such as border control, financial auditing, testimony assessment, and Internet fraud. Over the last century, research in this area has focused mainly on discovering physiological signals and psychological behaviors that are associated with lying. Using verbal cues (i.e., words and language structure) is not entirely new. But in recent years, data-driven and machine learning frameworks, which are now ubiquitous in the natural language processing (NLP) community, has brought new light to this old field. This highly accessible book puts last decade’s research in verbal deception in the context of traditional methods. It is a valuable resource to anyone with a basic understanding of machine learning looking to make inroads or break new ground in the subspecialty of detecting verbal deception. The book consists of five chapters organized into three parts—background on nonverbal cues, statistical NLP approaches, and future directions. The introductory chapter concisely defines the problem and relates verbal cues to the behavioral ones. It also provides an intuition of why patterns in language would be effective and provides an estimated state-of-the-art performance. Chapter 2 provides background on a behavioral approach to identifying deception. The first section gets readers acquainted with terms used by nonverbal cues to deception. These include physiological signals (which are the basis of well-known lie detection methods such as polygraphy and thermography), vocal cues (such as speech disfluencies), and body and facial expressions (e.g., pupil dilation). Although seemingly detached from the focus of the book, this preliminary material is an interesting introduction that also serves as a terminology reference later. The remaining chapter covers two topics: psychology of deception and applied criminal justice. The part on psychology presents a literature review of two definitive meta-analysis of the literature in the twentieth century. It first gives a theoretical account of deceptive behavior, such as motivation to lie and emotional states of liars. Next, it reports the experimental effectiveness of measurable cues, whether objectively or subjectively, such as complexity and amount of information. The second meta-analysis examines conditions that tend to make lying behavior more obvious, for example, interrogation. Although seemingly unrelated to NLP, I expect these reviews to be a source of inspiration for novel feature and model engineering. Because the material is very comprehensive and possibly foreign to the NLP community, I would like to see this part organized by the type of behavior cues (in the same vein as the preliminary material on physiological",
"title": ""
}
] |
scidocsrr
|
f3084cfbc2024883568105da62a17678
|
CoolStreaming/DONet: a data-driven overlay network for peer-to-peer live media streaming
|
[
{
"docid": "cdefeefa1b94254083eba499f6f502fb",
"text": "problems To understand the class of polynomial-time solvable problems, we must first have a formal notion of what a \"problem\" is. We define an abstract problem Q to be a binary relation on a set I of problem instances and a set S of problem solutions. For example, an instance for SHORTEST-PATH is a triple consisting of a graph and two vertices. A solution is a sequence of vertices in the graph, with perhaps the empty sequence denoting that no path exists. The problem SHORTEST-PATH itself is the relation that associates each instance of a graph and two vertices with a shortest path in the graph that connects the two vertices. Since shortest paths are not necessarily unique, a given problem instance may have more than one solution. This formulation of an abstract problem is more general than is required for our purposes. As we saw above, the theory of NP-completeness restricts attention to decision problems: those having a yes/no solution. In this case, we can view an abstract decision problem as a function that maps the instance set I to the solution set {0, 1}. For example, a decision problem related to SHORTEST-PATH is the problem PATH that we saw earlier. If i = G, u, v, k is an instance of the decision problem PATH, then PATH(i) = 1 (yes) if a shortest path from u to v has at most k edges, and PATH(i) = 0 (no) otherwise. Many abstract problems are not decision problems, but rather optimization problems, in which some value must be minimized or maximized. As we saw above, however, it is usually a simple matter to recast an optimization problem as a decision problem that is no harder. Encodings If a computer program is to solve an abstract problem, problem instances must be represented in a way that the program understands. An encoding of a set S of abstract objects is a mapping e from S to the set of binary strings. For example, we are all familiar with encoding the natural numbers N = {0, 1, 2, 3, 4,...} as the strings {0, 1, 10, 11, 100,...}. Using this encoding, e(17) = 10001. Anyone who has looked at computer representations of keyboard characters is familiar with either the ASCII or EBCDIC codes. In the ASCII code, the encoding of A is 1000001. Even a compound object can be encoded as a binary string by combining the representations of its constituent parts. Polygons, graphs, functions, ordered pairs, programs-all can be encoded as binary strings. Thus, a computer algorithm that \"solves\" some abstract decision problem actually takes an encoding of a problem instance as input. We call a problem whose instance set is the set of binary strings a concrete problem. We say that an algorithm solves a concrete problem in time O(T (n)) if, when it is provided a problem instance i of length n = |i|, the algorithm can produce the solution in O(T (n)) time. A concrete problem is polynomial-time solvable, therefore, if there exists an algorithm to solve it in time O(n) for some constant k. We can now formally define the complexity class P as the set of concrete decision problems that are polynomial-time solvable. We can use encodings to map abstract problems to concrete problems. Given an abstract decision problem Q mapping an instance set I to {0, 1}, an encoding e : I → {0, 1}* can be used to induce a related concrete decision problem, which we denote by e(Q). If the solution to an abstract-problem instance i I is Q(i) {0, 1}, then the solution to the concreteproblem instance e(i) {0, 1}* is also Q(i). 
As a technicality, there may be some binary strings that represent no meaningful abstract-problem instance. For convenience, we shall assume that any such string is mapped arbitrarily to 0. Thus, the concrete problem produces the same solutions as the abstract problem on binary-string instances that represent the encodings of abstract-problem instances. We would like to extend the definition of polynomial-time solvability from concrete problems to abstract problems by using encodings as the bridge, but we would like the definition to be independent of any particular encoding. That is, the efficiency of solving a problem should not depend on how the problem is encoded. Unfortunately, it depends quite heavily on the encoding. For example, suppose that an integer k is to be provided as the sole input to an algorithm, and suppose that the running time of the algorithm is Θ(k). If the integer k is provided in unary-a string of k 1's-then the running time of the algorithm is O(n) on length-n inputs, which is polynomial time. If we use the more natural binary representation of the integer k, however, then the input length is n = ⌊lg k⌋ + 1. In this case, the running time of the algorithm is Θ (k) = Θ(2), which is exponential in the size of the input. Thus, depending on the encoding, the algorithm runs in either polynomial or superpolynomial time. The encoding of an abstract problem is therefore quite important to our under-standing of polynomial time. We cannot really talk about solving an abstract problem without first specifying an encoding. Nevertheless, in practice, if we rule out \"expensive\" encodings such as unary ones, the actual encoding of a problem makes little difference to whether the problem can be solved in polynomial time. For example, representing integers in base 3 instead of binary has no effect on whether a problem is solvable in polynomial time, since an integer represented in base 3 can be converted to an integer represented in base 2 in polynomial time. We say that a function f : {0, 1}* → {0,1}* is polynomial-time computable if there exists a polynomial-time algorithm A that, given any input x {0, 1}*, produces as output f (x). For some set I of problem instances, we say that two encodings e1 and e2 are polynomially related if there exist two polynomial-time computable functions f12 and f21 such that for any i I , we have f12(e1(i)) = e2(i) and f21(e2(i)) = e1(i). That is, the encoding e2(i) can be computed from the encoding e1(i) by a polynomial-time algorithm, and vice versa. If two encodings e1 and e2 of an abstract problem are polynomially related, whether the problem is polynomial-time solvable or not is independent of which encoding we use, as the following lemma shows. Lemma 34.1 Let Q be an abstract decision problem on an instance set I , and let e1 and e2 be polynomially related encodings on I . Then, e1(Q) P if and only if e2(Q) P. Proof We need only prove the forward direction, since the backward direction is symmetric. Suppose, therefore, that e1(Q) can be solved in time O(nk) for some constant k. Further, suppose that for any problem instance i, the encoding e1(i) can be computed from the encoding e2(i) in time O(n) for some constant c, where n = |e2(i)|. To solve problem e2(Q), on input e2(i), we first compute e1(i) and then run the algorithm for e1(Q) on e1(i). How long does this take? The conversion of encodings takes time O(n), and therefore |e1(i)| = O(n), since the output of a serial computer cannot be longer than its running time. 
Solving the problem on e1(i) takes time O(|e1(i)|) = O(n), which is polynomial since both c and k are constants. Thus, whether an abstract problem has its instances encoded in binary or base 3 does not affect its \"complexity,\" that is, whether it is polynomial-time solvable or not, but if instances are encoded in unary, its complexity may change. In order to be able to converse in an encoding-independent fashion, we shall generally assume that problem instances are encoded in any reasonable, concise fashion, unless we specifically say otherwise. To be precise, we shall assume that the encoding of an integer is polynomially related to its binary representation, and that the encoding of a finite set is polynomially related to its encoding as a list of its elements, enclosed in braces and separated by commas. (ASCII is one such encoding scheme.) With such a \"standard\" encoding in hand, we can derive reasonable encodings of other mathematical objects, such as tuples, graphs, and formulas. To denote the standard encoding of an object, we shall enclose the object in angle braces. Thus, G denotes the standard encoding of a graph G. As long as we implicitly use an encoding that is polynomially related to this standard encoding, we can talk directly about abstract problems without reference to any particular encoding, knowing that the choice of encoding has no effect on whether the abstract problem is polynomial-time solvable. Henceforth, we shall generally assume that all problem instances are binary strings encoded using the standard encoding, unless we explicitly specify the contrary. We shall also typically neglect the distinction between abstract and concrete problems. The reader should watch out for problems that arise in practice, however, in which a standard encoding is not obvious and the encoding does make a difference. A formal-language framework One of the convenient aspects of focusing on decision problems is that they make it easy to use the machinery of formal-language theory. It is worthwhile at this point to review some definitions from that theory. An alphabet Σ is a finite set of symbols. A language L over Σ is any set of strings made up of symbols from Σ. For example, if Σ = {0, 1}, the set L = {10, 11, 101, 111, 1011, 1101, 10001,...} is the language of binary representations of prime numbers. We denote the empty string by ε, and the empty language by Ø. The language of all strings over Σ is denoted Σ*. For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000,...} is the set of all binary strings. Every language L over Σ is a subset of Σ*. There are a variety of operations on languages. Set-theoretic operations, such as union and intersection, follow directly from the set-theoretic definitions. We define the complement of L by . The concatenation of two languages L1 and L2 is the language L = {x1x2 : x1 L1 and x2 L2}. The closure or Kleene star of a language L is the language L*= {ε} L L L ···, where Lk is the language obtained by",
"title": ""
},
{
"docid": "06d30f5d22689e07190961ae76f7b9a0",
"text": "In recent years, overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet. Typically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burden on the underlying network. In this paper, we target high-bandwidth data distribution from a single source to a large number of receivers. Applications include large-file transfers and real-time multimedia streaming. For these applications, we argue that an overlay mesh, rather than a tree, can deliver fundamentally higher bandwidth and reliability relative to typical tree structures. This paper presents Bullet, a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high bandwidth overlay mesh. We construct Bullet around the insight that data should be distributed in a disjoint manner to strategic points in the network. Individual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel.Key contributions of this work include: i) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node, ii) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items, and iii) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment reveals up to a factor two bandwidth improvements under a variety of circumstances. In addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing. In a tree, it is critical that a node's parent delivers a high rate of application data to each child. In Bullet however, nodes simultaneously receive data from multiple sources in parallel, making it less important to locate any single source capable of sustaining a high transmission rate.",
"title": ""
}
] |
[
{
"docid": "d1b79ace26173ebe954bca25a06c5e34",
"text": "Recent proposals for deterministic database system designs argue that deterministic database systems facilitate replication since the same input can be independently sent to two different replicas without concern for replica divergence. In addition, they argue that determinism yields performance benefits due to (1) the introduction of deadlock avoidance techniques, (2) the reduction (or elimination) of distributed commit protocols, and (3) light-weight locking. However, these performance benefits are not universally applicable, and there exist several disadvantages of determinism, including (1) the additional overhead of processing transactions for which it is not known in advance what data will be accessed, (2) an inability to abort transactions arbitrarily (e.g., in the case of database or partition overload), and (3) the increased latency required by a preprocessing layer that ensures that the same input is sent to every replica. This paper presents a thorough experimental study that carefully investigates both the advantages and disadvantages of determinism, in order to give a database user a more complete understanding of which database to use for a given database workload and cluster configuration.",
"title": ""
},
{
"docid": "37b22de12284d38f6488de74f436ccc8",
"text": "Entity disambiguation is an important step in many information retrieval applications. This paper proposes new research for entity disambiguation with the focus of name disambiguation in digital libraries. In particular, pairwise similarity is first learned for publications that share the same author name string (ANS) and then a novel Hierarchical Agglomerative Clustering approach with Adaptive Stopping Criterion (HACASC) is proposed to adaptively cluster a set of publications that share a same ANS to individual clusters of publications with different author identities. The HACASC approach utilizes a mixture of kernel ridge regressions to intelligently determine the threshold in clustering. This obtains more appropriate clustering granularity than non-adaptive stopping criterion. We conduct a large scale empirical study with a dataset of more than 2 million publication record pairs to demonstrate the advantage of the proposed HACASC approach.",
"title": ""
},
{
"docid": "bacb761bc173a07bf13558e2e5419c2b",
"text": "Rejection sensitivity is the disposition to anxiously expect, readily perceive, and intensely react to rejection. In response to perceived social exclusion, highly rejection sensitive people react with increased hostile feelings toward others and are more likely to show reactive aggression than less rejection sensitive people in the same situation. This paper summarizes work on rejection sensitivity that has provided evidence for the link between anxious expectations of rejection and hostility after rejection. We review evidence that rejection sensitivity functions as a defensive motivational system. Thus, we link rejection sensitivity to attentional and perceptual processes that underlie the processing of social information. A range of experimental and diary studies shows that perceiving rejection triggers hostility and aggressive behavior in rejection sensitive people. We review studies that show that this hostility and reactive aggression can perpetuate a vicious cycle by eliciting rejection from those who rejection sensitive people value most. Finally, we summarize recent work suggesting that this cycle can be interrupted with generalized self-regulatory skills and the experience of positive, supportive relationships.",
"title": ""
},
{
"docid": "3e9d7fed78af293ad6bce35ff34e1ddf",
"text": "Ontology researches have been carried out in many diverse research areas in the past decade for numerous purposes especially in the eRecruitment domain. In this article, we would like to take a closer look on the current work of such domain of ontologies such as eRecruitment. Ontology application for e-Recruitment is becoming an important task for matching job postings and applicants semantically in a Semantic web technology using ontology and ontology matching techniques. Most of the reviewed papers used currently (existing) available widespread standards and classifications to build human resource ontology that provide a way of semantic representation for positions offered and candidates to fulfil, some of other researches have been done created their own HR ontologies to build recruitment prototype. We have reviewed number of articles and identified few purposes for which ontology matching",
"title": ""
},
{
"docid": "dfe9536345590ead7dfc4b1aed41fe7b",
"text": "With the development of digital music technologies, it is an interesting and useful issue to recommend the ‘favored music’ from large amounts of digital music. Some Web-based music stores can recommend popular music which has been rated by many people. However, three problems that need to be resolved in the current methods are: (a) how to recommend the ‘favored music’ which has not been rated by anyone, (b) how to avoid repeatedly recommending the ‘disfavored music’ for users, and (c) how to recommend more interesting music for users besides the ones users have been used to listen. To achieve these goals, we proposed a novel method called personalized hybrid music recommendation, which combines the content-based, collaboration-based and emotion-based methods by computing the weights of the methods according to users’ interests. Furthermore, to evaluate the recommendation accuracy, we constructed a system that can recommend the music to users after mining users’ logs on music listening records. By the feedback of the user’s options, the proposed methods accommodate the variations of the users’ musical interests and then promptly recommend the favored and more interesting music via consecutive recommendations. Experimental results show that the recommendation accuracy achieved by our method is as good as 90%. Hence, it is helpful for recommending the ‘favored music’ to users, provided that each music object is annotated with the related music emotions. The framework in this paper could serve as a useful basis for studies on music recommendation. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "bb4b0d85d4b7ae9fac7e745909badcc8",
"text": "The fifth Dialog State Tracking Challenge (DSTC5) introduces a new cross-language dialog state tracking scenario, where the participants are asked to build their trackers based on the English training corpus, while evaluating them with the unlabeled Chinese corpus. Although the computer-generated translations for both English and Chinese corpus are provided in the dataset, these translations contain errors and careless use of them can easily hurt the performance of the built trackers. To address this problem, we propose a multichannel Convolutional Neural Networks (CNN) architecture, in which we treat English and Chinese language as different input channels of one single CNN model. In the evaluation of DSTC5, we found that such multichannel architecture can effectively improve the robustness against translation errors. Additionally, our method for DSTC5 is purely machine learning based and requires no prior knowledge about the target language. We consider this a desirable property for building a tracker in the cross-language context, as not every developer will be familiar with both languages.",
"title": ""
},
{
"docid": "c42f395adaee401acdf31a1211d225f3",
"text": "In recent years, research efforts seeking to provide more natural, human-centered means of interacting with computers have gained growing interest. A particularly important direction is that of perceptive user interfaces, where the computer is endowed with perceptive capabilities that allow it to acquire both implicit and explicit information about the user and the environment. Vision has the potential of carrying a wealth of information in a non-intrusive manner and at a low cost, therefore it constitutes a very attractive sensing modality for developing perceptive user interfaces. Proposed approaches for vision-driven interactive user interfaces resort to technologies such as head tracking, face and facial expression recognition, eye tracking and gesture recognition. In this paper, we focus our attention to vision-based recognition of hand gestures. The first part of the paper provides an overview of the current state of the art regarding the recognition of hand gestures as these are observed and recorded by typical video cameras. In order to make the review of the related literature tractable, this paper does not discuss:",
"title": ""
},
{
"docid": "edccb0babf1e6fe85bb1d7204ab0ea0a",
"text": "OBJECTIVE\nControlled study of the long-term outcome of selective mutism (SM) in childhood.\n\n\nMETHOD\nA sample of 33 young adults with SM in childhood and two age- and gender-matched comparison groups were studied. The latter comprised 26 young adults with anxiety disorders in childhood (ANX) and 30 young adults with no psychiatric disorders during childhood. The three groups were compared with regard to psychiatric disorder in young adulthood by use of the Composite International Diagnostic Interview (CIDI). In addition, the effect of various predictors on outcome of SM was studied.\n\n\nRESULTS\nThe symptoms of SM improved considerably in the entire SM sample. However, both SM and ANX had significantly higher rates for phobic disorder and any psychiatric disorder than controls at outcome. Taciturnity in the family and, by trend, immigrant status and a severity indicator of SM had an impact on psychopathology and symptomatic outcome in young adulthood.\n\n\nCONCLUSION\nThis first controlled long-term outcome study of SM provides evidence of symptomatic improvement of SM in young adulthood. However, a high rate of phobic disorder at outcome points to the fact that SM may be regarded as an anxiety disorder variant.",
"title": ""
},
{
"docid": "2d0e362a903e18f39bbbae320d29b396",
"text": "We give algorithms for finding the k shortest paths (not required to be simple) connecting a pair of vertices in a digraph. Our algorithms output an implicit representation of these paths in a digraph with n vertices and m edges, in time O(m + nlogn + k). We can also find the k shortest paths from a given source s to each vertex in the graph, in total time O(m + n logn + kn). We describe applications to dynamic programming problems including the knapsack problem, sequence alignment, and maximum inscribed",
"title": ""
},
{
"docid": "5b1241edf4a9853614a18139323f74eb",
"text": "This paper presents a W-band SPDT switch implemented using PIN diodes in a new 90 nm SiGe BiCMOS technology. The SPDT switch achieves a minimum insertion loss of 1.4 dB and an isolation of 22 dB at 95 GHz, with less than 2 dB insertion loss from 77-134 GHz, and greater than 20 dB isolation from 79-129 GHz. The input and output return losses are greater than 10 dB from 73-133 GHz. By reverse biasing the off-state PIN diodes, the P1dB is larger than +24 dBm. To the authors' best knowledge, these results demonstrate the lowest loss and highest power handling capability achieved by a W-band SPDT switch in any silicon-based technology reported to date.",
"title": ""
},
{
"docid": "254f2ef4608ea3c959e049073ad063f8",
"text": "Recently, the long-term evolution (LTE) is considered as one of the most promising 4th generation (4G) mobile standards to increase the capacity and speed of mobile handset networks [1]. In order to realize the LTE wireless communication system, the diversity and multiple-input multiple-output (MIMO) systems have been introduced [2]. In a MIMO mobile user terminal such as handset or USB dongle, at least two uncorrelated antennas should be placed within an extremely restricted space. This task becomes especially difficult when a MIMO planar antenna is designed for LTE band 13 (the corresponding wavelength is 390 mm). Due to the limited space available for antenna elements, the antennas are strongly coupled with each other and have narrow bandwidth.",
"title": ""
},
{
"docid": "78ccfdac121daaae3abe3f8f7c73482b",
"text": "We present a method for constructing smooth n-direction fields (line fields, cross fields, etc.) on surfaces that is an order of magnitude faster than state-of-the-art methods, while still producing fields of equal or better quality. Fields produced by the method are globally optimal in the sense that they minimize a simple, well-defined quadratic smoothness energy over all possible configurations of singularities (number, location, and index). The method is fully automatic and can optionally produce fields aligned with a given guidance field such as principal curvature directions. Computationally the smoothest field is found via a sparse eigenvalue problem involving a matrix similar to the cotan-Laplacian. When a guidance field is present, finding the optimal field amounts to solving a single linear system.",
"title": ""
},
{
"docid": "f9a6acbe6f0218e6332b9ca95d6e38cf",
"text": "Entity alignment is the task of finding entities in two knowledge bases (KBs) that represent the same real-world object. When facing KBs in different natural languages, conventional cross-lingual entity alignment methods rely on machine translation to eliminate the language barriers. These approaches often suffer from the uneven quality of translations between languages. While recent embedding-based techniques encode entities and relationships in KBs and do not need machine translation for cross-lingual entity alignment, a significant number of attributes remain largely unexplored. In this paper, we propose a joint attribute-preserving embedding model for cross-lingual entity alignment. It jointly embeds the structures of two KBs into a unified vector space and further refines it by leveraging attribute correlations in the KBs. Our experimental results on real-world datasets show that this approach significantly outperforms the state-of-the-art embedding approaches for cross-lingual entity alignment and could be complemented with methods based on machine translation.",
"title": ""
},
{
"docid": "9ec718dd1b7eb98fb4fe895d76474c85",
"text": "The multibillion-dollar online advertising industry continues to debate whether to use the CPC (cost per click) or CPA (cost per action) pricing model as an industry standard. This article applies the economic framework of incentive contracts to study how these pricing models can lead to risk sharing between the publisher and the advertiser and incentivize them to make e orts that improve the performance of online ads. We nd that, compared to the CPC model, the CPA model can better incentivize the publisher to make e orts that can improve the purchase rate. However, the CPA model can cause an adverse selection problem: the winning advertiser tends to have a lower pro t margin under the CPA model than under the CPC model. We identify the conditions under which the CPA model leads to higher publisher (or advertiser) payo s than the CPC model. Whether publishers (or advertisers) prefer the CPA model over the CPC model depends on the advertisers' risk aversion, uncertainty in the product market, and the presence of advertisers with low immediate sales ratios. Our ndings indicate a con ict of interest between publishers and advertisers in their preferences for these two pricing models. We further consider which pricing model o ers greater social welfare.",
"title": ""
},
{
"docid": "991a388d1159667a5b2494ded71c5abe",
"text": "Organizations around the world have called for the responsible development of nanotechnology. The goals of this approach are to emphasize the importance of considering and controlling the potential adverse impacts of nanotechnology in order to develop its capabilities and benefits. A primary area of concern is the potential adverse impact on workers, since they are the first people in society who are exposed to the potential hazards of nanotechnology. Occupational safety and health criteria for defining what constitutes responsible development of nanotechnology are needed. This article presents five criterion actions that should be practiced by decision-makers at the business and societal levels-if nanotechnology is to be developed responsibly. These include (1) anticipate, identify, and track potentially hazardous nanomaterials in the workplace; (2) assess workers' exposures to nanomaterials; (3) assess and communicate hazards and risks to workers; (4) manage occupational safety and health risks; and (5) foster the safe development of nanotechnology and realization of its societal and commercial benefits. All these criteria are necessary for responsible development to occur. Since it is early in the commercialization of nanotechnology, there are still many unknowns and concerns about nanomaterials. Therefore, it is prudent to treat them as potentially hazardous until sufficient toxicology, and exposure data are gathered for nanomaterial-specific hazard and risk assessments. In this emergent period, it is necessary to be clear about the extent of uncertainty and the need for prudent actions.",
"title": ""
},
{
"docid": "8c89db0cd8c5dc666d7d6b244d35326b",
"text": "Cervical cancer, as the fourth most common cause of death from cancer among women, has no symptoms in the early stage. There are few methods to diagnose cervical cancer precisely at present. Support vector machine (SVM) approach is introduced in this paper for cervical cancer diagnosis. Two improved SVM methods, support vector machine-recursive feature elimination and support vector machine-principal component analysis (SVM-PCA), are further proposed to diagnose the malignant cancer samples. The cervical cancer data are represented by 32 risk factors and 4 target variables: Hinselmann, Schiller, Cytology, and Biopsy. All four targets have been diagnosed and classified by the three SVM-based approaches, respectively. Subsequently, we make the comparison among these three methods and compare our ranking result of risk factors with the ground truth. It is shown that SVM-PCA method is superior to the others.",
"title": ""
},
{
"docid": "8fb516c5a15c1d582560a922907ab940",
"text": "Sexual Strategies Theory (SST; Buss and Schmitt 1993) suggests that, typically, men more so than women are more likely to spend proportionately more of their mating effort in short-term mating, lower their standards in short-term compared to long-term mating, feel reproductively constrained, and seek, but certainly not avoid, sex if pregnancy is likely in short-term relationships. A series of 4 survey studies each containing hundreds of college student participants from the western portion of the United States were conducted to test these hypotheses. The findings are inconsistent with SST but are consistent with Attachment Fertility Theory (AFT; Miller et al. 2005) that argues for relatively few evolved gender differences in mating strategies and preferences.",
"title": ""
},
{
"docid": "6de91d6b71ff97c5564dd3e3a42092a0",
"text": "Characteristics of physical movements are indicative of infants' neuro-motor development and brain dysfunction. For instance, infant seizure, a clinical signal of brain dysfunction, could be identified and predicted by monitoring its physical movements. With the advance of wearable sensor technology, including the miniaturization of sensors, and the increasing broad application of micro- and nanotechnology, and smart fabrics in wearable sensor systems, it is now possible to collect, store, and process multimodal signal data of infant movements in a more efficient, more comfortable, and non-intrusive way. This review aims to depict the state-of-the-art of wearable sensor systems for infant movement monitoring. We also discuss its clinical significance and the aspect of system design.",
"title": ""
},
{
"docid": "e19fc03c96b53a2b86fce2b35a6a34f8",
"text": "This paper presents the comparison and optimization of a drive system used in an electric bike with outer and inner rotor permanent magnet brushless motor based on theoretical and experimental analysis of e-bike drive. The analyzed e-bike was equipped with the permanent magnet brushless synchronous motor with mechanical output power in range of 250W. The electric motor together with a planetary gear was mounted in a hub of 26 inch bike wheel. This kind of e-bike can reach maximum speed up to 25km/h. The aim of this paper is to investigate drive system used in the commercial e-bike with outer rotor, and theoretical analysis this same system combined with permanent magnet brushless motor with inner rotor. As a result of work the optimized motor topology with inner rotor is presented.",
"title": ""
},
{
"docid": "b9065d678b3a9aab8d9f98d7367ad7bb",
"text": "Ms. Pac-Man is a challenging, classic arcade game that provides an interesting platform for Artificial Intelligence (AI) research. This paper reports the first Monte-Carlo approach to develop a ghost avoidance module of an intelligent agent that plays the game. Our experimental results show that the look-ahead ability of Monte-Carlo simulation often prevents Ms. Pac-Man being trapped by ghosts and reduces the chance of losing Ms. Pac-Man's life significantly. Our intelligent agent has achieved a high score of around 21,000. It is sometimes capable of clearing the first three stages and playing at the level of a novice human player.",
"title": ""
}
] |
scidocsrr
|
ce008baa704c79310d6286e113393a92
|
Teaching Classification Boundaries to Humans
|
[
{
"docid": "c00470d69400066d11374539052f4a86",
"text": "When individuals learn facts (e.g., foreign language vocabulary) over multiple study sessions, the temporal spacing of study has a significant impact on memory retention. Behavioral experiments have shown a nonmonotonic relationship between spacing and retention: short or long intervals between study sessions yield lower cued-recall accuracy than intermediate intervals. Appropriate spacing of study can double retention on educationally relevant time scales. We introduce a Multiscale Context Model (MCM) that is able to predict the influence of a particular study schedule on retention for specific material. MCM’s prediction is based on empirical data characterizing forgetting of the material following a single study session. MCM is a synthesis of two existing memory models (Staddon, Chelaru, & Higa, 2002; Raaijmakers, 2003). On the surface, these models are unrelated and incompatible, but we show they share a core feature that allows them to be integrated. MCM can determine study schedules that maximize the durability of learning, and has implications for education and training. MCM can be cast either as a neural network with inputs that fluctuate over time, or as a cascade of leaky integrators. MCM is intriguingly similar to a Bayesian multiscale model of memory (Kording, Tenenbaum, & Shadmehr, 2007), yet MCM is better able to account for human declarative memory.",
"title": ""
},
{
"docid": "eb3a212d81fd1d2ebd971a01e011d70d",
"text": "Humans and animals can perform much more complex tasks than they can acquire using pure trial and error learning. This gap is filled by teaching. One important method of instruction is shaping, in which a teacher decomposes a complete task into sub-components, thereby providing an easier path to learning. Despite its importance, shaping has not been substantially studied in the context of computational modeling of cognitive learning. Here we study the shaping of a hierarchical working memory task using an abstract neural network model as the target learner. Shaping significantly boosts the speed of acquisition of the task compared with conventional training, to a degree that increases with the temporal complexity of the task. Further, it leads to internal representations that are more robust to task manipulations such as reversals. We use the model to investigate some of the elements of successful shaping.",
"title": ""
},
{
"docid": "a8164a657a247761147c6012fd5442c9",
"text": "Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a meaningful order that facilitates learning. The order of the samples is determined by how easy they are. The main challenge is that typically we are not provided with a readily computable measure of the easiness of samples. We address this issue by proposing a novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector. The number of samples selected is governed by a weight that is annealed until the entire training data has been considered. We empirically demonstrate that the self-paced learning algorithm outperforms the state of the art method for learning a latent structural SVM on four applications: object localization, noun phrase coreference, motif finding and handwritten digit recognition.",
"title": ""
}
] |
[
{
"docid": "fd5efb029ab7f69f73a97f567ac9aa1a",
"text": "Current offshore wind farms (OWFs) design processes are based on a sequential approach which does not guarantee system optimality because it oversimplifies the problem by discarding important interdependencies between design aspects. This article presents a framework to integrate, automate and optimize the design of OWF layouts and the respective electrical infrastructures. The proposed framework optimizes simultaneously different goals (e.g., annual energy delivered and investment cost) which leads to efficient trade-offs during the design phase, e.g., reduction of wake losses vs collection system length. Furthermore, the proposed framework is independent of economic assumptions, meaning that no a priori values such as the interest rate or energy price, are needed. The proposed framework was applied to the Dutch Borssele areas I and II. A wide range of OWF layouts were obtained through the optimization framework. OWFs with similar energy production and investment cost as layouts designed with standard sequential strategies were obtained through the framework, meaning that the proposed framework has the capability to create different OWF layouts that would have been missed by the designers. In conclusion, the proposed multi-objective optimization framework represents a mind shift in design tools for OWFs which allows cost savings in the design and operation phases.",
"title": ""
},
{
"docid": "45c04c80a5e4c852c4e84ba66bd420dd",
"text": "This paper addresses empirically and theoretically a question derived from the chunking theory of memory (Chase & Simon, 1973a, 1973b): To what extent is skilled chess memory limited by the size of short-term memory (about seven chunks)? This question is addressed first with an experiment where subjects, ranking from class A players to grandmasters, are asked to recall up to five positions presented during 5 s each. Results show a decline of percentage of recall with additional boards, but also show that expert players recall more pieces than is predicted by the chunking theory in its original form. A second experiment shows that longer latencies between the presentation of boards facilitate recall. In a third experiment, a Chessmaster gradually increases the number of boards he can reproduce with higher than 70% average accuracy to nine, replacing as many as 160 pieces correctly. To account for the results of these experiments, a revision of the Chase-Simon theory is proposed. It is suggested that chess players, like experts in other recall tasks, use long-term memory retrieval structures (Chase & Ericsson, 1982) or templates in addition to chunks in short-term memory to store information rapidly.",
"title": ""
},
{
"docid": "dcfbd198020388850ef925baf4f267f8",
"text": "This paper discusses how energy consumption can be significantly reduced in mobile networks by introducing discontinuous transmission (DTX) on the base station side. By introducing DTX on the downlink, or cell DTX, we show that it is possible to achieve significant energy reductions in an LTE network. Cell DTX is most efficient when the traffic load is low in a cell but even when realistic traffic statistics are considered the gains are impressive. The technology potential for a metropolitan area is shown to be 90% reduced energy consumption compared to no use of cell DTX. The paper also discusses different drives for the increased focus on energy efficient network operation and also provides insights on the impact of cell DTX from a life cycle assessment perspective.",
"title": ""
},
{
"docid": "f9b11e55be907175d969cd7e76803caf",
"text": "In this paper, we consider the multivariate Bernoulli distribution as a model to estimate the structure of graphs with binary nodes. This distribution is discussed in the framework of the exponential family, and its statistical properties regarding independence of the nodes are demonstrated. Importantly the model can estimate not only the main effects and pairwise interactions among the nodes but also is capable of modeling higher order interactions, allowing for the existence of complex clique effects. We compare the multivariate Bernoulli model with existing graphical inference models – the Ising model and the multivariate Gaussian model, where only the pairwise interactions are considered. On the other hand, the multivariate Bernoulli distribution has an interesting property in that independence and uncorrelatedness of the component random variables are equivalent. Both the marginal and conditional distributions of a subset of variables in the multivariate Bernoulli distribution still follow the multivariate Bernoulli distribution. Furthermore, the multivariate Bernoulli logistic model is developed under generalized linear model theory by utilizing the canonical link function in order to include covariate information on the nodes, edges and cliques. We also consider variable selection techniques such as LASSO in the logistic model to impose sparsity structure on the graph. Finally, we discuss extending the smoothing spline ANOVA approach to the multivariate Bernoulli logistic model to enable estimation of non-linear effects of the predictor variables.",
"title": ""
},
{
"docid": "98d1c35aeca5de703cec468b2625dc72",
"text": "Congenital adrenal hyperplasia was described in London by Phillips (1887) who reported four cases of spurious hermaphroditism in one family. Fibiger (1905) noticed that there was enlargement of the adrenal glands in some infants who had died after prolonged vomiting and dehydration. Butler, Ross, and Talbot (1939) reported a case which showed serum electrolyte changes similar to those of Addison's disease. Further developments had to await the synthesis of cortisone. The work ofWilkins, Lewis, Klein, and Rosemberg (1950) showed that cortisone could alleviate the disorder and suppress androgen secretion. Bartter, Albright, Forbes, Leaf, Dempsey, and Carroll (1951) suggested that, in congenital adrenal hyperplasia, there might be a primary impairment of synthesis of cortisol (hydrocortisone, compound F) and a secondary rise of pituitary adrenocorticotrophin (ACTH) production. This was confirmed by Jailer, Louchart, and Cahill (1952) who showed that ACTH caused little increase in the output of cortisol in such cases. In the same year, Snydor, Kelley, Raile, Ely, and Sayers (1953) found an increased level ofACTH in the blood of affected patients. Studies of enzyme systems were carried out. Jailer, Gold, Vande Wiele, and Lieberman (1955) and Frantz, Holub, and Jailer (1960) produced evidence that the most common site for the biosynthetic block was in the C-21 hydroxylating system. Eberlein and Bongiovanni (1955) showed that there was a C-l 1 hydroxylation defect in patients with the hypertensive form of congenital adrenal hyperplasia, and Bongiovanni (1961) and Bongiovanni and Kellenbenz (1962), showed that in some patients there was a further type of enzyme defect, a 3-(-hydroxysteroid dehydrogenase deficiency, an enzyme which is required early in the metabolic pathway. Prader and Siebenmann (1957) described a female infant who had adrenal insufficiency and congenital lipoid hyperplasia of the",
"title": ""
},
{
"docid": "3a75bf4c982d076fce3b4cdcd560881a",
"text": "This project is one of the research topics in Professor William Dally’s group. In this project, we developed a pruning based method to learn both weights and connections for Long Short Term Memory (LSTM). In this method, we discard the unimportant connections in a pretrained LSTM, and make the weight matrix sparse. Then, we retrain the remaining model. After we remaining model is converge, we prune this model again and retrain the remaining model iteratively, until we achieve the desired size of model and performance. This method will save the size of the LSTM as well as prevent overfitting. Our results retrained on NeuralTalk shows that we can discard nearly 90% of the weights without hurting the performance too much. Part of the results in this project will be posted in NIPS 2015.",
"title": ""
},
{
"docid": "8836fddeb496972fa38005fd2f8a4ed4",
"text": "Energy harvesting has grown from long-established concepts into devices for powering ubiquitously deployed sensor networks and mobile electronics. Systems can scavenge power from human activity or derive limited energy from ambient heat, light, radio, or vibrations. Ongoing power management developments enable battery-powered electronics to live longer. Such advances include dynamic optimization of voltage and clock rate, hybrid analog-digital designs, and clever wake-up procedures that keep the electronics mostly inactive. Exploiting renewable energy resources in the device's environment, however, offers a power source limited by the device's physical survival rather than an adjunct energy store. Energy harvesting's true legacy dates to the water wheel and windmill, and credible approaches that scavenge energy from waste heat or vibration have been around for many decades. Nonetheless, the field has encountered renewed interest as low-power electronics, wireless standards, and miniaturization conspire to populate the world with sensor networks and mobile devices. This article presents a whirlwind survey through energy harvesting, spanning historic and current developments.",
"title": ""
},
{
"docid": "1d11b3ddedc72cdcb3002c149ea41316",
"text": "The \\emph{wavelet tree} data structure is a space-efficient technique for rank and select queries that generalizes from binary characters to an arbitrary multicharacter alphabet. It has become a key tool in modern full-text indexing and data compression because of its capabilities in compressing, indexing, and searching. We present a comparative study of its practical performance regarding a wide range of options on the dimensions of different coding schemes and tree shapes. Our results are both theoretical and experimental: (1)~We show that the run-length $\\delta$ coding size of wavelet trees achieves the 0-order empirical entropy size of the original string with leading constant 1, when the string's 0-order empirical entropy is asymptotically less than the logarithm of the alphabet size. This result complements the previous works that are dedicated to analyzing run-length $\\gamma$-encoded wavelet trees. It also reveals the scenarios when run-length $\\delta$ encoding becomes practical. (2)~We introduce a full generic package of wavelet trees for a wide range of options on the dimensions of coding schemes and tree shapes. Our experimental study reveals the practical performance of the various modifications.",
"title": ""
},
{
"docid": "fd3982a8f135908a3450ebdcf34e493c",
"text": "We address the speaker-independent acoustic inversion (AI) problem, also referred to as acoustic-to-articulatory mapping. The scarce availability of multi-speaker articulatory data makes it difficult to learn a mapping which generalizes from a limited number of training speakers and reliably reconstructs the articulatory movements of unseen speakers. In this paper, we propose a Multi-task Learning (MTL)-based approach that explicitly separates the modeling of each training speaker AI peculiarities from the modeling of AI characteristics that are shared by all speakers. Our approach stems from the well known Regularized MTL approach and extends it to feed-forward deep neural networks (DNNs). Given multiple training speakers, we learn for each an acoustic-to-articulatory mapping represented by a DNN. Then, through an iterative procedure, we search for a canonical speaker-independent DNN that is “similar” to all speaker-dependent DNNs. The degree of similarity is controlled by a regularization parameter. We report experiments on the University of Wisconsin X-ray Microbeam Database under different training/testing experimental settings. The results obtained indicate that our MTL-trained canonical DNN largely outperforms a standardly trained (i.e., single task learning-based) speaker independent DNN.",
"title": ""
},
{
"docid": "87fefd773ea10a006dbc9b76f4f1e4c1",
"text": "An underwater robotic assistant could help a human diver by illuminating work areas, fetching tools from the surface, or monitoring the diver's activity for abnormal behavior. However, in order for basic Underwater Human-Robot Interaction (UHRI) to be successful, the robotic assistant has to first be able to detect and track the diver. This paper discusses the detection and tracking of a diver with a high-frequency forward-looking sonar. The first step in the diver detection involves utilizing classical 2D image processing techniques to segment moving objects in the sonar image. The moving objects are then passed through a blob detection algorithm, and then the blob clusters are processed by the cluster classification process. Cluster classification is accomplished by matching observed cluster trajectories with trained Hidden Markov Models (HMM), which results in a cluster being classified as either a diver or clutter. Real-world results show that a moving diver can be autonomously distinguished from stationary objects in a noisy sonar image and tracked.",
"title": ""
},
{
"docid": "bc4d9587ba33464d74302045336ddc38",
"text": "Deep learning is a popular technique in modern online and offline services. Deep neural network based learning systems have made groundbreaking progress in model size, training and inference speed, and expressive power in recent years, but to tailor the model to specific problems and exploit data and problem structures is still an ongoing research topic. We look into two types of deep ‘‘multi-’’ objective learning problems: multi-view learning, referring to learning from data represented by multiple distinct feature sets, and multi-label learning, referring to learning from data instances belonging to multiple class labels that are not mutually exclusive. Research endeavors of both problems attempt to base on existing successful deep architectures and make changes of layers, regularization terms or even build hybrid systems to meet the problem constraints. In this report we first explain the original artificial neural network (ANN) with the backpropagation learning algorithm, and also its deep variants, e.g. deep belief network (DBN), convolutional neural network (CNN) and recurrent neural network (RNN). Next we present a survey of some multi-view and multi-label learning frameworks based on deep neural networks. At last we introduce some applications of deep multi-view and multi-label learning, including e-commerce item categorization, deep semantic hashing, dense image captioning, and our preliminary work on x-ray scattering image classification.",
"title": ""
},
{
"docid": "eb8d1663cf6117d76a6b61de38b55797",
"text": "Many security experts would agree that, had it not been for mobile configurations, the synthesis of online algorithms might never have occurred. In fact, few computational biologists would disagree with the evaluation of von Neumann machines. We construct a peer-to-peer tool for harnessing Smalltalk, which we call TalmaAment.",
"title": ""
},
{
"docid": "65b6887ebd51ee33b103954365a2d1dd",
"text": "Given a food image, can a fine-grained object recognition engine tell \"which restaurant which dish\" the food belongs to? Such ultra-fine grained image recognition is the key for many applications like search by images, but it is very challenging because it needs to discern subtle difference between classes while dealing with the scarcity of training data. Fortunately, the ultra-fine granularity naturally brings rich relationships among object classes. This paper proposes a novel approach to exploit the rich relationships through bipartite-graph labels (BGL). We show how to model BGL in an overall convolutional neural networks and the resulting system can be optimized through back-propagation. We also show that it is computationally efficient in inference thanks to the bipartite structure. To facilitate the study, we construct a new food benchmark dataset, which consists of 37,885 food images collected from 6 restaurants and totally 975 menus. Experimental results on this new food and three other datasets demonstrate BGL advances previous works in fine-grained object recognition. An online demo is available at http: //www.f-zhou.com/fg_demo/.",
"title": ""
},
{
"docid": "3c514740d7f8ce78f9afbaca92dc3b1c",
"text": "In the Brazil nut problem (BNP), hard spheres with larger diameters rise to the top. There are various explanations (percolation, reorganization, convection), but a broad understanding or control of this effect is by no means achieved. A theory is presented for the crossover from BNP to the reverse Brazil nut problem based on a competition between the percolation effect and the condensation of hard spheres. The crossover condition is determined, and theoretical predictions are compared to molecular dynamics simulations in two and three dimensions.",
"title": ""
},
{
"docid": "6316963035a6a7bf1e44c7f32d322737",
"text": "InGaAs-InP modified charge compensated uni- traveling carrier photodiodes with both absorbing and nonabsorbing depleted region are demonstrated. The fiber-coupled external quantum efficiency was 60% (responsivity at 1550 nm = 0.75 A/W). A 40-mum-diameter photodiode achieved 14-GHz bandwidth and 25-dBm RF output power and a 20-mum-diameter photodiode exhibited 30-GHz bandwidth and 15.5-dBm RF output power. The saturation current-bandwidth products are 1820 mA ldr GHz and 1560 mA GHz for the 40-mum-diameter and 40-mum-diameter devices, respectively.",
"title": ""
},
{
"docid": "02dc406798fa3207c0860682f76a70c6",
"text": "This paper proposes automatic machine learning approach (AutoML) of deep neural networks (DNNs) using multi-objective evolutionary algorithms (MOEAs) for the accuracy and run-time speed simultaneously. Traditional methods for optimizing DNNs have used Bayesian reasoning or reinforcement learning to improve the performance. Recently, evolutionary approaches for high accuracy has been adopted using a lot of GPUs. However, real-world applications require rapid inference to deploy on embedded devices while maintaining high accuracy. To consider both accuracy and speed at the same time, we propose Neuro-Evolution with Multiobjective Optimization (NEMO) algorithm. Experimental results show that proposed NEMO finds faster and more accurate DNNs than hand-crafted DNN architectures. The proposed technique is verified for the image classification problems such as MNIST, CIFAR-10 and human status recognition.",
"title": ""
},
{
"docid": "e7b3656170f6e6302d9b765d820bb0a9",
"text": "Due to the fast development of social media on the Web, Twitter has become one of the major platforms for people to express themselves. Because of the wide adoption of Twitter, events like breaking news and release of popular videos can easily catch people’s attention and spread rapidly on Twitter, and the number of relevant tweets approximately reflects the impact of an event. Event identification and analysis on Twitter has thus become an important task. Recently the Recurrent Chinese Restaurant Process (RCRP) has been successfully used for event identification from news streams and news-centric social media streams. However, these models cannot be directly applied to Twitter based on our preliminary experiments mainly for two reasons: (1) Events emerge and die out fast on Twitter, while existing models ignore this burstiness property. (2) Most Twitter posts are personal interest oriented while only a small fraction is event related. Motivated by these challenges, we propose a new nonparametric model which considers burstiness. We further combine this model with traditional topic models to identify both events and topics simultaneously. Our quantitative evaluation provides sufficient evidence that our model can accurately detect meaningful events. Our qualitative evaluation also shows interesting analysis for events on Twitter.",
"title": ""
},
{
"docid": "37fc66892e1cf8c446fe4028f8d8f19c",
"text": "A survey of the literature reveals that image processing tools aimed at supplementing the art historian's toolbox are currently in the earliest stages of development. To jump-start the development of such methods, the Van Gogh and Kroller-Muller museums in The Netherlands agreed to make a data set of 101 high-resolution gray-scale scans of paintings within their collections available to groups of image processing researchers from several different universities. This article describes the approaches to brushwork analysis and artist identification developed by three research groups, within the framework of this data set.",
"title": ""
},
{
"docid": "f84902c86a652b8e4ba3fa590b0877da",
"text": "Software testing is one of the most important parts of software development lifecycle. Among various types of software testing approaches structural testing is widely used. Structural testing can be improved largely by traversing all possible code paths of the software. Genetic algorithm is the most used search technique to automate path testing and test case generation. Recently, different novel search based optimization techniques such as Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Artificial Immune System (AIS), Particle Swarm Optimization (PSO) have been applied to generate optimal path to complete software coverage. In this paper, ant colony optimization (ACO) based algorithm has been proposed which will generate set of optimal paths and prioritize the paths. Additionally, the approach generates test data sequence within the domain to use as inputs of the generated paths. Proposed approach guarantees full software coverage with minimum redundancy. This paper also demonstrates the proposed approach applying it in a program module.",
"title": ""
},
{
"docid": "c02a1c89692d88671f4be454345f3fa3",
"text": "In this study, the resonant analysis and modeling of the microstrip-fed stepped-impedance (SI) slot antenna are presented by utilizing the transmission-line and lumped-element circuit topologies. This study analyzes the SI-slot antenna and systematically summarizes its frequency response characteristics, such as the resonance condition, spurious response, and equivalent circuit. Design formulas with respect to the impedance ratio of the SI slot antenna were analytically derived. The antenna designers can predict the resonant modes of the SI slot antenna without utilizing expensive EM-simulation software.",
"title": ""
}
] |
scidocsrr
|
513b29a82b1ae613c661e5f0bc20eaa5
|
A Self-Adaptive Parameter Selection Trajectory Prediction Approach via Hidden Markov Models
|
[
{
"docid": "6968d5646db3941b06d3763033cb8d45",
"text": "Path prediction is useful in a wide range of applications. Most of the existing solutions, however, are based on eager learning methods where models and patterns are extracted from historical trajectories and then used for future prediction. Since such approaches are committed to a set of statistically significant models or patterns, problems can arise in dynamic environments where the underlying models change quickly or where the regions are not covered with statistically significant models or patterns.\n We propose a \"semi-lazy\" approach to path prediction that builds prediction models on the fly using dynamically selected reference trajectories. Such an approach has several advantages. First, the target trajectories to be predicted are known before the models are built, which allows us to construct models that are deemed relevant to the target trajectories. Second, unlike the lazy learning approaches, we use sophisticated learning algorithms to derive accurate prediction models with acceptable delay based on a small number of selected reference trajectories. Finally, our approach can be continuously self-correcting since we can dynamically re-construct new models if the predicted movements do not match the actual ones.\n Our prediction model can construct a probabilistic path whose probability of occurrence is larger than a threshold and which is furthest ahead in term of time. Users can control the confidence of the path prediction by setting a probability threshold. We conducted a comprehensive experimental study on real-world and synthetic datasets to show the effectiveness and efficiency of our approach.",
"title": ""
}
] |
[
{
"docid": "155c535c78e75b016d13ffa892f54926",
"text": "Modern servers have become heterogeneous, often combining multi-core CPUs with many-core GPGPUs. Such heterogeneous architectures have the potential to improve the performance of data-intensive stream processing applications, but they are not supported by current relational stream processing engines. For an engine to exploit a heterogeneous architecture, it must execute streaming SQL queries with sufficient data-parallelism to fully utilise all available heterogeneous processors, and decide how to use each in the most effective way. It must do this while respecting the semantics of streaming SQL queries, in particular with regard to window handling.\n We describe Saber, a hybrid high-performance relational stream processing engine for CPUs and GPGPUs. Saber executes window-based streaming SQL queries in a data-parallel fashion using all available CPU and GPGPU cores. Instead of statically assigning query operators to heterogeneous processors, Saber employs a new adaptive heterogeneous lookahead scheduling strategy, which increases the share of queries executing on the processor that yields the highest performance. To hide data movement costs, Saber pipelines the transfer of stream data between CPU and GPGPU memory. Our experimental comparison against state-of-the-art engines shows that Saber increases processing throughput while maintaining low latency for a wide range of streaming SQL queries with both small and large window sizes.",
"title": ""
},
{
"docid": "e4d86871669074b385f8ea36968106c0",
"text": "Verbal redundancy arises from the concurrent presentation of text and verbatim speech. To inform theories of multimedia learning that guide the design of educational materials, a meta-analysis was conducted to investigate the effects of spoken-only, written-only, and spoken–written presentations on learning retention and transfer. After an extensive search for experimental studies meeting specified inclusion criteria, data from 57 independent studies were extracted. Most of the research participants were postsecondary students. Overall, this meta-analysis revealed that outcomes comparing spoken–written and written-only presentations did not differ, but students who learned from spoken–written presentations outperformed those who learned from spoken-only presentations. This effect was dependent on learners’ prior knowledge, pacing of presentation, and inclusion of animation or diagrams. Specifically, the advantages of spoken–written presentations over spoken-only presentations were found for low prior knowledge learners, system-paced learning materials, and picture-free materials. In comparison with verbatim, spoken–written presentations, presentations displaying key terms extracted from spoken narrations were associated with better learning outcomes and accounted for much of the advantage of spoken–written over spoken-only presentations. These findings have significant implications for the design of multimedia materials.",
"title": ""
},
{
"docid": "98571cb7f32b389683e8a9e70bd87339",
"text": "We identify two issues with the family of algorithms based on the Adversarial Imitation Learning framework. The first problem is implicit bias present in the reward functions used in these algorithms. While these biases might work well for some environments, they can also lead to sub-optimal behavior in others. Secondly, even though these algorithms can learn from few expert demonstrations, they require a prohibitively large number of interactions with the environment in order to imitate the expert for many real-world applications. In order to address these issues, we propose a new algorithm called Discriminator-Actor-Critic that uses off-policy Reinforcement Learning to reduce policy-environment interaction sample complexity by an average factor of 10. Furthermore, since our reward function is designed to be unbiased, we can apply our algorithm to many problems without making any task-specific adjustments.",
"title": ""
},
{
"docid": "39d15901cd5fbd1629d64a165a94c5f5",
"text": "This paper shows how to use modular Marx multilevel converter diode (M3CD) modules to apply unipolar or bipolar high-voltage pulses for pulsed power applications. The M3CD cells allow the assembly of a multilevel converter without needing complex algorithms and parameter measurement to balance the capacitor voltages. This paper also explains how to supply all the modular cells in order to ensure galvanic isolation between control circuits and power circuits. The experimental results for a generator with seven levels, and unipolar and bipolar pulses into resistive, inductive, and capacitive loads are presented.",
"title": ""
},
{
"docid": "20adf89d9301cdaf64d8bf684886de92",
"text": "A standard planar Kernel Density Estimation (KDE) aims to produce a smooth density surface of spatial point events over a 2-D geographic space. However the planar KDE may not be suited for characterizing certain point events, such as traffic accidents, which usually occur inside a 1-D linear space, the roadway network. This paper presents a novel network KDE approach to estimating the density of such spatial point events. One key feature of the new approach is that the network space is represented with basic linear units of equal network length, termed lixel (linear pixel), and related network topology. The use of lixel not only facilitates the systematic selection of a set of regularly spaced locations along a network for density estimation, but also makes the practical application of the network KDE feasible by significantly improving the computation efficiency. The approach is implemented in the ESRI ArcGIS environment and tested with the year 2005 traffic accident data and a road network in the Bowling Green, Kentucky area. The test results indicate that the new network KDE is more appropriate than standard planar KDE for density estimation of traffic accidents, since the latter covers space beyond the event context (network space) and is likely to overestimate the density values. The study also investigates the impacts on density calculation from two kernel functions, lixel lengths, and search bandwidths. It is found that the kernel function is least important in structuring the density pattern over network space, whereas the lixel length critically impacts the local variation details of the spatial density pattern. The search bandwidth imposes the highest influence by controlling the smoothness of the spatial pattern, showing local effects at a narrow bandwidth and revealing \" hot spots \" at larger or global scales with a wider bandwidth. More significantly, the idea of representing a linear network by a network system of equal-length lixels may potentially 3 lead the way to developing a suite of other network related spatial analysis and modeling methods.",
"title": ""
},
{
"docid": "ec9f21d457ad64f81057249335e8f472",
"text": "The Proactor pattern [1] describes how to structure applications and systems that effectively utilize asynchronous mechanisms supported by operating systems. When an application invokes an asynchronous operation, the OS performs the operation on behalf of the application. This allows the application to have multiple operations running simultaneously without requiring the application to have a corresponding number of threads. Therefore, the Proactor pattern simplifies concurrent programming and improves performance by requiring fewer threads and leveraging OS support for asynchronous operations. The Adaptive Communications Environment (ACE) [2] has implemented a Proactor framework that encapsulates I/O Completion Ports of Windows NT operating system. This ACE Proactor abstraction provides an OO interface to the standard C APIs supported by Windows NT. We ported this Proactor framework to Unix platforms that support POSIX4 asynchronous I/O calls and real-time signals. This paper describes the design and implementation of this new Portable Proactor framework and explains how the design and the implementation have been made so that the framework can be extensible, scalable and efficient. We explain how our design took care of keeping the old interfaces of the framework intact, still making the design highly extensible and efficient. The source code for this implementation can be acquired from the ACE website at www.cs.wustl.edu/ schmidt/ACE.html . 1 The Proactor Pattern",
"title": ""
},
{
"docid": "afb0ca2ca4c9ba6402bff498f23f4c55",
"text": "We consider the problem of assigning software processes (or tasks) to hardware processors in distributed robotics environments. We introduce the notion of a task variant, which supports the adaptation of software to specific hardware configurations. Task variants facilitate the trade-off of functional quality versus the requisite capacity and type of target execution processors. We formalise the problem of assigning task variants to processors as a mathematical model that incorporates typical constraints found in robotics applications; the model is a constrained form of a multi-objective, multi-dimensional, multiple-choice knapsack problem. We propose and evaluate three different solution methods to the problem: constraint programming, a constructive greedy heuristic and a local search metaheuristic. Furthermore, we demonstrate the use of task variants in a real instance of a distributed interactive multi-agent navigation system, showing that our best solution method (constraint programming) improves the system’s quality of service, as compared to the local search metaheuristic, the greedy heuristic and a randomised solution, by an average of 16%, 41% and 56% respectively.",
"title": ""
},
{
"docid": "2affffd57677d58df6fc63cc4a83da5d",
"text": "Dealing with failure is easy: Work hard to improve. Success is also easy to handle: You've solved the wrong problem. Work hard to improve.",
"title": ""
},
{
"docid": "0a23995317063e773c3ac69cfd6b8e70",
"text": "This paper proposes a temporal tracking algorithm based on Random Forest that uses depth images to estimate and track the 3D pose of a rigid object in real-time. Compared to the state of the art aimed at the same goal, our algorithm holds important attributes such as high robustness against holes and occlusion, low computational cost of both learning and tracking stages, and low memory consumption. These are obtained (a) by a novel formulation of the learning strategy, based on a dense sampling of the camera viewpoints and learning independent trees from a single image for each camera view, as well as, (b) by an insightful occlusion handling strategy that enforces the forest to recognize the object's local and global structures. Due to these attributes, we report state-of-the-art tracking accuracy on benchmark datasets, and accomplish remarkable scalability with the number of targets, being able to simultaneously track the pose of over a hundred objects at 30~fps with an off-the-shelf CPU. In addition, the fast learning time enables us to extend our algorithm as a robust online tracker for model-free 3D objects under different viewpoints and appearance changes as demonstrated by the experiments.",
"title": ""
},
{
"docid": "4d52c04ad923ae51ed9f71fe06a9cf6f",
"text": "In this paper an Inclined Planes Optimization algorithm, is used to optimize the performance of the multilayer perceptron. Indeed, the performance of the neural network depends on its parameters such as the number of neurons in the hidden layer and the connection weights. So far, most research has been done in the field of training the neural network. In this paper, a new algorithm optimization is presented in optimal architecture for data classification. Neural network training is done by backpropagation (BP) algorithm and optimization the architecture of neural network is considered as independent variables in the algorithm. The results in three classification problems have shown that a neural network resulting from these methods have low complexity and high accuracy when compared with results of Particle Swarm Optimization and Gravitational Search Algorithm.",
"title": ""
},
{
"docid": "901ff68d346e67b812fa03a66d64f9c2",
"text": "A typed lambda calculus with categorical type constructors is introduced. It has a uniform category theoretic mechanism to declare new types. Its type structure includes categorical objects like products and coproducts as well as recursive types like natural numbers and lists. It also allows duals of recursive types, i.e. lazy types, like infinite lists. It has generalized iterators for recursive types and duals of iterators for lazy types. We will give reduction rules for this simply typed lambda calculus and show that they are strongly normalizing even though it has infinite things like infinite lists.",
"title": ""
},
{
"docid": "6628bc475c2bf69c790be99bc3c0ae40",
"text": "BACKGROUND\nThe accurate quantification of antigens at low concentrations over a wide dynamic range is needed for identifying biomarkers associated with disease and detecting protein interactions in high-throughput microarrays used in proteomics. Here we report the development of an ultrasensitive quantitative assay format called immunoliposome polymerase chain reaction (ILPCR) that fulfills these requirements. This method uses a liposome, with reporter DNA encapsulated inside and biotin-labeled polyethylene glycol (PEG) phospholipid conjugates incorporated into the outer surface of the liposome, as a detection reagent. The antigenic target is immobilized in the well of a microplate by a capture antibody and the liposome detection reagent is then coupled to a biotin-labeled second antibody through a NeutrAvidin bridge. The liposome is ruptured to release the reporter DNA, which serves as a surrogate to quantify the protein target using real-time PCR.\n\n\nRESULTS\nA liposome detection reagent was prepared, which consisted of a population of liposomes ~120 nm in diameter with each liposome possessing ~800 accessible biotin receptors and ~220 encapsulated reporters. This liposome detection reagent was used in an assay to quantify the concentration of carcinoembryonic antigen (CEA) in human serum. This ILPCR assay exhibited a linear dose-response curve from 10-10 M to 10-16 M CEA. Within this range the assay coefficient of variance was <6 % for repeatability and <2 % for reproducibility. The assay detection limit was 13 fg/mL, which is 1,500-times more sensitive than current clinical assays for CEA. An ILPCR assay to quantify HIV-1 p24 core protein in buffer was also developed.\n\n\nCONCLUSIONS\nThe ILPCR assay has several advantages over other immuno-PCR methods. The reporter DNA and biotin-labeled PEG phospholipids spontaneously incorporate into the liposomes as they form, simplifying preparation of the detection reagent. Encapsulation of the reporter inside the liposomes allows nonspecific DNA in the assay medium to be degraded with DNase I prior to quantification of the encapsulated reporter by PCR, which reduces false-positive results and improves quantitative accuracy. The ability to encapsulate multiple reporters per liposome also helps overcome the effect of polymerase inhibitors present in biological specimens. Finally, the biotin-labeled liposome detection reagent can be coupled through a NeutrAvidin bridge to a multitude of biotin-labeled probes, making ILPCR a highly generic assay system.",
"title": ""
},
{
"docid": "e5f90c30d546fe22a25305afefeaff8c",
"text": "H2O2 has been found to be required for the activity of the main microbial enzymes responsible for lignin oxidative cleavage, peroxidases. Along with other small radicals, it is implicated in the early attack of plant biomass by fungi. Among the few extracellular H2O2-generating enzymes known are the glyoxal oxidases (GLOX). GLOX is a copper-containing enzyme, sharing high similarity at the level of active site structure and chemistry with galactose oxidase. Genes encoding GLOX enzymes are widely distributed among wood-degrading fungi especially white-rot degraders, plant pathogenic and symbiotic fungi. GLOX has also been identified in plants. Although widely distributed, only few examples of characterized GLOX exist. The first characterized fungal GLOX was isolated from Phanerochaete chrysosporium. The GLOX from Utilago maydis has a role in filamentous growth and pathogenicity. More recently, two other glyoxal oxidases from the fungus Pycnoporus cinnabarinus were also characterized. In plants, GLOX from Vitis pseudoreticulata was found to be implicated in grapevine defence mechanisms. Fungal GLOX were found to be activated by peroxidases in vitro suggesting a synergistic and regulatory relationship between these enzymes. The substrates oxidized by GLOX are mainly aldehydes generated during lignin and carbohydrates degradation. The reactions catalysed by this enzyme such as the oxidation of toxic molecules and the production of valuable compounds (organic acids) makes GLOX a promising target for biotechnological applications. This aspect on GLOX remains new and needs to be investigated.",
"title": ""
},
{
"docid": "0b8c51f823cb55cbccfae098e98f28b3",
"text": "In this study, we investigate whether the “out of body” vibrotactile illusion known as funneling could be applied to enrich and thereby improve the interaction performance on a tablet-sized media device. First, a series of pilot tests was taken to determine the appropriate operational conditions and parameters (such as the tablet size, holding position, minimal required vibration amplitude, and the effect of matching visual feedback) for a two-dimensional (2D) illusory tactile rendering method. Two main experiments were then conducted to validate the basic applicability and effectiveness of the rendering method, and to further demonstrate how the illusory tactile feedback could be deployed in an interactive application and actually improve user performance. Our results showed that for a tablet-sized device (e.g., iPad mini and iPad), illusory perception was possible (localization performance of up to 85%) using a rectilinear grid with a resolution of 5 $$\\times $$ × 7 (grid size: 2.5 cm) with matching visual feedback. Furthermore, the illusory feedback was found to be a significant factor in improving the user performance in a 2D object search/attention task.",
"title": ""
},
{
"docid": "9d60842315ad481ac55755160a581d74",
"text": "This paper presents an efficient DNN design with stochastic computing. Observing that directly adopting stochastic computing to DNN has some challenges including random error fluctuation, range limitation, and overhead in accumulation, we address these problems by removing near-zero weights, applying weight-scaling, and integrating the activation function with the accumulator. The approach allows an easy implementation of early decision termination with a fixed hardware design by exploiting the progressive precision characteristics of stochastic computing, which was not easy with existing approaches. Experimental results show that our approach outperforms the conventional binary logic in terms of gate area, latency, and power consumption.",
"title": ""
},
{
"docid": "05bc0aa39909125e0350cbe5bac656ac",
"text": "This paper describes an antenna array configuration for the implementation in a UWB monopulse radar. The measurement results of the gain in the sum and difference mode are presented. Next the transformation of the monopulse technique into the time domain by the evaluation of the impulse response is shown. A look-up table with very high dynamic of over 25 dB and flat characteristic is obtained. The unambiguous range of sensing is approx. 40° in the angular direction. This novel combination of UWB technology and the monopulse radar principle allows for very precise sensing, where UWB assures high precision in the range direction and monopulse principle in the angular direction.",
"title": ""
},
{
"docid": "dbff2130c480634608cddd8a9fea59cb",
"text": "The presence of a physician seems to be beneficial for pre-hospital cardiopulmonary resuscitation (CPR) of patients with out-of-hospital cardiac arrest. However, the effectiveness of a physician's presence during CPR before hospital arrival has not been established. We conducted a prospective, non-randomized, observational study using national data from out-of-hospital cardiac arrests between 2005 and 2010 in Japan. We performed a propensity analysis and examined the association between a physician's presence during an ambulance car ride and short- and long-term survival from out-of-hospital cardiac arrest. Specifically, a full non-parsimonious logistic regression model was fitted with the physician presence in the ambulance as the dependent variable; the independent variables included all study variables except for endpoint variables plus dummy variables for the 47 prefectures in Japan (i.e., 46 variables). In total, 619,928 out-of-hospital cardiac arrest cases that met the inclusion criteria were analyzed. Among propensity-matched patients, a positive association was observed between a physician's presence during an ambulance car ride and return of spontaneous circulation (ROSC) before hospital arrival, 1-month survival, and 1-month survival with minimal neurological or physical impairment (ROSC: OR = 1.84, 95% CI 1.63-2.07, p = 0.00 in adjusted for propensity and all covariates); 1-month survival: OR = 1.29, 95% CI 1.04-1.61, p = 0.02 in adjusted for propensity and all covariates); cerebral performance category (1 or 2): OR = 1.54, 95% CI 1.03-2.29, p = 0.04 in adjusted for propensity and all covariates); and overall performance category (1 or 2): OR = 1.50, 95% CI 1.01-2.24, p = 0.05 in adjusted for propensity and all covariates). A prospective observational study using national data from out-of-hospital cardiac arrests shows that a physician's presence during an ambulance car ride was independently associated with increased short- and long-term survival.",
"title": ""
},
{
"docid": "0701f4d74179857b736ebe2c7cdb78b7",
"text": "Modern computer networks generate significant volume of behavioural system logs on a daily basis. Such networks comprise many computers with Internet connectivity, and many users who access the Web and utilise Cloud services make use of numerous devices connected to the network on an ad-hoc basis. Measuring the risk of cyber attacks and identifying the most recent modus-operandi of cyber criminals on large computer networks can be difficult due to the wide range of services and applications running within the network, the multiple vulnerabilities associated with each application, the severity associated with each vulnerability, and the ever-changing attack vector of cyber criminals. In this paper we propose a framework to represent these features, enabling real-time network enumeration and traffic analysis to be carried out, in order to produce quantified measures of risk at specific points in time. We validate the approach using data from a University network, with a data collection consisting of 462,787 instances representing threats measured over a 144 hour period. Our analysis can be generalised to a variety of other contexts. © 2016 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).",
"title": ""
},
{
"docid": "7b2ef4e81c8827389eeb025ae686210e",
"text": "This paper presents a novel framework for generating texture mosaics with convolutional neural networks. Our method is called GANosaic and performs optimization in the latent noise space of a generative texture model, which allows the transformation of a content image into a mosaic exhibiting the visual properties of the underlying texture manifold. To represent that manifold, we use a state-of-the-art generative adversarial method for texture synthesis [1], which can learn expressive texture representations from data and produce mosaic images with very high resolution. This fully convolutional model generates smooth (without any visible borders) mosaic images which morph and blend different textures locally. In addition, we develop a new type of differentiable statistical regularization appropriate for optimization over the prior noise space of the PSGAN model.",
"title": ""
},
{
"docid": "247eebd69a651f6f116f41fdf885ae39",
"text": "RFID identification is a new technology that will become ubiquitous as RFID tags will be applied to every-day items in order to yield great productivity gains or “smart” applications for users. However, this pervasive use of RFID tags opens up the possibility for various attacks violating user privacy. In this work we present an RFID authentication protocol that enforces user privacy and protects against tag cloning. We designed our protocol with both tag-to-reader and reader-to-tag authentication in mind; unless both types of authentication are applied, any protocol can be shown to be prone to either cloning or privacy attacks. Our scheme is based on the use of a secret shared between tag and database that is refreshed to avoid tag tracing. However, this is done in such a way so that efficiency of identification is not sacrificed. Additionally, our protocol is very simple and it can be implemented easily with the use of standard cryptographic hash functions. In analyzing our protocol, we identify several attacks that can be applied to RFID protocols and we demonstrate the security of our scheme. Furthermore, we show how forward privacy is guaranteed; messages seen today will still be valid in the future, even after the tag has been compromised.",
"title": ""
}
] |
scidocsrr
|
d95b019642e554ae6eca32db23325a5a
|
Training Recurrent Answering Units with Joint Loss Minimization for VQA
|
[
{
"docid": "a1ef2bce061c11a2d29536d7685a56db",
"text": "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.",
"title": ""
},
{
"docid": "8b998b9f8ea6cfe5f80a5b3a1b87f807",
"text": "We describe a very simple bag-of-words baseline for visual question answering. This baseline concatenates the word features from the question and CNN features from the image to predict the answer. When evaluated on the challenging VQA dataset [2], it shows comparable performance to many recent approaches using recurrent neural networks. To explore the strength and weakness of the trained model, we also provide an interactive web demo1, and open-source code2.",
"title": ""
},
{
"docid": "4337f8c11a71533d38897095e5e6847a",
"text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.1. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?\". They have the same meaning, essentially captured by the first three words. A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. ar X iv :1 60 6. 00 06 1v 3 [ cs .C V ] 2 6 O ct 2 01 6 Ques%on:\t\r What\t\r color\t\r on\t\r the stop\t\r light\t\r is\t\r lit\t\r up\t\r \t\r ? ...\t\r ... color\t\r stop\t\r light\t\r lit co-‐a7en%on color\t\r ...\t\r stop\t\r \t\r light\t\r \t\r ... What color\t\r ... the stop light light\t\r \t\r ... 
What color What\t\r color\t\r on\t\r the\t\r stop\t\r light\t\r is\t\r lit\t\r up ...\t\r ... the\t\r stop\t\r light ...\t\r ... stop Image Answer:\t\r green Figure 1: Flowchart of our proposed hierarchical co-attention model. Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.",
"title": ""
},
{
"docid": "54d242cf31eaa27823217d34ea3b5c0a",
"text": "In this paper, we propose to employ the convolutional neural network (CNN) for the image question answering (QA) task. Our proposed CNN provides an end-to-end framework with convolutional architectures for learning not only the image and question representations, but also their inter-modal interactions to produce the answer. More specifically, our model consists of three CNNs: one image CNN to encode the image content, one sentence CNN to compose the words of the question, and one multimodal convolution layer to learn their joint representation for the classification in the space of candidate answer words. We demonstrate the efficacy of our proposed model on the DAQUAR and COCO-QA datasets, which are two benchmark datasets for image QA, with the performances significantly outperforming the state-of-the-art.",
"title": ""
}
] |
[
{
"docid": "cbae4d5eb347a8136f34fb370d28f46b",
"text": "Available online 18 November 2013",
"title": ""
},
{
"docid": "8e9afa1b77331b3247d7a2f7c6f127b3",
"text": "The epithelial lining of the gastrointestinal tract serves as the interface for digestion and absorption of nutrients and water and as a defensive barrier. The defensive functions of the intestinal epithelium are remarkable considering that the gut lumen is home to trillions of resident bacteria, fungi and protozoa (collectively, the intestinal microbiota) that must be prevented from translocation across the epithelial barrier. Imbalances in the relationship between the intestinal microbiota and the host lead to the manifestation of diseases that range from disorders of motility and sensation (IBS) and intestinal inflammation (IBD) to behavioural and metabolic disorders, including autism and obesity. The latest discoveries shed light on the sophisticated intracellular, intercellular and interkingdom signalling mechanisms of host defence that involve epithelial and enteroendocrine cells, the enteric nervous system and the immune system. Together, they maintain homeostasis by integrating luminal signals, including those derived from the microbiota, to regulate the physiology of the gastrointestinal tract in health and disease. Therapeutic strategies are being developed that target these signalling systems to improve the resilience of the gut and treat the symptoms of gastrointestinal disease. In this Review, the authors summarize how various interactions at the gastrointestinal epithelium regulate gut physiology. They also discuss how neuroimmunophysiology has advanced the understanding of gastrointestinal pathophysiology with the potential to reveal novel therapies for disorders such as IBS and IBD. The gastrointestinal epithelium allows the absorption of nutrients, water and immune surveillance while simultaneously limiting the translocation of potentially harmful antigens and commensal and pathogenic microorganisms.Disturbances in the barrier function of the gastrointestinal tract lead to the development or exacerbation of disease that can manifest locally in the gut wall or involve distant organs including the brain.Intestinal barrier function is regulated by multidirectional interactions between epithelial (enteroendocrine, tuft, goblet and Paneth) cells and the enteric nervous and immune systems.The intestinal microbiota is a key element in sophisticated intracellular, intercellular and interkingdom signalling systems that regulate intestinal barrier function.Therapeutic strategies are being developed that target these signalling systems to increase the resilience of the gastrointestinal tract and limit disturbances in barrier function. The gastrointestinal epithelium allows the absorption of nutrients, water and immune surveillance while simultaneously limiting the translocation of potentially harmful antigens and commensal and pathogenic microorganisms. Disturbances in the barrier function of the gastrointestinal tract lead to the development or exacerbation of disease that can manifest locally in the gut wall or involve distant organs including the brain. Intestinal barrier function is regulated by multidirectional interactions between epithelial (enteroendocrine, tuft, goblet and Paneth) cells and the enteric nervous and immune systems. The intestinal microbiota is a key element in sophisticated intracellular, intercellular and interkingdom signalling systems that regulate intestinal barrier function. 
Therapeutic strategies are being developed that target these signalling systems to increase the resilience of the gastrointestinal tract and limit disturbances in barrier function.",
"title": ""
},
{
"docid": "4236e1b86150a9557b518b789418f048",
"text": "Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns to each 30 s of the signal of a sleep stage, based on the visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decisions trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields the state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance measured with balanced accuracy is to use 6 EEG with 2 EOG (left and right) and 3 EMG chin channels. Also exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels are available. As sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver the state-of-the-art classification performance with a small computational cost.",
"title": ""
},
{
"docid": "a05b50b3b5bf6504a9e35dbadac9764b",
"text": "UNLABELLED\n\n\n\nBACKGROUND\nThe Avogadro project has developed an advanced molecule editor and visualizer designed for cross-platform use in computational chemistry, molecular modeling, bioinformatics, materials science, and related areas. It offers flexible, high quality rendering, and a powerful plugin architecture. Typical uses include building molecular structures, formatting input files, and analyzing output of a wide variety of computational chemistry packages. By using the CML file format as its native document type, Avogadro seeks to enhance the semantic accessibility of chemical data types.\n\n\nRESULTS\nThe work presented here details the Avogadro library, which is a framework providing a code library and application programming interface (API) with three-dimensional visualization capabilities; and has direct applications to research and education in the fields of chemistry, physics, materials science, and biology. The Avogadro application provides a rich graphical interface using dynamically loaded plugins through the library itself. The application and library can each be extended by implementing a plugin module in C++ or Python to explore different visualization techniques, build/manipulate molecular structures, and interact with other programs. We describe some example extensions, one which uses a genetic algorithm to find stable crystal structures, and one which interfaces with the PackMol program to create packed, solvated structures for molecular dynamics simulations. The 1.0 release series of Avogadro is the main focus of the results discussed here.\n\n\nCONCLUSIONS\nAvogadro offers a semantic chemical builder and platform for visualization and analysis. For users, it offers an easy-to-use builder, integrated support for downloading from common databases such as PubChem and the Protein Data Bank, extracting chemical data from a wide variety of formats, including computational chemistry output, and native, semantic support for the CML file format. For developers, it can be easily extended via a powerful plugin mechanism to support new features in organic chemistry, inorganic complexes, drug design, materials, biomolecules, and simulations. Avogadro is freely available under an open-source license from http://avogadro.openmolecules.net.",
"title": ""
},
{
"docid": "2181397b2f808737f191aa999022502b",
"text": "In recent years, the As-Rigid-As-Possible (ARAP) shape deformation and shape interpolation techniques gained popularity, and the ARAP energy was successfully used in other applications as well. We improve the ARAP animation technique in two aspects. First, we introduce a new ARAP-type energy, named SR-ARAP, which has a consistent discretization for surfaces (triangle meshes). The quality of our new surface deformation scheme competes with the quality of the volumetric ARAP deformation (for tetrahedral meshes). Second, we propose a new ARAP shape interpolation method that is superior to prior art also based on the ARAP energy. This method is compatible with our new SR-ARAP energy, as well as with the ARAP volume energy.",
"title": ""
},
{
"docid": "c49ffcb45cc0a7377d9cbdcf6dc07057",
"text": "Dermoscopy is an in vivo method for the early diagnosis of malignant melanoma and the differential diagnosis of pigmented lesions of the skin. It has been shown to increase diagnostic accuracy over clinical visual inspection in the hands of experienced physicians. This article is a review of the principles of dermoscopy as well as recent technological developments.",
"title": ""
},
{
"docid": "9f9302cf8560b65bed7688f5339a865c",
"text": "Understanding short texts is crucial to many applications, but challenges abound. First, short texts do not always observe the syntax of a written language. As a result, traditional natural language processing tools, ranging from part-of-speech tagging to dependency parsing, cannot be easily applied. Second, short texts usually do not contain sufficient statistical signals to support many state-of-the-art approaches for text mining such as topic modeling. Third, short texts are more ambiguous and noisy, and are generated in an enormous volume, which further increases the difficulty to handle them. We argue that semantic knowledge is required in order to better understand short texts. In this work, we build a prototype system for short text understanding which exploits semantic knowledge provided by a well-known knowledgebase and automatically harvested from a web corpus. Our knowledge-intensive approaches disrupt traditional methods for tasks such as text segmentation, part-of-speech tagging, and concept labeling, in the sense that we focus on semantics in all these tasks. We conduct a comprehensive performance evaluation on real-life data. The results show that semantic knowledge is indispensable for short text understanding, and our knowledge-intensive approaches are both effective and efficient in discovering semantics of short texts.",
"title": ""
},
{
"docid": "1fa90715d1087ab94f5305eb05d76a50",
"text": "Gartner’s Magic Quadrant for IT Event Correlation and Analysis (ECA), 2009 (see Figure 1) evaluates vendors’ ability to execute and their completeness of vision relative to a defined set of evaluation criteria regarding current and future market requirements. A Magic Quadrant should not be the only criterion for selecting a vendor, because the right solution for a given situation can be in any quadrant, depending on the specific needs of the enterprise. Enterprises considering the purchase of an ECA product should develop their own list of evaluation criteria and functional requirements in the categories of event collection/ consolidation, processing/correlation and presentation. Large enterprises should consider a multitier event management hierarchy, pushing some event processing and correlation out to the managed IT element at the bottom of the hierarchy. These enterprises should use specialized event management tools in the middle, and should place a “manager of managers” or a business service management (BSM) product on top.",
"title": ""
},
{
"docid": "9fff08cf60bb5f6ec538080719aa8224",
"text": "This research represents the runner BIB number recognition system to develop image processing study which solves problems and increases efficiency about runner image management in running fairs. The runner BIB number recognition system processes runner image to recognize BIB number and time when runner appears in media. The information from processing has collected to applicative later. BIB number position is on BIB tag which attach on runner body. To recognize BIB number, the system detects runner position first. This process emphasize on runner face detection in images following to concept of researcher then find BIB number in body-thigh area of runner. The system recognizes BIB number from BIB tag which represents in media. This processing presents 0.80 in precision value, 0.81 in recall value and F-measure is 0.80. The results display the runner BIB number recognition system has developed with high efficiency and can be applied for runner online communities in actual situation. The runner BIB number recognition system decreases problems about runner image processing and increases comfortable for runners when find images from running fairs. Moreover, the system can be applied in commercial to increase benefits in running business.",
"title": ""
},
{
"docid": "f8b9a8254bd580be40290343334edbc0",
"text": "BACKGROUND\nMulligan's mobilisation techniques are thought to increase the range of movement (ROM) in patients with low back pain. The primary aim of this study was to investigate the application of the Mulligan's Sustained Natural Apophyseal Glide (SNAG) technique on lumbar flexion ROM. The secondary aim was to measure the intra- and inter-day reliability of lumbar ROM employing the same procedure.\n\n\nMETHODS\n49 asymptomatic volunteers participated in this double-blinded study. Subjects were randomly assigned to receive either SNAG mobilisation (n = 25), or a sham mobilisation (n = 24). The SNAG technique was applied at the L3and L4 spinal levels with active flexion in sitting by an experienced manual therapist. Three sets of 10 repetitions at each of the two spinal levels were performed. The sham mobilisation was similar to the SNAG but did not apply the appropriate direction or force. Lumbar ROM was measured by a three dimensional electronic goniometer (Zebris CMS20), before and after each technique. For the reliability, five measurements in two different days (one week apart) were performed in 20 healthy subjects.\n\n\nRESULTS\nWhen both interventions were compared, independent t tests yielded no statistically significant results in ROM between groups (p = 0.673). Furthermore no significant within group differences were observed: SNAG (p = 0.842), sham (p = 0.169). Intra- and inter-day reliability of flexion measurements was high (ICC(1,1) > 0.82, SEM < 4.0 degrees , SDD<16.3%) indicating acceptable clinical applicability.\n\n\nCONCLUSION\nWhile the Zebris proved to be a reliable device for measuring lumbar flexion ROM, SNAG mobilisation did not demonstrate significant differences in flexion ROM when compared to sham mobilisation.\n\n\nTRIAL REGISTRATION\nCurrent Controlled Trials NCT00678093.",
"title": ""
},
{
"docid": "29749091f6ccdc0c2697c9faf3682c90",
"text": "In traditional video conferencing systems, it is impossible for users to have eye contact when looking at the conversation partner’s face displayed on the screen, due to the disparity between the locations of the camera and the screen. In this work, we implemented a gaze correction system that can automatically maintain eye contact by replacing the eyes of the user with the direct looking eyes (looking directly into the camera) captured in the initialization stage. Our real-time system has good robustness against different lighting conditions and head poses, and it provides visually convincing and natural results while relying only on a single webcam that can be positioned almost anywhere around the",
"title": ""
},
{
"docid": "4506bc1be6e7b42abc34d79dc426688a",
"text": "The growing interest in Structured Equation Modeling (SEM) techniques and recognition of their importance in IS research suggests the need to compare and contrast different types of SEM techniques so that research designs can be selected appropriately. After assessing the extent to which these techniques are currently being used in IS research, the article presents a running example which analyzes the same dataset via three very different statistical techniques. It then compares two classes of SEM: covariance-based SEM and partial-least-squaresbased SEM. Finally, the article discusses linear regression models and offers guidelines as to when SEM techniques and when regression techniques should be used. The article concludes with heuristics and rule of thumb thresholds to guide practice, and a discussion of the extent to which practice is in accord with these guidelines.",
"title": ""
},
{
"docid": "423385a69f57e9517b6aa95bb3d59aef",
"text": "Although organizational learning theory and practice have been clarified by practitioners and scholars over the past several years, there is much to be explored regarding interactions between organizational learning culture and employee learning and performance outcomes. This study examined the relationship of organizational learning culture, job satisfaction, and organizational outcome variables with a sample of information technology (IT) employees in the United States. It found that learning organizational culture is associated with IT employee job satisfaction and motivation to transfer learning. Turnover intention was found to be negatively influenced by organizational learning culture and job satisfaction. Suggestions for future study of learning organizational culture in association with job satisfaction and performance-related outcomes are discussed.",
"title": ""
},
{
"docid": "3b9f17e8720b4513d18a2fc3d8f54700",
"text": "The purpose of this study was to examine the effects of amino acid supplementation on muscular strength, power, and high-intensity endurance during short-term resistance training overreaching. Seventeen resistance-trained men were randomly assigned to either an amino acid (AA) or placebo (P) group and underwent 4 weeks of total-body resistance training consisting of two 2-week phases of overreaching (phase 1: 3 x 8-12 repetitions maximum [RM], 8 exercises; phase 2: 5 x 3-5 RM, 5 exercises). Muscle strength, power, and high-intensity endurance were determined before (T1) and at the end of each training week (T2-T5). One repetition maximum squat and bench press decreased at T2 in P (5.2 and 3.4 kg, respectively) but not in AA, and significant increases in 1 RM squat and bench press were observed at T3-T5 in both groups. A decrease in the ballistic bench press peak power was observed at T3 in P but not AA. The fatigue index during the 20-repetition jump squat assessment did not change in the P group at T3 and T5 (fatigue index = 18.6 and 18.3%, respectively) whereas a trend for reduction was observed in the AA group (p = 0.06) at T3 (12.8%) but not T5 (15.2%; p = 0.12). These results indicate that the initial impact of high-volume resistance training overreaching reduces muscle strength and power, and it appears that these reductions are attenuated with amino acid supplementation. In addition, an initial high-volume, moderate-intensity phase of overreaching followed by a higher intensity, moderate-volume phase appears to be very effective for enhancing muscle strength in resistance-trained men.",
"title": ""
},
{
"docid": "300b599e2e3cc3b63bc38276f9621a16",
"text": "Swarm intelligence (SI) is based on collective behavior of selforganized systems. Typical swarm intelligence schemes include Particle Swarm Optimization (PSO), Ant Colony System (ACS), Stochastic Diffusion Search (SDS), Bacteria Foraging (BF), the Artificial Bee Colony (ABC), and so on. Besides the applications to conventional optimization problems, SI can be used in controlling robots and unmanned vehicles, predicting social behaviors, enhancing the telecommunication and computer networks, etc. Indeed, the use of swarm optimization can be applied to a variety of fields in engineering and social sciences. In this paper, we review some popular algorithms in the field of swarm intelligence for problems of optimization. The overview and experiments of PSO, ACS, and ABC are given. Enhanced versions of these are also introduced. In addition, some comparisons are made between these algorithms.",
"title": ""
},
{
"docid": "4681e8f07225e305adfc66cd1b48deb8",
"text": "Collaborative work among students, while an important topic of inquiry, needs further treatment as we still lack the knowledge regarding obstacles that students face, the strategies they apply, and the relations among personal and group aspects. This article presents a diary study of 54 master’s students conducting group projects across four semesters. A total of 332 diary entries were analysed using the C5 model of collaboration that incorporates elements of communication, contribution, coordination, cooperation and collaboration. Quantitative and qualitative analyses show how these elements relate to one another for students working on collaborative projects. It was found that face-to-face communication related positively with satisfaction and group dynamics, whereas online chat correlated positively with feedback and closing the gap. Managing scope was perceived to be the most common challenge. The findings suggest the varying affordances and drawbacks of different methods of communication, collaborative work styles and the strategies of group members.",
"title": ""
},
{
"docid": "2d8baa9a78e5e20fd20ace55724e2aec",
"text": "To determine the relationship between fatigue and post-activation potentiation, we examined the effects of sub-maximal continuous running on neuromuscular function tests, as well as on the squat jump and counter movement jump in endurance athletes. The height of the squat jump and counter movement jump and the estimate of the fast twitch fiber recruiting capabilities were assessed in seven male middle distance runners before and after 40 min of continuous running at an intensity corresponding to the individual lactate threshold. The same test was then repeated after three weeks of specific aerobic training. Since the three variables were strongly correlated, only the estimate of the fast twitch fiber was considered for the results. The subjects showed a significant improvement in the fast twitch fiber recruitment percentage after the 40 min run. Our data show that submaximal physical exercise determined a change in fast twitch muscle fiber recruitment patterns observed when subjects performed vertical jumps; however, this recruitment capacity was proportional to the subjects' individual fast twitch muscle fiber profiles measured before the 40 min run. The results of the jump tests did not change significantly after the three-week training period. These results suggest that pre-fatigue methods, through sub-maximal exercises, could be used to take advantage of explosive capacity in middle-distance runners.",
"title": ""
},
{
"docid": "cb363cd47b5cdb3c9364a51d487de7cd",
"text": "Crowdsourcing has been part of the IR toolbox as a cheap and fast mechanism to obtain labels for system development and evaluation. Successful deployment of crowdsourcing at scale involves adjusting many variables, a very important one being the number of workers needed per human intelligence task (HIT). We consider the crowdsourcing task of learning the answer to simple multiple-choice HITs, which are representative of many relevance experiments. In order to provide statistically significant results, one often needs to ask multiple workers to answer the same HIT. A stopping rule is an algorithm that, given a HIT, decides for any given set of worker answers to stop and output an answer or iterate and ask one more worker. In contrast to other solutions that try to estimate worker performance and answer at the same time, our approach assumes the historical performance of a worker is known and tries to estimate the HIT difficulty and answer at the same time. The difficulty of the HIT decides how much weight to give to each worker's answer. In this paper we investigate how to devise better stopping rules given workers' performance quality scores. We suggest adaptive exploration as a promising approach for scalable and automatic creation of ground truth. We conduct a data analysis on an industrial crowdsourcing platform, and use the observations from this analysis to design new stopping rules that use the workers' quality scores in a non-trivial manner. We then perform a number of experiments using real-world datasets and simulated data, showing that our algorithm performs better than other approaches.",
"title": ""
},
{
"docid": "32d79366936e301c44ae4ac11784e9d8",
"text": "A vast literature describes transformational leadership in terms of leader having charismatic and inspiring personality, stimulating followers, and providing them with individualized consideration. A considerable empirical support exists for transformation leadership in terms of its positive effect on followers with respect to criteria like effectiveness, extra role behaviour and organizational learning. This study aims to explore the effect of transformational leadership characteristics on followers’ job satisfaction. Survey method was utilized to collect the data from the respondents. The study reveals that individualized consideration and intellectual stimulation affect followers’ job satisfaction. However, intellectual stimulation is positively related with job satisfaction and individualized consideration is negatively related with job satisfaction. Leader’s charisma or inspiration was found to be having no affect on the job satisfaction. The three aspects of transformational leadership were tested against job satisfaction through structural equation modeling using Amos.",
"title": ""
},
{
"docid": "2dbc68492e54d61446dac7880db71fdd",
"text": "Supervised deep learning methods have shown promising results for the task of monocular depth estimation; but acquiring ground truth is costly, and prone to noise as well as inaccuracies. While synthetic datasets have been used to circumvent above problems, the resultant models do not generalize well to natural scenes due to the inherent domain shift. Recent adversarial approaches for domain adaption have performed well in mitigating the differences between the source and target domains. But these methods are mostly limited to a classification setup and do not scale well for fully-convolutional architectures. In this work, we propose AdaDepth - an unsupervised domain adaptation strategy for the pixel-wise regression task of monocular depth estimation. The proposed approach is devoid of above limitations through a) adversarial learning and b) explicit imposition of content consistency on the adapted target representation. Our unsupervised approach performs competitively with other established approaches on depth estimation tasks and achieves state-of-the-art results in a semi-supervised setting.",
"title": ""
}
] |
scidocsrr
|
dbfc8f22fb8adebf301b21beb7de9a63
|
Integrating Human-Computer Interaction Development into SDLC: A Methodology
|
[
{
"docid": "ad059332e36849857c9bf1a52d5b0255",
"text": "Interaction Design Beyond Human Computer Interaction instructions guide, service manual guide and maintenance manual guide for the products. Before employing this manual, service or maintenance guide you should know detail regarding your products cause this manual for expert only. We hope ford alternator wiring diagram internal regulator and yet another manual of these lists a good choice for your to repair, fix and solve your product or service or device problems don't try an oversight.",
"title": ""
}
] |
[
{
"docid": "83a13b090260a464064a3c884a75ad91",
"text": "While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively less success in extending to generate unsupervised sentences or documents embeddings. Recent work has demonstrated that a distance measure between documents called Word Mover’s Distance (WMD) that aligns semantically similar words, yields unprecedented KNN classification accuracy. However, WMD is expensive to compute, and it is hard to extend its use beyond a KNN classifier. In this paper, we propose the Word Mover’s Embedding (WME), a novel approach to building an unsupervised document (sentence) embedding from pre-trained word embeddings. In our experiments on 9 benchmark text classification datasets and 22 textual similarity tasks, the proposed technique consistently matches or outperforms state-of-the-art techniques, with significantly higher accuracy on problems of short length.",
"title": ""
},
{
"docid": "fd8f5dc4264464cd8f978872d58aaf19",
"text": "OBJECTIVES\nTo determine the capacity of black soldier fly larvae (BSFL) (Hermetia illucens) to convert fresh human faeces into larval biomass under different feeding regimes, and to determine how effective BSFL are as a means of human faecal waste management.\n\n\nMETHODS\nBlack soldier fly larvae were fed fresh human faeces. The frequency of feeding, number of larvae and feeding ratio were altered to determine their effects on larval growth, prepupal weight, waste reduction, bioconversion and feed conversion rate (FCR).\n\n\nRESULTS\nThe larvae that were fed a single lump amount of faeces developed into significantly larger larvae and prepupae than those fed incrementally every 2 days; however, the development into pre-pupae took longer. The highest waste reduction was found in the group containing the most larvae, with no difference between feeding regimes. At an estimated 90% pupation rate, the highest bioconversion (16-22%) and lowest, most efficient FCR (2.0-3.3) occurred in groups that contained 10 and 100 larvae, when fed both the lump amount and incremental regime.\n\n\nCONCLUSION\nThe prepupal weight, bioconversion and FCR results surpass those from previous studies into BSFL management of swine, chicken manure and municipal organic waste. This suggests that the use of BSFL could provide a solution to the health problems associated with poor sanitation and inadequate human waste management in developing countries.",
"title": ""
},
{
"docid": "cd57aad5ef81a71e616542dcf6cc9e07",
"text": "Recent CNN-based object detection methods have drastically improved their performances but still use a single classifier as opposed to ”multiple experts” in categorizing objects. The main motivation of introducing multi-experts is twofold: i) to allow different experts to specialize in different fundamental object shape priors and ii) to better capture the appearance variations caused by different poses and viewing angles. The proposed approach, referred to as multi-expert Region-based CNN (ME R-CNN), consists of three experts each responsible for objects with particular shapes: horizontally elongated, square-like, and vertically elongated. Each expert is a network with multiple fully connected layers and all the experts are preceded by a shared network which consists of multiple convolutional layers. On top of using selective search which provides a compact, yet effective set of region of interests (RoIs) for object detection, we augmented the set by also employing the exhaustive search for training. Incorporating the exhaustive search can provide complementary advantages: i) it captures the multitude of neighboring RoIs missed by the selective search, and thus ii) provide significantly larger amount of training examples to achieve the enhanced accuracy.",
"title": ""
},
{
"docid": "5314538c00de2bdbdeb0998eb39ab255",
"text": "TOPIC\nEvidence-based group therapy in an inpatient setting that provides an integrated treatment approach for both trauma and addiction in female adolescents.\n\n\nPURPOSE\nThe purpose of this evidence-based practice (EBP) project was to implement and assess the impact of an integrated group therapy approach for both posttraumatic stress disorder (PTSD) and substance use disorder (SUD) in adolescent females as part of a residential treatment program.\n\n\nSOURCES\nThe Iowa Model of EBP guided this EBP project. Judith Herman's three-stage model of trauma recovery and the Skills Training in Affective and Interpersonal Regulation (STAIR) model served as the theoretical framework for the group therapy curriculum. Two programs, Seeking Safety, by Lisa Najavits and VOICES, by Stephanie Covington, provided a guide for group topics and activities.\n\n\nCONCLUSIONS\nPatients that participated in Turning the Tides© group therapy curriculum reported a decrease in overall PTSD symptoms and decreased functional impairment scores, based on the Child PTSD Symptoms Scale. However, there was a statistically significant increase in the use of as needed medications following the completion of group therapy. Postgroup evaluations from patients indicated a genuine desire to engage in the group therapy as well as an increased sense of trust with facilitators. Implications for psychiatric nursing include the delivery of safe, quality patient care as evidenced by positive improvement in patient outcomes.",
"title": ""
},
{
"docid": "fe0c8969c666b6074d2bc5cc49546b78",
"text": "We propose a novel adversarial multi-task learning scheme, aiming at actively curtailing the inter-talker feature variability while maximizing its senone discriminability so as to enhance the performance of a deep neural network (DNN) based ASR system. We call the scheme speaker-invariant training (SIT). In SIT, a DNN acoustic model and a speaker classifier network are jointly optimized to minimize the senone (tied triphone state) classification loss, and simultaneously mini-maximize the speaker classification loss. A speaker-invariant and senone-discriminative deep feature is learned through this adversarial multi-task learning. With SIT, a canonical DNN acoustic model with significantly reduced variance in its output probabilities is learned with no explicit speaker-independent (SI) transformations or speaker-specific representations used in training or testing. Evaluated on the CHiME-3 dataset, the SIT achieves 4.99% relative word error rate (WER) improvement over the conventional SI acoustic model. With additional unsupervised speaker adaptation, the speaker-adapted (SA) SIT model achieves 4.86% relative WER gain over the SA SI acoustic model.",
"title": ""
},
{
"docid": "209e48cb7576bb56ccab6d88af98b154",
"text": "Reading fluency has attracted the attention of reading researchers and educators since the early 1970s and has become a priority issue in English as a first language (L1) settings. It has also become a critical issue in English as a second or foreign language (L2) settings because the lack of fluency is considered a major obstacle to developing independent readers with good comprehension skills. Repeated Reading (RR) was originally devised by Samuels (1979) in order to translate Automaticity Theory (LaBerge & Samuels, 1974) into a pedagogical approach for developing English L1 readers’ fluency. Extensive research has been conducted to show the positive effects of RR in English L1 settings. A growing number of L2 reading researchers have demonstrated that RR may be a promising approach for building fluency and comprehension in L2 settings. However, while L1 research has demonstrated a robust correlation between improved reading fluency and enhanced comprehension, L2 fluency research has not yet shown such a strong correlation. In addition, most studies on reading fluency in L2 settings have used quantitative approaches and only a few of them have explored the “inside of L2 readers' brain,” that is, what is actually happening while they engage in RR. The present study attempts to reveal the inner process of L2 reading fluency development through RR for an advanced-level L2 reader who is articulate in describing her metacognitive processes. Using a diary study approach comprising more than 70 RR sessions over the course of 14 weeks, the current study investigated an L2 reader with good comprehension skills Taguchi et al.: Assisted repeated reading 31 Reading in a Foreign Language 24(1) engaging in RR. This study was designed to investigate specifically how her reading fluency developed and how her comprehension changed during the course of the treatment. Based on the study findings, some issues are discussed for better RR program implementation.",
"title": ""
},
{
"docid": "e7522c776e1219196aa52147834b6f61",
"text": "Machine learning deals with the issue of how to build programs that improve their performance at some task through experience. Machine learning algorithms have proven to be of great practical value in a variety of application domains. They are particularly useful for (a) poorly understood problem domains where littl e knowledge exists for the humans to develop effective algorithms; (b) domains where there are large databases containing valuable implicit regularities to be discovered; or (c) domains where programs must adapt to changing conditions. Not surprisingly, the field of software engineering turns out to be a fertile ground where many software development tasks could be formulated as learning problems and approached in terms of learning algorithms. In this paper, we first take a look at the characteristics and applicabilit y of some frequently utili zed machine learning algorithms. We then provide formulations of some software development tasks using learning algorithms. Finally, a brief summary is given of the existing work.",
"title": ""
},
{
"docid": "43cd94df4a686b89ab6ca5e2782f5a54",
"text": "Relational databases scattered over the web are generally opaque to regular web crawling tools. To address this concern, many RDB-to-RDF approaches have been proposed over the last years. In this paper, we propose a detailed review of seventeen RDB-to-RDF initiatives, considering end-to-end projects that delivered operational tools. The different tools are classified along three major axes: mapping description language, mapping implementation and data retrieval method. We analyse the motivations, commonalities and differences between existing approaches. The expressiveness of existing mapping languages is not always sufficient to produce semantically rich data and make it usable, interoperable and linkable. We therefore briefly present various strategies investigated in the literature to produce additional knowledge. Finally, we show that R2RML, the W3C recommendation for describing RDB to RDF mappings, may not apply to all needs in the wide scope of RDB to RDF translation applications, leaving space for future extensions.",
"title": ""
},
{
"docid": "89263084f29469d1c363da55c600a971",
"text": "Today when there are more than 1 billion Android users all over the world, it shows that its popularity has no equal. These days mobile phones have become so intrusive in our daily lives that when they needed can give huge amount of information to forensic examiners. Till the date of writing this paper there are many papers citing the need of mobile device forensic and ways of getting the vital artifacts through mobile devices for different purposes. With vast options of popular and less popular forensic tools and techniques available today, this papers aims to bring them together under a comparative study so that this paper could serve as a starting point for several android users, future forensic examiners and investigators. During our survey we found scarcity for papers on tools for android forensic. In this paper we have analyzed different tools and techniques used in android forensic and at the end tabulated the results and findings.",
"title": ""
},
{
"docid": "9a22d9dfa4f5c64597891f8fb3626d50",
"text": "In this paper we present a system that automatically estimates the quality of machine translated segments of e-commerce data without relying on reference translations. Such approach can be used to estimate the quality of machine translated text in scenarios in which references are not available. Quality estimation (QE) can be applied to select translations to be postedited, choose the best translation from a pool of machine translation (MT) outputs, or help in the process of revision of translations, among other applications. Our approach is based on supervised machine learning algorithms that are used to train models that predict post-editing effort. The post-editing effort is measured according to the translation error rate (TER) between machine translated segments against their human post-edits. The predictions are computed at the segment level and can be easily extended to any kind of text ranging from item titles to item descriptions. In addition, our approach can be applied to different kinds of e-commerce data (e.g. different categories of products). Our models explore linguistic information regarding the complexity of the source sentence, the fluency of the translation in the target language and the adequacy of the translation with respect to its source sentence. In particular, we show that the use of named entity recognition systems as one source of linguistic information substantially improves the models’ performance. In order to evaluate the efficiency of our approach, we evaluate the quality scores assigned by the QE system (predicted TER) against the human posteditions (real TER) using the Pearson correlation coefficient.",
"title": ""
},
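The record above describes segment-level machine translation quality estimation: regress TER (computed against human post-edits) from complexity, fluency and adequacy features, and evaluate with Pearson correlation. The following is a minimal sketch of that workflow, not the authors' system; the feature set, data and model choice are placeholders.

```python
# Minimal quality-estimation sketch: regress segment-level TER from
# hand-crafted features, then check Pearson correlation on held-out data.
# Features, data and model are illustrative stand-ins, not the system
# described in the abstract above.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

# X: one row per MT segment, e.g. [source length, target length,
#    length ratio, number of named entities, target LM score, ...]
# y: TER of the MT segment against its human post-edit (real TER).
X = np.random.rand(500, 5)          # placeholder features
y = np.random.rand(500)             # placeholder TER scores in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)          # predicted TER
r, _ = pearsonr(pred, y_te)         # evaluation metric used in the abstract
print(f"Pearson r between predicted and real TER: {r:.3f}")
```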
{
"docid": "19c8893f9e27e48c9d31b759735936ec",
"text": "Advanced driver assistance systems (ADAS) can be significantly improved with effective driver action prediction (DAP). Predicting driver actions early and accurately can help mitigate the effects of potentially unsafe driving behaviors and avoid possible accidents. In this paper, we formulate driver action prediction as a timeseries anomaly prediction problem. While the anomaly (driver actions of interest) detection might be trivial in this context, finding patterns that consistently precede an anomaly requires searching for or extracting features across multi-modal sensory inputs. We present such a driver action prediction system, including a real-time data acquisition, processing and learning framework for predicting future or impending driver action. The proposed system incorporates camera-based knowledge of the driving environment and the driver themselves, in addition to traditional vehicle dynamics. It then uses a deep bidirectional recurrent neural network (DBRNN) to learn the correlation between sensory inputs and impending driver behavior achieving accurate and high horizon action prediction. The proposed system performs better than other existing systems on driver action prediction tasks and can accurately predict key driver actions including acceleration, braking, lane change and turning at durations of 5sec before the action is executed by the driver. Keywords— timeseries modeling, driving assistant system, driver action prediction, driver intent estimation, deep recurrent neural network",
"title": ""
},
{
"docid": "080032ded41edee2a26320e3b2afb123",
"text": "The aim of this study was to evaluate the effects of calisthenic exercises on psychological status in patients with ankylosing spondylitis (AS) and multiple sclerosis (MS). This study comprised 40 patients diagnosed with AS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based) and 40 patients diagnosed with MS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based). The exercise programme was completed by 73 participants (hospital-based = 34, home-based = 39). Mean age was 33.75 ± 5.77 years. After the 8-week exercise programme in the AS group, the home-based exercise group showed significant improvements in erythrocyte sedimentation rates (ESR). The hospital-based exercise group showed significant improvements in terms of the Bath AS Metrology Index (BASMI) and Hospital Anxiety and Depression Scale-Anxiety (HADS-A) scores. After the 8-week exercise programme in the MS group, the home-based and hospital-based exercise groups showed significant improvements in terms of the 10-m walking test, Berg Balance Scale (BBS), HADS-A, and MS international Quality of Life (MusiQoL) scores. There was a significant improvement in the hospital-based and a significant deterioration in the home-based MS patients according to HADS-Depression (HADS-D) score. The positive effects of exercises on neurologic and rheumatic chronic inflammatory processes associated with disability should not be underestimated. Ziel der vorliegenden Studie war die Untersuchung der Wirkungen von gymnastischen Übungen auf die psychische Verfassung von Patienten mit Spondylitis ankylosans (AS) und multipler Sklerose (MS). Die Studie umfasste 40 Patienten mit der Diagnose AS, die randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant), und 40 Patienten mit der Diagnose MS, die ebenfalls randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant). Vollständig absolviert wurde das Übungsprogramm von 73 Patienten (stationär: 34, ambulant: 39). Das Durchschnittsalter betrug 33,75 ± 5,77 Jahre. Nach dem 8-wöchigen Übungsprogramm in der AS-Gruppe zeigten sich bei der ambulanten Übungsgruppe signifikante Verbesserungen bei der Blutsenkungsgeschwindigkeit (BSG). Die stationäre Übungsgruppe wies signifikante Verbesserungen in Bezug auf den BASMI-Score (Bath AS Metrology Index) und den HADS-A-Score (Hospital Anxiety and Depression Scale-Anxiety) auf. Nach dem 8-wöchigen Übungsprogramm in der MS-Gruppe zeigten sich sowohl in der ambulanten als auch in der stationären Übungsgruppe signifikante Verbesserungen hinsichtlich des 10-m-Gehtests, des BBS-Ergebnisses (Berg Balance Scale), des HADS-A- sowie des MusiQoL-Scores (MS international Quality of Life). Beim HADS-D-Score (HADS-Depression) bestand eine signifikante Verbesserung bei den stationären und eine signifikante Verschlechterung bei den ambulanten MS-Patienten. Die positiven Wirkungen von gymnastischen Übungen auf neurologische und rheumatische chronisch entzündliche Prozesse mit Behinderung sollten nicht unterschätzt werden.",
"title": ""
},
{
"docid": "2b3e2570e9ecd86be9300220fa78d63d",
"text": "We evaluate the prediction accuracy of models designed using different classification methods depending on the technique used to select variables, and we study the relationship between the structure of the models and their ability to correctly predict financial failure. We show that a neural network based model using a set of variables selected with a criterion that it is adapted to the network leads to better results than a set chosen with criteria used in the financial literature. We also show that the way in which a set of variables may represent the financial profiles of healthy companies plays a role in Type I error reduction.",
"title": ""
},
{
"docid": "c5ae50d955561b1bfdf738610dae44bd",
"text": "In this paper, multimodal learning for facial expression recognition (FER) is proposed. The multimodal learning method makes the first attempt to learn the joint representation by considering the texture and landmark modality of facial images, which are complementary with each other. In order to learn the representation of each modality and the correlation and interaction between different modalities, the structured regularization (SR) is employed to enforce and learn the modality-specific sparsity and density of each modality, respectively. By introducing SR, the comprehensiveness of the facial expression is fully taken into consideration, which can not only handle the subtle expression but also perform robustly to different input of facial images. With the proposed multimodal learning network, the joint representation learning from multimodal inputs will be more suitable for FER. Experimental results on the CKþ and NVIE databases demonstrate the superiority of our proposed method. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "350daaeb965ac6a1383ec96f4d34e0ba",
"text": "This paper proposes a new automatic approach for the detection of SQL Injection and XPath Injection vulnerabilities, two of the most common and most critical types of vulnerabilities in web services. Although there are tools that allow testing web applications against security vulnerabilities, previous research shows that the effectiveness of those tools in web services environments is very poor. In our approach a representative workload is used to exercise the web service and a large set of SQL/XPath Injection attacks are applied to disclose vulnerabilities. Vulnerabilities are detected by comparing the structure of the SQL/XPath commands issued in the presence of attacks to the ones previously learned when running the workload in the absence of attacks. Experimental evaluation shows that our approach performs much better than known tools (including commercial ones), achieving extremely high detection coverage while maintaining the false positives rate very low.",
"title": ""
},
{
"docid": "2d3adb98f6b1b4e161d84314958960e5",
"text": "BACKGROUND\nBright light therapy was shown to be a promising treatment for depression during pregnancy in a recent open-label study. In an extension of this work, we report findings from a double-blind placebo-controlled pilot study.\n\n\nMETHOD\nTen pregnant women with DSM-IV major depressive disorder were randomly assigned from April 2000 to January 2002 to a 5-week clinical trial with either a 7000 lux (active) or 500 lux (placebo) light box. At the end of the randomized controlled trial, subjects had the option of continuing in a 5-week extension phase. The Structured Interview Guide for the Hamilton Depression Scale-Seasonal Affective Disorder Version was administered to assess changes in clinical status. Salivary melatonin was used to index circadian rhythm phase for comparison with antidepressant results.\n\n\nRESULTS\nAlthough there was a small mean group advantage of active treatment throughout the randomized controlled trial, it was not statistically significant. However, in the longer 10-week trial, the presence of active versus placebo light produced a clear treatment effect (p =.001) with an effect size (0.43) similar to that seen in antidepressant drug trials. Successful treatment with bright light was associated with phase advances of the melatonin rhythm.\n\n\nCONCLUSION\nThese findings provide additional evidence for an active effect of bright light therapy for antepartum depression and underscore the need for an expanded randomized clinical trial.",
"title": ""
},
{
"docid": "817b1ec160974f41129e0dadd3cdaa27",
"text": "Crude glycerin, the main by-product of biodiesel production, can replace dietary energy sources, such as corn. The objective of this study was to evaluate the inclusion of up to 30% of crude glycerin in dry matter (DM) of the total diets, and its effects on meat quality parameters of feedlot Nellore bulls. Thirty animals (227.7 ± 23.8 kg body weight; 18 months old) were housed in individual pens and fed 5 experimental diets, containing 0, 7.5, 15, 22.5 or 30% crude glycerin (DM basis). After 103 d (21 d adaptation) animals were slaughtered and the Longissimus muscle was collected. The characteristics assessed were chemical composition, fatty acid profile, cholesterol, shear force, pH, color, water-holding capacity, cooking loss and sensory properties. The increasing inclusion of crude glycerin in the diets did not affect the chemical composition of the Longissimus muscle (P > 0.10). A quadratic effect was observed when levels of crude glycerin were increased, on the concentration of pentadecanoic, palmitoleic and eicosenoic fatty acids in meat (P < 0.05), and on the activity of the delta-9 desaturase 16 and delta-9 desaturase 18 enzymes (P < 0.05). The addition of crude glycerin increased the gamma linolenic fatty acid concentration (P < 0.01), and altered the monounsaturated fatty acids in Longissimus muscle of animals (Pquad. < 0.05). Crude glycerin decreased cholesterol content in meat (P < 0.05), and promoted higher flavor score and greasy intensity perception of the meat (P < 0.01). The inclusion of up to 30% crude glycerin in Nellore cattle bulls`diets (DM basis) improves meat cholesterol and sensory attributes, such as flavor, without affecting significantly the physical traits, the main fatty acid concentrations and the chemical composition.",
"title": ""
},
{
"docid": "750abc9e51aed62305187d7103e3f267",
"text": "This design paper presents new guidance for creating map legends in a dynamic environment. Our contribution is a set ofguidelines for legend design in a visualization context and a series of illustrative themes through which they may be expressed. Theseare demonstrated in an applications context through interactive software prototypes. The guidelines are derived from cartographicliterature and in liaison with EDINA who provide digital mapping services for UK tertiary education. They enhance approaches tolegend design that have evolved for static media with visualization by considering: selection, layout, symbols, position, dynamismand design and process. Broad visualization legend themes include: The Ground Truth Legend, The Legend as Statistical Graphicand The Map is the Legend. Together, these concepts enable us to augment legends with dynamic properties that address specificneeds, rethink their nature and role and contribute to a wider re-evaluation of maps as artifacts of usage rather than statements offact. EDINA has acquired funding to enhance their clients with visualization legends that use these concepts as a consequence ofthis work. The guidance applies to the design of a wide range of legends and keys used in cartography and information visualization.",
"title": ""
},
{
"docid": "71cb3b40ff7097244e16f2378e30786e",
"text": "Predicting stock market movements is a well-known problem of interest. Now-a-days social media is perfectly representing the public sentiment and opinion about current events. Especially, Twitter has attracted a lot of attention from researchers for studying the public sentiments. Stock market prediction on the basis of public sentiments expressed on Twitter has been an intriguing field of research. Previous studies have concluded that the aggregate public mood collected from Twitter may well be correlated with Dow Jones Industrial Average Index (DJIA). The thesis of this work is to observe how well the changes in stock prices of a company, the rises and falls, are correlated with the public opinions being expressed in tweets about that company. Understanding author's opinion from a piece of text is the objective of sentiment analysis. The present paper have employed two different textual representations, Word2vec and N-gram, for analyzing the public sentiments in tweets. In this paper, we have applied sentiment analysis and supervised machine learning principles to the tweets extracted from Twitter and analyze the correlation between stock market movements of a company and sentiments in tweets. In an elaborate way, positive news and tweets in social media about a company would definitely encourage people to invest in the stocks of that company and as a result the stock price of that company would increase. At the end of the paper, it is shown that a strong correlation exists between the rise and falls in stock prices with the public sentiments in tweets.",
"title": ""
},
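The record above combines n-gram or Word2vec tweet representations with supervised learning to relate tweet sentiment to stock movements. Below is an illustrative sketch of the n-gram branch only; the tweets and movement labels are fabricated for shape and are not the study's data.

```python
# Illustrative sketch: represent tweets as bags of n-grams and train a
# classifier on next-day stock movement labels. Data below is invented
# for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "great earnings, the product launch looks strong",
    "ceo resigns amid lawsuit, outlook uncertain",
    "record sales this quarter, analysts upbeat",
    "factory recall announced, shares under pressure",
]
movement = [1, 0, 1, 0]   # 1 = price rose next day, 0 = price fell (hypothetical)

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),   # unigrams + bigrams
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, movement)
print(model.predict(["strong demand and upbeat guidance"]))
```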
{
"docid": "ddc2675904b26e1c023d6f605751251d",
"text": "The impacts of high technology industries have been growing increasingly to technological innovations and global economic developments, while the concerns in sustainability are calling for facilitating green materials and cleaner production in the industrial value chains. Today’s manufacturing companies are not striving for individual capacities but for the effective working with green supply chains. However, in addition to environmental and social objectives, cost and economic feasibility has become one of the most critical success factors for improving supply chain management with green component procurement collaboration, especially for the electronics OEM (original equipment manufacturing) companies whose procurement costs often make up a very high proportion of final product prices. This paper presents a case study from the systems perspective by using System Dynamics simulation analysis and statistical validations with empirical data. Empirical data were collected from Taiwanese manufacturing chains—among the world’s largest manufacturing clusters of high technology components and products—and their global green suppliers to examine the benefits of green component procurement collaborations in terms of shared costs and improved shipping time performance. Two different supply chain collaboration models, from multi-layer ceramic capacitor (MLCC) and universal serial bus 3.0 (USB 3.0) cable procurements, were benchmarked and statistically validated. The results suggest that the practices of collaborative planning for procurement quantity and accurate fulfillment by suppliers are significantly related to cost effectiveness and shipping time efficiency. Although the price negotiation of upstream raw materials for the collaborative suppliers has no statistically significant benefit to the shipping time efficiency, the shared cost reduction of component procurement is significantly positive for supply chain collaboration among green manufacturers. Managerial implications toward sustainable supply chain management were also discussed.",
"title": ""
}
] |
scidocsrr
|
ec943c96ae901006d23e2af49a7284cc
|
Modeling Semantic Relevance for Question-Answer Pairs in Web Social Communities
|
[
{
"docid": "68693c88cb62ce28514344d15e9a6f09",
"text": "New types of document collections are being developed by various web services. The service providers keep track of non-textual features such as click counts. In this paper, we present a framework to use non-textual features to predict the quality of documents. We also show our quality measure can be successfully incorporated into the language modeling-based retrieval model. We test our approach on a collection of question and answer pairs gathered from a community based question answering service where people ask and answer questions. Experimental results using our quality measure show a significant improvement over our baseline.",
"title": ""
},
{
"docid": "1ce647f5e36c07745c512ed856a9d517",
"text": "This paper describes a discussion-bot that provides answers to students' discussion board questions in an unobtrusive and human-like way. Using information retrieval and natural language processing techniques, the discussion-bot identifies the questioner's interest, mines suitable answers from an annotated corpus of 1236 archived threaded discussions and 279 course documents and chooses an appropriate response. A novel modeling approach was designed for the analysis of archived threaded discussions to facilitate answer extraction. We compare a self-out and an all-in evaluation of the mined answers. The results show that the discussion-bot can begin to meet students' learning requests. We discuss directions that might be taken to increase the effectiveness of the question matching and answer extraction algorithms. The research takes place in the context of an undergraduate computer science course.",
"title": ""
}
] |
[
{
"docid": "8988a648262b396bf20489eb92f32110",
"text": "Hyaluronic acid (HA), the main component of extracellular matrix, is considered one of the key players in the tissue regeneration process. It has been proven to modulate via specific HA receptors, inflammation, cellular migration, and angiogenesis, which are the main phases of wound healing. Studies have revealed that most HA properties depend on its molecular size. High molecular weight HA displays anti-inflammatory and immunosuppressive properties, whereas low molecular weight HA is a potent proinflammatory molecule. In this review, the authors summarize the role of HA polymers of different molecular weight in tissue regeneration and provide a short overview of main cellular receptors involved in HA signaling. In addition, the role of HA in 2 major steps of wound healing is examined: inflammation and the angiogenesis process. Finally, the antioxidative properties of HA are discussed and its possible clinical implication presented.",
"title": ""
},
{
"docid": "97578b3a8f5f34c96e7888f273d4494f",
"text": "We analyze the use, advantages, and drawbacks of graph kernels in chemoin-formatics, including a comparison of kernel-based approaches with other methodology, as well as examples of applications. Kernel-based machine learning [1], now widely applied in chemoinformatics, delivers state-of-the-art performance [2] in tasks like classification and regression. Molecular graph kernels [3] are a recent development where kernels are defined directly on the molecular structure graph. This allows the adaptation of methods from graph theory to structure graphs and their direct use with kernel learning algorithms. The main advantage of kernel learning, the so-called “kernel trick”, allows for a systematic, computationally feasible, and often globally optimal search for non-linear patterns, as well as the direct use of non-numerical inputs such as strings and graphs. A drawback is that solutions are expressed indirectly in terms of similarity to training samples, and runtimes that are typically quadratic or cubic in the number of training samples. Graph kernels [3] are positive semidefinite functions defined directly on graphs. The most important types are based on random walks, subgraph patterns, optimal assignments, and graphlets. Molecular structure graphs have strong properties that can be exploited [4], e.g., they are undirected, have no self-loops and no multiple edges, are connected (except for salts), annotated, often planar in the graph-theoretic sense, and their vertex degree is bounded by a small constant. In many applications, they are small. Many graph kernels are generalpurpose, some are suitable for structure graphs, and a few have been explicitly designed for them. We present three exemplary applications of the iterative similarity optimal assignment kernel [5], which was designed for the comparison of small structure graphs: The discovery of novel agonists of the peroxisome proliferator-activated receptor g [6] (ligand-based virtual screening), the estimation of acid dissociation constants [7] (quantitative structure-property relationships), and molecular de novo design [8].",
"title": ""
},
{
"docid": "9eb0d79f9c13f30f53fb7214b337880d",
"text": "Many real world problems can be solved with Artificial Neural Networks in the areas of pattern recognition, signal processing and medical diagnosis. Most of the medical data set is seldom complete. Artificial Neural Networks require complete set of data for an accurate classification. This paper dwells on the various missing value techniques to improve the classification accuracy. The proposed system also investigates the impact on preprocessing during the classification. A classifier was applied to Pima Indian Diabetes Dataset and the results were improved tremendously when using certain combination of preprocessing techniques. The experimental system achieves an excellent classification accuracy of 99% which is best than before.",
"title": ""
},
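The record above pairs missing-value handling and other preprocessing with a neural network classifier on the Pima Indian Diabetes data. The sketch below shows one common way to wire such a pipeline; the data, the zero-as-missing convention and the imputation strategy are assumptions for illustration, not the paper's exact setup.

```python
# Sketch of a preprocessing-plus-ANN pipeline in the spirit of the abstract:
# impute missing values, standardise, then classify with a small MLP.
# The data below is synthetic and stands in for the Pima dataset.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((768, 8))                  # placeholder feature matrix
X[rng.random(X.shape) < 0.1] = np.nan     # simulate missing entries
y = rng.integers(0, 2, size=768)          # placeholder outcome labels

pipeline = make_pipeline(
    SimpleImputer(strategy="median"),     # one of several imputation choices
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
print(cross_val_score(pipeline, X, y, cv=5).mean())
```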
{
"docid": "7604942913928dfb0e0ef486eccbcf8b",
"text": "We connect two scenarios in structured learning: adapting a parser trained on one corpus to another annotation style, and projecting syntactic annotations from one language to another. We propose quasisynchronous grammar (QG) features for these structured learning tasks. That is, we score a aligned pair of source and target trees based on local features of the trees and the alignment. Our quasi-synchronous model assigns positive probability to any alignment of any trees, in contrast to a synchronous grammar, which would insist on some form of structural parallelism. In monolingual dependency parser adaptation, we achieve high accuracy in translating among multiple annotation styles for the same sentence. On the more difficult problem of cross-lingual parser projection, we learn a dependency parser for a target language by using bilingual text, an English parser, and automatic word alignments. Our experiments show that unsupervised QG projection improves on parses trained using only highprecision projected annotations and far outperforms, by more than 35% absolute dependency accuracy, learning an unsupervised parser from raw target-language text alone. When a few target-language parse trees are available, projection gives a boost equivalent to doubling the number of target-language trees. ∗The first author would like to thank the Center for Intelligent Information Retrieval at UMass Amherst. We would also like to thank Noah Smith and Rebecca Hwa for helpful discussions and the anonymous reviewers for their suggestions for improving the paper.",
"title": ""
},
{
"docid": "479e962b8ed5d1b8f03280b209c27249",
"text": "A feedforward network is proposed which lends itself to cost-effective implementations in digital hardware and has a fast forward-pass capability. It differs from the conventional model in restricting its synapses to the set {−1, 0, 1} while allowing unrestricted offsets. Simulation results on the ‘onset of diabetes’ data set and a handwritten numeral recognition database indicate that the new network, despite having strong constraints on its synapses, has a generalization performance similar to that of its conventional counterpart. I. Hardware Implementation Ease of hardware implementation is the key feature that distinguishes the feedforward network from competing statistical and machine learning techniques. The most distinctive characteristic of the graph of that network is its homogeneous modularity. Because of its modular architecture, the natural implementation of this network is a parallel one, whether in software or in hardware. The digital, electronic implementation holds considerable interest – the modular architecture of the feedforward network is well matched with VLSI design tools and therefore lends itself to cost-effective mass production. There is, however, a hitch which makes this union between the feedforward network and digital hardware far from ideal: the network parameters (weights) and its internal functions (dot product, activation functions) are inherently analog. It is too much to expect a network trained in an analog (or high-resolution digital) environment to behave satisfactorily when transplanted into typically low-resolution hardware. Use of the digital approximation of a continuous activation function, and/or range-limiting of weights should, in general, lead to an unsatisfactory approximation. The solution to this problem may lie in a bottom-up approach – instead of trying to fit a trained, but inherently analog network in digital hardware, train the network in such a way that it is suitable for direct digital implementation after training. This approach is the basis of the network proposed here. This network, with synapses from {−1, 0, 1} and continuous offsets, can be formed without using a conventional multiplier. This reduction in complexity, plus the fact that all synapses require no more than a single bit each for storage, makes these networks very attractive. It is possible that the severity of the {−1, 0, 1} restric1Offsets are also known as thresholds as well as biases. 2A zero-valued synapse indicates the absence of a synapse! tion may weaken the approximation capability of this network, however our experiments on classification tasks indicate otherwise. Comfort is also provided by a result on approximation in C(R) [4]. That result, the Multiplier-Free Network (MFN) existence theorem, guarantees that networks with input-layer synapses from the set {−1, 1}, no output-layer synapses, unrestricted offsets, and a single hidden layer of neurons requiring only sign adjustment, addition, and hyperbolic tangent activation functions, can approximate all functions of one variable with any desired accuracy. The constraints placed upon the network weights may result in an increase in the necessary number of hidden neurons required to achieve a given degree of accuracy on most learning tasks. It should also be noted that the hardware implementation benefits are valid only when the MFN has been trained, as the learning task still requires high-resolution arithmetic. This makes the MFN unsuitable for in-situ learning. 
Moreover, high-resolution offsets and activation function are required during training and for the trained network. II. Approximation in C(R) Consider the function f̂ :",
"title": ""
},
{
"docid": "49b550eb7f99baef1f9accd9da9a26f4",
"text": "Answer selection is an essential step in a question answering (QA) system. Traditional methods for this task mainly focus on developing linguistic features that are limited in practice. With the great success of deep learning method in distributed text representation, deep learning-based answer selection approaches have been well investigated, which mainly employ only one neural network, i.e., convolutional neural network (CNN) or long short term memory (LSTM), leading to failures in extracting some rich sentence features. Thus, in this paper, we propose a collaborative learning-based answer selection model (QA-CL), where we deploy a parallel training architecture to collaboratively learn the initial word vector matrix of the sentence by CNN and bidirectional LSTM (BiLSTM) at the same time. In addition, we extend our model by incorporating the sentence embedding generated by the QA-CL model into a joint distributed sentence representation using a strong unsupervised baseline weight removal (WR), i.e., the QA-CLWR model. We evaluate our proposals on a popular QA dataset, InsuranceQA. The experimental results indicate that our proposed answer selection methods can produce a better performance compared with several strong baselines. Finally, we investigate the models’ performance with respect to different question types and find that question types with a medium number of questions have a better and more stable performance than those types with too large or too small number of questions.",
"title": ""
},
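The record above learns a joint sentence representation by training a CNN and a BiLSTM in parallel over the same embedded input for answer selection. The PyTorch sketch below illustrates one plausible parallel encoder of that kind; the layer sizes, pooling and cosine scoring are placeholders and do not reproduce the paper's exact architecture or training objective.

```python
# Rough sketch of a parallel CNN + BiLSTM sentence encoder for answer
# selection: both branches read the same embedded sentence and their pooled
# outputs are concatenated into one joint representation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(emb_dim, hidden // 2, batch_first=True,
                              bidirectional=True)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        e = self.emb(tokens)                      # (batch, seq_len, emb_dim)
        c = F.relu(self.conv(e.transpose(1, 2)))  # CNN branch
        c = c.max(dim=2).values                   # max-pool over time
        h, _ = self.bilstm(e)                     # BiLSTM branch
        h = h.max(dim=1).values                   # max-pool over time
        return torch.cat([c, h], dim=1)           # joint sentence vector

encoder = ParallelEncoder(vocab_size=10000)
question = torch.randint(0, 10000, (2, 12))       # dummy token ids
answer = torch.randint(0, 10000, (2, 30))
score = F.cosine_similarity(encoder(question), encoder(answer))
print(score.shape)                                # one score per QA pair
```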
{
"docid": "b447aec2deaa67788560efe1d136be31",
"text": "This paper addresses the design, construction and control issues of a novel biomimetic robotic dolphin equipped with mechanical flippers, based on an engineered propulsive model. The robotic dolphin is modeled as a three-segment organism composed of a rigid anterior body, a flexible rear body and an oscillating fluke. The dorsoventral movement of the tail produces the thrust and bending of the anterior body in the horizontal plane enables turning maneuvers. A dualmicrocontroller structure is adopted to drive the oscillating multi-link rear body and the mechanical flippers. Experimental results primarily confirm the effectiveness of the dolphin-like movement in propulsion and maneuvering.",
"title": ""
},
{
"docid": "a4037343fa0df586946d8034b0bf8a5b",
"text": "Security researchers are applying software reliability models to vulnerability data, in an attempt to model the vulnerability discovery process. I show that most current work on these vulnerability discovery models (VDMs) is theoretically unsound. I propose a standard set of definitions relevant to measuring characteristics of vulnerabilities and their discovery process. I then describe the theoretical requirements of VDMs and highlight the shortcomings of existing work, particularly the assumption that vulnerability discovery is an independent process.",
"title": ""
},
{
"docid": "8e794530be184686a49e5ced6ac6521d",
"text": "A key feature of the immune system is its ability to induce protective immunity against pathogens while maintaining tolerance towards self and innocuous environmental antigens. Recent evidence suggests that by guiding cells to and within lymphoid organs, CC-chemokine receptor 7 (CCR7) essentially contributes to both immunity and tolerance. This receptor is involved in organizing thymic architecture and function, lymph-node homing of naive and regulatory T cells via high endothelial venules, as well as steady state and inflammation-induced lymph-node-bound migration of dendritic cells via afferent lymphatics. Here, we focus on the cellular and molecular mechanisms that enable CCR7 and its two ligands, CCL19 and CCL21, to balance immunity and tolerance.",
"title": ""
},
{
"docid": "83dec7aa3435effc3040dfb08cb5754a",
"text": "This paper examines the relationship between annual report readability and firm performance and earnings persistence. This is motivated by the Securities and Exchange Commission’s plain English disclosure regulations that attempt to make corporate disclosures easier to read for ordinary investors. I measure the readability of public company annual reports using both the Fog Index from computational linguistics and the length of the document. I find that the annual reports of firms with lower earnings are harder to read (i.e., they have higher Fog and are longer). Moreover, the positive earnings of firms with annual reports that are easier to read are more persistent. This suggests that managers may be opportunistically choosing the readability of annual reports to hide adverse information from investors.",
"title": ""
},
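The record above measures annual report readability with the Fog Index and document length. The Gunning Fog formula is 0.4 times the sum of the average sentence length (in words) and the percentage of complex words (three or more syllables). The sketch below computes it with a crude vowel-group syllable heuristic rather than a dictionary; the sample text is invented.

```python
# Minimal Gunning Fog index calculator.
# Fog = 0.4 * (average sentence length + 100 * proportion of complex words),
# where complex words have three or more syllables. The syllable counter is
# a rough vowel-group heuristic, not a dictionary-based one.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fog_index(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    complex_words = [w for w in words if count_syllables(w) >= 3]
    avg_sentence_len = len(words) / len(sentences)
    pct_complex = 100.0 * len(complex_words) / len(words)
    return 0.4 * (avg_sentence_len + pct_complex)

sample = ("The registrant consolidated its operations. "
          "Management anticipates incremental profitability improvements.")
print(round(fog_index(sample), 2))
```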
{
"docid": "9a5f5df096ad76798791e7bebd6f8c93",
"text": "Organisational Communication, in today’s organizations has not only become far more complex and varied but has become an important factor for overall organizational functioning and success. The way the organization communicates with its employees is reflected in morale, motivation and performance of the employees. The objective of the present paper is to explore the interrelationship between communication and motivation and its overall impact on employee performance. The paper focuses on the fact that communication in the workplace can take many forms and has a lasting effect on employee motivation. If employees feel that communication from management is effective, it can lead to feelings of job satisfaction, commitment to the organisation and increased trust in the workplace. This study was conducted through a comprehensive review and critical analysis of the research and literature focused upon the objectives of the paper. It also enumerates the results of a study of organizational communication and motivational practices followed at a large manufacturing company, Vanaz Engineers Ltd., based at Pune, to support the hypothesis propounded in the paper.",
"title": ""
},
{
"docid": "29152062efc341bf3ce55d41cf13bdcf",
"text": "In this report we discuss the findings of a Web-based questionnaire aimed at discovering both patterns of use of videoconferencing systems within HP and the reasons people give for either not using, or for using such systems. The primary motivation was to understand these issues for the purpose of designing new kinds of technology to support remote work rather than as an investigation into HP’s internal processes. The questionnaire, filled out via the Web by 4532 people across HP, showed that most participants (68%) had not taken part in a videoconference within the last 3 years, and only 3% of the sample were frequent users. Of those who had used videoconference systems, the main benefits were perceived to be the ability to: see people they had never met before, see facial expressions and gestures, and follow conversations with multiple participants more easily. The main problems that users of videoconference technology perceived were: the high overhead of setting up and planning videoconferencing meetings, a lack of a widespread base of users, the perception that videoconference technology did not add value over existing communication tools, and quality and reliability issues. Non-users indicated that the main barriers were lack of access to videoconference facilities and tools and a perception that they did not need to use this tool because other tools were satisfactory. The findings from this study in a real work setting are related to findings in the research literature, and implications for system design and research are identified.",
"title": ""
},
{
"docid": "61eb4d0961242bd1d1e59d889a84f89d",
"text": "Understanding and forecasting the health of an online community is of great value to its owners and managers who have vested interests in its longevity and success. Nevertheless, the association between community evolution and the behavioural patterns and trends of its members is not clearly understood, which hinders our ability of making accurate predictions of whether a community is flourishing or diminishing. In this paper we use statistical analysis, combined with a semantic model and rules for representing and computing behaviour in online communities. We apply this model on a number of forum communities from Boards.ie to categorise behaviour of community members over time, and report on how different behaviour compositions correlate with positive and negative community growth in these forums.",
"title": ""
},
{
"docid": "d7fd9c273c0b26a309b84e0d99143557",
"text": "Remote sensing is one of the most common ways to extract relevant information about Earth and our environment. Remote sensing acquisitions can be done by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. According to the sensor, a variety of information about the Earth's surface can be obtained. The data acquired by these sensors can provide information about the structure (optical, synthetic aperture radar), elevation (LiDAR), and material content (multispectral and hyperspectral) of the objects in the image. Once considered together their complementarity can be helpful for characterizing land use (urban analysis, precision agriculture), damage detection (e.g., in natural disasters such as floods, hurricanes, earthquakes, oil spills in seas), and give insights to potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allows one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropological effects (urban sprawl, deforestation), climate changes (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the data fusion contests, organized by the IEEE Geoscience and Remote Sensing Society since 2006. We will report on the outcomes of these contests, presenting the multimodal sets of data made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What were the improvements/new opportunities offered by the fusion? What were the objectives to be addressed and the reported solutions? And from this, what will be the next challenges?",
"title": ""
},
{
"docid": "49e2963e84967100deee8fc810e053ba",
"text": "We have developed a method for rigidly aligning images of tubes. This paper presents an evaluation of the consistency of that method for three-dimensional images of human vasculature. Vascular images may contain alignment ambiguities, poorly corresponding vascular networks, and non-rigid deformations, yet the Monte Carlo experiments presented in this paper show that our method registers vascular images with sub-voxel consistency in a matter of seconds. Furthermore, we show that the method's insensitivity to non-rigid deformations enables the localization, quantification, and visualization of those deformations. Our method aligns a source image with a target image by registering a model of the tubes in the source image directly with the target image. Time can be spent to extract an accurate model of the tubes in the source image. Multiple target images can then be registered with that model without additional extractions. Our registration method builds upon the principles of our tubular object segmentation work that combines dynamic-scale central ridge traversal with radius estimation. In particular, our registration method's consistency stems from incorporating multi-scale ridge and radius measures into the model-image match metric. Additionally, the method's speed is due in part to the use of coarse-to-fine optimization strategies that are enabled by measures made during model extraction and by the parameters inherent to the model-image match metric.",
"title": ""
},
{
"docid": "a3421349059058a0c62105951e46435e",
"text": "It has been proposed that two amino acid substitutions in the transcription factor FOXP2 have been positively selected during human evolution due to effects on aspects of speech and language. Here, we introduce these substitutions into the endogenous Foxp2 gene of mice. Although these mice are generally healthy, they have qualitatively different ultrasonic vocalizations, decreased exploratory behavior and decreased dopamine concentrations in the brain suggesting that the humanized Foxp2 allele affects basal ganglia. In the striatum, a part of the basal ganglia affected in humans with a speech deficit due to a nonfunctional FOXP2 allele, we find that medium spiny neurons have increased dendrite lengths and increased synaptic plasticity. Since mice carrying one nonfunctional Foxp2 allele show opposite effects, this suggests that alterations in cortico-basal ganglia circuits might have been important for the evolution of speech and language in humans.",
"title": ""
},
{
"docid": "ee3d837390e1f53181cfb393a0af3cc6",
"text": "The telecommunications industry is highly competitive, which means that the mobile providers need a business intelligence model that can be used to achieve an optimal level of churners, as well as a minimal level of cost in marketing activities. Machine learning applications can be used to provide guidance on marketing strategies. Furthermore, data mining techniques can be used in the process of customer segmentation. The purpose of this paper is to provide a detailed analysis of the C.5 algorithm, within naive Bayesian modelling for the task of segmenting telecommunication customers behavioural profiling according to their billing and socio-demographic aspects. Results have been experimentally implemented.",
"title": ""
},
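The record above segments telecom customers with a decision-tree algorithm and naive Bayesian modelling over billing and socio-demographic attributes. The sketch below illustrates the general idea with scikit-learn's DecisionTreeClassifier (a stand-in, since C5.0 itself is not in scikit-learn) and GaussianNB; the feature columns and labels are invented.

```python
# Sketch: fit a decision tree (stand-in for C5.0) and a naive Bayes model on
# billing and socio-demographic attributes to predict a churn/segment label.
# Feature columns and values are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# columns: [monthly_bill, tenure_months, age, intl_minutes, support_calls]
rng = np.random.default_rng(1)
X = rng.random((1000, 5)) * [120, 72, 60, 300, 10]
y = (X[:, 0] / 120 + X[:, 4] / 10 + rng.normal(0, 0.3, 1000) > 1.0).astype(int)

for name, model in [("decision tree", DecisionTreeClassifier(max_depth=4)),
                    ("naive Bayes", GaussianNB())]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```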
{
"docid": "9ecd46e90ccd1db7daef14dd63fea8ee",
"text": "HISTORY AND EXAMINATION — A 13-year-old Caucasian boy (BMI 26.4 kg/m) presented with 3 weeks’ history of polyuria, polydipsia, and weight loss. His serum glucose (26.8 mmol/l), HbA1c (9.4%, normal 3.2–5.5) and fructosamine (628 mol/l, normal 205–285) levels were highly elevated (Fig. 1), and urinalysis showed glucosuria ( ) and ketonuria ( ) . He was HLA-DRB1* 0101,*0901, DRB4*01, DQA1*0101,03, and DQB1*0303,0501. Plasma Cpeptide, determined at a blood glucose of 17.0 mmol/l, was low (0.18 nmol/l). His previous history was unremarkable, and he did not take any medication. The patient received standard treatment with insulin, fluid, and electrolyte replacement and diabetes education. After an uneventful clinical course he was discharged on multiple-injection insulin therapy (total 0.9 units kg 1 day ) after 10 days. Subsequently, insulin doses were gradually reduced to 0.3 units kg 1 day , and insulin treatment was completely stopped after 11 months. Without further treatment, HbA1c and fasting glucose levels remained normal throughout the entire follow-up of currently 4.5 years. During oral glucose tolerance testing performed 48 months after diagnosis, he had normal fasting and 2-h levels of glucose (3.7 and 5.6 mmol/l, respectively), insulin (60.5 and 217.9 pmol/l, respectively), and C-peptide (0.36 and 0.99 nmol/l, respectively). His insulin sensitivity, as determined by insulin sensitivity index (composite) and homeostasis model assessment, was normal, and BMI remained unchanged. Serum autoantibodies to GAD65, insulin autoantibody-2, insulin, and islet cell antibodies were initially positive but showed a progressive decline or loss during follow-up. INVESTIGATION — T-cell antigen recognition and cytokine profiles were studied using a library of 21 preproinsulin (PPI) peptides (2). In the patient’s peripheral blood mononuclear cells (PBMCs), a high cumulative interleukin (IL)-10) secretion (201 pg/ml) was observed in response to PPI peptides, with predominant recognition of PPI44–60 and PPI49–65, while interferon (IFN)secretion was undetectable. In contrast, in PBMCs from a cohort of 12 type 1 diabetic patients without long-term remission (2), there was a dominant IFNresponse but low IL-10 secretion to PPI. Analysis of CD4 T–helper cell subsets revealed that IL-10 secretion was mostly attributable to the patient’s naı̈ve/recently activated CD45RA cells, while a strong IFNresponse was observed in CD45RA cells. CD45RA T-cells have been associated with regulatory T-cell function in diabetes, potentially capable of suppressing",
"title": ""
},
{
"docid": "15ddb8cb5e82e0efde197908420bb8d0",
"text": "In recent years, there has been much interest in learning Bayesian networks from data. Learning such models is desirable simply because there is a wide array of off-the-shelf tools that can apply the learned models as expert systems, diagnosis engines, and decision support systems. Practitioners also claim that adaptive Bayesian networks have advantages in their own right as a non-parametric method for density estimation, data analysis, pattern classification, and modeling. Among the reasons cited we find: their semantic clarity and understandability by humans, the ease of acquisition and incorporation of prior knowledge, the ease of integration with optimal decision-making methods, the possibility of causal interpretation of learned models, and the automatic handling of noisy and missing data. In spite of these claims, and the initial success reported recently, methods that learn Bayesian networks have yet to make the impact that other techniques such as neural networks and hidden Markov models have made in applications such as pattern and speech recognition. In this paper, we challenge the research community to identify and characterize domains where induction of Bayesian networks makes the critical difference, and to quantify the factors that are responsible for that difference. In addition to formalizing the challenge, we identify research problems whose solution is, in our view, crucial for meeting this challenge.",
"title": ""
}
] |
scidocsrr
|
9cea38e40a32bc16ea9583ee046812b8
|
Gamification as a Disruptive Factor in Software Process Improvement Initiatives
|
[
{
"docid": "95b825ee3290572189ba8d6957b6a307",
"text": "This paper proposes a working definition of the term gamification as the use of game design elements in non-game contexts. This definition is related to similar concepts such as serious games, serious gaming, playful interaction, and game-based technologies. Origins Gamification as a term originated in the digital media industry. The first documented uses dates back to 2008, but gamification only entered widespread adoption in the second half of 2010, when several industry players and conferences popularized it. It is also—still—a heavily contested term; even its entry into Wikipedia has been contested. Within the video game and digital media industry, discontent with some interpretations have already led designers to coin different terms for their own practice (e.g., gameful design) to distance themselves from recent negative connotations [13]. Until now, there has been hardly any academic attempt at a definition of gamification. Current uses of the word seem to fluctuate between two major ideas. The first is the increasing societal adoption and institutionalization of video games and the influence games and game elements have in shaping our everyday life and interactions. Game designer Jesse Schell summarized this as the trend towards a Gamepocalypse, \" when Copyright is held by the author/owner(s).",
"title": ""
},
{
"docid": "398f4bed0a54a0127fab16d5a07bfef1",
"text": "Gamification design is considered as the predictor of collaborative storytelling websites' success. Although aforementioned studies have mentioned a broad range of factors that may influence gamification, they neither depicted the actual design features nor relative attractiveness among them. This study aims to identify attractive gamification features for collaborative storytelling websites. We first constructed a hierarchical system structure of gamification design of collaborative storytelling websites and conducted a focus group interview with eighteen frequent users to identify 35gamification features. After that, this study determined the relative attractiveness of these gamification features by administrating an online survey to 6333 collaborative storytelling websites users. The results indicated that the top 10 most attractive gamification features could account for more than 50% of attractiveness among these 35 gamification features. The feature of unpredictable time pressure is important to website users, yet not revealed in previous relevant studies. Implications of the findings were discussed.",
"title": ""
},
{
"docid": "e5638848a3844d7edf7dae7115233771",
"text": "Interest in gamification is growing steadily. But as the underlying mechanisms of gamification are not well understood yet, a closer examination of a gamified activity's meaning and individual game design elements may provide more insights. We examine the effects of points -- a basic element of gamification, -- and meaningful framing -- acknowledging participants' contribution to a scientific cause, -- on intrinsic motivation and performance in an online image annotation task. Based on these findings, we discuss implications and opportunities for future research on gamification.",
"title": ""
}
] |
[
{
"docid": "fd392f5198794df04c70da6bc7fe2f0d",
"text": "Performance tuning in modern database systems requires a lot of expertise, is very time consuming and often misdirected. Tuning attempts often lack a methodology that has a holistic view of the database. The absence of historical diagnostic information to investigate performance issues at first occurrence exacerbates the whole tuning process often requiring that problems be reproduced before they can be correctly diagnosed. In this paper we describe how Oracle overcomes these challenges and provides a way to perform automatic performance diagnosis and tuning. We define a new measure called ‘Database Time’ that provides a common currency to gauge the performance impact of any resource or activity in the database. We explain how the Automatic Database Diagnostic Monitor (ADDM) automatically diagnoses the bottlenecks affecting the total database throughput and provides actionable recommendations to alleviate them. We also describe the types of performance measurements that are required to perform an ADDM analysis. Finally we show how ADDM plays a central role within Oracle 10g’s manageability framework to self-manage a database and provide a comprehensive tuning solution.",
"title": ""
},
{
"docid": "7974d0299ffcca73bb425fb72f463429",
"text": "The development of human gut microbiota begins as soon as the neonate leaves the protective environment of the uterus (or maybe in-utero) and is exposed to innumerable microorganisms from the mother as well as the surrounding environment. Concurrently, the host responses to these microbes during early life manifest during the development of an otherwise hitherto immature immune system. The human gut microbiome, which comprises an extremely diverse and complex community of microorganisms inhabiting the intestinal tract, keeps on fluctuating during different stages of life. While these deviations are largely natural, inevitable and benign, recent studies show that unsolicited perturbations in gut microbiota configuration could have strong impact on several features of host health and disease. Our microbiota undergoes the most prominent deviations during infancy and old age and, interestingly, our immune health is also in its weakest and most unstable state during these two critical stages of life, indicating that our microbiota and health develop and age hand-in-hand. However, the mechanisms underlying these interactions are only now beginning to be revealed. The present review summarizes the evidences related to the age-associated changes in intestinal microbiota and vice-versa, mechanisms involved in this bi-directional relationship, and the prospective for development of microbiota-based interventions such as probiotics for healthy aging.",
"title": ""
},
{
"docid": "ba56eda278458a7580e7c9356416f31a",
"text": "Objective:To study whether a cue-based clinical pathway for oral feeding initiation and advancement of premature infants would result in earlier achievement of full oral feeding.Study Design:Age of achievement of full oral intake was compared for two groups of preterm infants; a prospective study group vs historic cohort controls. Study infants had oral feedings managed by nurses using a clinical pathway that relied on infant behavioral readiness signs to initiate and advance oral feedings. Controls had oral feedings managed by physician orders.Result:Fifty-one infants (n=28 study and n=23 control) were studied. Gender distribution, gestational age, birth weight and ventilator days were not different between groups. Study infants reached full oral feedings 6 days earlier than controls (36±1 3/7 weeks of postmenstrual age (PMA) vs 36 6/7±1 4/7 weeks of PMA, P=0.02).Conclusion:The cue-based clinical pathway for oral feeding initiation and advancement of premature infants resulted in earlier achievement of full oral feeding.",
"title": ""
},
{
"docid": "18a86d2660d01974530549081b796482",
"text": "The strive for efficient and cost-effective photovoltaic (PV) systems motivated the power electronic design developed here. The work resulted in a dc–dc converter for module integration and distributed maximum power point tracking (MPPT) with a novel adaptive control scheme. The latter is essential for the combined features of high energy efficiency and high power quality over a wide range of operating conditions. The switching frequency is optimally modulated as a function of solar irradiance for power conversion efficiency maximization. With the rise of irradiance, the frequency is reduced to reach the conversion efficiency target. A search algorithm is developed to determine the optimal switching frequency step. Reducing the switching frequency may, however, compromise MPPT efficiency. Furthermore, it leads to increased ripple content. Therefore, to achieve a uniform high power quality under all conditions, interleaved converter cells are adaptively activated. The overall cost is kept low by selecting components that allow for implementing the functions at low cost. Simulation results show the high value of the module integrated converter for dc standalone and microgrid applications. A 400-W prototype was implemented at 0.14 Euro/W. Testing showed efficiencies above 95 %, taking into account all losses from power conversion, MPPT, and measurement and control circuitry.",
"title": ""
},
{
"docid": "b8bcd83f033587533d7502c54a2b67da",
"text": "The development of structural health monitoring (SHM) technology has evolved for over fifteen years in Hong Kong since the implementation of the “Wind And Structural Health Monitoring System (WASHMS)” on the suspension Tsing Ma Bridge in 1997. Five cable-supported bridges in Hong Kong, namely the Tsing Ma (suspension) Bridge, the Kap Shui Mun (cable-stayed) Bridge, the Ting Kau (cable-stayed) Bridge, the Western Corridor (cable-stayed) Bridge, and the Stonecutters (cable-stayed) Bridge, have been instrumented with sophisticated long-term SHM systems. These SHM systems mainly focus on the tracing of structural behavior and condition of the long-span bridges over their lifetime. Recently, a structural health monitoring and maintenance management system (SHM&MMS) has been designed and will be implemented on twenty-one sea-crossing viaduct bridges with a total length of 9,283 km in the Hong Kong Link Road (HKLR) of the Hong Kong – Zhuhai – Macao Bridge of which the construction commenced in mid-2012. The SHM&MMS gives more emphasis on durability monitoring of the reinforced concrete viaduct bridges in marine environment and integration of the SHM system and bridge maintenance management system. It is targeted to realize the transition from traditional corrective and preventive maintenance to condition-based maintenance (CBM) of in-service bridges. The CBM uses real-time and continuous monitoring data and monitoring-derived information on the condition of bridges (including structural performance and deterioration mechanisms) to identify when the actual maintenance is necessary and how cost-effective maintenance can be conducted. This paper outlines how to incorporate SHM technology into bridge maintenance strategy to realize CBM management of bridges.",
"title": ""
},
{
"docid": "81ee018bb1719d7963f057649c1c410e",
"text": "Solar and wind are the most promising among the renewable energy sources. The objective of this paper is to investigate the optimal techno-economically feasible model to mitigate a large scale demand in decentralized scenario. Solar, and wind energy potential in a hybrid scenario is closely assessed with an intermittent support from diesel generator and battery banks. Based on the general usage of appliances in a residential demand for 200 families that comprises of 1000 people a suitable load pattern is developed for this study. Demand in northern Europe predominantly constitutes of heating requirement in the form of water and space heating. The proposed model demonstrates convincing reliability in terms of techno-economical prospective. EnergyPRO, HOMER, and Balmorel software environments were used to simulate the models, results are assessed in the paper.",
"title": ""
},
{
"docid": "afbd0ecad829246ed7d6e1ebcebf5815",
"text": "Battery thermal management system (BTMS) is essential for electric-vehicle (EV) and hybrid-vehicle (HV) battery packs to operate effectively in all climates. Lithium-ion (Li-ion) batteries offer many advantages to the EV such as high power and high specific energy. However, temperature affects their performance, safety, and productive life. This paper is about the design and evaluation of a BTMS based on the Peltier effect heat pumps. The discharge efficiency of a 60-Ah prismatic Li-ion pouch cell was measured under different rates and different ambient temperature values. The obtained results were used to design a solid-state BTMS based on Peltier thermoelectric coolers (TECs). The proposed BTMS is then modeled and evaluated at constant current discharge in the laboratory. In addition, The BTMS was installed in an EV that was driven in the US06 cycle. The thermal response and the energy consumption of the proposed BTMS were satisfactory.",
"title": ""
},
{
"docid": "40fef2ba4ae0ecd99644cf26ed8fa37f",
"text": "Plant has plenty use in foodstuff, medicine and industry. And it is also vitally important for environmental protection. However, it is an important and difficult task to recognize plant species on earth. Designing a convenient and automatic recognition system of plants is necessary and useful since it can facilitate fast classifying plants, and understanding and managing them. In this paper, a leaf database from different plants is firstly constructed. Then, a new classification method, referred to as move median centers (MMC) hypersphere classifier, for the leaf database based on digital morphological feature is proposed. The proposed method is more robust than the one based on contour features since those significant curvature points are hard to find. Finally, the efficiency and effectiveness of the proposed method in recognizing different plants is demonstrated by experiments. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "226d8e68f0519ddfc9e288c9151b65f0",
"text": "Vector space embeddings can be used as a tool for learning semantic relationships from unstructured text documents. Among others, earlier work has shown how in a vector space of entities (e.g. different movies) fine-grained semantic relationships can be identified with directions (e.g. more violent than). In this paper, we use stacked denoising auto-encoders to obtain a sequence of entity embeddings that model increasingly abstract relationships. After identifying directions that model salient properties of entities in each of these vector spaces, we induce symbolic rules that relate specific properties to more general ones. We provide illustrative examples to demonstrate the potential of this ap-",
"title": ""
},
{
"docid": "d29b90dbce6f4dd7c2a3480239def8f9",
"text": "This paper presents a design of permanent magnet machines (PM), such as the permanent magnet axial flux generator for wind turbine generated direct current voltage base on performance requirements. However recent developments in rare earth permanent magnet materials and power electronic devices has awakened interest in alternative generator topologies that can be used to produce direct voltage from wind energy using rectifier circuit convert alternating current to direct current. In preliminary tests the input mechanical energy to drive the rotor of the propose generator. This paper propose a generator which can change mechanical energy into electrical energy with the generator that contains bar magnets move relative generated flux magnetic offset winding coils in stator component. The results show that the direct current output power versus rotor speed of generator in various applications. These benefits present the axial flux permanent magnet generator with generated direct voltage at rated power 1500 W.",
"title": ""
},
{
"docid": "702d38b3ddfd2d0a2f506acbad561f63",
"text": "Interactive theorem provers have been used extensively to reason about various software/hardware systems and mathematical theorems. The key challenge when using an interactive prover is finding a suitable sequence of proof steps that will lead to a successful proof requires a significant amount of human intervention. This paper presents an automated technique that takes as input examples of successful proofs and infers an Extended Finite State Machine as output. This can in turn be used to generate proofs of new conjectures. Our preliminary experiments show that the inferred models are generally accurate (contain few false-positive sequences) and that representing existing proofs in such a way can be very useful when guiding new ones.",
"title": ""
},
{
"docid": "9c8f6dddcb9bb099eea4433534cb40da",
"text": "There has been an increasing interest in the applications of polarimctric n~icrowavc radiometers for ocean wind remote sensing. Aircraft and spaceborne radiometers have found significant wind direction signals in sea surface brightness temperatures, in addition to their sensitivities on wind speeds. However, it is not yet understood what physical scattering mechanisms produce the observed wind direction dependence. To this encl, polari]nctric microwave emissions from wind-generated sea surfaces are investigated with a polarimctric two-scale scattering model of sea surfaces, which relates the directional wind-wave spectrum to passive microwave signatures of sea surfaces. T)leoretical azimuthal modulations are found to agree well with experimental observations foI all Stokes paranletcrs from nearnadir to 65° incidence angles. The up/downwind asymmetries of brightness temperatures are interpreted usiIlg the hydrodynamic modulation. The contributions of Bragg scattering by short waves, geometric optics scattering by long waves and sea foam are examined. The geometric optics scattering mechanism underestimates the directicmal signals in the first three Stokes paranletcrs, and most importantly it predicts no signals in the fourth Stokes parameter (V), in disagreement with experimental datfi. In contrast, the Bragg scattering and and contributes to most of the wind direction signals from the two-scale model correctly predicts the phase changes of tl}e up/crosswind asymmetries in 7j U from middle to high incidence angles. The accuracy of the Bragg scattering theory for radiometric emission from water ripples is corroborated by the numerical Monte Carlo simulation of rough surface scattering. ‘I’his theoretical interpretation indicates the potential use of ]Jolarimctric brightness temperatures for retrieving the directional wave spectrum of capillary waves.",
"title": ""
},
{
"docid": "63905ede4b4ed90bcf70cd557517392f",
"text": "A CMOS dB-linear variable gain amplifier (VGA) with a novel I/Q tuning loop for dc-offset cancellation is presented. The CMOS dB-linear VGA provides a variable gain of 60 dB while maintaining its 3-dB bandwidth greater than 2.5 MHz. A novel exponential circuit is proposed to obtain the dB-linear gain control characteristics. Nonideal effects on dB linearity are analyzed and the methods for improvement are suggested. A varying-bandwidth LPF is employed to achieve fast settling. The chip is fabricated in a 0.35m CMOS technology and the measurement results demonstrate the good dB linearity of the proposed VGA and show that the tuning loop can effectively remove dc offset and suppress I/Q mismatch effects simultaneously.",
"title": ""
},
{
"docid": "fcba75f01ef1b311d5c4ecb4cf952620",
"text": "With the increasing interest in large-scale, high-resolution and real-time geographic information system (GIS) applications and spatial big data processing, traditional GIS is not efficient enough to handle the required loads due to limited computational capabilities.Various attempts have been made to adopt high performance computation techniques from different applications, such as designs of advanced architectures, strategies of data partition and direct parallelization method of spatial analysis algorithm, to address such challenges. This paper surveys the current state of parallel GIS with respect to parallel GIS architectures, parallel processing strategies, and relevant topics. We present the general evolution of the GIS architecture which includes main two parallel GIS architectures based on high performance computing cluster and Hadoop cluster. Then we summarize the current spatial data partition strategies, key methods to realize parallel GIS in the view of data decomposition and progress of the special parallel GIS algorithms. We use the parallel processing of GRASS as a case study. We also identify key problems and future potential research directions of parallel GIS.",
"title": ""
},
{
"docid": "42a81e39b411ba4613ff22090097548c",
"text": "We present a neural network method for review rating prediction in this paper. Existing neural network methods for sentiment prediction typically only capture the semantics of texts, but ignore the user who expresses the sentiment. This is not desirable for review rating prediction as each user has an influence on how to interpret the textual content of a review. For example, the same word (e.g. “good”) might indicate different sentiment strengths when written by different users. We address this issue by developing a new neural network that takes user information into account. The intuition is to factor in user-specific modification to the meaning of a certain word. Specifically, we extend the lexical semantic composition models and introduce a userword composition vector model (UWCVM), which effectively captures how user acts as a function affecting the continuous word representation. We integrate UWCVM into a supervised learning framework for review rating prediction, and conduct experiments on two benchmark review datasets. Experimental results demonstrate the effectiveness of our method. It shows superior performances over several strong baseline methods.",
"title": ""
},
{
"docid": "af82ea560b98535f3726be82a2d23536",
"text": "Influence Maximization is an extensively-studied problem that targets at selecting a set of initial seed nodes in the Online Social Networks (OSNs) to spread the influence as widely as possible. However, it remains an open challenge to design fast and accurate algorithms to find solutions in large-scale OSNs. Prior Monte-Carlo-simulation-based methods are slow and not scalable, while other heuristic algorithms do not have any theoretical guarantee and they have been shown to produce poor solutions for quite some cases. In this paper, we propose hop-based algorithms that can easily scale to millions of nodes and billions of edges. Unlike previous heuristics, our proposed hop-based approaches can provide certain theoretical guarantees. Experimental evaluations with real OSN datasets demonstrate the efficiency and effectiveness of our algorithms.",
"title": ""
},
{
"docid": "f7797c2392419c0bd46908e86bcab61b",
"text": "Data in the maritime domain is growing at an unprecedented rate, e.g., terabytes of oceanographic data are collected every month, and petabytes of data are already publicly available. Big data from heterogeneous sources such as sensors, buoys, vessels, and satellites could potentially fuel a large number of interesting applications for environmental protection, security, fault prediction, shipping routes optimization, and energy production. However, because of several challenges related to big data and the high heterogeneity of the data sources, such applications are still underdeveloped and fragmented. In this paper, we analyze challenges and requirements related to big maritime data applications and propose a scalable data management solution. A big data architecture meeting these requirements is described, and examples of its implementation in concrete scenarios are provided. The related data value chain and use cases in the context of a European project, BigDataOcean, are also described.",
"title": ""
},
{
"docid": "ee820c65fd029b5ba1c4afdfe0126800",
"text": "In this paper, a new centrality called local Fiedler vector centrality (LFVC) is proposed to analyze the connectivity structure of a graph. It is associated with the sensitivity of algebraic connectivity to node or edge removals and features distributed computations via the associated graph Laplacian matrix. We prove that LFVC can be related to a monotonic submodular set function that guarantees that greedy node or edge removals come within a factor 1-1/e of the optimal non-greedy batch removal strategy. Due to the close relationship between graph topology and community structure, we use LFVC to detect deep and overlapping communities on real-world social network datasets. The results offer new insights on community detection by discovering new significant communities and key members in the network. Notably, LFVC is also shown to significantly outperform other well-known centralities for community detection.",
"title": ""
},
{
"docid": "b4dd76179734fb43e74c9c1daef15bbf",
"text": "Breast cancer represents one of the diseases that make a high number of deaths every year. It is the most common type of all cancers and the main cause of women’s deaths worldwide. Classification and data mining methods are an effective way to classify data. Especially in medical field, where those methods are widely used in diagnosis and analysis to make decisions. In this paper, a performance comparison between different machine learning algorithms: Support Vector Machine (SVM), Decision Tree (C4.5), Naive Bayes (NB) and k Nearest Neighbors (k-NN) on the Wisconsin Breast Cancer (original) datasets is conducted. The main objective is to assess the correctness in classifying data with respect to efficiency and effectiveness of each algorithm in terms of accuracy, precision, sensitivity and specificity. Experimental results show that SVM gives the highest accuracy (97.13%) with lowest error rate. All experiments are executed within a simulation environment and conducted in WEKA data mining tool. © 2016 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the Conference Program Chairs.",
"title": ""
}
] |
scidocsrr
|
12f3c0269c9ea7caf257cbe9af75ef17
|
Image Alignment and Stitching
|
[
{
"docid": "0e4ab7e416ec8293865d8c12b8ba34c4",
"text": "Estimation techniques in computer vision applications must estimate accurate model parameters despite small-scale noise in the data, occasional large-scale measurement errors (outliers), and measurements from multiple populations in the same data set. Increasingly, robust estimation techniques, some borrowed from the statistics literature and others described in the computer vision literature, have been used in solving these parameter estimation problems. Ideally, these techniques should effectively ignore the outliers and measurements from other populations, treating them as outliers, when estimating the parameters of a single population. Two frequently used techniques are least-median of squares (LMS) [P. J. Rousseeuw, J. Amer. Statist. Assoc., 79 (1984), pp. 871–880] and M-estimators [Robust Statistics: The Approach Based on Influence Functions, F. R. Hampel et al., John Wiley, 1986; Robust Statistics, P. J. Huber, John Wiley, 1981]. LMS handles large fractions of outliers, up to the theoretical limit of 50% for estimators invariant to affine changes to the data, but has low statistical efficiency. M-estimators have higher statistical efficiency but tolerate much lower percentages of outliers unless properly initialized. While robust estimators have been used in a variety of computer vision applications, three are considered here. In analysis of range images—images containing depth or X, Y , Z measurements at each pixel instead of intensity measurements—robust estimators have been used successfully to estimate surface model parameters in small image regions. In stereo and motion analysis, they have been used to estimate parameters of what is called the “fundamental matrix,” which characterizes the relative imaging geometry of two cameras imaging the same scene. Recently, robust estimators have been applied to estimating a quadratic image-to-image transformation model necessary to create a composite, “mosaic image” from a series of images of the human retina. In each case, a straightforward application of standard robust estimators is insufficient, and carefully developed extensions are used to solve the problem.",
"title": ""
}
] |
[
{
"docid": "6b53dc83581b3832c39d9f5675d182e3",
"text": "Single image layer separation aims to divide the observed image into two independent components according to special task requirements and has been widely used in many vision and multimedia applications. Because this task is fundamentally ill-posed, most existing approaches tend to design complex priors on the separated layers. However, the cost function with complex prior regularization is hard to optimize. The performance is also compromised by fixed iteration schemes and less data fitting ability. More importantly, it is also challenging to design a unified framework to separate image layers for different applications. To partially mitigate the above limitations, we develop a flexible optimization unrolling technique to incorporate deep architectures into iterations for adaptive image layer separation. Specifically, we first design a general energy model with implicit priors and adopt the widely used alternating direction method of multiplier (ADMM) to establish our basic iteration scheme. By unrolling with residual convolution architectures, we successfully obtain a simple, flexible, and data-dependent image separation method. Extensive experiments on the tasks of rain streak removal and reflection removal validate the effectiveness of our approach.",
"title": ""
},
{
"docid": "0551e9faef769350102a404fa0b61dc1",
"text": "Lignocellulosic biomass is a complex biopolymer that is primary composed of cellulose, hemicellulose, and lignin. The presence of cellulose in biomass is able to depolymerise into nanodimension biomaterial, with exceptional mechanical properties for biocomposites, pharmaceutical carriers, and electronic substrate's application. However, the entangled biomass ultrastructure consists of inherent properties, such as strong lignin layers, low cellulose accessibility to chemicals, and high cellulose crystallinity, which inhibit the digestibility of the biomass for cellulose extraction. This situation offers both challenges and promises for the biomass biorefinery development to utilize the cellulose from lignocellulosic biomass. Thus, multistep biorefinery processes are necessary to ensure the deconstruction of noncellulosic content in lignocellulosic biomass, while maintaining cellulose product for further hydrolysis into nanocellulose material. In this review, we discuss the molecular structure basis for biomass recalcitrance, reengineering process of lignocellulosic biomass into nanocellulose via chemical, and novel catalytic approaches. Furthermore, review on catalyst design to overcome key barriers regarding the natural resistance of biomass will be presented herein.",
"title": ""
},
{
"docid": "3f4c1474f79a4d3b179d2a8391719d5f",
"text": "An unresolved challenge for all kind of temporal data is the reliable anomaly detection, especially when adaptability is required in the case of non-stationary time series or when the nature of future anomalies is unknown or only vaguely defined. Most of the current anomaly detection algorithms follow the general idea to classify an anomaly as a significant deviation from the prediction. In this paper we present a comparative study where several online anomaly detection algorithms are compared on the large Yahoo Webscope S5 anomaly benchmark. We show that a relatively Simple Online Regression Anomaly Detector (SORAD) is quite successful compared to other anomaly detectors. We discuss the importance of several adaptive and online elements of the algorithm and their influence on the overall anomaly detection accuracy.",
"title": ""
},
{
"docid": "d35082d022280d25eea3e98596b70839",
"text": "OVERVIEW 795 DEFINING PROPERTIES OF THE BIOECOLOGICAL MODEL 796 Proposition I 797 Proposition II 798 FROM THEORY TO RESEARCH DESIGN: OPERATIONALIZING THE BIOECOLOGICAL MODEL 799 Developmental Science in the Discovery Mode 801 Different Paths to Different Outcomes: Dysfunction versus Competence 803 The Role of Experiments in the Bioecological Model 808 HOW DO PERSON CHARACTERISTICS INFLUENCE LATER DEVELOPMENT? 810 Force Characteristics as Shapers of Development 810 Resource Characteristics of the Person as Shapers of Development 812 Demand Characteristics of the Person as Developmental Inf luences 812 THE ROLE OF FOCUS OF ATTENTION IN PROXIMAL PROCESSES 813 PROXIMAL PROCESSES IN SOLO ACTIVITIES WITH OBJECTS AND SYMBOLS 814 THE MICROSYSTEM MAGNIFIED: ACTIVITIES, RELATIONSHIPS, AND ROLES 814 Effects of the Physical Environment on Psychological Development 814 The Mother-Infant Dyad as a Context of Development 815 BEYOND THE MICROSYSTEM 817 The Expanding Ecological Universe 818 Nature-Nurture Reconceptualized: A Bioecological Interpretation 819 TIME IN THE BIOECOLOGICAL MODEL: MICRO-, MESO-, AND MACROCHRONOLOGICAL SYSTEMS 820 FROM RESEARCH TO REALITY 822 THE BIOECOLOGICAL MODEL: A DEVELOPMENTAL ASSESSMENT 824 REFERENCES 825",
"title": ""
},
{
"docid": "cc63fa999bed5abf05a465ae7313c053",
"text": "In this paper, we consider the development of a rotorcraft micro aerial vehicle (MAV) system capable of vision-based state estimation in complex environments. We pursue a systems solution for the hardware and software to enable autonomous flight with a small rotorcraft in complex indoor and outdoor environments using only onboard vision and inertial sensors. As rotorcrafts frequently operate in hover or nearhover conditions, we propose a vision-based state estimation approach that does not drift when the vehicle remains stationary. The vision-based estimation approach combines the advantages of monocular vision (range, faster processing) with that of stereo vision (availability of scale and depth information), while overcoming several disadvantages of both. Specifically, our system relies on fisheye camera images at 25 Hz and imagery from a second camera at a much lower frequency for metric scale initialization and failure recovery. This estimate is fused with IMU information to yield state estimates at 100 Hz for feedback control. We show indoor experimental results with performance benchmarking and illustrate the autonomous operation of the system in challenging indoor and outdoor environments.",
"title": ""
},
{
"docid": "2f41ff2d68fa75ef5e91695d19684fbb",
"text": "Wireless Sensor Networking is one of the most promising technologies that have applications ranging from health care to tactical military. Although Wireless Sensor Networks (WSNs) have appealing features (e.g., low installation cost, unattended network operation), due to the lack of a physical line of defense (i.e., there are no gateways or switches to monitor the information flow), the security of such networks is a big concern, especially for the applications where confidentiality has prime importance. Therefore, in order to operate WSNs in a secure way, any kind of intrusions should be detected before attackers can harm the network (i.e., sensor nodes) and/or information destination (i.e., data sink or base station). In this article, a survey of the state-of-the-art in Intrusion Detection Systems (IDSs) that are proposed for WSNs is presented. Firstly, detailed information about IDSs is provided. Secondly, a brief survey of IDSs proposed for Mobile Ad-Hoc Networks (MANETs) is presented and applicability of those systems to WSNs are discussed. Thirdly, IDSs proposed for WSNs are presented. This is followed by the analysis and comparison of each scheme along with their advantages and disadvantages. Finally, guidelines on IDSs that are potentially applicable to WSNs are provided. Our survey is concluded by highlighting open research issues in the field.",
"title": ""
},
{
"docid": "57c0f9c629e4fdcbb0a4ca2d4f93322f",
"text": "Chronic exertional compartment syndrome and medial tibial stress syndrome are uncommon conditions that affect long-distance runners or players involved in team sports that require extensive running. We report 2 cases of bilateral chronic exertional compartment syndrome, with medial tibial stress syndrome in identical twins diagnosed with the use of a Kodiag monitor (B. Braun Medical, Sheffield, United Kingdom) fulfilling the modified diagnostic criteria for chronic exertional compartment syndrome as described by Pedowitz et al, which includes: (1) pre-exercise compartment pressure level >15 mm Hg; (2) 1 minute post-exercise pressure >30 mm Hg; and (3) 5 minutes post-exercise pressure >20 mm Hg in the presence of clinical features. Both patients were treated with bilateral anterior fasciotomies through minimal incision and deep posterior fasciotomies with tibial periosteal stripping performed through longer anteromedial incisions under direct vision followed by intensive physiotherapy resulting in complete symptomatic recovery. The etiology of chronic exertional compartment syndrome is not fully understood, but it is postulated abnormal increases in intramuscular pressure during exercise impair local perfusion, causing ischemic muscle pain. No familial predisposition has been reported to date. However, some authors have found that no significant difference exists in the relative perfusion, in patients, diagnosed with chronic exertional compartment syndrome. Magnetic resonance images of affected compartments have indicated that the pain is not due to ischemia, but rather from a disproportionate oxygen supply versus demand. We believe this is the first report of chronic exertional compartment syndrome with medial tibial stress syndrome in twins, raising the question of whether there is a genetic predisposition to the causation of these conditions.",
"title": ""
},
{
"docid": "7883bbf8857d65712b96601486ba40e8",
"text": "In this paper we study the use of convolutional neural networks (convnets) for the task of pedestrian detection. Despite their recent diverse successes, convnets historically underperform compared to other pedestrian detectors. We deliberately omit explicitly modelling the problem into the network (e.g. parts or occlusion modelling) and show that we can reach competitive performance without bells and whistles. In a wide range of experiments we analyse small and big convnets, their architectural choices, parameters, and the influence of different training data, including pretraining on surrogate tasks. We present the best convnet detectors on the Caltech and KITTI dataset. On Caltech our convnets reach top performance both for the Caltech1x and Caltech10x training setup. Using additional data at training time our strongest convnet model is competitive even to detectors that use additional data (optical flow) at test time.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "7704eb15f3c576e2575e18613ce312df",
"text": "Objects for detection usually have distinct characteristics in different sub-regions and different aspect ratios. However, in prevalent two-stage object detection methods, Region-of-Interest (RoI) features are extracted by RoI pooling with little emphasis on these translation-variant feature components. We present feature selective networks to reform the feature representations of RoIs by exploiting their disparities among sub-regions and aspect ratios. Our network produces the sub-region attention bank and aspect ratio attention bank for the whole image. The RoI-based sub-region attention map and aspect ratio attention map are selectively pooled from the banks, and then used to refine the original RoI features for RoI classification. Equipped with a lightweight detection subnetwork, our network gets a consistent boost in detection performance based on general ConvNet backbones (ResNet-101, GoogLeNet and VGG-16). Without bells and whistles, our detectors equipped with ResNet-101 achieve more than 3% mAP improvement compared to counterparts on PASCAL VOC 2007, PASCAL VOC 2012 and MS COCO datasets.",
"title": ""
},
{
"docid": "cae17f5fd60221c404ba3d3224889539",
"text": "We analyze marijuana use by college undergraduates before and after legalization of recreational marijuana. Using survey data from the National College Health Assessment, we show that students at Washington State University experienced a significant increase in marijuana use after legalization. This increase is larger than would be predicted by national trends. The change is strongest among females, Black students, and Hispanic students. The increase for underage students is as much as for legal-age students. We find no corresponding changes in the consumption of tobacco, alcohol, or other drugs.",
"title": ""
},
{
"docid": "c1c3b9393dd375b241f69f3f3cbf5acd",
"text": "The purpose of trust and reputation systems is to strengthen the quality of markets and communities by providing an incentive for good behaviour and quality services, and by sanctioning bad behaviour and low quality services. However, trust and reputation systems will only be able to produce this effect when they are sufficiently robust against strategic manipulation or direct attacks. Currently, robustness analysis of TRSs is mostly done through simple simulated scenarios implemented by the TRS designers themselves, and this can not be considered as reliable evidence for how these systems would perform in a realistic environment. In order to set robustness requirements it is important to know how important robustness really is in a particular community or market. This paper discusses research challenges for trust and reputation systems, and proposes a research agenda for developing sound and reliable robustness principles and mechanisms for trust and reputation systems.",
"title": ""
},
{
"docid": "3066ab508d8f844ae5c40ee748692bb6",
"text": "Serious sequelae of youth depression, plus recent concerns over medication safety, prompt growing interest in the effects of youth psychotherapy. In previous meta-analyses, effect sizes (ESs) have averaged .99, well above conventional standards for a large effect and well above mean ES for other conditions. The authors applied rigorous analytic methods to the largest study sample to date and found a mean ES of .34, not superior but significantly inferior to mean ES for other conditions. Cognitive treatments (e.g., cognitive-behavioral therapy) fared no better than noncognitive approaches. Effects showed both generality (anxiety was reduced) and specificity (externalizing problems were not), plus short- but not long-term holding power. Youth depression treatments appear to produce effects that are significant but modest in their strength, breadth, and durability.",
"title": ""
},
{
"docid": "4bc73a7e6a6975ba77349cac62a96c18",
"text": "BACKGROUND\nIn May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear.\n\n\nMETHODS\nThe study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Mean and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play.\n\n\nRESULTS\nOn average, the reviewed apps included 5 behavior change techniques (range 2-8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores were found.\n\n\nCONCLUSIONS\nThe present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number was not different for paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions.",
"title": ""
},
{
"docid": "69dc1947a79cf56d049dea434bdcb540",
"text": "ASCE Standard 7, ‘‘Minimum Design Loads for Buildings and Other Structures,’’ has contained provisions for load combinations and load factors suitable for load and resistance factor design since its 1982 edition. Research in wind engineering in the intervening years has raised questions regarding the wind load factor 1.3 and load combinations in which the wind load appears in ASCE 7-95. This paper presents revised statistical models of wind load parameters based on more recent research and a Delphi, and reassesses the wind load combinations in ASCE Standard 7 probabilistically. The current approach to specifying wind loads in ASCE 7 does not lead to uniform reliability in inland and hurricane-prone regions of the country. It is recommended that the factor accounting for wind directionality effects should be separated from the load factor and presented in a separate table in the wind load section, that the wind load factor should be increased from 1.3 to approximately 1.5 or 1.6 to achieve reliability consistent with designs governed by gravity load combinations, and that the exposure classification procedure in ASCE Standard 7 should be revised to reduce the current high error rate in assigning exposures.",
"title": ""
},
{
"docid": "94fbd5c6f1347bb04ab8d9f6e768f8df",
"text": "(3) because ‖(xa,va)‖2 ≤ L and ηt only has a finite variance. For the first term on the right-hand side in Eq (2), if the regularization parameter λ1 is sufficiently large, the Hessian matrix of the loss function specified in the paper is positive definite at the optimizer based on the property of alternating least square (Uschmajew 2012). The estimation of Θ and va is thus locally q-linearly convergent to the optimizer. This indicates that for every 1 > 0, we have, ‖v̂a,t+1 − v a‖2 ≤ (q1 + 1)‖v̂a,t − v a‖2 (4) where 0 < q1 < 1. As a conclusion, we have for any δ > 0, with probability at least 1− δ,",
"title": ""
},
{
"docid": "6f6cd699a625748522e5e10b6e310e69",
"text": "Research on organizational justice has focused primarily on the receivers of just and unjust treatment. Little is known about why managers adhere to or violate rules of justice in the first place. The authors introduce a model for understanding justice rule adherence and violation. They identify both cognitive motives and affective motives that explain why managers adhere to and violate justice rules. They also draw distinctions among the justice rules by specifying which rules offer managers more or less discretion in their execution. They then describe how motives and discretion interact to influence justice-relevant actions. Finally, the authors incorporate managers' emotional reactions to consider how their actions may change over time. Implications of the model for theory, research, and practice are discussed.",
"title": ""
},
{
"docid": "2f7ba7501fcf379b643867c7d5a9d7bf",
"text": "The connections in many networks are not merely binary entities, either present or not, but have associated weights that record their strengths relative to one another. Recent studies of networks have, by and large, steered clear of such weighted networks, which are often perceived as being harder to analyze than their unweighted counterparts. Here we point out that weighted networks can in many cases be analyzed using a simple mapping from a weighted network to an unweighted multigraph, allowing us to apply standard techniques for unweighted graphs to weighted ones as well. We give a number of examples of the method, including an algorithm for detecting community structure in weighted networks and a simple proof of the maximum-flow-minimum-cut theorem.",
"title": ""
}
] |
scidocsrr
|
b9eeaa7f8e98d31a1f4399366e20d380
|
Jype - a program visualization and programming exercise tool for Python
|
[
{
"docid": "05c82f9599b431baa584dd1e6d7dfc3e",
"text": "It is a common conception that CS1 is a very difficult course and that failure rates are high. However, until now there has only been anecdotal evidence for this claim. This article reports on a survey among institutions around the world regarding failure rates in introductory programming courses. The article describes the design of the survey and the results. The number of institutions answering the call for data was unfortunately rather low, so it is difficult to make firm conclusions. It is our hope that this article can be the starting point for a systematic collection of data in order to find solid proof of the actual failure and pass rates of CS1.",
"title": ""
}
] |
[
{
"docid": "f0db74061a2befca317f9333a0712ab9",
"text": "This paper tries to give a gentle introduction to deep learning in medical image processing, proceeding from theoretical foundations to applications. We first discuss general reasons for the popularity of deep learning, including several major breakthroughs in computer science. Next, we start reviewing the fundamental basics of the perceptron and neural networks, along with some fundamental theory that is often omitted. Doing so allows us to understand the reasons for the rise of deep learning in many application domains. Obviously medical image processing is one of these areas which has been largely affected by this rapid progress, in particular in image detection and recognition, image segmentation, image registration, and computer-aided diagnosis. There are also recent trends in physical simulation, modeling, and reconstruction that have led to astonishing results. Yet, some of these approaches neglect prior knowledge and hence bear the risk of producing implausible results. These apparent weaknesses highlight current limitations of deep ()learning. However, we also briefly discuss promising approaches that might be able to resolve these problems in the future.",
"title": ""
},
{
"docid": "25f39a66710db781f4354f0da5974d61",
"text": "With the rapid development of economy in China over the past decade, air pollution has become an increasingly serious problem in major cities and caused grave public health concerns in China. Recently, a number of studies have dealt with air quality and air pollution. Among them, some attempt to predict and monitor the air quality from different sources of information, ranging from deployed physical sensors to social media. These methods are either too expensive or unreliable, prompting us to search for a novel and effective way to sense the air quality. In this study, we propose to employ the state of the art in computer vision techniques to analyze photos that can be easily acquired from online social media. Next, we establish the correlation between the haze level computed directly from photos with the official PM 2.5 record of the taken city at the taken time. Our experiments based on both synthetic and real photos have shown the promise of this image-based approach to estimating and monitoring air pollution.",
"title": ""
},
{
"docid": "d6e565c0123049b9e11692b713674ccf",
"text": "Now days many research is going on for text summari zation. Because of increasing information in the internet, these kind of research are gaining more a nd more attention among the researchers. Extractive text summarization generates a brief summary by extracti ng proper set of sentences from a document or multi ple documents by deep learning. The whole concept is to reduce or minimize the important information prese nt in the documents. The procedure is manipulated by Rest rict d Boltzmann Machine (RBM) algorithm for better efficiency by removing redundant sentences. The res tricted Boltzmann machine is a graphical model for binary random variables. It consist of three layers input, hidden and output layer. The input data uni formly distributed in the hidden layer for operation. The experimentation is carried out and the summary is g enerated for three different document set from different kno wledge domain. The f-measure value is the identifie r to the performance of the proposed text summarization meth od. The top responses of the three different knowle dge domain in accordance with the f-measure are 0.85, 1 .42 and 1.97 respectively for the three document se t.",
"title": ""
},
{
"docid": "b9b08d97b084d0b73f7aba409dbda67c",
"text": "This paper presents a 1V low-voltage high speed frequency divider-by-2 which is fabricated in a standard 0.18μm TSMC RF CMOS process. Employing parallel current switching topology, the 2:1 frequency divider operates up to 6.5GHz while consuming 4.64mA current with test buffers at a supply voltage of 1V, and the chip area of the core circuit is 0.065×0.055mm2.",
"title": ""
},
{
"docid": "9a291c4683d56fa2cbfbbba349b8a336",
"text": "Biometric technologies focus on the verification and identification of humans using their possessed biological (anatomical, physiological and behavioral) properties. T his paper overview current advances in biometric technologies , as well as some aspects of system integration, privacy and security and accompanying problems such as testing and evaluation.",
"title": ""
},
{
"docid": "64a3fec90138f6786dd8257a5ecd73e4",
"text": "Unlabeled high-dimensional text-image web news data are produced every day, presenting new challenges to unsupervised feature selection on multi-view data. State-of-the-art multi-view unsupervised feature selection methods learn pseudo class labels by spectral analysis, which is sensitive to the choice of similarity metric for each view. For text-image data, the raw text itself contains more discriminative information than similarity graph which loses information during construction, and thus the text feature can be directly used for label learning, avoiding information loss as in spectral analysis. We propose a new multi-view unsupervised feature selection method in which image local learning regularized orthogonal nonnegative matrix factorization is used to learn pseudo labels and simultaneously robust joint $l_{2,1}$-norm minimization is performed to select discriminative features. Cross-view consensus on pseudo labels can be obtained as much as possible. We systematically evaluate the proposed method in multi-view text-image web news datasets. Our extensive experiments on web news datasets crawled from two major US media channels: CNN and FOXNews demonstrate the efficacy of the new method over state-of-the-art multi-view and single-view unsupervised feature selection methods.",
"title": ""
},
{
"docid": "bc06c989afd9f2e5cfe788e5d3455748",
"text": "The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly less parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.",
"title": ""
},
{
"docid": "350a756b9c14fa08d6a15fdbe89530ec",
"text": "Both the tasks of multi-person human pose estimation and pose tracking in videos are quite challenging. Existing methods can be categorized into two groups: top-down and bottom-up approaches. In this paper, following the top-down approach, we aim to build a strong baseline system with three modules: human candidate detector, singleperson pose estimator and human pose tracker. Firstly, we choose a generic object detector among state-of-the-art methods to detect human candidates. Then, cascaded pyramid network is used to estimate the corresponding human pose. Finally, we use a flow-based pose tracker to render keypoint-association across frames, i.e., assigning each human candidate a unique and temporally-consistent id, for the multi-target pose tracking purpose. We conduct extensive ablative experiments to validate various choices of models and configurations. We take part in two ECCV’18 PoseTrack challenges: pose estimation and pose tracking.",
"title": ""
},
{
"docid": "b606e856aacf69fe82f7f23cb4a118d1",
"text": "Multiple-input multiple output (MIMO) communication architecture has recently emerged as a new paradigm for wireless communications in rich multipath environment, which has spectral efficiencies far beyond those offered by conventional techniques. The channel capacity of the MIMO architecture in independent Rayleigh channels scales linearly as the number of antennas. However, the correlation of a real-world wireless channel may result in a substantial degradation of the MIMO architecture performance. In this letter, we investigate the MIMO channel capacity in correlated channels using the exponential correlation matrix model. We prove that, for this model, an increase in correlation is equivalent to a decrease in signal-to-noise ratio (SNR). For example, r=0.7 is the same as 3-dB decrease in SNR.",
"title": ""
},
{
"docid": "024265b0b1872dd89d875dd5d3df5b78",
"text": "In this paper, we present a novel system to analyze human body motions for action recognition task from two sets of features using RGBD videos. The Bag-of-Features approach is used for recognizing human action by extracting local spatialtemporal features and shape invariant features from all video frames. These feature vectors are computed in four steps: Firstly, detecting all interest keypoints from RGB video frames using Speed-Up Robust Features and filters motion points using Motion History Image and Optical Flow, then aligned these motion points to the depth frame sequences. Secondly, using a Histogram of orientation gradient descriptor for computing the features vector around these points from both RGB and depth channels, then combined these feature values in one RGBD feature vector. Thirdly, computing Hu-Moment shape features from RGBD frames, fourthly, combining the HOG features with Hu-moments features in one feature vector for each video action. Finally, the k-means clustering and the multi-class K-Nearest Neighbor is used for the classification task. This system is invariant to scale, rotation, translation, and illumination. All tested are utilized on a dataset that is available to the public and used often in the community. By using this new feature combination method improves performance on actions with low movement and reach recognition rates superior to other publications of the dataset. Keywords—RGBD Videos; Feature Extraction; k-means Clustering; KNN (K-Nearest Neighbor)",
"title": ""
},
{
"docid": "af7b908727301da9b231b5a554e71143",
"text": "The social psychological literature on threat and defense is fragmented. Groups of researchers have focused on distinct threats, such as mortality, uncertainty, uncontrollability, or meaninglessness, and have developed separate theoretical frameworks for explaining the observed reactions. In the current chapter, we attempt to integrate old and new research, proposing both a taxonomy of variation and a common motivational process underlying people’s reactions to threats. Following various kinds of threats, people often turn to abstract conceptions of reality—they invest more extremely in belief systems and worldviews, social identities, goals, and ideals. We suggest that there are common motivational processes that underlie the similar reactions to all of these diverse kinds of threats. We propose that (1) all of the threats present people with discrepancies that immediately activate basic neural processes related to anxiety. (2) Some categories of defenses are more proximal and symptom-focused, and result directly from anxious arousal and heightened attentional vigilance associated with anxious states. (3) Other kinds of defenses operatemore distally andmute anxiety by activating approach-oriented states. (4) Depending on the salient dispositional and situational affordances, these distal, approach-oriented reactions vary in the extent to which they (a) resolve the original discrepancy or are merely palliative; (b) are concrete or abstract; (c) are personal or social. We present results from social neuroscience and standard social psychological experiments that converge on a general process model of threat and defense. Various “threats,” such as personal uncertainty, mortality salience, loss of control, perceptual surprises, and goal conflicts, cause people to heighten commitment to their goals, ideals, social relations, identifications, ideologies, and worldviews. Why do such seemingly unrelated threats lead to this similar set of diverse reactions? We and others have investigated phenomena such as the ones listed above for many years under different theories of threat and defense. In this chapter, we describe how our various research programs converge to provide an integrative general model of threat and defense processes. Although different approaches have offered different conceptual frameworks to understand threat and defense, a shared process model seems possible if we look at these phenomena from both social psychological and neural perspectives. Defensive reactions to threat follow a specific time course and can be mapped onto neural, experiential, and behavioral correlates. We propose that all threats involve the experience of a discrepancy. This discrepancy subsequently activates neural processes related to anxiety, driving a variety of proximal defenses related to attentional vigilance and avoidance motivation. Subsequent distal defenses then serve to activate neural processes related to approach motivation that downregulate the neural processes related to anxiety. We argue that depending on individual traits and salient associations and norms, people use an array of defensive strategies to activate these sanguine, approach-oriented states. In this chapter, we temporarily set aside the long-standing debate about the way different threats might affect different psychological needs (symbolic immortality, control, self-worth, certainty, self-integrity, meaning, etc.) and how different kinds of defenses might restore them. 
Instead, we build on the simple hypothesis that discrepancies arouse anxiety and thereby motivate diverse phenomena that activate approach-related states that relieve the anxiety. 1. THEORIES EXPLAINING PEOPLE’S DEFENSIVE REACTIONS TO THREAT Social psychological research on threat and defense first proliferated with cognitive dissonance theory (CDT; Festinger, 1957), which focused on the aversive arousal arising from discrepant experiences that conflict with relevant cognitions (e.g., smoking despite knowledge of its dangers; engaging in counter-attitudinal behavior). Conflicting thoughts and actions are still considered the basis of dissonance arousal (Gawronski, 2012; Harmon-Jones & Mills, 1999). In the current threat and defense literature, cognitive dissonance themes persist across the various theoretical perspectives and form a central element in our integrative model. Specifically, we hold that any experience that is discrepant with prevailing cognitions or motivations arouses anxious vigilance and motivates efforts to reduce this arousal by means of reactive thoughts and behaviors. In the first part of this chapter, before explicating our general process model, we will provide perspective by reviewing some prominent theories that have tried to account for diverse defensive reactions to threats. 1.1. Theories focusing on need for certainty, self-esteem, and social identity A variety of social psychological theories evolved from CDT to focus on uncertainty-related threats. Like CDT, these certainty theories emphasize the need to supplant aversive, “nonfitting cognitions” with consonant ones, and focus on need for cognitive clarity and consistency. Lay epistemic theory (Kruglanski & Webster, 1996), self-verification theory (Swann & Read, 1981), and theories of uncertainty management (Van den Bos, Poortvliet, Maas, Miedema, & Van den Ham, 2005), compensatory conviction (McGregor, Zanna, Holmes, & Spencer, 2001), and uncertainty reduction (Hogg, 2007) emphasize that this need for self-relevant clarity and cognitive closure is bolstered by consensual social validation and identification. When faced with uncertainty about themselves or their environment, people defensively restore certainty, often in unrelated domains with the confidence-inducing help of social consensus and group identification (Hogg, 2007; Kruglanski, Pierro, Mannetti, & De Grada, 2006). For example, personal uncertainty threats increase in-group identification, in-group bias, defense of cultural worldviews, and exaggerated consensus estimates (Hogg, Sherman, Dierselhuis, Maitner, & Moffitt, 2007; McGregor, Nail, Marigold, & Kang, 2005; McGregor et al., 2001; Van den Bos, 2009). At around the same time as consistency theories were proliferating, another family of theories, rooted in neo-analytic ideas of ego-defense (Freud, 1967; Horney, 1945), gained popularity. These theories focus on self-worth and ego-needs. They emphasize self-esteem as the fundamental resource that people protect with compensatory defenses and include theories of egocentricity (Beauregard & Dunning, 1998; Dunning & Hayes, 1996; Tesser, 2000), self-evaluation maintenance (Sedikides, 1993; Tesser, 1988), and the totalitarian ego (Greenwald, 1980). Consensual social validation and identification was also often viewed as playing an important role in the maintenance of self-esteem through others, for example, basking in reflected glory (Cialdini et al., 1976), or being part of a winning team (Sherman & Kim, 2005). 
The close linkage (Baumgardner, 1990; Campbell, 1990) and substitutability of self-clarity and self-esteem was taken by self-affirmation theory (Steele, 1988) as evidence for a more general motive for self-integrity—a sense of the “moral and adaptive adequacy of the self.” If an experience undermines self-viability for whatever reason, then defensive compensatory efforts will be recruited in any available domain of clarity or worth, even relating to group memberships (Fein & Spencer, 1997), to restore a positive",
"title": ""
},
{
"docid": "9be5c8c53ca6a10316f2100d3acc0c5b",
"text": "Bioreactors provide a rapid and efficient plant propagation system for many agricultural and forestry species, utilizing liquid media to avoid intensive manual handling. Large-scale liquid cultures have been used for micropropagation through organogenesis or somatic embryogenesis pathways. Various types of bioreactors with gas-sparged mixing are suitable for the production of clusters of buds, meristems or protocorms. A simple glass bubble-column bioreactor for the proliferation of ornamental and vegetable crop species resulted in biomass increase of 3 to 6-fold in 3–4 weeks. An internal loop bioreactor was used for asparagus, celery and cucumber embryogenic cultures. However, as the biomass increased, the mixing and circulation were not optimal and growth was reduced. A disposable pre-sterilized plastic bioreactor (2–5-l volume) was used for the proliferation of meristematic clusters of several ornamental, vegetable and woody plant species. The plastic bioreactor induced minimal shearing and foaming, resulting in an increase in biomass as compared to the glass bubble-column bioreactor. A major issue related to the use of liquid media in bioreactors is hyperhydricity, that is, morphogenic malformation. Liquid cultures impose stress signals that are expressed in developmental aberrations. Submerged tissues exhibit oxidative stress, with elevated concentrations of reactive oxygen species associated with a change in antioxidant enzyme activity. These changes affect the anatomy and physiology of the plants and their survival. Malformation was controlled by adding growth retardants to decrease rapid proliferation. Growth retardants ancymidol or paclobutrazol reduced water uptake during cell proliferation, decreased vacuolation and intercellular spaces, shortened the stems and inhibited leaf expansion, inducing the formation of clusters. Using a two-stage bioreactor process, the medium was changed in the second stage to a medium lacking growth retardants to induce development of the meristematic clusters into buds or somatic embryos. Cluster biomass increased 10–15-fold during a period of 25–30 days depending on the species. Potato bud clusters cultured in 1.5 1 of medium in a 2-l capacity bioreactor, increased during 10–30 days. Poplar in vitro roots regenerated buds in the presence of thidiazuron (TDZ); the biomass increased 12-fold in 30 days. Bioreactor-regenerated clusters were separated with a manual cutter, producing small propagule units that formed shoots and initiated roots. Clusters of buds or meristematic nodules with reduced shoots, as well as arrested leaf growth, had less distortion and were optimal for automated cutting and dispensing. In tuber-, bulb- and corm-producing plants, growth retardants and elevated sucrose concentrations in the media were found to enhance storage organ formation, providing a better propagule for transplanting or storage. Bioreactor-cultures have several advantages compared with agar-based cultures, with a better control of the contact of the plant tissue with the culture medium, and optimal nutrient and growth regulator supply, as well as aeration and medium circulation, the filtration of the medium and the scaling-up of the cultures. Micropropagation in bioreactors for optimal plant production will depend on a better understanding of plant responses to signals from the microenvironment and on specific culture manipulation to control the morphogenesis of plants in liquid cultures.",
"title": ""
},
{
"docid": "7d25c646a8ce7aa862fba7088b8ea915",
"text": "Neuro-dynamic programming (NDP for short) is a relatively new class of dynamic programming methods for control and sequential decision making under uncertainty. These methods have the potential of dealing with problems that for a long time were thought to be intractable due to either a large state space or the lack of an accurate model. They combine ideas from the fields of neural networks, artificial intelligence, cognitive science, simulation, and approximation theory. We will delineate the major conceptual issues, survey a number of recent developments, describe some computational experience, and address a number of open questions. We consider systems where decisions are made in stages. The outcome of each decision is not fully predictable but can be anticipated to some extent before the next decision is made. Each decision results in some immediate cost but also affects the context in which future decisions are to be made and therefore affects the cost incurred in future stages. Dynamic programming (DP for short) provides a mathematical formalization of the tradeoff between immediate and future costs. Generally, in DP formulations there is a discrete-time dynamic system whose state evolves according to given transition probabilities that depend on a decision/control u. In particular, if we are in state i and we choose decision u, we move to state j with given probability pij(u). Simultaneously with this transition, we incur a cost g(i, u, j). In comparing, however, the available decisions u, it is not enough to look at the magnitude of the cost g(i, u, j); we must also take into account how desirable the next state j is. We thus need a way to rank or rate states j. This is done by using the optimal cost (over all remaining stages) starting from state j, which is denoted by J∗(j). These costs can be shown to",
"title": ""
},
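The neuro-dynamic programming passage above introduces its quantities only in prose: transition probabilities pij(u), stage costs g(i, u, j), and the optimal cost-to-go J*(j). A minimal value-iteration sketch can make those quantities concrete; the three-state, two-action problem, the discount factor, and every number below are illustrative assumptions rather than anything specified in the passage, which is concerned with approximating this computation when exact DP is intractable.

```python
# Minimal value-iteration sketch for the DP formalism described above.
# The 3-state, 2-action problem and the discount factor are invented for
# illustration; NDP methods approximate J* instead of tabulating it.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9

# p[u, i, j] = probability of moving from state i to state j under decision u.
p = np.array([
    [[0.8, 0.2, 0.0], [0.0, 0.7, 0.3], [0.1, 0.0, 0.9]],  # decision u = 0
    [[0.5, 0.5, 0.0], [0.2, 0.3, 0.5], [0.0, 0.4, 0.6]],  # decision u = 1
])
# g[u, i, j] = immediate cost of choosing u in state i and landing in state j.
g = np.array([
    [[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [2.0, 0.0, 1.0]],
    [[0.5, 1.5, 0.0], [1.0, 1.0, 2.0], [0.0, 2.0, 0.5]],
])

J = np.zeros(n_states)            # current estimate of the optimal cost-to-go
for _ in range(1000):
    # Q[u, i] = expected immediate cost + discounted cost-to-go of next state.
    Q = np.einsum('uij,uij->ui', p, g) + gamma * (p @ J)
    J_next = Q.min(axis=0)        # pick the cheapest decision in every state
    if np.max(np.abs(J_next - J)) < 1e-9:
        J = J_next
        break
    J = J_next

policy = Q.argmin(axis=0)
print("approximate J*:", J)
print("greedy policy :", policy)
```

In an NDP method, the table J would be replaced by a trained approximator (for example a neural network) and the expectations would be estimated from simulated transitions, but the Bellman backup being approximated is exactly the one computed in the loop above.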
{
"docid": "bc7c5ab8ec28e9a5917fc94b776b468a",
"text": "Reasonable house price prediction is a meaningful task, and the house clustering is an important process in the prediction. In this paper, we propose the method of Multi-Scale Affinity Propagation(MSAP) aggregating the house appropriately by the landmark and the facility. Then in each cluster, using Linear Regression model with Normal Noise(LRNN) predicts the reasonable price, which is verified by the increasing number of the renting reviews. Experiments show that the precision of the reasonable price prediction improved greatly via the method of MSAP.",
"title": ""
},
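The house-price abstract above describes a two-stage pipeline (cluster the listings, then fit a linear model with Gaussian noise inside each cluster) without spelling out the algorithmic details. The sketch below is only a rough illustration of that pipeline: it uses scikit-learn's standard AffinityPropagation as a stand-in for the paper's multi-scale variant, and the features, prices, and damping value are invented.

```python
# Rough sketch of "cluster listings, then fit one linear model per cluster".
# Standard AffinityPropagation stands in for the paper's multi-scale variant
# (MSAP); the synthetic listings and the feature choice are assumptions.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Fake listings: [distance to landmark, distance to facility] in two blobs.
offsets = rng.choice([0.0, 5.0], size=(200, 1))
X = rng.normal(size=(200, 2)) + offsets
# Fake prices from a simple linear rule plus Gaussian noise, matching the
# "linear regression with normal noise" (LRNN) assumption.
prices = 100.0 - 3.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=1.0, size=200)

ap = AffinityPropagation(damping=0.9, random_state=0).fit(X)
labels = ap.labels_

# One least-squares model per cluster.
models = {k: LinearRegression().fit(X[labels == k], prices[labels == k])
          for k in np.unique(labels)}

# Route a new listing to its cluster's model to predict a price.
x_new = np.array([[0.5, 4.0]])
k_new = int(ap.predict(x_new)[0])
print("cluster:", k_new, "predicted price:", float(models[k_new].predict(x_new)[0]))
```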
{
"docid": "bb0dce17b5810ebd7173ea35545c3bf6",
"text": "Five studies demonstrated that highly guilt-prone people may avoid forming interdependent partnerships with others whom they perceive to be more competent than themselves, as benefitting a partner less than the partner benefits one's self could trigger feelings of guilt. Highly guilt-prone people who lacked expertise in a domain were less willing than were those low in guilt proneness who lacked expertise in that domain to create outcome-interdependent relationships with people who possessed domain-specific expertise. These highly guilt-prone people were more likely than others both to opt to be paid on their performance alone (Studies 1, 3, 4, and 5) and to opt to be paid on the basis of the average of their performance and that of others whose competence was more similar to their own (Studies 2 and 5). Guilt proneness did not predict people's willingness to form outcome-interdependent relationships with potential partners who lacked domain-specific expertise (Studies 4 and 5). It also did not predict people's willingness to form relationships when poor individual performance would not negatively affect partner outcomes (Study 4). Guilt proneness therefore predicts whether, and with whom, people develop interdependent relationships. The findings also demonstrate that highly guilt-prone people sacrifice financial gain out of concern about how their actions would influence others' welfare. As such, the findings demonstrate a novel way in which guilt proneness limits free-riding and therefore reduces the incidence of potentially unethical behavior. Lastly, the findings demonstrate that people who lack competence may not always seek out competence in others when choosing partners.",
"title": ""
},
{
"docid": "0bce954374d27d4679eb7562350674fc",
"text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.",
"title": ""
},
{
"docid": "736a7f4cad46138f350fda904d5de624",
"text": "In the last decades, the development of new technologies applied to lipidomics has revitalized the analysis of lipid profile alterations and the understanding of the underlying molecular mechanisms of lipid metabolism, together with their involvement in the occurrence of human disease. Of particular interest is the study of omega-3 and omega-6 long chain polyunsaturated fatty acids (LC-PUFAs), notably EPA (eicosapentaenoic acid, 20:5n-3), DHA (docosahexaenoic acid, 22:6n-3), and ARA (arachidonic acid, 20:4n-6), and their transformation into bioactive lipid mediators. In this sense, new families of PUFA-derived lipid mediators, including resolvins derived from EPA and DHA, and protectins and maresins derived from DHA, are being increasingly investigated because of their active role in the \"return to homeostasis\" process and resolution of inflammation. Recent findings reviewed in the present study highlight that the omega-6 fatty acid ARA appears increased, and omega-3 EPA and DHA decreased in most cancer tissues compared to normal ones, and that increments in omega-3 LC-PUFAs consumption and an omega-6/omega-3 ratio of 2-4:1, are associated with a reduced risk of breast, prostate, colon and renal cancers. Along with their lipid-lowering properties, omega-3 LC-PUFAs also exert cardioprotective functions, such as reducing platelet aggregation and inflammation, and controlling the presence of DHA in our body, especially in our liver and brain, which is crucial for optimal brain functionality. Considering that DHA is the principal omega-3 FA in cortical gray matter, the importance of DHA intake and its derived lipid mediators have been recently reported in patients with major depressive and bipolar disorders, Alzheimer disease, Parkinson's disease, and amyotrophic lateral sclerosis. The present study reviews the relationships between major diseases occurring today in the Western world and LC-PUFAs. More specifically this review focuses on the dietary omega-3 LC-PUFAs and the omega-6/omega-3 balance, in a wide range of inflammation disorders, including autoimmune diseases. This review suggests that the current recommendations of consumption and/or supplementation of omega-3 FAs are specific to particular groups of age and physiological status, and still need more fine tuning for overall human health and well being.",
"title": ""
},
{
"docid": "c2ca77f97c728fd2c16e20741e96838d",
"text": "Many decisions require a context-dependent mapping from sensory evidence to action. The capacity for flexible information processing of this sort is thought to depend on a cognitive control system in frontoparietal cortex, but the costs and limitations of control entail that its engagement should be minimized. Here, we show that humans reduce demands on control by exploiting statistical structure in their environment. Using a context-dependent perceptual discrimination task and model-based analyses of behavioral and neuroimaging data, we found that predictions about task context facilitated decision making and that a quantitative measure of context prediction error accounted for graded engagement of the frontoparietal control network. Within this network, multivariate analyses further showed that context prediction error enhanced the representation of task context. These results indicate that decision making is adaptively tuned by experience to minimize costs while maintaining flexibility.",
"title": ""
},
{
"docid": "699e0a10b29fad7d259cd781457462c4",
"text": "Understanding detailed changes done to source code is of great importance in software maintenance. We present Code Flows, a method to visualize the evolution of source code geared to the understanding of fine and mid-level scale changes across several file versions. We enhance an existing visual metaphor to depict software structure changes with techniques that emphasize both following unchanged code as well as detecting and highlighting important events such as code drift, splits, merges, insertions and deletions. The method is illustrated with the analysis of a real-world C++ code system.",
"title": ""
},
{
"docid": "f35d0517baed5b84246eb31b071ee6a6",
"text": "Cyclooxygenase (COX)-2 is the major constitutively expressed COX isoform in the newborn brain. COX-2 derived prostanoids and reactive oxygen species appear to play a major role in the mechanism of perinatal hypoxic-ischemic injury in the newborn piglet, an accepted animal model of the human term neonate. The study aimed to quantitatively determine COX-2 immunopositive neurons in different brain regions in piglets under normoxic conditions (n=15), and 4 hours after 10 min asphyxia (n=11). Asphyxia did not induce significant changes in neuronal COX-2 expression of any studied brain areas. In contrast, there was a marked regional difference in all experimental groups. Thus, significant difference was observed between fronto-parietal and temporo-occipital regions: 59±4% and 67±3% versus 41±2%* and 31±3%* respectively (mean±SEM, data are pooled from all subjects, n=26, *p<0.05, vs. fronto-parietal region). In the hippocampus, COX-2 immunopositivity was rare (highest expression in CA1 region: 14±2%). The studied subcortical areas showed negligible COX-2 staining. Our findings suggest that asphyxia does not significantly alter the pattern of neuronal COX-2 expression in the early reventilation period. Furthermore, based on the striking differences observed in cortical neuronal COX-2 distribution, the contribution of COX-2 mediated neuronal injury after asphyxia may also show region-specific differences.",
"title": ""
}
] |
scidocsrr
|
1a968f82c5282ad629199ab5e76beed5
|
Operating Systems for Internet of Things: A Comparative Study
|
[
{
"docid": "9bcc81095c32ea39de23217983d33ddc",
"text": "The Internet of Things (IoT) is characterized by heterogeneous devices. They range from very lightweight sensors powered by 8-bit microcontrollers (MCUs) to devices equipped with more powerful, but energy-efficient 32-bit processors. Neither a traditional operating system (OS) currently running on Internet hosts, nor typical OS for sensor networks are capable to fulfill the diverse requirements of such a wide range of devices. To leverage the IoT, redundant development should be avoided and maintenance costs should be reduced. In this paper we revisit the requirements for an OS in the IoT. We introduce RIOT OS, an OS that explicitly considers devices with minimal resources but eases development across a wide range of devices. RIOT OS allows for standard C and C++ programming, provides multi-threading as well as real-time capabilities, and needs only a minimum of 1.5 kB of RAM.",
"title": ""
},
{
"docid": "233c63982527a264b91dfb885361b657",
"text": "One unfortunate consequence of the success story of wireless sensor networks (WSNs) in separate research communities is an evergrowing gap between theory and practice. Even though there is a increasing number of algorithmic methods for WSNs, the vast majority has never been tried in practice; conversely, many practical challenges are still awaiting efficient algorithmic solutions. The main cause for this discrepancy is the fact that programming sensor nodes still happens at a very technical level. We remedy the situation by introducing Wiselib, our algorithm library that allows for simple implementations of algorithms onto a large variety of hardware and software. This is achieved by employing advanced C++ techniques such as templates and inline functions, allowing to write generic code that is resolved and bound at compile time, resulting in virtually no memory or computation overhead at run time. The Wiselib runs on different host operating systems, such as Contiki, iSense OS, and ScatterWeb. Furthermore, it runs on virtual nodes simulated by Shawn. For any algorithm, the Wiselib provides data structures that suit the specific properties of the target platform. Algorithm code does not contain any platform-specific specializations, allowing a single implementation to run natively on heterogeneous networks. In this paper, we describe the building blocks of the Wiselib, and analyze the overhead. We demonstrate the effectiveness of our approach by showing how routing algorithms can be implemented. We also report on results from experiments with real sensor-node hardware.",
"title": ""
}
] |
[
{
"docid": "8d258bac9030dae406fff2c13ae0db43",
"text": "This paper investigates the validity of Kleinberg’s axioms for clustering functions with respect to the quite popular clustering algorithm called k-means.We suggest that the reason why this algorithm does not fit Kleinberg’s axiomatic system stems from missing match between informal intuitions and formal formulations of the axioms. While Kleinberg’s axioms have been discussed heavily in the past, we concentrate here on the case predominantly relevant for k-means algorithm, that is behavior embedded in Euclidean space. We point at some contradictions and counter intuitiveness aspects of this axiomatic set within R that were evidently not discussed so far. Our results suggest that apparently without defining clearly what kind of clusters we expect we will not be able to construct a valid axiomatic system. In particular we look at the shape and the gaps between the clusters. Finally we demonstrate that there exist several ways to reconcile the formulation of the axioms with their intended meaning and that under this reformulation the axioms stop to be contradictory and the real-world k-means algorithm conforms to this axiomatic system.",
"title": ""
},
{
"docid": "525ebcdc836093d304a7771a4f4a26aa",
"text": "With the accelerated development of Internet-of-Things (IoT), wireless sensor networks (WSNs) are gaining importance in the continued advancement of information and communication technologies, and have been connected and integrated with the Internet in vast industrial applications. However, given the fact that most wireless sensor devices are resource constrained and operate on batteries, the communication overhead and power consumption are therefore important issues for WSNs design. In order to efficiently manage these wireless sensor devices in a unified manner, the industrial authorities should be able to provide a network infrastructure supporting various WSN applications and services that facilitate the management of sensor-equipped real-world entities. This paper presents an overview of industrial ecosystem, technical architecture, industrial device management standards, and our latest research activity in developing a WSN management system. The key approach to enable efficient and reliable management of WSN within such an infrastructure is a cross-layer design of lightweight and cloud-based RESTful Web service.",
"title": ""
},
{
"docid": "11d551da8299c7da76fbeb22b533c7f1",
"text": "The use of brushless permanent magnet DC drive motors in racing motorcycles is discussed in this paper. The application requirements are highlighted and the characteristics of the load demand and drive converter outlined. The possible topologies of the machine are investigated and a design for a internal permanent magnet is developed. This is a 6-pole machine with 18 stator slots and coils of one stator tooth pitch. The performance predictions are put forward and these are obtained from design software. Cooling is vital for these machines and this is briefly discussed.",
"title": ""
},
{
"docid": "1348ee3316643f4269311b602b71d499",
"text": "This paper describes our proposed solution for SemEval 2017 Task 1: Semantic Textual Similarity (Daniel Cer and Specia, 2017). The task aims at measuring the degree of equivalence between sentences given in English. Performance is evaluated by computing Pearson Correlation scores between the predicted scores and human judgements. Our proposed system consists of two subsystems and one regression model for predicting STS scores. The two subsystems are designed to learn Paraphrase and Event Embeddings that can take the consideration of paraphrasing characteristics and sentence structures into our system. The regression model associates these embeddings to make the final predictions. The experimental result shows that our system acquires 0.8 of Pearson Correlation Scores in this task.",
"title": ""
},
{
"docid": "719c1b6ad0d945b68b34abceb1ed8e3b",
"text": "This editorial provides a behavioral science view on gamification and health behavior change, describes its principles and mechanisms, and reviews some of the evidence for its efficacy. Furthermore, this editorial explores the relation between gamification and behavior change frameworks used in the health sciences and shows how gamification principles are closely related to principles that have been proven to work in health behavior change technology. Finally, this editorial provides criteria that can be used to assess when gamification provides a potentially promising framework for digital health interventions.",
"title": ""
},
{
"docid": "a531694dba7fc479b43d0725bc68de15",
"text": "This paper gives an introduction to the essential challenges of software engineering and requirements that software has to fulfill in the domain of automation. Besides, the functional characteristics, specific constraints and circumstances are considered for deriving requirements concerning usability, the technical process, the automation functions, used platform and the well-established models, which are described in detail. On the other hand, challenges result from the circumstances at different points in the single phases of the life cycle of the automated system. The requirements for life-cycle-management, tools and the changeability during runtime are described in detail.",
"title": ""
},
{
"docid": "ad918df13aaa2e78c92a7626699f1ecc",
"text": "Machine learning techniques, namely convolutional neural networks (CNN) and regression forests, have recently shown great promise in performing 6-DoF localization of monocular images. However, in most cases imagesequences, rather only single images, are readily available. To this extent, none of the proposed learning-based approaches exploit the valuable constraint of temporal smoothness, often leading to situations where the per-frame error is larger than the camera motion. In this paper we propose a recurrent model for performing 6-DoF localization of video-clips. We find that, even by considering only short sequences (20 frames), the pose estimates are smoothed and the localization error can be drastically reduced. Finally, we consider means of obtaining probabilistic pose estimates from our model. We evaluate our method on openly-available real-world autonomous driving and indoor localization datasets.",
"title": ""
},
{
"docid": "f499ea5160d1e787a51b456ee01c3814",
"text": "In this paper, a tri band compact octagonal fractal monopole MIMO antenna is presented. The proposed antenna is microstrip line fed and its structure is based on fractal geometry where the resonance frequency of antenna is lowered by applying iteration techniques. The simulated bandwidth of the antenna are 2.3706GHz to 2.45GHz, 3.398GHz to 3.677GHz and 4.9352GHz to 5.8988GHz (S11 <; -10 dB), covering the bands of WLAN and WiMAX. The characteristics of small size, nearly omnidirectional radiation pattern and moderate gain make the proposed MIMO antenna entirely applicable to WLAN and WiMAX applications. The proposed antenna has compact size of 50 mm × 50 mm. Details of the proposed antenna design and performance are presented and discussed.",
"title": ""
},
{
"docid": "6bd9fc02c8e26e64cecb13dab1a93352",
"text": "Kohlberg, who was born in 1927, grew up in Bronxville, New York, and attended the Andover Academy in Massachusetts, a private high school for bright and usually wealthy students. He did not go immediately to college, but instead went to help the Israeli cause, in which he was made the Second Engineer on an old freighter carrying refugees from parts of Europe to Israel. After this, in 1948, he enrolled at the University of Chicago, where he scored so high on admission tests that he had to take only a few courses to earn his bachelor's degree. This he did in one year. He stayed on at Chicago for graduate work in psychology, at first thinking he would become a clinical psychologist. However, he soon became interested in Piaget and began interviewing children and adolescents on moral issues. The result was his doctoral dissertation (1958a), the first rendition of his new stage theory.",
"title": ""
},
{
"docid": "02c904c320db3a6e0fc9310f077f5d08",
"text": "Rejuvenative procedures of the face are increasing in numbers, and a plethora of different therapeutic options are available today. Every procedure should aim for the patient's safety first and then for natural and long-lasting results. The face is one of the most complex regions in the human body and research continuously reveals new insights into the complex interplay of the different participating structures. Bone, ligaments, muscles, fat, and skin are the key players in the layered arrangement of the face.Aging occurs in all involved facial structures but the onset and the speed of age-related changes differ between each specific structure, between each individual, and between different ethnic groups. Therefore, knowledge of age-related anatomy is crucial for a physician's work when trying to restore a youthful face.This review focuses on the current understanding of the anatomy of the human face and tries to elucidate the morphological changes during aging of bone, ligaments, muscles, and fat, and their role in rejuvenative procedures.",
"title": ""
},
{
"docid": "80305580204e9a3399c4e7cd28f4adcb",
"text": "A series of three studies were conducted to generate, develop, and validate the Attitudes toward Transgender Men and Women (ATTMW) scale. In Study 1, 120 American adults responded to an open-ended questionnaire probing various dimensions of their perceptions of transgender individuals and identity. Qualitative thematic analysis generated 200 items based on their responses. In Study 2, 238 American adults completed a questionnaire consisting of the generated items. Exploratory factor analysis (EFA) revealed two non-identical 12-item subscales (ATTM and ATTW) of the full 24-item scale. In Study 3, 150 undergraduate students completed a survey containing the ATTMW and a number of validity-testing variables. Confirmatory factor analysis (CFA) verified the single-factor structures of the ATTM and ATTW subscales, and the convergent, discriminant, predictive, and concurrent validities of the ATTMW were also established. Together, our results demonstrate that the ATTMW is a reliable and valid measure of attitudes toward transgender individuals.",
"title": ""
},
{
"docid": "e60d699411055bf31316d468226b7914",
"text": "Tabular data is difficult to analyze and to search through, yielding for new tools and interfaces that would allow even non tech-savvy users to gain insights from open datasets without resorting to specialized data analysis tools and without having to fully understand the dataset structure. The goal of our demonstration is to showcase answering natural language questions from tabular data, and to discuss related system configuration and model training aspects. Our prototype is publicly available and open-sourced (see demo )",
"title": ""
},
{
"docid": "b2d256cd40e67e3eadd3f5d613ad32fa",
"text": "Due to the wide spread of cloud computing, arises actual question about architecture, design and implementation of cloud applications. The microservice model describes the design and development of loosely coupled cloud applications when computing resources are provided on the basis of automated IaaS and PaaS cloud platforms. Such applications consist of hundreds and thousands of service instances, so automated validation and testing of cloud applications developed on the basis of microservice model is a pressing issue. There are constantly developing new methods of testing both individual microservices and cloud applications at a whole. This article presents our vision of a framework for the validation of the microservice cloud applications, providing an integrated approach for the implementation of various testing methods of such applications, from basic unit tests to continuous stability testing.",
"title": ""
},
{
"docid": "90224ff86d94c82e5d9b5bc8164fcc2e",
"text": "Reading Comprehension (RC) of text is one of the fundamental tasks in natural language processing. In recent years, several end-to-end neural network models have been proposed to solve RC tasks. However, most of these models suffer in reasoning over long documents. In this work, we propose a novel Memory Augmented Machine Comprehension Network (MAMCN) to address long-range dependencies present in machine reading comprehension. We perform extensive experiments to evaluate proposed method with the renowned benchmark datasets such as SQuAD, QUASAR-T, and TriviaQA. We achieve the state of the art performance on both the document-level (QUASAR-T, TriviaQA) and paragraph-level (SQuAD) datasets compared to all the previously published approaches.",
"title": ""
},
{
"docid": "4681e8f07225e305adfc66cd1b48deb8",
"text": "Collaborative work among students, while an important topic of inquiry, needs further treatment as we still lack the knowledge regarding obstacles that students face, the strategies they apply, and the relations among personal and group aspects. This article presents a diary study of 54 master’s students conducting group projects across four semesters. A total of 332 diary entries were analysed using the C5 model of collaboration that incorporates elements of communication, contribution, coordination, cooperation and collaboration. Quantitative and qualitative analyses show how these elements relate to one another for students working on collaborative projects. It was found that face-to-face communication related positively with satisfaction and group dynamics, whereas online chat correlated positively with feedback and closing the gap. Managing scope was perceived to be the most common challenge. The findings suggest the varying affordances and drawbacks of different methods of communication, collaborative work styles and the strategies of group members.",
"title": ""
},
{
"docid": "06ba0cd00209a7f4f200395b1662003e",
"text": "Changes in human DNA methylation patterns are an important feature of cancer development and progression and a potential role in other conditions such as atherosclerosis and autoimmune diseases (e.g., multiple sclerosis and lupus) is being recognised. The cancer genome is frequently characterised by hypermethylation of specific genes concurrently with an overall decrease in the level of 5 methyl cytosine. This hypomethylation of the genome largely affects the intergenic and intronic regions of the DNA, particularly repeat sequences and transposable elements, and is believed to result in chromosomal instability and increased mutation events. This review examines our understanding of the patterns of cancer-associated hypomethylation, and how recent advances in understanding of chromatin biology may help elucidate the mechanisms underlying repeat sequence demethylation. It also considers how global demethylation of repeat sequences including transposable elements and the site-specific hypomethylation of certain genes might contribute to the deleterious effects that ultimately result in the initiation and progression of cancer and other diseases. The use of hypomethylation of interspersed repeat sequences and genes as potential biomarkers in the early detection of tumors and their prognostic use in monitoring disease progression are also examined.",
"title": ""
},
{
"docid": "6089388e6baf7177db7f51e3c8f94be4",
"text": "Lean approaches to product development (LPD) have had a strong influence on many industries and in recent years there have been many proponents for lean in software development as it can support the increasing industry need of scaling agile software development. With it's roots in industrial manufacturing and, later, industrial product development, it would seem natural that LPD would adapt well to large-scale development projects of increasingly software-intensive products, such as in the automotive industry. However, it is not clear what kind of experience and results have been reported on the actual use of lean principles and practices in software development for such large-scale industrial contexts. This was the motivation for this study as the context was an ongoing industry process improvement project at Volvo Car Corporation and Volvo Truck Corporation. The objectives of this study are to identify and classify state of the art in large-scale software development influenced by LPD approaches and use this established knowledge to support industrial partners in decisions on a software process improvement (SPI) project, and to reveal research gaps and proposed extensions to LPD in relation to its well-known principles and practices. For locating relevant state of the art we conducted a systematic mapping study, and the industrial applicability and relevance of results and said extensions to LPD were further analyzed in the context of an actual, industrial case. A total of 10,230 papers were found in database searches, of which 38 papers were found relevant. Of these, only 42 percent clearly addressed large-scale development. Furthermore, a majority of papers (76 percent) were non-empirical and many lacked information about study design, context and/or limitations. Most of the identified results focused on eliminating waste and creating flow in the software development process, but there was a lack of results for other LPD principles and practices. Overall, it can be concluded that research in the much hyped field of lean software development is in its nascent state when it comes to large scale development. There is very little support available for practitioners who want to apply lean approaches for improving large-scale software development, especially when it comes to inter-departmental interactions during development. This paper explicitly maps the area, qualifies available research, and identifies gaps, as well as suggests extensions to lean principles relevant for large scale development of software intensive systems.",
"title": ""
},
{
"docid": "57ccc061377399b669d5ece668b7e030",
"text": "We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photorealistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video. To achieve this, we accurately capture the facial performances of the source and target subjects in real-time using a commodity RGB-D sensor. For each frame, we jointly fit a parametric model for identity, expression, and skin reflectance to the input color and depth data, and also reconstruct the scene lighting. For expression transfer, we compute the difference between the source and target expressions in parameter space, and modify the target parameters to match the source expressions. A major challenge is the convincing re-rendering of the synthesized target face into the corresponding video stream. This requires a careful consideration of the lighting and shading design, which both must correspond to the real-world environment. We demonstrate our method in a live setup, where we modify a video conference feed such that the facial expressions of a different person (e.g., translator) are matched in real-time.",
"title": ""
},
{
"docid": "f32187a3253c9327c26f83826e0b03b8",
"text": "Spatiotemporal forecasting has significant implications in sustainability, transportation and health-care domain. Traffic forecasting is one canonical example of such learning task. This task is challenging due to (1) non-linear temporal dynamics with changing road conditions, (2) complex spatial dependencies on road networks topology and (3) inherent difficulty of long-term time series forecasting. To address these challenges, we propose Graph Convolutional Recurrent Neural Network to incorporate both spatial and temporal dependency in traffic flow. We further integrate the encoder-decoder framework and scheduled sampling to improve long-term forecasting. When evaluated on real-world road network traffic data, our approach can accurately capture spatiotemporal correlations and consistently outperforms state-of-the-art baselines by 12% 15%.",
"title": ""
},
{
"docid": "f7f9bd286808d885b25c3403ffd2bc4d",
"text": "For scatterplots with gaussian distributions of dots, the perception of Pearson correlation r can be described by two simple laws: a linear one for discrimination, and a logarithmic one for perceived magnitude (Rensink & Baldridge, 2010). The underlying perceptual mechanisms, however, remain poorly understood. To cast light on these, four different distributions of datapoints were examined. The first had 100 points with equal variance in both dimensions. Consistent with earlier results, just noticeable difference (JND) was a linear function of the distance away from r = 1, and the magnitude of perceived correlation a logarithmic function of this quantity. In addition, these laws were linked, with the intercept of the JND line being the inverse of the bias in perceived magnitude. Three other conditions were also examined: a dot cloud with 25 points, a horizontal compression of the cloud, and a cloud with a uniform distribution of dots. Performance was found to be similar in all conditions. The generality and form of these laws suggest that what underlies correlation perception is not a geometric structure such as the shape of the dot cloud, but the shape of the probability distribution of the dots, likely inferred via a form of ensemble coding. It is suggested that this reflects the ability of observers to perceive the information entropy in an image, with this quantity used as a proxy for Pearson correlation.",
"title": ""
}
] |
scidocsrr
|
ae36b4cdd36366693aaeb07099996155
|
An overview of pipeline leak detection and location systems
|
[
{
"docid": "9901be4dddeb825f6443d75a6566f2d0",
"text": "In this paper a new approach to gas leakage detection in high pressure natural gas transportation networks is proposed. The pipeline is modelled as a Linear Parameter Varying (LPV) System driven by the source node massflow with the gas inventory variation in the pipe (linepack variation, proportional to the pressure variation) as the scheduling parameter. The massflow at the offtake node is taken as the system output. The system is identified by the Successive Approximations LPV System Subspace Identification Algorithm which is also described in this paper. The leakage is detected using a Kalman filter where the fault is treated as an augmented state. Given that the gas linepack can be estimated from the massflow balance equation, a differential method is proposed to improve the leakage detector effectiveness. A small section of a gas pipeline crossing Portugal in the direction South to North is used as a case study. LPV models are identified from normal operational data and their accuracy is analyzed. The proposed LPV Kalman filter based methods are compared with a standard mass balance method in a simulated 10% leakage detection scenario. The Differential Kalman Filter method proved to be highly efficient.",
"title": ""
}
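The passage above states only that the leak is detected "using a Kalman filter where the fault is treated as an augmented state." The sketch below shows what that augmentation looks like on a deliberately simplified scalar flow-imbalance model; the dynamics, noise levels, and leak size are invented for illustration and are not the identified LPV pipeline model used in the paper.

```python
# Generic sketch of fault estimation by state augmentation in a Kalman filter.
# The scalar "flow imbalance" model, the noise levels, and the leak size are
# assumptions; the paper identifies an LPV pipeline model instead of this toy.
import numpy as np

rng = np.random.default_rng(1)
n_steps, leak_onset, leak_size = 300, 150, 0.5

# Augmented state: [flow_imbalance, leak]. The leak is modelled as a slowly
# drifting bias that the filter must estimate from noisy measurements.
F = np.array([[0.9, 1.0],
              [0.0, 1.0]])          # the imbalance is driven by the leak term
H = np.array([[1.0, 0.0]])          # only the flow imbalance is measured
Q = np.diag([1e-3, 1e-5])           # process noise (leak drifts very slowly)
R = np.array([[1e-2]])              # measurement noise

x_est = np.zeros(2)
P = np.eye(2)
leak_estimates = []

true_imbalance = 0.0
for t in range(n_steps):
    true_leak = leak_size if t >= leak_onset else 0.0
    true_imbalance = 0.9 * true_imbalance + true_leak
    z = true_imbalance + rng.normal(scale=0.1)

    # Predict.
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + (K @ (np.array([z]) - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P

    leak_estimates.append(x_est[1])

print("estimated leak before onset:", round(leak_estimates[leak_onset - 1], 3))
print("estimated leak at end:      ", round(leak_estimates[-1], 3))
```

The same augmentation idea carries over to richer models: the fault enters the state vector with (near-)constant dynamics, and the filter's estimate of that component serves as the leak indicator.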
] |
[
{
"docid": "71164831cb7376d92461f1cfd95c9244",
"text": "Blood coagulation and complement pathways are two important natural defense systems. The high affinity interaction between the anticoagulant vitamin K-dependent protein S and the complement regulator C4b-binding protein (C4BP) is a direct physical link between the two systems. In human plasma, ~70% of total protein S circulates in complex with C4BP; the remaining is free. The anticoagulant activity of protein S is mainly expressed by the free form, although the protein S-C4BP complex has recently been shown to have some anticoagulant activity. The high affinity binding of protein S to C4BP provides C4BP with the ability to bind to negatively charged phospholipid membranes, which serves the purpose of localizing complement regulatory activity close to the membrane. Even though C4BP does not directly affect the coagulation system, it still influences the regulation of blood coagulation through its interaction with protein S. This is particularly important in states of inherited deficiency of protein S where the tight binding of protein S to C4BP results in a pronounced and selective drop in concentration of free protein S, whereas the concentration of protein S in complex with C4BP remains relatively unchanged. This review summarizes the current knowledge on C4BP with respect to its association with thrombosis and hemostasis.",
"title": ""
},
{
"docid": "2a340eff099d778791dd2474a1f9d6e6",
"text": "Previously, no general-purpose algorithm was known for the elliptic curve logarithm problem that ran in better than exponential time. In this paper we demonstrate the reduction of the elliptic curve logarithm problem to the logarithm problem in the multiplicative group of an extension of the underlying hit e field. For the class of supersingular elliptic curves, the reduction takes probabilistic polynomial time, thus providing a probabilistic subexponential time algorithm for the former problem. The implications of our results to public key cryptography are discussed.",
"title": ""
},
{
"docid": "3c103640a41779e8069219b9c4849ba7",
"text": "Electronic banking is becoming more popular every day. Financial institutions have accepted the transformation to provide electronic banking facilities to their customers in order to remain relevant and thrive in an environment that is competitive. A contributing factor to the customer retention rate is the frequent use of multiple online functionality however despite all the benefits of electronic banking, some are still hesitant to use it because of security concerns. The perception is that gender, age, education level, salary, culture and profession all have an impact on electronic banking usage. This study reports on how the Knowledge Discovery and Data Mining (KDDM) process was used to determine characteristics and electronic banking behavior of high net worth individuals at a South African bank. Findings JIBC December 2017, Vol. 22, No.3 2 indicate that product range and age had the biggest impact on electronic banking behavior. The value of user segmentation is that the financial institution can provide a more accurate service to their users based on their preferences and online banking behavior.",
"title": ""
},
{
"docid": "87e56672751a8eb4d5a08f0459e525ca",
"text": "— The Internet of Things (IoT) has transformed many aspects of modern manufacturing, from design to production to quality control. In particular, IoT and digital manufacturing technologies have substantially accelerated product development cycles and manufacturers can now create products of a complexity and precision not heretofore possible. New threats to supply chain security have arisen from connecting machines to the Internet and introducing complex IoT-based systems controlling manufacturing processes. By attacking these IoT-based manufacturing systems and tampering with digital files, attackers can manipulate physical characteristics of parts and change the dimensions, shapes, or mechanical properties of the parts, which can result in parts that fail in the field. These defects increase manufacturing costs and allow silent problems to occur only under certain loads that can threaten safety and/or lives. To understand potential dangers and protect manufacturing system safety, this paper presents two taxonomies: one for classifying cyber-physical attacks against manufacturing processes and another for quality control measures for counteracting these attacks. We systematically identify and classify possible cyber-physical attacks and connect the attacks with variations in manufacturing processes and quality control measures. Our tax-onomies also provide a scheme for linking emerging IoT-based manufacturing system vulnerabilities to possible attacks and quality control measures.",
"title": ""
},
{
"docid": "a7111544f5240a8f42c7b564b4f8e292",
"text": "increasingly agile and integrated across their functions. Enterprise models play a critical role in this integration, enabling better designs for enterprises, analysis of their performance, and management of their operations. This article motivates the need for enterprise models and introduces the concepts of generic and deductive enterprise models. It reviews research to date on enterprise modeling and considers in detail the Toronto virtual enterprise effort at the University of Toronto.",
"title": ""
},
{
"docid": "24b8df8f9402c37e685bd4c3156e3464",
"text": "We quantify the dynamical implications of the small-world phenomenon by considering the generic synchronization of oscillator networks of arbitrary topology. The linear stability of the synchronous state is linked to an algebraic condition of the Laplacian matrix of the network. Through numerics and analysis, we show how the addition of random shortcuts translates into improved network synchronizability. Applied to networks of low redundancy, the small-world route produces synchronizability more efficiently than standard deterministic graphs, purely random graphs, and ideal constructive schemes. However, the small-world property does not guarantee synchronizability: the synchronization threshold lies within the boundaries, but linked to the end of the small-world region.",
"title": ""
},
{
"docid": "14a8b362e7ba287d21d5ce3c4f87c733",
"text": "A novel model-based approach to 3D hand tracking from monocular video is presented. The 3D hand pose, the hand texture, and the illuminant are dynamically estimated through minimization of an objective function. Derived from an inverse problem formulation, the objective function enables explicit use of temporal texture continuity and shading information while handling important self-occlusions and time-varying illumination. The minimization is done efficiently using a quasi-Newton method, for which we provide a rigorous derivation of the objective function gradient. Particular attention is given to terms related to the change of visibility near self-occlusion boundaries that are neglected in existing formulations. To this end, we introduce new occlusion forces and show that using all gradient terms greatly improves the performance of the method. Qualitative and quantitative experimental results demonstrate the potential of the approach.",
"title": ""
},
{
"docid": "36fef38de53386e071ee2a1996aa733f",
"text": "Knowledge embedding, which projects triples in a given knowledge base to d-dimensional vectors, has attracted considerable research efforts recently. Most existing approaches treat the given knowledge base as a set of triplets, each of whose representation is then learned separately. However, as a fact, triples are connected and depend on each other. In this paper, we propose a graph aware knowledge embedding method (GAKE), which formulates knowledge base as a directed graph, and learns representations for any vertices or edges by leveraging the graph’s structural information. We introduce three types of graph context for embedding: neighbor context, path context, and edge context, each reflects properties of knowledge from different perspectives. We also design an attention mechanism to learn representative power of different vertices or edges. To validate our method, we conduct several experiments on two tasks. Experimental results suggest that our method outperforms several state-of-art knowledge embedding models.",
"title": ""
},
{
"docid": "1da28109eb4ade7a0544646bf7783843",
"text": "Current soft pneumatic grippers cannot robustly grasp flat materials and flexible objects on curved surfaces without distorting them. Current electroadhesive grippers, on the other hand, are difficult to actively deform to complex shapes to pick up free-form surfaces or objects. An easyto-implement PneuEA gripper is proposed by the integration of an electroadhesive gripper and a two-fingered soft pneumatic gripper. The electroadhesive gripper was fabricated by segmenting a soft conductive silicon sheet into a two-part electrode design and embedding it in a soft dielectric elastomer. The two-fingered soft pneumatic gripper was manufactured using a standard soft lithography approach. This novel integration has combined the benefits of both the electroadhesive and soft pneumatic grippers. As a result, the proposed PneuEA gripper was not only able to pick-and-place flat and flexible materials such as a porous cloth but also delicate objects such as a light bulb. By combining two soft touch sensors with the electroadhesive, an intelligent and shape-adaptive PneuEA material handling system has been developed. This work is expected to widen the applications of both soft gripper and electroadhesion technologies. Supplementary material for this article is available online",
"title": ""
},
{
"docid": "45bd038dd94d388f945c041e7c04b725",
"text": "Entomophagy is widespread among nonhuman primates and is common among many human communities. However, the extent and patterns of entomophagy vary substantially both in humans and nonhuman primates. Here we synthesize the literature to examine why humans and other primates eat insects and what accounts for the variation in the extent to which they do so. Variation in the availability of insects is clearly important, but less understood is the role of nutrients in entomophagy. We apply a multidimensional analytical approach, the right-angled mixture triangle, to published data on the macronutrient compositions of insects to address this. Results showed that insects eaten by humans spanned a wide range of protein-to-fat ratios but were generally nutrient dense, whereas insects with high protein-to-fat ratios were eaten by nonhuman primates. Although suggestive, our survey exposes a need for additional, standardized, data.",
"title": ""
},
{
"docid": "b1c0fb9a020d8bc85b23f696586dd9d3",
"text": "Most instances of real-life language use involve discourses in which several sentences or utterances are coherently linked through the use of repeated references. Repeated reference can take many forms, and the choice of referential form has been the focus of much research in several related fields. In this article we distinguish between three main approaches: one that addresses the ‘why’ question – why are certain forms used in certain contexts; one that addresses the ‘how’ question – how are different forms processed; and one that aims to answer both questions by seriously considering both the discourse function of referential expressions, and the cognitive mechanisms that underlie their processing cost. We argue that only the latter approach is capable of providing a complete view of referential processing, and that in so doing it may also answer a more profound ‘why’ question – why does language offer multiple referential forms. Coherent discourse typically involves repeated references to previously mentioned referents, and these references can be made with different forms. For example, a person mentioned in discourse can be referred to by a proper name (e.g., Bill), a definite description (e.g., the waiter), or a pronoun (e.g., he). When repeated reference is made to a referent that was mentioned in the same sentence, the choice and processing of referential form may be governed by syntactic constraints such as binding principles (Chomsky 1981). However, in many cases of repeated reference to a referent that was mentioned in the same sentence, and in all cases of repeated reference across sentences, the choice and processing of referential form reflects regular patterns and preferences rather than strong syntactic constraints. The present article focuses on the factors that underlie these patterns. Considerable research in several disciplines has aimed to explain how speakers and writers choose which form they should use to refer to objects and events in discourse, and how listeners and readers process different referential forms (e.g., Chafe 1976; Clark & Wilkes 1986; Kintsch 1988; Gernsbacher 1989; Ariel 1990; Gordon, Grosz & Gilliom 1993; Gundel, Hedberg & Zacharski 1993; Garrod & Sanford 1994; Gordon & Hendrick 1998; Almor 1999; Cowles & Garnham 2005). One of the central observations in this research is that there exists an inverse relation between the specificity of the referential",
"title": ""
},
{
"docid": "e23ba3e45f913cd0bb682252a96a5f33",
"text": "Recently, there has been an increasing number of depth cameras available at commodity prices. These cameras can usually capture both color and depth images in real-time, with limited resolution and accuracy. In this paper, we study the problem of 3D deformable face tracking with such commodity depth cameras. A regularized maximum likelihood deformable model fitting (DMF) algorithm is developed, with special emphasis on handling the noisy input depth data. In particular, we present a maximum likelihood solution that can accommodate sensor noise represented by an arbitrary covariance matrix, which allows more elaborate modeling of the sensor’s accuracy. Furthermore, an 1 regularization scheme is proposed based on the semantics of the deformable face model, which is shown to be very effective in improving the tracking results. To track facial movement in subsequent frames, feature points in the texture images are matched across frames and integrated into the DMF framework seamlessly. The effectiveness of the proposed method is demonstrated with multiple sequences with ground truth information.",
"title": ""
},
{
"docid": "688b702425c53e844d28758182306ce1",
"text": "DRAM is a precious resource in extreme-scale machines and is increasingly becoming scarce, mainly due to the growing number of cores per node. On future multi-petaflop and exaflop machines, the memory pressure is likely to be so severe that we need to rethink our memory usage models. Fortunately, the advent of non-volatile memory (NVM) offers a unique opportunity in this space. Current NVM offerings possess several desirable properties, such as low cost and power efficiency, but suffer from high latency and lifetime issues. We need rich techniques to be able to use them alongside DRAM. In this paper, we propose a novel approach for exploiting NVM as a secondary memory partition so that applications can explicitly allocate and manipulate memory regions therein. More specifically, we propose an NVMalloc library with a suite of services that enables applications to access a distributed NVM storage system. We have devised ways within NVMalloc so that the storage system, built from compute node-local NVM devices, can be accessed in a byte-addressable fashion using the memory mapped I/O interface. Our approach has the potential to re-energize out-of-core computations on large-scale machines by having applications allocate certain variables through NVMalloc, thereby increasing the overall memory capacity available. Our evaluation on a 128-core cluster shows that NVMalloc enables applications to compute problem sizes larger than the physical memory in a cost-effective manner. It can bring more performance/efficiency gain with increased computation time between NVM memory accesses or increased data access locality. In addition, our results suggest that while NVMalloc enables transparent access to NVM-resident variables, the explicit control it provides is crucial to optimize application performance.",
"title": ""
},
{
"docid": "e32068682c313637f97718e457914381",
"text": "Optimal load shedding is a very critical issue in power systems. It plays a vital role, especially in third world countries. A sudden increase in load can affect the important parameters of the power system like voltage, frequency and phase angle. This paper presents a case study of Pakistan’s power system, where the generated power, the load demand, frequency deviation and load shedding during a 24-hour period have been provided. An artificial neural network ensemble is aimed for optimal load shedding. The objective of this paper is to maintain power system frequency stability by shedding an accurate amount of load. Due to its fast convergence and improved generalization ability, the proposed algorithm helps to deal with load shedding in an efficient manner.",
"title": ""
},
{
"docid": "67925645b590cba622dd101ed52cf9e2",
"text": "This study is the first to demonstrate that features of psychopathy can be reliably and validly detected by lay raters from \"thin slices\" (i.e., small samples) of behavior. Brief excerpts (5 s, 10 s, and 20 s) from interviews with 96 maximum-security inmates were presented in video or audio form or in both modalities combined. Forty raters used these excerpts to complete assessments of overall psychopathy and its Factor 1 and Factor 2 components, various personality disorders, violence proneness, and attractiveness. Thin-slice ratings of psychopathy correlated moderately and significantly with psychopathy criterion measures, especially those related to interpersonal features of psychopathy, particularly in the 5- and 10-s excerpt conditions and in the video and combined channel conditions. These findings demonstrate that first impressions of psychopathy and related constructs, particularly those pertaining to interpersonal functioning, can be reasonably reliable and valid. They also raise intriguing questions regarding how individuals form first impressions and about the extent to which first impressions may influence the assessment of personality disorders. (PsycINFO Database Record (c) 2009 APA, all rights reserved).",
"title": ""
},
{
"docid": "fef5e04bf8ddb05dfd02f10c7862ce6b",
"text": "With the rise of computer networks in the past decades, the sp read of distributed applications with components across multiple machines, and with new notions such as mobile code, there has been a need for formal methods to model and reason about concurrency and mobility. The study of sequ ential computations has been based on notions such as Turing machines, recursive functions, the -calculus, all equivalent formalisms capturing the essenc e of sequential computations. Unfortunately, for concurrent programs, th eories for sequential computation are not enough. Many programs are not simply programs that compute a result and re turn it to the user, but rather interact with other programs, and even move from machine to machine. Process calculi are an attempt at getting a formal foundatio n based on such ideas. They emerged from the work of Hoare [4] and Milner [6] on models of concurrency. These calc uli are meant to model systems made up of processes communicating by exchanging values across channels. They a llow for the dynamic creation and removal of processes, allowing the modelling of dynamic systems. A typical proces s calculus in that vein is CCS [6, 7]. The -calculus extends CCS with the ability to create and remove communicat ion links between processes, a new form of dynamic behaviour. By allowing links to be created and deleted, it is po sible to model a form of mobility, by identifying the position of a process by its communication links. This book, “The -calculus: A Theory of Mobile Processes”, by Davide Sangior gi and David Walker, is a in-depth study of the properties of the -calculus and its variants. In a sense, it is the logical foll owup to the recent introduction to concurrency and the -calculus by Milner [8], reviewed in SIGACT News, 31(4), Dec ember 2000. What follows is a whirlwind introduction to CCS and the -calculus. It is meant as a way to introduce the notions discussed in much more depth by the book under review. Let us s tart with the basics. CCS provides a syntax for writing processes. The syntax is minimalist, in the grand tradition of foundational calculi such as the -calculus. Processes perform actions, which can be of three forms: the sending of a message over channel x (written x), the receiving of a message over channel x (written x), and internal actions (written ), the details of which are unobservable. Send and receive actions are called synchronizationactions, since communication occurs when the correspondin g processes synchronize. Let stand for actions, including the internal action , while we reserve ; ; : : : for synchronization actions. 1 Processes are written using the following syntax: P ::= Ahx1; : : : ; xki jXi2I i:Pi j P1jP2 j x:P We write 0 for the empty summation (when I = ;). The idea behind process expressions is simple. The proces s 0 represents the process that does nothing and simply termina tes. A process of the form :P awaits to synchronize with a process of the form :Q, after which the processes continue as process P andQ respectively. A generalization 1In the literature, the actions of CCS are often given a much mo re abstract interpretation, as simply names and co-names. T he send/receive interpretation is useful when one moves to the -calculus.",
"title": ""
},
{
"docid": "19d53b5a9ee4e4e6731b572bdc7dfbd7",
"text": "Today, crowdfunding has emerged as a popular means for fundraising. Among various crowdfunding platforms, reward-based ones are the most well received. However, to the best knowledge of the authors, little research has been performed on rewards. In this paper, we analyze a Kickstarter dataset, which consists of approximately 3K projects and 30K rewards. The analysis employs various statistical methods, including Pearson correlation tests, Kolmogorov-Smirnow test and Kaplan-Meier estimation, to study the relationships between various reward characteristics and project success. We find that projects with more rewards, with limited offerings and late-added rewards are more likely to succeed.",
"title": ""
},
{
"docid": "694add359ddb1ba8ebad89e5c9a2c6ce",
"text": "Textual-visual cross-modal retrieval has been a hot research topic in both computer vision and natural language processing communities. Learning appropriate representations for multi-modal data is crucial for the cross-modal retrieval performance. Unlike existing image-text retrieval approaches that embed image-text pairs as single feature vectors in a common representational space, we propose to incorporate generative processes into the cross-modal feature embedding, through which we are able to learn not only the global abstract features but also the local grounded features. Extensive experiments show that our framework can well match images and sentences with complex content, and achieve the state-of-the-art cross-modal retrieval results on MSCOCO dataset.",
"title": ""
},
{
"docid": "7340866fa3965558e1571bcc5294b896",
"text": "The human stress response has been characterized, both physiologically and behaviorally, as \"fight-or-flight.\" Although fight-or-flight may characterize the primary physiological responses to stress for both males and females, we propose that, behaviorally, females' responses are more marked by a pattern of \"tend-and-befriend.\" Tending involves nurturant activities designed to protect the self and offspring that promote safety and reduce distress; befriending is the creation and maintenance of social networks that may aid in this process. The biobehavioral mechanism that underlies the tend-and-befriend pattern appears to draw on the attachment-caregiving system, and neuroendocrine evidence from animal and human studies suggests that oxytocin, in conjunction with female reproductive hormones and endogenous opioid peptide mechanisms, may be at its core. This previously unexplored stress regulatory system has manifold implications for the study of stress.",
"title": ""
},
{
"docid": "35c7cb1e50059c3e77fcee20ed663234",
"text": "Electronic discovery is an interesting sub problem of information retrieval in which one identifies documents that are potentially relevant to issues and facts of a legal case from an electronically stored document collection (a corpus). In this paper, we consider representing documents in a topic space using the well-known topic models such as latent Dirichlet allocation and latent semantic indexing, and solving the information retrieval problem via finding document similarities in the topic space rather doing it in the corpus vocabulary space. We also develop an iterative SMART ranking and categorization framework including human-in-the-loop to label a set of seed (training) documents and using them to build a semi-supervised binary document classification model based on Support Vector Machines. To improve this model, we propose a method for choosing seed documents from the whole population via an active learning strategy. We report the results of our experiments on a real dataset in the electronic",
"title": ""
}
]
scidocsrr